Job Details
Seeking a Sr. Data Engineer with at least 13 years of experience. Must have prior experience working with business & IT leaders and bring a true consulting mindset rather than that of an order taker.
Duration: 12-month contract position
Work location: Remote, USA (Eastern Time work hours)
Duties:
-Interact with business & technical stakeholders to gather & understand requirements.
-Design scalable data solutions & document technical designs.
-Develop production-grade, high-performance ETL pipelines using Spark & PySpark (see the illustrative sketch after this list).
-Perform data modeling to support business requirements.
-Write optimized SQL queries using Teradata SQL, Hive SQL, and Spark SQL across platforms such as Teradata and Databricks Unity Catalog.
-Implement CI/CD pipelines to deploy code artifacts to platforms like AWS & Databricks.
-Orchestrate Databricks jobs using Databricks Workflows.
-Monitor production jobs, troubleshoot issues, and implement effective solutions.
-Participate in agile ceremonies including sprint planning, grooming, daily stand-ups, demos, & retrospectives.
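For context, the following is a minimal sketch of the kind of PySpark ETL pipeline described in the duties above. All table names, column names, and the S3 path are hypothetical, and the Delta format plus Unity Catalog three-level table names are assumed to be available (as they are on Databricks); this is illustrative only, not part of the role's actual codebase.

```python
# Minimal PySpark ETL sketch: extract raw data, aggregate it, load to a Delta table.
# Names and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

def run_daily_sales_etl(spark: SparkSession, run_date: str) -> None:
    # Extract: read raw landing-zone data (hypothetical S3 location).
    raw = spark.read.parquet("s3://example-landing-bucket/sales/")

    # Transform: filter to the run date, derive a net amount, and aggregate by region.
    daily = (
        raw.filter(F.col("order_date") == run_date)
           .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
           .groupBy("order_date", "region")
           .agg(F.sum("net_amount").alias("total_net_amount"),
                F.countDistinct("order_id").alias("order_count"))
    )

    # Load: overwrite only this run date's slice of a Delta table
    # (hypothetical Unity Catalog name: catalog.schema.table).
    (daily.write.format("delta")
          .mode("overwrite")
          .option("replaceWhere", f"order_date = '{run_date}'")
          .saveAsTable("main.sales.daily_sales_summary"))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily_sales_etl").getOrCreate()
    run_daily_sales_etl(spark, "2024-01-15")
```

In practice, a job like this would be packaged and deployed through the CI/CD pipeline and scheduled with Databricks Workflows, as described in the duties above.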
Experience/Skills:
-Strong hands-on experience with Spark, PySpark, shell scripting, Teradata & Databricks.
-Proficient in writing complex SQL queries & stored procedures.
-Prior experience implementing a data lake & data warehouse on Databricks.
-Familiarity with agile methodologies & DevOps tools such as Git, Jenkins & Artifactory.
-Experience with Unix/Linux shell scripting (ksh) & basic Unix server administration.
-Knowledge of job scheduling tools such as CA7 Enterprise Scheduler.
-Hands-on experience with AWS services including S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch.
-Expertise in Databricks components such as Delta Lake, Notebooks, Pipelines, cluster management & cloud integration (Azure/AWS).
-Experience with Jira and/or Confluence.
-Demonstrated creativity, foresight & sound judgment in planning & delivering technical solutions.