Senior Data Engineer

Overview

Work Arrangement: Hybrid
Compensation: Depends on Experience
Employment Type: Contract - W2
Contract Duration: 12 Month(s)
Travel: 25%

Skills

Data Architecture
Data Engineering
Data Flow
Data Modeling
Data Processing
AWS
Data Warehouse
Docker
Databricks
Data Quality
Apache Beam
Apache Kafka
Apache Spark
HIPAA
Kubernetes
Python
PyTorch
SQL
Microsoft Azure
ML
ML Ops
ETL
GCP
ELT

Job Details

Job Title: Senior Data Engineer

Location: Charlotte, NC (Hybrid)

Duration: 12+ Months

Employment: W2 Only (No C2C / 1099)

Job Summary:

We are seeking a highly skilled Senior Data Engineer with a strong background in data architecture and pipeline development, as well as hands-on experience supporting AI/ML initiatives. The ideal candidate will build scalable data infrastructure, enable advanced analytics, and support machine learning workflows across the organization.

Key Responsibilities:

  • Design, build, and maintain robust, scalable, and secure data pipelines for structured and unstructured data.
  • Collaborate with data scientists and ML engineers to prepare and optimize datasets for training, validation, and inference.
  • Develop and manage data lakes, data warehouses, and real-time streaming solutions using modern cloud platforms (e.g., Azure, AWS, or Google Cloud Platform).
  • Implement data quality, lineage, and governance practices to ensure reliable and compliant data usage.
  • Support the deployment of machine learning models into production environments, ensuring performance and scalability.
  • Automate data workflows and integrate them with MLOps pipelines using tools like Airflow, MLflow, or Kubeflow (a minimal sketch follows this list).
  • Work closely with cross-functional teams to understand business requirements and translate them into data solutions.
  • Monitor and troubleshoot data infrastructure, ensuring high availability and performance.
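
To illustrate the workflow-automation responsibility above, the following is a minimal sketch of a daily Airflow (2.x) DAG with extract, validate, and load steps. All task names, sample values, and the validation rule are hypothetical placeholders, not a prescribed implementation.

    # Minimal sketch of a daily ETL pipeline in Apache Airflow 2.x
    # (the "schedule" parameter requires Airflow 2.4+).
    # All names, values, and rules below are illustrative placeholders.
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_daily_etl():
        @task
        def extract() -> list:
            # Placeholder: pull raw records from a source system.
            return [{"id": 1, "value": 42}]

        @task
        def validate(records: list) -> list:
            # Simple data-quality gate: keep only complete records.
            return [r for r in records if "id" in r and "value" in r]

        @task
        def load(records: list) -> None:
            # Placeholder: write validated records to the warehouse.
            print(f"Loading {len(records)} records")

        load(validate(extract()))

    example_daily_etl()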

Required Qualifications:

  • 12+ years of experience in data engineering, with a strong focus on cloud-based data platforms.
  • Proficiency in SQL, Python, and data processing frameworks such as Spark, Databricks, or Apache Beam (a brief PySpark sketch follows this list).
  • Experience with cloud services such as Azure Data Factory, AWS Glue, or Google Cloud Dataflow.
  • Solid understanding of data modeling, ETL/ELT, and data warehousing concepts.
  • Familiarity with machine learning workflows, including feature engineering and model deployment.
  • Experience with MLOps tools and practices.
  • Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
  • Exposure to real-time data streaming technologies like Kafka or Pub/Sub.
  • Knowledge of AI/ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
  • Strong understanding of data security, privacy, and compliance standards (e.g., GDPR, HIPAA).
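
As a brief illustration of the SQL, Python, and Spark proficiency listed above, here is a minimal PySpark sketch that mixes Spark SQL with the DataFrame API. The table, columns, and sample rows are hypothetical, intended only to show the style of work involved.

    # Minimal PySpark sketch combining Spark SQL with the DataFrame API.
    # Table, column names, and sample rows are illustrative only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example_elt").getOrCreate()

    # Placeholder source; in practice this would be a warehouse table
    # or files in cloud object storage.
    events = spark.createDataFrame(
        [("2024-01-01", "click", 3), ("2024-01-01", "view", 7)],
        ["event_date", "event_type", "cnt"],
    )
    events.createOrReplaceTempView("events")

    # Aggregate with SQL, then enrich with the DataFrame API.
    daily = spark.sql(
        "SELECT event_date, event_type, SUM(cnt) AS total "
        "FROM events GROUP BY event_date, event_type"
    ).withColumn("loaded_at", F.current_timestamp())

    daily.show()
    spark.stop()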
