Senior Data Engineer - W2

Overview

Remote
Depends on Experience
Contract - Independent
Contract - W2

Skills

Amazon Web Services
Apache Kafka
Apache Parquet
Apache Spark
Attention To Detail
Cloud Computing
Collaboration
Communication
Computerized System Validation
Conflict Resolution
Continuous Delivery
Continuous Integration
Data Engineering
Data Governance
Databricks
Docker
ELT
Extract, Transform, Load (ETL)
Git
Good Clinical Practice
Google Cloud Platform
JSON
Kubernetes
Microsoft Azure
Orchestration

Job Details

Senior Data Engineer

Location: Bloomfield, CT or Woodbridge, NJ (open to 100% remote candidates)

Duration: 8 months base contract; possible extensions

Notes:

100% remote is fine

MUST HAVE strong Python, SQL, PySpark, and Databricks

Job Description:

Job Summary:

We are seeking a highly skilled and experienced Senior Data Engineer with a strong background in Databricks, SQL, Python, and PySpark to join our data engineering team. The ideal candidate will have a proven track record of designing, building, and deploying scalable data pipelines and solutions in cloud environments. You will be responsible for end-to-end development, from data ingestion to deployment, ensuring high performance and reliability.

Required Qualifications:

5+ years of professional experience in data engineering or related roles.

Strong proficiency in Databricks, SQL, Python, and PySpark.

Experience with end-to-end deployment of data solutions in cloud environments (e.g., Azure, AWS, Google Cloud Platform).

Solid understanding of ETL/ELT processes, data modeling, and data warehousing concepts.

Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration tools (e.g., Airflow).

Experience with structured and unstructured data formats (e.g., Parquet, JSON, CSV).

Strong problem-solving skills and attention to detail.

Excellent communication and collaboration skills.

Preferred Qualifications:

Experience with Delta Lake or other Databricks ecosystem tools.

Knowledge of data governance, security, and compliance standards.

Familiarity with containerization (Docker) and Kubernetes.

Exposure to real-time data processing (e.g., Kafka, Spark Streaming).
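To give candidates a concrete sense of the ETL/ELT pattern this role centers on, here is a minimal, framework-free Python sketch of an extract-transform-load pass. In the actual role this work would be done at scale with PySpark on Databricks; the records, field names, and aggregation here are made-up illustrations, not part of the job requirements.

```python
import json

# Toy extract-transform-load (ETL) pass. All records and field
# names are hypothetical, for illustration only.

raw_records = [
    '{"id": 1, "amount": "19.99", "region": "ct"}',
    '{"id": 2, "amount": "5.00",  "region": "nj"}',
    '{"id": 3, "amount": "bad",   "region": "ct"}',  # malformed amount
]

def extract(lines):
    """Parse raw JSON lines into dicts (the 'extract' step)."""
    return [json.loads(line) for line in lines]

def transform(rows):
    """Cast types, normalize fields, and drop malformed rows."""
    clean = []
    for row in rows:
        try:
            clean.append({
                "id": row["id"],
                "amount": float(row["amount"]),
                "region": row["region"].upper(),
            })
        except (KeyError, ValueError):
            continue  # skip records that fail validation
    return clean

def load(rows):
    """Aggregate per region (a stand-in for writing Parquet/Delta)."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

totals = load(transform(extract(raw_records)))
print(totals)  # {'CT': 19.99, 'NJ': 5.0}
```

In a Databricks/PySpark pipeline the same shape appears as a read from cloud storage, DataFrame transformations with schema enforcement, and a write to Parquet or Delta tables.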

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

About Cyma Systems Inc