Big Data Engineer (Hybrid - McLean or Richmond, VA)

Overview

On Site
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 6 month(s)

Skills

Recruiting
Innovation
Big Data
Analytical Skill
Cloud Computing
Extract
Transform
Load
ELT
Financial Services
Python
PySpark
SQL Tuning
Amazon S3
Amazon EMR
Amazon Redshift
SQL
Relational Databases
Data Processing
Continuous Integration
Continuous Delivery
Git
DevOps
Data Engineering
Amazon Web Services
AWS Step Functions
Orchestration
Data Lake
Data Warehouse
Docker
Kubernetes
Apache Spark
Streaming
Amazon Kinesis
Apache Kafka
Machine Learning (ML)
Workflow

Job Details

Join a Global Leader in Talent Solutions: TekIntegral Inc.

Who We Are:

TekIntegral Inc. isn't just another staffing company; we're a hub of innovation, connecting top talent with client needs.

Recognized for 140% growth in the past three years.

Our Motto: Right Talent. Right Time. Right Place. Right Price.

Please find the job description below.

Position: Big Data Engineer - Python / PySpark

Location: Hybrid (McLean or Richmond, VA)

Type: Contract

Visa:" "The client is not willing to sponsor visas at this time." "

Description:

Prior Capital One experience is preferred and improves a candidate's chances of an interview.

We are looking for an experienced Big Data Engineer skilled in PySpark, Apache Spark, and AWS to design, build, and optimize large-scale data pipelines and analytical platforms. The ideal candidate will have hands-on experience with distributed data processing, cloud-native data services, and modern ETL/ELT frameworks. You will work closely with data scientists, analysts, and platform teams to ensure reliable, scalable, and secure data solutions.

Responsibilities:

Design, develop, and maintain data pipelines using Python, PySpark, and Apache Spark for large-scale batch and streaming data processing.

Build and enhance ETL/ELT workflows to ingest, transform, cleanse, and deliver high-quality data to downstream systems.

Develop reusable data processing frameworks and modular code for scalable data engineering solutions.

Optimize Spark jobs for performance, memory efficiency, and distributed execution.

Technical Skills

Recent experience in financial services is a big plus.

10+ years of hands-on data engineering experience.

Strong programming skills in Python and experience building data-centric applications.

Advanced proficiency in PySpark and Apache Spark (RDDs, DataFrames, Spark SQL, performance tuning).

Experience with AWS data ecosystem:

S3, EMR, Glue, Lambda, Kinesis, Athena, Redshift

Strong understanding of SQL and experience working with data warehouses and relational databases.

Experience with distributed systems, large-scale data processing, and parallel computation.

Familiarity with CI/CD, Git, and DevOps processes for data engineering.

Experience with Airflow, AWS Step Functions, or similar orchestration tools.

Additional Skills

Experience with data lake and data warehouse architectures.

Experience with containerized deployments (Docker, Kubernetes, EKS).

Exposure to streaming frameworks (Spark Streaming, Kinesis, Kafka).

Familiarity with machine learning workflows and model deployment pipelines.

Why Work with Us?

TekIntegral is an equal opportunity employer, dedicated to fostering a workplace where diverse talents and perspectives are valued.

We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all, regardless of age, gender, ethnicity, disability, or other protected characteristics.
