Overview
Hybrid
Depends on Experience
Accepts corp to corp applications
Contract - W2
50% Travel
Skills
AWS
Python
PySpark
EMR
Spark
Kafka OR Kinesis
Lambda
Glue
Athena
Job Details
We are seeking a Data Engineer with deep expertise in Python, PySpark, AWS, big data ecosystems, and SQL/NoSQL technologies to drive scalable, real-time data solutions using CI/CD and stream-processing frameworks.
Experience: - 8+ Years
Contract
In-person Interview Required
Location: - Chicago, IL (Local Only)
Educational Qualifications: -
- Engineering Degree BE/ME/BTech/MTech/BSc/MSc.
- Technical certification in multiple technologies is desirable.
Mandatory skills: -
- Python, PySpark, AWS
Good to have skills: -
- EMR, Spark, Kafka/Kinesis
Responsibilities: -
- Proficient developer in multiple languages (Python is a must), with the ability to quickly learn new ones.
- Expertise in SQL (complex queries; relational databases, preferably PostgreSQL) and NoSQL databases (Redis and Elasticsearch).
- Extensive big data experience, including EMR, Spark, Kafka/Kinesis, and optimizing data pipelines, architectures, and datasets.
- AWS expert with hands-on experience in Lambda, Glue, Athena, Kinesis, IAM, EMR/PySpark, and Docker.
- Proficient in CI/CD development using Git, Terraform, and agile methodologies.
- Comfortable with stream-processing systems (Storm, Spark Streaming) and workflow management tools (Airflow).
- Exposure to knowledge graph technologies (Graph DB, OWL, SPARQL) is a plus.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.