Data Engineer - AWS, Bedrock

  • Chicago, IL
  • Posted 1 day ago | Updated 6 hours ago

Overview

  • On Site
  • Accepts corp-to-corp applications
  • Contract - Long term

Skills

AWS
Python
SQL
PySpark
Data Engineer
Bedrock

Job Details

Data Engineer specializing in AWS Bedrock
Experience: 7+ years
Location: Chicago, IL (3 days/week on site)

What is in it for you?

You will join as a highly skilled Data Engineer specializing in AWS Bedrock and modern data platforms, responsible for designing, building, and optimizing scalable data solutions and pipelines for advanced analytics and AI-driven applications.

Responsibilities:
  • Design, develop, and maintain robust data pipelines and architecture for large-scale data processing.

  • Implement and optimize data workflows using AWS services (Glue, Lambda, EMR, Kinesis) and Bedrock (a minimal Bedrock invocation sketch follows this list).

  • Collaborate with data scientists and ML engineers to integrate machine learning models into production environments.

  • Ensure data quality, security, and compliance across all stages of the data lifecycle.

  • Develop CI/CD pipelines for data engineering projects using Git, Terraform, and containerization tools.

  • Work with streaming and batch processing frameworks (Spark, Kafka/Kinesis, Spark Streaming).

  • Manage and optimize relational (PostgreSQL) and NoSQL (Redis, Elasticsearch) databases.

  • Monitor and troubleshoot data systems for performance and reliability.

  • Stay updated on emerging technologies in big data, AI, and cloud platforms.

  • Collaborate closely with teams in an Agile/Scrum environment.
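
As a rough, non-authoritative sketch of the Bedrock integration work described above (the region, model ID, and request body schema are illustrative assumptions; the body format varies by model family), a pipeline step might call a foundation model through the boto3 bedrock-runtime client like this:

    import json

    import boto3

    # Illustrative sketch: call a text model on AWS Bedrock from a pipeline step.
    # Region and model ID are assumptions; the body below follows the
    # Amazon Titan Text request/response schema.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    def summarize(record: str) -> str:
        body = json.dumps({
            "inputText": f"Summarize the following record:\n{record}",
            "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
        })
        response = client.invoke_model(
            modelId="amazon.titan-text-express-v1",  # assumed; any enabled text model works
            body=body,
            contentType="application/json",
            accept="application/json",
        )
        payload = json.loads(response["body"].read())
        return payload["results"][0]["outputText"]

    if __name__ == "__main__":
        print(summarize("order_id=123, status=shipped, carrier=UPS"))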

Educational Qualifications:
  • Engineering degree: BE/ME/BTech/MTech/BSc/MSc.

  • Technical certification in multiple technologies is desirable.

Mandatory Skills:
  • Programming: Strong proficiency in Python; ability to learn other languages quickly.

  • AWS Expertise: Hands-on experience with AWS Bedrock, Lambda, Glue, Athena, Kinesis, IAM, and EMR/PySpark (a PySpark pipeline sketch follows this list).

  • Big Data Technologies: EMR, Spark, Kafka/Kinesis, Airflow.

  • Databases: Advanced SQL (complex queries), PostgreSQL, Redis, Elasticsearch.

  • CI/CD & Infrastructure: Git, Terraform, Docker; experience with agile methodologies.

  • Stream Processing: Spark Streaming or similar frameworks.
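
As a sketch of the EMR/PySpark work listed above (bucket names, paths, and the event schema are assumptions for illustration), a batch job of this shape reads raw events from S3, rolls them up by day, and writes partitioned Parquet:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative batch rollup; S3 paths and column names are assumptions.
    spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

    events = spark.read.parquet("s3://example-raw-bucket/events/")  # hypothetical bucket

    daily_counts = (
        events
        .withColumn("event_date", F.to_date("event_timestamp"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    (
        daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/daily_event_counts/")  # hypothetical bucket
    )

    spark.stop()

On EMR this would typically be submitted with spark-submit; the same transform logic carries over to Glue jobs with minor changes to session setup.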

Good to Have Skills:
  • Knowledge Graph Technologies: Graph DB, OWL, SPARQL.

  • Machine Learning Frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost.

  • Model Deployment: Flask, FastAPI, Docker, Kubernetes, TensorFlow Serving, TorchServe (a minimal FastAPI serving sketch follows this list).

  • Exposure to Databricks and workflow orchestration tools.
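
As a minimal sketch of the model deployment pattern mentioned above (the model file name and feature shape are assumptions; the model is any pre-trained scikit-learn estimator serialized with joblib), a FastAPI prediction service could look like:

    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Hypothetical pre-trained scikit-learn model serialized with joblib.
    model = joblib.load("model.joblib")

    class Features(BaseModel):
        values: list[float]

    @app.post("/predict")
    def predict(features: Features) -> dict:
        # predict() expects a 2-D array: one row per sample.
        prediction = model.predict([features.values])
        return {"prediction": prediction.tolist()}

Run locally with uvicorn app:app (assuming the file is named app.py); in the setups this posting describes, the same service would be containerized with Docker and deployed on Kubernetes.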

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.