Overview
On Site
Depends on Experience
Contract - W2
Contract - 12 Month(s)
Skills
Data Engineering
Java
Scala
Apache Spark
Apache Flink
AWS
SQL
Job Details
Responsibilities:
- Design and develop scalable data pipelines using Apache Spark, Flink, and Scala.
- Build and maintain data integration solutions across various data sources using AWS services.
- Develop efficient, reusable, and reliable code in Java and Scala.
- Implement real-time stream processing and batch processing architectures.
- Collaborate with data scientists, architects, and other engineers to develop end-to-end data solutions.
- Monitor and optimize performance of data workflows and job executions.
- Ensure data quality, security, and compliance throughout the data lifecycle.
- Troubleshoot and resolve data-related technical issues.
Required Skills & Qualifications:
- 5+ years of professional experience in data engineering or backend software development.
- Strong programming skills in Java and Scala.
- Hands-on experience with Apache Spark and Apache Flink for batch and stream processing.
- Solid experience working with AWS services such as S3, EMR, Lambda, Kinesis, Glue, and Redshift.
- Proficiency in designing data models and working with large datasets.
- Familiarity with CI/CD practices and tools like Git, Jenkins, or similar.
- Strong understanding of distributed systems and cloud-native design patterns.
Preferred Qualifications:
- Experience with containerization tools such as Docker and orchestration using Kubernetes.
- Familiarity with data lake architecture and modern data stack concepts.
- Knowledge of SQL and NoSQL databases.
- Experience with monitoring and logging tools like Prometheus, Grafana, or CloudWatch.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.