Overview
- Work arrangement: Hybrid
- Compensation: Depends on experience
- Accepts corp-to-corp applications
- Employment type: Contract - W2
Skills
TensorFlow
PyTorch
Microservices
Orchestration
Performance Tuning
Problem Solving
DevOps
Data Analysis
Data Engineering
Data Manipulation
Data Processing
Communication
Apache Kafka
Apache Spark
Artificial Intelligence
Big Data
Agile
Amazon Redshift
Scrum
Snowflake Schema
Storage
Streaming
Extract, Transform, Load (ETL)
Machine Learning (ML)
PySpark
Python
Workflow
Continuous Delivery
Continuous Integration
Data Warehouse
Docker
Kubernetes
Amazon S3
Amazon Web Services
Analytical Skills
Collaboration
Conflict Resolution
Job Details
Key Responsibilities:
- Design, develop, and maintain robust ETL workflows and data pipelines.
- Work with large-scale data processing and storage solutions on AWS, including services like S3, Glue, Athena, Lambda, and Redshift.
- Write clean, efficient, and scalable code using Python and PySpark.
- Integrate AI/ML models using frameworks such as TensorFlow or PyTorch into data workflows.
- Develop and optimize batch and streaming data pipelines using technologies like Apache Spark, Kafka, and Snowflake.
- Collaborate with cross-functional teams in an Agile environment to deliver high-impact data engineering solutions.
- Apply DevOps best practices, including CI/CD pipelines and container orchestration with Docker and Kubernetes.
- Contribute to architecture design and implementation of microservices-based and distributed systems.
- Tune performance and troubleshoot production data issues.
- Leverage the Palantir Foundry platform to build and maintain secure, scalable, and reusable data applications (where applicable).
Required Skills & Qualifications:
- 1.5 to 4 years of hands-on experience in data engineering.
- Strong experience with AWS services for data processing and storage (S3, Glue, Athena, Lambda, Redshift).
- Proficiency in Python and PySpark for data manipulation and transformation.
- Experience with big data frameworks like Apache Spark and Kafka.
- Familiarity with Snowflake and data warehousing concepts.
- Understanding of microservices architecture and distributed systems.
- Exposure to AI/ML frameworks (TensorFlow, PyTorch) and an understanding of how they integrate into data pipelines.
- Experience with DevOps practices, including CI/CD tools and container orchestration (Docker, Kubernetes).
- Prior experience working in Agile/Scrum teams.
- Palantir Foundry experience or certification is a strong plus.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Data Analytics or AWS Certified Solutions Architect)
- Palantir Foundry Certification (if available)
- Strong analytical and problem-solving skills
- Excellent communication and team collaboration abilities