Job Details
We are seeking a highly skilled Data Engineer with strong expertise in AWS Cloud Services, Big Data technologies, and API development. The ideal candidate will design, develop, and optimize scalable data solutions, leveraging modern cloud and DevOps practices to support advanced analytics and machine learning workloads.
Cloud Architecture & Development
Design and implement solutions using AWS services such as EMR, SageMaker, DynamoDB, Elasticsearch, ECS Fargate, and RDS.
Ensure high availability, scalability, and security of cloud-based applications.
Big Data Engineering
Develop and optimize data pipelines using PySpark for large-scale data processing.
Integrate data from multiple sources for analytics and machine learning use cases.
API Development
Build and maintain RESTful APIs using Python frameworks such as Flask, Django, or FastAPI.
Ensure APIs are secure, performant, and well-documented.
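As a minimal sketch of the API work above, here is a single Flask endpoint exercised in-process with Flask's built-in test client. The /health route and its JSON payload are illustrative assumptions; the posting only names the frameworks, not any specific API.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Hypothetical liveness endpoint returning a small JSON document
    return jsonify(status="ok")

# Exercise the endpoint in-process using Flask's test client
client = app.test_client()
response = client.get("/health")
print(response.get_json())  # → {'status': 'ok'}
```

The same route structure carries over to Django or FastAPI; only the decorator and response helpers change.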
DevOps & Automation
Implement CI/CD pipelines using Jenkins for automated deployments.
Collaborate with operations teams to streamline release processes and improve system reliability.
Required Skills
Strong hands-on experience with AWS Cloud Services (EMR, SageMaker, DynamoDB, Elasticsearch, ECS Fargate, RDS).
Proficiency in Big Data programming using PySpark.
Solid experience in API development with Python frameworks (Flask/Django/FastAPI).
Knowledge of DevOps practices and Jenkins automation for CI/CD.
Familiarity with containerization (Docker) and orchestration (Kubernetes/ECS) is a plus.
Excellent problem-solving skills and ability to work in a fast-paced environment.
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience with machine learning workflows and data science tools.
Understanding of security best practices in cloud environments.