Job Description
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics using Databricks.
Contribute to best practices for data engineering, ensuring data quality, reliability, and performance.
Contribute to data modernization by leveraging cloud solutions and optimizing data processing workflows (Databricks-specific performance methodology, tuning table structures, cluster configurations).
Experience with Kafka-based messaging integration of microservices (ECS or EKS) and streaming pipelines in Databricks (see the illustrative sketch below)
Familiarity with CI/CD, application resiliency, and security
Practical cloud-native experience in AWS and deployment using Terraform
Proficiency in scripting languages such as Python
Advanced knowledge of RDBMS, such as Aurora, for ETL and optimization
Experience with OpenSearch
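For context on the streaming work described above, the following is a minimal sketch, assuming PySpark Structured Streaming on Databricks with a Kafka source and a Delta sink; the broker address, topic, checkpoint path, and table name are placeholders, not details from this posting.

```python
# Illustrative only: read events from a Kafka topic and append them to a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Schema of the JSON payload carried in the Kafka message value (placeholder fields).
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw Kafka stream; the message value arrives as bytes.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

# Parse the JSON value into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Append to a Delta table; the checkpoint lets the stream resume after restarts.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .outputMode("append")
    .toTable("analytics.events")                              # placeholder table
)
query.awaitTermination()
```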