Responsibilities
Design, develop, and maintain scalable ETL pipelines to support both analytical and operational business needs.
Work with large datasets using Spark, Databricks, and Python/Scala/Java for data transformation and processing.
Develop and optimize data models and architectures in Snowflake and other relational databases.
Implement data orchestration and workflow automation using Airflow or similar tools.
Collaborate with cross-functional teams to ensure data reliability, consistency, and accessibility.
Manage and monitor data storage, retrieval, and integration on AWS (S3 and related services).
Support reporting and analytics initiatives, ensuring high-quality data delivery.
< data-start="1373" data-end="1407">Required Qualifications</>
6+ years of professional experience in Data Engineering.
Strong proficiency in SQL, Python/Scala/Java, and distributed data frameworks like Spark.
Hands-on experience with Databricks, Snowflake, and AWS (S3).
Solid understanding of data modeling, architecture, and ETL development.
Experience with Airflow or similar orchestration tools.
Familiarity with Marketing Intelligence and data integration tools such as Datorama, Improvado, or Fivetran.
Strong knowledge of Agile/Scrum development practices.
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
< data-start="2112" data-end="2143">Preferred Attributes</>
Excellent problem-solving and analytical skills.
Ability to work independently and collaborate in a hybrid team environment.
Strong communication skills and attention to detail.