Skills:
3 to 8 years of experience in data engineering, with a focus on cloud environments.
Proven success in data modeling, ETL pipeline development, and data warehouse implementation.
Solid experience with data warehouse platforms such as SQL Server, Oracle, Redshift, or Teradata.
Exposure to Big Data ecosystems, including tools and technologies like NoSQL databases, Hadoop, Hive, HBase, Pig, Spark, or Elasticsearch.
Proficiency in Python, C#/.NET, Java, or an equivalent programming language used for data engineering.
Experience in the healthcare domain, especially with healthcare data or EDI transactions.
Familiarity with API development and integration.
Background in data product development, integrating diverse data sources into unified platforms.
Hands-on experience in designing and managing data pipelines and distributed data processing systems using Azure, AWS, or Google Cloud Platform technologies (e.g., SQL Server, Redshift, S3, EC2, Data Lake, Data Factory).
Demonstrated ability to provide technical leadership and mentor teams on data engineering best practices.
Relevant Microsoft Azure certifications are a plus.
Proven track record of handling ambiguity, setting priorities effectively, and delivering results within agile frameworks.
Education:
Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or an equivalent technical discipline.