Job Title: Data Engineer III
Location: 5001 Kingsley Drive, Cincinnati OH 45227
Duration: 6+ months contract
TECHNICAL SKILLS
Must Have
- Building and optimizing ETL pipelines using IBM DataStage, Snowflake, dbt, and MettleCI
- Experience with data pipeline testing:
  - Unit testing for ETL/ELT components
  - Integration and regression testing
  - Data quality and validation frameworks
  - Automated testing within CI/CD workflows
- Strong technical expertise to deliver efficient, accurate, and scalable data solutions that follow established governance and coding standards
JOB DESCRIPTION
Essential Job Requirements:
This Data Engineer III contract role will support the Principal Data Engineer by developing, testing, and monitoring high-quality ETL code for several time-sensitive, enterprise-level initiatives. The position requires strong technical expertise to deliver efficient, accurate, and scalable data solutions that follow established governance and coding standards. Core responsibilities include building and optimizing ETL pipelines using IBM DataStage, Snowflake, dbt, and MettleCI, with a focus on code quality, data accuracy, and operational reliability. The ideal candidate will be able to rapidly contribute to critical project milestones while maintaining strict adherence to best practices and governance processes.
Responsibilities and Qualifications:
Design, build, test, and maintain scalable data management systems and pipelines.
Develop high-performance algorithms, predictive models, prototypes, and custom software components to support analytics and data processing needs.
Ensure all data systems adhere to business requirements, governance standards, and industry best practices.
Demonstrated experience with data pipeline testing, including unit testing for ETL/ELT components, integration and regression testing, data quality and validation frameworks, and automated testing within CI/CD workflows.
Integrate new data management tools, engineering technologies, and automation capabilities into existing architectures.
Establish and optimize processes for data modeling, data mining, and data production workflows.
Research and identify new opportunities for leveraging existing data assets.
Utilize a variety of programming languages, integration tools, and platforms to connect systems and enable seamless data flow.
Collaborate closely with cross-functional partners, including solution architects, IT teams, and data scientists, to achieve project objectives.
Recommend and implement improvements to enhance data quality, reliability, and performance.
Maintain and update disaster recovery procedures to ensure system resilience.
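To make the data pipeline testing requirement concrete, the sketch below shows the kind of data-quality validation check that might run as an automated unit test within a CI/CD workflow. It is illustrative only: the record layout, column names (`order_id`, `amount`), and rules are hypothetical, not taken from this posting or from any specific tool named above.

```python
# Illustrative data-quality validation of the kind the role calls for:
# unit-testable checks on ETL/ELT output, runnable automatically in CI/CD.
# The schema and rules below are hypothetical examples.

def validate_rows(rows, required=("order_id", "amount"), unique_key="order_id"):
    """Return a list of human-readable data-quality violations.

    Checks performed:
      - every required column is present and non-null
      - the unique key is not duplicated across rows
      - numeric amounts are non-negative
    """
    errors = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                errors.append(f"row {i}: missing required column '{col}'")
        key = row.get(unique_key)
        if key in seen:
            errors.append(f"row {i}: duplicate {unique_key} {key!r}")
        seen.add(key)
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            errors.append(f"row {i}: negative amount {amount}")
    return errors
```

In practice, checks like these would be wrapped in a test framework and gated in the CI/CD pipeline, so a failing validation blocks promotion of the ETL code.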