Overview
On Site
Full Time
Skills
Data Integration
Metadata Management
Management
Software Engineering
Code Review
Continuous Integration and Delivery
Data Governance
Continuous Monitoring
Code Refactoring
Databricks
Systems Architecture
Data Quality
Documentation
Data Architecture
Testing
Process Optimization
Database
Computer Science
Data Science
Programming Languages
Python
Scala
Java
Rust
Oracle
OLTP
Extract
Transform
Load
ELT
Apache Spark
SOAP
RPC
GraphQL
OLAP
Data Warehouse
Star Schema
NoSQL
Database Design
MongoDB
Message Broker
RabbitMQ
Apache Kafka
Red Hat Linux
Microsoft Azure
Amazon Web Services
Cloud Computing
Continuous Integration
Continuous Delivery
Jenkins
Ansible
Progress Chef
Puppet
Docker
Kubernetes
Job Details
Senior Data Engineer
Responsibilities:
Build high performance data systems including databases, APIs, and data integration pipelines.
Implement a metadata-driven architecture and infrastructure as code approach to automate and simplify the design, deployment, and management of data systems.
Facilitate the adoption of data and software engineering best practices, including code review, testing, and continuous integration and delivery (CI/CD).
Develop and establish a data governance framework.
Continuously monitor process performance and implement efficiency improvements, including fine-tuning existing ETL processes, optimizing queries, and refactoring code.
Assess and make optimal use of cloud platforms and technologies, especially Azure and Databricks, to enhance system architecture.
Implement data quality checks and build processes to identify and resolve data issues.
Create and maintain documentation for data architecture, standards, and best practices.
Contribute designs, code, tooling, testing, and operational support.
Identify opportunities for process optimization and automation to enhance data operations efficiency.
Requirements:
7+ years of experience working as a Data Engineer, Data Architect, Database Developer, or in a similar role.
Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
7+ years of experience in programming languages such as Python, Scala, Java, Rust or similar.
7+ years of experience in Oracle OLTP database design and development.
7+ years of experience in building ETL / ELT data pipelines using Apache Spark, Airflow, dbt or similar.
7+ years of experience developing APIs, including REST, SOAP, RPC, GraphQL, or similar.
7+ years of experience in OLAP / data warehouse design and development using dimensional (Kimball star schema), Data Vault, or Inmon methodologies.
5 years of experience in NoSQL database design and development, preferably with MongoDB or similar.
5 years of experience working with message broker platforms such as RabbitMQ, Apache Kafka, Red Hat AMQ, or similar.
5 years of experience working with Azure or AWS cloud-native technologies and services.
Working knowledge of CI/CD automation tools or services such as Jenkins, Ansible, Chef, Puppet, or similar.
Working knowledge of container platforms or services such as Docker, Kubernetes, or similar.
#RecruitPS
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.