Overview
On Site
Full Time
Skills
Dependability
IT Operations
Data Integration
Metadata Management
Management
Software Engineering
Code Review
Continuous Integration and Delivery
Continuous Monitoring
Code Refactoring
Systems Architecture
Data Quality
Documentation
Data Architecture
Testing
Operational Efficiency
Process Optimization
Leadership
Innovation
IT Management
Data Engineering
Analytics
Computer Science
Data Science
Programming Languages
Python
Scala
Java
Rust
Oracle
OLTP
Extract Transform Load (ETL)
ELT
Apache Spark
SOAP
RPC
GraphQL
OLAP
Data Warehouse
Star Schema
NoSQL
Database Design
MongoDB
Message Broker
RabbitMQ
Apache Kafka
Red Hat Linux
Microsoft Azure
Amazon Web Services
Cloud Computing
Continuous Integration
Continuous Delivery
Jenkins
Ansible
Progress Chef
Puppet
Docker
Kubernetes
Conflict Resolution
Problem Solving
Databricks
Business Intelligence
Microsoft Power BI
IBM Cognos Analytics
Tableau
Caching
Memcached
Redis
Database
Elasticsearch
Apache Solr
Data Modeling
ERwin
ER/Studio
Data Governance
Master Data Management
Informatica
TIBCO Software
Job Details
Benefits Summary
- Flexible and hybrid work arrangements
- Paid time off/Paid company holidays
- Medical plan options/prescription drug plan
- Dental plan/vision plan options
- Flexible spending and health savings accounts
- 401(k) retirement savings plan with a Roth savings option and company matching contributions
- Educational assistance program
Overview
The Sr. Data Engineer requires expertise in DataOps and is responsible for ensuring dependable, scalable, and effective data processes and solutions by combining components, functions, and skills from data engineering, software engineering, and IT Operations. This role is critical to AM Best's goal of developing, maintaining, and scaling a sophisticated platform for data and analytics.
Responsibilities
Create and support scalable data solutions:
- Build high-performance data systems, including databases, APIs, and data integration pipelines
- Implement a metadata-driven architecture and infrastructure as code approach to automate and simplify the design, deployment, and management of data systems
- Facilitate the adoption of data and software engineering best practices, including code review, testing, and continuous integration and delivery (CI/CD)
- Develop and establish a data governance framework
- Continuously monitor process performance and implement efficiency improvements, such as fine-tuning existing ETL processes, optimizing queries, or refactoring code
- Assess and make optimal use of cloud platforms and technologies, especially Azure and Databricks, to enhance system architecture
- Implement data quality checks and build processes to identify and resolve data issues (see the sketch after this list)
- Create and maintain documentation for data architecture, standards, and best practices
- Contribute designs, code, tooling, testing, and operational support
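For illustration only, a minimal sketch of the kind of metadata-driven data quality check described above, written in PySpark; the table name, column names, and rule set are assumptions and not details from this posting:

```python
# Minimal sketch: metadata-driven data quality checks in PySpark.
# Table, column, and rule names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Rules live as data (metadata) rather than hard-coded logic,
# so new checks can be added without changing pipeline code.
dq_rules = [
    {"table": "orders", "column": "order_id", "check": "not_null"},
    {"table": "orders", "column": "amount", "check": "non_negative"},
]

def count_violations(df, rule):
    """Return the number of rows violating a single rule."""
    col = F.col(rule["column"])
    if rule["check"] == "not_null":
        return df.filter(col.isNull()).count()
    if rule["check"] == "non_negative":
        return df.filter(col < 0).count()
    raise ValueError(f"Unknown check: {rule['check']}")

for rule in dq_rules:
    df = spark.table(rule["table"])  # assumes the table is registered in the catalog
    violations = count_violations(df, rule)
    print(f"{rule['table']}.{rule['column']} {rule['check']}: {violations} violations")
```

Keeping the rules in a table or config file rather than in code is one way a metadata-driven approach keeps new checks cheap to add.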
Increase operational efficiency:
- Identify opportunities for process optimization and automation to enhance data operations efficiency
Leadership and innovation:
- Provide technical leadership to the data engineering team and actively lead design discussions
- Ensure the data infrastructure remains modern and efficient by staying current with the latest tools, technologies, and methodologies in the data and analytics space
Qualifications
- 7+ years of experience working as a Data Engineer, Data Architect, Database Developer, or in a similar role
- Bachelor's Degree in Computer Science, Data Science, Engineering or related field
Skills
7+ years of experience in programming languages such as Python, Scala, Java, Rust or similar
7+ years of experience in Oracle OLTP database design and development
7+ years of experience in building ETL / ELT data pipelines using Apache Spark, Airflow, dbt or similar (see the orchestration sketch after this list)
7 years of experience in developing APIs that include REST, SOAP, RPC, GraphQL or similar
7 years of experience in OLAP / data warehouse design and development using dimensional (Kimball star-schema), data vault or Inmon methodologies
5 years of experience in NoSQL database design and development preferably on MongoDB or similar
5 years of experience working with message broker platforms like RabbitMQ, Apache Kafka, Red Hat AMQ or similar
5 years of experience working with Azure or AWS cloud-native technologies / services
Working knowledge of CI/CD automation tools or services like Jenkins, Ansible, Chef, Puppet or similar
Working knowledge of containerized platform or services like Docker, Kubernetes or similar
Must be hands-on, have a passion for data and problem solving, and bring an innovative mindset to help deliver results across projects
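For illustration only, a minimal sketch of ETL / ELT orchestration with Apache Airflow, tied to the pipeline requirement above; the DAG id, schedule, and task bodies are assumptions (the `schedule` argument assumes Airflow 2.4+):

```python
# Minimal sketch of an Airflow-orchestrated ELT pipeline.
# DAG id, schedule, and task bodies are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g. pull incremental changes from the OLTP source
    ...

def transform():
    # e.g. run Spark or dbt transformations on the staged data
    ...

def load():
    # e.g. publish curated tables to the warehouse
    ...

with DAG(
    dag_id="elt_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```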
Pluses:
Experience working with Databricks data intelligence platform
Experience working with business intelligence tools / platforms like Power BI, Cognos, Tableau or similar
Experience working with caching databases: Memcached, Redis or similar (see the caching sketch after this list)
Experience working with search databases and platforms: Elasticsearch, Apache Solr or similar
Experience working with data modeling tools: ERwin, ER/Studio, Navicat, SqlDBM or similar
Knowledge of data governance tools: Collibra, Alation or similar
Knowledge of master data management tools: Informatica, TIBCO, Riversand, Ataccama or similar
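For illustration only, a minimal read-through caching sketch with redis-py, tied to the caching item above; the connection details, key naming, TTL, and the fetch_customer_from_db helper are assumptions:

```python
# Minimal sketch of a read-through cache using redis-py.
# Connection details, key naming, TTL, and the DB helper are illustrative assumptions.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)

def fetch_customer_from_db(customer_id):
    # Stand-in for a real database query (hypothetical helper).
    return {"id": customer_id, "name": "example"}

def get_customer(customer_id, ttl_seconds=300):
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit
    record = fetch_customer_from_db(customer_id)  # cache miss: fall through to the database
    cache.setex(key, ttl_seconds, json.dumps(record))  # store with a TTL so stale entries expire
    return record
```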