Professional Services Consultant 3

Overview

On Site
$100
Accepts corp to corp applications
Contract - W2
Contract - 6 Month(s)

Skills

Python
Kubernetes
SAP
SQL
Web Services
Shell Scripting
API Development
Data Science
Cloud Computing
Linux Administration
Data Integrity
Microsoft Azure
Amazon Web Services
Snowflake
Collaboration
Communication
Management
Analytical Skills
Conflict Resolution
Problem Solving
Apache Airflow
Data Integration
Extract, Transform, Load (ETL)
Data Warehouse
Analytics
Continuous Delivery
Continuous Integration
Data Engineering
Data Governance
Data Lake
Data Quality
Decision-Making
Emerging Technologies
Innovation
Performance Tuning
Scalability
Professional Services
Replication

Job Details

The ideal candidate will have a strong foundation in Python programming, experience with Snowflake for data warehousing, proficiency in AWS and Kubernetes (EKS) for cloud services management, and expertise in CI/CD practices, Apache Airflow, dbt, and API development. Experience with SAP data replication using HVR is highly desirable, as it plays a critical role in our enterprise data strategy. This role is essential to enhancing our data integration capabilities and supporting our data-driven initiatives.

Role and Responsibilities

As the Technical Data Integration Engineer, you will play a pivotal role in shaping our data integration initiatives. You will be part of a talented team of data integration engineers while staying actively involved in the technical aspects of projects. Your responsibilities will include:

Hands-On Contribution: Remain hands-on with data integration engineering tasks, including data pipeline development and extract-load (EL) processes. Be the go-to expert for complex technical challenges.

Integration Architecture: Design and implement scalable, efficient data integration architectures that meet business requirements. Ensure data integrity, quality, scalability, and security throughout the pipeline.

Tool Proficiency: Leverage your expertise in Snowflake, SQL, Apache Airflow, AWS, APIs, Python, and HVR for SAP replication to architect, develop, and optimize data solutions (a brief sketch of this kind of pipeline follows these responsibilities). Stay current with emerging technologies and industry best practices.

Data Quality: Monitor data quality and integrity, implementing data governance policies as needed.

Cross-Functional Collaboration: Collaborate with data science, data warehousing, analytics, and other cross-functional teams to understand data requirements and deliver actionable insights.

Performance Optimization: Identify and address performance bottlenecks within the data infrastructure. Optimize data pipelines for speed, reliability, and efficiency.
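
For illustration only (not a requirement of this posting): a minimal sketch of the extract-and-load pattern described above, written as an Apache Airflow DAG that loads into Snowflake. The DAG name, source endpoint, connection ID, stage, and table are hypothetical placeholders, and the sketch assumes Airflow 2.4+ with the Snowflake provider package installed.

# Illustrative sketch only: a minimal Airflow DAG following the extract-and-load
# (EL) pattern. Endpoint, connection ID, stage, and table names are placeholders.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator


def extract_orders(**context):
    # Pull raw records from a hypothetical source API; no transformation here.
    resp = requests.get("https://example.internal/api/orders")
    resp.raise_for_status()
    # In a real pipeline the payload would be staged (e.g., written to S3);
    # here we only record how many rows were fetched.
    context["ti"].xcom_push(key="row_count", value=len(resp.json()))


with DAG(
    dag_id="orders_el_pipeline",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)

    # Load staged files into Snowflake; stage and table names are placeholders.
    load = SnowflakeOperator(
        task_id="load_raw_orders",
        snowflake_conn_id="snowflake_default",
        sql="COPY INTO raw.orders FROM @raw_stage/orders/;",
    )

    extract >> load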

Qualifications

Minimum of a Bachelor's degree in Computer Science, Engineering, or a related field. An advanced degree is a plus.

5+ years of hands-on experience in data engineering.

Familiarity with cloud platforms, such as AWS or Azure.

Expertise in Apache Airflow, Snowflake, SQL, Python, shell scripting, API gateways, and web services setup.

Experience with SAP data replication using HVR or similar tools.

Strong experience in full-stack development, AWS, Linux administration, data lake construction, data quality assurance, and integration metrics.

Excellent analytical, problem-solving, and decision-making abilities.

Strong communication skills, with the ability to articulate technical concepts to non-technical stakeholders.

A collaborative mindset, with a focus on team success.

If you are a results-oriented Data Integration Engineer with a strong background in Apache Airflow, Snowflake, Python, and SAP replication using HVR, we encourage you to apply. Join us in building data solutions that drive business success and innovation.
