Ab Initio ETL Developer

Overview

On Site
Depends on Experience
Contract - W2
Contract - 12 Month(s)

Skills

Ab Initio
Ab Initio Developer
Amazon Web Services
Apache Hadoop
Apache Hive
Apache Kafka
Apache Spark
Big Data
Extract, Transform, Load
Google Cloud Platform
Kubernetes
RDBMS
PL/SQL
Microsoft Azure
ETL
Hadoop
Spark
Hive
Kafka
Oracle
SQL Server
Teradata
DB2

Job Details

Required skills:
Please submit only candidates who are authorized to work in the United States.
Only applicants who are currently local to Dallas, Texas, or are willing to relocate will be considered.
Design, develop, and deploy ETL processes using Ab Initio GDE.
Build high-performance data integration and transformation pipelines.
Work with Ab Initio Co-Operating System, EME (Enterprise Meta Environment), and metadata-driven development.
Develop and optimize graphs for batch and real-time processing.
Integrate with RDBMS (Oracle, SQL Server, Teradata, DB2, etc.) and external data sources.
Implement continuous flows, web services, and message-based integration with Ab Initio.
o Continuous Flows (Co-Op & GDE)
o Plans and Psets
o Conduct-It for job scheduling and orchestration
o Graphs and Parameter Sets

Nice to have skills:
Exposure to AWS, Azure, or Google Cloud Platform for cloud-based data solutions.
Experience with big data ecosystems (Hadoop, Spark, Hive, Kafka) is a strong plus.
Containerization (Docker, Kubernetes) knowledge desirable.
Monitoring & Security:
Job monitoring and scheduling experience (Control-M, Autosys, or similar).
Familiarity with security standards, encryption, and access management.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.