IBM DataStage ETL Developer

Remote • Posted 3 hours ago • Updated 3 hours ago
Contract W2
Remote
$50 - $60/hr

Job Details

Skills

  • IBM
  • DataStage

Summary

IBM DataStage ETL Developer

Duration: 12+ months

Required Candidate Location: Remote

Type of Interview Required: Video

Software Engineer with strong ETL experience to design, build, and support file-to-table data transformations using IBM InfoSphere DataStage. You'll turn inbound file feeds into reliable, auditable SQL Server table loads with solid performance, clear error handling, and repeatable operations. Key responsibilities are detailed below.

Job Description:

Role summary: Software Engineer with strong ETL experience to design, build, and support file-to-table data transformations using IBM InfoSphere DataStage. You'll turn inbound file feeds into reliable, auditable SQL Server table loads with solid performance, clear error handling, and repeatable operations.

Key responsibilities

  • Design, develop, and maintain IBM DataStage ETL jobs that ingest file feeds (CSV, fixed-width, delimited) and load curated destination tables in SQL Server.
  • Build end-to-end ETL flows, including staging, transformations, validations, and publishing to downstream schemas.
  • Perform source-to-target mapping and implement transformation logic based on business and technical requirements.
  • Use common DataStage stages and patterns (e.g., Sequential File, Transformer, Lookup, Join/Merge, Aggregator, Sort, Funnel, Remove Duplicates), with attention to partitioning and parallel job design.
  • Write, optimize, and tune SQL Server queries, stored procedures, and T-SQL scripts used in ETL workflows.
  • Implement restartable and supportable jobs: parameterization, robust logging, reject handling, auditing columns, and reconciliation checks.
  • Apply data quality controls (format checks, referential checks, null/duplicate checks, threshold checks) and produce clear exception outputs for remediation.
  • Monitor and troubleshoot ETL runs using DataStage Director/Operations Console and SQL Server tooling; perform root-cause analysis and fix defects.
  • Improve performance through job design tuning (partitioning strategy, sorting choices, buffering, pushdown where appropriate) and SQL tuning (indexes, statistics, set-based logic).
  • Participate in code reviews, testing, documentation, and release activities; maintain clear runbooks and operational procedures.
  • Collaborate with business analysts, data modelers, QA, and production support to deliver stable pipelines.
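
To illustrate the kinds of data quality controls and reject handling described above, here is a minimal, hypothetical sketch in Python (actual implementations would live in DataStage Transformer/Filter stages and stored procedures; all names here are invented for illustration):

```python
# Hypothetical sketch of row-level data quality controls: null checks on
# required fields, duplicate-key checks, and a reject-rate threshold check.
# Rejected rows carry a reason column so they can flow to an exception output.

def validate_rows(rows, key_field, required_fields, max_reject_pct=10.0):
    """Split rows into accepted and rejected, tagging each reject with reasons."""
    accepted, rejected = [], []
    seen_keys = set()
    for row in rows:
        reasons = []
        for f in required_fields:
            if not (row.get(f) or "").strip():
                reasons.append(f"null_check:{f}")
        key = row.get(key_field)
        if key in seen_keys:
            reasons.append(f"duplicate_check:{key_field}")
        else:
            seen_keys.add(key)
        if reasons:
            rejected.append({**row, "_reject_reasons": ";".join(reasons)})
        else:
            accepted.append(row)
    total = len(accepted) + len(rejected)
    reject_pct = (len(rejected) / total * 100) if total else 0.0
    # Threshold check: abort the load if too large a share of rows fails.
    if reject_pct > max_reject_pct:
        raise ValueError(f"reject rate {reject_pct:.1f}% exceeds {max_reject_pct}%")
    return accepted, rejected
```

The reason column on each reject mirrors the "clear exception outputs for remediation" requirement: support staff can see why a row failed without re-running the job.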

Required skills and experience

  • Hands-on IBM DataStage ETL development experience, including data mapping and transformation implementation.
  • Strong SQL Server experience with advanced T-SQL (joins, window functions, CTEs, temp tables, indexing basics, query plans).
  • Solid understanding of file-based ingestion and parsing (CSV, fixed-width, headers/trailers, control totals, encoding, delimiters, quoting/escaping).
  • Experience designing ETL jobs with good operational characteristics: parameter-driven design, logging, error handling, restart/re-run strategy, and auditability.
  • Ability to troubleshoot data issues end-to-end (source file → stage tables → target tables) and communicate findings clearly.
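
The header/trailer and control-total pattern mentioned above can be sketched as follows. This is a hypothetical example, not the client's actual file layout: a header record, pipe-delimited detail records, and a trailer whose record count must reconcile with the detail rows.

```python
# Hypothetical feed layout: "H|<feed_name>|<business_date>" header,
# "D|<field>|<field>|..." detail records, and a "T|<record_count>" trailer
# whose control total is verified against the number of detail rows.

def parse_feed(lines):
    """Parse a pipe-delimited feed and verify the trailer control total."""
    header, details, trailer = None, [], None
    for line in lines:
        parts = line.rstrip("\n").split("|")
        tag = parts[0]
        if tag == "H":
            header = {"feed": parts[1], "business_date": parts[2]}
        elif tag == "D":
            details.append(parts[1:])
        elif tag == "T":
            trailer = int(parts[1])
    if header is None or trailer is None:
        raise ValueError("missing header or trailer record")
    if trailer != len(details):
        raise ValueError(
            f"control total mismatch: trailer={trailer}, detail rows={len(details)}"
        )
    return header, details
```

Failing the load on a control-total mismatch (rather than loading a partial file) is what makes the downstream reconciliation results trustworthy.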

Preferred qualifications

  • Experience with DataStage Parallel Jobs tuning (partitioning methods, collect/sort trade-offs, skew handling).
  • Familiarity with UNIX/Linux basics and shell scripting for orchestration and file handling.
  • Experience with job scheduling/orchestration tools (e.g., Control-M, Autosys) and CI/CD practices.
  • Knowledge of common warehousing patterns (incremental loads, slowly changing dimensions, surrogate keys, effective dating).
  • Experience with version control (Git) and structured promotion/release processes across environments (dev/test/prod).
  • Exposure to data governance practices (metadata, lineage, naming standards) and secure handling of sensitive data.
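
Of the warehousing patterns listed above, slowly changing dimensions with effective dating are the least self-explanatory. A minimal Type 2 sketch (assumed for illustration; column names and the in-memory list standing in for a dimension table are hypothetical):

```python
from datetime import date

# Minimal Type 2 SCD sketch: a changed record closes the current version
# (effective dating) and inserts a new one with a fresh surrogate key.
HIGH_DATE = date(9999, 12, 31)  # conventional open-ended end date

def scd2_merge(dimension, incoming, as_of):
    """Apply incoming records to a list of dimension rows, Type 2 style."""
    current = {r["natural_key"]: r for r in dimension if r["end_date"] == HIGH_DATE}
    next_sk = max((r["surrogate_key"] for r in dimension), default=0) + 1
    for rec in incoming:
        cur = current.get(rec["natural_key"])
        if cur is not None and cur["attributes"] == rec["attributes"]:
            continue  # unchanged: keep the current version as-is
        if cur is not None:
            cur["end_date"] = as_of  # close out the old version
        dimension.append({
            "surrogate_key": next_sk,
            "natural_key": rec["natural_key"],
            "attributes": rec["attributes"],
            "start_date": as_of,
            "end_date": HIGH_DATE,
        })
        next_sk += 1
    return dimension
```

In production this logic would typically be a T-SQL MERGE or a DataStage change-capture flow; the sketch only shows the versioning rule itself.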

Education

  • Bachelor's degree in Computer Science, Engineering, Information Systems, or equivalent practical experience.

What success looks like in this role

  • File feeds land and load consistently with clear reconciliation results.
  • Failures are diagnosable from logs and reject outputs without deep forensics.
  • Jobs meet runtime SLAs through solid DataStage design and SQL tuning.
  • Mappings and transformations are documented and traceable to requirements.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 91138303
  • Position Id: 8931247
  • Posted 3 hours ago
