Job Details
Responsibilities:
Design, develop, and maintain ETL workflows using IBM DataStage.
Integrate and process large datasets using SQL, Databricks (PySpark/Spark SQL), and Azure Data Lake environments.
Collaborate with data architects and analysts to translate business requirements into technical solutions.
Optimize ETL jobs for performance, scalability, and cost efficiency.
Proactively troubleshoot, debug, and resolve ETL and data pipeline issues.
Implement best practices in data governance, security, and compliance.
Support the migration or integration of on-premises data pipelines to Azure Databricks as required.
Requirements:
Mandatory: Strong hands-on experience in IBM DataStage ETL development.
Strong proficiency in SQL (complex queries, performance tuning).
Experience with Azure Databricks (PySpark/Spark SQL) for data transformations.
Knowledge of ETL workflows, data warehousing concepts, and data lake architectures.
Strong problem-solving and debugging skills.
Good communication and collaboration skills.