Oracle Data Warehouse Developer
Charlotte, NC only; 3 days a week onsite.
Position Summary:
We are seeking a Senior Oracle Data Warehouse Developer to design, build, and optimize enterprise-scale data integration and analytics solutions. This role requires deep Oracle database expertise (SQL/PL/SQL, performance tuning, partitioning, query optimization) and hands-on experience with Apache NiFi, Python, and Informatica.
Key Responsibilities:
Must have Python experience.
Data Engineering & ETL/ELT
Design, develop, and maintain robust ETL/ELT pipelines across Apache NiFi, Informatica PowerCenter/Cloud, and Python to ingest, transform, and load data from varied sources (RDBMS, files, APIs, streaming).
Implement orchestration, scheduling, parameterization, and dependency management; automate error handling, retries, and observability (alerts/metrics).
Build reusable pipeline components and performance-optimized SQL/PL/SQL procedures, packages, and functions.
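The responsibilities above call for reusable pipeline components with automated error handling and retries. A minimal stdlib-only sketch of one such component (the `load_batch` step and its arguments are illustrative, not part of any specific pipeline here):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(step, *args, retries=3, backoff_seconds=2, **kwargs):
    """Run one pipeline step, retrying on failure with exponential backoff.

    Emits a warning log per failed attempt (observability hook) and
    re-raises the last exception once retries are exhausted.
    """
    for attempt in range(1, retries + 1):
        try:
            return step(*args, **kwargs)
        except Exception as exc:
            log.warning("step %s failed (attempt %d/%d): %s",
                        step.__name__, attempt, retries, exc)
            if attempt == retries:
                raise
            time.sleep(backoff_seconds * 2 ** (attempt - 1))

# Hypothetical step: load a batch of rows and return the count loaded.
def load_batch(rows):
    if not rows:
        raise ValueError("empty batch")
    return len(rows)

loaded = run_with_retries(load_batch, [{"id": 1}, {"id": 2}])
```

The same wrapper can front any ingest, transform, or load callable, which is what makes it a reusable component rather than per-pipeline boilerplate.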
Performance, Reliability & Quality
Conduct query tuning (execution plans, hints, statistics management), table/index design, and workload optimization on large-scale Oracle environments (Exadata experience preferred).
Implement robust data validation frameworks (row-count checks, referential integrity, reconciliation, anomaly detection).
Ensure environments and pipelines meet SLAs, scalability, and cost-efficiency targets.
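Two of the validation checks named above, row-count reconciliation and referential integrity, can be sketched in a few lines of Python. The function names and tolerances are illustrative assumptions, not an existing framework:

```python
def reconcile_counts(source_count, target_count, tolerance=0):
    """Row-count reconciliation: compare source vs. target counts.

    Returns (ok, delta); a load passes if the absolute difference
    is within the allowed tolerance.
    """
    delta = abs(source_count - target_count)
    return delta <= tolerance, delta

def find_orphans(child_keys, parent_keys):
    """Referential-integrity check: child keys with no matching parent."""
    return sorted(set(child_keys) - set(parent_keys))
```

In practice the counts and key sets would come from SQL against the source and target schemas; these helpers just centralize the pass/fail logic so every pipeline applies it the same way.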
Collaboration & Delivery
Partner with analysts, data scientists, and product teams to translate requirements into scalable data solutions.
Contribute to CI/CD (Git, branching strategies, code review, automated testing, deployment).
Mentor junior developers; champion best practices, standards, and design patterns.
Required Qualifications:
8+ years of experience in data warehouse development with a strong Oracle database background.
Expert-level Informatica IDMC development (taskflows, mappings, etc.).
Expert-level SQL and PL/SQL; deep knowledge of partitioning, indexing strategies, optimizer behavior, statistics, execution plans, and parallel processing.
Hands-on experience building production pipelines with Apache NiFi (processors, flow files, back pressure, provenance, parameter contexts).
Solid Python for data transformations, file/API integrations, and utility automation (e.g., pandas, PySpark).
Proficiency with Linux/Unix, shell scripting, and job scheduling with AutoSys.
Strong understanding of enterprise SDLC practices, Git-based workflows, code reviews, and dev/test/prod release management.
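As a flavor of the "solid Python for data transformations and file integrations" expected above, here is a stdlib-only sketch (no pandas dependency; the column names and sample data are invented for illustration):

```python
import csv
import io
from collections import defaultdict

# Sample delimited extract, standing in for a file or API payload.
RAW = """region,amount
EAST,100
WEST,50
EAST,25
"""

def totals_by_region(text):
    """Parse delimited text and aggregate amount per region."""
    sums = defaultdict(int)
    for row in csv.DictReader(io.StringIO(text)):
        sums[row["region"]] += int(row["amount"])
    return dict(sums)

print(totals_by_region(RAW))
```

The same shape of transform scales up with pandas or PySpark when volumes require it; the parsing-then-aggregate structure stays the same.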