Overview
On Site
Depends on Experience
Contract - W2
Skills
Acceptance Testing
Apache Hadoop
Big Data
Apache Hive
Automated Testing
Data Profiling
Database
Docker
ELT
GitLab
Data Modeling
Collaboration
Apache Spark
ServiceNow
Unix
Testing
SAP BASIS
Scala
SAS
Quality Assurance
Performance Tuning
Data Migration
Cloudera Impala
BMC Remedy
Extract
Transform
Load
Kubernetes
Orchestration
Splunk
IT Management
Microservices
Data Quality
IT Architecture
Code Review
Job Details
Responsibilities:
- Lead the design, development, and testing of data ingestion pipelines, and perform end-to-end validation of the ETL process for datasets being ingested into the big data platform.
- Perform data migration and conversion validation activities across different applications and platforms.
- Provide technical leadership on data profiling, discovery, and analysis; assess the suitability and coverage of data; and identify the data types, formats, and data quality issues that exist within a given data source.
- Contribute to the development of transformation logic, interfaces, and reports as needed to meet project requirements.
- Participate in discussions on technical architecture, data modeling, and ETL standards, and collaborate with Product Managers, Architects, and Senior Developers to establish the physical application framework (e.g., libraries, modules, execution environments).
- Performance-tune long-running ETL/ELT jobs by creating partitions, enabling full loads, and applying other standard approaches (a brief Spark/Scala sketch follows this list).
- Perform quality assurance checks and post-load reconciliation, and communicate with the vendor to obtain corrected data.
- Participate in ETL/ELT code reviews and design reusable frameworks.
- Create Remedy/ServiceNow tickets to fix production issues, and create Support Requests to deploy database, Hadoop, Hive, Impala, UNIX, ETL/ELT, and SAS code to the UAT environment.
- Create Remedy/ServiceNow tickets and/or incidents to trigger Control-M jobs for FTP and ETL/ELT processing on an ad hoc, daily, weekly, monthly, and quarterly basis as needed.
- Experience with Apache Spark and Scala programming
- Experience developing, optimizing, and scaling Spark functions
- Experience with the Scala language and its frameworks
- Experience with functional programming
- Use Docker and Kubernetes (k8s) for containerization and orchestration of microservices and batch jobs.
- Set up CI/CD pipelines with GitLab for automated testing and deployment.
- Monitor and troubleshoot systems using Splunk and other observability tools.
- Ensure data quality, security, and governance across the platform.
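For illustration only, the following is a minimal sketch of the kind of Spark/Scala batch job referenced above: it reads a raw dataset, applies a basic data quality filter, and writes the output partitioned by a load date so downstream jobs can prune partitions. The paths, column names, and partition key are hypothetical placeholders, not details of this role's actual platform.

```scala
// Minimal sketch of a partitioned Spark ETL job in Scala.
// Paths, column names, and the "load_date" partition key are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object SampleIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sample-ingest-job")
      .getOrCreate()

    // Read a raw source dataset (hypothetical HDFS path and CSV layout).
    val raw: DataFrame = spark.read
      .option("header", "true")
      .csv("hdfs:///data/raw/claims/")

    // Basic data quality check: drop rows missing the key column.
    val cleaned = raw.filter(col("claim_id").isNotNull)

    // Stamp each row with its load date and write partitioned output,
    // one standard way to keep long-running ETL/ELT jobs tunable.
    cleaned
      .withColumn("load_date", current_date())
      .write
      .mode("overwrite")
      .partitionBy("load_date")
      .parquet("hdfs:///data/curated/claims/")

    spark.stop()
  }
}
```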