Overview
On Site
$60 - $65/hr
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 18 Month(s)
Able to Provide Sponsorship
Skills
Sqoop
DataFlow
DataProc
Cloud Pub/Sub
Cloud Composer
PySpark
Python
GCS
BigQuery
DAG
Job Details
Hi,
Role : Sr. Data Engineer
Duration : Long Term
Location : Wilmington (Delaware) or New Jersey (Day 1 Onsite)
Rate : $65/hr
Required Skills:
Sqoop, DataFlow, DataProc, Cloud Pub/Sub, Cloud Composer, PySpark, Python, GCS, BigQuery, DAG
Mandatory Certifications:
Google Cloud Platform Professional Data Engineer
Job Description:
- Source Data Analysis & Mapping: Conduct thorough analysis of source data systems, collaborate with stakeholders to define detailed source-to-target mappings, and translate business requirements into technical specifications for data ingestion and transformation
- Effort Estimation & Planning: Provide accurate effort and resource estimates for development tasks, supporting sprint planning and roadmap alignment within the agile framework
- Data Ingestion & Pipeline Development: Design, build, and maintain robust, scalable ETL/ELT pipelines that efficiently ingest data from the data lake into the BigQuery data warehouse and onward to business-specific data marts (a load-job sketch and an orchestration sketch follow this list)
- Transformation Implementation: Implement complex data transformation logic, ensuring data quality, accuracy, and timeliness to meet various analytical and operational business needs
- Unit Testing & Quality Assurance: Develop and execute comprehensive unit tests for all data pipelines and transformation logic to ensure functionality, accuracy, and performance prior to deployment (see the unit-test sketch after this list), and commit to delivering high-quality, reliable solutions that meet business requirements
- Collaboration & Communication: Work closely with the Data Architect to align on architectural guidelines, with the Scrum Master to support agile delivery processes, with the Project Manager to meet project timelines, and with the Business Analyst to ensure requirements are fully captured and addressed
- Best Practices & Cost Optimization: Follow industry best practices for BigQuery schema design, data partitioning, clustering, and query optimization (partitioning and clustering are illustrated in the first sketch below), while proactively managing cost control measures such as efficient resource usage and storage lifecycle management
- Reliability & Scalability: Ensure that all pipelines and data workflows are resilient, fault-tolerant, and scalable to support growing data volumes and evolving business demands
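For candidates gauging fit, a minimal sketch of the kind of load the ingestion duty describes, using the google-cloud-bigquery Python client; the project, bucket, table, and column names here are hypothetical placeholders, not details from this posting:

```python
from google.cloud import bigquery

# Hypothetical project, bucket, table, and column names for illustration only.
client = bigquery.Client(project="my-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    # Partition by date and cluster by a frequently filtered column to
    # prune scanned bytes and keep query costs under control.
    time_partitioning=bigquery.TimePartitioning(field="event_date"),
    clustering_fields=["customer_id"],
)

load_job = client.load_table_from_uri(
    "gs://my-data-lake/events/*.parquet",  # data lake source in GCS
    "my-project.analytics.events",         # warehouse destination table
    job_config=job_config,
)
load_job.result()  # block until the load job completes
```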
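And a sketch of how such loads and transformations might be orchestrated as a Cloud Composer (Airflow) DAG; the DAG id, bucket, tables, and the stored procedure are assumed placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

# Hypothetical DAG id, bucket, tables, and stored procedure for illustration.
with DAG(
    dag_id="lake_to_mart_daily",
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Ingest the day's partition from the GCS data lake into a staging table.
    ingest = GCSToBigQueryOperator(
        task_id="ingest_events",
        bucket="my-data-lake",
        source_objects=["events/{{ ds }}/*.parquet"],
        destination_project_dataset_table="my-project.staging.events",
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
    )

    # Apply transformation logic and publish to the business data mart.
    build_mart = BigQueryInsertJobOperator(
        task_id="build_mart",
        configuration={
            "query": {
                "query": "CALL analytics.refresh_events_mart()",
                "useLegacySql": False,
            }
        },
    )

    ingest >> build_mart
```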
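Finally, a sketch of the kind of unit test the role calls for, written with pytest against a hypothetical PySpark deduplication transform; the function and column names are illustrative, not part of this role's codebase:

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def dedupe_latest(df: DataFrame, key_col: str, ts_col: str) -> DataFrame:
    """Keep only the most recent record per key (hypothetical transform)."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
        .filter(F.col("_rn") == 1)
        .drop("_rn")
    )


def test_dedupe_latest_keeps_newest_row_per_key():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "ts", "val"],
    )
    result = {(r.id, r.val) for r in dedupe_latest(df, "id", "ts").collect()}
    assert result == {("a", "new"), ("b", "only")}
```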
Thanks and Regards,
Amarinder Singh