AWS Data Engineer $81.50/hr (LOCAL CANDIDATES ONLY)

  • Torrance, CA
  • Posted 50 days ago | Updated 5 hours ago

Overview

Hybrid
USD 81.50 per hour
Contract - W2

Skills

Management
Data Flow
Accessibility
Decision-making
Apache Spark
Performance Tuning
Scalability
IaaS
Data Processing
Specification Gathering
Business Intelligence
Analytics
Documentation
Data Integration
Workflow
Data Governance
Apache Iceberg
Database Design
Amazon Web Services
Amazon EMR
PostgreSQL
Amazon RDS
Amazon Redshift
Amazon Redshift Spectrum
Data Marts
Data Warehouse
Extract, Transform, Load (ETL)
System Integration
Continuous Integration
Continuous Delivery
Programming Languages
PySpark
Python
Database
Data Security
Privacy
Regulatory Compliance
Conflict Resolution
Problem Solving
Communication
Collaboration
Agile
Sprint
Mentorship
Attention To Detail
Data Quality
Data Engineering
Computer Science
Information Technology

Job Details

** The quickest way to be considered for this role is to CALL US directly! Click "Apply On Web" or "Apply Now" to access our Recruiter's contact details and give us a call today! **


** We will NOT accept 3rd Party (C2C) Contractors **
===

Position: AWS Data Engineer
JOB REF#: 43308 - JIH5JP00003536
Duration: 12+ Months (On-Going Contract)
Location: HYBRID - Torrance, CA 90501
Pay Rate: $81.50 per hour (W2 Only)

**YOU MUST BE LOCAL, we will NOT consider ANY NON-LOCAL Candidates**

Hybrid Schedule: MON-FRI; must be able to work ONSITE 4 days per week (mandatory).

The AWS Data Engineer will design, develop, and manage data integration processes to ensure seamless data flow and accessibility across the organization. This role is pivotal in integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes. The aim is to support the organization's data-driven decision-making by providing high-quality, consistent, and accessible data.

RESPONSIBILITIES INCLUDE:

Develop and Maintain Data Integration Solutions:
Design and implement data integration workflows using AWS Glue/EMR, Lambda, and Redshift.
Demonstrate proficiency in PySpark, Apache Spark, and Python for processing large datasets.
Ensure data is accurately and efficiently extracted, transformed, and loaded into target systems.

Ensure Data Quality and Integrity:
Validate and cleanse data to maintain high data quality.
Ensure data quality and integrity by implementing monitoring, validation, and error-handling mechanisms within data pipelines.

Optimize Data Integration Processes:
Enhance the performance, scalability, and cost-efficiency of data integration workflows on AWS cloud infrastructure to meet SLAs.
Identify and resolve performance bottlenecks, fine-tune queries, and optimize data processing to enhance Redshift performance.
Regularly review and refine integration processes to improve efficiency.

Support Business Intelligence and Analytics:
Translate business requirements into technical specifications and coded data pipelines.
Ensure timely availability of integrated data for business intelligence and analytics.
Collaborate with data analysts and business stakeholders to meet their data requirements.

Maintain Documentation and Compliance:
Document all data integration processes, workflows, and technical & system specifications.
Ensure compliance with data governance policies, industry standards, and regulatory requirements.

MANDATORY SKILLS REQUIRED:
5+ years of AWS Glue or AWS EMR experience in building pipelines.
4+ years of RedShift and RedShift Spectrum experience.
1+ years of Apache Iceberg experience.
1+ years of AWS ECS/ELB experience.

REQUIRED SKILLS/EXPERIENCE
5+ years of experience in data engineering, database design, and ETL processes
5+ years of experience with AWS tools/technologies in either EMR or Glue (nice-to-haves: Athena, RedShift, Postgres, RDS, Lambda, PySpark)
4+ years of RedShift and RedShift Spectrum experience
3+ years of experience working with databases, data marts, and data warehouses
Proven experience in ETL development, system integration, and CI/CD implementation.
5+ years of experience with programming languages such as PySpark and Python
Experience with complex database objects to move changed data across multiple environments
Solid understanding of data security, privacy, and compliance.
Excellent problem-solving and communication skills.
Good communication skills to effectively collaborate with cross-functional teams
Participate in agile development processes, including sprint planning, stand-ups, and retrospectives
Provide technical guidance and mentorship to junior developers
Attention to detail and a commitment to data quality.
Continuous learning mindset to keep up with evolving technologies and best practices in data engineering.
EDUCATION: Bachelor's degree in computer science, information technology, or a related field.

===

Calance Consultant Benefits Offerings:
- EPO/PPO Medical Plans
- HMO/PPO Dental programs
- Vision - VSP (Vision Service Plan)
- 401K Retirement vesting program (VOYA)
- Paid Bi-Weekly/Direct Deposit
- Flex Spending Plan
- Voluntary Life, AD&D, STD or LTD plans