Job Title: Big Data Engineer
Location: Phoenix, AZ
Duration: 12+ Months
Female candidates are preferred.
Job Description:
Design, develop, and optimize large-scale data processing pipelines using Apache Spark (Spark SQL, Spark Core, and DataFrames) for high-volume batch and real-time data workloads.
Write complex and high-performance SQL queries, stored procedures, and data transformations to support data ingestion, cleansing, aggregation, and reporting requirements.
Perform advanced performance tuning and troubleshooting of Spark jobs and SQL workloads, including partitioning, joins, caching, and query optimization for improved efficiency.
Build and maintain ETL/ELT workflows by integrating data from multiple structured and unstructured sources into enterprise data platforms and data lakes.
Work with BigQuery for data analysis, query optimization, and large-scale analytical reporting; knowledge of schema design and cost-efficient query execution is a plus.
Collaborate with cross-functional teams, including Data Architects, Analysts, and DevOps, to deliver scalable, secure, and reliable big data solutions aligned with business requirements.