Google Cloud Platform Data Engineer

Overview

Remote
$60 - $65 per hour
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 3 Year(s)
No Travel Required

Skills

Amazon Web Services
Apache Spark
Big Data
Google Cloud Storage (GCS)
Microsoft Azure
Python
SQL
Scala
Google Cloud Platform
Apache Hadoop
Apache Hive

Job Details

Role: Google Cloud Platform Data Engineer
Location: Remote
Description:
We are seeking a highly motivated and talented Data Engineer to join our dynamic team. In this role, you will play a critical part in designing, developing, and implementing data pipelines and data integration solutions using Spark, Scala, Python, and Google Cloud Platform (GCP). You will be responsible for building scalable and efficient data processing systems, optimizing data workflows, and ensuring data quality and integrity.
Responsibilities:
- Collaborate with cross-functional teams to understand data requirements and design data solutions that meet business needs
- Develop and maintain data pipelines and ETL processes using Spark and Scala/Python
- Design, build, and optimize data models and data architecture for efficient data processing and storage
- Implement data integration and data transformation workflows to ensure data quality and consistency
- Monitor and troubleshoot data pipelines to ensure data availability and reliability
- Conduct performance tuning and optimization of data processing systems for improved efficiency and scalability
- Work closely with data scientists and analysts to provide them with the necessary data sets and tools for analysis and reporting
- Stay up-to-date with the latest industry trends and technologies in data engineering and apply them to enhance the data infrastructure
Qualifications:
- Proven experience as a Data Engineer, with a minimum of 10 years in the field
- Ability to work directly with stakeholders to understand data requirements and translate them into pipeline development and data solution work
- Strong programming skills in Python and Scala, and experience with Spark for data processing and analytics
- Strong experience with Google Cloud Platform (GCP) services such as BigQuery, GCS, Dataproc, and Pub/Sub
- Expertise in big data technologies such as Hadoop, Apache Spark, and Apache Hive, or similar frameworks in the cloud (GCP preferred; AWS or Azure also considered), to build batch data pipelines with a strong focus on optimization, SLA adherence, and fault tolerance
- Experience with data modeling, data integration, and ETL processes
- Strong knowledge of SQL and database systems
- Understanding of data warehousing concepts and best practices
- Proficiency in working with large-scale data sets and distributed computing frameworks
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities