Overview
On Site
Depends on Experience
Contract - W2
Contract - 12 Month(s)
Skills
GCP data engineer
PySpark
Scala
Job Details
Hello,
Hope you are doing well!
Please review the requirement below and let me know as soon as possible if you are comfortable with the position, along with your updated Word resume.
Role: Sr. Google Cloud Platform Data Engineer
Location: Sunnyvale, CA - Onsite
Contract on W2
Must Have Skills:
Data: 10+ Years
Google Cloud Platform: 5+ Years
PySpark: 5+ Years
Scala: 5+ Years
Responsibilities:
Design and develop big data applications using the latest open source technologies.
Experience working in an offshore delivery model and managed-outcome engagements is desired.
Develop logical and physical data models for big data platforms.
Automate workflows using Apache Airflow.
Create data pipelines using Apache Hive, Apache Spark, Apache Kafka.
Provide ongoing maintenance and enhancements to existing systems and participate in rotational on-call support.
Learn our business domain and technology infrastructure quickly and share your knowledge freely and actively with others in the team.
Mentor junior engineers on the team
Lead daily standups and design reviews
Groom and prioritize backlog using JIRA
Act as the point of contact for your assigned business domain
Google Cloud Platform Experience:
4+ years of recent Google Cloud Platform experience
Experience building data pipelines in Google Cloud Platform
Experience with Google Cloud Platform Dataproc, GCS, and BigQuery
Skills Required
10+ years of hands-on experience with developing data warehouse solutions and data products.
6+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution.
5+ years of hands-on experience in modeling and designing schema for data lakes or for RDBMS platforms.
Experience with programming languages: Python, Java, Scala, etc.
Experience with scripting languages: Perl, Shell, etc.
Experience working with, processing, and managing large data sets (multi-TB/PB scale).
Exposure to test driven development and automated testing frameworks.
Background in Scrum/Agile development methodologies.
Capable of delivering on multiple competing priorities with little supervision.
Excellent verbal and written communication skills.
Bachelor's Degree in computer science or equivalent experience.
The most successful candidates will also have experience in the following:
Gitflow
Atlassian products: Bitbucket, JIRA, Confluence, etc.
Continuous Integration tools such as Bamboo, Jenkins, or TFS
Thanks & Regards
Peter Kane
Sr. IT Recruiter
Lorvenk Technologies | 5225 Hickory Park Dr, Glen Allen, VA 23059
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.