Overview
On Site
$140,000
Full Time
Job Details
Vaco is hiring a Google Cloud Platform Cloud Engineer for a direct-hire opportunity in the greater Cincinnati area.
This role requires candidates to be local to the Cincinnati area, as regular in-office meetings are required. Only local candidates will be considered for this role.
***NO C2C SOLICITATIONS***
Our client is leading a migration of data from SQL Server to Google Cloud Platform (GCP).
As their Google Cloud Platform Cloud Engineer, you will architect and build robust ingestion frameworks, establish scalable CI/CD and DataOps practices, and develop high-performance data pipelines using Python and Apache Spark.
Key Responsibilities
- Design, build, and optimize scalable data pipelines and ingestion frameworks on Google Cloud Platform (GCP).
- Migrate existing SQL Server-based data infrastructure to GCP services such as BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Composer, and Cloud Functions.
- Develop, deploy, and monitor robust CI/CD pipelines for data workflows using Terraform, Cloud Build, or similar tools.
- Champion best practices in DataOps to automate, monitor, and validate data pipelines for reliability and performance.
- Collaborate with product, analytics, and engineering teams to ensure availability, reliability, and accuracy of data used in critical customer-facing solutions and internal operations.
- Work with structured and unstructured data sources and apply data transformation techniques using Python, PySpark, and SQL.
- Support and improve existing data infrastructure, monitor data quality and integrity, and troubleshoot data-related issues.
- Document architecture, processes, and pipelines clearly and comprehensively.
Qualifications
- 5+ years of hands-on experience in data engineering, with a focus on cloud migration and modern cloud-native architecture.
- Deep expertise in Google Cloud Platform (GCP), particularly with:
- BigQuery
- Cloud Storage
- Dataflow / Apache Beam
- Pub/Sub
- Cloud Composer (Airflow)
- Cloud Functions
- Strong experience with Python and Apache Spark / PySpark for large-scale data processing.
- Proficiency in SQL, especially with SQL Server and BigQuery dialects.
- Demonstrated experience building CI/CD pipelines for data workflows using tools such as Cloud Build, GitHub Actions, Terraform, dbt, etc.
- Familiarity with DataOps practices and tools for orchestration, testing, and monitoring.
- Experience migrating and transforming legacy data sources (e.g., SQL Server) into cloud data warehouses.
- Strong understanding of data modeling, data quality, governance, and security best practices in the cloud.
- Experience with real-time streaming data using Kafka or Pub/Sub.
- Familiarity with dbt (data build tool) and Looker or other BI tools.
- Experience working in a SaaS or product-based company with customer-facing analytics or features.
- Knowledge of infrastructure-as-code and containerization using Docker/Kubernetes.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.