Location: Columbus, OH
Salary: $69.00 USD Hourly - $74.00 USD Hourly
Description: Senior Data Engineer (Contingent Resource)
Levels: P4 (Senior), P2 (Mid-Level)
About the Role
In this contingent role, you will serve as a senior technical contributor supporting large-scale data engineering initiatives. You will design, build, and optimize modern data lake and data processing architectures on Google Cloud Platform (GCP). You'll partner with cross-functional engineering teams to solve complex data challenges, advise on architectural decisions, and ensure solutions meet enterprise standards for scalability, reliability, and security.
This role is ideal for engineers with deep experience in cloud-native data platforms, large-scale distributed processing, and advanced analytics data models.
Responsibilities
Data Lake Architecture & Storage
- Design and implement scalable data lake architectures (e.g., Bronze/Silver/Gold layered models).
- Define Cloud Storage (GCS) architecture including bucket structures, naming standards, lifecycle policies, and IAM models.
- Apply best practices for Hadoop/HDFS-like storage, distributed file systems, and data locality.
- Work with columnar formats (Parquet, Avro, ORC) and compression for performance and cost optimization.
- Develop effective partitioning strategies, organization techniques, and backfill approaches (see the sketch after this list).
- Build curated and analytical data models optimized for BI and visualization tools.
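As a sketch of the layered storage and partitioning work above: the snippet below writes a Bronze-layer batch as Hive-style, date-partitioned Parquet with pyarrow. The paths, schema, and column names are hypothetical placeholders.

```python
# Minimal sketch: write a raw batch as date-partitioned Parquet (Bronze layer).
# Paths and schema are hypothetical; Parquet's default codec here is Snappy.
import pyarrow as pa
import pyarrow.parquet as pq

# Example batch of raw events.
table = pa.table({
    "event_id": ["a1", "a2", "a3"],
    "event_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
    "amount": [10.5, 22.0, 7.25],
})

# write_to_dataset lays files out as event_date=YYYY-MM-DD/ directories, a
# layout that BigQuery external tables, Spark, and Hive all read as partitions.
pq.write_to_dataset(
    table,
    root_path="bronze/events",  # e.g., gs://<bucket>/bronze/events on GCS
    partition_cols=["event_date"],
)
```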
Data Ingestion & Orchestration
- Build batch and streaming ingestion pipelines using GCP-native tools.
- Design event-driven architectures using Pub/Sub with well-defined schemas and versioning.
- Implement incremental ingestion, CDC patterns, idempotency, and deduplication.
- Develop workflows using Cloud Composer / Apache Airflow (see the sketch after this list).
- Create mechanisms for error handling, monitoring, replay, and historical backfills.
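A minimal Cloud Composer / Airflow sketch of the pattern above: one idempotent task per logical date, so retries, replays, and historical backfills each reload exactly one partition. The DAG id, paths, and table names are hypothetical.

```python
# Minimal sketch: an idempotent daily ingestion DAG (names are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_partition(ds: str, **_):
    # "ds" is Airflow's logical date; rerunning the task overwrites exactly
    # one partition, which keeps retries, replays, and backfills idempotent.
    print(f"Loading gs://raw-landing/events/dt={ds} into bronze.events")

with DAG(
    dag_id="daily_events_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ spelling; older versions use schedule_interval
    catchup=True,       # scheduler fills in missed historical dates (backfill)
) as dag:
    PythonOperator(task_id="ingest", python_callable=ingest_partition)
```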
Data Processing & Transformation
- Build scalable batch and streaming data pipelines using Dataflow (Apache Beam) and/or Spark on Dataproc; see the sketch after this list.
- Write optimized BigQuery SQL leveraging clustering, partitioning, and cost-efficient design.
- Utilize Hadoop ecosystem tools (Hive, Pig, Sqoop) where applicable.
- Write production-grade Python for data engineering with maintainable, testable code.
- Manage schema evolution with minimized downstream disruption.
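A minimal Apache Beam sketch of such a batch pipeline, runnable locally and on Dataflow with `--runner=DataflowRunner`; the input path, output path, and parsing logic are hypothetical.

```python
# Minimal sketch: a Beam batch pipeline from raw JSON to a deduplicated
# Silver output. Paths and fields are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def to_record(line: str):
    event = json.loads(line)
    # Tuples (not dicts) so Distinct can hash elements for deduplication.
    return (event["id"], float(event["amount"]))

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://raw-landing/events/*.json")
        | "Parse" >> beam.Map(to_record)
        | "Dedup" >> beam.Distinct()  # simple idempotency guard for replayed input
        | "Write" >> beam.io.WriteToText("gs://silver/events/part")
    )
```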
Analytics & Data Serving
- Optimize BigQuery datasets for performance, governance, and cost.
- Build semantic layers, governed metrics, and data serving patterns for BI consumption.
- Integrate datasets with BI tools using compliant access controls and dashboarding standards.
- Expose data through views, APIs, and curated analytics-ready datasets (see the sketch below).
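For instance, a curated serving view pins a governed metric definition in one place for BI tools. A minimal sketch with the google-cloud-bigquery client; the dataset, view, and column names are hypothetical.

```python
# Minimal sketch: publish a governed, analytics-ready view for BI consumption.
# Dataset, view, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE OR REPLACE VIEW serving.daily_revenue AS
SELECT DATE(event_ts) AS day, SUM(amount) AS revenue
FROM silver.events
GROUP BY day
"""
client.query(ddl).result()  # blocks until the DDL job completes
```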
Data Governance, Quality & Metadata
- Implement metadata management, cataloging, and ownership standards.
- Define lineage models to support auditing and troubleshooting.
- Build data quality frameworks (validation, freshness, SLAs, alerting); see the sketch after this list.
- Establish and enforce data contracts, schema policies, and data reliability standards.
- Work with audit logging and compliance readiness processes.
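A minimal sketch of the freshness side of such a framework: compare the newest event timestamp against an SLA and fail loudly when it is breached. The table name, SLA threshold, and failure behavior are hypothetical.

```python
# Minimal sketch: a freshness check with an SLA threshold (names hypothetical).
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

client = bigquery.Client()
row = next(iter(client.query(
    "SELECT MAX(event_ts) AS latest FROM silver.events"
).result()))

sla = timedelta(hours=2)
lag = datetime.now(timezone.utc) - row.latest  # assumes a non-empty table
if lag > sla:
    # In production this would page through the alerting stack instead.
    raise RuntimeError(f"silver.events is stale: {lag} behind (SLA {sla})")
```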
Cloud Platform Management
- Manage GCP environments including project setup, resource boundaries, billing, quotas, and cost optimization.
- Implement IAM best practices with least-privilege design and secure service account usage (see the sketch after this list).
- Configure secure networking including VPCs, private access, and service connectivity.
- Manage encryption strategies using KMS/CMEK and perform platform-level security audits.
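As a sketch of least-privilege IAM in practice: grant a BI service account read-only access to a single bucket rather than a project-wide role. The bucket, project, and service-account names are hypothetical.

```python
# Minimal sketch: least-privilege, read-only bucket access for one identity.
# Bucket, project, and service-account names are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("analytics-silver")

# Version 3 policies are required when using fine-grained/conditional bindings.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",  # read objects only; no write/admin
    "members": {"serviceAccount:bi-reader@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)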
DevOps, Platform & Reliability
- Build and maintain CI/CD pipelines for data platform and pipeline deployments.
- Manage secret storage with Google Cloud Secret Manager (see the sketch after this list).
- Build observability stacks including dashboards, SLOs, alerts, and runbooks.
- Support logging and monitoring for pipeline health and platform reliability.
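A minimal sketch of pulling a pipeline credential from Secret Manager at runtime, so secrets stay out of code and CI logs; the project and secret ids are hypothetical.

```python
# Minimal sketch: read a credential from Secret Manager (ids hypothetical).
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/warehouse-password/versions/latest"
response = client.access_secret_version(request={"name": name})

# Grant roles/secretmanager.secretAccessor to the runtime identity only.
password = response.payload.data.decode("UTF-8")
```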
Preferred Skills (Nice to Have)
Security, Privacy & Compliance
- Experience implementing fine-grained access controls for BigQuery and GCS.
- Experience with VPC Service Controls, perimeter security, and data exfiltration prevention.
- Understanding of PII protection, data masking, tokenization, and audit/compliance practices (see the sketch below).
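As a sketch of the tokenization idea: a keyed HMAC keeps tokens stable for joins while remaining infeasible to reverse without the key. The key handling and field names are hypothetical; a production system would typically lean on Cloud KMS or Cloud DLP rather than an in-process key.

```python
# Minimal sketch: deterministic PII tokenization with a keyed HMAC.
# The key source and field names are hypothetical; never hard-code keys.
import hashlib
import hmac

KEY = b"load-this-from-secret-manager"

def tokenize(value: str) -> str:
    # Deterministic, so the same email always maps to the same token
    # (joins still work), but reversing it requires the key.
    return hmac.new(KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "amount": 42.0}
record["email"] = tokenize(record["email"])
```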
Required Qualifications
- 5+ years of software engineering or data engineering experience, or equivalent experience demonstrated through training, education, consulting, or military experience.
Contact: This job and many more are available through The Judge Group. Please apply with us today!