Role Summary

We are seeking a highly skilled Data Engineer with strong hands-on experience in BigQuery and Apache Iceberg to design, build, and optimize scalable data platforms. The ideal candidate will have deep expertise in modern data lakehouse architectures, distributed data processing, and cloud-native data pipelines.

Key Responsibilities

- Design, develop, and maintain scalable data pipelines using BigQuery and Iceberg-based data lake architectures
- Implement and manage Apache Iceberg tables for large-scale, high-performance analytics workloads
- Build efficient data ingestion, transformation, and storage solutions supporting batch and streaming use cases
- Optimize query performance, partitioning strategies, and storage formats for cost and performance efficiency
- Collaborate with data scientists, analysts, and platform teams to deliver high-quality datasets
- Ensure data quality, governance, and reliability across pipelines and platforms
- Contribute to architecture decisions for lakehouse and modern data platform design

Required Skills & Qualifications

- 6+ years of experience in Data Engineering / Data Platform development
- Strong hands-on expertise in:
  - BigQuery (data modeling, performance tuning, cost optimization)
  - Apache Iceberg (table design, partitioning, schema evolution, time travel)
- Experience with SQL and distributed data processing frameworks
- Strong understanding of data lakehouse architectures
- Experience working with large-scale datasets and cloud-native data platforms
- Proficiency in Python / Java / Scala

Nice to Have

- Experience with:
  - Dataflow (Apache Beam) for batch/stream processing
  - Pub/Sub for real-time data ingestion
- Familiarity with the Google Cloud Platform ecosystem (Cloud Storage, Composer, IAM)
- Exposure to CI/CD, DataOps, and orchestration tools
- Understanding of data governance and lineage frameworks