Job Title: Sr. AWS/Snowflake Engineer
Location: Richardson, Texas
Type: Contract
Compensation: $90.32 per hour
Contractor Work Model: Onsite
Hours: 40 hours/week
Overview
This Sr. AWS/Snowflake Data Engineer role will design, build, and support scalable cloud data solutions with a focus on AWS-native services, Apache Iceberg lakehouse patterns (S3), and Snowflake for analytics and governed data consumption. You’ll develop end-to-end ETL/ELT pipelines, manage Iceberg table design/maintenance, create secure data interfaces/APIs, and optimize performance and cost while ensuring strong data quality and observability. The position partners closely with analysts, data scientists, and engineering teams to deliver reliable, reusable datasets and data products.
Responsibilities
- Design and develop data architecture (AWS Lakehouse + Snowflake): Create scalable, reliable, and efficient data lakehouse solutions on AWS using Amazon S3 and Apache Iceberg, and design curated/consumption architectures in Snowflake to enable performant analytics and governed data sharing.
- Build and maintain data pipelines (native AWS tooling): Design, construct, and automate ETL/ELT processes to ingest data from diverse sources into AWS, leveraging native services such as AWS Glue, Lambda, Step Functions, EventBridge, and orchestration patterns as appropriate.
- Develop and manage Iceberg tables: Build and manage Apache Iceberg datasets, including table design, schema evolution, partition strategies, and compaction/maintenance patterns to support ACID-like behavior and scalable analytics.
- Snowflake engineering: Design and implement Snowflake objects and pipelines to support analytics and data products (schemas, tables, views), and contribute to patterns for secure and governed consumption.
- Create and manage data APIs / interfaces: Design, develop, and maintain secure and scalable RESTful (and other) APIs to facilitate data access for internal teams and applications, typically leveraging AWS services (e.g., API Gateway, Lambda, IAM).
- Optimize performance and cost: Implement partitioning strategies, data layout optimization, and tuning techniques across Iceberg and Snowflake; monitor workloads and continuously improve efficiency and runtime performance.
- Ensure data quality and integrity: Implement data validation, reconciliation, and error-handling processes; build observability into pipelines so issues are detected early and addressed quickly.
- Collaborate with stakeholders: Work closely with analysts, data scientists, software engineers, and business teams to understand data needs and deliver effective, reusable solutions.
- Provide technical support: Offer troubleshooting and technical expertise for data-related issues across pipelines, datasets, and endpoints.
- Maintain documentation: Create and maintain technical documentation for data workflows, pipelines, dataset definitions, and API specifications.
Requirements
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: Proven experience in data engineering with significant hands-on experience building data solutions on AWS.
- Technical Skills:
- Programming: Proficiency in Python, Java, or Scala.
- SQL: Strong SQL skills for querying, transformations, and data modeling / database design (including Snowflake SQL).
- AWS Data Engineering Services: Practical experience with AWS services such as S3, Glue, Lambda, API Gateway, and IAM (and related native services used to build secure, automated pipelines).
- Big Data: Experience with Apache Spark and Hadoop ecosystems.
- API Development: Experience creating and deploying RESTful APIs, including best practices for performance and security.
- ETL/Workflow / Orchestration: Experience with workflow orchestration tools (e.g., Airflow or AWS-native orchestration patterns).
- DevOps / IaC: Familiarity with DevOps practices, CI/CD pipelines, and infrastructure as code (Terraform).
- Required / Strongly Preferred Experience (Iceberg + Snowflake focus):
- Apache Iceberg: Hands-on experience building and managing Apache Iceberg tables (schema evolution, partitioning strategies, table maintenance).
- Snowflake: Experience implementing and operating data solutions in Snowflake, including data modeling for analytics and patterns for secure consumption.
- Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of an agile team.
- Certifications (Preferred):
- AWS data-related certifications and other relevant cloud/data engineering certifications.
System One, and its subsidiaries including Joulé and Mountain Ltd., are leaders in delivering outsourced services and workforce solutions across North America. We help clients get work done more efficiently and economically, without compromising quality. System One not only serves as a valued partner for our clients, but we offer eligible employees health and welfare benefits coverage options including medical, dental, vision, spending accounts, life insurance, voluntary plans, as well as participation in a 401(k) plan.
System One is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, age, national origin, disability, family care or medical leave status, genetic information, veteran status, marital status, or any other characteristic protected by applicable federal, state, or local law.
Ref: #208-Rowland Tulsa