Sr. Staff Engineer, Data Architecture – AWS (Data Pipelines, Snowflake, EMR, Databricks, DW Design, HDFS, Python, Spark, SQL, Power BI, Tableau, Hadoop) - Remote

Depends on Experience

Full Time

  • No Travel Required


Data Architecture, AWS, Data Pipelines, Snowflake, EMR, Databricks, ETL, AWS Step Function, Airflow, Normalization

Job Description

POSITION: Sr. Staff Engineer, Data Architecture – AWS (Data Pipelines, Snowflake, EMR, Databricks, DW Design, HDFS, Python, Spark, SQL, Power BI, Tableau, Hadoop) - Remote
LOCATION: 100% Remote
DURATION: Full Time Position
SALARY: Excellent Compensation with Benefits + 401K
SKILLS: Data Architecture, AWS, Data Pipelines, Snowflake, EMR, Databricks, ETL, AWS Step Function, Airflow, Normalization, De-normalization, Data Warehouse Design, Distributed File Systems, HDFS, ADLS, S3, Python, Spark, SQL, Synapse, MS SQL Server, Orchestration Tools, DBT, Azure, Cosmos, ADLS Gen 2, Git, Power BI, Tableau, ML, Notebooks, Hadoop 2.0, Impala, Hive


For one of our highly prestigious global clients, we have an immediate need for a Software Engineer and Data Architect with experience in at least some of the following skills:

  • Software Engineering
  • Data Architecture
  • AWS
  • Data Pipelines
  • Snowflake
  • EMR
  • Databricks
  • AWS Step Function
  • Airflow
  • Normalization
  • De-normalization
  • ETL
  • Data Warehouse Design
  • Distributed File Systems
  • HDFS
  • ADLS
  • S3
  • Python
  • Spark
  • SQL
  • Synapse
  • MS SQL Server
  • Orchestration Tools
  • DBT
  • Azure
  • Cosmos
  • ADLS Gen 2
  • Git
  • Power BI
  • Tableau
  • ML
  • Notebooks
  • Hadoop 2.0
  • Impala
  • Hive



Looking for a hands-on Sr. Staff Engineer, Data Architecture, with a focus on Data Engineering.

  • This position requires extensive hands-on data system design and coding experience, developing modern data pipelines (AWS Step Functions, Prefect, Airflow, Luigi, Python, Spark, SQL) and associated code in cloud/on-prem Linux/Windows environments.
  • This is a highly collaborative position that will partner with and advise multiple teams, providing guidance throughout the creation and consumption of our data pipelines.


  • Design and implement conceptual, logical, and physical data workflows that support business needs on Cloud-based systems.
  • Propose architecture that enables integration of disparate enterprise data
  • Build and maintain efficient data pipeline architecture and ingress/egress data pipelines for applications, data layers, and the Data Lake; define cost-effective, efficient data movement strategies across a hybrid cloud
  • Lead multi-functional design sessions with subject matter experts to understand and detail data requirements and use cases
  • Develop and document data movement standards and best practices, and promote them across the department
  • Drive long-term data architecture roadmaps in alignment with corporate strategic objectives
  • Conduct code and design reviews to ensure data related standards and best practices are met
  • Proactively educate others on modern data engineering concepts and design
  • Mentor junior members of the team


  • Candidate MUST have experience owning large, complex system architectures, and hands-on experience crafting and implementing data pipelines across large-scale systems.
  • Experience implementing data pipelines with AWS is a must.
  • Production delivery experience in cloud-based PaaS big data technologies (Snowflake, EMR, Databricks)
  • Experienced in multiple cloud PaaS persistence technologies, with in-depth knowledge of cloud-based ETL offerings and orchestration technologies (AWS Step Function, Airflow)
  • Experienced in stream-based and batch processing using modern technologies
  • Database design skills including normalization/de-normalization and data warehouse design
  • Strong analytical, debugging, and troubleshooting skills
  • Understanding of Distributed File Systems (HDFS, ADLS, S3)
  • Knowledge and understanding of relevant legal and regulatory requirements, such as SOX, PCI, HIPAA, Data Protection
  • Experience transitioning from On-premises big data installations to cloud is a plus
  • Strong Programming Experience - Python / Spark / SQL.
  • Collaborative and informative mentality is a must!


  • AWS
  • Spark / Python / SQL
  • Snowflake / Databricks / Synapse / MS SQL Server
  • ETL / Orchestration Tools (DBT etc.)
  • Azure / Cosmos / ADLS Gen 2
  • Git
  • Power BI/Tableau
  • ML / Notebooks
  • Hadoop 2.0 / Impala / Hive


  • Bachelor's or Master's degree in Computer Science, Information Systems, or an engineering field, or relevant experience.
  • 10+ years of related experience in developing data solutions and data movement.