Sr. Data Engineer

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - W2
No Travel Required

Skills

Amazon EC2
Amazon Web Services
Apache Airflow
Big Data
PySpark
SQL
Extract, Transform, Load (ETL)
Snowflake Schema

Job Details

Job Title: Sr. Data Engineer

Location: Denver, CO

Duration: Long-term

Main Skill:

10+ years of experience in the software development industry.

We need data engineering experience: building ETLs using Spark and SQL; building real-time and batch pipelines using Kafka/Firehose; building pipelines with Databricks/Snowflake; and ingesting multiple data formats such as JSON, Parquet, and Delta.
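
To illustrate the kind of batch ETL work described above, here is a minimal PySpark sketch that ingests two of the formats named (JSON and Parquet), applies a SQL transform, and writes curated output. All bucket paths, table names, and columns are hypothetical placeholders, not details taken from this posting:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

    # Ingest: read raw JSON events and a Parquet dimension table.
    # Paths and schemas are illustrative only.
    orders = spark.read.json("s3://example-bucket/raw/orders/")
    customers = spark.read.parquet("s3://example-bucket/dims/customers/")

    orders.createOrReplaceTempView("orders")
    customers.createOrReplaceTempView("customers")

    # Transform: enrich raw orders with customer attributes via Spark SQL.
    enriched = spark.sql("""
        SELECT o.order_id, o.amount, c.customer_id, c.region
        FROM orders o
        JOIN customers c ON o.customer_id = c.customer_id
        WHERE o.amount > 0
    """)

    # Load: write the curated output back out as Parquet.
    enriched.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")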

Job Description:

About You

  • You have a BS or MS in Computer Science or a related field
  • You work well in a collaborative, team-based environment
  • You are an experienced engineer with 3+ years of experience
  • You have a passion for big data structures
  • You possess strong organizational and analytical skills for working with structured and unstructured data
  • You have experience implementing and maintaining high performance / high availability data structures
  • You are most comfortable operating within cloud-based ecosystems
  • You enjoy leading projects and mentoring other team members

Specific Skills:

  • 10+ years of experience in the software development industry
  • Experience or knowledge of relational SQL and NoSQL databases
  • High proficiency in Python, PySpark, SQL, and/or Scala
  • Experience in designing and implementing ETL processes
  • Experience in managing data pipelines for analytics and operational use
  • Strong understanding of in-memory processing and data formats (Avro, Parquet, JSON, etc.)
  • Experience or knowledge of AWS cloud services: EC2, MSK, S3, RDS, SNS, SQS
  • Experience or knowledge of stream-processing systems, e.g., Storm, Spark Structured Streaming, Kafka consumers
  • Experience or knowledge of data pipeline and workflow management tools, e.g., Apache Airflow, AWS Data Pipeline (see the sketch after this list)
  • Experience or knowledge of big data tools, e.g., Hadoop, Spark, Kafka
  • Experience or knowledge of software engineering tools/practices, e.g., GitHub, VS Code, CI/CD
  • Experience or knowledge in data observability and monitoring
  • Hands-on experience in designing and maintaining data schema lifecycles
  • Bonus: experience with tools like Databricks, Snowflake, and ThoughtSpot
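
For context on the workflow-management bullet above, here is a minimal Apache Airflow DAG that chains extract, transform, and load steps. The DAG id, task names, and commands are hypothetical placeholders; in a real pipeline each step would invoke an actual job (e.g., a spark-submit) rather than echo:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A skeletal daily ETL schedule, illustrative only.
    with DAG(
        dag_id="daily_orders_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo extract")
        transform = BashOperator(task_id="transform", bash_command="echo transform")
        load = BashOperator(task_id="load", bash_command="echo load")

        # Enforce the standard ETL ordering.
        extract >> transform >> load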