Job Title: Sr. Data Engineer
Location: Denver, CO
Duration: Long-Term
Objective: We need data engineering experience: building ETL pipelines using Spark and SQL; building real-time and batch pipelines using Kafka/Firehose; building pipelines with Databricks/Snowflake; and ingesting multiple data formats such as JSON, Parquet, and Delta.
Skills Needed:
Strong organizational and analytical skills for working with structured and unstructured data
Experience implementing and maintaining high-performance, high-availability data structures
Experience with relational SQL and NoSQL databases
High proficiency in Python, Spark, SQL, and/or Scala
Experience designing and implementing ETL processes
Experience managing data pipelines for analytics and operational use
Strong understanding of in-memory processing and data formats (Avro, Parquet, JSON, etc.)
Experience with AWS cloud services: EC2, MSK, S3, RDS, SNS, SQS
Experience with stream-processing systems, e.g., Storm, Spark Structured Streaming, Kafka consumers
Experience with data pipeline and workflow management tools, e.g., Apache Airflow, AWS Data Pipeline
Experience with big data tools, e.g., Hadoop, Spark, Kafka
Experience with software engineering tools and practices, e.g., GitHub, VS Code, CI/CD
Experience with data observability and monitoring
Hands-on experience designing and maintaining data schema life cycles
Nice to have: experience with tools such as Databricks, Snowflake, and ThoughtSpot