5+ years of relevant work experience in the Data Engineering field
3+ years of experience working with Big Data processing frameworks (Hadoop, Spark, Hive, Flink, Airflow, etc.)
2+ years of strong experience with relational SQL and at least one programming language such as Python, Scala, or Java
Experience working in an AWS environment, primarily with EMR, S3, Kinesis, Redshift, Athena, etc.
Experience building scalable, real-time, and high-performance cloud data lake solutions
Experience with source control tools such as GitHub and related CI/CD processes
Experience working with Big Data streaming services such as Kinesis and Kafka
Experience working with NoSQL data stores such as HBase and DynamoDB
Experience with data warehouses/RDBMS such as Snowflake and Teradata