This is a hybrid position in Chicago, IL (preferred), or Phoenix, AZ with remote flexibility.
What You’ll Do:
· Write code – lots of it. We use Python, Java, Spark, and SQL. We welcome programmers of all backgrounds as long as you are keen to work with data and deliver good-quality code!
· Develop a deep understanding of our vast cloud data sources and know precisely how, when, and which data to use to solve business problems.
· Design, architect, implement and support critical streams and datasets that support our business, clients, and data scientists.
· Own end-to-end data solution design and implementation, including data modeling, pipelines, transformation, and visualization. You will have the opportunity to work with innovative technologies.
· Implement product features in collaboration with Product Owners, Data Operations teams, and other data engineers.
· Apply best practices such as infrastructure as code, automated testing, and code reviews.
Who You Are:
· We look for intelligent people with strong general programming skills because we believe that outstanding developers can learn new technologies quickly.
· Bachelor’s or Master’s degree in computer science, mathematics, statistics, economics, or another quantitative field.
· 5+ years of experience as a Data Engineer or in a similar role.
· Demonstrable experience with a programming or scripting language (e.g., Python, Java, Scala, Ruby).
· Experience with big data technologies (Hadoop, Spark, Hive, Presto, etc.).
· Experience with cloud technologies such as EMR, Lambda, and EC2, and with building data pipelines.
· Experience with Agile, DevOps, and CI/CD practices in cloud-based environments.
· Exposure to at least one dashboarding tool such as Tableau, Power BI, or Sisense.
· Experience with Snowflake is a huge plus.