We are currently seeking an experienced Big Data Engineer for a remote, contract/freelance opening with a Fortune 100 company located in the United States. Interested candidates should have 4 years of hands-on experience as a Data Engineer in a Big Data environment (Spark, Hive, HDFS, Sqoop).
Responsibilities of the Big Data Engineer
- Develop fast data infrastructure leveraging data streaming, batch processing, and machine learning to personalize experiences for our customers.
- Lead projects and deliver elegant, scalable solutions.
- Work and collaborate with a nimble, autonomous, cross-functional team of makers, breakers, doers, and disruptors who love to solve real problems and meet real customer needs.
Requirements of the Big Data Engineer
- Bachelor's degree in engineering, information technology, or computer science
- 4 years of hands-on experience as a Data Engineer in a Big Data environment (Spark, Hive, HDFS, Sqoop)
- Strong SQL knowledge and data analysis skills for data anomaly detection and data quality assurance
- Programming experience in Scala, Python, shell scripting and automation
- Hadoop Certification or Spark Certification
- Experience with modern workflow/orchestration tools (e.g., Apache Airflow, Oozie, Azkaban)
- Experience working with PostgreSQL, Teradata, Vertica, and/or other DBMS platforms
- Experience with BI tools such as Tableau or Qlik to create visualizations and dashboards for various data quality metrics
Compensation for the Big Data Engineer
The starting pay rate for this position ranges from $85.00 to $125.00 per hour, depending on experience.