Location: Northwest of downtown Detroit, MI.
Duration: 3+ years
The AI Engineer will be responsible for designing and building world-class, high-volume, real-time data processing, machine learning, and AI on big data platforms. They will research, develop, optimize, and innovate frameworks and patterns for enterprise-scale data analysis and computation as part of our Big Data Team.
Design and build world-class, high-volume, real-time data processing frameworks and advanced analytics on big data platforms.
Create hypotheses, design experiments, and test the feasibility of proposed business objectives
Experiment with new algorithms and analytical approaches and report on the impact to model performance
Develop quantitative models for forecasting, prediction, clustering, and dimensionality reduction.
Obtain and manage a sufficient set of usable data from multiple sources, including manipulating noisy and irregular data to produce clean datasets.
Influence development teams to implement on a flexible analytical platform
Passionate about asking questions and using data to supply answers.
Research, develop, optimize, and innovate frameworks, tools, and patterns for enterprise-scale data analysis and computation as part of our Big Data Car initiatives.
3+ years of hands-on implementation experience with a combination of the following technologies: Storm, Spark Streaming, Kafka, and Spark advanced analytics on the Cloudera platform. Advanced programming skills in Python.
Research the tool market, follow new research streams and build partnerships with start-ups, universities and local communities
Education Requirements: A Bachelor's degree in Computer Science, Engineering, or a related field
Several years of experience preferred building predictive, optimization, and simulation models using structured and unstructured data
Strong analytical and quantitative problem-solving ability
Solid understanding of statistical modeling, predictive analysis, machine learning, and data mining
Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner
3+ years background in statistical, mathematical, and operations research modeling
4+ years of experience with programming/scripting languages such as Python
2+ years of experience developing solutions in cloud environments (Azure, AWS) with a focus on the analytics stack
2+ years of experience with big data streaming frameworks, data processing, and real-time ingestion patterns
Experience working with and evaluating open source technologies and demonstrated ability to make objective choices
Experience with Visualization Tools such as Tableau or QlikView
Experience with more than one data streaming technology
Understanding of machine learning tools (e.g., Apache Mahout, Spark MLlib)
Ability to work in an agile, multi-skilled team of top talent