Job Details
Position: Data Architecture (W2 Position) - 63/322562
Location: Dearborn, MI (Hybrid)
Duration: 12+ months
MOI (Mode of Interview): MS Teams or WebEx
Direct Client: Ford Motors
"Position Description:
Design data solutions in the cloud or on premises, using the latest data services, products, technology, and industry best practices
Experience migrating legacy data environments with a focus on performance and reliability
Data Architecture contributions include assessing and understanding data sources, data models and schemas, and data workflows
Ability to assess, understand, and design ETL jobs, data pipelines, and workflows
BI and Data Visualization work includes assessing, understanding, and designing reports, creating dynamic dashboards, and setting up data pipelines in support of dashboards and reports
Data Science work focuses on designing machine learning and AI applications and MLOps pipelines
Addressing technical inquiries concerning customization, integration, enterprise architecture, and general features/functionality of data products
Experience in crafting data lakehouse solutions in Google Cloud Platform, including relational and vector databases, data warehouses, data lakes, and distributed data systems
Must have PySpark API processing knowledge utilizing resilient distributed datasets (RDDs) and DataFrames
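For illustration only, a minimal PySpark sketch of the kind of RDD and DataFrame processing this requirement refers to (the sample data, key names, and column names are hypothetical, not drawn from any client system):

# Minimal PySpark sketch: the same aggregation via the RDD and DataFrame APIs.
# Sample data and column names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-dataframe-sketch").getOrCreate()

# RDD API: low-level transformations on a resilient distributed dataset.
rdd = spark.sparkContext.parallelize([("plant_a", 3), ("plant_b", 7), ("plant_a", 5)])
totals = rdd.reduceByKey(lambda a, b: a + b)  # sum values per key
print(totals.collect())

# DataFrame API: the same aggregation with a named schema and optimized execution.
df = spark.createDataFrame(rdd, ["site", "events"])
df.groupBy("site").sum("events").show()

spark.stop()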
Skills Required:
10+ years of overall experience
Skills Preferred:
Ability to write Bash, Python, and Groovy scripts to help configure and administer tools
Experience installing applications on VMs, monitoring performance, and tailing logs on Unix
PostgreSQL database administration skills are preferred
Python experience, including experience developing REST APIs
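As a sketch of the preferred Python REST API experience, a minimal example using Flask (Flask is assumed here as one common framework choice; the routes and payload shape are hypothetical):

# Minimal Flask REST API sketch; routes and payload are hypothetical examples.
from flask import Flask, jsonify, request

app = Flask(__name__)
DATASETS = {}  # in-memory store, for illustration only

@app.route("/health", methods=["GET"])
def health():
    return jsonify(status="ok")

@app.route("/datasets", methods=["POST"])
def create_dataset():
    body = request.get_json(force=True)
    DATASETS[body["name"]] = body  # register the dataset by name
    return jsonify(body), 201

if __name__ == "__main__":
    app.run(port=8080)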
Experience Required:
10+ years
Education Required:
Bachelor's degree in Computer Science, Computer Information Systems, or equivalent experience
Education Preferred:
Master's degree in Data Science
Additional Safety Training/Licensing/Personal Protection Requirements:
Additional Information:
****HYBRID****