Data Pipeline Architect & Builder

  • Dearborn, MI
  • Posted 4 days ago

Overview

On Site
$72 - $77 /hr
Contract - Independent
Contract - Long Term

Skills

NoSQL
PostgreSQL
MySQL
SQL
Kafka
GCP
BigQuery
Cloud Run
Dataflow
Dataproc
Java
Python
SOA
Microservices
CI/CD
Terraform
Tekton

Job Details

Stefanini Group is hiring!

Stefanini is looking for a Data Pipeline Architect & Builder in Dearborn, MI (onsite).

For quick apply, please reach out to Vasudha Lakshmi at /

We are looking for a Data Pipeline Architect & Builder who will spearhead the design, development, and maintenance of scalable data ingestion and curation pipelines from diverse sources; ensure data is standardized, high-quality, and optimized for analytical use; and leverage cutting-edge tools and technologies, including Python, SQL, and dbt/Dataform, to build robust and efficient data pipelines.
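To make the ingest-and-curate loop concrete, here is a minimal sketch using the google-cloud-bigquery Python client; the bucket, project, dataset, table, and column names are hypothetical examples, not details from this posting:

    # Minimal sketch: land a raw CSV in a BigQuery staging table, then
    # standardize it into a curated, analytics-ready table.
    # All resource names are hypothetical examples.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses Application Default Credentials

    # 1) Ingest: load a raw file from Cloud Storage into a staging table.
    load_job = client.load_table_from_uri(
        "gs://example-bucket/raw/orders.csv",
        "example_project.staging.orders_raw",
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,
        ),
    )
    load_job.result()  # block until the load completes

    # 2) Curate: standardize types and values for analytical use.
    client.query(
        """
        CREATE OR REPLACE TABLE `example_project.curated.orders` AS
        SELECT
          CAST(order_id AS INT64)          AS order_id,
          LOWER(TRIM(customer_email))      AS customer_email,
          SAFE_CAST(order_ts AS TIMESTAMP) AS order_ts,
          ROUND(amount_usd, 2)             AS amount_usd
        FROM `example_project.staging.orders_raw`
        WHERE order_id IS NOT NULL
        """
    ).result()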

Responsibilities

  • Utilize your full-stack (end-to-end) integration skills to contribute to seamless end-to-end development, ensuring smooth and reliable data flow from source to insight.
  • Serve as a Google Cloud Platform data solutions leader, leveraging deep expertise in Google Cloud Platform services (BigQuery, Dataflow, Pub/Sub, Cloud Functions, etc.) to build and manage data platforms that meet and exceed business needs and expectations.
  • Champion data governance and security by implementing and managing robust data governance policies, access controls, and security best practices, making full use of Google Cloud Platform's native security features to protect sensitive data.
  • Orchestrate data workflows with Astronomer and Terraform for efficient workflow management and cloud infrastructure provisioning, championing best practices in Infrastructure as Code (IaC); see the sketch after this list.
  • Continuously monitor and improve the performance, scalability, and efficiency of data pipelines and storage solutions, ensuring optimal resource utilization and cost-effectiveness.
  • Collaborate effectively with data architects, application architects, service owners, and cross-functional teams to define and promote best practices, design patterns, and frameworks for cloud data engineering.
  • Proactively automate data platform processes to enhance reliability, improve data quality, minimize manual intervention, and drive operational efficiency.
  • Clearly and transparently communicate complex technical decisions to both technical and non-technical stakeholders, fostering understanding and alignment.
  • Stay ahead of the curve by continuously learning about industry trends and emerging technologies, proactively identifying opportunities to improve our data platform and enhance our capabilities.
  • Translate complex business requirements into optimized data asset designs and efficient code, ensuring that our data solutions directly contribute to business goals.
  • Develop comprehensive documentation for data engineering processes, promoting knowledge sharing, facilitating collaboration, and ensuring long-term system maintainability.
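For orientation, a minimal sketch of the orchestration bullet above: Astronomer is a managed Apache Airflow platform, so the workflow would typically be expressed as an Airflow DAG. The DAG id, task ids, and callables below are hypothetical examples, not from this posting:

    # Minimal sketch of an Airflow DAG (Astronomer runs Apache Airflow).
    # All ids and callables are hypothetical examples; assumes Airflow 2.4+.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def ingest():
        print("ingest raw data from source systems")  # placeholder


    def curate():
        print("standardize and validate the ingested data")  # placeholder


    with DAG(
        dag_id="example_ingest_and_curate",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # pre-2.4 Airflow uses schedule_interval instead
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        curate_task = PythonOperator(task_id="curate", python_callable=curate)
        ingest_task >> curate_task  # run curation after ingestion

The Terraform side of the bullet would live alongside this as HCL configuration; it is omitted here to keep the sketch to one language.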

Experience Required

  • Expertise in NoSQL, PostgreSQL, Kafka, Google Cloud Platform, and Python.
  • 5-7 years of experience in data engineering or software engineering.
  • Strong proficiency in SQL, Java, and Python, with practical experience designing and deploying cloud-based data pipelines using Google Cloud Platform services such as BigQuery, Dataflow, and Dataproc.
  • Solid understanding of Service-Oriented Architecture (SOA) and microservices, and their application within a cloud data platform.
  • Experience with relational databases (e.g., PostgreSQL, MySQL), NoSQL databases, and columnar databases (e.g., BigQuery).
  • Knowledge of data governance frameworks, data encryption, and data masking techniques in cloud environments.
  • Familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools such as Terraform and Tekton, and other automation frameworks.
  • Excellent analytical and problem-solving skills, with the ability to troubleshoot complex data platform and microservices issues.
  • Experience monitoring and optimizing cost and compute resources for processes in Google Cloud Platform technologies (e.g., BigQuery, Dataflow, Cloud Run, Dataproc); a small illustration follows this list.
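As a small illustration of the cost-monitoring point above (a minimal sketch only; the project, table, and query are hypothetical examples, not from this posting):

    # Minimal sketch: estimate what a BigQuery query would cost before running
    # it, using a dry run. All names here are hypothetical examples.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses Application Default Credentials
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

    job = client.query(
        "SELECT customer_email, SUM(amount_usd) AS total "
        "FROM `example_project.curated.orders` "
        "GROUP BY customer_email",
        job_config=job_config,
    )

    # A dry run returns immediately with the bytes the query would scan;
    # on-demand BigQuery pricing is billed per byte processed.
    gib = job.total_bytes_processed / 2**30
    print(f"Query would scan {gib:.2f} GiB")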

Experience Preferred

  • At least 2 years of hands-on experience building and deploying cloud-based data platforms (Google Cloud Platform preferred).

Education Required

  • Bachelor's degree in Computer Science, Information Technology, Information Systems, Data Analytics, or a related field (or an equivalent combination of education and experience).

*Listed salary ranges may vary based on experience, qualifications, and local market. Some positions may also include bonuses or other incentives.*

Stefanini takes pride in hiring top talent and developing relationships with our future employees. Our talent acquisition teams will never make an offer of employment without first having a phone conversation with you. Those conversations will include a description of the job for which you have applied, as well as the process ahead, including interviews and job offers.

About Stefanini Group

The Stefanini Group is a global provider of offshore, onshore, and nearshore outsourcing, IT digital consulting, systems integration, application services, and strategic staffing to Fortune 1000 enterprises around the world. We have a presence across the Americas, Europe, Africa, and Asia, and serve more than four hundred clients across a broad spectrum of markets, including financial services, manufacturing, telecommunications, chemical services, technology, the public sector, and utilities. Stefanini is a CMM Level 5 IT consulting company with a global presence.

#LI-VL1

#LI-ONSITE