Job Details
Responsibilities:
• Design and build data processing pipelines using tools and frameworks in the Databricks/Azure ecosystem.
• Analyze requirements and architecture specifications to create detailed design documents.
• Responsible for data engineering functions including, but not limited to, data extraction, transformation, loading, and integration in support of modern cloud computing platforms such as Azure and Databricks.
• Develop, construct, test, and maintain data architectures.
• Ensure the data architecture supports the requirements of the business.
• Assist in the discovery of new opportunities for data acquisition.
• Work with large data sets alongside other Data Engineers and/or Data Scientists to analyze data using algorithms and machine learning.
• Implement and configure big data technologies as well as tune processes for performance at scale.
• Design and build ETL pipelines to automate the ingestion of structured and unstructured data.
• Work with DevOps engineers on Continuous Integration, Continuous Delivery (CI/CD), and Infrastructure as Code (IaC) processes; read specifications and translate them into code and design documents; and perform code reviews and develop processes for improving code quality.
• Ensure the scalability, performance, and availability of our systems.
• Responsible for deploying the developed solution in the Azure environment and examining the results for accuracy.
• Responsible for defining, implementing, and maintaining Data Governance Policies, Quality Standards, Data Security, Data Classifications, and the creation of a data catalog.
Basic Qualifications:
• Bachelor’s degree in Computer Science or a similar field, with 10 years’ experience as a Data Engineer, Database Administrator, or Software Developer.
• 5 years’ experience developing SQL and Python.
• 5 years’ experience working in an Azure environment.
• 5 years’ experience creating, implementing, and maintaining a Data Governance Policy.