Sr. Azure Data Engineer

Depends on Experience

Contract: W2, Independent, 36 Month(s)

    Skills

    • Azure Analysis Services
    • Azure Data Factory
    • Azure Databricks
    • Azure DevOps
    • Azure SQL
    • Computer Science
    • Continuous Deployment
    • Continuous Integration

    Job Description

    No sponsorships

    Location:  US Remote - EST/CST time zones
    Duration:  Long-term contract, no end date

    Skills

    • Must have experience with star schema and multidimensional modeling, dimensional data marts, and data warehouses
    • Azure Data Factory, Azure Data Lake, Azure SQL Data Warehouse, Azure Databricks, Azure Synapse, Azure Analysis Services, Azure DevOps
    • 10+ years of experience working with ETL processes
    • 4+ years of experience developing ETL processes using Azure ETL tools


    Qualifications

    • Bachelor’s degree in Information Systems, Computer Science, Engineering, or related field
    • Proficient in designing and implementing data sourcing and transformation processes (ETL) in a large, distributed environment using cloud services (e.g., Azure Event Hubs, Data Factory, Data Catalog, Databricks, Stream Analytics, Apache Spark, Python)
    • Hands-on experience developing complex ETL pipelines that ingest and transform data from various source systems into Azure cloud operational databases and data warehouse environments
    • Proficient in developing and operationalizing various data distribution patterns such as APIs, event-based integration, and publish/subscribe models
    • Well versed in data models and source-to-target definitions; able to identify and fix discrepancies while adhering to enterprise data governance standards
    • Adept with Azure DevOps-CI/CD (Continuous Integration and Continuous Deployment) and Test-Driven Development practices
    • Design, develop, and implement scalable batch and real-time data pipelines to integrate data from a variety of sources into an Azure database, data warehouse, and data lake
    • Experience building Azure Data Factory and Synapse pipelines and provisioning Databricks notebooks and environments for ETL processing
    • Experience migrating data lakes from on-premises environments to the cloud
    • Experience with agile delivery using Azure DevOps
    • Experience monitoring the performance of data workflows and infrastructure using monitoring tools and enabling alerts
    • Experience with data mining, pattern matching, forecasting, sentiment analysis, cluster analysis, or similar techniques
    • Functional knowledge of data visualization tools

     

    Responsibilities

    • Design and implement data frameworks and pipelines to process data from on-premises and cloud data sources into the Azure Data Lake; monitor data quality and implement related controls
    • Evaluate existing designs, improve methods, and implement optimizations
    • Analyze and validate data sharing requirements with internal and external data partners
    • Analyze and interpret collected data; identify trends; write reports and recommendations for internal or external clients
    • Use statistical practices to analyze current and historical data, make predictions, and identify risks and opportunities, enabling better decisions on planned and future events
    • Work with business leadership to understand data requirements; propose and develop solutions that enable effective decision-making and drive business objectives
    • Prepare project deliverables that are valued by the business and present them in such a manner that they are readily understood by project stakeholders
    • Prepare advanced project implementation plans using Agile methodologies, which highlight milestones and deliverables, leveraging standards, practices, and work planning tools
    • Recognize potential issues and risks during the analytics project implementation and recommend mitigation strategies
    • Document best practices for data analysis and data engineering, and evangelize their use
    • Work with stakeholders, including executives and product engineering teams, to assist with data-related technical issues and support their data infrastructure needs
    • Create data tools for the data science team to assist them in building and optimizing our product into an innovative industry leader
    • Work with data and analytics experts to strive for greater functionality in our data capabilities