Overview
Remote
Hybrid
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
Azure
Data Engineer
Job Details
Hi,
I hope you are doing well.
Please let me know if you are open to a job change and interested in the position below.
Azure Data Engineer
Hybrid: St. Louis, MO
6 months with possible extension/conversion
Interview: Video (local candidates who can attend an F2F interview are preferred, but non-local candidates may interview by video)
- Initial Project: Migrate the existing Python data extraction tool and codebase to the Alpine Data Lake, set up API extraction in Synapse, establish a daily ETL for updates, and update Power BI reports to use the data lake as their source. [Alpine Strategic Credit (ASC)]
- Subsequent Project: Extract and migrate 20+ years of historical data from the Portfolio system to a data warehouse, set up a real-time API for a continuous data feed from Tamarac, establish a daily ETL for updates, and enable improved Power BI reporting. [Alpine Private Wealth (APW)]
- Both projects involve ensuring data accuracy and integrity and structuring data for seamless integration into the data warehouse and Power BI.
- Data modeling will use a Star Schema or Snowflake Schema approach (a brief sketch follows below).
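For context, here is a minimal sketch of the Star Schema approach in a Synapse Spark notebook. All paths, table names, and columns (client_id, market_value, etc.) are illustrative assumptions, not the client's actual model:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Illustrative raw extract already landed in the data lake (paths are placeholders).
    raw = spark.read.parquet("abfss://raw@<storage-account>.dfs.core.windows.net/asc/extracts/")

    # Dimension: one row per client (column names assumed for illustration).
    dim_client = raw.select("client_id", "client_name", "advisor").dropDuplicates(["client_id"])

    # Fact: measures keyed to the dimension by client_id.
    fact_positions = raw.select(
        "client_id",
        F.to_date("as_of_date").alias("as_of_date"),
        F.col("market_value").cast("decimal(18,2)").alias("market_value"),
    )

    # Persist to a curated zone that Power BI can read.
    dim_client.write.mode("overwrite").parquet("abfss://curated@<storage-account>.dfs.core.windows.net/dw/dim_client/")
    fact_positions.write.mode("overwrite").parquet("abfss://curated@<storage-account>.dfs.core.windows.net/dw/fact_positions/")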
Responsibilities:
- Design, develop, and maintain data pipelines and warehousing solutions.
- Key tasks include API integration, ETL development, data modeling (Star Schema or Snowflake Schema), and supporting Power BI reporting (an extraction sketch follows this list).
- Collaborate with internal project teams to ensure data accuracy, integrity, and structured organization for business intelligence.
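As a rough illustration of the API-extraction-to-data-lake step: the endpoint, container, and file path below are placeholders (the actual Tamarac/Portfolio APIs are not specified in this posting), using the standard Azure SDK for Python:

    import json
    import requests
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    # Placeholder endpoint; the real source API is project-specific.
    API_URL = "https://api.example.com/v1/accounts"

    def extract_to_lake(account_url: str, file_system: str, path: str) -> None:
        # Pull one batch of data from the source API.
        payload = requests.get(API_URL, timeout=30).json()

        # Land the raw JSON in Azure Data Lake Storage Gen2.
        service = DataLakeServiceClient(account_url, credential=DefaultAzureCredential())
        file_client = service.get_file_system_client(file_system).get_file_client(path)
        file_client.upload_data(json.dumps(payload), overwrite=True)

    extract_to_lake(
        "https://<storage-account>.dfs.core.windows.net",
        "raw",
        "asc/accounts/latest.json",
    )

A daily ETL would typically wrap a call like this in a scheduled Synapse or Data Factory pipeline.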
Tech stack:
- Azure Synapse Analytics
  - Two separate environments (e.g., Development and Production).
  - Handles data warehousing and large-scale analytics workloads.
- Azure Data Lake
  - Centralized storage layer.
  - Supports both structured and unstructured data.
  - Scalable foundation for analytics and data integration.
- Azure Key Vault
  - Manages secrets, encryption keys, and certificates.
  - Ensures secure access across both environments (see the access sketch after this list).
- Azure DevOps
  - CI/CD pipelines for automated builds and deployments.
  - Manages the data pipeline lifecycle and component delivery.
- Apache Spark Notebooks
  - Deployed in both environments.
  - Used for interactive data exploration, transformation, and analytics.
- Azure Integration Runtime
  - Facilitates secure and scalable data movement.
  - Enables transformations across network boundaries within Synapse or Data Factory.
- Metastore Data Warehouse
  - Centralized metadata repository.
  - Maintains schema definitions and table metadata.
- ARM Templates (Azure Resource Manager)
  - Define and automate infrastructure deployment.
  - Enable consistent provisioning of Synapse, Data Lake, Key Vault, and other resources across environments.
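For reference, a minimal sketch of pulling a connection secret from Azure Key Vault with the standard Azure SDK; the vault URL and secret name are placeholders for whatever this environment actually uses:

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # Vault URL and secret name are illustrative placeholders.
    client = SecretClient(
        vault_url="https://<key-vault-name>.vault.azure.net",
        credential=DefaultAzureCredential(),
    )
    connection_string = client.get_secret("warehouse-connection-string").value

DefaultAzureCredential lets the same code authenticate via managed identity in Synapse and via developer credentials locally, which is how a single pipeline codebase can span both environments.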
Thanks & Regards,
Anikat Kumar
Sr. Technical Recruiter
ShiftCode Analytics Inc.