Design, build, and maintain scalable data models and transformation pipelines using dbt, SQL, and Python.
Build and optimize ELT (Extract, Load, Transform) workflows using Python scripts and orchestration tools such as Airflow or Dagster.
Implement data testing, validation, and documentation within dbt to ensure data integrity and reliability.
Manage and enhance database and data warehouse solutions (e.g., Snowflake, BigQuery, Redshift).
Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into robust data solutions.
Continuously monitor and optimize data workflows and SQL queries for performance and cost efficiency.
Version control and CI/CD: Leverage software engineering best practices, including version control (Git) and CI/CD pipelines for automated testing and deployment.
Troubleshooting: Monitor and troubleshoot issues within data pipelines and systems to ensure data availability and accuracy.
Proficiency in SQL: Strong experience writing complex, optimized SQL queries for data manipulation and extraction.
Extensive programming experience in Python for data processing, automation, and building data pipelines using libraries such as pandas or PySpark.
Hands-on experience with dbt (Data Build Tool) for data transformation, modeling, testing, and documentation.
Familiarity with modern cloud data platforms such as Snowflake, AWS, Azure, or Google Cloud Platform.
Solid understanding of data modeling concepts (e.g., star schema, dimensional modeling) and database design principles.
Experience with data integration, transformation, and ETL/ELT techniques and tools.
Excellent analytical and problem-solving skills to debug and resolve complex data issues.
Strong communication skills to effectively collaborate with both technical and non-technical teams.
Version control: Experience with version control systems, especially Git.
Skills
Mandatory Skills: Data Vault (ODS) Modeling