Together, we own our company, our future, and our shared success. As an employee-owned company, our people are Black & Veatch. We put them at the center of everything we do and empower them to grow, explore new possibilities and use their diverse talents and perspectives to solve humanity's biggest challenges in an ever-evolving world. With over 100 years of innovation in sustainable infrastructure and our expertise in engineering, procurement, consulting and construction, together we are build
Title: Palantir Developer
Location: Dallas, Texas
Job Description:
- 3+ years of experience in software development or data engineering roles, including experience with Palantir Foundry
- Develop and deploy robust data pipelines, applications, and analytical tools using Palantir Foundry
- Create, maintain, and optimize Foundry Ontologies, Code Workbooks, and Actionable Applications
- Integrate diverse datasets into Foundry from various data sources (e.g., relational databases, APIs, files)
- Proficie
Description:
Must Haves:
- 5-8 years of hands-on experience with AWS (Kubernetes)
- 5-8 years of experience with data storage platforms (data lakes, data warehouses, etc.); the candidate specifically needs experience with the S3 cloud storage service
- Strong experience developing and maintaining data pipelines supporting ETL processes, leveraging Python and Lambda
- Must be well versed in file ingestion, extraction, storage, and generation
- 5-8 years of hands-on experience with Python coding
- add
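The S3-plus-Lambda ETL pattern this posting describes can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not a production handler: the event shape follows the standard S3 notification payload, and the `fetch` parameter is a hypothetical injection point standing in for a boto3 `get_object` call so the transform logic stays testable without AWS.

```python
import csv
import io

def transform_rows(raw_csv: str) -> list[dict]:
    """Parse a raw CSV extract and keep only rows with no empty fields."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [row for row in reader if all(row.values())]

def handler(event, context, fetch=None):
    """Entry point for an S3-triggered Lambda.

    `fetch` is injected here so the transform can run without AWS; in a
    real deployment it would wrap a boto3 call such as:
        s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    """
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    raw = fetch(bucket, key) if fetch else ""
    rows = transform_rows(raw)
    # ...load `rows` into the lake/warehouse layer here...
    return {"bucket": bucket, "key": key, "rows_loaded": len(rows)}
```

In practice the load step would write to the storage layer named in the posting; splitting the pure transform out of the handler is what makes the pipeline unit-testable.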
- Data acquisition from a variety of data sources for multiple uses
- Developing complex SQL scripts to transform the source data to fit into a dimensional model, then to create views and materialized views in Oracle
- Developing automation with Informatica IICS or PowerCenter to pull data from external data sources and transform it to fit into a dimensional model
- Collaborating with other members of the Data Engineering Team on the design and implementation of an optimal data design
- Verification and
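The dimensional-modeling work described above (fitting source data into dimension and fact tables before exposing views) can be illustrated with a small Python sketch. Table and column names here are invented for illustration; in this role the same split would be expressed in SQL against Oracle.

```python
def build_star_schema(source_rows):
    """Split denormalized source rows into one dimension table and one
    fact table, assigning surrogate keys to the dimension.
    All names (customer_id, amount, ...) are illustrative."""
    dim_keys = {}            # natural key -> surrogate key
    dim_rows, fact_rows = [], []
    for row in source_rows:
        nk = row["customer_id"]
        if nk not in dim_keys:
            dim_keys[nk] = len(dim_keys) + 1
            dim_rows.append({"customer_sk": dim_keys[nk],
                             "customer_id": nk,
                             "name": row["customer_name"]})
        fact_rows.append({"customer_sk": dim_keys[nk],
                          "amount": row["amount"]})
    return dim_rows, fact_rows
```

The surrogate-key lookup is the core of the transform: facts reference the dimension only through the generated key, which is what lets the materialized views join cheaply.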
Snowflake Developer - with solid Python & SQL
Location: Houston, TX
Working Mode: Hybrid (3 days onsite)
Job description:
- 8+ years of experience in data engineering using Snowflake and Python
- Minimum 4+ years of hands-on experience in Snowflake
- Must have strong hands-on experience in SQL and core Python
- Experience with any data analysis or BI tool is an advantage
Job Summary:
We are seeking a skilled Azure Data Engineer with hands-on experience in Azure Databricks and Azure Data Factory (ADF) to design, develop, and maintain scalable data pipelines and analytics solutions. The ideal candidate will have a strong understanding of cloud data engineering best practices and a passion for solving complex data problems.
Key Responsibilities:
- Design and implement ETL/ELT pipelines using Azure Data Factory and Databricks.
- Develop scalable data models and workflows
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL/ELT data pipelines
- Develop complex SQL queries to extract, transform, and analyze large datasets
- Write clean, efficient, and well-documented Python scripts for data processing
- Collaborate with data architects and stakeholders to ensure data quality, consistency, and governance
- Optimize performance of data workflows and queries for faster results and lower costs
- Integrate data from multiple sources including APIs, flat files,
- 10+ years of experience designing and developing Data Engineering and Extract, Transform & Load (ETL) data pipeline-based solutions
- Strong experience with Python programming (best practices, code quality, security vulnerability handling, resiliency and reliability best practices)
- Excellent analytical and problem-solving skills
- 5+ years of experience with ETL on Google Cloud Platform
- 5+ years of experience with ETL using Dataflow / Python
- 3+ years of experience with Google Cloud St
Required Skills:
- 12+ years in data engineering or architecture, with a strong focus on Databricks (at least 4-5 years) and AI/ML enablement
- Deep hands-on experience with Apache Spark, Databricks (Azure/AWS), and Delta Lake
- Proficiency in AI/ML pipeline integration using Databricks MLflow or custom model deployment strategies
- Strong knowledge of Apache Airflow, Databricks Jobs, and cloud-native orchestration patterns
- Experience with structured streaming, Kafka, and real-time analytics frameworks
- Pro
Baer is looking for a Databricks Architect for a 3+ month remote project.
Title: Databricks Architect
Location: Remote (must be based in the US)
Duration: 3 months
Rate: All-inclusive
Alignment: W2
Description:
- Design and manage scalable solutions on the Databricks platform
- Oversee Databricks workspace configuration, cluster setup, and job orchestration
- Administer user roles, access controls, and workspace policies
- Implement automation using Databricks REST APIs
- Collaborate with data engineering and
THIS IS OPEN ONLY TO PERMANENT RESIDENTS AND CITIZENS OF CANADA
REMOTE AWS Cloud Architect - Toronto
- 10+ years of experience
- Strong MySQL experience enabling platform growth and scale
- Advanced expertise and experience in schema design improvements and SQL/schema optimizations
- ETL and Data Lake management expertise required
- Knowledge of and experience with migrations from MySQL to PostgreSQL would be a definite plus
- Python/Go experience is not mandatory but highly desirable, to support curr
Job Title: Google Cloud Platform Data Engineer
Location: Remote
Employment Type: Contract (W2 only)
Duration: 12-month initial contract
About VLink: Started in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.
Job Description:
Key skills: SQL, Python, Kafka, Google Cloud Platform
De
Responsibilities:
- Lead the migration of existing SSIS-based ETL workflows to cloud-native pipelines using DBT and/or Google Cloud Platform tools such as Dataflow, Dataform, or Cloud Composer (Airflow)
- Design and implement scalable, efficient data models in BigQuery, following best practices for dimensional modeling
- Optimize and maintain existing SQL transformations, ensuring correctness and performance in the cloud
- Collaborate with BI developers and analysts to ensure data marts align wi
Vaco is seeking an experienced and dynamic Senior Business Analyst to oversee the requirements, design, testing and delivery of data-related solutions. This role is part of a Global Data Engineering and Business Intelligence (BI) team and drives actionable insights, enhances stakeholder engagement, and fosters a collaborative, high-performing team environment. The ideal candidate will have a strong technical background in Data and Business Intelligence (BI) and significant experience liaising be
Job Title: Databricks Data Engineer
Location: On-site (multiple offices nationwide)
Employment Type: Full-time
Salary Range: $165,000 - $185,000 per year + annual bonus
About the Role:
We are seeking a highly skilled Databricks Data Engineer to join our on-site team supporting clients across multiple office locations nationwide. The ideal candidate will have a strong background in designing, developing, and optimizing big data pipelines in Databricks, with solid experience across both Azure and
Junior Software Engineer (Software Engineer II)
Salary: $70-80k DOE
3-5 years of experience required
Changeis, Inc. is seeking a skilled and adaptable Junior Software Engineer to join our dynamic technology team and support our current work with the Supply Chain Systems Team. The successful candidate will be instrumental in designing, developing, and maintaining software solutions that extend and integrate with our core IFS
OPEN TO CANADIAN PERMANENT RESIDENTS AND CANADIAN CITIZENS ONLY
Snowflake Engineer - Calgary
5-8 years
Technical Skills:
- Good hands-on experience with Snowflake and strong programming experience in SQL
- Competence in Snowflake data engineering components such as Snowpipe, Tasks, UDFs, and Dynamic Tables; experience in Python
- Hands-on experience with databases, stored procedures, and optimization of huge data volumes
- In-depth knowledge of ingestion techniques, data cleaning, de-duplication, and partitioning
- Knowledge of m
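The data-cleaning and de-duplication skills listed above have a common core: keep the most recent record per business key, which is the kind of step a Snowflake Task or Dynamic Table would run on ingested data. A minimal Python sketch of that logic, with illustrative key and timestamp column names:

```python
def dedupe_latest(records, key, ts):
    """Keep only the most recent record per business key.

    `records` is a list of dicts; `key` names the business-key column
    and `ts` the ordering/timestamp column (both illustrative here).
    """
    latest = {}
    for rec in records:
        k = rec[key]
        # Replace the stored record only if this one is newer.
        if k not in latest or rec[ts] > latest[k][ts]:
            latest[k] = rec
    return list(latest.values())
```

In Snowflake itself the same idea is usually expressed with `ROW_NUMBER() OVER (PARTITION BY key ORDER BY ts DESC)` and a filter on row number 1.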
We are seeking an accomplished and visionary AI Architect to spearhead the design and deployment of transformative AI solutions centered around Large Language Models (LLMs) and Agentic AI systems. As a key technical leader, you will shape the next generation of intelligent platforms that blend powerful LLM capabilities with robust, scalable cloud and data architecture. Your expertise will drive innovation across our AI initiatives, overseeing everything from architecting advanced multi-agent fra
Must have:
- 12+ years of IT experience
- 6+ years of in-depth experience with NoSQL database systems, including architecture design, such as Cassandra, ScyllaDB, DynamoDB, and MongoDB (must be a domain expert in at least two)
- 5+ years of experience with infrastructure automation and configuration management (Ansible, Terraform); able to orchestrate and automate complex administrative tasks
- 5+ years of experience with AWS DB solutions and their ecosystems (EC2, DynamoDB, CloudWatch, AMI, Security Group, VPC
Job Title: Tableau to Databricks Migration Specialist
Location: Remote
Type: Contract
Duration: Long term
Job Summary:
We are seeking a highly skilled Tableau to Databricks Migration Specialist to lead and execute the migration of enterprise-level reporting and analytics from Tableau dashboards to the Databricks Lakehouse Platform. The ideal candidate will have hands-on experience with data visualization, ETL processes, SQL optimization, and translating BI logic from Tableau to scalable notebook