Spark Jobs


AWS Data Engineer - CDC(Change Data Capture)

Nityo Infotech Corporation

Mountain View, California, USA

Full-time

Job Description: Bachelor's or Master's degree in Computer Science or a related field; 2+ years engineering experience (2+ years in data infrastructure); programming skills, preferably in Python or Java; experience with cloud data lakes such as Snowflake or Databricks; experience with streaming and big data processing frameworks (e.g., Kafka, Spark); experience with workflow orchestration systems; expertise in implementing CDC solutions (e.g., Debezium, Maxwell, DMS); working experience with cloud platforms (preferably AWS)

Data Engineer

Amiseq Inc.

San Jose, California, USA

Contract

Job Description: We are seeking a skilled Data Engineer with expertise in Big Data technologies to join our client in San Jose, CA. This role is a long-term W2 contract with onsite presence 3 days per week. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL workflows. Work extensively with Big Data technologies including Hadoop, HDFS, Hive, and Spark SQL. Develop and optimize Python scripts for data processing and automation. Write and maintain Unix/Linux shell

Senior Data Engineer

Ace Technologies, Inc.

Remote

Contract

NO FAKE consultants: submitting fake candidates will get you taken off. Strong SQL, Python, PySpark, Glue, Step Functions; some experience/exposure with Power BI/AI. 100% remote; work EST hours. Need a Sr. resource who can articulate and perform at a high level on camera with heavy Slack activity. Engagement: 2+ years. Need them to start ASAP. Must be a Data Engineer with a strong background in AWS Glue, PySpark, Python, Healthcare EMR, and AWS Step Functions. 12+ years of development experience; 5+ years SQL S

Python Architect

Deemsys Inc

Columbus, Ohio, USA

Contract

Job Title: Python Architect - AI/ML (Retail Domain). Experience: 12-15 years. Location: Columbus, Ohio (local DL a must). Type: C2C. Skills: Python; AI/ML (TensorFlow, PyTorch, Scikit-learn); retail domain expertise (recommendation systems, forecasting); Cloud: AWS / Azure / Google Cloud Platform; REST APIs, microservices, architecture design; data pipelines, ETL. Nice to have: Spark, MLOps, Docker/Kubernetes

Databricks Certified Data Engineer

Satsyil Corporation

Remote

Full-time

Role: Sr. Databricks Architect with Databricks certification. Location: 1775 Tysons Blvd, VA (remote). Duration: FTE. Job Description: Satsyil Corp is currently seeking a highly skilled and motivated Senior Databricks Architect to join our team and contribute to the success of our Enterprise Data Services project. As a Databricks Architect, you will play a crucial role in developing and optimizing Spark applications in AWS Databricks, leveraging your expertise in Python, SQL, and PySpark. The id

Databricks Admin with AWS

NeoTech Solutions

Menlo Park, California, USA

Third Party

Role: Databricks Admin with AWS. Location: Menlo Park, CA (onsite). Responsibilities: Databricks Platform Infrastructure: Design, build, and maintain data infrastructure on the Databricks Platform, including Spark and Databricks SQL. Data Team Enablement: Provide technical expertise and support to Data Scientists, Data Engineers, and Analysts, enabling them to effectively utilize the Databricks platform. Performance Optimization: Optimize Databricks workloads for performance, scalability, and

Analytics Engineer

Caresoft

Atlanta, Georgia, USA

Contract

Title: Analytics Engineer. Duration: Long term. Location: Atlanta, Georgia 30334 (Hybrid). Skills: Years of experience in data engineering or analytics engineering roles. Advanced SQL: proficiency in advanced SQL techniques for data transformation, querying, and optimization. Azure Databricks (Spark, Delta Lake); Microsoft Fabric (Dataflows, Pipelines, OneLake); SQL and Python (Pandas, PySpark)

Data Engineer

Randstad Digital

North Carolina, USA

Contract

Job summary: Location: Durham, North Carolina. Required Skills: Bachelor's degree in Computer Science, Information Systems, or a related field; proven track record in data engineering; 10+ years' experience developing Spark or Spring Batch services for data movement. Location: Durham, North Carolina. Job type: Contract. Salary: $72-73 per hour. Work hours: 8am to 5pm. Education: Bachelor's. Responsibilities: Schedule, monitor, and debug ETL Spring Batch and Spark Batch. Hands-on experience with Java

Lead Python Developer

Connect Tech+Talent

Calgary, Alberta, Canada

Contract, Third Party

Job Title: Lead Python Developer. Location: GTA/Calgary, Canada. Position with long-term growth potential and exciting technical challenges! Must-Have Skills: 6-8 years of hands-on experience in Python; strong knowledge of Airflow, Kubernetes, ELK Stack; backend experience with Flask, Django, or FastAPI (3+ years). Nice-to-Have / Highly Preferred: Exposure to Generative AI use cases; experience with DB-to-DB migrations, particularly from MS SQL to open-source DBs like Apache Spark or ClickHouse. Pro

Lead Google Cloud Platform Data Engineer

Data Capital Inc

Sunnyvale, California, USA

Full-time

4+ years of recent Data Engineer experience; experience building data pipelines in Google Cloud Platform; Google Cloud Platform Dataproc, GCS & BigQuery experience; 10+ years of hands-on experience developing data warehouse solutions and data products; 6+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or a workflow orchestration solution; 5+ years of hands-on experience in modeling and designing schemas for data lakes

Lead Foundry Engineer

K-Tek Resourcing LLC

Atlanta, Georgia, USA

Full-time

We are hiring! Know anyone who might be interested? Full-time role for AIG in Atlanta, GA; H1 transfer & relocation will work. Lead Foundry Engineer: total experience of 10+ years in Data Engineering using PySpark, and must have at least 3-4 years of experience in Palantir Foundry. Strong experience with Palantir Data Engineering features such as Code Repo, Code Workbook, Pipeline Build, AIP Logic, OSDK, migration techniques, Data Integration, and Security setup. Design and develop Data Pipelines, and have exc

Java Developer (Backend)

Korn Ferry

New York, New York, USA

Contract

Title: Java Developer (Backend) Location: Hybrid in Toronto/New York (Toronto preferred) Client Industry: Financial Services - Banking Compensation: $60 - $75/hr. (contract/contract-to-hire) We have partnered with our client in their search for a Java Developer (Backend). In this role you will help drive the build-out of cloud-native, data-intensive treasury-analytics solutions by combining Spring Boot microservices, Spark Databricks pipelines, and RESTful APIs to deliver secure, scalable, high-

SRE/Devops/Kubernetes/Python

Infonex Technologies, Inc.

Pleasanton, California, USA

Contract

Position: DevOps/Kubernetes - Open Position - CA. Type: Contract. Duration: 12+ months. Location: Pleasanton, CA. Job Description: Required Skills: Spark, Hadoop/CDH, H2O/Steam, MapR, Kubernetes, Docker, TensorFlow, Apache Airflow, JupyterHub, RStudio, PyTorch, ELK, OpenVINO, MySQL, GitLab, Traefik, Prometheus, Grafana, Node Manager, Alert Manager, Vault. Notes: The client currently has an on-prem environment. The client wants experience in containerization with Kubernetes, Vault, Slurm with RStudio hook all the components

Sr. Data Engineer (Foundry AIP / Foundry Integration Specialist) Full-Time

K-Tek Resourcing LLC

Atlanta, Georgia, USA

Full-time

Location: Atlanta, GA (Hybrid). Type: Full-Time. Sr. Data Engineer (Foundry AIP / Foundry Integration Specialist): total experience of 6+ years in Data Engineering using PySpark, and must have at least 2-3 years of experience in Palantir Foundry. Strong experience with Palantir Data Engineering features such as Code Repo, Code Workbook, Pipeline Build, and AIP / OSDK is mandatory. Design and develop Data Pipelines, with excellent skills in PySpark and Spark SQL, hands-on with code Build and deploy

Full Stack Developer - C#

Triumph Tech

Surprise, Arizona, USA

Full-time

Since its release in 2014, Spark's initial project, Rock RMS, has been leading the innovation curve of the Church Management Space. Hundreds of ministries have been impacted by this open-source revolution. It's amazing what God has done and we believe He's just getting started. This position is a key hire that will allow Spark to take Rock to the next level by providing consulting services and custom development for churches using Rock. We're looking for quick learners who are motivated by applyi

Google Cloud Platform Engineer

Aptino

Jersey City, New Jersey, USA

Contract, Third Party

Implement and control the flow of data to and from Google Cloud Platform. Migrate on-premises workloads to Google Cloud Platform; hands-on experience migrating data to the cloud. Familiarity with the disciplines of enterprise software development such as configuration & release management and source code. Data engineering tools and technologies, including SQL and Google Cloud Platform. Experience with BigQuery and Big Data technologies like Spark and Hadoop. Exposure to any programming (Java, .Net,

Data Integration Lead

Hexaware Technologies, Inc

Dallas, Texas, USA

Full-time

Responsibilities: Experience implementing data integration workflows using SAS tools such as SAS DI Studio. Sanitize SAS programs in preparation for conversion. Optimize SAS programs in preparation for conversion. Incorporate changes in the accelerator to ensure generated code aligns to required standards. Utilize the accelerator to convert SAS programs to PySpark programs. Demonstrate and document code lineage. Unit test generated PySpark programs. Incorporate feedback from PySpark EAP integration, SIT, and pari

Business Systems Analyst (BSA)

Korn Ferry

Toronto, Ontario, Canada

Contract

Title: Business Systems Analyst (BSA). Location: Hybrid in Toronto. Client Industry: Financial Services - Banking. Compensation: $60-70/hr CAD (contract/contract-to-hire). We have partnered with our client in their search for a Business Systems Analyst (BSA). This role will support the build-out of a next-generation risk, valuations, and analytics platform by translating complex treasury and capital markets business needs into performant, cloud-based data solutions. Responsibilities: Lead end to en

ML Ops senior engineer

Yashco Systems, Inc.

Sunnyvale, California, USA

Contract

Must have: Expertise in Python and experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, etc.). Strong experience in deployment/DevOps technologies: CI/CD pipelines, Kubernetes/Docker, and infrastructure-as-code tools (Terraform, Ansible, etc.), as well as cloud-native architectures (Google Cloud Platform and Azure) and monitoring and observability for ML workloads. Advanced understanding of ML pipeline orchestration tools like Kubeflow, MLflow, Airflow, or TFX. Nice to have: Experience with dis

Databricks Architect - AI/ML

Swanktek

Remote

Full-time

Required Skills: 12+ years in data engineering or architecture, with a strong focus on Databricks (at least 4-5 years) and AI/ML enablement. Deep hands-on experience with Apache Spark, Databricks (Azure/AWS), and Delta Lake. Proficiency in AI/ML pipeline integration using Databricks MLflow or custom model deployment strategies. Strong knowledge of Apache Airflow, Databricks Jobs, and cloud-native orchestration patterns. Experience with structured streaming, Kafka, and real-time analytics frameworks. Pro