Spark Jobs

SW Developer (Python + Hadoop)

Wise Equation Solutions Inc.

Chicago, Illinois, USA

Contract

Role: Python + Hadoop. Location: Charlotte, NC / Chicago, IL / Denver, CO (local only). Onsite/hybrid: 5 days onsite. Years of experience required: 7+. Must-have skills: Python + Hadoop. Required skills: Strong SQL skills in one or more of MS SQL, MySQL, Hive, Impala, Spark SQL. Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats like JSON, CSV, XML. Excellent object-oriented programming experience with Python. Experience w
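
To illustrate the kind of ingestion work this posting describes, here is a minimal PySpark sketch that reads JSON and CSV inputs and queries them through Spark SQL; the paths, column names, and session settings are hypothetical (XML would additionally need the spark-xml package).

```python
from pyspark.sql import SparkSession

# Minimal local session; a real cluster would use different configuration.
spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Read a JSON feed and a CSV extract (placeholder paths).
events = spark.read.json("/data/landing/events.json")
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/data/landing/orders.csv")
)

# Expose both as temporary views so they can be queried with Spark SQL.
events.createOrReplaceTempView("events")
orders.createOrReplaceTempView("orders")

daily_orders = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count
    FROM orders
    GROUP BY order_date
""")
daily_orders.show()
```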

Data Engineer

Amiseq Inc.

San Jose, California, USA

Contract

Job Description: We are seeking a skilled Data Engineer with expertise in Big Data technologies to join our client in San Jose, CA. This is a long-term W2 contract with onsite presence 3 days per week. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL workflows. Work extensively with Big Data technologies including Hadoop, HDFS, Hive, and Spark SQL. Develop and optimize Python scripts for data processing and automation. Write and maintain Unix/Linux shell

Senior Data Engineer

Ace Technologies, Inc.

Remote

Contract

NO FAKE consultants; fake candidates will get you taken off. Strong SQL, Python, PySpark, Glue, Step Functions; some experience/exposure with Power BI/AI. 100% remote, working EST hours. Need a Sr. resource who can articulate and perform at a high level on camera, with heavy Slack activity. Engagement: 2+ years. Need them to start ASAP. Must be a Data Engineer with a strong background in AWS Glue, PySpark, Python, Healthcare EMR, and AWS Step Functions. 12+ years of development experience. 5+ years SQL S
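
For context, a bare-bones AWS Glue PySpark job of the sort this posting calls for might look like the sketch below; the catalog database, table, filter, and S3 path are all hypothetical, and the awsglue modules are only available inside a Glue job run.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Glue (or a Step Functions invocation) passes the job name as an argument.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a hypothetical Data Catalog table, filter it, and write Parquet to S3.
claims = glue_context.create_dynamic_frame.from_catalog(
    database="healthcare_db", table_name="claims_raw"
)
recent = claims.toDF().where("claim_year >= 2023")
recent.write.mode("overwrite").parquet("s3://example-bucket/claims/curated/")

job.commit()
```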

Databricks Certified Data Engineer

Satsyil Corporation

Remote

Full-time

Role: Sr. Databricks Architect with Databricks certification. Location: 1775 Tysons Blvd, VA (REMOTE). Duration: FTE. Job Description: Satsyil Corp is currently seeking a highly skilled and motivated Senior Databricks Architect to join our team and contribute to the success of our Enterprise Data Services project. As a Databricks Architect, you will play a crucial role in developing and optimizing Spark applications in AWS Databricks, leveraging your expertise in Python, SQL, and PySpark. The id

Python Architect

Deemsys Inc

Columbus, Ohio, USA

Contract

Job Title: Python Architect AI/ML (Retail Domain). Experience: 12-15 years. Location: Columbus, Ohio (local DL a must). Type: C2C. Skills: Python; AI/ML (TensorFlow, PyTorch, Scikit-learn); retail domain expertise (recommendation systems, forecasting); Cloud: AWS / Azure / Google Cloud Platform; REST APIs, microservices, architecture design; data pipelines, ETL. Nice to have: Spark, MLOps, Docker/Kubernetes

Databricks Admin with AWS

NeoTech Solutions

Menlo Park, California, USA

Third Party

Role: Databricks Admin with AWS. Location: Menlo Park, CA (Onsite). Responsibilities: Databricks Platform Infrastructure: Design, build, and maintain data infrastructure on the Databricks Platform, including Spark and Databricks SQL. Data Team Enablement: Provide technical expertise and support to Data Scientists, Data Engineers, and Analysts, enabling them to effectively utilize the Databricks platform. Performance Optimization: Optimize Databricks workloads for performance, scalability, and

Data Engineer

Randstad Digital

North Carolina, USA

Contract

Job summary: Location: Durham, North Carolina. Required skills: Bachelor's degree in Computer Science, Information Systems, or a related field; proven track record in data engineering; 10+ years' experience developing Spark or Spring Batch services for data movement. Location: Durham, North Carolina. Job type: Contract. Salary: $72 - 73 per hour. Work hours: 8am to 5pm. Education: Bachelor's. Responsibilities: Schedule, monitor, and debug ETL Spring Batch and Spark Batch. Hands-on experience with Java

Analytics Engineer

Caresoft

Atlanta, Georgia, USA

Contract

Title: Analytics Engineer. Duration: Long term. Location: Atlanta, Georgia 30334 (Hybrid). Skills: Years of experience in data engineering or analytics engineering roles. Advanced SQL: proficiency in advanced SQL techniques for data transformation, querying, and optimization. Azure Databricks (Spark, Delta Lake). Microsoft Fabric (Dataflows, Pipelines, OneLake). SQL and Python (Pandas, PySpark)
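
As a rough illustration of the Azure Databricks / Delta Lake work listed above, here is a small PySpark sketch that aggregates a source table and saves the result as a Delta table; the table names and columns are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a session already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical source table registered in the metastore.
sales = spark.table("bronze.sales_raw")

# A typical analytics-engineering transformation: daily totals per region.
daily_sales = (
    sales
    .withColumn("sale_date", F.to_date("sale_ts"))
    .groupBy("sale_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

# Persist as a Delta table for downstream SQL and BI consumption.
daily_sales.write.format("delta").mode("overwrite").saveAsTable("silver.daily_sales")
```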

Lead Python Developer

Connect Tech+Talent

Calgary, Alberta, Canada

Third Party, Contract

Job Title: Lead Python Developer. Location: GTA/Calgary, Canada. Position with long-term growth potential and exciting technical challenges! Must-have skills: 6-8 years of hands-on experience in Python; strong knowledge of Airflow, Kubernetes, ELK Stack; backend experience with Flask, Django, or FastAPI (3+ years). Nice-to-have / highly preferred: Exposure to Generative AI use cases; experience with DB-to-DB migrations, particularly from MS SQL to open-source DBs like Apache Spark or ClickHouse Pro
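
To give a sense of the Airflow experience the posting asks for, here is a minimal DAG sketch for a DB-to-DB migration step; it assumes Airflow 2.4+ and uses placeholder task logic.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    # Placeholder: pull rows from the source MS SQL database.
    print("extracting source rows")


def load() -> None:
    # Placeholder: write the rows into the open-source target store.
    print("loading into target database")


with DAG(
    dag_id="db_to_db_migration",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```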

Lead Google Cloud Platform Data Engineer

Data Capital Inc

Sunnyvale, California, USA

Full-time

4+ years of recent Data Engineer experience. Experience building data pipelines in Google Cloud Platform. Google Cloud Platform Dataproc, GCS, and BigQuery experience. 10+ years of hands-on experience developing data warehouse solutions and data products. 6+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution. 5+ years of hands-on experience in modeling and designing schemas for data lakes
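
For reference, reading a BigQuery table from Spark typically goes through the spark-bigquery connector, roughly as in this sketch; the project, dataset, table, and GCS bucket are placeholders, and the connector jar must be available on the cluster (for example on Dataproc).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-read-sketch").getOrCreate()

# Read a hypothetical BigQuery table via the spark-bigquery connector.
events = (
    spark.read.format("bigquery")
    .option("table", "example-project.analytics.events")
    .load()
)

# Aggregate and stage the result back to GCS as Parquet.
daily_counts = events.groupBy("event_date").count()
daily_counts.write.mode("overwrite").parquet("gs://example-bucket/curated/daily_counts/")
```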

Python + Hadoop - Charlotte, NC / Chicago, IL / Denver, CO

Lorven Technologies, Inc.

Charlotte, North Carolina, USA

Contract

Python + Hadoop. Position: Contract (W2). Location: Charlotte, NC / Chicago, IL / Denver, CO. Duration: 12 months. Job description: Strong SQL skills in one or more of MS SQL, MySQL, Hive, Impala, Spark SQL. Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats like JSON, CSV, XML. Excellent object-oriented programming experience with Python. Experience with testing frameworks including pytest and Python unittest. Experience wor

Java Developer (Backend)

Korn Ferry

New York, New York, USA

Contract

Title: Java Developer (Backend) Location: Hybrid in Toronto/New York (Toronto preferred) Client Industry: Financial Services - Banking Compensation: $60 - $75/hr. (contract/contract-to-hire) We have partnered with our client in their search for a Java Developer (Backend). In this role you will help drive the build-out of cloud-native, data-intensive treasury-analytics solutions by combining Spring Boot microservices, Spark Databricks pipelines, and RESTful APIs to deliver secure, scalable, high-

SRE/DevOps/Kubernetes/Python

Infonex Technologies, Inc.

Pleasanton, California, USA

Contract

Position: DevOps/Kubernetes - Open Position - CA. Type: Contract. Duration: 12+ months. Location: Pleasanton, CA. Job Description: Required skills: Spark, Hadoop/CDH, H2O/Steam, MapR, Kubernetes, Docker, TensorFlow, Apache Airflow, JupyterHub, RStudio, PyTorch, ELK, OpenVINO, MySQL, GitLab, Traefik, Prometheus, Grafana, Node Manager, Alert Manager, Vault. Notes: Currently the client has an on-prem environment. The client wants experience in containerization with Kubernetes, Vault, and Slurm, with RStudio hooked into all the components

Sr. Data Engineer/ EDI (835 & 837)

Encora

Remote

Contract, Third Party

Sr. Data Engineer / EDI (835 & 837). Location: Remote. Duration: 6+ months contract. Data Engineer with experience in healthcare claims data processing (835 and 837 EDI transactions) using the below tech stack. Job Description: Looking for data engineers to help establish a data lake for healthcare information. Data sources will include data in flat EDI files, relational databases, and Kafka. Tasks: Write data pipeline code to process data sources into an Iceberg-focused data lake. Analyze
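
As a sketch of the Iceberg-focused pipeline work described here, the snippet below loads a flat claims extract and appends it to an Iceberg table with Spark; the catalog name, table schema, and file path are hypothetical, and an Iceberg catalog is assumed to be configured on the session.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes an Iceberg catalog named "lake" is configured for this Spark session.
spark = SparkSession.builder.appName("edi-to-iceberg-sketch").getOrCreate()

# Hypothetical parsed 837 claim records landed as a flat file.
claims = spark.read.option("header", True).csv("/landing/edi/claims_837.csv")

# Create the Iceberg table on first run; later batches simply append.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.claims.edi_837 (
        claim_id STRING,
        payer_id STRING,
        billed_amount DOUBLE
    ) USING iceberg
""")

claims_typed = claims.withColumn("billed_amount", F.col("billed_amount").cast("double"))
claims_typed.select("claim_id", "payer_id", "billed_amount").writeTo("lake.claims.edi_837").append()
```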

Full Stack Developer - C#

Triumph Tech

Surprise, Arizona, USA

Full-time

Since its release in 2014, Spark's initial project, Rock RMS, has been leading the innovation curve of the Church Management Space. Hundreds of ministries have been impacted by this open-source revolution. It's amazing what God has done and we believe He's just getting started. This position is a key hire that will allow Spark to take Rock to the next level by providing consulting services and custom development for churches using Rock. We're looking for quick learners who are motivated by applyi

Data Integration Lead

Hexaware Technologies, Inc

Dallas, Texas, USA

Full-time

Responsibilities: Experience implementing data integration workflows using SAS tools such as SAS DI Studio. Sanitize SAS programs in preparation for conversion. Optimize SAS programs in preparation for conversion. Incorporate changes in the accelerator to ensure generated code aligns to required standards. Utilize the accelerator to convert SAS programs to PySpark programs. Demonstrate and document code lineage. Unit test generated PySpark programs. Incorporate feedback from PySpark EAP integration, SIT, and pari
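
To illustrate the SAS-to-PySpark conversions described above, here is a small sketch of how a simple SAS DATA step and PROC MEANS might translate; the dataset and column names are made up for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas-conversion-sketch").getOrCreate()

# Hypothetical input corresponding to a SAS dataset read in a DATA step.
orders = spark.read.option("header", True).csv("/data/orders.csv")

# Rough equivalent of:
#   data work.big_orders;
#     set work.orders;
#     where amount > 1000;
#     high_value = 1;
#   run;
big_orders = (
    orders
    .where(F.col("amount").cast("double") > 1000)
    .withColumn("high_value", F.lit(1))
)

# Rough equivalent of PROC MEANS with a CLASS statement on region.
summary = big_orders.groupBy("region").agg(
    F.avg(F.col("amount").cast("double")).alias("avg_amount")
)
summary.show()
```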

Google Cloud Platform Engineer

Aptino

Jersey City, New Jersey, USA

Contract, Third Party

Implement and control the flow of data to and from Google Cloud Platform. Migrate on-premises workloads to Google Cloud Platform; hands-on experience migrating data to the cloud. Familiarity with the disciplines of enterprise software development such as configuration & release management and source code. Data engineering tools and technologies, including SQL and Google Cloud Platform. Experience with BigQuery and Big Data technologies like Spark and Hadoop. Exposure to any programming (Java, .NET,

Business Systems Analyst (BSA)

Korn Ferry

Toronto, Ontario, Canada

Contract

Title: Business Systems Analyst (BSA). Location: Hybrid in Toronto. Client Industry: Financial Services - Banking. Compensation: $60-70/hr CAD (contract/contract-to-hire). We have partnered with our client in their search for a Business Systems Analyst (BSA). This role will support the build-out of a next-generation risk, valuations, and analytics platform by translating complex treasury and capital markets business needs into performant cloud-based data solutions. Responsibilities: Lead end-to-en

Python Developer (with Hadoop exp) for contract in Charlotte, NC / Chicago, IL / Denver, CO

Last Word Consulting

Charlotte, North Carolina, USA

Contract

Title: Python Developer (with Hadoop exp). Location: Charlotte, NC / Chicago, IL / Denver, CO. Onsite/hybrid: 5 days onsite. Years of experience required: 7+. Must-have skills: Python + Hadoop. Only W2. Required skills: Strong SQL skills in one or more of MS SQL, MySQL, Hive, Impala, Spark SQL. Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats like JSON, CSV, XML. Excellent object-oriented programming experience with Py

MLOps Senior Engineer

Yashco Systems, Inc.

Sunnyvale, California, USA

Contract

Must have: Expertise in Python and experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, etc.). Strong experience in deployment/DevOps technologies: CI/CD pipelines, Kubernetes/Docker, infrastructure-as-code tools (Terraform, Ansible, etc.), and cloud-native architectures (Google Cloud Platform and Azure); monitoring and observability for ML workloads. Advanced understanding of ML pipeline orchestration tools like Kubeflow, MLflow, Airflow, or TFX. Nice to have: Experience with dis
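
As a pointer to the kind of ML pipeline tooling this posting lists, here is a minimal MLflow tracking sketch around a scikit-learn model; the experiment name and hyperparameters are illustrative only.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record the hyperparameter and the evaluation metric for this run.
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", accuracy)
```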