Data Engineer - Columbus, OH - Local Candidates Only

  • Columbus, OH

Overview

  • Work Arrangement: On Site
  • Compensation: Depends on Experience
  • Accepts Corp-to-Corp applications
  • Contract Types: W2, Independent (1099)
  • Contract Duration: 12 Month(s)
  • Sponsorship: Able to Provide

Skills

Big Data
Apache Spark
PySpark

Job Details

Job Title: Data Engineer (PySpark & Databricks)
Location: Columbus, OH - Onsite from Day 1
Job Type: Contract W2 / 1099
Experience Level: Mid-Level/Senior
Local candidate profiles only.

Job Summary:
We are seeking a highly skilled and motivated Data Engineer with strong experience in PySpark and Databricks to join our data engineering team. In this role, you will be responsible for building scalable data pipelines, optimizing big data workflows, and enabling data accessibility across the organization. You will work closely with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.

Key Responsibilities:
  • Design, build, and maintain scalable ETL/ELT pipelines using PySpark and Databricks on cloud platforms (Azure, AWS, or Google Cloud Platform); a minimal illustrative sketch follows this list.
  • Develop and optimize large-scale data processing solutions in Apache Spark.
  • Implement data ingestion processes from structured and unstructured data sources.
  • Collaborate with cross-functional teams to gather data requirements and deliver clean, reliable data sets.
  • Ensure data quality, governance, and security throughout the data lifecycle.
  • Monitor and troubleshoot data pipelines, jobs, and performance issues.
  • Implement data transformation and enrichment logic to support analytics and reporting.
  • Work with modern data lake and data warehouse architectures (e.g., Delta Lake, Lakehouse, Snowflake).
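For candidates gauging fit, here is a minimal, purely illustrative sketch of the kind of PySpark/Databricks pipeline described above. It assumes a Databricks-style environment where Delta Lake is available; the storage paths, table schema, and column names (orders, order_ts, quantity, unit_price) are hypothetical, not taken from this posting.

```python
# Illustrative ETL sketch: ingest raw JSON, transform/enrich, write to Delta Lake.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

# Databricks notebooks provide a ready-made `spark` session; this line keeps
# the sketch self-contained outside that environment.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest: read semi-structured JSON from cloud storage.
raw = spark.read.json("/mnt/raw/orders/")

# Transform: fix types, derive columns, drop bad records.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("total", F.col("quantity") * F.col("unit_price"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write a partitioned Delta table for downstream analytics and reporting.
(orders.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("/mnt/curated/orders"))
```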

Required Qualifications:
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in big data engineering, preferably in a cloud environment.
  • Strong hands-on experience with PySpark and Apache Spark.
  • Proficient in working with Databricks notebooks, workflows, and Delta Lake.
  • Experience with one or more cloud platforms (Azure, AWS, or Google Cloud Platform) and related services such as Azure Data Factory or AWS Glue.
  • Solid understanding of data modeling, data warehousing, and distributed computing.
  • Proficient in SQL and in performance tuning of queries; a short tuning example follows this list.
  • Experience with version control tools (e.g., Git) and CI/CD practices.
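To illustrate the SQL performance-tuning skill listed above in a Spark context, here is a small hedged example. The tables (orders, customers) and their columns are hypothetical; the BROADCAST hint and the filter on a partition column are standard Spark SQL techniques, shown here only as a sketch.

```python
# Illustrative Spark SQL tuning sketch (hypothetical tables/columns):
# filter on the partition column so Spark scans only the needed partitions,
# and broadcast the small dimension table to avoid a shuffle-heavy join.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_tuning_demo").getOrCreate()

top_customers = spark.sql("""
    SELECT /*+ BROADCAST(c) */
           c.customer_name,
           SUM(o.total) AS revenue
    FROM   orders o
    JOIN   customers c ON o.customer_id = c.customer_id
    WHERE  o.order_date >= DATE '2024-01-01'   -- enables partition pruning
    GROUP BY c.customer_name
    ORDER BY revenue DESC
    LIMIT  10
""")

# Inspect the physical plan to confirm the broadcast join and the pruned scan.
top_customers.explain()
```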
Regards,
Radiantze Inc