AWS/Spark Data Engineer

Overview

Remote
Depends on Experience
Contract - Independent
Contract - W2
No Travel Required

Skills

Algorithms
Amazon Redshift
Amazon S3
Amazon Web Services
Analytical Skill
Analytics
Apache Hadoop
Apache Hive
Apache Spark
Application Development
Artificial Intelligence
Big Data
Cloud Computing
Collaboration
Communication
Computer Science
Conflict Resolution
Data Engineering
Data Manipulation
Data Modeling
Data Processing
Data Quality
Data Storage
Data Warehouse
Database
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
Information Technology
Machine Learning (ML)
NoSQL
Problem Solving
PySpark
PyTorch
Python
SQL
Scalability
Scripting
TensorFlow
scikit-learn

Job Details

Job Description:

We are seeking a highly skilled and experienced AWS/Spark Data & AI Engineer with a strong background in Python and artificial intelligence to join our team. The ideal candidate will be responsible for designing, developing, and deploying scalable data pipelines and AI solutions on the AWS cloud platform. The role requires a deep understanding of big data technologies and machine learning concepts, along with a proven ability to use Python to build robust, efficient data and AI applications.

Responsibilities:

  • Design, develop, and maintain large-scale, distributed data pipelines using Apache Spark, PySpark, and AWS services such as Glue, EMR, S3, and Redshift.

  • Implement ETL (Extract, Transform, Load) processes to ingest, transform, and load data from various sources into data lakes and data warehouses (a brief illustrative sketch follows this list).

  • Develop, train, and deploy machine learning and AI models using Python and relevant libraries (e.g., scikit-learn, TensorFlow, PyTorch).

  • Collaborate with data scientists and business stakeholders to understand data requirements and translate them into technical solutions.

  • Optimize and fine-tune Spark jobs and other data processing applications for performance, scalability, and cost-efficiency.

  • Ensure data quality, integrity, and security across all data processing and storage systems.

  • Troubleshoot and resolve issues related to data pipelines, Spark jobs, and AWS infrastructure.

  • Stay up to date with the latest trends and technologies in big data, cloud computing, and artificial intelligence.

  • Participate in code reviews and contribute to a culture of engineering excellence and best practices.
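
For a concrete flavor of the work, below is a minimal, illustrative PySpark sketch of the kind of ETL pipeline these responsibilities describe. The bucket paths, column names, and schema are hypothetical placeholders, not details of any actual project:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical S3 locations -- placeholders only.
    SOURCE_PATH = "s3://example-raw-bucket/events/"
    TARGET_PATH = "s3://example-curated-bucket/events_clean/"

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Extract: ingest raw JSON events from the data lake.
    events = spark.read.json(SOURCE_PATH)

    # Transform: basic cleansing and enrichment
    # (event_id and event_ts are assumed, illustrative column names).
    clean = (
        events
        .dropDuplicates(["event_id"])                     # enforce uniqueness
        .filter(F.col("event_ts").isNotNull())            # drop malformed rows
        .withColumn("event_date", F.to_date("event_ts"))  # derive partition key
    )

    # Load: write partitioned Parquet for downstream warehousing
    # (e.g., queryable from Redshift via Spectrum).
    (
        clean.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet(TARGET_PATH)
    )

    spark.stop()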

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

  • Proven professional experience with AWS cloud services, particularly those related to data engineering (S3, Glue, EMR, Redshift, Lambda).

  • Extensive experience with Apache Spark and PySpark for big data processing and analytics.

  • Advanced proficiency in Python for data manipulation, scripting, and application development.

  • Strong understanding of AI/machine learning concepts and algorithms, with practical experience building and deploying ML models.

  • Experience with big data frameworks and technologies (e.g., Hadoop, Hive).

  • Solid knowledge of data warehousing concepts, data modeling, and ETL processes.

  • Proficiency in SQL and working with relational and NoSQL databases.

  • Excellent problem-solving, analytical, and communication skills.

  • Ability to work both independently and collaboratively in a fast-paced environment.


About Isoftech Inc