Sr. Data Engineer - Charlotte, NC (onsite). Please share profiles with 15+ years of experience. Must have strong experience in PySpark and Python; SQL for data extraction, transformation, and loading (ETL); AWS cloud services (e.g., S3, EMR, Lambda, Glue); Object-Oriented Programming (OOP); AI tools; Git; CI/CD pipelines; and Agile methodologies.


Keylent
Job Details
Skills
- Agile
- Amazon Redshift
- Amazon S3
- Amazon Web Services
- Artificial Intelligence
- Backend Development
- Cloud Computing
- Collaboration
- Communication
- Conflict Resolution
- Continuous Delivery
- Continuous Integration
- Data Engineering
- Data Extraction
- Data Quality
- Data Warehouse
- Design Patterns
- Distributed Computing
- Electronic Health Record (EHR)
- Extract, Transform, Load (ETL)
- Git
- Management
- Object-Oriented Programming
- Performance Tuning
- Problem Solving
- PySpark
- Python
- SQL
- Version Control
- Workflow
Summary
Sr. Data Engineer - onsite in Charlotte, NC
Job Responsibilities
Key Responsibilities:
Design, develop, and optimize large-scale data pipelines using PySpark and Python.
Implement and adhere to best practices in object-oriented programming to build reusable, maintainable code.
Write advanced SQL queries for data extraction, transformation, and loading (ETL).
Collaborate closely with data scientists, analysts, and stakeholders to gather requirements and translate them into technical solutions.
Troubleshoot data-related issues and resolve them in a timely and accurate manner.
Leverage AWS cloud services (e.g., S3, EMR, Lambda, Glue) to build and manage cloud-native data workflows (preferred).
Participate in code reviews, data quality checks, and performance tuning of data jobs.
Required Skills & Qualifications:
3-6 years of relevant experience in a data engineering or backend development role.
Strong hands-on experience with PySpark and Python, especially in designing and implementing scalable data transformations.
Solid understanding of Object-Oriented Programming (OOP) principles and design patterns.
Proficient in SQL, with the ability to write complex queries and optimize performance.
Strong problem-solving skills and the ability to troubleshoot complex data issues independently.
Excellent communication and collaboration skills.
Hands-on experience with AI Tools.
Preferred Qualifications (Nice to Have):
Experience working with AWS cloud ecosystem (S3, Glue, EMR, Redshift, Lambda, etc.).
Exposure to data warehousing concepts, distributed computing, and performance tuning.
Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
Exposure to AI tools and hands-on experience building AI applications.
Regards
- Dice Id: 10423210A
- Position Id: 8907866
- Posted 1 hour ago
Company Info
About Keylent
We have been involved with the industry for over two decades and have seen its ups and downs. We have weathered bad times and enjoyed good times by putting our clients' needs ahead of our own, and we continue to do the same.
We take great care of our Talent Acquisition and Administrative staff, who in turn put in their best work to fulfill the needs of our Consultants and Clients.
Our Clients and our Consultants have a variety of choices and we are thankful that they have chosen Keylent.
Careers

