AWS DATA ENGINEER WITH HEAVY POSTGRESQL, NOSQL, AND AZURE DATA FACTORY EXPERIENCE | REMOTE ROLE

Overview

Remote
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)

Skills

data engineer
postgresql
amazon

Job Details

Hi,
Hope you are doing well.
I have an urgent opening for an AWS Data Engineer; this is a remote role.

Position Type: Contract
Location: Remote, United States
Interview: Video

Description:

POSITION SUMMARY:
The Cloud Data Engineer / PostgreSQL & NoSQL Specialist is a hands-on, cloud-focused technical role that sits at the intersection of cloud database administration and data engineering. This position plays a critical role in managing, optimizing, and scaling PostgreSQL and NoSQL (DynamoDB) solutions on AWS, while also developing cloud-native ETL pipelines and data integration workflows. The ideal candidate has substantial experience administering Amazon RDS for PostgreSQL, strong hands-on PostgreSQL tuning and design skills, and a track record of building automated, scalable data pipelines using tools such as Azure Data Factory. This role is foundational in supporting both real-time and batch data workloads across relational and non-relational ecosystems in a modern cloud environment.
PRINCIPAL RESPONSIBILITIES:

  • Administer and optimize Amazon RDS for PostgreSQL instances, including configuration, backups, performance tuning, replication, and patch management.
  • Design and develop highly available and secure PostgreSQL environments, both cloud-native and hybrid.
  • Implement and support DynamoDB-based NoSQL solutions, including schema design, throughput optimization, and integration with AWS Lambda or Amazon Kinesis Data Streams.
  • Build robust, scalable data pipelines using Azure Data Factory to orchestrate data from on-premises and cloud-based sources.
  • Develop and maintain data ingestion processes from systems like Oracle, SQL Server, and flat files to AWS storage and compute services.
  • Build and maintain data lakes on AWS S3 with considerations for storage optimization, performance, and security.
  • Automate and orchestrate ETL/ELT processes using a combination of Python, Shell scripts, and cloud-native tools.
  • Collaborate with teams on CI/CD automation, infrastructure as code (IaC) using tools like Terraform or CloudFormation, and Git-based workflows.
  • Integrate with modern business intelligence (BI) and analytics platforms, enabling efficient access to and reporting on structured and semi-structured data.
  • Participate in Agile ceremonies, including backlog grooming, sprint planning, and retrospective meetings.
  • Contribute to the definition and enforcement of best practices across data modeling, ETL design, monitoring, and operational support.

QUALIFICATIONS:

  • 5+ years of experience in data engineering and integration pipeline design, especially in cloud environments.
  • 5+ years of experience working with relational databases, including deep expertise in PostgreSQL.
  • 3+ years of RDS PostgreSQL administration experience, with emphasis on performance tuning, monitoring, and security.
  • 3+ years of hands-on experience with AWS services, particularly RDS, S3, Lambda, Kinesis, CloudWatch, IAM, and DynamoDB.
  • 3+ years of experience building ETL/ELT solutions using Azure Data Factory, with pipelines involving both cloud and on-prem sources.
  • Strong skills in SQL, including writing optimized, scalable queries.
  • Working knowledge of Python or Java for automation and pipeline development.
  • Familiarity with NoSQL systems (e.g., DynamoDB, MarkLogic); DynamoDB preferred.
  • Experience with monitoring, logging, and troubleshooting data jobs in production.
  • Exposure to DevOps, CI/CD tools, and source control platforms like Git/GitHub.
  • Understanding of data security best practices, including encryption, IAM, and audit logging.
  • Understanding of database architecture and performance implications required.
  • Experience integrating business intelligence (BI) applications such as Power BI, Qlik Sense, or Tableau.
  • Experience with Machine Learning and Artificial Intelligence is preferred.
  • A good understanding of Data Virtualization technologies, such as Denodo, is preferred.
  • Exposure to Snowflake, dbt, or other modern warehousing and transformation tools.
  • Ability to multi-task effectively, work collaboratively as part of an Agile Team, and guide junior engineers.
  • Excellent written and verbal communication skills, sense of ownership, urgency, and drive.

MINIMUM QUALIFICATIONS:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, Statistics, or a related field (foreign equivalent accepted).
  • 3+ years of hands-on experience with AWS services for data and analytics (e.g., RDS for PostgreSQL, S3, Lambda, DynamoDB).
  • 3+ years of hands-on experience with Azure Data Factory.
  • 3+ years of experience with data ingestion, extraction, and integration pipelines.
  • 3+ years of experience with Python or Java for data engineering and automation tasks.

  • 3+ years of experience writing efficient SQL queries, including performance tuning.

Regards,

Nitin Gupta
Team Lead-Recruitment
ShiftCode Analytics Inc.
5118 Sylvester Loop
Tampa, Florida 33610
Direct:
Email:
