Job Title: Data Engineer (AWS & DynamoDB)
Location: Tampa, FL (Onsite Only; Local to FL)
Experience: 12+ Years
Job Summary
We are looking for a highly skilled Data Engineer with strong hands-on experience in AWS services and Amazon DynamoDB. The ideal candidate will design, build, and optimize scalable data pipelines and data stores, and will work closely with application, analytics, and cloud teams to support high-performance, data-driven systems.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL/ELT workflows on AWS.
Build and manage data solutions using Amazon DynamoDB, including schema design, indexing, partitioning, and performance tuning.
Work with AWS services such as S3, Glue, Lambda, EMR, Redshift, Kinesis, and CloudWatch.
Optimize DynamoDB read/write capacity, GSIs/LSIs, TTL, and cost efficiency.
Implement data modeling best practices for NoSQL and hybrid architectures.
Develop data processing solutions using Python, SQL, and/or PySpark.
Ensure data quality, reliability, security, and scalability across platforms.
Collaborate with DevOps teams on CI/CD pipelines, automation, and monitoring.
Troubleshoot and resolve performance, data integrity, and availability issues.
Document data architectures, pipelines, and operational procedures.
Required Skills
Strong experience as a Data Engineer in AWS environments.
Deep hands-on expertise with Amazon DynamoDB.
Strong knowledge of AWS core services: S3, Lambda, Glue, IAM, VPC.
Experience with Python and SQL for data processing.
Solid understanding of NoSQL data modeling and distributed systems.
Experience with ETL/ELT frameworks and large-scale data processing.
Familiarity with monitoring, logging, and performance tuning in AWS.
Preferred / Nice to Have
Experience with streaming data (Kinesis, Kafka).
Exposure to Redshift, Athena, or Snowflake.
AWS certifications (e.g., AWS Certified Data Analytics - Specialty).
Experience with Infrastructure as Code (Terraform, CloudFormation).