Skills
- Expert-level proficiency in designing, developing, and implementing data engineering solutions within the AWS cloud ecosystem, including Lambda (Python), API Gateway, and S3, for large-scale data processing and transformation.
- Deep experience building and optimizing ETL/ELT data pipelines using AWS services such as Glue, Athena, Step Functions, and DynamoDB, enabling scalable ingestion, cleansing, and integration of diverse datasets.
- Demonstrated ability to perform complex data analysis to identify trends, anomalies, and opportunities, and to turn analytical findings into engineered solutions that drive measurable business impact.
- Advanced expertise in data modeling, schema design, and query optimization for both NoSQL (DynamoDB) and relational systems (SQL Server via pymssql), ensuring performance and reliability in analytical workloads.
- Skilled in developing analytics-ready datasets and enabling data-driven decision-making through integration with QuickSight, Athena, and downstream analytics environments.
- Proven experience implementing monitoring, observability, and performance tuning for data systems using CloudWatch, log aggregation tools, and event-driven frameworks.
- Strong understanding of data governance, quality assurance, and compliance, including HIPAA, PHI, and PII standards, ensuring security and trust in all data handling processes.
- Demonstrated mastery of test-driven data development (TDDD), implementing automated validation and regression frameworks to ensure the accuracy and integrity of data solutions.
- Advanced proficiency with CI/CD pipelines, infrastructure as code (IaC), and GitLab for version-controlled data engineering deployments.
- Skilled in designing and deploying REST-based APIs and data access layers that make analytical results and datasets accessible to other systems and applications.
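To give a flavor of the Lambda/API Gateway work listed above, here is a minimal sketch of an API Gateway proxy-integration handler. The event shape follows AWS's proxy-integration format, but the `dataset` parameter and the response body are purely hypothetical; a real handler would typically go on to call S3 or DynamoDB via boto3.

```python
import json


def lambda_handler(event, context):
    """Minimal API Gateway proxy-integration handler (illustrative only).

    Reads a hypothetical `dataset` query parameter and returns a JSON
    response in the statusCode/headers/body shape API Gateway expects.
    """
    params = event.get("queryStringParameters") or {}
    dataset = params.get("dataset", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"dataset": dataset, "status": "ok"}),
    }
```

Because the handler is a plain function, it can be exercised locally with a fake event before it is ever deployed.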
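The DynamoDB modeling item above often comes down to composite-key design. A minimal sketch, assuming a hypothetical single-table layout keyed by patient and sorted by visit date (all entity names are illustrative, not from the posting):

```python
def patient_record_keys(patient_id: str, visit_date: str) -> dict:
    """Build composite keys for a hypothetical single-table design.

    Partitioning on the patient groups all of a patient's items together;
    prefixing the sort key with VISIT# and an ISO date enables efficient
    begins_with / between range queries over visits.
    """
    return {
        "PK": f"PATIENT#{patient_id}",
        "SK": f"VISIT#{visit_date}",
    }
```

The same key-builder functions are then reused by writers and readers, so the access patterns stay consistent across the codebase.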
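For the HIPAA/PHI/PII governance item, one common building block is redacting identifiers before data crosses a trust boundary. A minimal sketch, assuming a hypothetical masking rule for US-style SSNs (real pipelines would apply a broader, policy-driven rule set):

```python
import re


def mask_ssn(text: str) -> str:
    """Redact US-style SSNs (a hypothetical PII rule), keeping only the
    last four digits so records remain matchable for support workflows."""
    return re.sub(r"\b\d{3}-\d{2}-(\d{4})\b", r"***-**-\1", text)
```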
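The TDDD item above typically means asserting on pipeline output after every run. A minimal sketch of a row-level validation step that a regression suite could assert against (the field names are illustrative):

```python
def validate_rows(rows, required=("id", "amount")):
    """Split rows into (valid, invalid) by presence of required fields.

    Invalid rows are kept rather than dropped silently, so a test or a
    dead-letter step can inspect exactly what failed and why.
    """
    valid, invalid = [], []
    for row in rows:
        if all(row.get(field) is not None for field in required):
            valid.append(row)
        else:
            invalid.append(row)
    return valid, invalid
```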