Overview
Work Setting: Remote
Compensation: Based on experience
Employment Type: Full Time
Skills
Data Engineer, Database Engineer, ETL, AWS, SQL, Python, AWS Glue, Lambda, Step Functions, DMS, Athena, SSIS, SSRS, SSAS
Job Details
Role: Database Engineer
Location: Remote
Duration: Full-time
Current Project: CDM
- Build new pipelines with API integrations.
- Develop PySpark jobs to efficiently pull data from APIs.
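As a rough illustration of the second bullet, the sketch below pulls paginated records from a REST API in plain Python before handing them to Spark. The `{"items": ..., "has_next": ...}` response envelope and the `page` query parameter are assumptions for illustration, not a known API contract for this project.

```python
# Hypothetical sketch: pull paginated records from a REST API, then hand them
# to Spark. The endpoint shape and page parameter are illustrative assumptions.
import json
import urllib.request
from typing import Callable, Iterator

def fetch_pages(fetch: Callable[[int], dict]) -> Iterator[dict]:
    """Yield records page by page until the API reports no next page.

    `fetch(page)` must return a dict like {"items": [...], "has_next": bool};
    that envelope shape is an assumption, not a documented contract.
    """
    page = 0
    while True:
        body = fetch(page)
        yield from body.get("items", [])
        if not body.get("has_next"):
            break
        page += 1

def http_fetch(base_url: str) -> Callable[[int], dict]:
    """Real fetcher: GET {base_url}?page=N and parse the JSON body."""
    def fetch(page: int) -> dict:
        with urllib.request.urlopen(f"{base_url}?page={page}") as resp:
            return json.load(resp)
    return fetch

# In the actual Glue/PySpark job, the collected records would feed Spark, e.g.:
#   df = spark.createDataFrame(list(fetch_pages(http_fetch(api_url))))
```

Keeping the pagination loop separate from HTTP makes it easy to exercise with a stubbed fetcher in unit tests, which matters for the PyTest work described later in the posting.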
Position Summary:
- This role focuses on the design, development, optimization, and maintenance of scalable, secure, and high-performance databases and data pipelines. You will work cross-functionally to support both operational and analytical data initiatives.
Key Responsibilities:
- Manage daily database operations (monitoring, backups, troubleshooting, performance tuning).
- Design, build, and maintain ETL pipelines using AWS Glue, DMS, Lambda, Step Functions, and Athena.
- Develop and support:
- SSIS packages and operational data flows.
- Custom reporting solutions and normalized data models.
- Optimize ETL processes and SQL performance.
- Write and optimize T-SQL code, stored procedures, and functions.
- Create data migration scripts and deliver high-quality database solutions.
- Interpret and refine business reporting requirements.
- Provide analytical insights and real-time issue resolution.
- Automate recurring processes and maintain thorough documentation.
- Support testing, deployment, and CI/CD integration.
- Contribute to SSIS, SSRS, SSAS, and SQL development efforts.
- Participate in code reviews and inter-team collaboration.
- Occasionally travel or attend in-person sessions as needed.
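The Glue/Step Functions/Athena pipeline work listed above can be sketched as an Amazon States Language definition built in Python. The job name, query, and workgroup below are placeholders, not real resources; the `glue:startJobRun.sync` and `athena:startQueryExecution.sync` service-integration ARNs are standard Step Functions patterns.

```python
# Hedged sketch: a minimal Amazon States Language (ASL) definition that runs a
# Glue job and then an Athena validation query. Resource names are placeholders.
import json

def etl_state_machine(glue_job: str, athena_query: str) -> str:
    """Return an ASL JSON document: Glue job -> Athena query -> done."""
    definition = {
        "Comment": "ETL: run Glue job, then validate output with Athena",
        "StartAt": "RunGlueJob",
        "States": {
            "RunGlueJob": {
                "Type": "Task",
                # The .sync suffix makes Step Functions wait for completion
                "Resource": "arn:aws:states:::glue:startJobRun.sync",
                "Parameters": {"JobName": glue_job},
                "Next": "ValidateWithAthena",
            },
            "ValidateWithAthena": {
                "Type": "Task",
                "Resource": "arn:aws:states:::athena:startQueryExecution.sync",
                "Parameters": {
                    "QueryString": athena_query,
                    "WorkGroup": "primary",  # placeholder workgroup
                },
                "End": True,
            },
        },
    }
    return json.dumps(definition, indent=2)
```

The returned JSON would be passed to `states:CreateStateMachine` (for example via boto3) in an actual deployment.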
Education & Experience Required:
- Bachelor's degree in Computer Science, Engineering, or a related field, OR 4+ years of equivalent experience.
- 4+ years of experience with:
- Data conversion, integration, and migration
- Database modeling, monitoring, and performance tuning
- ETL design and execution
- Database backup, recovery, and security management
Proficiency in:
- SQL, T-SQL
- Python, R, or PowerShell
- Microsoft SQL Server stack (SSIS, SSRS, SSAS, TDE)
- High concurrency and large-scale performance tuning
Experience with:
- Stored procedures and handling large data sets
- CI/CD practices and testable code development
- HTML, JavaScript, Power BI, and Informatica IICS (preferred)
Soft Skills:
- Strong analytical, debugging, and problem-solving abilities
- Ability to manage multiple projects independently
- Detail-oriented with a collaborative, high-integrity work ethic
- Resilient in fast-paced, results-driven environments
First 30 Days:
- Understand the business model and internal stakeholders.
- Learn how data supports key initiatives.
- Assess the maturity of the current data engineering environment.
Next 30 Days:
- Dive into ETL processes built in AWS Glue.
- Review and enhance the current codebase with a focus on:
- Testing: Improve unit testing using PyTest.
- Monitoring: Enhance observability using Shell scripting and CloudWatch.
- Integration: Address challenges with AWS CLI.
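As a sketch of the PyTest improvement called out above, the example below shows a small, testable transform alongside the kind of tests PyTest would discover. `normalize_record` is a made-up illustrative function, not part of the existing codebase.

```python
# Illustrative sketch of making an ETL transform unit-testable with PyTest.
# `normalize_record` is a hypothetical example transform, not an existing one.

def normalize_record(raw: dict) -> dict:
    """Coerce the id field to int, trim/lowercase the name, drop other keys."""
    return {
        "id": int(raw["id"]),
        "name": str(raw.get("name", "")).strip().lower(),
    }

# PyTest auto-discovers test_* functions; plain assert statements are enough.
def test_normalize_trims_and_lowercases():
    assert normalize_record({"id": "7", "name": "  Alice "}) == {
        "id": 7,
        "name": "alice",
    }

def test_normalize_defaults_missing_name():
    assert normalize_record({"id": 1}) == {"id": 1, "name": ""}
```

Running `pytest` against a file like this gives fast, isolated coverage of transform logic without touching Glue or live data sources.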
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.