Overview
Location: Remote
Compensation: Depends on Experience
Employment Type: Contract - W2
Skills: Data Engineering
Job Details
The ideal candidate will work collaboratively with data architects, analysts, software engineers, and business stakeholders to design, build, and optimize data pipelines, data models, and scalable data platforms. This role involves maintaining and enhancing data ingestion, transformation, and storage processes with a strong focus on performance, reliability, and data quality.
Key Responsibilities:
- Collaborate with architects, analysts, and development teams to understand data requirements and translate them into technical solutions.
- Design, develop, and maintain scalable ETL/ELT pipelines and data workflows for structured and unstructured data.
- Build and optimize data models, data lakes, and data warehouses to support reporting, analytics, and machine learning use cases.
- Troubleshoot and resolve complex data pipeline and performance issues; implement long-term solutions and improvements.
- Create and maintain technical documentation, data dictionaries, and pipeline monitoring dashboards.
- Research and integrate new data engineering tools, cloud services, and automation solutions to improve scalability and efficiency.
- Ensure data security, governance, and quality standards are followed across all data environments.
- Work closely with platform and DevOps teams to deploy, monitor, and scale data infrastructure in cloud or hybrid environments.
Minimum Requirements:
- Education: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related technical field (or equivalent experience).
- Experience: 1-5 years of hands-on experience designing, building, and supporting data pipelines or data platforms in a production environment.
Technical Skills:
- Programming/Scripting: Python, SQL, Scala, or Java (Python preferred)
- Experience with ETL/ELT tools or frameworks (Airflow, dbt, Informatica, Matillion, Glue, etc.)
- Experience with cloud data services: AWS (Glue, Redshift, S3), Azure (Data Factory, Synapse), or Google Cloud Platform (BigQuery, Dataflow)
- Strong SQL experience: query optimization, data modeling, stored procedures
- Experience with streaming or messaging technologies (Kafka, Kinesis, Pub/Sub, etc.)
- Familiarity with version control (Git) and CI/CD workflows
- Experience working with relational and/or NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.)
Preferred Qualifications:
- Experience with data lake or lakehouse architectures (Delta Lake, Iceberg, Hudi)
- Experience with Spark, Databricks, Snowflake, or similar platforms
- Cloud or data certifications (AWS, Azure, Google Cloud Platform, Databricks, Snowflake)
- Strong analytical and problem-solving skills
- Ability to work effectively in a collaborative, Agile team environment