Junior Data Engineer

Signitives
Job Details
Skills
- ADF (Azure Data Factory)
- Amazon Web Services
- Continuous Delivery
- Databricks
- Data Modeling
- Data Warehouse
- ELT
- Docker
- Data Quality
- Provisioning
- Terraform
- Jenkins
- Communication
- Extract, Transform, Load
- Machine Learning Operations (MLOps)
Summary
Job Title: Data Engineer (2+ Years Experience)
Location: Hybrid / Onsite / Remote
Experience Required: 2+ Years
We are seeking a skilled and motivated Data Engineer with 2+ years of experience in building scalable data pipelines, ETL workflows, and cloud-based data platforms. The ideal candidate will have hands-on experience with modern data engineering tools, distributed processing frameworks, and real-time streaming technologies, along with exposure to cloud ecosystems such as AWS, Azure, or Google Cloud Platform.
Key Responsibilities
- Design, develop, and optimize scalable data pipelines for batch and real-time processing.
- Build and maintain ETL/ELT workflows using tools like Databricks, Apache Spark, and dbt.
- Develop and manage data lakehouse architectures using Delta Lake, Iceberg, or similar technologies.
- Implement real-time data streaming solutions using Kafka, Kinesis, Flink, or Event Hubs.
- Work with cloud platforms (AWS, Azure, or Google Cloud Platform) to build robust and cost-efficient data solutions.
- Design and maintain data warehouses such as Snowflake, Redshift, or Synapse.
- Collaborate with data scientists and analysts to support data modeling, feature engineering, and analytics.
- Implement data orchestration using Airflow, Azure Data Factory, or similar tools.
- Ensure data quality, governance, and security using monitoring and validation frameworks.
- Automate infrastructure provisioning and deployments using Terraform and CI/CD pipelines.
Required Skills & Qualifications
- 2+ years of experience in Data Engineering or a related field.
- Strong programming skills in Python and SQL.
- Hands-on experience with Apache Spark / PySpark.
- Experience with ETL tools and workflow orchestration (Airflow, dbt, ADF, Glue).
- Familiarity with real-time streaming tools like Kafka, Kinesis, or Event Hubs.
- Experience working with cloud platforms: AWS, Azure, or Google Cloud Platform.
- Knowledge of data warehousing solutions such as Snowflake, Redshift, or Synapse.
- Understanding of data modeling, data lakes, and lakehouse architectures.
Preferred Qualifications
- Experience with MLOps tools such as MLflow, SageMaker, or Azure ML.
- Familiarity with containerization (Docker) and CI/CD tools (Jenkins, GitHub Actions).
- Exposure to data governance tools like Purview, DataHub, or Amundsen.
- Experience with BI tools such as Tableau, Power BI, or Looker.
- Knowledge of monitoring tools like Grafana, Datadog, or CloudWatch.
Education
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
Nice to Have
- Experience working in Agile environments.
- Strong problem-solving and communication skills.
- Ability to work in cross-functional teams and deliver business-driven data solutions.
- Dice Id: 91143212
- Position Id: 8916274
Company Info
About Signitives
At Signitives, we offer a range of engagement models that help our clients reach their business goals and get the most value from our product engineering services.
Our engagement models are tailored to each client's unique needs and are designed to be flexible and adaptable.
Our operational methodologies are built on industry-proven frameworks and best practices, helping us plan, execute, and monitor projects in a way that maximizes efficiency and minimizes risk.

