Hiring: W2 Candidates Only
Visa: Open to any visa type with valid work authorization in the USA
Summary
A Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and architectures that enable analytics, reporting, and data-driven decision-making. This role ensures data availability, quality, security, and performance across enterprise systems while supporting business intelligence and advanced analytics initiatives.
Key Responsibilities
Design, develop, and maintain robust ETL/ELT pipelines for structured and unstructured data.
Integrate data from multiple internal and external sources into centralized data platforms.
Optimize data storage, processing, and retrieval for performance, scalability, and cost efficiency.
Ensure data quality, consistency, accuracy, and integrity across all data pipelines.
Collaborate with data scientists, analysts, and business teams to support analytics and reporting needs.
Implement data governance, security, and compliance measures to protect sensitive information.
Monitor data pipeline performance, identify bottlenecks, and resolve processing failures.
Develop and maintain data models, schemas, and metadata repositories.
Document data architectures, workflows, standards, and operational procedures.
Support business intelligence platforms and reporting tools with reliable, well-structured datasets.
Automate data ingestion, transformation, and validation processes.
Evaluate, test, and adopt new data technologies, frameworks, and best practices.
Perform capacity planning and ensure scalability of data infrastructure.
Troubleshoot data issues and partner with engineering teams to resolve system-level problems.
Contribute to data strategy initiatives and enable data-driven decision-making across the organization.
Qualifications
Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field.
3-5 years of hands-on experience in data engineering or data infrastructure roles.
Proficiency in SQL and Python for data processing and pipeline development.
Experience with big data tools and frameworks such as Hadoop, Spark, or similar technologies.
Hands-on experience with cloud data platforms such as AWS, Azure, or Google Cloud Platform (GCP).
Strong understanding of data architecture, ETL processes, and data integration patterns.
Preferred Skills
Knowledge of data warehousing concepts and business intelligence (BI) tools.
Experience with data modeling, dimensional modeling, and schema design.
Familiarity with real-time or streaming data technologies (Kafka, Kinesis, Spark Streaming).
Experience with orchestration tools (Airflow, Azure Data Factory, or similar).
Strong analytical, troubleshooting, and problem-solving skills.
Excellent communication and collaboration abilities.