Data Engineer (3–4 Years Experience)
Experience Level: Mid-level (3–4 years)
Company: Aaratech Inc
Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.
About Aaratech Inc
Aaratech Inc is a specialized IT consulting and staffing company that places elite engineering talent into high-impact roles at leading U.S. organizations. We focus on modern technologies across cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.
Position Overview
We are seeking a Data Engineer with 3–4 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.
Key Responsibilities
- Design and develop scalable data pipelines to support batch and real-time processing
- Implement efficient ETL (extract, transform, load) processes using tools like Apache Spark and dbt
- Develop and optimize SQL queries for data analysis and warehousing
- Build and maintain data warehousing solutions on platforms like Snowflake or BigQuery
- Collaborate with business and technical teams to gather requirements and create accurate data models
- Write reusable and maintainable Python code for data ingestion, processing, and automation
- Ensure end-to-end data processing integrity, scalability, and performance
- Follow best practices for data governance, security, and compliance
Required Skills & Experience
- 3–4 years of experience in data engineering or a similar role
- Strong proficiency in SQL and Python
- Experience with ETL frameworks and building data pipelines
- Solid understanding of data warehousing concepts and architecture
- Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
- Proven experience in data modeling and schema design
- Exposure to data processing frameworks and performance optimization techniques
- Familiarity with cloud platforms like AWS, Google Cloud Platform, or Azure
Nice to Have
- Experience with streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to CI/CD practices in data development
- Prior work in consulting or multi-client environments
- Understanding of data quality frameworks and monitoring strategies