Job Overview:
o We are seeking an experienced Data Engineer to design, develop, and optimize scalable data platforms and pipelines supporting advanced analytics and enterprise reporting initiatives.
o The ideal candidate will have strong expertise in big data technologies, cloud platforms, and distributed data processing frameworks.
Key Responsibilities:
o Build and maintain scalable ETL/ELT pipelines and enterprise data solutions.
o Design and optimize big data processing workflows using the Spark/Hadoop ecosystem.
o Develop data integration, transformation, and aggregation frameworks.
o Write complex SQL queries and optimize large-scale data processing jobs.
o Support cloud-based data lake and data warehouse architectures.
o Collaborate with analytics, engineering, and business teams to deliver data-driven solutions.
o Ensure data quality, governance, scalability, and disaster recovery readiness.
Required Skills:
o Strong experience with Python, SQL, Spark, and Hadoop.
o Experience with Scala and/or Java.
o Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
o Expertise in distributed data processing and big data technologies.
o Experience building data solutions for analytics and machine learning initiatives.
o Familiarity with tools such as Databricks, Airflow, Kafka, Snowflake, or similar.
o Strong communication and collaboration skills.
Preferred Qualifications:
o Experience with real-time streaming/data ingestion frameworks.
o Exposure to modern lakehouse/data warehouse architectures.
o Experience in enterprise-scale data environments and performance optimization.