Job Details
Key Responsibilities
Design, develop, and maintain scalable big data solutions using Apache Spark and Scala.
Implement complex SQL queries for data transformation and analytics.
Develop and optimize Power BI dashboards for business reporting and visualization.
Collaborate with cross-functional teams to integrate data pipelines and reporting solutions.
Ensure data quality, security, and compliance across all systems.
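To illustrate the kind of transformation work the responsibilities above describe, here is a minimal Scala sketch of a grouped aggregation. It uses plain Scala collections so it runs without a cluster; in a real pipeline the same logic would target Spark's DataFrame/Dataset API (e.g., `df.groupBy("region").agg(sum("amount"))`). The `SaleRecord` schema and values are purely illustrative.

```scala
// Illustrative only: a group-and-sum transformation of the kind a Spark
// job performs, written against plain collections for self-containment.
case class SaleRecord(region: String, amount: Double) // hypothetical schema

object SalesRollup {
  // Total sales per region - analogous to a Spark SQL
  // GROUP BY region / SUM(amount) aggregation.
  def totalsByRegion(records: Seq[SaleRecord]): Map[String, Double] =
    records.groupBy(_.region).map { case (region, recs) =>
      region -> recs.map(_.amount).sum
    }

  def main(args: Array[String]): Unit = {
    val data = Seq(
      SaleRecord("EMEA", 100.0),
      SaleRecord("EMEA", 50.0),
      SaleRecord("APAC", 75.0)
    )
    // Sort for deterministic output order
    totalsByRegion(data).toSeq.sortBy(_._1).foreach(println)
  }
}
```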
Required Skills & Experience
Strong proficiency in Apache Spark, with hands-on experience in Scala.
Solid understanding of SQL for data manipulation and analysis.
Experience in Power BI for creating interactive reports and dashboards.
Familiarity with distributed computing concepts and big data ecosystems (Hadoop, Hive, etc.).
Ability to work with large datasets and optimize data workflows.
Highly Desirable
Spark Performance Tuning Expertise: Proven ability to optimize Spark jobs for efficiency and scalability.
Knowledge of cluster resource management and troubleshooting performance bottlenecks.
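Spark tuning of the kind described above typically begins with job configuration. The fragment below sketches a `spark-defaults.conf` with commonly adjusted settings; the keys are standard Spark properties, but the values are illustrative placeholders, not recommendations, since tuning depends on cluster size and workload.

```
# Illustrative spark-defaults.conf fragment - values are placeholders
spark.executor.memory           8g
spark.executor.cores            4
spark.sql.shuffle.partitions    200
spark.sql.adaptive.enabled      true
spark.serializer                org.apache.spark.serializer.KryoSerializer
```

In practice these are validated against the Spark UI (shuffle spill, task skew, GC time) rather than set once.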
Experience with Azure cloud services for big data solutions (e.g., Azure Data Lake, Azure Databricks, Synapse Analytics).
Exposure to other cloud platforms (AWS or Google Cloud Platform) is a plus.
Experience working in the financial domain or with ERP systems (e.g., SAP, Oracle ERP).
Understanding of compliance and regulatory requirements in financial data processing.
Additional Qualifications
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.