Job Duties - Key Responsibilities
· Design, develop, and maintain scalable backend services using Java (Advanced), Spring Boot, and Kafka
· Build RESTful APIs and a microservices architecture aligned with enterprise design principles
· Implement and optimize complex SQL queries, focusing on MS SQL Server performance and reliability
· Integrate Elasticsearch for search and analytics use cases
· Automate deployments and environment setups using PowerShell and support CI/CD readiness
· Use Bitbucket for version control and Jira for Agile tracking and sprint execution
· Collaborate with cross-functional teams including DevOps, QA, Product Owners, and Architects to deliver quality outcomes
· Participate in peer code reviews, sprint ceremonies, and architecture discussions
· Optionally contribute to:
  • Low-code platforms (e.g., Appian) and BRMS tools (e.g., Drools)
  • Frontend components using React or Angular
  • Cloud migration using AWS or OCI
  • Data engineering using PySpark, Hive, Hue, Impala, and HBase

Mandatory Skills

Backend & Microservices Development
· Advanced proficiency in Java and Spring Boot
· RESTful API and microservices architecture design
· Real-time streaming and asynchronous messaging using Apache Kafka

Database & Search
· Strong SQL development and performance tuning, especially on MS SQL Server
· Experience with Elasticsearch for scalable search implementations

DevOps & Automation
· Hands-on scripting with PowerShell for automation tasks
· Experience with Bitbucket (Git), Jira, and Agile development practices
· Understanding of CI/CD pipelines, code reviews, and build processes

AI/ML Fundamentals
· Basic understanding of ML, LLMs, and Generative AI
· Prompt engineering fundamentals
· Expertise with AI-assisted development tools such as Claude Code, GitHub Copilot, etc.

Optional Skills (Nice to Have)
· Exposure to Appian or similar low-code platforms
· Experience with BRMS tools such as Drools
· Ability to support legacy systems using VBA
· Familiarity with Angular or React for full-stack capabilities
· Understanding of the SAFe Agile framework
· Experience with cloud platforms such as AWS or Oracle Cloud Infrastructure (OCI)
· Data processing skills with PySpark and tools in the Hadoop ecosystem (Hive, Hue, Impala, HBase)
· Experience developing on the Databricks platform, including workspaces, notebooks, and cluster management
Qualifications & Experience
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field
· 3-4 years of experience in software development, with a strong focus on backend services
· Proven ability to deliver reliable, secure, and scalable applications in distributed environments
· Strong debugging, analytical, and problem-solving skills
· Excellent verbal and written communication skills for working in collaborative, distributed teams

Core Competencies
· Passion for clean, modular, and scalable architecture
· Proactive and self-driven with a continuous improvement mindset
· Adaptable to evolving technologies and project needs
· Ability to mentor junior engineers and contribute to technical leadership