Locations: Richardson, TX / Bridgewater, NJ / Raleigh, NC / Charlotte, NC / Phoenix, AZ / Houston, TX
Required Skills and Experience
Core development experience in the stacks below:
Experience in Scala, Python, Spark, or Hadoop application development
Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, Kafka, Elasticsearch, Grafana, and the ELK stack
Experience in AI-powered intelligent data integration and transformation, and expertise in AI-powered data lakes
Proven experience in architecting and administering Snowflake, Azure, and Google Big Data platforms.
Strong understanding of cloud infrastructure, data integration, and analytics workflows.
Experience in categorizing, cataloging, cleansing, and normalizing datasets
Hands-on experience with platform monitoring, automation, and DevOps tools.
Experience in managing hybrid environments (on-prem and cloud).
Excellent communication and leadership skills.
Knowledge of data governance, security, and compliance frameworks.
Ability to build bespoke AI agents and multi-agent systems using Python and frameworks such as LangGraph and LangChain, delivering applications tailored to nuanced, real-world business needs.
Build AI-assisted workflows (classification, enrichment, response generation, decision support) and implement structured outputs, prompt pipelines, and reliability safeguards.
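As an illustration of the "structured outputs and reliability safeguards" requirement above, here is a minimal sketch in plain Python. It assumes a hypothetical classification workflow in which a model replies with JSON containing `category` and `confidence` fields (an assumed schema, not one from this posting), and shows a safeguard that falls back to a safe default when the reply is malformed or incomplete.

```python
import json

# Assumed schema for a hypothetical LLM classification reply.
REQUIRED_FIELDS = {"category", "confidence"}

def parse_classification(raw_reply: str) -> dict:
    """Validate a model's JSON reply; fall back to a safe default
    when the reply is malformed or missing required fields."""
    fallback = {"category": "unknown", "confidence": 0.0}
    try:
        data = json.loads(raw_reply)
    except json.JSONDecodeError:
        return fallback
    if not isinstance(data, dict) or not REQUIRED_FIELDS <= data.keys():
        return fallback
    return data

# Usage: a well-formed reply passes through; garbage gets the fallback.
print(parse_classification('{"category": "billing", "confidence": 0.92}'))
print(parse_classification("not json"))
```

In production such a safeguard would typically be one stage of a larger prompt pipeline (e.g. a retry with a repair prompt before falling back), but the validate-or-default pattern is the core of it.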
Preferred Skills and Experience
Sound knowledge of software engineering design patterns and practices
Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4j, NiFi, Airflow, or other DAG-based tools
Knowledge of and experience with cloud and containerization technologies: Azure, Kubernetes, OpenShift, and Docker
Planning and coordination skills
Experience and desire to work in a Global delivery environment.
Ability to work in a team in a diverse, multi-stakeholder environment.
A high degree of initiative and flexibility
High customer orientation
Excellent verbal and written communication skills