Overview
Hybrid
Depends on Experience
Full Time
Skills
Python
PySpark
Artificial Intelligence
Gen AI
Hadoop
Spark
Distributed data processing
CI/CD
Job Details
Must be local to TX
Core Skills: Python, PySpark, distributed data processing, and CI/CD; Gen AI exposure is a plus
Roles and responsibilities:
- Proficiency in Python, SQL, and AI/ML frameworks (Transformers, LangChain, OpenAI, Hugging Face, PyTorch, TensorFlow).
- Strong understanding of AI cost optimization strategies, including serverless inference, model distillation, quantization, and GPU efficiency tuning.
- Experience deploying AI models in cloud environments (AWS, Azure, Google Cloud Platform), including model orchestration and MLOps (Vertex AI, SageMaker, Azure ML).
- Track record of deploying and scaling AI solutions in production, ensuring reliability, latency optimization, and cost-effective serving.
- Strong analytical skills, problem-solving abilities, and experience working in cross-functional AI teams.
- Excellent communication and stakeholder management skills, with the ability to align AI initiatives with business impact and scalability.
- Hands-on experience with LLM fine-tuning, Retrieval-Augmented Generation (RAG), prompt engineering, and vector databases (Pinecone, Weaviate, FAISS, Milvus).
- Serve as an integral member of our Data Engineering team, responsible for the design and development of Big Data solutions
- Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop or Snowflake
- Responsible for delivering a data-as-a-service framework
- Responsible for moving all legacy workloads to the cloud platform
- Work with data scientists to build Client pipelines using heterogeneous sources and provide engineering services for data science applications
- Ensure automation through CI/CD across platforms, both in the cloud and on-premises
- Research and assess open-source technologies and components, and recommend and integrate them into the design and implementation
- Be the technical expert and mentor other team members on Big Data and Cloud Tech stacks
- Define needs around maintainability, testability, performance, security, quality, and usability for the data platform
- Drive implementation, consistent patterns, reusable components, and coding standards for data engineering processes
- Convert SAS-based pipelines into languages such as PySpark or Scala to execute on Hadoop and non-Hadoop ecosystems
- Tune Big Data applications on Hadoop and non-Hadoop platforms for optimal performance
- Evaluate new IT developments and evolving business requirements and recommend appropriate systems alternatives and/or enhancements to current systems by analyzing business processes, systems and industry standards.
- Supervise day-to-day staff management issues, including resource management, work allocation, mentoring/coaching and other duties and functions as assigned
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
Qualifications:
- 8+ years of total IT experience
- 5+ years of experience with Hadoop (Cloudera)/big data technologies
- Advanced knowledge of the Hadoop ecosystem and Big Data technologies
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
- Experience designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python
- Experience with Spark programming (PySpark, Scala, or Java)
- Expert-level experience building pipelines using Apache Spark
- Familiarity with core provider services from AWS, Azure, or Google Cloud Platform, preferably having supported deployments on one or more of these platforms
- Hands-on experience with Python/PySpark/Scala and basic libraries for machine learning is required
- Exposure to containerization and related technologies (e.g. Docker, Kubernetes)
- Exposure to aspects of DevOps (source control, continuous integration, deployments, etc.)
- Proficient in programming in Java or Python, with prior Apache Beam/Spark experience a plus
- System-level understanding: data structures, algorithms, distributed storage, and compute
- Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills
- Possess team management experience and have led a team of data engineers and analysts.
- Experience in Snowflake is a plus.
Education:
- Bachelor’s degree/University degree or equivalent experience