AI Tool Developer
Plano, TX
The AI Tool Developer will primarily be responsible for collaborating with internal teams and providing technical
support.
Responsibilities:
Develop an end-to-end AI model and processing pipeline that transforms natural language or
structured test specifications into executable JSON scripts used by the internal test automation tool.
Build prompt-driven, rule-augmented, or fine-tuned models using TMNA-approved AI platforms,
ensuring compliance with internal AI/ML Review Board standards. Create data ingestion pipelines and
training datasets from historical test cases, existing automation scripts, and test execution logs.
Design the mapping framework between spec semantics, automation actions, and the JSON schema.
Build validation utilities to automatically ensure script correctness, schema compliance, and
alignment with automation tool capabilities.
Integrate the AI generation pipeline into the existing test automation toolchain and CI-based
workflows.
Collaborate with automation SMEs to refine domain-specific rules, edge cases, and test
coverage requirements.
Drive continuous model improvement through error analysis, incremental fine-tuning, and user
feedback loops.
Maintain documentation, versioning, and traceability of AI-generated scripts.
Ensure responsible AI usage by aligning with TMNA governance, data handling, and compliance
requirements.
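As an illustration of the validation utilities described above, the following is a minimal sketch in Python using only the standard library. The script layout (a top-level "steps" list whose entries carry "action" and "params" fields) is hypothetical, since the posting does not define the internal JSON schema; a real utility would validate against the automation tool's actual schema.

```python
import json

# Hypothetical minimal schema for a generated test script:
# each step must have an "action" (string) and a "params" (object).
REQUIRED_STEP_KEYS = {"action": str, "params": dict}

def validate_script(raw: str) -> list:
    """Return a list of validation errors for an AI-generated JSON script.

    An empty list means the script passed these schema checks.
    """
    errors = []
    try:
        script = json.loads(raw)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: %s" % exc]
    steps = script.get("steps") if isinstance(script, dict) else None
    if not isinstance(steps, list):
        return ["top-level 'steps' must be a list"]
    for i, step in enumerate(steps):
        if not isinstance(step, dict):
            errors.append("step %d: must be an object" % i)
            continue
        for key, typ in REQUIRED_STEP_KEYS.items():
            if key not in step:
                errors.append("step %d: missing '%s'" % (i, key))
            elif not isinstance(step[key], typ):
                errors.append("step %d: '%s' must be %s" % (i, key, typ.__name__))
    return errors

# Example: the first step is valid, the second is missing "params".
sample = '{"steps": [{"action": "tap", "params": {"id": "ok_btn"}}, {"action": "wait"}]}'
print(validate_script(sample))  # → ["step 1: missing 'params'"]
```

In practice this kind of check would run as a gate in the CI workflow, rejecting generated scripts before they reach the automation tool.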
Skills Required:
5+ years of experience building AI/ML- or NLP-based solutions, including prompt engineering,
LLM-based workflows, or model fine-tuning.
Solid Python or similar development skills (data processing, model building, evaluation
pipelines).
Experience working with JSON schema design and automated test frameworks.
Familiarity with ML frameworks such as PyTorch, TensorFlow, HuggingFace, or similar libraries.
Experience building or integrating ML pipelines into production-quality tools (API services,
microservices, or batch systems).
Strong understanding of software testing concepts: test cases, assertions, conditions, flows,
automation logic.
Ability to collaborate with cross-functional engineering teams and translate ambiguous test
definitions into structured logic.
Educational Background:
Bachelor's Degree (or higher) in Computer Science, Management Information Systems, or a
related discipline, or equivalent professional work experience.
Added Bonus:
Experience with automated test frameworks used in embedded or multimedia systems.
Experience with model fine-tuning using enterprise datasets.
Background in building developer tooling, code generation, or compiler/AST-style
transformations.
Familiarity with Toyota LEAP/MM automation concepts or previous exposure to internal test
automation pipelines.
Experience implementing AI tools in enterprise environments requiring governance and model
approval.
**The description supplied above is not intended to be an exhaustive list of all job duties,
responsibilities and requirements. Duties, responsibilities, and requirements may change over time and
according to business need.