Senior QA Tester

Overview

On Site
Depends on Experience
Full Time

Skills

Agile
Software Quality Assurance
System Testing
TestRail
RESTful
Natural Language Processing
Quality Assurance
Machine Learning (ML)
JIRA
JSON
Debugging
Artificial Intelligence
Amazon Web Services
Confluence
Automated Testing
Machine Learning Operations (ML Ops)

Job Details

Roles and Responsibilities:

We are hiring QA Testers for the State of Maryland Innovations Team.

The Quality Assurance Consultant provides quality management for information systems, applying standard methodologies, techniques, and metrics to assure product quality across key quality-management activities. This individual is responsible for the following tasks:

  • Establish capable processes; monitor and control critical processes and product mechanisms for performance feedback; implement an effective root cause analysis and corrective action system; and drive continuous process improvement.
  • Provide strategic quality plans in targeted areas of the organization.
  • Provide QA strategies to ensure continuous production of products consistent with established industry standards, government regulations, and customer requirements.
  • Develop and implement life cycle and QA methodologies, implement QA metrics, and educate teams on their use.
  • Define and execute comprehensive QA strategies for AI-enabled software systems, including testing for model accuracy, bias, drift, and output consistency.
  • Design test cases for APIs or UIs that consume predictions from NLP models, classifiers, or AI assistants.
  • Validate data flows from ingestion pipelines through model inference and response rendering across multiple systems.
  • Partner with data engineers and scientists to verify pre-processing logic, validate predictions, and interpret edge-case outcomes.
  • Develop test cases and scenarios for model explainability (e.g., SHAP, LIME) and human-in-the-loop validation workflows.
  • Participate in agile sprint activities and act as QA lead for releases involving AI/ML features.
  • Perform database queries and SQL validation to confirm training and inference dataset consistency.
  • Maintain and enhance automated regression and integration test suites using tools like PyTest, Postman, Cypress, JMeter, or Selenium.
  • Support testing of user-facing AI features like chatbots, recommendations, smart prompts, or classification-driven workflows.
  • Conduct 508 accessibility, performance, and cross-browser testing for intelligent UI components.
  • Collaborate with developers and MLOps engineers to debug pipeline errors and track model prediction anomalies.
  • Monitor and test AI system behavior after model retraining, deployment, or feedback loop adjustments.

Minimum Qualifications:

Education: This position requires a bachelor's degree from an accredited college or university in Engineering, Computer Science, Information Systems, or a related discipline.

General Experience: The proposed candidate must have at least eight (8) years of information systems quality assurance experience.

Specialized Experience: The proposed candidate must have at least five (5) years of experience working with statistical methods and quality standards. This individual must have a working knowledge of QA processes and possess superior written and verbal communication skills.

  • AI/ML testing or data science coursework/certification is a plus.
  • At least eight (8) years of software quality assurance experience, with increasing responsibility in testing enterprise systems.
  • Minimum three (3) years working with or supporting projects involving AI/ML services or data science teams.
  • Experience testing AI/ML model integration in enterprise applications (e.g., validation of model inferences, confidence scores, and response behaviors).
  • Familiarity with ML model lifecycle, training/inference pipelines, and feedback loop workflows.
  • Hands-on experience testing RESTful APIs, data APIs, or AWS-hosted AI services (e.g., SageMaker).
  • Experience with automated test frameworks and performance testing tools (e.g., JMeter, PyTest, Selenium, Postman, Newman).
  • Strong skills in writing and executing SQL for test data validation and pre/post inference checks.
  • Experience with JSON-based payloads, OpenAPI/Swagger, and mock service tools.
  • Ability to triage and analyze AI prediction issues related to data quality, model logic, or system design.
  • Familiarity with ethical AI practices, including model bias testing, fairness, and transparency, is a plus.
  • Excellent communication skills for bridging technical and non-technical stakeholders around complex AI test cases.
  • Experience in Agile teams, working with tools like JIRA, Confluence, GitHub, TestRail, or similar.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.