Title – QA Test Engineer
Location – Alpharetta, GA
Duration – 8+ Months
Job Description:
Overall Purpose: Quality test role covering both automated and hands-on manual testing of assigned projects supporting various entities.
Roles & Responsibilities:
1) Review requirements; write and update test cases; troubleshoot defects; analyze test results.
2) Execute test cases, log defects, triage defects, and test on multiple mobile devices; report status to the Test Lead and Test Manager.
3) Act as sub-lead for specific functional areas of a project or multiple concurrent projects.
4) Participate in test planning and in testing activities on multiple devices for multiple projects.
5) Testing - review and analyze requirements, determine testing needs, write and review test cases, run test cases, troubleshoot, log and regress defects, and analyze test results; serve as the point of escalation for team members and work closely with project stakeholders.
*** On-site position in Alpharetta, GA 3-4 days/week.
*** HackerRank testing is likely.
TOP 5 SKILLS REQUIRED:
- Bachelor’s in Computer Science, Engineering, Data/Information Systems, or equivalent practical experience.
- 3+ years in QA automation or SDET-type work (adjust by level); 1+ year exposure to AI/LLM or ML-driven features is a plus.
- Strong test automation in Python and/or Java/TypeScript.
- We are a platform team testing APIs for high performance; automation will be the primary focus.
- Strong communication and analytical skills.
ADDITIONAL SKILLS REQUIRED:
- Hands-on with frameworks/tools such as: UI (Playwright/Cypress/Selenium); API (pytest + requests, Postman/Newman, REST Assured)
- CI/CD integration: Git, GitHub Actions/Jenkins/GitLab CI, test reporting, gating.
- Test design: equivalence partitioning, boundary testing, risk-based testing, defect triage.
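As a concrete illustration of the test-design techniques listed above, here is a minimal, self-contained sketch of boundary-value and equivalence-partition testing. The discount rule is hypothetical, used purely to show the technique:

```python
# Boundary-value / equivalence-partition sketch (hypothetical pricing rule).

def discount_rate(order_total: float) -> float:
    """Hypothetical rule: 0% under 100, 5% from 100 to 499.99, 10% at 500+."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total < 100:
        return 0.0
    if order_total < 500:
        return 0.05
    return 0.10

# One representative per equivalence class, plus values at each boundary.
CASES = [
    (0.0, 0.0),      # lower edge of the "no discount" partition
    (99.99, 0.0),    # just below the first boundary
    (100.0, 0.05),   # exactly on the first boundary
    (499.99, 0.05),  # just below the second boundary
    (500.0, 0.10),   # exactly on the second boundary
]

def run_cases() -> int:
    for total, expected in CASES:
        assert discount_rate(total) == expected, (total, expected)
    return len(CASES)
```

In a real suite these cases would typically live in a `pytest.mark.parametrize` table rather than a hand-rolled loop.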
AI-Specific Testing Competencies (Key)
- LLM/application behavior testing: validating correctness when outputs are probabilistic.
- Evaluation strategies: golden datasets, scoring rubrics, human-in-the-loop reviews.
- Non-determinism handling: statistical assertions, repeated runs, variance thresholds.
- Prompt and regression management: versioning prompts, detecting prompt drift, replay tests.
- RAG testing (if applicable): retrieval quality (recall/precision), grounding checks, citation validation, doc freshness.
- Safety & quality checks: hallucination detection, toxicity/PII leakage checks, policy compliance tests.
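The non-determinism handling called out above (statistical assertions, repeated runs, variance thresholds) can be sketched roughly as follows; the scoring function is a simulated stand-in for a real model call plus a rubric-based scorer:

```python
import random
import statistics

def score_model_output(prompt: str, rng: random.Random) -> float:
    """Stand-in for: call the model, then score its output against a rubric.
    Simulated here as a noisy score centered on 0.9 (hypothetical)."""
    return min(1.0, max(0.0, rng.gauss(0.9, 0.05)))

def assert_quality(prompt: str, runs: int = 30,
                   min_mean: float = 0.8, max_stdev: float = 0.15) -> dict:
    """Repeated runs plus statistical assertions instead of exact-match checks."""
    rng = random.Random(42)  # fixed seed keeps the harness itself reproducible
    scores = [score_model_output(prompt, rng) for _ in range(runs)]
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    assert mean >= min_mean, f"mean score {mean:.3f} below threshold {min_mean}"
    assert stdev <= max_stdev, f"output too unstable: stdev {stdev:.3f}"
    return {"mean": mean, "stdev": stdev}
```

The key idea: assert on distributions (mean and variance across repeated runs), not on any single probabilistic output.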
Data & Observability
- Ability to create and maintain test datasets (structured + unstructured), including edge cases.
- Familiarity with telemetry for AI systems:
  - logging prompts/outputs safely, traceability, correlation IDs
  - tools like OpenTelemetry, ELK/Splunk, Datadog/Grafana (any equivalent)
- Understanding of data privacy constraints (masking/redaction) and secure test data practices.
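A minimal sketch of the masking/redaction practice mentioned above, applied before prompts or outputs reach logs. The patterns are illustrative only; a production system should use a vetted PII-detection library:

```python
import re

# Illustrative patterns only (US-style SSN, email); not production-grade PII detection.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Mask PII before a prompt or model output is written to logs."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```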
API / Microservices / Cloud
- Comfortable testing distributed systems: microservices, async workflows, queues/events.
- Basic cloud proficiency (AWS/Azure/Google Cloud Platform) and containerization (Docker, optional Kubernetes).
Performance & Reliability Testing (AI-Aware)
- Load/performance testing for inference endpoints (latency, throughput, concurrency).
- Cost-aware testing (token usage, rate limits, fallbacks).
- Resilience tests: retries, circuit breakers, model timeouts, degraded-mode behavior.
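The resilience checks above (retries, timeouts, degraded-mode behavior) can be sketched with a tiny retry harness; the flaky endpoint below simulates a model that times out before recovering:

```python
import time

class TransientError(Exception):
    """Simulated transient failure (e.g. a model timeout)."""

def call_with_retries(fn, attempts: int = 3, backoff_s: float = 0.01):
    """Retry a flaky call with linear backoff; hypothetical harness helper."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except TransientError:
            if attempt == attempts:
                raise  # retry budget exhausted: surface the failure
            time.sleep(backoff_s * attempt)

def make_flaky_endpoint(fail_times: int):
    """Fake inference endpoint that times out `fail_times` times, then succeeds."""
    state = {"calls": 0}
    def endpoint():
        state["calls"] += 1
        if state["calls"] <= fail_times:
            raise TransientError("simulated model timeout")
        return "ok"
    return endpoint, state

# Resilience check: the client recovers within its retry budget.
endpoint, state = make_flaky_endpoint(fail_times=2)
result = call_with_retries(endpoint, attempts=3)
```

A companion negative test would set `fail_times` above the retry budget and assert the error is surfaced rather than swallowed.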
Nice-to-Have Domain Knowledge
- Familiarity with NLP concepts (embeddings, context windows, temperature/top-p).
- Experience with AI tooling: LangChain/LlamaIndex, evaluation tools, model gateways.
- Knowledge of regulatory/security needs relevant to the telecom domain.
Soft Skills / Ways of Working
- Strong communication—able to explain AI quality issues clearly to product and engineering.
- Comfortable partnering with data science/ML engineers and backend teams.
- Ownership mindset: building reusable test harnesses, improving quality metrics, preventing regressions.