
If you have applied for jobs online in the last few years, it's possible that an AI‑powered recruiting tool discriminated against you and rejected your application.
Research from the University of Washington found significant racial, gender, and intersectional bias in how three state‑of‑the‑art large language models (LLMs) ranked résumés. In addition, a class‑action lawsuit against Workday—alleging that its AI system disproportionately disadvantages job seekers older than 40—has been granted conditional certification by a federal court in California.
Unless an algorithm has been specifically programmed to ignore certain indicators, the risk of hiring bias increases drastically because AI lacks crucial human judgment, explained Victoria McLean, CEO of City CV and an executive coach.
“What worries me most is that candidates never know this has happened,” McLean added. “There's no feedback loop, just silence. And that silence can erode someone’s confidence when, in reality, it was the tech, not their talent, that shut the door.”
Everyone deserves a fair shot. Here is how to tailor your application package to avoid being screened out by a bot.
How AI Discriminates
AI learns from historical data, and if that data reflects past hiring decisions skewed by bias—favoring certain schools, genders, ethnicities, or age groups—the algorithm quietly continues that pattern.
For example, the University of Washington found that the tools preferred white‑associated names 85 percent of the time versus Black‑associated names 9 percent of the time when ranking résumés. The systems also preferred male‑associated names 52 percent of the time versus female‑associated names just 11 percent of the time.
The algorithms may pick out keywords that signal you are from a less‑affluent background or an underrepresented racial group, explained Dr. Janice Gassam Asare, workplace equity and technology consultant. One example: mentioning that you are a member of an organization that supports Black professionals in tech.
An AI tool might be trained to prioritize résumés that include certain terms meant to signal leadership skills—for instance, “debate team,” “captain,” or “president.”
If someone has taken a career break, changed sectors, or followed a nonlinear path—which is common in tech—AI can interpret that unpredictability as risk, not potential.
Use AI to Counter AI
The first rule of avoiding bias is to limit any personal information that could trigger assumptions.
Consider removing your full address and specific dates from the education and work‑history sections of your résumé, online profiles, and digital portfolio. AI scans all of them, and together they shape the algorithm's picture of you.
For instance, instead of naming your university, highlight the qualification: “B.S. in Computer Science.” This conveys your credential without inviting location or socioeconomic bias.
If you have a name that is often misread or that sparks unconscious assumptions, present it in a neutral way—for example, “A. Jones”—on your résumé while still using your full name on LinkedIn to maintain consistency. These subtle adjustments can help level the field in a system that is not always playing fair.
Leverage ChatGPT, Claude, or a similar chatbot to tailor your résumé and cover letter to a specific job description; mirror the terminology the company uses—not just the technical skills but the behavioral and cultural ones as well. If a company is looking for “collaborative problem solvers” and you describe yourself as “an independent contributor,” you might unintentionally miss the mark.
Do not stop there. Ask the AI chatbot to review your résumé for language that could be perceived as discriminatory and to suggest alternatives that make you a stronger candidate, Asare suggested.
Be specific. For instance, ask the tool to find and remove pronouns, job titles, or adjectives associated with a particular gender. It may take several tools and prompts to create a document focused solely on skills and experience, and you should solicit input from human reviewers as well.
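The kind of automated scrub described above can be approximated with a short script. This is a minimal sketch: the word list here is a tiny, hypothetical sample, and a real review, whether by an LLM or a human editor, would go much further.

```python
import re

# Small, illustrative sample of gender-coded terms and neutral
# alternatives; a real review would use a far larger list.
GENDER_CODED = {
    "he": "they",
    "she": "they",
    "his": "their",
    "her": "their",
    "chairman": "chair",
    "salesman": "salesperson",
}

def flag_gendered_terms(text: str) -> list[tuple[str, str]]:
    """Return (term, suggested replacement) pairs found in the text."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    return [(w, GENDER_CODED[w]) for w in words if w in GENDER_CODED]

resume_line = "As chairman of the robotics club, he led weekly builds."
print(flag_gendered_terms(resume_line))
# [('chairman', 'chair'), ('he', 'they')]
```

A script like this only flags obvious terms; pairing it with an LLM pass and a human reviewer, as suggested above, catches the subtler cases.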
Focus on Format
Remember, these systems are designed to make decisions fast. They parse your résumé, scanning for patterns, structure, scores, and matches. If something does not fit neatly into their framework, it is either overlooked or ranked lower.
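The matching step can be illustrated with a toy keyword-overlap scorer. Real applicant-tracking systems are far more sophisticated, but the underlying principle, that résumés echoing the posting's language score higher, is the same. The job-description text below reuses the "collaborative problem solver" example from earlier.

```python
import re

def keyword_overlap(resume: str, job_description: str) -> float:
    """Toy score: fraction of job-description terms found in the résumé."""
    def tokenize(s: str) -> set[str]:
        return set(re.findall(r"[a-z]+", s.lower()))

    jd_terms = tokenize(job_description)
    if not jd_terms:
        return 0.0
    return len(tokenize(resume) & jd_terms) / len(jd_terms)

jd = "collaborative problem solver with Python experience"
print(keyword_overlap("Independent contributor, Python expert", jd))
print(keyword_overlap("Collaborative problem solver, Python expert", jd))
# The second, mirrored version scores noticeably higher.
```

Even this crude scorer shows why identical experience described in different words can rank very differently.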
“Presentation matters enormously,” McLean emphasized.
You might be a brilliant engineer who has delivered transformative results, but if your résumé buries those achievements under dense narrative or vague descriptions, the algorithm will not pick them up.
Structure your experience, achievements, and qualifications clearly and with purpose. Use active language and quantify your results. Choose a modern, easy‑to‑read font such as Arial or Verdana. Avoid tables, graphs, images, and fancy graphics, which can confuse AI‑powered résumé tools. Format side projects and pro bono work like a mini “Work Experience” section.
Be proactive about transparency. If you have changed industries, relocated, or taken a career break, explain it in a sentence. Do not leave the system guessing; that context can prevent an unfair drop‑off.
Track Your Results
Test different versions of your résumé and online profiles. If you apply for similar roles and receive wildly different responses, an AI filter may be at play. Adjust, measure, repeat.
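Tracking those experiments does not need special software. A hypothetical log like the one below, with made-up version names and outcomes, is enough to compare response rates across résumé versions.

```python
from collections import defaultdict

# Hypothetical application log: (résumé version, outcome).
applications = [
    ("version_a", "rejected"),
    ("version_a", "interview"),
    ("version_b", "rejected"),
    ("version_a", "rejected"),
    ("version_b", "interview"),
    ("version_b", "interview"),
]

def response_rates(log):
    """Return the interview rate for each résumé version."""
    sent = defaultdict(int)
    interviews = defaultdict(int)
    for version, outcome in log:
        sent[version] += 1
        if outcome == "interview":
            interviews[version] += 1
    return {v: interviews[v] / sent[v] for v in sent}

print(response_rates(applications))
# {'version_a': 0.3333333333333333, 'version_b': 0.6666666666666666}
```

A persistent gap between versions applied to similar roles is the signal to adjust, measure, and repeat.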
Finally, advocate for yourself. If you notice that a firm relies heavily on automated systems, ask about them, McLean added.
“Responsible employers should be able to explain how they mitigate bias,” she said. “When candidates push back, it creates pressure for change. We cannot leave fairness in the hands of algorithms alone.”