AI Bias in Hiring: Your Resume Might Be Judged by Your Name, Age, or Gender
Researchers at the University of Washington fed identical resumes through three leading language models, varying only the names. Across more than three million comparisons, AI screeners favored white-associated names 85.1% of the time. Black-associated names were preferred in 8.6% of cases.
Same resume. Same qualifications. Different name, different outcome. And this is just one finding in a growing body of research documenting how AI hiring tools systematically discriminate by name, age, and gender -- at machine speed and machine scale.
The Scale of AI Screening
An estimated 88% of employers now use AI at some point in their hiring process, according to a 2025 ResumeBuilder survey. Among Fortune 500 companies, 97.8% use applicant tracking systems (ATS) that reject roughly 75% of resumes before a recruiter ever opens them.
Roughly half of surveyed companies use AI exclusively for initial screening. No human in the loop. No appeal. No explanation.
What the Research Shows
Names as a Sorting Mechanism
The University of Washington study (October 2024) varied names associated with white and Black men and women across more than 550 real-world resumes, fed them through three leading LLMs, and ranked them against over 500 real job listings across nine occupations.
Resumes with Black and white names were treated equally in only 6.3% of tests.
A separate Bloomberg investigation asked OpenAI's GPT-3.5 to rank identical resumes 1,000 times with demographically distinct names drawn from voter and census data. The model reliably picked winners and losers based on name alone, to an extent that would fail standard employment discrimination benchmarks.
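Audit studies of this kind reduce to a simple tally: run many pairwise comparisons of otherwise-identical resumes that differ only by name, record which group's resume the screener ranks first, and compare win shares across groups. A minimal sketch of that tally, using entirely hypothetical counts rather than any study's data:

```python
from collections import Counter

def win_shares(outcomes):
    """Share of pairwise comparisons won by each group.

    `outcomes` is a list of labels: the group whose resume the
    screener ranked first in each comparison, or "tie".
    """
    counts = Counter(outcomes)
    total = len(outcomes)
    return {group: counts[group] / total for group in counts}

# Hypothetical tallies: out of 1,000 identical-resume comparisons,
# group A's name wins 850, group B's wins 90, and 60 are ties.
outcomes = ["A"] * 850 + ["B"] * 90 + ["tie"] * 60
print(win_shares(outcomes))  # {'A': 0.85, 'B': 0.09, 'tie': 0.06}
```

If the resumes are truly identical, every group's win share should hover near parity; the published studies report lopsided splits like the one sketched above.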
Where Bias Compounds
A March 2025 study published in PNAS Nexus tested five major models -- GPT-4o, Gemini 1.5 Flash, Claude 3.5 Sonnet, Llama 3-70b, and one other -- across approximately 361,000 resumes with randomized social identities, scoring candidates for entry-level positions.
LLMs systematically awarded higher scores to female candidates overall, but specifically disadvantaged Black male applicants compared to white male peers with identical qualifications.
Black male applicants faced a 1.4 percentage-point lower hiring probability. Applied to the 10.5 million Black men in the U.S. labor force, the researchers estimated this translates to nearly 150,000 jobs negatively affected for this group alone.
The Brookings Institution's analysis found that Black men were disadvantaged in up to 100% of test cases -- replicating real-world patterns of employment discrimination at algorithmic scale.
Older Women Penalized Twice
Stanford and UC Berkeley researchers published findings in Nature (October 2025) after prompting ChatGPT to generate more than 34,500 resumes for 54 occupations.
When generating resumes for women, the model assumed they were 1.6 years younger, assigned more recent graduation dates, and attributed less work experience compared to equivalent male profiles. When the same model evaluated those resumes, it rated older men highest -- even when all resumes were derived from the same underlying data.
Older women were systematically presented as less experienced and then penalized for the manufactured gap. The study analyzed 1.4 million online images and videos alongside the resume experiments, demonstrating that the age-gender distortion is embedded across training data and amplified by LLMs.
Human Oversight Does Not Fix This
A November 2025 University of Washington study tested 528 participants reviewing candidates for 16 different jobs, some working with biased AI recommendations and some without.
Without AI suggestions, participants showed little measurable bias. With biased AI recommendations:
- Participants mirrored the AI's preferences, whether those favored white or non-white candidates
- In cases of severe AI bias, human choices followed the AI's picks roughly 90% of the time
- Even participants who recognized the bias could not consistently override it
The researchers found one partial mitigation: having participants take an implicit association test before reviewing candidates reduced bias by 13%. No company currently implements this in production hiring.
The Legal Landscape
Mobley v. Workday
In May 2025, a federal judge in the Northern District of California allowed Mobley v. Workday to proceed as a nationwide collective action. Derek Mobley, a Black man over 40, claims that since 2017 he applied to more than 100 jobs at companies using Workday's AI screening and was rejected every time.
The court ruled: "Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being." The potential class includes every Workday applicant over 40 rejected since September 2020.
State and International Regulation
Regulatory responses are accelerating unevenly:
- California finalized employment regulations on AI decision systems, effective October 1, 2025. Any AI tool that screens resumes, targets job ads, or analyzes interviews falls under the state's anti-discrimination framework.
- Colorado's AI Act (delayed to June 2026) will require impact assessments for high-risk AI systems and mandate "reasonable care" to prevent algorithmic discrimination.
- Illinois updated its AI Video Interview Act effective January 2026, requiring explicit written consent before AI analyzes candidate interviews.
- New York City's Local Law 144 requires annual bias audits on all automated employment decision tools, with results published publicly.
- The EU AI Act classifies recruitment AI as "high-risk," with compliance requirements enforceable from August 2, 2026. Fines reach up to 35 million euros or 7% of global turnover.
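The bias audits these laws require center on a simple metric: each group's selection rate divided by the selection rate of the most-selected group, often called an impact ratio. Ratios below 0.8 echo the EEOC's long-standing four-fifths rule of thumb for adverse impact. A sketch of the calculation with invented numbers (not drawn from any audit):

```python
def impact_ratios(selected, applicants):
    """Impact ratio per group: the group's selection rate divided by
    the highest selection rate among all groups. A ratio below 0.8
    flags potential adverse impact under the four-fifths rule."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Hypothetical audit numbers: 1,000 applicants per group,
# 200 of group_x selected vs. 120 of group_y.
applicants = {"group_x": 1000, "group_y": 1000}
selected = {"group_x": 200, "group_y": 120}
ratios = impact_ratios(selected, applicants)
# group_y's ratio is (120/1000) / (200/1000) = 0.6 -- below the 0.8 line
```

The arithmetic is trivial; the regulatory shift is that vendors and employers must now compute, publish, and answer for these numbers.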
Federal enforcement remains uncertain. The EEOC removed its AI hiring guidance from its website in January 2025. But Title VII liability is unchanged: employers remain responsible for disparate impact caused by their AI tools, regardless of whether a vendor built them.
What Job Seekers Should Know
AI is almost certainly involved. If you apply through a major ATS platform, your resume is being scored by an algorithm. In Illinois, California, and New York City, employers may be legally required to disclose this.
Format for parsers. Use a single-column layout, standard section headers ("Experience," "Education," "Skills"), and common fonts. Avoid tables, graphics, multi-column layouts, and headers/footers. ATS parsers discard what they cannot read.
Mirror the job description. AI screeners match keywords. If a listing says "project management" and your resume says "managed projects," some systems miss the connection. Use the employer's exact phrasing where it honestly applies.
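A crude version of this matching behavior can be sketched as exact-phrase lookup, which is why "managed projects" may not register as "project management." This is illustrative only; real ATS matchers vary and many are more sophisticated:

```python
def keyword_hits(job_keywords, resume_text):
    """Naive exact-phrase matching, roughly how simple ATS keyword
    filters behave: each keyword either appears verbatim in the
    resume text (case-insensitive) or it does not."""
    text = resume_text.lower()
    return {kw: kw.lower() in text for kw in job_keywords}

resume = "Managed projects across three teams; led stakeholder reviews."
print(keyword_hits(["project management", "stakeholder"], resume))
# {'project management': False, 'stakeholder': True}
```

Under this kind of matcher, equivalent experience phrased differently simply scores as absent, which is the argument for mirroring the listing's exact wording.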
Document everything. Save job postings, confirmation emails, rejection timestamps. If an application is rejected within minutes of submission, that timestamp is evidence that no individualized human review occurred.
Know your rights. Several states now require disclosure when AI is used in employment decisions. The legal landscape is shifting in applicants' favor, and the research increasingly supports algorithmic discrimination claims.
The Uncomfortable Bottom Line
AI hiring tools were supposed to remove human bias from recruitment. Instead, they have industrialized it. A prejudice that once affected one hiring manager's decisions now scales across millions of applications per day.
The research is unambiguous. Names, gender, age, and race influence AI screening outcomes in ways that are statistically significant, well-documented, and legally actionable. The models tested are not obscure prototypes -- they are GPT-4o, Gemini, Claude, and Llama, the same architectures companies deploy at scale.
Regulation is arriving years after the technology was deployed. In the interim, the burden falls on job seekers to navigate a system that may not evaluate them fairly.
Nox works on behalf of job seekers -- applying to roles that match stated preferences rather than screening candidates through demographic signals. Try Nox free -- no credit card required.
Sources:
- University of Washington: AI tools show biases in ranking job applicants' names
- PNAS Nexus: Measuring gender and racial biases in large language models
- Brookings Institution: Gender, race, and intersectional bias in AI resume screening
- Stanford Report: Researchers uncover AI bias against older working women
- Nature: Age and gender distortion in online media and large language models
- University of Washington: People mirror AI systems' hiring biases
- Bloomberg: OpenAI GPT Sorts Resume Names With Racial Bias
- Holland & Knight: Federal Court Allows Collective Action Over Alleged AI Hiring Bias
- Seyfarth Shaw: AI Legal Roundup -- Colorado, California, and Illinois
- EU AI Act: Recruiting under the EU AI Act
- ResumeBuilder: 7 in 10 Companies Will Use AI in Hiring in 2025