AI Is Speeding Up Hiring, But Many People Are Left Behind
Job Searching
Oct 8, 2025
I've been watching something unfold in hiring tech that honestly keeps me up at night. And I keep looking around waiting for someone to sound the alarm. So here I am, sounding it.
According to Insight Global's 2025 report, 99% of hiring managers are now using AI tools. Nearly all of them. These systems screen resumes, run video interviews, analyze your facial expressions and tone of voice. And yes, they're fast. Companies love talking about their improved time-to-hire metrics.
But here's what nobody's talking about: we've automated exclusion at scale, and the Americans with Disabilities Act (ADA) isn't remotely equipped to handle it.
The Video Interview Problem
Let's start with AI video interviews. You sit in front of your webcam, answer questions, and an algorithm scores how you look, sound, and move. Sounds futuristic. It's also discriminatory.
Mujtaba and Mahapatra published research this year on arXiv showing that neurodivergent candidates and people with disabilities get penalized for nonstandard communication. If your eye contact is different, if you have speech patterns that don't match the training data, if you move in ways the algorithm wasn't taught to recognize, you get a lower score. The Guardian covered this in May, and honestly, I'm shocked it didn't blow up bigger than it did.
Here's my question: where's the lawsuit? These systems are screening out people with autism, cerebral palsy, speech disabilities, and anxiety disorders. That's not a bug. That's the feature working exactly as designed. And it's illegal. Or at least it should be.
But current ADA regulations were written for a world of human recruiters and paper applications. Nobody's figured out how to apply them to an algorithm that rejects you before a human ever sees your name.
Resume Screeners That Hate Experience
Now let's talk about resume screening. Large language models are reading your resume and deciding if you get a phone call. Brookings researchers Wilson and Caliskan found this year that these systems reject older candidates at higher rates. Why? Because they learned to treat words like "experienced" and "seasoned" as red flags.
I had to read that twice. We've built systems that penalize expertise.
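Here's a toy sketch of how that can happen. This isn't any vendor's real model, and the resumes and labels below are fabricated, but it shows the mechanism: train a plain text classifier on hiring history that skewed young, and it learns negative weights on experience words without anyone telling it to.

```python
# Toy illustration only: fabricated data, not a real screening system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring history that skewed toward younger candidates.
resumes = [
    "seasoned engineer with 20 years of experience",
    "experienced manager, veteran of large programs",
    "experienced architect, decades of leadership",
    "recent graduate, eager to learn, energetic",
    "new grad, quick study, high energy",
    "digital native, fast learner, fresh perspective",
]
hired = [0, 0, 0, 1, 1, 1]  # the bias lives in these past decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model's learned word weights: experience terms come out negative.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ("experienced", "seasoned", "graduate", "energetic"):
    print(f"{word}: {weights[word]:+.3f}")
```

Nobody wrote a rule saying "penalize expertise." The model inferred it from the outcomes it was shown, which is exactly the failure mode Wilson and Caliskan are describing.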
The AI isn't neutral. It learned from historical hiring data, which means it learned every bias that data contains. Rao and colleagues found similar patterns around cultural and linguistic markers. The system sees difference and marks it as deficiency.
And again, I'm asking: where are the age discrimination lawsuits? Because this is textbook territory for a violation of the Age Discrimination in Employment Act (ADEA), except it's happening in code instead of a hiring manager's office. And nobody seems to know how to bring a case against an algorithm.
Everyone's Using It, Nobody's Fixing It
Here's the thing that really gets me. That same Insight Global report shows 98% of companies seeing efficiency gains from AI hiring. Great. Fantastic. But you know what that report barely mentions? Accessibility. The University of South Australia put out a warning this year: AI isn't fixing diversity problems. It's making them worse, especially for disabled applicants.
And companies just keep rolling these tools out.
We've got 93% of companies saying they still use human oversight. But Wilson and colleagues published research showing that recruiters just adopt whatever the AI recommends, even when they suspect it's wrong. The AI makes a suggestion, and humans rubber-stamp it. Parasurama and Ipeirotis found the same thing. The more recruiters rely on AI, the worse diversity outcomes get.
So that human oversight everyone keeps promising? It's theater.
The Regulation Gap
Here's where I really start looking around for the lawyers. New York and New Jersey updated their employment discrimination laws this year to cover AI. The National Law Review and Northwestern Journal of Technology and Intellectual Property both published analyses of the new landscape.
You know what those laws cover? Race and gender bias. Important, yes. But disability bias? Age discrimination? Almost nothing.
Companies can get sued for building AI that discriminates based on race or gender. But if that same AI screens out disabled candidates or older workers, there's no clear legal framework. No enforcement mechanism. Just a giant regulatory gap and millions of people falling through it.
This Is About All of Us Eventually
Here's what connects all of this: ageism and ableism in AI hiring aren't separate problems. They're the same problem.
These systems evaluate everyone against one narrow template of communication, appearance, and behavior. An older candidate with formal communication style gets flagged as outdated. A disabled candidate who communicates differently gets flagged as non-standard. Both get rejected for the same reason. The system wasn't built for them.
And let's be honest. If you're in tech long enough, you're going to be the "experienced" candidate someday. You might become disabled. Your kid might be neurodivergent. This isn't an abstract problem. It's coming for all of us.
What Actually Needs to Happen
I'm a UX person, so I think about systems. And this system is badly designed.
We need to engineer accessibility into AI hiring from the beginning. Not bolt it on later. Not add it as an accommodation if someone asks. Build it into the foundation. Train models on diverse communication styles. Offer captions, transcripts, alternative formats. Let people opt out of video scoring without penalty.
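What does "build it into the foundation" look like in practice? One concrete piece is auditing selection rates before launch. Here's a minimal sketch of an adverse-impact check in the spirit of the EEOC's four-fifths rule; the group labels and numbers are fabricated, and a real audit would run on real screening logs with legal review.

```python
# Minimal adverse-impact check: fabricated data, assumed group labels.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, passed_screen) pairs."""
    passed, total = Counter(), Counter()
    for group, ok in outcomes:
        total[group] += 1
        passed[group] += ok
    return {g: passed[g] / total[g] for g in total}

def adverse_impact(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the top rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

# Fabricated screening log: disabled applicants pass the AI screen far less often.
log = (
    [("disclosed_disability", True)] * 3 + [("disclosed_disability", False)] * 7
    + [("no_disclosure", True)] * 6 + [("no_disclosure", False)] * 4
)

for group, (ratio, flagged) in adverse_impact(log).items():
    status = "BELOW FOUR-FIFTHS THRESHOLD" if flagged else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

If a check this simple fails, the system shouldn't ship. And it needs to be re-run on every model update, because retraining can quietly reintroduce the bias you thought you'd removed.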
We need actual human oversight, not the pretend version. That means training recruiters to recognize algorithmic bias and giving them clear protocols to override bad recommendations.
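Oversight you can't measure is oversight you can't prove. One simple starting instrument, sketched below with assumed field names: log every AI recommendation next to the recruiter's final call and watch the override rate. If it sits near zero, that's the rubber-stamping Wilson and colleagues documented, not review.

```python
# Sketch of an oversight metric; field names and data are assumptions.
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_recommendation: str  # "advance" or "reject"
    human_decision: str     # recruiter's final call

def override_rate(decisions):
    """Share of cases where the recruiter disagreed with the AI."""
    if not decisions:
        return 0.0
    overrides = sum(d.ai_recommendation != d.human_decision for d in decisions)
    return overrides / len(decisions)

audit_log = [
    ScreeningDecision("c001", "reject", "reject"),
    ScreeningDecision("c002", "reject", "advance"),  # a genuine override
    ScreeningDecision("c003", "advance", "advance"),
]
print(f"Override rate: {override_rate(audit_log):.0%}")  # 33%
```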
And honestly? We need lawyers. We need test cases. We need someone to drag these systems into court and force companies to prove their AI isn't violating the ADA and ADEA. Because right now, they're operating in a legal gray zone, and they know it.
Final Thought
AI should expand who gets opportunities, not narrow the field to people who fit one algorithmic template. The technology exists to build fair systems. We know how to do this. We're just not doing it.
And every day we don't, we're excluding millions of qualified people from jobs they could do. People with disabilities. Older workers. Anyone who communicates differently.
So I'm asking: where's the urgency? Where's the accountability? And seriously, where are the lawyers?
Because this can't be legal. And if it somehow is, we need to change that. Fast.
Need Help Fixing This?
If you're building hiring tech or implementing AI systems and want to avoid these pitfalls, I can help. I work with companies as a fractional UX and content strategy consultant to design inclusive systems from the ground up. Let's make sure your technology expands opportunity instead of limiting it.
(Claude.ai helped research and copy edit this article.)
