florio.dev

Does Fair Ranking Lead to Fair Recruitment? With Dr. Carlos Castillo (tgz-06)

On other platforms: Web, Apple Podcasts, YouTube.

This is an AI-generated summary of the episode.

I recently had a fascinating chat on the podcast with Carlos Castillo (also known as ChaTo) about a topic that hits close to home for anyone in the tech industry: recruitment. We all want a fair process where the best candidate wins, but as ChaTo explains, the reality is far more complex than just "fixing the algorithm".

The discussion centered on a paper titled "Does fair ranking lead to fair recruitment outcomes? A study of interventions, interfaces, and interactions", which is part of the FINDHR project. It explores how humans interact with AI-driven recruitment tools, and the results are a bit of a wake-up call.

The Myth of the "Smart" List

One of the most striking points ChaTo made is about positional bias. Even when we don't tell recruiters that a list was generated by an AI, they assume it was. We’ve been conditioned by search engines and flight bookings to believe that the top results are naturally "the best".
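The intuition behind positional bias can be made concrete. A common way to model it (my illustration, not necessarily the paper's method) is to assign each rank an "exposure" weight that decays with position, such as the 1/log2(rank+1) discount familiar from DCG, and then compare how much total exposure each demographic group receives under a given ranking:

```python
import math

# Illustrative sketch, assuming a DCG-style position discount.
# The specific decay function is an assumption, not taken from the paper.

def exposure(rank):
    """Exposure weight for a 1-indexed rank position."""
    return 1.0 / math.log2(rank + 1)

def group_exposure(ranking, group_of):
    """Total exposure each group receives under a given ranking.

    ranking: list of candidates, best-ranked first.
    group_of: function mapping a candidate to their group label.
    """
    totals = {}
    for rank, candidate in enumerate(ranking, start=1):
        group = group_of(candidate)
        totals[group] = totals.get(group, 0.0) + exposure(rank)
    return totals
```

Under this kind of model, the first position alone carries as much weight as the next two combined, which is why where a candidate appears can matter as much as whether they appear at all.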

ChaTo's team measured this positional bias directly in their experiments.

The Name Trap

They also tested demographic variables using synthetic but realistic CVs. Even when profiles were identical in terms of education and experience, a European-sounding name provided a distinct advantage over Arab- or Chinese-sounding names.

The data showed that "formal equality" (treating everyone the same) isn't enough. In their tests, the only way to achieve actual parity in outcomes was to place non-European candidates at the very top of the list—a form of positive action.

Fighting "Recruitment Fatigue"

We often blame the software, but we should also look at the human on the other side. Recruiters are dealing with massive "recruitment fatigue". With an average of 250 applications per corporate role, no human can maintain the same level of attention for every CV.

ChaTo suggests a few practical interventions to counter this fatigue on the recruiter's side, rather than only tweaking the software.

AI isn't a Toaster

My biggest takeaway from this talk is that AI products aren't like toasters: you don't manufacture them once and you're done. They are constantly updated, and every update can introduce a new bias.

This is why "post-marketing monitoring" (as mentioned in the AI Act) is so vital. We need to continuously audit these systems to see how they interact with real human experts in the field.

Recruitment is a high-stakes game. If we want to move toward substantive equality, we have to look beyond the code and understand the "budget of attention" that recruiters are working with.

If you're interested in the technical side or want to check out the synthetic CV dataset they released, I highly recommend checking out the full paper.

#AI #ethics #humancomputerinteraction #podcast #recruitment #targz