AI tools are rapidly being developed for integration into daily life across domains, and recruiting is no exception. Even during the course of writing this blog post, it seemed as if the number of AI-enhanced products specifically for recruiting was growing by the hour. The overall vibe in the tech community is one of excitement for these new offerings, as they address many recruiting pain points and make high-value promises such as reduced time-to-hire, access to millions of quality candidate profiles, and lower total tech costs. With well over 100 AI-based talent acquisition products on the market currently, it would be a daunting task to verify the claims each one makes. What we can do, though, is take a look at the general capabilities of these tools, the promises they make for process improvement, and the lingering questions that need to be explored before unilaterally accepting them as “catch-free.”
A note on the AI recruiting tools included below: We do not recommend any specific AI tool for recruiters, and this is not a review of AI recruiting products. None of the links below are affiliate links, and we have no business association with any developer of recruiting software, AI-driven or otherwise.
Where do AI tools show up in the recruiting process?
Everywhere! Many of these products (such as Loxo and HireEZ) provide AI-assisted support for the entire recruiting workflow in one platform, while others focus their offerings on specific steps in the process.
Sourcing
This first step in the pipeline is also often the most time-consuming, with the speed of the process limited by how fast a human recruiter can read candidate profiles and discern whether they might be a good fit. The assistance offered by AI tools in this area is certainly appealing, as these tools promise to rapidly scour the web to identify possible candidates based on desired criteria.
For example, HeroHunt.ai states that their “AI recruiter Uwi can recruit for you on complete autopilot, from finding 1 billion profiles on the web to AI screening candidates and even outreach.” Another AI-assisted recruitment sourcing tool, Pin, also claims that their product can (if desired) fully automate the recruiting process through sourcing, outreach, follow-up, fielding questions, and scheduling interviews. Many of these sourcing-related products also offer to find and deliver accurate contact information for candidates to improve the chances of an outreach message actually reaching its intended target.
Scheduling/Communication
All of the end-to-end solutions and sourcing tools listed above also incorporate AI for communicating with candidates. This can look like generative AI drafting personalized outreach messages, building automated multi-channel recruiting campaigns, asking pre-determined follow-up questions, or inviting candidates to schedule an interview. The aim here is to automate repetitive communication tasks at the beginning of the hiring funnel, thereby freeing up recruiters from the time suck of email and phone tag.
Interviewing
Here we find some of the most interesting additions of AI to the recruiting process. In some cases, these tools merely assist a human interviewer; in others, they are designed to act as the interviewer itself.
Products like Quil and Elly focus on transcription and note-taking for candidate interviews. These tools offer customizable templates to record important details specific to each role and also produce AI-generated summaries and writeups that can be used for candidate submissions or scorecards.
In a step further along the depersonalization spectrum, tools such as Apriora, HeyGen, and HireVue are leveraging AI to structure candidate interviews, develop automated assessments, and even create two-way interactive AI avatars to conduct the candidate interviews on demand.
All of this sounds pretty cool, right? All of this without a catch? This human recruiter has some doubts.
What’s the catch: AI can do recruiting tasks faster.
Here we invoke the old adage, “It can be cheap, fast, or good, but you can’t have all three.” In the case of AI recruiting tools, it seems that one of the tradeoffs for faster access to (supposedly) better candidates is added cost. Some products (such as Loxo) only provide access to these time-saving AI features when a user subscribes to a higher-tier plan. Similarly, the base package for the AI-assisted interviewing product HireVue comes in at $35K+, a substantial financial investment.
What’s the catch: AI can source more and better-qualified candidates.
All of the end-to-end AI recruiting tools tout the enormous number of profiles they have access to. However, it is often unclear where these profiles are sourced from, or whether any filters are in place to screen out fakes and duplicates. I asked a chatbot at one of these companies how they find their candidates. The reply was that they have “data partners” that source candidate resumes. What this actually means is unclear. What databases do they have access to? Who is managing this data? Who is protecting the personal information of the candidates the “data partners” are providing? Did candidates consent to having their information shared between these data partners?
The goal here is clearly to “work smarter, not harder” but it remains to be seen whether putting AI in charge of these sourcing tasks is actually working smarter, or merely outsourcing time-consuming tasks while losing the ability to verify how these tasks are being completed.
What’s the catch: AI makes communication between candidates and recruiters more efficient.
Many of these AI recruitment tools assert that the use of generative AI for outreach/follow-ups and chatbots for preliminary questions and scheduling can result in an improved candidate experience by allowing for faster and more efficient communication. The tradeoff here is that a candidate could potentially go through the majority of the recruitment process without interacting with an actual human. It seems doubtful that most candidates would classify this as an “improved” experience that enhances their interest and confidence in a potential new employer, or that they are going to “feel better” about a process that has removed the human element as much as possible.
True human communication is a two-way street. Whether in an introductory call, an email, or an interview, we use context cues and non-verbals to intuit additional meaning from the conversation. Additionally, mammals (such as humans!) commonly leverage “social mirroring”—unconsciously mimicking the behaviors of others—as a strategy to promote bonding, understanding, and rapport building. Both non-verbal cues and another living being to mirror are absent when “conversing” with an AI avatar, which can be a disconcerting experience for interviewees. In a series of case studies published in the Harvard Business Review, candidates explicitly noted that when being interviewed by an AI: “their default was to perform in a rigid way — holding a fixed gaze, a fake smile, or unnatural posture; speaking with a monotonous voice; and holding their hands still. They felt they had to behave like robots.” Far from “improving communication,” trying to “mirror” with a robot results in robot-esque behavior that does not necessarily provide a good indication of how a candidate is able to interact with human teammates.
What’s the catch: AI reduces bias in the hiring process.
The claim here is that removing some of the human oversight is key to diminishing bias in sourcing and screening candidates. One way this is accomplished is through blinding names, photos, education history, etc.—things that might give hints into a candidate’s gender, age, race, country of origin, etc. However, even as some of the more explicit modes for bias are tempered, many of these same tools offer filters where hiring managers can select candidates who attended Ivy League schools or who have held particular job titles. These kinds of filters inherently discriminate against groups that historically have not had the same access to opportunity as others. Systemic bias then continues to be a feature, not a bug.
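The blinding technique described above can be sketched in a few lines of code. This is a minimal illustration only; the field names and redaction approach here are hypothetical, not taken from any specific product:

```python
# Hypothetical sketch of "blind screening": redacting profile fields that
# could hint at a candidate's gender, age, race, or country of origin.
# Field names are illustrative assumptions, not a real product's schema.

BLINDED_FIELDS = {"name", "photo_url", "birth_year", "school"}

def blind_profile(profile: dict) -> dict:
    """Return a copy of the profile with identifying fields redacted."""
    return {key: ("[REDACTED]" if key in BLINDED_FIELDS else value)
            for key, value in profile.items()}

candidate = {
    "name": "Jane Doe",
    "school": "Ivy League U",
    "skills": ["Python", "SQL"],
    "years_experience": 7,
}
print(blind_profile(candidate))
```

Note that even a blinded profile like this one can still be filtered on fields such as school prestige or prior job titles, which is exactly how the systemic bias discussed above re-enters the process.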
Bias creeps into other areas of the AI-assisted recruiting process as well. One place where this could be particularly evident is the assessments and metrics used for evaluating candidates during AI-led interviews. The code behind these analytics is designed by humans. Humans are making decisions about how heavily certain skills, personality traits, or communication styles are weighted in terms of desirability. Humans are deciding how questions are phrased. Humans are deciding what kind of tonal quality or facial expressions are perceived as positive or negative. All of these decisions allow space for implicit bias to get baked back in.
In the field of psychology, it is considered ethically problematic to use assessment instruments whose validity and reliability have not been established for use with members of the population tested (APA Ethics Code, Standard 9.02b). Have the AI interview assessment metrics been normed across all racial/cultural/gender populations? Does a candidate’s informed consent for the interview require an explanation of what they are being measured on? Without proper oversight for these kinds of ethical questions, it is highly likely that the AI models will trend towards a preference for one kind of candidate over others, even if they do not “know” that is what they are doing.
Fortunately, some of these software providers are taking proactive steps to manage bias in the recruitment process and to address the lack of transparency around how AI tools are used. HireVue provides a detailed “Explainability Statement” to cover the why and how of their AI usage, as well as to discuss the steps taken to ensure ethical considerations are addressed. HireVue, HireEZ, and Pin, among others, also lay out their external oversight through independent audits and compliance with relevant regulatory frameworks.
Everything has a catch: AI can be a helpful tool for recruiters, not a panacea.
All of these unresolved questions make it hard to paint a clear picture of the future as AI continues to integrate into the recruiting process. When I asked ChatGPT to “Write a five-paragraph blog post about how AI tools are being used in recruiting,” the LLM tweaked the narrative slightly and titled its blog post “How AI Tools Are Revolutionizing Recruiting.” The content presented an overall positive story about the usage of AI and asserted that “AI is reshaping the recruitment landscape by increasing efficiency, reducing bias, and improving the candidate experience.”
This human recruiter remains unconvinced that there has been as much progress towards these latter two goals as the AI product developers would like to claim. However, perhaps these misgivings are merely because this is a new approach, which does not yet seem “normal.” We may look back and wonder at the actual human hours that were spent on these recruiting-related tasks prior to automation. In the meantime, developments in this domain deserve a critical eye, to ensure we are asking the right questions as the relationship between humans and AI continues to evolve.