Big ideas on Technology

How Meytier uses Large Language Models to streamline the hiring process

Rajesh Kamath, Chief Solutions Officer

How we use LLMs and what we've learned implementing this technology

Large Language Models (LLMs) continue to be at the forefront in conversations about technology use in recruitment. We believe that LLMs bring both potential for innovation as well as risks to recruitment. Our Talent Research Intelligence Platform (TRIP) leverages artificial intelligence and LLMs in several ways. Here are some of the ways we do that and what we’ve learned implementing this technology.


How can artificial intelligence reduce human workloads?

LLMs can be used in two ways: extractive and generative. Extractive AI pulls valuable information out of unstructured data, while generative AI creates new content. In extractive mode, with the right configuration and, most importantly, the right code and human checks and balances, the risk of the notorious AI hallucinations impacting any workflow is minimized. The risk still exists, but it can be mitigated with technology and process design. In generative mode, we believe the risk of hallucinations is larger and cannot be mitigated as easily. This has to be taken into account in use case selection and UX design.
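
As an illustration of what code-level checks and balances can look like in extractive mode, here is a minimal sketch that verifies extracted values against the source document before they enter any workflow. The call_llm helper and the JSON response format are assumptions for the example, not a description of TRIP's actual implementation.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM API is in use."""
    raise NotImplementedError

def extract_titles(resume_text: str) -> list[str]:
    prompt = (
        "Return a JSON array of the job titles that appear verbatim "
        "in the resume below. Do not invent or rephrase titles.\n\n"
        + resume_text
    )
    raw = call_llm(prompt)
    try:
        titles = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed output falls back to manual review
    # Code-level hallucination check: keep only values that actually
    # appear in the source document.
    return [t for t in titles if isinstance(t, str) and t in resume_text]
```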


We currently leverage LLMs across several use cases. Here are two major ways we leverage both extractive and generative AI.


Getting to the important stuff.

  • Extracting what is important from candidate profiles isn’t a small task. Candidates come in all shapes and sizes, and people write all kinds of resumes.
  • We leverage extractive AI to help us understand candidate career journeys better.
  • In this use case, information discovery is driven by the LLM, reducing human workload dramatically (a simplified sketch of this kind of extraction follows below).
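
A simplified sketch of what such an extraction step could look like, assuming a hypothetical call_llm wrapper and an illustrative schema (not TRIP's actual one):

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class Role:
    employer: str
    title: str
    start: str          # e.g. "2019-04"; resumes rarely give exact days
    end: Optional[str]  # None for a current role

CAREER_PROMPT = """\
Extract the candidate's work history from the resume below.
Return a JSON array of objects with keys: employer, title, start, end.
Use null for "end" if the role is current. Do not guess missing dates.

Resume:
{resume}
"""

def parse_career_journey(resume_text: str, call_llm) -> list[Role]:
    raw = call_llm(CAREER_PROMPT.format(resume=resume_text))
    roles = [Role(**entry) for entry in json.loads(raw)]
    # Newest first, so a recruiter can scan the journey at a glance.
    return sorted(roles, key=lambda r: r.start, reverse=True)
```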


Writing better job descriptions.

Companies write bad job descriptions. They often copy and paste old ones, add unnecessary skills that aren’t required to do the job, or include requirements that aren’t commensurate with the seniority of the role.

  • TRIP alerts users when they’ve missed a must-have in a job description, like an EEOC statement, information about benefits, or a salary range (a simplified sketch of this kind of check follows after this list).
  • We leverage LLMs to give users insights on whether they’ve added skills from different industries or skills that don’t go together. We also provide insights on skills that diverse candidates are less likely to have.
  • After users have prioritized the skills listed in the job description, they can generate a new one using generative AI, choosing from several writing style options.
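
To make the must-have alert concrete, here is a deliberately simplified sketch; the keyword heuristics and patterns are illustrative assumptions, not TRIP's actual implementation.

```python
import re

# Simplified heuristics for the must-have checks described above;
# the patterns are illustrative assumptions only.
MUST_HAVES = {
    "EEOC statement": re.compile(r"equal (employment )?opportunity", re.I),
    "benefits": re.compile(r"\bbenefits?\b", re.I),
    "salary range": re.compile(r"\$\s?\d[\d,]*\s*(-|to)\s*\$?\s?\d[\d,]*", re.I),
}

def missing_must_haves(job_description: str) -> list[str]:
    """Return the names of must-have sections the draft appears to lack."""
    return [name for name, pattern in MUST_HAVES.items()
            if not pattern.search(job_description)]

# Example: missing_must_haves("We offer great benefits.")
# -> ["EEOC statement", "salary range"]
```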


What we’ve learned using LLMs in recruitment.

The capabilities of LLMs are incredible, and it can be tempting to apply them to a long list of use cases. However, prioritization is critical. Where can LLMs deliver the most value the fastest? One recurring theme in our conversations with users is that they care less about whether AI was used to do something and more about whether it made their daily lives better in a very tangible way. Did it reduce the amount of data entry? Did it mean fewer Google searches for additional information?


Prompts & Data

Prompt engineering is incredibly important and prompts typically take some trial and error to get right, especially when you want to leverage LLMs in automated flows with minimal human touch points. 
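
One lightweight way to make that trial and error systematic is to score each prompt version against a small set of hand-checked examples before it goes into an automated flow. The golden cases and the call_llm helper below are illustrative assumptions, not a description of our actual pipeline.

```python
# Tiny regression harness for prompt iterations.
GOLDEN_CASES = [
    # (input text, substring the model's answer must contain)
    ("Senior Java Developer, Acme Corp, 2018-2023", "Acme Corp"),
    ("VP Engineering at Initech since 2021", "Initech"),
]

def score_prompt(prompt_template: str, call_llm) -> float:
    """Return the fraction of golden cases this prompt version gets right."""
    hits = 0
    for text, expected in GOLDEN_CASES:
        answer = call_llm(prompt_template.format(text=text))
        hits += expected in answer
    return hits / len(GOLDEN_CASES)
```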


Data preparation is almost as important as prompt engineering. We started getting our best results when we sent the model focused data that was highly relevant to our use case. This had a double benefit: it also helped us work within token-length restrictions. Bots suffer from information overload too. Results started to degrade when our prompt asked the AI to do too much. We had to find the sweet spot between too-many-calls-that-do-just-one-thing and also-launch-the-Mars-rocket.
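
A minimal sketch of the focused-data idea, assuming resumes are already split into named sections; the section names and character budget are illustrative, not our actual configuration.

```python
# The section names and the ~4 characters/token rule of thumb are assumptions.
RELEVANT_SECTIONS = ("experience", "skills", "education")
MAX_CHARS = 6000  # roughly 1,500 tokens at ~4 characters per token

def focus_resume(sections: dict[str, str]) -> str:
    """Keep only the sections relevant to the use case, capped in length,
    so the prompt stays focused and within token limits."""
    kept = [f"{name.upper()}\n{body}"
            for name, body in sections.items()
            if name.lower() in RELEVANT_SECTIONS]
    return "\n\n".join(kept)[:MAX_CHARS]
```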


Intentionality is key.

The platform’s interface with LLMs has to be designed to be fault-tolerant (or rather, GenAI-mood-tolerant). Every once in a while, an integration you thought was pretty stable will send you something quirky in its content, its structure, or the time it takes to respond. You have to design your architecture (and user experience!) to handle the quirks.
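
One way to build in that tolerance is to wrap every LLM call in a timeout and a structural check, and to fall back (for example, to human review) whenever either fails. The sketch below assumes a hypothetical call_llm wrapper and JSON output; it is an illustration, not our actual architecture.

```python
import concurrent.futures
import json

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def call_with_guardrails(call_llm, prompt: str, timeout_s: float = 20.0):
    """Run an LLM call with a timeout and a structural check. On any quirk
    (slow response, non-JSON output, unexpected shape) return None so the
    caller can fall back, e.g. by queueing the item for human review."""
    future = _pool.submit(call_llm, prompt)
    try:
        raw = future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return None  # unusually slow response; don't block the workflow
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None  # quirky content or structure
    return data if isinstance(data, dict) else None
```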


What about AI and bias?

The potential for AI-introduced bias isn’t hypothetical. It is very real. We do not believe that this technology is sophisticated enough to make human decisions. Therefore, we don't use Gen AI for matching jobs to candidates. We don't use Gen AI for scoring candidates in any manner. We don't use Gen AI for comparing candidates with each other in any aspect. Lastly, we don't use it for direct candidate- or employer-facing use cases; we simply do not believe Gen AI is there yet. All of our flows have a human in the loop before any decisions are made. We believe that LLMs can be leveraged to help the participants in hiring reach a decision most efficiently, with the best information in hand. They cannot make hiring decisions automatically.


Generative AI will certainly change how companies hire and screen candidates. Read Rajesh's article, "How will ChatGPT impact recruitment?" here.

