In the short run, LLMs have the potential to help recruitment in several ways. The technology can be used in two broad modes - extractive and generative. Extractive AI pulls value and information out of unstructured data, such as resumes or interview notes. Generative AI produces new content, such as job descriptions, without a human having to write it. In both cases, it drives or aids automation and automated decision making. The technology is capable of dramatically improving parts of the recruitment process. However, a few aspects of generative AI are likely to make hiring challenging in the future.
Hallucinations - If robots can dream, they can hallucinate too! In extractive mode, with the right configuration and, most importantly, the right code and human checks and balances, the risk of hallucinations affecting workflows is minimized. The risk still exists, but it can be mitigated with technology and process design, because extracted output can be checked against the source document. In generative mode, however, there is no such ground truth to check against, and LLMs remain unpredictable.
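The "checks and balances" for extractive mode can be sketched as a grounding check: before accepting a model's extraction, verify that each value actually appears in the source document. This is a minimal illustration, assuming the extraction arrives as a flat field-to-value dictionary; the names, the sample resume, and the simple substring match are all hypothetical (a production system would use fuzzy or span-level matching).

```python
def grounded(extracted: dict, source: str) -> dict:
    """Split extracted fields into those verifiable against the source
    text and those that may be hallucinated. Uses a naive case-insensitive
    substring check purely for illustration."""
    verified, flagged = {}, {}
    for field, value in extracted.items():
        if value.lower() in source.lower():
            verified[field] = value
        else:
            flagged[field] = value
    return {"verified": verified, "flagged": flagged}

# Hypothetical source document and model output: the employer is
# supported by the text, the degree is invented by the model.
resume = "Jane Doe, 5 years of Python experience at Acme Corp."
model_output = {"employer": "Acme Corp", "degree": "MSc Computer Science"}

result = grounded(model_output, resume)
# result["flagged"] holds the unverifiable 'degree' field for human review
```

Fields that fail the check are routed to a human reviewer rather than into an automated decision, which is what keeps the hallucination risk contained in extractive workflows.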
Dated information - Because of the way LLMs are currently designed (centralized models that take a large amount of resources to train and test), keeping them up to date with the latest information is quite a chore.
A Trojan horse? - The models might have a more subtle but insidious impact. User interactions with these models happen via "prompts", and the model's output, and hence its fitness for use, is sensitive to the wording of the prompt to varying degrees.
Ultimately, we believe that more regulation is inbound around the use of AI in various industries, especially where decision making will directly impact humans. The size and complexity of these models, and the fact that they are available to end consumers only via an API, make the question of how to regulate them even more complex.
In conclusion, recruitment will not be immune to changes brought on by generative AI. There is potential to streamline hiring processes and shorten the time hiring managers and recruitment teams spend on tasks like writing job descriptions. However, LLMs have shortcomings that cannot be ignored. Are these shortcomings fixable, or are they so inherent to the way LLMs are trained that they can never be easily solved without changing the models fundamentally? There may indeed be a GPT-HAL of the future that starts off helping organizations find the best candidates possible, but eventually just refuses to consider any candidate that does not meet its definition of the "ideal candidate."