With the emergence of large language models (LLMs) such as ChatGPT, many believed the writing was on the wall for search engines.
But as the AI landscape has evolved, the picture has become more nuanced. While LLMs such as Claude and Llama continue to nudge higher on maths and reasoning benchmarks, other developers are pursuing a different tack.
AI-powered search engines look to capture the interactivity of chatbot conversations while retaining the credibility and accuracy of ‘traditional’ search. Achieving this means dealing with some of the key problems facing LLMs – while persisting with some methods that have long served investment marketers very well.
AI-powered search engines such as Perplexity, the new Bing, you.com and Google’s own Search Generative Experience (SGE) model provide chatbot-like answers to search enquiries. They are underpinned by LLMs, but there is a clear difference of emphasis, according to Otterly.ai:
In a paper from earlier this year, Google further defined the distinction as follows:
While SGE also applies LLMs, it has been purposefully trained to carry out tasks specific to Search, including identifying high-quality web results that corroborate the information presented in the output.
This final point is crucial. In the world of AI, stories abound of smart, accomplished professionals being taken in by confident-sounding bits of balderdash. If AI search is to gain the level of trust Google search and others have generated, it has to make major strides in dealing with not just hallucinations, but also contextual errors, bias and opinions asserted as fact.
Let’s try an example relevant to financial services: What are liability-driven investment funds? Perplexity had this to say:
LDI funds aim to reduce the risk of a pension plan being underfunded by matching the sensitivity of the plan’s assets to changes in interest rates and inflation with the sensitivity of its liabilities. This is achieved by investing in bonds and derivatives that move in a similar way to the plan’s projected liabilities.
This sounds convincing. But let’s not forget the “confident-sounding balderdash” I mentioned above. Just why should we trust Perplexity on this complex, rather arcane topic?
A closer look at Perplexity’s answer reveals numbered sources throughout, listed in footnotes:
- What is Liability-Driven Investing? by Russell Investments
- Liability-Driven Investing Strategies, also by Russell Investments
- Liability Driven Investment on the gov.uk site
- Liability Driven Investment from an actuaries blog, also on the gov.uk site
In the AI argot, Perplexity is making use of “curated” data sets. In a move that has been backed by extensive research, AI search engines aim to minimise hallucinations by focusing only on circumscribed, recently updated data.
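As a rough illustration of that curation idea – with entirely hypothetical source names, dates and thresholds, not Perplexity’s actual pipeline – an AI search engine might restrict its retrieval pool to trusted, recently updated documents and attach numbered footnotes to its answer:

```python
# A minimal sketch of grounding an answer in a curated source set.
# All publishers, dates and cut-offs here are invented for illustration.

from datetime import date

# Candidate sources: only vetted, recently updated documents should survive.
SOURCES = [
    {"title": "What is Liability-Driven Investing?",
     "publisher": "Russell Investments", "updated": date(2024, 3, 1)},
    {"title": "Liability Driven Investment",
     "publisher": "gov.uk", "updated": date(2024, 1, 15)},
    {"title": "An unvetted forum post",
     "publisher": "unknown", "updated": date(2019, 6, 1)},
]

CUTOFF = date(2023, 1, 1)                      # "recently updated" threshold
TRUSTED = {"Russell Investments", "gov.uk"}    # circumscribed publisher list

def curate(sources):
    """Keep only trusted sources updated since the cut-off date."""
    return [s for s in sources
            if s["publisher"] in TRUSTED and s["updated"] >= CUTOFF]

def cite(answer_text, sources):
    """Append numbered footnotes, the way an AI search engine's UI might."""
    footnotes = "\n".join(
        f"[{i}] {s['title']} ({s['publisher']})"
        for i, s in enumerate(sources, start=1))
    return f"{answer_text}\n\n{footnotes}"

curated = curate(SOURCES)
print(cite("LDI funds match asset and liability sensitivities.", curated))
```

The point of the sketch is the ordering: the pool is filtered before any answer is generated, so a hallucination-prone source never reaches the model at all.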
So why, among the various sources on offer, did Perplexity choose the ones it did? Perplexity said (citing sources including GitHub, Search Engine Journal and Reddit):
- Web crawling and indexing: They crawl and index websites, but with a focus on authoritative and reputable sources.
- Automated source evaluation: AI algorithms analyse various signals like citations, author credentials, publication venues, and fact-checking to assess the credibility and quality of potential sources.
- Knowledge base construction: For specific domains like science or finance, they may construct custom knowledge bases by extracting and curating information from highly reputable sources like academic publications or industry reports.
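The “automated source evaluation” step can be pictured as a simple scoring function. The weights and signals below are invented purely for illustration – real systems combine far richer signals – but the shape of the idea is the same:

```python
# A toy credibility score combining the signals Perplexity describes:
# citations, author credentials, publication venue and fact-checking.
# All weights and category names are hypothetical.

REPUTABLE_VENUES = {"academic journal", "industry report", "government site"}

def credibility_score(source):
    """Combine simple signals into a credibility score between 0 and 1."""
    score = 0.0
    # Citation signal, saturating at 100 citations.
    score += 0.4 * min(source.get("citations", 0), 100) / 100
    # Author-credential signal.
    if source.get("author_credentialed", False):
        score += 0.3
    # Publication-venue signal.
    if source.get("venue") in REPUTABLE_VENUES:
        score += 0.2
    # Fact-checking signal.
    if source.get("fact_checked", False):
        score += 0.1
    return score

paper = {"citations": 80, "author_credentialed": True,
         "venue": "academic journal", "fact_checked": True}
blog = {"citations": 3, "venue": "personal blog"}

print(credibility_score(paper))  # well-cited, credentialed: scores high
print(credibility_score(blog))   # uncited personal blog: scores low
```

However the real systems weight these signals, the outcome is the one that matters to content producers: well-evidenced, credentialed, fact-checked material rises to the top of the pool an AI search engine draws from.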
In its paper, to mitigate the drawbacks of LLMs, Google says it restricts “SGE to specific tasks, including corroboration” and uses “our existing search quality systems and our ability to identify and rank high-quality, reliable information”. And for critical subjects such as healthcare or finance, Google “places even more emphasis on producing informative responses that are corroborated by reliable sources” and “include disclaimers in its output”.
As AI-powered search progresses, its need for accuracy and credibility is only going to grow. This means the information it cites must be accurate, reputable, up to date and rigorously checked. And as Google states in its paper, this need for reliability is even more pronounced in finance, a situation that will require ongoing human input and corroboration.
Given how quickly AI has surged into our everyday lives, it should go without saying that everything is subject to change. Perhaps the key takeaway with AI is, and will remain, that the landscape could be completely different next year, or even a few months from now.
But that means there is a lot to play for right now. If, as an investment marketer, you draw on your vast wells of proprietary data and in-house expertise to generate content that is original, well written, regularly updated, often cited and – critically – factually and contextually accurate, you will be exactly the kind of source AI search engines seek out as they answer people’s questions.
And isn’t that something that’s worth aiming for?