Robocop(y): is ChatGPT set to replace copywriters?

Jamie Ryder 6 March 2023

Will ChatGPT, or language models like it, make financial copywriting redundant? Will asset managers and other financial companies seeking marketing copy and investment communications opt for the bot, doing away with humans entirely?

The short answer is: it depends. The longer answer is: while ChatGPT could certainly upend elements of writing and content creation, there’s still plenty of uncertainty. And the onus is on communicators to adjust and thrive in this new reality.

Although ChatGPT is in a fairly early stage of development, it is already a powerful generative tool that can perform a variety of writing tasks with a high degree of accuracy. Besides responding to prompts to generate copy, the bot can proof existing text, suggest revisions, answer questions, deploy various tones of voice, create summaries of existing text, adjust its output based on feedback and so on. And away from copywriting work, the bot can produce jokes, poems, songs and – perhaps most interestingly for financial firms – write and check code in various languages.

But make no mistake: significant barriers to adoption remain, and there are some fundamental aspects of ChatGPT that should make the financial industry and its regulators wary.

Are financial companies using ChatGPT right now?

Financial companies are notoriously cagey when it comes to discussing the tech they use, especially if that tech is at an experimental stage. For this report, I contacted a number of major banks and asset managers, none of which would comment on whether they had been experimenting with ChatGPT.

But some financial companies have made public remarks about the bot or technologies like it.

BlackRock calls ChatGPT a “potentially transformative” development, one that “might be the most impressive AI tool yet”. The company also published a statement written by ChatGPT itself, responding to the prompt “How might ChatGPT impact investors?”, excerpted in part below:

[ChatGPT] can be used for natural language processing, which can help with tasks such as sentiment analysis and news summarisation. This can […] provide investors with a better understanding of the overall sentiment and key information related to a particular stock or market.

Other coverage is more sceptical.

“Despite the sophistication of ChatGPT, the technology is still relatively immature,” writes one portfolio manager at Janus Henderson. “While this technology is mostly in the experimental stage, it has already been deployed in commercial applications,” they add.

Customer experience, the manager goes on, is one area where ChatGPT could be employed in the coming years.

“Customer interaction functions are a ripe opportunity for chatbots and related applications,” the manager says. “They have the potential to improve the customer experience by rapidly analysing myriad data points to diagnose problems.”

London-based asset manager Channel Capital, meanwhile, published the results of a few informal ChatGPT test runs it conducted earlier this year. While the firm identifies some of the bot’s shortcomings – which we’ll discuss soon – it describes the application as a “useful assistant” that can “help [users] get over the initial hump and into productive flow.”

“It will become an additional required skill to be able to prompt tools like ChatGPT to produce the best output,” the firm adds, “and it will need human moderation (for now at least) for correctness.”

The use of ChatGPT beyond finance

Outside of financial services, firms across industries have put ChatGPT to work on certain tasks. Time will tell whether the bot becomes a key part of these companies’ day-to-day operations; given ChatGPT’s popularity, a high-profile announcement of its deployment in a given field could be a short-term marketing effort rather than a serious commitment.

The dating app OkCupid has used the bot to generate questions for users. The answers given are then used by OkCupid’s algorithm to match compatible people. ChatGPT came up with questions including “Are you more of an introvert or extrovert?”, “Are you a morning or night person?”, “What’s your favourite way to spend a weekend?” and “What do you value most in a partner?”

More serious users might include the Reach Group, which publishes the Daily Mirror and Daily Express. Reach has set up a working group to examine how ChatGPT could be used to support its editorial team and reduce the time spent on more routine coverage of weather and traffic.

Law firm Allen & Overy has been using a modified form of the bot to help its staff write contracts and memos. The company has partnered with a legal AI startup called Harvey, which is based on ChatGPT’s technology and received funding from ChatGPT developer OpenAI’s startup fund. A&O says that Harvey’s outputs must be thoroughly checked by lawyers and that it is not part of a “cost-cutting” effort.

JP Morgan has banned the use of ChatGPT among its employees, citing compliance concerns, according to CNN. Beyond that, analysts from the US bank have opined on the technology’s potential impacts on IT and consulting firms. Generative AI like ChatGPT is likely to slow market share gains for firms like Infosys and Wipro, the bank said in a research note, while Deloitte and Accenture are expected to benefit.

ChatGPT and risk

Despite ChatGPT’s clear potential and wide range of uses, it has drawbacks that any prudent financial institution will need to understand before the bot can be used in a systematic way.

1. ChatGPT can ‘hallucinate’

Although it’s among the most advanced consumer chatbots available, ChatGPT doesn’t really ‘understand’ the information it spits out. Rather, it is trained on vast quantities of text and makes statistics-based inferences to produce natural-seeming responses to user inputs. As a result, users can receive statements that are plausible and grammatical but entirely incorrect.
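To make that mechanism concrete, here is a deliberately toy Python sketch of next-word sampling. The vocabulary and probabilities are invented purely for illustration; real models like ChatGPT score enormous vocabularies with deep neural networks rather than a lookup table, but the core idea of choosing likely continuations rather than consulting facts is the same.

import random

# A toy 'language model': for each word, a probability distribution over
# possible next words. Real models learn such distributions from billions
# of words of text; these numbers are made up for illustration.
next_word_probs = {
    "the": {"market": 0.4, "fund": 0.35, "paper": 0.25},
    "market": {"rose": 0.5, "fell": 0.3, "was": 0.2},
    "fund": {"outperformed": 0.6, "underperformed": 0.4},
}

def generate(word, steps=3):
    """Sample a short continuation, one word at a time."""
    output = [word]
    for _ in range(steps):
        dist = next_word_probs.get(output[-1])
        if dist is None:  # no known continuation for this word
            break
        words, weights = zip(*dist.items())
        output.append(random.choices(words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the market fell": fluent, but nothing was 'known'

Nothing in this process checks the output against the real world, which is why fluency and truth can come apart, as the following examples show.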

I asked ChatGPT to tell me about AI researcher Manuela Veloso’s 2017 paper, ‘The Core of the Machine’. Veloso is a real computer scientist, but the paper is one I made up. ChatGPT nevertheless told me that in ‘The Core of the Machine’, Veloso “proposes a new framework for building intelligent autonomous agents that can learn and adapt to changing environments”. The response sounds entirely reasonable, but at no point does the bot mention that the paper it is discussing doesn’t exist. In another example, the US business magazine Fast Company asked ChatGPT to produce a quarterly earnings report for Tesla; the result was plausibly worded, but the numbers themselves didn’t correspond to reality.

This inability to separate fact from fiction – the ‘hallucination’ phenomenon – is well documented, so asset managers and other financial companies will, at the very least, need to fact-check any writing produced by ChatGPT.

2. ChatGPT is at an early stage

ChatGPT is currently a browser-based application. OpenAI has yet to release the tool as a standalone download, so anybody wishing to use it must log on to the website and contend with wait times and crashes, which are frequent at the moment. Commercial API releases are only just emerging, so it may be some time before the cautious, highly regulated world of financial services can fully embrace them.
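For readers curious about what programmatic access involves, here is a minimal sketch using OpenAI’s Python client and the chat endpoint released in early March 2023. The model name reflects OpenAI’s documentation at the time of writing; the API key and the prompts are placeholders, and error handling is omitted.

import openai  # OpenAI's Python client, openai~=0.27 at the time of writing

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

# Send a short conversation to the chat completions endpoint and print the reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a financial copywriter."},
        {"role": "user", "content": "Draft a two-sentence summary of a UK equity fund's quarter."},
    ],
)

print(response["choices"][0]["message"]["content"])

Any enterprise deployment would need to add authentication, logging and data-handling controls on top of calls like this, which is precisely where the compliance questions discussed below come in.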

According to a recent leak, OpenAI plans to roll out a business-friendly form of the tool, allowing companies to run their own dedicated versions of ChatGPT, though no launch date has been disclosed. The leak also included details of OpenAI’s planned rental-pricing structure, new capabilities in the business package and more. While an enterprise version of the tool is likely to come with security assurances, the bot’s browser-only form should limit usage for now.

3. ChatGPT outputs could infringe copyright or data protection laws

According to law firm Stephenson Harwood, the massive datasets used to train ChatGPT are likely to contain both copyrighted material and personal data. The reproduction of copyright-protected content and the processing of personal data – even if inadvertent – can have serious consequences for financial companies, which are subject to both the laws of the land and stringent industry-specific regulation.

“The risk of [AI] output infringing the rights of a third party, or at least of action being taken, is not just theoretical,” says Stephenson Harwood in its coverage. “In January 2023, Getty Images started proceedings against Stability AI […] claiming that Stability had copied millions of images from [Getty’s] database to train image generation models.” Asset managers would likely need assurances that they would not be targeted in any ChatGPT-related suits involving copyright or personal data before they felt comfortable using the bot for day-to-day processes.

4. ChatGPT could be a compliance risk

While ChatGPT’s accessibility and sophistication have helped it to make a splash, it’s important to note that all major financial companies have been using AI applications for years across a range of functions. Data scrapers are deployed to identify suspicious transactions as part of banks’ anti-money laundering efforts; neural networks have been used to assist with aspects of trading since as early as the 1980s; and robo-advisers – which debuted over a decade ago – provide investment guidance to clients with limited human intervention. The AI research publications library maintained by JP Morgan gives a sense of how deeply embedded AI is in finance, with applications spanning fields from risk management and market simulation to payment pattern analysis and cryptography.

As the technology advances, AI-specific rules are being discussed by the UK’s key regulators, the Bank of England and the FCA. Meanwhile, any AI applications developed outside a given firm are subject to third-party supplier rules, part of a detailed and extensive body of existing regulation that financial services firms must follow. As mentioned above, JP Morgan has banned the use of ChatGPT in its current form, illustrating the seriousness with which financial companies approach potentially risky technology. To remain compliant, firms must have a high degree of confidence in all their outsourcing suppliers and a deep knowledge of how those suppliers operate and process data.

5. ChatGPT is not a human

This last drawback may be particularly important to financial companies that are keen to differentiate themselves in their communications; ChatGPT, after all, is available to everybody. While the tool is useful for proofing, summarisation and quick writing tasks, there is a certain “bot vibe” to its prose that companies – and readers – may eventually find tedious.

There’s a wealth of information out there on generative AI, and the field is evolving rapidly. ChatGPT and tools like it are likely to have seismic impacts on a range of industries, but it’s hard to predict right now exactly what shape those impacts will take. Unanswered questions abound: what will AI-specific regulation look like, in financial services and beyond? How should businesses transition into the potentially unfamiliar territory of AI-centric work? And how will legislators respond, especially if large-scale adoption of the technology leads to mass layoffs?

For Copylab, generative AI could be transformative. While the financial industry’s adoption of tools like ChatGPT will depend on decisions made in compliance departments and on improvements in the tools themselves, Copylab’s years of expertise and openness to new technology equip us to adapt to a changing world. But whether ChatGPT ends up as a nifty proofing tool, a full-scale AI writing partner or just a flash in the pan, Copylab’s team of writers and editors will remain focused on delivering industry-leading financial communications.