In brief
The most common question people asked Accomplish in 2023 was whether we use artificial intelligence (AI) to analyse client data from the Client Behavior Benchmark and create recommendations. The answer is no. Read on to discover why.
Common uses for AI
AI is an umbrella term for a growing number of capabilities and, for a digital benchmarking firm like ours, three categories of tools stand out as worthy of evaluation.
Supporting client interactions – this category includes writing assistants that check the quality of planned communications (e.g. Grammarly), notetaking applications for client meetings (e.g. Otter.ai), and real-time support for colleagues with queries about the business (e.g. SecondBrain). We actively encourage the first; we avoid the second because we believe recording client meetings may discourage debate; and we see no business case (yet) for the third, though this may change as we continue to grow.
Generating digital marketing content – we have looked at applications like Jasper.ai that create text content, DALL·E 2 for images, Tome for slide decks, and Lumen5 for videos. We do not prohibit applications like these, but we do not use them because we want our brand to remain distinctive.
Performing analytics and creating insights and recommendations – while the first two categories of capability are peripheral to benchmarking, this third group of tools goes to the heart of our provision of business intelligence. Specifically, will we use AI to analyse client data and create insights and recommendations?
Why Accomplish does not use AI for data analytics and insights
There are two main options we could use to analyse our client data: an off-the-shelf solution (e.g. ChatGPT) or a proprietary AI model. Here are the reasons why we use neither.
An off-the-shelf solution would breach our information security policy
Nothing is more important to us than the security of the data entrusted to us by the asset managers who participate in the Benchmark.
Hypothetically then, what would be the impact of inputting Client Behavior Benchmark data into, for example, a tool like ChatGPT?
In practical terms, entering data into ChatGPT means surrendering control over it. Under OpenAI’s consumer terms, content submitted to the service may be used to train and improve OpenAI’s models, which means confidential input could shape the outputs the model later produces for other users.
Would we do that? Would we input client confidential data into a tool under those terms? No, never. It would be a breach of our legal agreements with clients and, therefore, a disciplinary matter likely to result in termination, at a minimum.
As a result, our information security policy expressly prohibits the use of an off-the-shelf AI application for the purposes of analysing our benchmarking data.
The business case for a proprietary AI model is unconvincing and, at best, a few years away
A proprietary model would overcome the problem of information security, but two other issues would remain that result in an unconvincing business case.
The first relates to the length of the data series needed. On a quarterly basis, we collect data on the effect asset managers have on institutional client behavior. This gives us four cuts of data per year and, at the time of writing, the data series is approaching the end of its third year.
To begin predicting future behaviors and establishing statistical correlations between behaviors, we believe any mathematical model would need a minimum of between 20 and 40 data points. This would require five to ten years’ worth of data, placing this decision between two and seven years in the future. Unfortunately, no amount of computing power will speed up this quarterly accretion of data.
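The timeline above follows from simple arithmetic; here is a quick sketch of it (the quarterly cadence and the 20–40 data-point thresholds are taken from the text; the variable names are illustrative only):

```python
# Quarterly collection gives four data points per year (per the text).
POINTS_PER_YEAR = 4

# Assumed minimum data points a model would need: 20 to 40 (per the text).
min_points, max_points = 20, 40

# Years of data required to reach each threshold.
years_needed_low = min_points / POINTS_PER_YEAR    # 5 years
years_needed_high = max_points / POINTS_PER_YEAR   # 10 years

# The series is approaching the end of its third year.
years_collected = 3

# Remaining wait before a model becomes feasible.
wait_low = years_needed_low - years_collected      # 2 more years
wait_high = years_needed_high - years_collected    # 7 more years

print(f"Decision point: {wait_low:.0f} to {wait_high:.0f} years away")
```

No amount of computing power changes this arithmetic: the bottleneck is the quarterly accretion of data itself.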
Even then, a second reason leaves us unconvinced that AI could outperform already-automated data science tools. This is because the advantage AI has over them is its ability to learn from similar datasets, yet the Client Behavior Benchmark’s dataset is unique. To our knowledge, there is no other body of information on the effects asset managers have on whether institutional clients buy, stay, and buy more. This means that any proprietary AI model would have no other similar datasets from which to learn, limiting its advantage over existing data science tools.
To conclude …
For the central task of analysing client data and creating insights, we prohibit the use of publicly available ‘off-the-shelf’ tools, and we are yet to be convinced by the business case for a proprietary model.
As a result, an asset manager that uses the Client Behavior Benchmark can rest assured that its data will never surface in response to a query someone enters into ChatGPT or any other AI tool.
Of course, any sensible company keeps its policies under review should circumstances change. To be clear, though: we have no plans to amend this policy for the foreseeable future and, should that ever change, we commit here to seeking our users’ prior and explicit approval.