Smart Stuff

Trust Labyrinth

Generative AI - A Problem Of Trust


Suddenly, it seems that artificial intelligence is everywhere. The use of AI in business is nothing new: companies have been using AI technology to cut costs and increase efficiency for years. Only now, though, is it being propagated throughout your computing environment, thanks to the recent surge in the generative AI market. You can now ask for AI assistance with a wide variety of both routine and complex tasks in your daily work life.

Are Chatbots Trustworthy?

Trust me, I'm a Chatbot

We are told that AI can be a powerful tool in the daily work environment, helping to automate repetitive tasks, improve decision-making, and enhance productivity. This sounds good, but from our experience with AI to date, we believe it is a long way from delivering some of the promised benefits. The AI marketing machine is very convincing. Like Wikipedia, OpenAI chatbots seem very trustworthy, but the reality is that they are often a source of misinformation. AI chatbots are designed to answer user queries based on the data and algorithms they have been trained on. That data, however, may not always be accurate, complete, or up to date, and it may be biased. Depending on how a question is posed, the model may misinterpret it and give wrong information, which can have serious consequences if acted upon. So, while chatbots can be incredibly useful tools, they are not a replacement for human judgment and critical thinking.

Like many companies, we would like to use AI to enhance the capability of our technical help desk. We have an exhaustive knowledge base built from years of experience providing Tier 3 Netezza support, and we also support other products and services. Our help desk and content management systems are AI-enabled. Our core strength is problem-solving and resolution, and we have a platform and processes in place for extending technical support to any technology. One way AI might assist would be to interrogate a new ticket and, based on what it finds in the knowledge base or in OpenAI, recommend a solution, thus speeding up the time to resolution. Our problem is that we can't trust this information. AI-generated information is often inaccurate or, worse, a complete hallucination. Validating what is useful can be far more time-consuming and frustrating than a traditional internet or technical-manual search would be. So, in this case, generative AI can have a negative effect on the productivity of the technical experts.
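To make the idea concrete, the ticket-triage approach described above can be sketched as a retrieval-first policy: only surface a suggestion when it can be grounded in the company's own validated knowledge base, and otherwise escalate to a human. This is a minimal illustrative sketch, not our production system; the knowledge-base entries, the keyword-overlap scoring, and the threshold are all hypothetical stand-ins (a real system would use semantic search over the actual knowledge base).

```python
# Illustrative sketch of retrieval-first ticket triage. All entries, scores,
# and thresholds below are made-up examples, not real support content.

KNOWLEDGE_BASE = {
    "backup fails with disk full": "Clear old backup sets and re-run the backup.",
    "query hangs on large join": "Check distribution keys and regenerate statistics.",
}

def kb_match_score(ticket: str, kb_problem: str) -> float:
    """Crude keyword-overlap score; a real system would use semantic search."""
    ticket_words = set(ticket.lower().split())
    problem_words = set(kb_problem.lower().split())
    return len(ticket_words & problem_words) / max(len(problem_words), 1)

def triage(ticket: str, threshold: float = 0.5) -> str:
    best_solution, best_score = None, 0.0
    for problem, solution in KNOWLEDGE_BASE.items():
        score = kb_match_score(ticket, problem)
        if score > best_score:
            best_solution, best_score = solution, score
    if best_score >= threshold:
        # Grounded in the validated knowledge base: safe to suggest to an engineer.
        return f"Suggested fix (KB-grounded): {best_solution}"
    # No trustworthy grounding: never send an ungrounded AI answer to a client.
    return "Escalate to a human engineer for review."
```

The key design point is the escalation branch: an ungrounded generative answer is treated as untrusted by default, which is exactly the check that was missing in the incident described below.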

The Reputational Risks Inherent in Generative AI

When customers are paying for Tier 3 support, they should rightly be alarmed if the Tier 1 support team, with little or no relevant technical expertise, responds to their tickets with an AI-generated response. This happened very recently with one of our important Netezza customers. The OpenAI response, which was a complete hallucination, was emailed directly to the client. Had the client followed its recommendations, a severe breach of their security policy could have occurred. Undoing this ill-informed action required a substantial amount of work on our part. As the saying goes, a little knowledge is a dangerous thing.

What is gaining popularity is for companies to offer only Tier 0 support in the form of chatbots or automated responses, all in the name of enhancing the customer experience. This is really nothing more than a new form of outsourcing: cutting headcount to reduce costs. It may work in some scenarios, but unless the system is easy to interact with, provides information that actually fixes the customer's problem, or offers the option to speak to a real person, it can severely damage your reputation.

Applying AI to Data Analytics

In our own field of data analytics, AI is already being used to analyse large amounts of data and provide insights that can help with decision-making. There have nonetheless been some spectacular false starts. IBM's Watson, for example, was supposed to revolutionise health care by providing insight to oncologists about care for cancer patients, delivering insights to pharmaceutical companies about drug development, helping match patients with clinical trials, and more. It sounded revolutionary, but it never really worked, for the simple reason that doctors couldn't trust the information it provided, and all the manual verification it needed negated any benefits.

New generations of generative AI have come along, but we believe the problem continues. You can automate certain tasks, but there are several things that AI cannot do. It still requires human experts to understand the context of the data, interpret the results, and make informed decisions based on the insights. While AI can handle simple and routine data analysis tasks, complex analysis requires a higher level of expertise. Data analysts and data scientists can identify patterns, trends, and correlations that may not be immediately apparent to AI models. If, however, the end goal is to percolate this capability throughout an organisation, ordinary people need to be able to query AI in natural language, and curating the data for that purpose is no mean task.


So, in summary, whilst the recent explosion in generative AI is exciting and offers many opportunities for companies and individuals to work smarter, the technology isn't completely reliable yet and can never fully replace a human. You need to be able to trust the information you get and have checks in place to avoid its misuse. As a business, we see many opportunities to develop methods and systems that plug that trust gap, so that ordinary people can ask natural-language questions and get accurate answers. It is definitely looking like a promising and exciting journey ahead.

Want to comment? Why not get in touch with us.

Author Bio