AI21 Labs has unveiled “Contextual Answers,” a question-answering engine for large language models (LLMs).
The new engine lets users connect their own data libraries to an LLM, constraining the model's outputs to that data. ChatGPT and comparable artificial intelligence (AI) solutions have fundamentally changed the AI market, yet many enterprises find adoption challenging due to a lack of confidence in the models' outputs.
According to one study, employees spend more than half of their working hours searching for information. The possibilities for chatbots that can perform search operations are enormous, but most aren't designed with the enterprise in mind.
Contextual Answers allows customers to pipe in their own data and document libraries, bridging the gap between chatbots made for general use and enterprise-level question-answering services.
According to a blog post from AI21, Contextual Answers allows users to steer AI responses without retraining models, removing some of the largest barriers to adoption:
“Most businesses struggle to adopt [AI], citing cost, complexity and lack of the models’ specialization in their organizational data, leading to responses that are incorrect, ‘hallucinated’ or inappropriate for the context.”
Teaching LLMs such as OpenAI's ChatGPT or Google's Bard to communicate a lack of confidence remains one of the unsolved problems in making them practical.
A chatbot will typically respond to a user’s query even if its data set lacks sufficient knowledge to provide factual information. LLMs frequently output false information in these situations instead of a low-confidence response like “I don’t know.”
Researchers call these outputs "hallucinations" because the machines generate information that does not exist in their data sets, much like people who perceive things that aren't there.
AI21 states that Contextual Answers should eliminate the hallucination issue by returning information only when it is supported by user-provided documentation, and declining to answer otherwise.
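AI21 has not published implementation details, but conceptually this resembles retrieval with abstention: only answer from the user's documents when one is relevant enough, and otherwise say nothing. A minimal sketch of that idea (the tokenizer, similarity measure, threshold and sample documents are all illustrative, not AI21's actual method):

```python
import math
import re
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts over lowercase alphanumeric tokens
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two term-count vectors
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question, documents, threshold=0.2):
    """Return the most relevant passage, or abstain if none is relevant."""
    q = vectorize(question)
    best_doc, best_score = None, 0.0
    for doc in documents:
        score = cosine(q, vectorize(doc))
        if score > best_score:
            best_doc, best_score = doc, score
    if best_score < threshold:
        # Abstain rather than hallucinate an unsupported answer
        return "Answer is not in documents."
    return best_doc

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]
print(answer("What is the refund policy?", docs))  # grounded answer
print(answer("Who is the CEO?", docs))             # abstains
```

A production system would use an LLM over retrieved passages rather than returning them verbatim, but the abstention step is the part that addresses hallucination.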
The introduction of generative pretrained transformer (GPT) systems has had various effects in industries where precision is more crucial than automation, such as finance and law.
Because GPT systems tend to confuse or hallucinate information, even when connected to the internet and capable of citing sources, experts continue to advise caution when using them in the finance industry.
In the legal field, a lawyer who relied on ChatGPT outputs during a lawsuit now faces penalties and sanctions. AI21's approach mitigates the hallucination issue by grounding AI systems in relevant data up front, intervening before the system can produce false information.
This could lead to widespread adoption, especially in the fintech sector, where traditional financial institutions have been hesitant to use GPT technology and the cryptocurrency and blockchain communities have had mixed success using chatbots.