AI Impact Tour: How companies are going from ideation to real-world implementation


The VentureBeat AI Impact Tour made a stop in San Francisco, and this time the conversations dove into real-world applications of generative AI and what it actually takes to deploy a gen AI initiative. Matt Marshall, CEO of VentureBeat, pushed past the hype to get serious about gen AI deployment with keynote speaker Ed Anuff, DataStax’s chief product officer, as well as Nicole Kaufman, chief transformation officer at Genesis Health, and Tisson Matthew, CEO of Skypoint.

The conversation is crucial as enterprises graduate from the experimental, ideation phase that characterized the last year of generative AI excitement. They’re moving beyond testing out the possibilities of ChatGPT and natural language interfaces in general, and starting to answer the important questions: how do we take that power and integrate it with our own business-critical data, and then how do we put it into production?

“What we’re seeing is an AI maturity model emerging that shows that companies are moving from one-off projects that are primarily about getting a few quick wins and educating the company about the potential,” Anuff said. “You are seeing critical AI business initiatives where typically there’s a business champion going and saying, how do we deploy AI in a critical, high-visibility, high-impact way? That usually takes a little bit more time, but the payoffs are there and those are where the transformative aspects will happen.”

Gen AI is useful for a large number of use cases, spanning from back office to front office, to a public website, to mobile. And organizations may still be using terms like “chatbots” or “conversational interface,” but at the end of the day, what they’re building is a knowledge app — figuring out how to retrieve knowledge interactively in a situation-appropriate setting. The question becomes whether to build it in-house, or use one of the increasing number of off-the-shelf products.

Pre-production considerations

For applications like customer support or financial analysis, many companies want to leverage gen AI to set up an application that can generate results from internal data or reports, Anuff said.

“Those types of applications, depending on how much data you have, depending on the nature of the customized interface, you may use something that’s off-the-shelf,” he said. “There are solutions from Amazon and others that simply provide you with a way to upload a bunch of documents and have the chatbot respond against. And that’s a very good way of getting an out-of-the-box experience in a very short amount of time.”

But as you move from back-office, small-team applications to use cases that are critical to your core business activities, especially those that are external-facing, off-the-shelf no longer works — particularly for use cases that require a lot of customized data curation. Anuff pointed to healthcare applications, which connect the gen AI interface to data sources so it can respond in real time as information changes or is updated, such as patient readings in a hospital setting. Anuff also spoke about the AI agents many Asia-Pacific financial institutions are deploying, which make chat-based financial planning directly accessible from financial statements.

“That’s not something you get out-of-the-box,” he said. “That is a custom bespoke AI RAG (retrieval-augmented generation) application that is against your core data assets. If you’re a Home Depot or Best Buy, you don’t build your website on Wix. You’ve got thousands of web engineers that are building a custom tailored experience because it’s core to your brand and your core business activities.”

Calculating readiness and cost

As enterprises move past the ideation stage they start to run into two primary issues, Anuff said.

“First is relevancy, which for many of us dealing with data is somewhat of a new parameter and a new measure, which is just how appropriate are these responses?” he explained. “A lot of it is just relevancy and retrieval issues, inefficiencies or just retrieving the wrong content. And a lot of companies struggle with that. That ends up going and forcing you to rethink in many cases your entire data architecture.”

And that, in large part, impacts the second piece, which is cost. It’s already expensive to find a way to surface relevant, clean results; the next step is determining how much more production will cost.

“As we talk with folks, that’s a really good way of calibrating realistically how close they are to production,” he explained. “If people are still at the stage where they’re struggling with relevancy, we know that they’ve made it past the initial architecture pieces. On the other hand, the production costs, these things do tend to go hand in hand. These are the two big bookends.”

Hallucinations, data and the importance of RAG

The term “hallucinations” gets used whenever a response turns up that seems wrong. It’s fine as a colloquial term, but not every bug or irrelevant response from an AI system is, in fact, a hallucination — it could simply be an error in the training set. Hallucinations happen when an LLM uses its training data as a launch pad to start making assumptions and speculations, and responses start to get fuzzy. But there are ways around that, Anuff said, and part of the answer comes from RAG.

RAG is a natural language processing (NLP) technique that merges retrieval over a knowledge base with generative AI. RAG can also process and consolidate data from an internal knowledge base to return answers that are context-aware and in natural language, rather than just summarizing.
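To make the retrieval half of that pipeline concrete, here is a minimal sketch of a RAG-style retriever. The knowledge base, the keyword-overlap scoring, and the top-k cutoff are all illustrative assumptions for this article — production systems typically use vector embeddings and a vector database instead.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# The documents and scoring method below are hypothetical examples,
# not any specific vendor's implementation.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of bare words."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    query_terms = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(query_terms & tokenize(d)),
                    reverse=True)
    return ranked[:k]

knowledge_base = [
    "Patient vitals are updated in the ward dashboard every five minutes.",
    "Quarterly financial statements are archived in the reports portal.",
    "The cafeteria closes at 8 p.m. on weekdays.",
]

context = retrieve("Where can I find the latest patient vitals?",
                   knowledge_base)
```

The retrieved passages in `context` are what get handed to the generative model, rather than letting it answer from its training data alone.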

“[A large language] model is good for two things. One, it has an amazing language faculty in terms of understanding what you said and what you meant,” Anuff said. “The second piece is it is also a knowledge base. How much of its own knowledge it uses is something that the programmer decides. You tell the model, limit your response that you generate to this information I provided as much as possible. You’re doing something called grounding. And what that means is that the chances of hallucination are reduced significantly because the model is not running off on a tangent. It’s essentially using the language faculty of the model to reorganize the content that it already had. This is why RAG and the variants of RAG have become such a powerful piece for reducing hallucinations.”

The second and more important reason RAG is critical is because it’s how you get your real-time company data accurately, safely and securely into the model at the time of inference, he added.

“There are other techniques of getting your data in there, but they’re not safe, they’re not real-time, they’re not secure,” he said. “So that’s why you are going to see this model database coupling for a long time to come, whether we call it RAG or not.”
