A new report from Deloitte sheds light on the complex landscape of generative AI adoption in the enterprise, revealing both significant progress and persistent challenges. The survey, titled “The State of Generative AI in the Enterprise: Now decides next,” gathered insights from 2,770 business and technology leaders across 14 countries and six industries.
The survey is the latest in the company's quarterly series on the state of gen AI in the enterprise. The first edition of the survey, released in January, found that business leaders were concerned about societal impact and tech talent.
The new report paints a picture of organizations striving to capitalize on gen AI’s potential while grappling with issues of scalability, data management, risk mitigation and value measurement. It highlights a critical juncture where early successes are driving increased investments, but the path to widespread implementation remains fraught with obstacles.
Key findings from the report include:
- 67% of organizations are increasing investments in gen AI due to strong early value
- 68% have moved 30% or fewer of their gen AI experiments into production
- 75% have increased investments in data lifecycle management for gen AI
- Only 23% feel highly prepared for gen AI-related risk management and governance challenges
- 41% struggle to define and measure exact impacts of gen AI efforts
- 55% have avoided certain gen AI use cases due to data-related issues
“I see a lot of our clients are prototyping and piloting, but not yet getting to production,” Kieran Norton, principal at Deloitte, told VentureBeat. “A lot of that relates to concerns around both data quality and implications thereof, including bias getting into a model.”
How risk concerns are impacting enterprise AI deployments
The Deloitte survey is one of many in recent weeks that aim to detail current enterprise AI usage. PwC released a report last week showing that while interest in gen AI is high, organizations lag when it comes to assessing AI risks.
The Deloitte report goes a step further, noting that AI risks may well be holding back enterprise deployments. According to Norton, executives have a significant level of concern, and they're not willing to move forward until they feel those concerns can be addressed.
The Deloitte report highlights key risks including data quality, bias, security, trust, privacy and regulatory compliance. While these are not entirely new domains, Norton emphasized that gen AI introduces its own nuances. He believes organizations can leverage their existing risk management programs to address these challenges, but acknowledged the need to enhance certain practices, such as data quality management, to mitigate the specific risks posed by generative AI.
“There are some nuances that have to be addressed, but it’s still core governance at the end of the day,” Norton said. “Data quality has been a concern for a long time and so maybe you need to dial up what you’re doing around data quality in order to mitigate the risk.”
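In practice, "dialing up" data quality often means adding concrete gates to the data pipeline feeding gen AI systems. The sketch below is a minimal, hypothetical illustration of such a gate; the field names, approved sources and rules are assumptions made for the example, not anything Deloitte prescribes.

```python
# Minimal sketch of a data quality gate: screen records before they
# reach a gen AI training or retrieval pipeline. Field names, sources
# and rules are hypothetical examples.
import re

def passes_quality_gate(record: dict) -> bool:
    """Reject records that are too short, contain raw PII, or come
    from an unvetted source."""
    text = record.get("text", "")
    if len(text.split()) < 5:  # too short to be meaningful
        return False
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # looks like a US SSN
        return False
    if record.get("source") not in {"approved_wiki", "support_logs"}:
        return False  # unvetted provenance
    return True

records = [
    {"text": "Customer asked about invoice 42 and refund timing.", "source": "support_logs"},
    {"text": "SSN 123-45-6789", "source": "support_logs"},
    {"text": "ok", "source": "web_scrape"},
]
clean = [r for r in records if passes_quality_gate(r)]
print(f"kept {len(clean)} of {len(records)} records")  # kept 1 of 3
```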
One particular concern is the risk of hallucination, where a gen AI model produces incorrect or nonsensical outputs. Norton explained that this risk is often tied to a lack of understanding about the data being fed into the models. He suggested that for certain use cases, organizations will turn to smaller, more targeted language models with specific training to reduce the risk of hallucination.
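One common pattern behind that approach is grounding: constraining a smaller, domain-tuned model to answer only from vetted documents and refusing when nothing relevant is retrieved. The sketch below illustrates the idea only; the document store, the naive retrieval, and `call_small_model` are hypothetical placeholders, not a real API.

```python
# Minimal grounding sketch: answer only from vetted documents and
# refuse otherwise, cutting hallucination risk. All names here are
# hypothetical placeholders.
from typing import Optional

VETTED_DOCS = {
    "refund_policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def call_small_model(prompt: str) -> str:
    # Stand-in for a domain-tuned small language model (assumption,
    # not a real API); a real deployment would call the model here.
    return "[grounded answer based on: " + prompt.splitlines()[1] + "]"

def retrieve(question: str) -> Optional[str]:
    """Naive keyword retrieval over the curated document store."""
    words = {w for w in question.lower().split() if len(w) > 3}
    for text in VETTED_DOCS.values():
        if words & set(text.lower().split()):
            return text
    return None

def answer(question: str) -> str:
    context = retrieve(question)
    if context is None:
        # Refusing beats guessing when no grounded context exists.
        return "No vetted information available for that question."
    prompt = f"Answer strictly from this context:\n{context}\nQ: {question}"
    return call_small_model(prompt)

print(answer("How long does shipping take?"))  # grounded answer
print(answer("Who is the CEO?"))               # refusal
```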
How enterprises can demonstrate the value of gen AI initiatives
One of the big findings in the report was that 41% of organizations struggle to effectively measure the exact impact of their gen AI efforts. Even worse, only 16% produce regular reports for their company's CFO detailing the value created by gen AI.
Norton explained that this difficulty stems from the diverse range of use cases and the need for a more granular, use-case-specific approach.
"If you have 20 different use cases you're exploring across different parts of the organization, you know, you probably have apples, oranges, bananas and pineapples, so you're not going to be able to measure all those in a similar fashion," Norton said.
Instead, Norton recommends that organizations define key performance indicators (KPIs) for each specific use case, targeting the business problem it is meant to solve. This could include metrics like productivity, efficiency or user experience improvements, depending on the particular use case. He suggests that organizations first identify specific problem areas in the business, then measure how well gen AI addresses them.
“I think it’s really breaking it down to the use case level, more than it is approaching it as an overall portfolio,” he said.
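One lightweight way to operationalize that advice is to attach KPIs to each use case rather than to a single portfolio-wide metric. The sketch below is a minimal illustration of that structure; the use cases, metric names and numbers are invented for the example, not drawn from the survey.

```python
# Minimal sketch of per-use-case KPI tracking. All use cases, metrics
# and numbers are hypothetical examples, not survey data.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    business_problem: str
    kpis: dict = field(default_factory=dict)  # metric -> (baseline, current)

    def report(self) -> str:
        lines = [f"{self.name}: {self.business_problem}"]
        for metric, (baseline, current) in self.kpis.items():
            change = (current - baseline) / baseline * 100
            lines.append(f"  {metric}: {baseline} -> {current} ({change:+.1f}%)")
        return "\n".join(lines)

# Each use case gets KPIs tied to its own business problem, so a
# support bot and a code assistant are not forced onto one yardstick.
support_bot = UseCase(
    "Support triage", "Reduce average ticket handling time",
    {"avg_handle_minutes": (18.0, 12.5)},
)
code_assistant = UseCase(
    "Code assistant", "Shorten pull-request review cycles",
    {"review_hours": (26.0, 19.0)},
)

for uc in (support_bot, code_assistant):
    print(uc.report())
```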