How companies are embracing generative AI for employees...or not

Companies are struggling to deal with the rapid rise of generative AI, with some rushing to embrace the technology as workflow tools for employees while others shun it -- at least for now.

As generative artificial intelligence -- the technology that underpins ChatGPT and similar tools -- seeps into seemingly every corner of the internet, large corporations are grappling with whether the increased efficiency it offers outweighs possible copyright and security risks. Some companies are enacting internal bans on generative AI tools as they work to better understand the technology, and others have already begun to introduce the trendy tech to employees in their own ways.

Banning AI? Embracing AI? Or bracing for AI?

Many prominent companies have entirely blocked internal ChatGPT use, including JPMorgan Chase, Northrop Grumman, Apple, Verizon, Spotify and Accenture, according to AI content detector Originality.AI, with several citing privacy and security concerns. Business leaders have also expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially surface in the tool's outputs elsewhere.

When users input information into these tools, "[y]ou don't know how it's then going to be used," Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN in March. "That raises particularly high concerns for companies." As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, "I think the opportunity for company trade secrets to get dropped into these different various AI's is just going to increase."

But the corporate hesitancy to welcome generative AI could be temporary.

"Companies that are on the list of banning generative AI also have working groups internally that are exploring the usage of AI," Jonathan Gillham, CEO of Originality.AI, told CNN, highlighting how companies in more risk-averse industries have been quicker to take action against the tech while figuring out the best approach for responsible usage. "Giving all of their staff access to ChatGPT and saying 'have fun' is too much of an uncontrolled risk for them to take, but it doesn't mean that they're not saying, 'holy crap, look at the 10x, 100x efficiency that we can lock when we find out how to do this in a way that makes all the stakeholders happy" in departments such as legal, finance and accounting.

Among media companies that produce news, Insider editor-in-chief Nicholas Carlson has encouraged reporters to find ways to use AI in the newsroom. "A tsunami is coming," he said in April. "We can either ride it or get wiped out by it. But it's going to be really fun to ride it, and it's going to make us faster and better." The organization has discouraged staff from putting source details and other sensitive information into ChatGPT. Newspaper chain Gannett, meanwhile, paused its use of an artificial intelligence tool called LedeAI to write high school sports stories after the tool made several mistakes in stories published in The Columbus Dispatch in August.

Of the companies currently banning ChatGPT, some are discussing future usage once security concerns are addressed. UBS estimated that ChatGPT reached 100 million monthly active users in January, just two months after its launch.

That rapid growth initially left large companies scrambling to find ways to integrate the technology responsibly, a process that moves slowly at their scale. Meanwhile, website visits to ChatGPT dropped for the third month in a row in August, putting pressure on large tech companies to sustain popular interest in the tools and to find new enterprise applications and revenue models for generative AI products.

"We at JPMorgan Chase will not roll out genAI until we can mitigate all of the risks," Larry Feinsmith, JPM's head of global tech strategy, innovation, and partnerships said at the Databricks Data + AI Summit in June. "We're excited, we're working through those risks as we speak, but we won't roll it out until we can do this in an entirely responsible manner, and it's going to take time." Northrop Grumman said it doesn't allow internal data on external platforms "until those tools are fully vetted," according to a March report from the Wall Street Journal. Verizon also told employees in a public address in February that ChatGPT is banned "[a]s it currently stands" due to security risks but that the company wants to "safely embrace emerging technology."

Companies creating custom AI tools

"They're not just waiting to sort things out. I think they're actively working on integrating AI into their business processes separately, but they're just doing so in a way that doesn't compromise their information," Vern Glaser, Associate Professor of Entrepreneurship and Family Enterprise at the University of Alberta, told CNN. "What you'll see with a lot of the companies that will be using AI strategies, particularly those who have their own unique content, they're going to end up creating their custom version of generative AI."

Several companies -- and even ChatGPT itself -- seem to have already found their own answers to the corporate world's genAI security dilemma.

Walmart introduced an internal "My Assistant" tool for 50,000 corporate employees that helps with repetitive tasks and creative ideas, according to an August LinkedIn post from Cheryl Ainoa, Walmart's EVP of New Businesses and Emerging Technologies, and Donna Morris, Chief People Officer. The tool is intended to boost productivity and eventually help with new worker orientation, according to the post.

Consulting giants McKinsey, PwC and EY are also welcoming genAI through internal, private methods. PwC announced a "Generative AI factory" and launched its own "ChatPwC" tool in August, powered by OpenAI tech, to help employees with tax questions and regulations as part of a $1 billion investment to scale its AI capabilities.

McKinsey introduced "Lilli" in August, a genAI solution where employees can pose questions; the system then aggregates all of the firm's knowledge, scans the data to identify relevant content, summarizes the main points and offers experts. "With Lilli, we can use technology to access and leverage our entire body of knowledge and assets to drive new levels of productivity," Jacky Wright, a McKinsey senior partner and chief technology and platform officer, wrote in the announcement.

EY is investing $1.4 billion in the technology, including "EY.ai EYQ," an in-house large language model, and AI training for employees, according to a September press release.

Tools like My Assistant, ChatPwC and Lilli address some of the corporate concerns surrounding genAI systems through custom adaptations of the technology, offering employees a private, closed alternative that capitalizes on genAI's ability to increase efficiency while reducing the risk of copyright or security leaks.

OpenAI and Microsoft launch 'enterprise' AI tools

The launch of ChatGPT Enterprise may also help quell some fears. The new version of OpenAI's tool, announced in August, is built specifically for businesses, promising "enterprise-grade security and privacy" combined with "the most powerful version of ChatGPT yet" for companies looking to jump on the generative AI bandwagon, according to a company blog post.

The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

In response to the security concern raised by many companies, that proprietary information employees drop into ChatGPT could later surface in the tool's outputs elsewhere, OpenAI's announcement blog post for ChatGPT Enterprise states that it does "not train on your business data or conversations, and our models don't learn from your usage."

In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised many of the same security assurances that ChatGPT Enterprise is now touting -- namely, that users' chat data will not be used to train AI models.

It is still unclear whether the new tools will be enough to convince corporate America that it is time to fully embrace generative AI, though experts agree the tech's inevitable entry into the workplace will take time and strategy.

"I don't think it's that companies are against AI and against machine learning, per se. I think most companies are going to be trying to use this type of technology, but they have to be careful with it because of the impacts on intellectual property," Glaser said.
