
Why companies including JPMorgan and Walmart are opting for internal gen AI assistants after initially restricting usage

  • Companies such as JPMorgan Chase and Walmart, which once restricted employees' use of generative AI, are now allowing them to use it.
  • Building a guardrail-equipped internal solution helps deal with privacy concerns.
  • The main risks of employees' use of gen AI are the leaking of sensitive information, hallucinations and industry compliance issues.

Eighteen months after restricting employee use of generative artificial intelligence tools like ChatGPT, JPMorgan Chase CEO Jamie Dimon rolled out an AI assistant of the bank's own making. In an almost full-circle fashion, the solution is built on technology from ChatGPT's maker, OpenAI. The banking giant has already released the service, dubbed LLM Suite, to 60,000 employees for tasks like writing reports and crafting emails.

This transition from restricting employees' use of gen AI to building a guardrail-equipped internal solution is becoming a common theme among companies, both big and small, that are trying to leverage the power of the technology. More than a quarter (27%) of organizations have at least temporarily banned public gen AI applications, according to Cisco's latest Data Privacy Benchmark report, and the majority of businesses have limited what solutions employees can use and how they can use them.

Meanwhile, many of those same companies say their employees tend to use restricted applications anyway, according to a recent survey from cybersecurity company ExtraHop, making it clear that businesses need to offer an alternative solution with ample safeguards.

The main concerns about employees' use of gen AI are the leaking of sensitive information, hallucinations and industry compliance issues, according to a recent report from enterprise data management company Veritas.

The outputs that AI platforms generate don't come out of thin air. On the worry about exposing sensitive information: some large language models store user inputs (the text a person types into the chat for a gen AI platform to respond to) and use them to train, or improve, their generative AI capabilities. That can put sensitive information about a company or its customers at risk, which is why many organizations opt for bans or restrictions until they can figure out how to manage the technology themselves.

Walmart's approach to employee AI

JPMorgan is not the only globally recognized company that went from restricting gen AI to bringing it in house. Last year, Walmart released My Assistant to 50,000 employees and has since expanded its availability to another 25,000 employees in 11 different countries.

"We've been proactive in defining principles that guide our use of AI," said David Glick, senior vice president of Enterprise Business Services at Walmart. Those principles included hearing what associates want help with, including summarizing large swaths of information, helping corporate employees better navigate the enterprise resource planning (ERP) system and automating certain tasks.

While My Assistant is a massive tool with expansive reach, Glick is also focusing on smaller, more precise projects. For example, Walmart is using gen AI to support its Benefits Help Desk team, which helps all employees navigate the 300-page benefits guide. Rather than replacing team members with a fallible chatbot that has limited capabilities, gen AI is enhancing those team members' search and support capabilities.

"It may be that gen AI will be a bunch of little things that we do from the bottom up that makes associates' lives better and makes us more efficient every day," said Glick.

Keeping data from going 'external'

IT managed service provider Ensono is another company that changed its thinking on employee access to gen AI tools. Tim Beerman, the company's chief technology officer, made the decision to restrict employee use of large language models like ChatGPT as a way to protect sensitive data. But Beerman still wanted to leverage "the vast amounts of unstructured data about our company to provide value to our associates without letting this stuff go external," he said.

Ensono spent the summer rolling out its internal AI assistant to all 3,500 associates. The solution is built on GPT-4o but maintains flexibility to bring in various language models based on the type of data they want to use. "We provide our associates a single interface to these tools," Beerman said.

In the meantime, Ensono is moving department by department to develop small language models that cater to more specific use cases, like root cause analysis, request for proposal (RFP) responses and more. For all of this, model flexibility is key, as "what's true today is not going to be true 12 months from now," Beerman said.

Jason Hishmeh, chief technology officer of startup software development company Varyence, previously banned inputting internal, confidential and restricted data into any gen AI solution.

Now, Varyence and the startups it works with use an internal system that automatically prompts data classification when someone creates data such as emails and Word documents. The internal gen AI platform has built-in guardrails to keep non-public data secure, and it serves as an avenue for employees to tinker.
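Varyence's system is proprietary, but the general pattern it describes, classifying data when it is created and gating what can reach an external gen AI model, can be sketched in a few lines. The labels, regex patterns and function names below are illustrative assumptions, not the company's actual implementation:

```python
import re

# Illustrative rule-based classifier: patterns that mark text as non-public.
# Real systems typically combine rules like these with ML-based classifiers.
SENSITIVE_PATTERNS = {
    "restricted": [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like numbers
        re.compile(r"\bconfidential\b", re.IGNORECASE),  # explicit markings
    ],
    "internal": [
        re.compile(r"\binternal use only\b", re.IGNORECASE),
        re.compile(r"@ourcompany\.com\b"),               # internal email addresses
    ],
}

def classify(text: str) -> str:
    """Return the most restrictive matching label, else 'public'."""
    for label in ("restricted", "internal"):  # most restrictive first
        if any(p.search(text) for p in SENSITIVE_PATTERNS[label]):
            return label
    return "public"

def gate_prompt(text: str) -> tuple[bool, str]:
    """Allow only public text to be forwarded to an external model."""
    label = classify(text)
    return (label == "public"), label

# A harmless prompt passes; a marked-up document is blocked before it
# ever leaves the company's systems.
print(gate_prompt("Draft a blog post about our new store openings"))
print(gate_prompt("Summarize this CONFIDENTIAL merger memo"))
```

The guardrail runs before any network call, so blocked text never reaches the model provider, which is the property that lets a company relax an outright ban.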

"Everyone wants to use it, wants to see how it can help their departments be more efficient," Hishmeh said about gen AI, explaining why banning is only a bandage and companies of any size must eventually deliver safe solutions for employees to work with.

As many companies transition from banning gen AI to bringing it in house, the outlook for how this technology serves the workforce may also shift. But with AI years shaping up to move at the pace of dog years amid rapid innovation, companies will have to remain flexible in how they implement gen AI policies, even as safety and security stay central to the conversation.

Copyright CNBC