Should ChatGPT Be Allowed in the Enterprise?

Posted: February 16, 2023

Unless you have been living under a rock for the last couple of months, you’ve at least heard of ChatGPT. If you haven’t tried it yet, you will soon (whether you want to or not). Given Google’s and Microsoft’s aggressive plans to integrate this capability into all of their existing technologies, and the fact that both of these tech giants have a place in nearly every organization’s technology stack, this is something that all businesses will have to address in the very near future.

It’s a classic risk management problem: do the benefits outweigh the potential costs? There are tremendous advantages to leveraging tools like ChatGPT to better enable your workforce and accelerate your teams’ productivity. But there are also some new and unique risks to consider. In this article, we will explore some of those risks. But first, let’s make sure everybody is up to speed.

The Meteoric Rise of LLM Technology – A Brief History

On November 30, 2022, OpenAI released the beta (“research release”) version of ChatGPT to the public. ChatGPT is a Large Language Model (“LLM”), a product of emerging Machine Learning (“ML”) and Artificial Intelligence (“A.I.”) technologies. Think of it as a charismatic and really smart assistant that has all of the “knowledge” of the Internet and can instantly answer any question you ask it.

Within two months of its beta release, ChatGPT had over 100 million active users — the fastest-growing user base of any technology platform in history (far outpacing the previously meteoric rise of platforms like TikTok and Instagram).[i] The unprecedented flood of media attention and public enthusiasm made one thing immediately apparent: the future of consuming information on the Internet would no longer be a process of searching through indexes of web content that may or may not contain the specific answer you are looking for (the way search engines currently work). Instead, it would be a matter of asking a direct question and getting the exact answer you were looking for — and, more importantly, getting that answer instantly.

In December 2022, rumors began circulating that Microsoft would leverage its existing business partnership with OpenAI to power its own products (Bing, Edge, Microsoft Office, and possibly even Windows OS-level integrations), and leaked reports indicated that Google had declared an internal “Code Red” in the wake of the rising popularity of ChatGPT[ii]. In January 2023, Google announced the pending future release of “Bard”, its own A.I.-powered chat system[iii].

Finally, in the weeks since, Microsoft and Google have been locked in a rapidly escalating A.I. arms race, one that will inevitably determine the information superpower of the future and drastically reshape the way we interact with technology. And all of this occurred within the span of a few months!

What Harm Could It Do?

In the simplest terms possible, the biggest risk related to LLM solutions is that they are sometimes wrong. This technology is still in its early stages, and while it is already very powerful, it does make mistakes. Many demonstrations of incorrect statements made by ChatGPT and other LLMs have been floating around social media. But even when these systems are wrong, their presentation is often highly convincing and could easily lead an uninformed person to believe that what they are saying is true and accurate. In a recent post on Twitter, screenshots showed the pre-release Bing chat (an implementation of ChatGPT) arguing with a user that the current year was 2022, not 2023. Little can be said in defense of the chat system’s position, but the Bing chat utility nonetheless proceeded to gaslight the user in an effort to convince them that it was correct.

A Question of Culpability?

Given the mistakes that these systems can make, and the level of conviction with which they present misinformation, it is easy to imagine scenarios in which things go horribly wrong when a user is given incorrect information, assumes that information to be fact, and then acts upon it within a business context. This will inevitably lead to questions of civil and even criminal liability. Consider the following scenarios:

  • A doctor uses an LLM when attempting to diagnose a patient, resulting in a misdiagnosis; the healthcare firm employing the doctor now faces accusations of medical malpractice.
  • A civil engineer uses an LLM to assist in writing code for a traffic control system. Due to a flaw in the code, several major traffic accidents occur at multiple intersections, resulting in the loss of life.
  • A pharmaceutical research organization faces litigation over a flawed product which was synthesized and engineered using flawed information supplied by an LLM.
  • A county government comes under fire after a local judge makes multiple flawed rulings when using an LLM to interpret legislation.

All of these scenarios are hypothetical, but they are realistic and conceivable. There is currently no legislation regulating A.I. and very little legal precedent to suggest who is culpable (the A.I. vendor, the employee, or the company that allowed the use of A.I.) when things go wrong as a result of A.I. influence.

Additional Risk Factors

Even more concerning is that the risk of misinformation is exacerbated by two other key risk factors: reinforcement bias and data poisoning.

  1. Reinforcement Bias – Reinforcement bias occurs when a bias in an A.I. system becomes self-perpetuating. LLMs are trained on large amounts of input text, mostly sourced by scraping content from the Internet. We have now entered a new era in which an increasingly large percentage of media content on the Internet is produced by A.I. This creates a feedback loop: the output of A.I. systems becomes their subsequent input. As A.I.-generated content becomes more prevalent, existing biases will be reinforced and intensified in a self-perpetuating cycle, making early biases, flawed assumptions, and misinformation more commonly accepted and widespread.
  2. Data Poisoning – The way an LLM behaves and operates is largely contingent upon the data used for its training. Many of the data sources used to build these models are public knowledge. Even if you ask ChatGPT directly what data sources it was trained on, it will happily provide you with a list of the sources used for training.

Many of these sources (Wikipedia, Twitter, Reddit, GitHub, etc.) can be updated and/or manipulated by anybody. To the extent that adversaries can manipulate (“poison”) enough data from any of these training sources, it may be possible to fully manipulate the system’s perception of truth and fact. Disinformation campaigns could spiral out of control by influencing downstream decisions that are informed by A.I. systems trained on the poisoned data.
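The reinforcement-bias feedback loop described above can be illustrated with a small toy simulation. To be clear, the function, its parameter names, and the `amplification` factor below are all hypothetical constructs for illustration only — they are not drawn from any real training pipeline. The sketch assumes a corpus that starts with a small fraction of biased content, and a model whose output slightly over-represents the patterns it was trained on before that output is scraped back into the corpus:

```python
def simulate_feedback_loop(initial_bias=0.05, generations=10,
                           corpus_size=10_000, output_per_gen=5_000,
                           amplification=1.2):
    """Toy model: track the biased fraction of a training corpus as
    model-generated text is recycled back into the training data.

    `amplification` (> 1) models the hypothetical tendency of a model
    to slightly over-represent patterns present in its training data.
    Returns the biased fraction of the corpus after each generation.
    """
    biased = initial_bias * corpus_size
    total = corpus_size
    history = [biased / total]
    for _ in range(generations):
        rate = biased / total
        # The model's output mirrors (and slightly amplifies) the
        # corpus's current bias rate, capped at the total output size.
        new_biased = min(output_per_gen, output_per_gen * rate * amplification)
        biased += new_biased       # model output is scraped back in...
        total += output_per_gen    # ...growing the training corpus
        history.append(biased / total)
    return history

history = simulate_feedback_loop()
```

Under these assumptions, the biased fraction rises every generation: each round of A.I.-generated content nudges the corpus further from its original distribution, which is the self-perpetuating cycle the article describes.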


The question of how an organization should handle LLMs and other emerging A.I. technologies is a challenging one. There are significant benefits to embracing this technology, but there are also real risks. Unfortunately, there is no “one size fits all” solution, and the simplest answer is just “it depends.” When deciding how to proceed, organizations should frame these decisions within a larger risk management discussion — one that considers the organization’s planned use cases for A.I. integration and enablement, the unique risks that these changes could introduce, and the organization’s overall risk tolerance.





This blog was written by Justin “Hutch” Hutchens, Director of Security Research & Development at Set Solutions.