As with any nascent technology, ChatGPT has its advantages and disadvantages.
Since its launch on Nov. 30, 2022, the AI-powered language processing tool has exposed organizations to reputational and cybersecurity risks, including misinformation, plagiarism and copyright infringement, account compromise, and leaked information.
To mitigate these risks and maintain a robust security posture, business leaders must thoroughly weigh these factors when deciding whether, and to what extent, to use ChatGPT in their business.
We explore these risks in depth, as well as relevant case studies, so you can make an informed decision about your company’s tech policies.
Potential Business Risks of Using ChatGPT
Misinformation
ChatGPT comes with a disclaimer that it “may produce inaccurate information about people, places, or facts.”
This is something New York lawyers Steven Schwartz and Peter LoDuca learned the hard way.
Schwartz, a longtime colleague of LoDuca, was using ChatGPT to help supplement research for LoDuca’s case representing Roberto Mata, who sued Avianca Airlines, alleging that an employee injured his knee with a drink cart.
When Schwartz asked ChatGPT for similar court cases, it turned up six results—all of which, unbeknownst to them, were complete fabrications by the chatbot.
When Schwartz asked whether one of them, "Varghese v. China Southern Airlines," was authentic, ChatGPT assured him that it was in fact "a real case." It said the same about the five other cases it listed, adding that they all could "be found on legal research databases such as Westlaw and LexisNexis."
Schwartz included all six in the legal brief, filed under LoDuca’s name, but Avianca’s legal team soon wrote in response "that the authenticity of many of these cases is questionable" after they were unable to find the cases online.
Schwartz and LoDuca were forced to pay a $5,000 fine, and could still face discipline from the New York State Bar Association.
This is not the first time, and it won’t be the last, that someone has turned to the AI tool for research, only to receive bogus, and potentially damaging, source material in return.
ChatGPT’s output is generated from statistical patterns in its training data, not from verified facts, and that training data has a fixed cutoff date. In other words, ChatGPT does not have access to current information.
Therefore, in addition to the types of “hallucinations” experienced by the lawyers mentioned above, real data presented in ChatGPT’s responses may well be out of date or inaccurate, and should not be used for business-related research without separately verifying the information.
Plagiarism & Copyright Infringement
Another concern is the potential for copyright infringement and plagiarism.
ChatGPT does not cite its sources directly (unless you’re using Microsoft Edge’s chatbot), making copyright infringement a distinct possibility for those using it to generate content. This not only exposes organizations to legal ramifications and hefty fines, but also damages consumer trust, perhaps beyond repair.
To minimize this exposure, those utilizing the AI tool should always verify its output against reputable sources.
Furthermore, as ChatGPT’s responses become increasingly conversational, it can be challenging to discern, let alone prove, whether a paper was written by a real person or a chatbot.
Take the academic paper titled “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT,” for instance.
While it was peer-reviewed and published in an education journal, the paper itself had been written by ChatGPT.
“We wanted to show that ChatGPT is writing at a very high level,” said Debby Cotton, director of academic practice at Plymouth Marjon University, who posed as the paper’s lead author. “This is an arms race. The technology is improving very fast, and it’s going to be difficult for universities to outrun it.”
ChatGPT may be helpful in drafting outlines in certain cases, but should not be used to create work-related documents attributed to human authors.
Account Compromise
Consumer data and privacy were prominent concerns before the rise of ChatGPT, but the technology’s prevalence poses even greater threats.
More and more employees are utilizing the tool to streamline their workflows for everything from writing emails to answering questions.
However, unless users of non-API services, such as ChatGPT and DALL-E Labs, opt out of having their data used for “further training” and to “improve [OpenAI’s] models,” as OpenAI explains in a help center article, all prompts and responses are retained and may be used for training.
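For business use, one partial mitigation is to route queries through OpenAI’s API rather than the consumer ChatGPT interface, as OpenAI has stated that data submitted via the API is not used for model training by default. Below is a minimal sketch of such a call, assuming the official openai Python package (v1.x); the model name and prompt are placeholders.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# A minimal chat completion request; "gpt-4o-mini" is an example model name.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize best practices for password policies."}
    ],
)

print(response.choices[0].message.content)
```

Even over the API, the usual caution applies: keep sensitive data out of the prompt itself.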
In the past year alone, more than 101,000 compromised OpenAI ChatGPT account credentials have been discovered in info-stealing malware logs traded on dark web marketplaces.
The number of logs containing compromised accounts peaked at 26,802 in May 2023, with the majority of these breaches attributed to the infamous Raccoon Infostealer, a popular strain of malware sold on the dark web.
With these credentials, threat actors can access the sensitive company information employees have fed to ChatGPT under those accounts and exploit it to engineer targeted attacks against those employees and their organizations.
To mitigate such risks, it is best practice to ensure all accounts use a long, unique password and implement multi-factor authentication (MFA) controls.
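As a concrete illustration of the MFA piece, here is a minimal sketch of time-based one-time-password (TOTP) verification, the mechanism behind most authenticator-app MFA, assuming the third-party pyotp library; the account name and issuer below are placeholders.

```python
import pyotp

# Generate a per-user secret once at enrollment; store it server-side, encrypted.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Provisioning URI the user scans into an authenticator app (placeholder values).
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, verify the six-digit code the user supplies alongside their password.
user_code = input("Enter your MFA code: ")
print("Access granted" if totp.verify(user_code) else "Access denied")
```

Even a compromised password is then insufficient on its own to open the account’s chat history.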
Leaked Information
Some organizations have not been compromised, but rather, have leaked their own sensitive information to the chatbot.
Less than three weeks after Samsung lifted a ban on its employees using ChatGPT, the tech giant reportedly leaked its secrets into the AI tool three times: once to debug problematic source code from a semiconductor database download program, again to optimize code for identifying defective equipment, and a third time to generate meeting minutes.
Upon discovering this, Samsung applied “emergency measures” and limited employee upload capacity to 1024 bytes per query, but it may block access altogether if a similar incident occurs again.
Although the OpenAI help center article mentioned above claims the company “remove[s] any personally identifiable information” from the data it uses, another concern is whether information from users who have not opted out resurfaces in the bot’s future responses.
Businesses should avoid uploading proprietary company data, personal identifiers, customer records, or any other sensitive information to ChatGPT.
This includes code, meeting notes, and any other data, even if the intended use case is internal-only.
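One way to operationalize this policy is a pre-submission screen that rejects prompts containing obvious sensitive patterns or exceeding a size cap, similar in spirit to Samsung’s 1024-byte limit. The sketch below is illustrative only; the patterns, limit, and function name are hypothetical and nowhere near exhaustive, so it should complement, not replace, employee training and dedicated data loss prevention tooling.

```python
import re

# Hypothetical patterns for data that should never leave the network.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),       # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-like digit runs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # API key assignments
]

MAX_PROMPT_BYTES = 1024  # per-query cap, echoing Samsung's emergency measure

def screen_prompt(prompt: str) -> str:
    """Raise ValueError if a prompt looks sensitive or is too large."""
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:
        raise ValueError("Prompt exceeds the per-query size limit")
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain sensitive data")
    return prompt

# A benign prompt passes; one containing an email or key-like string would not.
screen_prompt("What are common causes of TLS handshake failures?")
```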
Should Your Business Use ChatGPT?
Each company and industry is subject to unique implications, benefits, and risks from utilizing this technology.
For instance, the implications within the healthcare industry are different from those within an educational setting.
Therefore, it falls on business leaders to comprehensively weigh these factors when determining whether, and to what extent, to utilize the tool.
In doing so, they can mitigate the associated business risks and bolster a more robust cyber posture.
Even ChatGPT itself acknowledges that users should “exercise caution while interacting with ChatGPT and avoid sharing sensitive personal information or engaging in actions that could compromise their security.”
While identifying “data privacy,” “phishing and social engineering,” and “user profiling” as potential risks related to its use, ChatGPT warns: “Organizations that deploy ChatGPT should implement strict data privacy and security measures to protect user data and ensure responsible use of the technology. Additionally, developers of ChatGPT and similar AI models must continually work on improving the system's accuracy, understanding of context, and responsiveness to safety concerns.”
Cybersafe Solutions: Cybersecurity for Today’s & Tomorrow’s Risks
As artificial intelligence (AI) technology continues to evolve, it will undoubtedly be accompanied by even more benefits and risks.
This underscores the importance of not only reacting to such changes, but also equipping your company’s security posture to proactively anticipate shifting technology developments and, by extension, evolving threat actor tactics.
To do so requires enlisting dedicated security professionals with top-tier threat intelligence and decades of combined expertise.
The skilled team of advisors and analysts at Cybersafe Solutions is constantly devising new ways to thwart threat actor strategies, including those employed in evolving technology such as ChatGPT.
Partnering with an industry-leading managed security service provider (MSSP) offering managed detection and response (MDR) services grants 24/7/365 visibility into your online risk posture, effective risk mitigation, and real-time incident response services.
We’ll work with you to bolster a robust cybersecurity posture equipped to defend against not only today’s risks, but tomorrow’s, as well.
Cybersafe Solutions is an industry-leading MSSP offering MDR and a suite of tailored services such as SOL EDR, EDR+, XDR, and SIEM. To learn more about how partnering with Cybersafe can enhance your risk posture, contact our team today.