ChatGPT might be helpful for writing a professionally worded email, or summarizing information from a quick Google Search, but it shouldn’t be used as a sounding board for your new, and potentially valuable, ideas.
The capabilities and availability of AI tools have seemingly exploded overnight, with no signs of slowing down. The possibilities for lightening the mental load and removing tedious tasks from our to-do lists make them enticing tools.
But…as the use of AI becomes more ubiquitous and more information (and questions) emerges on bots like ChatGPT, IP experts have identified several rather large red flags, which we are waving at our clients in the hopes of highlighting the potential dangers. Before jumping in further, it should be noted that while we reference ChatGPT extensively in this blog post, most, if not all, generative AI tools use similar modes of operation.
ChatGPT hasn’t signed an NDA…
When you task an AI tool with something, it is important to understand where and how ChatGPT sources its outputs and, more importantly, where the information you share with it ultimately ends up. Generative AI doesn't create content from nothing; it generates output from patterns learned across vast amounts of existing material, and the prompts you feed it may themselves be retained and reused.
It has become increasingly evident that companies should take great care to:
- Ensure that they are not feeding sensitive information into AI prompts/tools
- Ensure that they are not using information that has been “generated” by AI to develop their products or services
- Ensure that they are not infringing on someone else’s copyrights
…So why would you share corporate secrets?
Consider this cautionary tale: back in April 2023, Samsung made headlines when it was discovered that several of its employees had been pasting highly sensitive data into ChatGPT to help identify bugs and fix problems in their code, the very code that makes up part of Samsung's lucrative IP portfolio. Once submitted, that material may be retained and used to train future versions of the model, putting trade secrets, patentable computer-implemented methods, and the like outside the company's control.
Tech giant Amazon recently issued a warning to its employees about ChatGPT, cautioning them against sharing sensitive information with the AI bot. Its concerns include not only the sharing of sensitive code, but also employees relying on ChatGPT to solve problems when the bot may have learned false or incorrect information, particularly from competitors.
If you (or anyone in your organization) use ChatGPT in the workplace, ask yourself: Are you sending it anything you wouldn't send a competitor? Are you sending it anything that provides value to your company? If you are in a management position, or are responsible for disclosure, IT, or internet-safety policies, consider revising your rules regarding disclosures. Explicitly outline what information employees may disclose, whether to a person, to an AI tool, or anywhere else; a minimal sketch of one way to back such a rule with a technical control follows below.
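To make the policy point concrete, here is a hypothetical pre-submission filter, a minimal sketch only: it screens an outbound prompt against a deny-list of patterns before the text is allowed to leave for an external AI service. The rule names, the regular expressions, and the `PROJECT-` codename convention are all illustrative assumptions, not a vetted rule set.

```python
import re

# Hypothetical deny-list; a real policy would define these rules with
# legal/IP counsel rather than hard-coding them in a script.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[:=]"),
    "internal_codename": re.compile(r"\bPROJECT-[A-Z0-9]+\b"),  # illustrative naming scheme
    "source_code": re.compile(r"\b(def|class|import|return)\s|#include"),
    "confidential_marker": re.compile(r"(?i)\b(confidential|proprietary|trade secret)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of every deny-list rule the prompt trips."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_ai(prompt: str) -> None:
    """Forward a prompt to an external AI service only if it passes screening."""
    violations = screen_prompt(prompt)
    if violations:
        # Block the request and tell the employee which rules were tripped.
        raise PermissionError(
            f"Prompt blocked by disclosure policy (matched: {', '.join(violations)})"
        )
    # Only a cleared prompt would be forwarded to the external service here.
    print("Prompt cleared for submission.")

if __name__ == "__main__":
    try:
        submit_to_ai("Fix this bug: def decrypt(key): ...  # PROJECT-ATLAS source")
    except PermissionError as err:
        print(err)
```

A pattern filter like this is deliberately coarse, and commercial data-loss-prevention tools implement the same idea with far more sophistication, but even a simple gate turns a written disclosure policy into something enforceable.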
What about data ownership?
Sharing too much information with ChatGPT is concerning for other reasons too, particularly how that information will be used by the bots in the future, who owns the content generated by AI tools, and who owns the products built on the methods or code these bots generate. Considering how ChatGPT operates, it may well be using copyrighted material in the content it generates, without proper licenses or attribution.
It’s possible that using content generated by AI might not be allowed if that content was compiled from sources without proper ownership transfer or permission from the (human) content creator. Think plagiarism, on a very wide scale: ChatGPT typically does not cite its source material. Authors, comedians, and other artists have begun filing class-action lawsuits against the makers of ChatGPT for copyright infringement, alleging their copyrighted works were fed into the system.
Information generated by the bot could come from copyrighted or patented material, material that may not have been ChatGPT’s to take or use, producing output that may not be yours to learn from. The laws surrounding AI are constantly changing, with some countries even banning it altogether. Because the law will continue to evolve, you cannot jump in without caution and hope for the best. But you also cannot stay on the sidelines and safely ignore the possibilities. Now is the time to dip your toes in intentionally and ensure that your policies, culture, and operations account for AI, to mitigate potential disruption and headaches.
Better safe than sorry!
Care should be taken when using ChatGPT and other deep-learning bots. Information you input into AI tools could end up in your competitors’ hands. False information fed into the mysterious chat bot by your competitors could end up in your employees’ hands. And IP rights could wind up unenforceable, depending on AI’s role in the process.
At this point, it’s safe to say that AI is here to stay and will only continue to evolve and expand its reach; it will be interesting to see how innovation and IP are affected. Stratford consistently educates our clients on the importance of disclosure and IT policies in the workplace. We are happy to help our clients access and understand the latest information on how best to protect their intellectual property in a data-driven economy, and to incorporate AI policies into their IP strategy.
You May Be Interested In:
- Samsung workers made a major error by using ChatGPT (TechRadar)
- Don’t Chat With ChatGPT: Amazon’s Warning To Employees (Medium)
- Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms? (Forbes)
- These authors say Open AI stole their books to train ChatGPT. Now they’re suing (CBC)
- Sarah Silverman and other bestselling authors sue Meta and OpenAI for copyright infringement (Los Angeles Times)
- Looking Behind the Curtain: Understanding AI’s Potential and Impact (Stratford white paper)