
Privacy and Advertising Considerations When Using Large Language Models like ChatGPT or Bard

Since late 2022, terms like “large language models,” “chatbots,” and “natural language processing models” have increasingly been used to describe artificial intelligence (AI) programs, such as Bard and ChatGPT, that collect data and respond to questions in a human-like fashion. Large language models collect data from a wide range of online sources, including books, articles, social media accounts, blog posts, databases, websites, and other general online content. They then provide logical and organized responses to questions or instructions posed by users. The technology can improve its performance and build its knowledge base by analyzing user interactions, including the questions users ask and the responses it provides. These AI programs have a variety of applications and benefits, but businesses should be aware of potential privacy and other risks when adopting the technology.

Privacy Implications. Utilizing AI applications has privacy and data protection implications. As an initial matter, information provided to, or provided by, large language models may include personal data. Businesses should be careful when sharing personal data with AI programs operated by third parties because doing so may be inconsistent with the business’s privacy policy and therefore may require additional notice to, or consent from, consumers. In addition, there is limited information about the data sets used to build the “brains” of natural language processing models, so these technologies may disclose personal, non-public data to businesses using the systems. That also may be the case if a business accesses a large language model through an Application Programming Interface (API) license. Businesses that obtain such personal data should understand what data they collect and should use and store only personal data that is directly relevant and necessary to accomplish a specified business purpose (“data minimization”).

Businesses that provide or receive information through large language models also should consider whether “data subject rights requests” are implicated. Residents of California and a growing list of other states, as well as individuals located in the European Union, are able to request the deletion or correction of personal information and are entitled to know what personal information has been collected and how that information has been used and disclosed. In addition, evolving state laws allow consumers to opt out of the sale or sharing of personal data, as well as automated decision-making and profiling, under certain circumstances. Honoring such requests, and determining which party is ultimately responsible for doing so, can be complicated. Ultimately, businesses should consider conducting a privacy impact assessment and implementing privacy and security by design when adopting new products and services that incorporate AI. Businesses aiming to responsibly incorporate AI into their processes, products, and services also can utilize the AI Risk Management Framework released by the National Institute of Standards and Technology earlier this year.

Advertising and Marketing Implications. The FTC recently issued guidance regarding claims made by businesses whose products purportedly incorporate AI. The FTC warned that businesses should be able to provide proof for any claim that, among other things, the AI used in their products enhances those products or otherwise makes them better than other products, whether or not those other products use AI. Similarly, businesses should be able to substantiate performance claims relating to the capabilities of products that incorporate AI. The FTC has stated that such claims may be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions. The FTC’s recent guidance follows a 2021 blog post in which the FTC noted its authority to regulate AI under the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act.

  • Owen Agho
    Associate

    Owen Agho is a corporate attorney in the Technology Transactions and Data, Privacy, and Cybersecurity Practice Groups who focuses his practice at the intersection of the law and technology and their combined impact on society at ...

  • Ahmad H. Sabbagh
    Associate

    Ahmad Sabbagh is a corporate attorney in the firm’s Commercial Transactions and Technology Transactions practice groups who focuses his practice on drafting and negotiating agreements in the automotive and technology spaces ...

  • Steven M. Wernikoff
    Partner

    Steve Wernikoff is a litigation and transactional partner who co-leads two of the firm's technology-based practice areas: the Data, Privacy, and Cybersecurity group and the Autonomous Vehicle group. As a previous officer and ...
