
Legal developments in data, privacy, cybersecurity, and other emerging technology issues

Posts in Artificial Intelligence

Privacy and data security laws and regulations continue to evolve quickly, and companies processing personal data have an increasing array of issues to manage. As we enter 2024, below are five key considerations for companies managing privacy and data security risks.

In April 2023, Kyland Young, a star of the popular reality TV show Big Brother, brought a right of publicity claim against NeoCortext, Inc., the developer of a deepfake application called Reface. See Young v. NeoCortext, Inc., No. 2:23-cv-02486 (C.D. Cal. filed Apr. 3, 2023). Young claimed that NeoCortext’s Reface, “which uses an artificial intelligence algorithm to allow users to swap faces with actors, musicians, athletes, celebrities, and/or other well-known individuals in images and videos,” violates California’s right of publicity law. Young’s case, which remains pending in the U.S. District Court for the Central District of California, raises important questions about deepfakes and how the law applies to the likenesses of famous figures.

Since the arrival of AI programs like OpenAI’s ChatGPT, Google’s Bard, and similar technologies (“Generative AI”) in late 2022, more programs have been introduced and several existing programs have been upgraded or enhanced, including ChatGPT’s upgrade to the GPT-4 model. Our previous posts have described the features and functionality of Generative AI programs and outlined the emerging regulatory compliance requirements related to such programs. This post discusses how regulatory agencies worldwide have begun to address these issues.

Since late 2022, terms like “large language models,” “chatbots,” and “natural language processing models” have increasingly been used to describe artificial intelligence (AI) programs that collect data and respond to questions in a human-like fashion, including Bard and ChatGPT. Large language models draw on data from a wide range of online sources, including books, articles, social media accounts, blog posts, databases, websites, and other general online content. They then provide logical, organized responses to questions or instructions posed by users. The technology can improve its performance and expand its knowledge base through internal analysis of user interactions, including the questions users ask and the responses provided. These AI programs offer a variety of applications and benefits, but businesses should be aware of privacy and other risks when adopting the technology.

As seen from the recent release of the ChatGPT artificial intelligence (“AI”) tool, AI technologies have the potential to transform society rapidly. However, these technologies also pose unique risks. Because AI risk management is a key component of the responsible development and use of AI systems, the National Institute of Standards and Technology last week released its voluntary AI Risk Management Framework, a helpful resource to assist businesses in responsibly incorporating AI into their processes, products, and services.
