
Legal developments in data, privacy, cybersecurity, and other emerging technology issues

Posts in Artificial Intelligence

Since late 2022, terms like “large language models,” “chatbots,” and “natural language processing models” have increasingly been used to describe artificial intelligence (AI) programs that collect data and respond to questions in a human-like fashion, including Bard and ChatGPT. Large language models are trained on data from a wide range of online sources, including books, articles, social media accounts, blog posts, databases, websites, and other general online content. They then provide logical, organized responses to questions or instructions posed by users. The technology can improve its performance and expand its knowledge base through internal analysis of user interactions, including the questions users ask and the responses provided. These AI programs offer a variety of applications and benefits, but businesses should be aware of potential privacy and other risks when adopting the technology.

As seen from the recent release of the ChatGPT artificial intelligence (“AI”) tool, AI technologies have the potential to transform society rapidly. However, these technologies also pose unique risks. Because AI risk management is a key component of the responsible development and use of AI systems, the National Institute of Standards and Technology (NIST) last week released its voluntary AI Risk Management Framework, a helpful resource for businesses seeking to responsibly incorporate AI into their processes, products, and services.
