Legal developments in data, privacy, cybersecurity, and other emerging technology issues

  • Posts by Owen Agho
    Associate

    Owen Agho is a corporate attorney in the Technology Transactions and Data, Privacy, and Cybersecurity Practice Groups who focuses his practice at the intersection of the law and technology and their combined impact on society at ...

Since the arrival of AI programs like OpenAI’s ChatGPT, Google’s Bard, and other similar technologies (“Generative AI”) in late 2022, more programs have been introduced and several existing programs have been upgraded or enhanced, including OpenAI’s upgrade of ChatGPT to the GPT-4 model. Our previous posts have identified the features and functionality of Generative AI programs and outlined the emerging regulatory compliance requirements related to such programs. This post discusses how regulatory agencies worldwide have begun to address these issues.

Since late 2022, terms like “large language models,” “chatbots,” and “natural language processing models” have increasingly been used to describe artificial intelligence (AI) programs, such as Bard and ChatGPT, that collect data and respond to questions in a human-like fashion. Large language models are trained on data from a wide range of online sources, including books, articles, social media accounts, blog posts, databases, websites, and other general online content. They then provide logical, organized responses to questions or instructions posed by users. The technology can improve its performance and build its knowledge base by analyzing user interactions, including the questions users ask and the responses it provides. These AI programs offer a variety of applications and benefits, but businesses should be aware of potential privacy and other risks when adopting the technology.
