
Generative AI Draws Increased Scrutiny from Data Protection Regulators

Since the arrival of AI programs such as OpenAI’s ChatGPT and Google’s Bard (“Generative AI”) in late 2022, more programs have been introduced and several existing programs have been upgraded, including ChatGPT’s move to the GPT-4 model. Our previous posts have identified the features and functionality of Generative AI programs and outlined the emerging regulatory compliance requirements related to such programs. This post discusses how regulatory agencies worldwide have begun to address these issues.

Regulators across the world are reacting to Generative AI programs. On March 31, 2023, the Italian Data Protection Authority (“IDPA”) temporarily banned ChatGPT from use in Italy and launched a probe into the AI program, citing numerous privacy concerns, including allegations that: (i) individuals are not always informed that their data is collected and made available through the AI program; (ii) ChatGPT provides inaccurate information about individuals; (iii) no age-related restrictions existed to prevent individuals under the age of 13 from using ChatGPT; and (iv) more generally, there was no clear legal basis for the collection and processing of personal data by Generative AI programs. The IDPA reviewed whether Generative AI programs collect personal data in accordance with one of the legal bases recognized by the General Data Protection Regulation. It determined that Generative AI programs do not always obtain valid consent to process personal data, and that no other legal basis, such as performance of a contract or legitimate interest, applied to the programs’ processing of such data. To lift the ban, OpenAI was required to comply with various measures, including verifying users’ ages before allowing them to use the program, obtaining users’ consent or establishing a legitimate interest as the basis for using their data, and enabling users to request correction or deletion of their data. By May 14, 2023, the company would also have to conduct an information campaign on Italian television, radio, websites, and newspapers to inform the public how it uses their personal data to train its ChatGPT algorithm.

Since the IDPA announced this decision, data protection regulators in France, Germany, Spain, and Ireland have contacted the IDPA to request more information on the decision and its rationale. The IDPA’s decision comes as the European Union works toward passage of an AI Act that will regulate the use of artificial intelligence in Europe. In addition, on April 13, 2023, the European Data Protection Board launched “a dedicated task force to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.”

Nevertheless, on April 28, 2023, the IDPA restored access to ChatGPT for the Italian public after OpenAI took steps to address the IDPA’s objections, including, among others, implementing an age verification system and providing a mechanism to opt out of the processing of personal data by ChatGPT’s training algorithms. In a public statement, the IDPA welcomed “the steps forward made by OpenAI to reconcile technological advancements with respect for the rights of individuals and it hopes that the company will continue in its efforts to comply with European data protection legislation,” noting that it “will carry on its fact-finding activities regarding OpenAI.”

Concerns over Generative AI programs have also grown on the other side of the Atlantic. On April 4, 2023, the Office of the Privacy Commissioner of Canada launched an investigation into ChatGPT, prompted by a complaint alleging that ChatGPT collected, used, and disclosed personal data without consent. Similarly, U.S. government agencies have expressed a desire to monitor the development of Generative AI. Following recent guidance from the Federal Trade Commission regarding claims made by businesses whose products purportedly incorporate AI, on April 25, 2023, four federal agencies (the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission) issued a joint statement on enforcement efforts against discrimination and bias in AI programs. In the statement, the agencies stressed that existing federal laws apply to AI programs just as they apply to other practices, and that the agencies will collectively monitor the development and use of AI programs for consistency with those laws. The agencies will focus in particular on technologies that rely on vast amounts of data to make recommendations and predictions, because such activities may produce outcomes that result in unlawful discrimination and bias.

As time passes and regulators have more opportunity to analyze the implications of Generative AI programs for personal data, it seems likely that additional jurisdictions will introduce legislation regulating the use of such programs. Until then, regulators appear eager to enforce applicable existing laws.
