Legal developments in data, privacy, cybersecurity, and other emerging technology issues
Last week, the FTC and HHS’ Office for Civil Rights (OCR) sent a joint letter to approximately 130 hospitals and telehealth providers concerning the privacy and security risks related to the use of online tracking technologies integrated into their websites or mobile apps. The agencies assert that these tracking technologies – such as the Meta/Facebook pixel and Google Analytics – gather identifiable information about users when they interact with a website or mobile app, often without users’ knowledge and in ways that are hard for users to avoid.
A 2018 study by the Federal Research Division of the Library of Congress identified counterfeiting as the largest criminal enterprise in the world, with domestic and international sales of counterfeit and pirated goods totaling an estimated $1.7 trillion to $4.5 trillion a year.
On June 18, 2023, Texas Governor Greg Abbott signed the Texas Data Privacy and Security Act (TDPSA) into law, making Texas the latest state to enact a comprehensive data privacy statute. The TDPSA takes effect on July 1, 2024, and applies to businesses that produce a product or service “consumed” by Texas residents and that process or engage in the sale of personal data.
Last updated: January 17, 2024
To help privacy practitioners keep track of new state laws, below is a chart containing links to the major enacted state privacy laws and their respective regulations. Bookmark this page; we will update the chart periodically as new laws are enacted.
Since the arrival of AI programs like OpenAI’s ChatGPT, Google’s Bard, and other similar technologies (“Generative AI”) in late 2022, more programs have been introduced and several existing programs have been upgraded or enhanced, including ChatGPT’s upgrade to ChatGPT-4. Our previous posts have identified the features and functionality of Generative AI programs and outlined the emerging regulatory compliance requirements related to such programs. This post discusses how regulatory agencies worldwide have begun to address these issues.
In March 2023, the White House released the National Cybersecurity Strategy, which details the Biden administration’s policy and agency directives to strengthen U.S. cybersecurity across the public and private sectors. Cybersecurity regulations and cybersecurity responses affect both U.S. national security and the security and stability of U.S. businesses and individuals. The 2023 National Cybersecurity Strategy replaces the 2018 National Cyber Strategy set forth under the Trump administration and builds on the 2008 Comprehensive National Cybersecurity Initiative set forth under the George W. Bush administration.
Since late 2022, terms like “large language models,” “chatbots,” and “natural language processing models” increasingly have been used to describe artificial intelligence (AI) programs, including Bard and ChatGPT, that collect data and respond to questions in a human-like fashion. Large language models collect data from a wide range of online sources, including books, articles, social media accounts, blog posts, databases, websites, and other general online content. They then provide logical and organized feedback in response to questions or instructions posed by users. The technology is capable of improving its performance and building its knowledge base through internal analysis of user interactions, including the questions that users ask and the responses provided. These AI programs have a variety of applications and benefits, but businesses should be aware of potential privacy and other risks when adopting the technology.
On February 17, 2023, the FTC brought its first civil enforcement action under the Telemarketing Sales Rule, 16 C.F.R. Part 310 (“TSR”), in nearly one year. In U.S. v. Stratics Networks Inc., et al., which was filed in the U.S. District Court for the Southern District of California, the FTC seeks to stop a group of companies and individuals that it claims are “responsible for delivering tens of millions of unwanted Voice Over Internet Protocol (VoIP) and ringless voicemail (RVM) phony debt service robocalls to consumers nationwide.” Because the FTC is seeking civil penalties, the Complaint was filed by the Department of Justice on behalf of the FTC.
In an eye-opening 4-3 decision issued on Friday, the Illinois Supreme Court ruled that a separate Biometric Information Privacy Act (“BIPA”) claim accrues “with every scan or transmission of biometric identifiers or biometric information without prior informed consent.” Cothron v. White Castle System, Inc., 2023 IL 128004 ¶ 45. The decision may have staggering consequences for all pending BIPA cases, converting what might have been a single claim into thousands of separate claims for $1,000 or $5,000 each (depending on whether the violation is negligent or willful). The impact of the decision is even more severe in light of the Illinois Supreme Court’s recent decision in Tims v. Black Horse Carriers, Inc., 2023 IL 127801, applying a five-year statute of limitations to all BIPA claims.
The Illinois Supreme Court has issued its highly anticipated ruling in Tims v. Black Horse Carriers, Inc., 2023 IL 127801, which expands the statute of limitations period for certain claims under the Biometric Information Privacy Act (BIPA) from one year to five years. The Court reversed in part a previous ruling by the appellate court, which had held that a one-year limitations period applied to claims under subsections 15(c) and (d) of BIPA, which prohibit the sale and unauthorized disclosure of biometric data, and affirmed the appellate court’s judgment that a five-year period applied to other claims under BIPA.