Generative AI Regulations: A Comprehensive Global Overview

Keith Marchal

Posted Oct 31, 2024

Credit: pexels.com, An artist’s illustration of artificial intelligence (AI). This illustration depicts language models which generate text. It was created by Wes Cockx as part of the Visualising AI project l...

As we dive into the world of generative AI regulations, it's essential to understand the current landscape. The European Union has established the AI Act, which aims to regulate AI systems that pose a risk to human safety and well-being.

The AI Act requires developers to conduct risk assessments and implement safety measures for high-risk AI systems, such as those used in healthcare and transportation. This is a significant step towards ensuring the responsible development and deployment of generative AI technologies.

Regulatory bodies around the world are taking a closer look at generative AI, with many countries developing their own frameworks for oversight and control. In the United States, the Federal Trade Commission (FTC) has issued guidelines for the use of AI in advertising and marketing, highlighting the need for transparency and fairness.

Regulatory Frameworks

The European Union is taking a comprehensive approach to regulating generative AI with the EU AI Act, which includes a tier-based system for foundation models and generative AI. This legislation requires providers of foundation models to integrate safeguards for design, testing, data governance, cybersecurity, performance, and risk mitigation.

The EU AI Act also mandates providers of generative AI services to inform users when a piece of content is machine-generated, deploy adequate training and design safeguards, and publicly disclose a summary of copyrighted materials used to develop their models. This is in addition to the EU's mainstay legislation on problematic content, the Digital Services Act, which is being updated to include generative models in its auditing algorithm provisions.

In the United States, there is no specific federal legislation governing generative AI, but the National Telecommunications and Information Administration has issued a request for public comments on creating 'earned trust' in AI systems. This is part of a broader effort to understand what kind of data is needed to conduct algorithmic audits and ensure responsible and ethical innovation of AI systems.

US Regulations

The US is taking steps to regulate Generative AI, with the National Telecommunications and Information Administration (NTIA) asking for public comments on creating 'earned trust' in AI systems in April 2023.

The NTIA aims to understand what kind of data is needed to conduct algorithmic audits and how regulators can ensure the responsible and ethical innovation of AI systems across all industries.

Currently, there is no specific federal effort to govern Generative AI tools, but lawmakers have introduced several bills covering automated systems.

The Algorithmic Accountability Act is one such bill; if enacted, it would require covered entities to conduct annual impact assessments and audits overseen by the Federal Trade Commission.

Massachusetts is the only state that has introduced a bill aimed at regulating Generative AI, with Bill S.31 providing Operating Standards on privacy and algorithmic transparency that companies developing Generative AI models must adhere to.

This bill was even drafted using ChatGPT, highlighting the need for regulation in the field of Generative AI.

European Union Regulation

The European Union is actively working on establishing a comprehensive regulatory framework for generative AI, with the EU AI Act aiming to be the world's first comprehensive regulatory playbook on artificial intelligence.

The EU AI Act's latest compromise text has introduced a tier-based approach for foundation models and generative AI, with a new section (Article 28b) specifically governing foundation models.

This section requires providers of foundation models to integrate design, testing, data governance, cybersecurity, performance, and risk mitigation safeguards in their products before placing them on the market.

Providers of foundation models must also comply with European environmental standards and register their applications in a database managed by the European Commission.

Generative AI services are subject to stricter transparency obligations, with providers required to inform users when a piece of content is machine-generated.

Providers of generative AI services must also deploy adequate training and design safeguards, and publicly disclose a summary of copyrighted materials used to develop their models.
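As an illustrative sketch only (the Act prescribes the disclosure obligation, not any particular format; the class and field names below are hypothetical), a provider might attach a machine-generated label to each piece of output like this:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class GeneratedContent:
    """Wraps model output with a machine-generated disclosure label.

    A hypothetical sketch of the kind of transparency metadata the
    EU AI Act's disclosure obligation points toward; the field names
    are illustrative, not prescribed by the Act.
    """
    text: str
    machine_generated: bool = True
    model_name: str = "unspecified"

    def to_json(self) -> str:
        # Serialize the labelled output so downstream services can
        # surface the disclosure to end users.
        return json.dumps(asdict(self))

output = GeneratedContent(text="A summer poem.", model_name="example-model")
print(output.to_json())
```

The point is simply that the label travels with the content, so any consumer of the output can tell a user it was machine-generated.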

The GDPR is also being used to address data protection and privacy risks associated with generative AI applications.

Italy's Data Protection Authority (DPA) has restricted the use of ChatGPT due to concerns over data collection practices, and similar investigations are underway in Germany, France, and Ireland.

The Digital Services Act (DSA) contains provisions to audit algorithms used in content moderation, but currently does not cover generative AI, creating a regulatory gap.

Intellectual Property and Data Protection

Intellectual property and data protection are crucial aspects to consider when implementing generative AI in your business. This includes understanding whether AI-generated output can be considered a copyright-protected work.

To navigate these complexities, companies can use existing laws and frameworks as a guide. For instance, existing data protection laws have provisions that can be applied to AI systems, including requirements for transparency, notice, and adherence to personal privacy rights.

Transparency is key in this regard. Companies must clearly communicate the use of AI in data processing, document AI logic, intended uses, and potential impacts on data subjects.

Here are some best practices to consider:

  • Transparency and documentation: Clearly communicate the use of AI in data processing, document AI logic, intended uses, and potential impacts on data subjects.
  • Localizing AI models: Localizing AI models internally and training the model with proprietary data can greatly reduce the data security risk of leaks.
  • Starting small and experimenting: Use internal AI models to experiment before moving to live business data from a secure cloud or on-premises environment.
  • Focusing on discovering and connecting: Use GenAI to discover new insights and make unexpected connections across departments or information silos.
  • Preserving the human element: GenAI should augment human performance, not remove it entirely.
  • Maintaining transparency and logs: Capturing data movement transactions and saving detailed logs of personal data processed can help determine how and why data was used.
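To make the last item concrete, here is a minimal Python sketch of an append-only audit log for personal-data processing events; the record fields and file name are illustrative assumptions, not a legal or regulatory standard:

```python
import json
from datetime import datetime, timezone

def log_processing_event(log_file: str, subject_id: str,
                         purpose: str, data_fields: list[str]) -> dict:
    """Append one personal-data processing event to an audit log.

    Captures who was affected, why the data was processed, and which
    fields were touched, so later reviews can determine how and why
    data was used. The record schema is illustrative only.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "purpose": purpose,
        "data_fields": data_fields,
    }
    # One JSON record per line, appended so history is never overwritten.
    with open(log_file, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

event = log_processing_event("ai_audit.log", subject_id="user-123",
                             purpose="model fine-tuning",
                             data_fields=["email", "purchase_history"])
```

An append-only, line-delimited log like this is easy to ship to whatever retention or review tooling the organization already uses.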

When licensing or entering into a contract for a generative AI solution, liability, insurance, business continuity, and privacy and confidentiality are all essential contractual considerations. This includes understanding and addressing provisions on confidentiality and data privacy, as well as considering the impact that unavailability of the solution would have on the business.

Risks of Unchecked Generative AI

The risks of unchecked generative AI are real and significant. Companies risk exposing sensitive proprietary data when they feed it into public AI models, which can be used by third parties or the model owner itself.

This risk can be mitigated by localizing the AI model on the company's own system and training those AI models on their company's own data, but it requires a secure, well-governed data stack for the best results.

Companies may unwittingly find themselves infringing on the intellectual property rights of third parties through improper use of AI-generated content, leading to potential legal issues.

Some companies, like Adobe with Adobe Firefly, are offering indemnification for content generated by their models, but the copyright issues will need to be worked out in the future if we continue to see AI systems "reusing" third-party intellectual property.

Here are some of the specific risks associated with unchecked generative AI:

  • Disclosing proprietary information
  • Violating IP protections
  • Exposing personal data
  • Violating customer contracts
  • Risk of deceiving customers

Data privacy breaches can occur if AI systems mishandle personal information, especially sensitive or special category personal data. Companies need to be careful about how they handle customer data in AI systems to avoid these risks.

Industry Response

Leading generative AI companies have taken a proactive approach by committing to voluntary White House guidelines. These guidelines serve as an interim measure until official legislation is passed by Congress. Companies are taking steps to regulate themselves, showing a willingness to adapt to changing regulations.

China, India, and the UK

China's Cyberspace Administration has issued draft rules to regulate Generative AI providers, requiring them to adhere to measures on data governance, quality of training data, bias mitigation, and transparency.

The draft specifically targets the issue of content moderation, requiring companies to ensure synthetic content created is in line with Chinese societal values, free of misleading information, and doesn't infringe intellectual property.

China's draft rules also mandate that providers perform a security assessment before releasing a Generative AI service to the public, which can be conducted independently or through a third-party entity.

India has taken a different approach, ruling out the need for AI-specific legislation, while the UK's Competition and Markets Authority is reviewing foundation models to understand the AI market and ensure market competitiveness and consumer protection.

Credit: pexels.com, AI Generated Graphic With Random Icons

China's draft rules direct providers to prevent discrimination in training data and employ labelling techniques to distinguish synthetic media in accordance with China's Deep Synthesis Provisions.

The UK's light-touch regulatory regime seems to be a departure from China's more stringent approach, with the CMA's review aiming to ensure consumer protection.

Companies Commit to White House Guidelines

Leading generative AI companies have made a commitment to voluntary White House guidelines, which serve as an interim measure until official legislation is passed by Congress.

Generative AI solutions can be procured through contracts, but the terms deserve careful consideration, especially regarding liability: organizations may seek indemnities from the provider for potential IP infringements, data privacy breaches, or confidentiality breaches.

Insurance is another crucial aspect to consider, especially when dealing with smaller AI solution providers, as organizations will want to know if the provider can pay any claims or if relevant insurance is available.

Business continuity is also a significant concern, as Generative AI solutions may become essential to day-to-day business operations, and unavailability could have a substantial impact on the business.

Organizations should give due consideration to provisions regarding confidentiality and data privacy in any contractual framework for the provision of Generative AI services.

Many jurisdictions are developing or enacting new AI laws and regulations, which could override conflicting contract provisions or need to be addressed contractually.

Companies should use time-tested risk reduction strategies based on current regulations and legal precedents to minimize potential issues, rather than waiting for the dust to settle on AI.

Recent class action lawsuits filed in the Northern District of California highlight the importance of responsible data handling and may point to the need to disclose training data sources in the future.

The EU AI Act, currently in Trilogue negotiations, would require companies to transparently disclose AI-generated content, ensure it is not illegal, publish summaries of copyrighted data used for training, and meet additional requirements for high-risk use cases.

The Path Ahead

As we move forward with Generative AI, legal executives will play a leading role in strategic decision-making within the enterprise. They'll take on responsibility and accountability for developing ethical and legal frameworks.

The risk of infringing third-party IP rights, and the risk that IP protections may not be awarded to AI-generated work, will be in focus. This is a critical consideration for any organization adopting Generative AI.

Legal executives should stay closely engaged with the evolution of the technology itself, as well as changing laws and regulations. This will help them ensure compliance with law and regulation.

Taking a whole-of-enterprise approach, important stakeholders will include the C-suite, the lines of business, internal expertise, and external advisors and consultants. These stakeholders will help identify risks, opportunities, and changes to business strategy and processes.

Training people to understand the ethical and legal implications of using Generative AI is crucial. This may fall into the domain of the legal executive.

The legal landscape surrounding Generative AI is complex and rapidly evolving.

Companies that try to wait for the dust to settle on AI may lose market share and customer confidence as faster-moving rivals get ahead.

AI giants like OpenAI are primary targets of several lawsuits over their use of copyrighted data to create and train their models.

Recent class action lawsuits filed in the Northern District of California raise allegations of copyright infringement, consumer protection, and violations of data protection laws.

These filings highlight the importance of responsible data handling and may point to the need to disclose training data sources in the future.

The FTC has charged companies with deceiving consumers about their use of facial recognition technology and retention of user data, leading to shutdowns.

States like New York are introducing laws and proposals to regulate AI use in areas such as hiring and chatbot disclosure.

The EU AI Act is currently in Trilogue negotiations and is expected to be passed by the end of the year, requiring companies to transparently disclose AI-generated content and ensure it's not illegal.

Frequently Asked Questions

What is the use policy for generative AI?

Generative AI chatbots must be used in accordance with our company's conduct and antidiscrimination policies, avoiding content that's inappropriate, discriminatory, or harmful.

What is the AI legislation in 2024?

The California Legislature passed multiple AI-related bills in 2024, focusing on safety, consumer transparency, and privacy protections. These bills impose new obligations on AI developers and users, with a focus on accountability and safeguarding individual rights.

Keith Marchal

Senior Writer

Keith Marchal is a passionate writer who has been sharing his thoughts and experiences on his personal blog for more than a decade. He is known for his engaging storytelling style and insightful commentary on a wide range of topics, including travel, food, technology, and culture. With a keen eye for detail and a deep appreciation for the power of words, Keith's writing has captivated readers all around the world.
