EU AI Act: A Significant Step Toward Global AI Governance

In recent years, Artificial Intelligence (AI) has emerged as a powerful tool that has transformed many aspects of modern life, including creating and consuming content. Using generative AI tools like ChatGPT has opened up new possibilities for content creation but has also raised new challenges and questions around copyright. The issue of copyright and AI-generated content is complex, involving various legal and ethical considerations.

As AI technologies become more prevalent in content creation, it is essential to address the questions of ownership, attribution, and compensation for AI-generated works. One of the primary challenges is that existing copyright laws are struggling to keep up with the rapid advancements in AI technology. The current legal framework, designed for traditional forms of content creation, may not be adequately equipped to address the unique aspects of AI-generated content.

Moreover, as AI-generated content becomes more prevalent, it is crucial to consider the ethical implications, particularly around issues such as bias, privacy, and accountability. AI algorithms can amplify existing biases, leading to unfair treatment of certain groups or individuals. Additionally, AI-generated content can raise privacy concerns as it may involve the use of personal data.

To address these challenges, policymakers, industry leaders, and other stakeholders are working to establish clear guidelines and regulations that balance the interests of creators, users, and AI technologies while considering the ethical implications of AI-generated content. For instance, the European Union (EU) is currently drafting the AI Act, a new law aimed at regulating the use of AI technologies in the EU. We will talk more about this in this article.

What is the EU AI Act?

The European Union (EU) introduced the EU AI Act in April 2021, proposing a comprehensive legal and regulatory framework for AI. The proposed regulation covers all types of AI in various sectors, including entities that use AI systems professionally. The regulation aims to tackle challenges and risks linked to AI development and deployment, including discriminatory and rights-violating AI.

The EU AI Act establishes a legal framework for developing, distributing, and using AI, and places the primary responsibility for compliance on AI system providers. The regulation includes broad and general articles to ensure its application across different industries and use cases. The EU AI Act is currently undergoing the legislative process and is subject to the ordinary legislative procedure of the EU. Members of the European Parliament reached a preliminary agreement on the AI Act in April 2023, and the text is scheduled to proceed to a plenary vote in June 2023. Upon approval, the EU AI Act will be among the first AI-specific regulations in the world.

It is essential to note that the EU AI Act is a significant development in regulating AI systems, as it addresses the associated risks and challenges comprehensively and uniformly. The regulation's general nature ensures adaptability and applicability across different industries and use cases, marking a major step towards AI regulation in the EU.

How would the EU AI Act help with generative works?

The EU AI Act, a proposed regulation for the use of AI technology, may also help govern generative works. The act includes provisions on transparency, data quality, and human oversight, which are relevant to developing and using generative AI models such as ChatGPT. In particular, the act would require companies that use AI tools to disclose any copyrighted materials employed in developing their systems. This could help prevent the unauthorized use of intellectual property in generative works. Additionally, the EU proposes requiring companies that provide generative AI services to explain the reasons and ethical standards behind their decisions.
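
To make the disclosure idea more concrete, below is a minimal Python sketch of how a provider might keep a machine-readable record of the sources used to train a model, from which a copyright disclosure could be generated. The schema, the field names, and the TrainingSource and TrainingSourceLog classes are illustrative assumptions for this article; the draft act does not prescribe any particular format.

```python
from dataclasses import dataclass, asdict, field
import json


@dataclass
class TrainingSource:
    """One data source used to train a generative model (hypothetical schema)."""
    title: str
    rights_holder: str
    licence: str        # e.g. "CC-BY-4.0", "proprietary", "unknown"
    copyrighted: bool


@dataclass
class TrainingSourceLog:
    """Provider-side log from which a copyright disclosure could be produced."""
    model_name: str
    sources: list[TrainingSource] = field(default_factory=list)

    def add(self, source: TrainingSource) -> None:
        self.sources.append(source)

    def copyright_disclosure(self) -> str:
        # Keep only the copyrighted entries for a public summary.
        disclosed = [asdict(s) for s in self.sources if s.copyrighted]
        return json.dumps(
            {"model": self.model_name, "copyrighted_sources": disclosed}, indent=2
        )


log = TrainingSourceLog(model_name="example-generative-model")
log.add(TrainingSource("News archive 2010-2020", "Example Media Ltd", "proprietary", True))
log.add(TrainingSource("Public-domain classics", "n/a", "public domain", False))
print(log.copyright_disclosure())
```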

It's worth noting that generative AI tools like ChatGPT have also come under scrutiny in other areas. For example, the US Consumer Financial Protection Bureau (CFPB) is examining how generative AI tools could propagate bias or misinformation and create risks in the financial services sector. Some experts have pointed out that algorithms used by generative AI tools like ChatGPT could be subject to legal protections similar to those that govern content on social media platforms like YouTube.

Generative AI was not prominently featured in the original proposal for the AI Act, as it only had one mention of “chatbot” in the 108-page document. However, the act has been revised to include stricter rules for “foundation model” systems, which include generative AI systems like ChatGPT. The revised text also emphasizes the importance of developing European standards for AI, which could help ensure that generative AI models meet the act's essential requirements for different levels of risk.
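
The draft framework groups AI systems into four risk tiers (unacceptable, high, limited, and minimal risk), with obligations that scale with the tier. As a rough illustration of that idea only, the Python sketch below maps a few example use cases to those tiers; the mapping and the classify_risk helper are simplifying assumptions for this article, not the act's legal test.

```python
from enum import Enum


class RiskTier(Enum):
    """The draft act's four risk tiers and the rough obligation attached to each."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict obligations (conformity assessment, human oversight, logging)"
    LIMITED = "transparency obligations (e.g. disclose that users face an AI system)"
    MINIMAL = "no additional obligations"


# Illustrative mapping only; the real classification follows the act's annexes
# and requires case-by-case legal analysis.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def classify_risk(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example; default to LIMITED as a
    conservative placeholder when the use case is not in the table."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.LIMITED)


for case in EXAMPLE_USE_CASES:
    tier = classify_risk(case)
    print(f"{case}: {tier.name} -> {tier.value}")
```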

Risks and challenges associated with the development and deployment of AI

The development and deployment of AI come with various risks and challenges that must be addressed to ensure its ethical use. One of the main concerns is that AI systems, if not implemented correctly, can violate human rights and discriminate against marginalized communities. Discriminatory AI systems can lead to biased decision-making processes that disproportionately affect certain groups, such as migrants, refugees, and asylum seekers.

Moreover, AI systems that interact with physical objects, such as autonomous vehicles and robots, have the potential to cause harm, making safety and security a significant ethical concern in AI development. AI-generated code is another source of risk: while LLMs are powerful tools for answering high-level yet specific technical questions, their ability to generate reliably functional code is limited, and code produced this way can lead to unintended consequences.

To address these challenges, the Asilomar AI Principles recommend that AI systems be developed and deployed in ways that reduce the risk of unintentional harm to humans. It is also important to ensure that AI systems are designed to be inclusive and transparent.

As the EU and the US are jointly pivotal to the future of global AI governance, it is crucial to ensure that EU and US approaches to AI risk management are generally aligned to facilitate bilateral trade. At the same time, AI developers need to establish safeguards that protect users from potential risks. OpenAI, for instance, has established AI safeguards and has a vision for AI's ethical and responsible development.

How is the United States looking at AI copyright?

The topic of AI copyright rules in the United States is a complex and evolving issue. Several recent legal cases and proposed regulations shed light on the current state of the law.

One major concern is whether AI-generated works can be protected by copyright law. Currently, most countries, including the US, require a human author for copyright protection to arise. However, ongoing discussions and proposed legislation may change this requirement in the future.

Another issue is the use of copyrighted material in training AI models. Some AI tools are trained on massive datasets that contain copyrighted works without obtaining specific licensing for this use. This raises questions about whether this use constitutes copyright infringement.

Recent legal cases also shed light on the issue of AI copyright rules. For example, Getty Images filed a lawsuit against Stability AI in February 2023, alleging copyright infringement, trademark infringement, and trademark dilution.

In April 2023, the US Supreme Court heard a case that could have implications for AI-generated works. The case concerns fair use law and whether uses of copyrighted material like those made by AI tools could be protected under it.

Proposed regulations in the European Union may also have an impact on AI copyright rules in the US. The EU is drafting the AI Act to regulate emerging AI technology, including copyright and intellectual property issues.

In conclusion

EU lawmakers have agreed that companies using generative AI tools like ChatGPT will have to disclose any copyrighted material used in developing their systems as part of a larger draft law known as the AI Act. In my opinion, this is a big move.

The complex issue of AI-generated content and copyright requires attention from both legal and ethical perspectives. While debates and lawsuits continue regarding the use of generative AI tools in content creation, it is apparent that current copyright laws are struggling to keep up with technological advancements.

As AI continues to revolutionize content production and consumption, policymakers and industry leaders must collaborate to establish guidelines that balance the interests of creators, users, and AI technologies. These guidelines should provide clarity on issues like ownership, attribution, and compensation for AI-generated content.

It is also essential to consider the ethical implications of AI-generated content, including issues like bias, privacy, and accountability. As AI-generated content becomes more prevalent, it is crucial to ensure responsible and transparent production and use.

To address this issue, policymakers, industry leaders, and other stakeholders must work together to establish clear guidelines and regulations that balance the interests of all parties involved and take the ethical implications of AI-generated content into account. This effort is critical to ensuring that AI continues to transform content creation and consumption fairly, equitably, and responsibly.

Anndy Lian is the chief digital advisor for the Mongolian Productivity Organisation, a partner and fund manager overseeing blockchain investments for Passion Venture Capital Pte. Ltd. He is the author of the best-selling book, “Blockchain Revolution 2030” published by Kyobo, the largest bookstore chain in South Korea. He was previously the chairman of BigONE Exchange and an Advisory Board Member of Hyundai DAC.