Introduction
Recent technological innovations have brought many advantages to companies and individuals alike, transforming the way they operate. Among these new technologies, Artificial Intelligence (AI) stands out as a groundbreaking tool with a wide range of potential applications, from automated decision-making systems to sophisticated machine learning algorithms. This has created a need to regulate the AI sector effectively, striking a balance between fostering innovation and safeguarding against potential risks.
Since early 2016, many national, regional and international authorities have adopted strategies, action plans and policy papers on Artificial Intelligence, addressing a wide range of topics such as regulation and governance, industrial strategy, and research and infrastructure, in ways that vary from country to country. The three major economies, the US, China and the EU, are taking different approaches. This insight gives a general overview of how the most developed economies have regulated the use of Artificial Intelligence.
The US’s AI regulations are still evolving
The US is still working on its regulatory strategy, but some American states have already passed laws limiting the use of AI in areas such as police investigations, employment, insurance and health care, as well as facial recognition in public settings.
President Joe Biden's executive order on AI, released on 30 October 2023, sets standards for security and privacy protections and builds on voluntary commitments by AI companies, which are required to provide the Federal Government with assessments of their systems' vulnerability to cyber-attacks, the data used to train and test their AI, and evaluations of its performance. For this reason, the US is often said to be following a "market-driven approach".
Moreover, Members of Congress have shown interest in passing laws that would introduce a licensing process for advanced AI models, establish an independent federal office to oversee AI, and impose liability on companies for privacy and civil rights violations. But a comprehensive federal strategy has yet to emerge.
China’s new regulations on AI systems require state control
On 15 August 2023, new regulations took effect in China governing generative AI and recommendation systems, i.e. the algorithms that analyse people's online activity to determine which content, including advertisements, to show in users' feeds.
To protect consumers, these laws introduce new restrictions on companies providing AI services, which must comply with rules on data privacy and intellectual property and make all automated decision-making transparent.
As well as promoting AI innovation, China's AI regulation ensures state control over the technology. China takes a more vertical, "state-driven" approach, combining discrete national, provincial and local regulations to address AI issues while upholding state power and cultural values.
The EU AI Act: a rights-driven approach balancing innovation and concerns about over-regulation
On 8 December 2023, the European Parliament and Council reached a political agreement on the EU's Artificial Intelligence Act, the first comprehensive AI law of its kind, which is expected to become law in early 2024.
As part of its digital strategy, the EU wants to regulate the use of AI to ensure better conditions for the development of this innovative technology. The priority is to ensure that AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly; for this reason, the EU's can be called a "rights-driven approach". Under the EU AI Act, AI systems are classified according to the risk they pose to users and regulated accordingly: the higher the risk, the stricter the rules. Limited-risk AI systems will be subject only to transparency requirements. High-risk AI systems, by contrast, will face strict rules on fundamental rights impact assessments, conformity assessments, data governance, risk management and quality management systems, transparency, accuracy, robustness and cybersecurity.
It was also agreed to ban certain AI systems deemed to pose an unacceptable risk, i.e. those that negatively affect safety or fundamental rights or violate EU values. Among others, the ban covers AI systems used for the following purposes:
- Biometric identification and categorisation systems that use sensitive characteristics of natural persons;
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- Emotion recognition in the workplace and educational institutions;
- Social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour in order to circumvent people's free will;
- AI used to exploit the vulnerabilities of specific groups;
- Certain predictive policing applications based on profiling individuals.
However, AI systems developed exclusively for military and defence purposes fall outside the scope of the Act, while AI systems used by law enforcement authorities for their institutional purposes will be subject to specific provisions.
The EU AI Act sets out the responsibilities of all parties involved in the AI value chain, as well as those of users, whether located inside or outside the EU.
Some EU countries have raised concerns about over-regulation of the AI sector and the strict rules imposed on the companies operating in it, arguing that these could hinder innovation by AI start-ups. France and Germany in particular have pushed back against the EU AI Act: France is the leading AI hub in continental Europe, and both countries host some of the region's biggest AI start-ups, whose activities their governments want to protect.
The European Commission planned to apply additional rules only to the most powerful AI models, classified on the basis of the computing power needed to train them. However, as this assessment involves judgment calls, it remains uncertain how these rules will apply, especially to models like OpenAI's GPT-4 or Google's Gemini.
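To make the compute-based classification concrete, here is a minimal illustrative sketch. It assumes the 10^25-FLOP training-compute threshold reported in press coverage of the provisional agreement, and the common rule of thumb that training a dense model costs roughly 6 FLOPs per parameter per training token; the threshold value, the helper names and the model figures below are assumptions for illustration, not official criteria.

```python
# Illustrative sketch only: estimating whether a model's training compute
# would cross the ~1e25-FLOP threshold reported for the EU AI Act's most
# powerful model tier. The threshold and all figures are assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # assumed threshold, per press reports

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate using the common ~6 FLOPs per parameter per token rule."""
    return 6 * n_parameters * n_training_tokens

def is_above_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 2T tokens
# comes out at ~8.4e23 FLOPs, i.e. below the assumed threshold.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> above threshold: {is_above_threshold(70e9, 2e12)}")
```

Even under this simple approximation, the outcome depends on disputed inputs (parameter counts and training data sizes are rarely disclosed), which is one reason the classification of models like GPT-4 or Gemini remains uncertain.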
Another widely discussed topic within the EU has been the proposal for a near-total ban on police use of biometric identification systems in public places. Facing opposition from some countries, such as France, the current compromise allows European police forces to use these systems only with court approval, and only for specific crimes or in "exceptional circumstances relating to public security", in line with European standards.
The UK’s AI White Paper and its hands-off approach
In 2015, the UK introduced AI as part of its Digital Strategy and supported its application in business and the public sector, providing guidance on the design and development of AI systems. In the following years, the UK also worked on cybersecurity and on assessing long-term AI-related risks.
On 29 March 2023, the British Government published its AI White Paper, setting out principles for regulating the use of AI within the country, and in November 2023 the UK hosted the first global AI Safety Summit, aiming to position itself as a leader in AI regulation.
The White Paper is based on the following five principles, though it leaves existing regulators significant flexibility in how they adapt them to specific sectors:
- Safety, security, and robustness (i.e. AI systems trained and built on robust data);
- Appropriate transparency and explainability (to their users);
- Fairness;
- Accountability and governance (i.e. appropriate oversight over the AI systems’ use);
- Contestability and redress.
Moreover, as reported by the Financial Times, the UK Government does not intend to pass legislation in the short term, arguing that overly strict regulation could hinder the industry's growth. Instead, it will carry forward a hands-off approach, focusing on setting expectations for the development and use of AI. This approach has been criticised, however: some argue that it may discourage investors who seek transparency and security in the AI sector, likely undermining the UK's ambition to become an international standard-setter in AI governance ahead of the US and the EU.
Despite pressure from the EU, the US and China, all of which are advancing new measures, the UK Government confirms in the White Paper its pro-innovation approach and its commitment to working with businesses, helping institutions understand and improve their interaction with new technologies, and collaborating with key international partners such as the US and other G7 countries.
Conclusion
As this insight has shown, the regulatory framework for Artificial Intelligence is highly fragmented and not yet well defined: most of today's legislative proposals are still under review and may change in the coming weeks or months.
As AI has direct implications for companies and for individuals' lives, implementing the regulations described above is considered a global priority. While these political agreements are turned into binding law, policymakers and industry stakeholders will need to define more detailed guidelines on how AI regulation should be applied in different settings.
Meanwhile, the approaches adopted by the most developed economies and the AI regulations released by their governments represent a significant step toward striking a balance between regulation and innovation.