AI Regulations: What Are the Positions of the US, EU, and UK?


Artificial intelligence (AI) is one of the breakthroughs brought about by innovations in modern technology, and it is a branch of computer science.

The success of AI is spreading rapidly, and it is greatly transforming many aspects of human endeavor. For instance, a report published by PwC estimates that AI could contribute up to $15.7 trillion to the world economy by 2030.

Artificial Intelligence Explained

Artificial intelligence (AI) is the simulation of human intelligence in machines that are designed to think and behave like humans. These machines are programmed to perform tasks such as reasoning, problem-solving, perception, and language comprehension.

In other words, artificial intelligence is achieved by programming a computer or software to think and act intelligently, like a human being. AI grew out of studies of the human brain and its cognitive activities, and it was the outcomes of these studies that made the field possible.

AI has been very beneficial to mankind in key areas such as fraud prevention, criminal justice, investing, portfolio management, language interpretation, gunshot detection, retirement planning, mortgages, medicine, and more.

Despite the great benefits of AI, stakeholders around the world have called for laws and frameworks to regulate the new technology because of the potential risks it poses.

Some of the potential risks of AI that stakeholders have pointed out include: violation of privacy, job displacement caused by the automation of certain jobs, bias in programming, the danger of unclear legal regulation, algorithmic bias due to bad data, socioeconomic inequality, and many others.

As a result of these possible risks, many countries have been developing frameworks and policies on how to regulate the technology.

In this article, we look at how the United States, the European Union, and the United Kingdom have been dealing with AI regulations.

United States

Proposed National AI Commission Act

AI regulation in the US is one of the major topics that has dominated global media in recent days.

The bill was introduced on June 20, 2023, and was co-sponsored by two federal lawmakers in the House of Representatives, Ted Lieu and Ken Buck.

If the proposed National AI Commission Act passes, it will establish an independent commission of 20 experts tasked with developing a comprehensive regulatory framework for AI in the United States.

The commission would also review current and proposed regulatory activities across the United States with a view to identifying which areas should be incorporated into a single framework and law.

If the bill passes, the commission would be empowered to recommend “any governmental bodies that may be essential to coordinating and regulating AI systems in the United States.”

As promising as the proposed Act is, there is no indication yet of any support for the legislation from the House of Representatives leadership.

States Across the US

The number of bills seeking to regulate AI across the United States is increasing. However, not all of the bills have been passed into law by state legislatures.

As of August 2023, 25 states, along with Puerto Rico and the District of Columbia, had introduced bills aimed at regulating AI. Of those, only 14 states and Puerto Rico passed such bills.

One of the main reasons why states are passing bills to regulate AI is to ensure that the rights of the people are well protected.

For instance, in 2020, the Illinois legislature passed a law mandating employers to notify job applicants if the company will use AI to analyze a video-recorded interview.

Also, in 2021, the state of Colorado passed a law preventing insurers from using algorithms in a way that would discriminate on the basis of race, religion, disability, gender expression, sex, gender identity, or sexual orientation.

In 2023, the state of Connecticut mandated the state’s Department of Administrative Services to start monitoring the use of AI by state agencies so that it does not lead to discrimination.

President Biden’s Executive Order

On October 30, 2023, the President of the United States signed a historic executive order on artificial intelligence. The executive order covers vital areas such as consumer privacy, national security, commercial competition, and civil rights.

The administration said that the purpose of the order is to take the necessary steps toward ensuring the safe, secure, and trustworthy development and use of AI.

While signing the order, President Biden said that the country should have the right framework to govern AI so as to realize its promise and avoid the risks involved.

Key Takeaways from Biden’s Executive Order

  • The order mandates that some AI companies share their safety test results with the federal government.

  • It directs the Department of Commerce to develop guidance for AI watermarking.

  • It also calls for a cybersecurity program that would use AI tools to help find and fix flaws in critical software.

  • It aims at protecting consumer privacy as well as creating policies that agencies can use in assessing privacy techniques used in AI.

  • It provides guidance to relevant stakeholders to help avoid AI algorithms that promote discrimination, and to ensure the appropriate use of AI in the justice system.

  • It protects consumers by directing the Department of Health and Human Services to form a system for assessing potentially harmful AI-related health-care practices.

  • To support workers, the order calls for an assessment of the likely negative effects AI could have on the labor market, and of how the government could support affected workers.

European Union (EU)

Let’s take a look at the EU’s position on AI regulation. The EU bloc has been very swift to propose legislation to regulate AI so as to protect its members from the particular risks the technology poses.

In its proposal, the EU is making moves to regulate AI through the Digital Services Act, the General Data Protection Regulation (GDPR), and the proposed Data Governance Act.

Prohibited AI systems are those that are totally against the values of the EU or pose a significant risk to the fundamental rights of its people.

United Kingdom

AI regulation in the UK is taking a different direction. Unlike the European Union and the United States, which are making significant moves to regulate AI, the government of the UK has so far refrained from introducing regulation.

Speaking to journalists recently during a Financial Times conference, the UK’s minister for AI and intellectual property, Viscount Jonathan Camrose, said that the country will not be regulating AI any time soon because of concerns that regulating the technology may stifle growth.

According to him, the government of the UK believes that premature regulation poses a risk of causing more harm than good by curbing innovation and technological growth.
