Explained: What is the European Union AI Act, and what it may mean for ChatGPT
The AI Act, which has been in the works for over two years, is expected to be a landmark piece of EU legislation governing the use of artificial intelligence in Europe.
Lawmakers have proposed classifying different AI tools according to their perceived level of risk, from low to unacceptable. Governments and companies using these tools will have different obligations, depending on the risk level.
WHAT IS THE SCOPE OF THE ACT?
The Act is expansive and will govern anyone who provides a product or a service that uses AI. The Act will cover systems that can generate output such as content, predictions, recommendations, or decisions influencing environments.
Apart from uses of AI by companies, it will also cover AI used in the public sector and by law enforcement. It will work in tandem with other laws such as the General Data Protection Regulation (GDPR).
Those using AI systems that interact with humans, are used for surveillance, or can generate “deepfake” content will face strong transparency obligations.
WHAT’S CONSIDERED ‘HIGH RISK’?
A number of AI tools may be considered high risk, such as those used in critical infrastructure, law enforcement, or education. They are one level below “unacceptable,” and therefore are not banned outright.
Instead, those using high-risk AI systems will likely be obliged to complete rigorous risk assessments, log their activities, and make data available to authorities for scrutiny. That is likely to increase compliance costs for companies.
The “high risk” categories where AI use will be strictly controlled include areas such as law enforcement, migration, infrastructure, product safety and the administration of justice.
WHAT IS A ‘GPAIS’?
A GPAIS (General Purpose AI System) is a category proposed by lawmakers to account for AI tools with more than one application, such as generative AI models like ChatGPT.
Lawmakers are currently debating whether all forms of GPAIS will be designated high risk, and what that would mean for technology companies looking to adopt AI into their products. The draft does not clarify what obligations AI system manufacturers would be subject to.
WHAT IF A COMPANY BREAKS THE RULES?
The proposals say those found in breach of the AI Act face fines of up to 30 million euros or 6% of global annual turnover, whichever is higher.
For a company like Microsoft, which backs ChatGPT creator OpenAI, that could mean a fine of over $10 billion if it were found to be violating the rules.
WHEN WILL THE AI ACT COME INTO FORCE?
While the industry expects the Act to be passed this year, there is no concrete deadline. The Act is being discussed by parliamentarians, and after they reach common ground, there will be a trilogue between representatives of the European Parliament, the Council of the European Union and the European Commission.
After the terms are finalised, there would be a grace period of around two years to allow affected parties to comply with the regulations.