The AI Act has been a work in progress since 2021, and discussions about how strongly AI should be regulated are only heating up in the EU. While some call for more control and rights for consumers, others fear rules could stall the innovation engine.
Credit: Etienne Ansotte/EU
Members of the EU Parliament have agreed on a first draft for regulating the use of AI. The AI Act is now taking the next procedural step to be negotiated and worked out with individual member states. In the end, there should be an EU-wide body of law to regulate the use of AI technologies such as ChatGPT.
Essentially, the AI Act categorizes AI systems into specific risk classes, ranging from minimal risk, to high-risk systems, to those that should be banned altogether. When AI systems make consequential decisions about people, especially high standards should apply, particularly regarding the transparency of the data a given AI was trained on for its decision-making, and how its algorithms work to ultimately reach decisions. In this way, EU politicians want to ensure that these AI applications function securely and reliably, and don't violate fundamental human rights.
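The tiered structure described above can be sketched in a few lines of code. This is an illustrative simplification only: the tier names are paraphrased from the draft, and the example applications mentioned later in this article (facial recognition, recruiting software, credit checks) are placed in tiers per the draft's general approach, not per any official lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers from the draft AI Act (the draft also
    includes a 'limited risk' transparency tier, omitted here)."""
    MINIMAL = "minimal risk"
    HIGH = "high risk"
    UNACCEPTABLE = "banned"

# Illustrative mapping only -- the Act defines these categories in
# annexes and legal text, not as a lookup table like this.
EXAMPLE_CLASSIFICATION = {
    "spam filter": RiskTier.MINIMAL,
    "credit scoring": RiskTier.HIGH,
    "recruiting software": RiskTier.HIGH,
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
}

def tier_for(application: str) -> RiskTier:
    """Look up an application's risk tier, defaulting to minimal."""
    return EXAMPLE_CLASSIFICATION.get(application, RiskTier.MINIMAL)
```

The point of the tiering is that obligations scale with the tier: minimal-risk systems face few requirements, high-risk systems face transparency and oversight duties, and banned applications may not be marketed at all.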
Until a final set of rules is in place, though, there will inevitably be discussion within the various EU bodies, as there is no consensus. Italy, for instance, recently took a tougher stance and banned OpenAI's generative AI tool ChatGPT due to a lack of age controls and possible copyright infringement in the training data. In the meantime, however, Italian authorities have allowed ChatGPT use again under certain conditions.
Other EU countries have followed the initiative of the Italian data protection authorities. Germany, for one, has raised the possibility that ChatGPT should be banned if it can be proven the tool violates applicable data protection rules.
Too much regulation hampers innovation
While consumer advocates are calling for strict rules to protect citizens' rights, business representatives warn that overly strict regulation of the technology could slow innovation. According to advocates of a less strict interpretation of the AI Act, the EU could fall behind in an important future-oriented industry.
In an open letter, representatives of the Large-scale Artificial Intelligence Open Network (LAION e.V.) called on EU politicians to proceed with moderate AI regulation. The intention to introduce AI supervision is welcome, it says, but such oversight must be carefully calibrated to protect research and development, and to maintain Europe's competitiveness in the field of AI. Signatories include Bernhard Schölkopf, director at the Max Planck Institute for Intelligent Systems in Tübingen, and Antonio Krüger, head of the German Research Center for Artificial Intelligence (DFKI).
LAION demands that open-source AI models in particular shouldn't be over-regulated. Open-source systems allow more transparency and security in the use of AI, it argues, and open-source AI would prevent a few corporations from controlling and dominating the technology. In this way, moderate regulation could also help advance Europe's digital sovereignty.
Too little regulation weakens consumer rights
On the other hand, the Federation of German Consumer Organizations (VZBV) calls for more rights for consumers. According to a statement by the consumer advocates, consumer decisions will increasingly be influenced by AI-based recommendation systems, and to reduce the risks of generative AI, the planned European AI Act should ensure strong consumer rights and the possibility of independent risk assessment.
“The risk that AI systems lead to false or manipulative purchase recommendations, ratings and consumer information is high,” said Ramona Pop, board member of VZBV. “Artificial intelligence is not always as intelligent as the name suggests. It must be ensured that consumers are adequately protected against manipulation and deception, for example, through AI-controlled recommendation systems. Independent scientists must be given access to the systems to assess risks and functionality. We also need enforceable individual rights of those affected against AI operators.” The VZBV also adds that people must be given the right to correction and deletion if systems such as ChatGPT cause disadvantages due to reputational damage, and that the AI Act must ensure AI applications comply with European laws and correspond to European values.
Self-assessment by manufacturers is not enough
Although the Technical Inspection Association (TÜV) basically welcomes the agreement by groups in the EU Parliament on a common position for the AI Act, it sees further potential for improvement. “A clear legal basis is needed to protect people from the negative consequences of the technology, and at the same time, to promote the use of AI in business,” said Joachim Bühler, MD of TÜV.
Bühler says it must be ensured that specifications are observed, particularly with regard to the transparency of algorithms. However, an independent review is intended only for a small proportion of high-risk AI systems. “Most critical AI applications such as facial recognition, recruiting software or credit checks should continue to be allowed to be launched on the market with a pure manufacturer's self-declaration,” said Bühler. In addition, the classification as a high-risk application is to be based in part on a self-assessment by providers. “Misjudgments are inevitable,” he adds.
According to TÜV, it would be better to have all high-risk AI systems independently tested before launch to ensure the applications meet security requirements. “This is especially true when AI applications are used in critical areas such as medicine, vehicles, energy infrastructure, or in certain machines,” said Bühler.
AI should serve, not manipulate
While discussions about AI regulation are in full swing, the G7 digital ministers, meeting in Takasaki, Japan, at the end of April, spoke out in favor of accompanying the rapid development of AI with clear international rules and standards, according to a statement by the Federal Ministry for Digital Affairs and Transport (BMDV).
“We in the G7 agree that when it comes to regulating AI, we must act quickly,” said Volker Wissing, Germany's Minister of Transport and Digital Infrastructure. “Generative AI has immense potential to increase our productivity and make our lives better. It's all the more important that the large democracies lead the way and accompany its development with clever rules that protect people from abuse and manipulation. Artificial intelligence should serve us, not manipulate us.”
But it's questionable whether things will happen as quickly as Wissing would like, seeing as the AI Act has been in the works in Brussels since April 2021. After the agreement in the EU Parliament, trilogue negotiations between the Council, Parliament, and Commission could begin in the summer of 2023. It's anyone's guess when a final set of rules will be in place and converted into applicable law, and it's an open question whether technological development of AI will outpace attempts at regulation by then.