Artificial Intelligence is a threat to society, an EU committee (EESC) has warned
Artificial intelligence (AI) could pose a threat to the fabric of our society, according to an opinion published last night by the European Economic and Social Committee (EESC).
The EESC’s Workers’ Group, which is made up of representatives from national trade unions, identified 11 areas in which AI raises societal concerns – ranging from labour to warfare.
AI, which covers technology ranging from the Netflix algorithms that predict your next favourite show to futuristic autonomous robots, should be regulated by EU policy so that it works for society and social wellbeing, the group said.
“We need a human-in-command approach to AI, where machines remain machines and people retain control over these machines at all times,” said the group’s spokesperson Catelijne Muller.
“It is up to us to decide if we want certain jobs to be performed, care to be given or medical decisions to be made by AI, and if we want to accept AI that may jeopardise our safety, privacy or autonomy.”
The AI market currently amounts to around $664m, according to the EU, and is expected to grow to $38.8bn by 2025.
Norms and standards should be developed to regulate this market, the EESC said, including an ethical code and labour strategies to protect jobs.
So what are the 11 social areas which are under threat from AI?
Ethics. According to the committee, self-teaching AI could affect our fundamental human rights and values – what it means to be human. This, it noted, is compounded by the fact that most AI is programmed by young, white men, who cannot be said to represent the general population.
Safety. As the attacks on the NHS and various other organisations proved last month, where there is technology, failure is only a step away. Added to that, any form of AI designed to function in society must be tested in a multitude of situations to judge how it will react.
Privacy. AI already pervades our household appliances, health trackers and smartphones, sending data back to manufacturers, and care must be taken over how that information is treated, the group said. Businesses may use the data to influence buying choices, or sell it on for profit.
Transparency and accountability. Since AI is already used to intervene in people’s lives, such as assessing mortgage applications or authorising insurance, its decision-making process must be transparent, the committee noted.
Work. As in the last industrial revolution, a big advance in technology will doubtless lead to job losses. Yet the committee drew attention to the fact that machines can now perform not only physical tasks, but also cognitive ones – potentially affecting highly skilled employees as much as low-skilled workers.
Education and skills. In a rapidly changing labour environment, the average person will have to become much more adept at coding and programming. But despite the best efforts of education providers to prepare for this, not everyone can – or will – adapt to the digital age, said the Workers’ Group.
(In)equality and inclusiveness. The committee was concerned that most artificial intelligence development is in the hands of the big five tech firms – Amazon, Facebook, Apple, Google and Microsoft. They may not make their technology accessible to all, it said, for example by putting a price tag on its use.
Law and regulations. What rights should AI have? If it is fully autonomous, should its manufacturer still be held responsible for its actions? The EESC said it is opposed to any form of legal status for robots, fearing the effect this would have on moral responsibility.
Governance and democracy. AI could help promote involvement in public policy, the committee conceded. However, it raised concerns that voting behaviour could be influenced by smart technology. Especially worrying is the use of smart algorithms which create “filter bubbles” or promote “fake news” on social media, it said.