Can robots be the victims of harassment in the workplace? The law will have to be revised to deal with more automation
In November, the Bank of England warned that up to 15m British jobs are under threat from automation. And whether you work on the factory floor or the trading floor, the probability is high that you’ll have to get used to more machines in the workplace.
But as technological change accelerates, lawmakers will have to keep pace, adapting legal frameworks around employment, workplace harassment, discrimination and crime. So what should workers be aware of as robots become more common in the office?
DISCRIMINATION
Managing a mixed workforce of both human and virtual employees throws up a number of ethical issues. Companies like Affectiva and Emotient are developing technology which can analyse facial expressions and other body language, and the usefulness of these systems in the recruitment process is already being explored.
According to Freshfields Bruckhaus Deringer, however, this could throw up legal problems for companies, as the technology might breach legislation which protects against discrimination in recruitment. “When behavioural data is collected and compared to similar data about successful workers, unintended correlations can emerge that negatively impact candidates,” warns a white paper by Littler Mendelson, a US law firm specialising in robotics employment law.
HARASSMENT
Robots’ entitlement to human rights is perhaps a debate for the future, but as humans and machines work together more closely and on a more frequent basis, it may become a point of contention for businesses. The issue of showing respect to machinery, for example, is likely to cause division.
Emily Dreyfuss told Wired about her experience using a telepresence robot (imagine an iPad mounted on a Segway) so she could move around the office remotely while pregnant and working from home. She describes “how instantly violated” she felt when a colleague picked up the robot and shook it. “He just picked up an extension of my body,” she says. Would the legal system need to account for changing attitudes towards technology used in different contexts?
HUMAN AGENCY
There is also the issue of criminal liability: how should the law respond when a robot is used to commit a corporate crime? “Some thought needs to go into when, and in what circumstances, we make the designer or manufacturer liable rather than the user,” Bournemouth University’s Jeffrey Wale and David Yuratich told The Conversation. “Much of our current law assumes that human operators are involved.”
To secure a conviction, the Crown Prosecution Service must currently prove that a defendant had the necessary guilty state of mind, known as “mens rea”, when committing the criminal act. But last year, a group of Swiss artists programmed a bot to spend $100 in bitcoin per week on a darknet market listing both legal and illegal items. They took full responsibility after the bot purchased ecstasy pills, a fake Hungarian passport and other illicit products, but no charges were brought against them.
Businesses are bound by a large number of laws which assume a degree of human control. These would need to be re-examined across a number of fields, from purchasing to transportation, if bosses and lawyers are to have clarity and apply the rules consistently.
Take unmanned vehicles, for example. The rules of the road assume a human driver “to at least some degree,” say Wale and Yuratich. “Once fully autonomous vehicles arrive, that framework will require substantial changes to address the new interactions between human and machine on the road.”