This week, the all-party parliamentary group (APPG) on the future of work, a special interest group of members of parliament in the U.K., said that the monitoring of workers through algorithms is damaging to employees’ mental health and needs to be regulated through legislation. This legislation, they said, could ensure that companies evaluate the effect of “performance-driven” guidelines, like queue monitoring in supermarkets, while providing employees the means to fight back against perceived violations of privacy.
“Pervasive monitoring and target-setting technologies, in particular, are associated with pronounced negative impacts on mental and physical wellbeing as workers experience the extreme pressure of constant, real-time micromanagement and automated assessment,” wrote the APPG members in a report. “[A new algorithms act would establish] a clear direction to ensure AI puts people first.”
Monitoring employees with AI
The trend toward remote and hybrid work has prompted some companies to increase their use of monitoring technologies — ostensibly to ensure that employees remain on task. Employee monitoring software is a broad category, but generally speaking, it encompasses programs that can measure an employee’s idle time, access webcams and CCTV, track keystrokes and web history, take screenshots, and record emails, chats, and phone calls.
In a survey, VPN provider ExpressVPN found that 78% of businesses were using monitoring software like TimeDoctor, Teramind, Wiretap, Interguard, Hubstaff, and ActivTrak to track their employees’ performance or online activity. Meanwhile, tech giants like Amazon ding warehouse employees for spending too much time away from the work they’re assigned to perform, like scanning barcodes or sorting products into bins.
A Washington Post piece published this week focusing on the legal industry found that facial recognition monitoring has become pervasive in contract attorney work. Firms are requiring contract attorneys to submit to “finicky, error-prone, and imprecise” webcam-based systems that record facial movements and surroundings, sending an alert if the attorney allows unauthorized people into the room. According to the report, some of the software also captures a “webcam feed” for employers that includes snapshots of attorney “violations,” such as when a person opens a social media website, uses their phone, or blocks the camera’s view.
Employers cite the need for protection against time theft — according to one source, employers lose about 4.5 hours per week per employee to time theft — but workers feel differently about the platforms’ capabilities. A recent survey by ExpressVPN found 59% of remote and hybrid workers feel stress or anxiety as a result of their employer monitoring them. Another 43% said that the surveillance felt like a violation of trust, and more than half said they’d quit their job if their manager implemented surveillance measures.
Privacy concerns aside, there’s the potential for bias to arise in the software’s algorithms. Studies show that even differences between camera models can cause an algorithm to be less effective in classifying the objects it was trained to detect. In other research, text-based sentiment analysis systems have been shown to exhibit prejudices along race, ethnic, and gender lines — for example, associating Black people with more negative emotions like anger, fear, and sadness.
In some cases, biases and other flaws have caused algorithms to penalize workers for making unavoidable “mistakes.” A former Uber driver has filed a legal claim in the U.K. alleging that the company’s facial recognition software works less effectively on darker skin. And Vice recently reported that AI-powered cameras installed in Amazon delivery vans incorrectly flagged workers whenever cars cut them off, a frequent occurrence in traffic-heavy cities like Los Angeles.
Progress and the road ahead
In the U.S., as in many countries around the world, employees have little in the way of legal recourse when it comes to monitoring software. The U.S. Electronic Communications Privacy Act (ECPA) of 1986 allows companies to surveil communications for “legitimate business-related purposes.” Only two states, Connecticut and Delaware, require employers to notify employees if their email or internet activities are being monitored, while Colorado and Tennessee require businesses to set written email monitoring policies.
As a small sign of progress, earlier this year, California passed AB-701, legislation that prevents employers from algorithmically counting health and safety compliance against workers’ productive time. Legislation proposed in the New York City Council seeks to update hiring discrimination rules for companies that choose to use algorithms as part of the process.
For its part, the APPG recommends that workers be involved in the design and use of algorithm-driven systems that make decisions about the allocation of shifts, pay, hiring, and more. It also strongly suggests that corporations and public sector employers fill out impact assessments aimed at identifying problems caused by the systems, and that certification and guidance be introduced for the use of AI and algorithms at work.
“It is clear that, if not properly regulated, algorithmic systems can have harmful effects on health and prosperity,” David Davis, one co-author of the report, wrote. Added fellow co-author Clive Lewis: “There are marked gaps in regulation at an individual and corporate level that are damaging people and communities.”
They have a point.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
AI Staff Writer