A number of large US companies are using AI monitoring systems to analyse employee communications in popular business apps like Slack, Teams, and Zoom.
One company claims its AI models can analyse the content and sentiment of both text and images posted by employees, reports CNBC.
Some of these tools are being used in relatively innocuous ways – such as gauging aggregate employee reactions to new corporate policies.
“It won’t have names of people, to protect the privacy,” said Aware CEO Jeff Schumann. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”
But other tools – including another offered by the same company – can flag the posts of specific individuals.
Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors.
Chevron, Delta, Starbucks, T-Mobile, and Walmart are just some of the companies said to be using these systems. Aware says it has analysed more than 20 billion interactions across more than three million employees.
While these services build on non-AI-based monitoring tools that have been used for years, some observers are concerned that they have crossed into Orwellian territory.
Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.
Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen” […]
Amba Kak, executive director of the AI Now Institute at New York University, worries about using AI to help determine what’s considered risky behavior.
“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.”
An additional concern is that even aggregated data may be easily de-anonymized when reported at a granular level, “such as employee age, location, division, tenure or job function.” Slice a report finely enough – employees over 40 in one division at one site, say – and a group can shrink to a handful of people, or just one.
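To make that concrete, here is a minimal sketch of the slicing problem. The employee records and field names (age_band, location, division) are entirely hypothetical and do not reflect Aware’s actual systems or data:

```python
from collections import Counter

# Hypothetical employee records - illustrative only.
employees = [
    {"id": 1, "age_band": "40+", "location": "Ohio",  "division": "Finance"},
    {"id": 2, "age_band": "40+", "location": "Ohio",  "division": "Sales"},
    {"id": 3, "age_band": "<40", "location": "Ohio",  "division": "Finance"},
    {"id": 4, "age_band": "<40", "location": "Ohio",  "division": "Finance"},
    {"id": 5, "age_band": "<40", "location": "Texas", "division": "Finance"},
]

def slice_key(e):
    # The granular dimensions an "anonymous" report might be broken down by.
    return (e["age_band"], e["location"], e["division"])

group_sizes = Counter(slice_key(e) for e in employees)

# Any slice containing a single employee is effectively de-anonymized:
# an "aggregate" sentiment score for that slice describes that one person.
for group, size in group_sizes.items():
    if size == 1:
        print(f"slice {group} -> exactly one employee")
```

A standard mitigation is to suppress any slice below a minimum group size (the k-anonymity approach), which is precisely the safeguard critics worry granular reporting can defeat.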