
Companies Like Chevron, Walmart, And Starbucks Are Using AI For Employee Chat Monitoring

Aware is an AI-powered software startup whose product analyzes employee chats on Slack, Microsoft Teams, and Workplace by Meta. Its stated goal is keeping tabs on staff communications to flag possible risks.

Planning to make a lewd comment about a teammate in a direct Slack message? Think twice before hitting send, because an AI could flag it as a possible breach of company policy.

According to the company, Aware is used by several prominent American corporations, including Delta, Starbucks, Chevron, and T-Mobile, to analyze up to 20 billion individual messages exchanged among their more than 3 million employees.

While monitoring individuals in the workplace is not new, some experts are worried that making decisions based on incomplete or inaccurate data collected by emerging AI technologies might be a privacy nightmare.

Most of the companies named have yet to confirm the details. Delta, however, said it uses Aware to manage its legal records and to perform “routine monitoring of trends and sentiment” on its internal social media platform.

According to a CNBC interview with Aware co-founder and CEO Jeff Schumann, businesses use Aware’s AI technology to see how rank-and-file employees react to changes in corporate policy or to advertising campaigns. He added that employers can observe how employees’ opinions vary by factors such as age and location.

Aware can also surface possible hazards in the workplace. According to Schumann, the startup’s suite of large language models learns from employee interactions and can identify toxic behaviors such as “bullying, discrimination, harassment, pornography, and nudity” by analyzing the text and images in conversations.

“It’s always tracking real-time employee sentiment, and it’s always tracking real-time toxicity,” he added regarding the AI tool.

The CEO said that the sentiment and toxicity data Aware collects about employees on the job does not include their identities. Under severe circumstances, however, that anonymity can be set aside.
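Aware has not disclosed how its models work, but the behavior Schumann describes, continuous and anonymized scoring of message sentiment and toxicity, can be sketched in a few lines of Python. The sketch below is purely illustrative: the Message record, the placeholder score_message function, and the word lists are assumptions, and a production system would rely on large language models rather than keyword checks.

```python
from collections import defaultdict
from dataclasses import dataclass


# Hypothetical message record; in a real deployment this would come from the
# Slack, Microsoft Teams, or Workplace APIs.
@dataclass
class Message:
    author_id: str
    team: str
    text: str


def score_message(text: str) -> tuple[float, float]:
    """Placeholder for the LLM-based scoring the article describes.

    Returns (sentiment in [-1, 1], toxicity in [0, 1]).
    """
    negative_words = {"hate", "useless", "awful"}
    toxic_words = {"bullying", "harassment"}
    words = set(text.lower().split())
    sentiment = -0.8 if words & negative_words else 0.3
    toxicity = 0.9 if words & toxic_words else 0.0
    return sentiment, toxicity


# Aggregates are keyed by team, never by author, mirroring the claim that the
# sentiment/toxicity analytics omit individual identities.
team_totals = defaultdict(lambda: {"count": 0, "sentiment": 0.0, "toxicity": 0.0})


def ingest(msg: Message) -> None:
    sentiment, toxicity = score_message(msg.text)
    totals = team_totals[msg.team]
    totals["count"] += 1
    totals["sentiment"] += sentiment
    totals["toxicity"] += toxicity


if __name__ == "__main__":
    ingest(Message("u1", "retail-ops", "The new policy rollout looks great"))
    ingest(Message("u2", "retail-ops", "I hate this rollout, it is useless"))
    for team, t in team_totals.items():
        n = t["count"]
        print(team, round(t["sentiment"] / n, 2), round(t["toxicity"] / n, 2))
```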

When Aware’s eDiscovery tool detects specific terms or phrases in a Slack message that could amount to a rules violation, its AI can identify the sender. According to Schumann’s statement to CNBC, if the AI determines that the message poses an “extreme risk,” the company has the option to give the person’s identity to human resources.

“Some of the common ones are extreme violence, extreme bullying, harassment, but it does vary by industry,” the CEO said. He added that the system would also detect instances of insider trading and similar misconduct.
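The escalation logic itself is proprietary, but the two-tier behavior Schumann describes, where routine flags stay anonymous and only “extreme risk” matches reveal the sender to human resources, might look roughly like the following hypothetical sketch. The phrase lists, risk labels, and notify_hr helper are invented for illustration.

```python
# Hypothetical two-tier flagging: routine policy matches are reported without
# identity, while "extreme risk" matches escalate the sender to HR. The phrase
# lists, risk labels, and notify_hr helper are invented for illustration.
EXTREME_RISK_PHRASES = {"threat of violence", "i will hurt you"}
POLICY_RISK_PHRASES = {"insider information", "off the books"}


def assess_message(author_id: str, text: str) -> dict:
    lowered = text.lower()
    if any(phrase in lowered for phrase in EXTREME_RISK_PHRASES):
        return {"risk": "extreme", "author_id": author_id}  # identity surfaced
    if any(phrase in lowered for phrase in POLICY_RISK_PHRASES):
        return {"risk": "policy", "author_id": None}        # stays anonymous
    return {"risk": "none", "author_id": None}


def notify_hr(finding: dict) -> None:
    # Stand-in for whatever escalation channel a real deployment would use.
    print(f"HR alert: extreme-risk message from {finding['author_id']}")


if __name__ == "__main__":
    samples = [
        ("u42", "Let's trade on insider information before the call"),
        ("u77", "This is a threat of violence, I will hurt you"),
    ]
    for author, text in samples:
        finding = assess_message(author, text)
        if finding["risk"] == "extreme":
            notify_hr(finding)
        else:
            print(f"anonymous flag: risk={finding['risk']}")
```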

Some privacy experts appear to disagree with Aware’s CEO, who told CNBC that the company’s AI models are not used for decision-making or disciplinary purposes.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” Amba Kak, executive director of the AI Now Institute at New York University, told CNBC.

“How do you face your accuser when we know that AI explainability is still immature?” asked Jutta Williams, co-founder of the group Humane Intelligence, who also spoke with the outlet. AI, she noted, does not always provide a complete picture of an incident that occurred at work.

Some state regulations, such as New York’s Senate Bill S2628, require firms to inform their employees about their digital monitoring practices. It remains unclear, however, whether the companies actually share this information with their employees.

Artificial intelligence (AI) is one of the newest ways large corporations appear to be monitoring their employees, particularly as employers push return-to-office mandates.

In February, Tesla workers in New York who label data for the automaker’s Autopilot system reportedly accused the company of monitoring their keystrokes to make sure they were paying attention. Some employees complained to Bloomberg that the monitoring discouraged them from taking toilet breaks.

A little under a year ago, an investigation found that JPMorgan Chase was keeping tabs on its workers’ calendars, phone calls, and office attendance using an internal program. One employee said the practice fostered an atmosphere of “paranoia,” “distrust,” and “disrespect.” In the most severe instances, workplace monitoring systems were linked to employee terminations.

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.