What is the Future of AI-Driven Employee Monitoring?

How much work are you getting done, and how are you performing it? Artificial intelligence is poised to answer those questions, if it isn't already. Employers such as Walmart, Starbucks, and Delta are using AI company Aware to monitor employee messages, CNBC reports.

Employers were monitoring workers long before the explosion of AI, but this technology's use in keeping tabs on employees has sparked debate. On one side, AI as an employee monitoring tool joins the ranks of other AI use cases touted as the future of work. On the other side, critics raise questions about the potential missteps and impact on employees.

How can AI be used in employee monitoring, and are there use cases that benefit both employers and employees?

Productivity tracking is at the forefront of the AI and employee monitoring conversation. Are employees working when they are on the clock? Answering this question is particularly top-of-mind for employers with people on remote or hybrid schedules.

"A lot of workers are doing something called 'productivity theater' to show that they're working when they might not be," says Sue Cantrell, vice president of products and workforce strategy at Deloitte Consulting.

AI can be used to sift through data to identify work patterns and measure employee performance against productivity metrics. "Fundamentally, the sector is about analytics and being able to process more data and understand patterns more quickly and make intelligent recommendations," Elizabeth Harz, CEO of Veriato, an insider risk management and employee monitoring company, tells InformationWeek.
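
In practice, that sifting often amounts to computing per-employee activity metrics and flagging deviations from a baseline. Below is a minimal, hypothetical sketch of that idea; the event format, scoring, and threshold are illustrative assumptions, not Veriato's actual method.

```python
# Hypothetical sketch: flag employees whose activity diverges from the group.
# Event format, scoring, and threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean, stdev

def weekly_activity_scores(events):
    """events: iterable of (employee_id, iso_week, activity_count) tuples."""
    per_employee = defaultdict(list)
    for employee, week, count in events:
        per_employee[employee].append(count)
    # Average weekly activity per employee.
    return {emp: mean(counts) for emp, counts in per_employee.items()}

def flag_outliers(scores, z_threshold=2.0):
    """Return employees whose average deviates sharply from the group mean."""
    values = list(scores.values())
    mu, sigma = mean(values), stdev(values)
    return {emp: score for emp, score in scores.items()
            if sigma and abs(score - mu) / sigma > z_threshold}

events = [("a1", "2024-W10", 120), ("a2", "2024-W10", 115), ("a3", "2024-W10", 12)]
print(flag_outliers(weekly_activity_scores(events), z_threshold=1.0))  # {'a3': 12}
```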

Veriato's customers most often use its AI-powered platform for insider risk management and user activity monitoring, according to Harz.

Insider risk is a significant cybersecurity concern. The Cost of Insider Risks Global Report 2023, conducted by the Ponemon Institute, found that 75% of incidents are caused by non-malicious insiders. "We believe using AI to help teams get more predictive instead of reactive in cyber is critical," Harz explains.
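
As an illustration of what "predictive instead of reactive" could mean in code, here is a hypothetical insider-risk scoring sketch: weighted signals roll up into a score that an analyst reviews before anyone acts. The signal names and weights are invented for the example.

```python
# Hypothetical insider-risk scoring: weighted signals, reviewed by a human.
# Signal names and weights are invented for illustration.
RISK_WEIGHTS = {
    "off_hours_logins": 0.3,
    "bulk_downloads": 0.5,
    "new_external_recipients": 0.2,
}

def risk_score(signals):
    """signals: dict of signal name -> normalized value in [0, 1]."""
    return sum(RISK_WEIGHTS.get(name, 0.0) * value for name, value in signals.items())

score = risk_score({"off_hours_logins": 0.8, "bulk_downloads": 0.9})
print(round(score, 2))  # 0.69 -- high enough to queue for analyst review
```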

Using AI to monitor workers can be about their own safety, as well as that of the company. "People have different dynamics than they did when they went to the office Monday through Friday. It doesn't mean sexual harassment has gone away. It doesn't mean hostile work environments have gone away. It doesn't mean that things that happened previously have just stopped, but we need new tools to evaluate those things," says Harz.

AI also offers employers the opportunity to engage employees on performance quality and improvement. "If you're able to align the information you're getting on how a particular employee is executing the work relative to what you consider to be best practices, you can use that to create personalized coaching tools that employees ultimately do find beneficial or helpful," Stephanie Bell, chief programs and insights officer at the nonprofit coalition Partnership on AI, tells InformationWeek.

AI-driven employee monitoring has plenty of tantalizing benefits for employers. It can tap into the massive quantities of data employers are gathering on their workforce and identify patterns in productivity, performance, and safety. "GenAI really allows for language and sentiment analysis in a way that just really wasn't possible prior to LLMs," says Harz.
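
As a rough illustration of the sentiment analysis Harz describes, the sketch below asks an LLM to label a message's tone. The llm_complete callable is a stand-in assumption for whichever chat-completion client an employer actually uses; the prompt and labels are illustrative.

```python
# Illustrative LLM-based sentiment labeling of a workplace message.
# `llm_complete` is a stand-in for a real chat-completion client.
SENTIMENT_PROMPT = (
    "Classify the workplace message below as POSITIVE, NEUTRAL, or NEGATIVE. "
    "Answer with the single label only.\n\nMessage: {message}"
)

def classify_sentiment(message, llm_complete):
    """llm_complete: callable taking a prompt string, returning the model's reply."""
    reply = llm_complete(SENTIMENT_PROMPT.format(message=message))
    label = reply.strip().upper()
    # Fall back to NEUTRAL if the model answers outside the expected labels.
    return label if label in {"POSITIVE", "NEUTRAL", "NEGATIVE"} else "NEUTRAL"

# Example with a stubbed model, just to show the call shape:
print(classify_sentiment("Thanks for covering my shift!", lambda _: "positive"))
```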

Measuring productivity seems like a rock-solid employer use case for AI, but productivity isn't always black and white. "Yes, it's easy to collect data on whether or not workers are online," Cantrell points out. "But is that really measuring outcomes or collecting data that can really help improve organizational value or benefits? That's open to question."

A more nuanced approach to measuring performance could be beneficial to both employer and employee. And enterprises are acknowledging the opportunities in moving away from traditional productivity metrics, like hours worked. Research from Deloitte Insights found that 74% of respondents recognize the importance of finding better ways to measure employee performance and value compared to traditional productivity metrics.

AI monitoring potentially has more benefits when it is used in a coaching capacity. "Where we see the real value is around using AI as a coach. When [it] monitors workers, for example, on their work calls and then [provides] coaching in the background or [uses] AI to infer skills from their daily work to suggest improvements for growth or development, or you're using AI to monitor people's movements on a factory floor to [make] suggestions for well-being," says Cantrell.

This kind of coaching tool is less about whether an employee is moving their mouse or keeping their webcam on and more about how they are performing their work and ways they could improve.

AI monitoring tools also can be used to make workplaces safer for people. If integrated into video monitoring, for example, it can be used to identify unsafe workplace behaviors. Employers can follow up to make the necessary changes to protect the people working for them.

But like its many other applications, AI-driven employee monitoring requires careful consideration to actually realize its potential benefits. What data is being gathered? How is it being used? Does the use of AI have a business case? "You should have a very clear business rationale for collecting data. Don't just do it because you can," Cantrell cautions.

Realizing the positive outcomes of any technology requires an understanding of its potential pitfalls. Employee monitoring, for one, can have a negative impact on employees. Nearly half of employees (45%) who are monitored using technology report that their workplaces negatively affect their mental health, according to an American Psychological Association (APA) survey.

The perception of being watched at work can decrease people's trust in their employer. "They feel like their employer is spying on them, and it can have punitive consequences," says Cantrell.

Employee monitoring can also have a physical impact on workers. In warehouse settings, workers can be expected to hit high productivity targets in fast-paced, repetitive positions. Amazon, for example, is frequently scrutinized for its worker injury rate. In 2021, employees at Amazon's facilities experienced 34,000 serious injuries, an injury rate more than double that of non-Amazon warehouses, according to a study from the Strategic Organizing Center, a coalition of labor unions.

Amazon has faced fines for its worker injuries from agencies like the Occupational Safety and Health Administration (OSHA) and Washington's Department of Labor and Industries. "The musculoskeletal injuries in these citations have been linked to the surveillance-fueled pace of work in Amazon warehouses by reports from the National Employment Law Project and the Strategic Organizing Center," Gabrielle Rejouis, a distinguished fellow with the Georgetown Law Center on Privacy & Technology and a senior fellow with the Workers' Rights Institute at Georgetown Law, tells InformationWeek in an email interview.

While AI may fuel workplace surveillance systems, it does not bear the sole responsibility for outcomes like this. "It's not like the AI is arbitrarily setting standards," says Bell. "These are managerial decisions that are being made by company leaders, by managers to push employees to this rate. They're using the technology to enable that decision-making."

People are an important part of the equation when looking at how AI employee monitoring is used, particularly if that technology is making suggestions that impact people's jobs.

AI tools could analyze conversations at a call center, monitoring things like emotional tone. Will AI recognize subtleties that a human easily could? Bell offers the hypothetical of a call center employee adopting a comforting tone and spending a little extra time on the phone with a customer who is closing down an account after the death of a spouse. The call is longer, and the emotional tone is not upbeat.

"That's the case where you want that person to take the extra time, and you want that person to match the emotional tone of the person on the other end of the line, not to maintain across-the-board standards," she says.

An AI monitoring system could flag that employee for failing to have an upbeat tone. Is there a person in the loop to recognize that the employee made the right choice, or will the employee be penalized?
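
One hedged way to answer that question in system design is to make flags advisory: a flagged call enters a review queue, and only a human reviewer decides the outcome. The sketch below illustrates the pattern; all class and field names are hypothetical.

```python
# Hypothetical human-in-the-loop pattern: flags are advisory, people decide.
from dataclasses import dataclass, field

@dataclass
class FlaggedCall:
    call_id: str
    reason: str          # e.g. "tone below upbeat threshold"
    duration_seconds: int

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def flag(self, call):
        # No penalty is applied at flag time; a supervisor reviews later.
        self.pending.append(call)

    def resolve(self, call_id, justified):
        self.pending = [c for c in self.pending if c.call_id != call_id]
        return "no action" if justified else "coaching follow-up"

queue = ReviewQueue()
queue.flag(FlaggedCall("c-102", "tone below upbeat threshold", 1140))
print(queue.resolve("c-102", justified=True))  # -> no action
```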

Employee monitoring bolstered by AI capabilities also has the potential to impact the way employees interact with one another. "When you have this generalized surveillance, it really chills employee activity in speech, and in the research that I've done that turns up in making it harder for employees to build trusting relationships with each other," Bell shares.

Employers could potentially use AI monitoring tools to quell workers' ability to exercise their rights. "One of the most concerning ways that electronic worker surveillance and automated management benefit employers is that it can obscure management union busting," says Rejouis. "If surveillance can find any and every mistake a worker makes, employers can use this to provide a non-union justification for firing an organizer."

The regulatory landscape for AI's use in the workplace is still forming, but that doesn't mean employers are completely free of legal concerns when implementing AI employee monitoring tools.

Employee privacy is a paramount concern. "We need to make sure that we're complying with a variety of privacy laws," Domenique Camacho Moran, a partner and leader of the employment law practice at New York law firm Farrell Fritz, tells InformationWeek.

Workers generate the data used by monitoring tools. How is their data, much of it personal, being collected? Does that collection happen only in the workplace on work devices? Does it happen on personal devices? How is that data protected?

The Federal Trade Commission is paying attention to how AI management tools are impacting privacy. "As worker surveillance and AI management tools continue to permeate the workplace, the commission has made clear that it will protect Americans from potential harms stemming from these technologies," Benjamin Wiseman, associate director of the FTC's Division of Privacy and Identity Protection, said in remarks at a Harvard Law School event in February.

With the possibility of legal and regulatory scrutiny, what kind of policies should enterprises be considering?

"Be clear with workers about how the data is being used. Who's going to see it?" says Cantrell. "Involve workers in co-creating data privacy policies to elevate trust."

Bias in AI systems is an ongoing concern, and one that could have legal ramifications for enterprises using this technology in employee monitoring tools. The use of AI in hiring practices, and the potential for bias, is already the focus of legislation. New York, for example, passed a law regarding the use of AI and automated tools in the hiring process in an attempt to combat bias. Thus far, compliance with the law has been low, according to the Society for Human Resource Management (SHRM). But that does not erase the fact that bias in AI systems exists.

"How do we make sure that AI monitoring is non-discriminatory? We know that was the issue with respect to AI being used to filter and sort resumes in an application process. I worry that the same issues are present in the workplace," says Camacho Moran.

Any link between the use of AI and discrimination opens enterprises to legal risk.

Employee monitoring faces pushback on a number of fronts already. The International Brotherhood of Teamsters, the union representing UPS employees, fought for a ban on driver-facing cameras in the UPS contract, the Washington Post reports.

The federal government is also investigating the use of employee monitoring. In 2022, the National Labor Relations Board (NLRB) released a memo on surveillance and automated management practices.

[NLRB] General Counsel Jennifer Abruzzo announced her intention to protect employees, "to the greatest extent possible, from intrusive or abusive electronic monitoring and automated management practices" by vigorously enforcing current law and by urging the board to apply settled labor-law principles in a new framework, according to an NLRB press release.

While conversation about new regulatory and legal frameworks is percolating, it could be quite some time before they come to fruition. "I don't think we understand enough about what it will be used for for us to have a clear path towards regulation," says Camacho Moran.

Whether it is union pushback, legal action by individual employees, or regulation, challenges to the use of AI in employee monitoring are probable. "It's hard to figure out who's going to step in to say enough is enough, or if anybody will," says Camacho Moran.

That means enterprises looking to mitigate risk will need to focus on existing laws and regulations for the time being. "Start with the law. We know you can't use AI to discriminate. You can't use AI to harass. It's not appropriate to use AI to write your stories. And, so we start with the law that we know," says Camacho Moran.

Employers can tackle this issue by developing internal task forces to understand the potential business cases for the use of AI in employee monitoring and to create organization-wide policies that align with current regulatory and legal frameworks.

"This is going to be a part of every workplace in the next several years. And so, for most employers, the biggest challenge is if you're not going ahead and looking at this issue, the people in your organization are," says Camacho Moran. "Delay is likely to result in inconsistent usage among your team. And that's where I think the legal risk is."

What exactly is the future of work? You can argue that the proliferation of AI is that future, but the technology is evolving so quickly, and so many of its use cases are still so nascent, that it is hard to say what that future will look like years or decades down the road.

If AI-driven employee monitoring is going to be a part of every workplace, what does responsible use look like?

The answer lies in creating a dialogue between employers and employees, according to Bell. "Are employers looking for use cases that employees themselves would seek out?" she asks.

For example, a 2023 Gartner survey found that 34% of digital workers would be open to monitoring if it meant getting training classes and/or career development paths, and 33% were open to monitoring if it would help them access information to do their jobs.

"A big part of this is just recognizing the subject matter expertise of the folks who are doing the work themselves and where they could use support," Bell continues. "Because ultimately that's going to be a contribution back to business outcomes for the employer."

Transparency and privacy are also important facets of responsible use. Do employees know when and how their data is being collected and analyzed by AI?

Consent is an important, if tricky, element of employee monitoring. Can workers opt out of this type of monitoring? If opting out is an option, can employees do so without the threat of losing their jobs?

"Most workplaces are at-will workplaces, which means an employer does not need justification for firing an employee," Rejouis points out. "This makes it harder for employees to meaningfully refuse certain changes to the workplace out of fear of losing their jobs."

When allowing employees to opt out isn't possible, say for video monitoring safety on a manufacturing floor, there are still ways to protect workers' privacy. "Data can be anonymized and aggregated so that we're protecting people's privacy," says Cantrell.
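
A minimal sketch of that anonymize-and-aggregate approach might hash identities with a salt and report only group-level counts, suppressing groups small enough to identify individuals. All names and thresholds here are illustrative assumptions.

```python
# Illustrative anonymize-and-aggregate: hash identities, report group counts.
import hashlib
from collections import Counter

SALT = "rotate-this-secret"  # hypothetical; a real system would manage this securely

def pseudonymize(employee_id):
    return hashlib.sha256((SALT + employee_id).encode()).hexdigest()[:12]

def aggregate_by_zone(observations, min_group_size=5):
    """observations: iterable of (pseudonym, zone). Returns zone counts,
    suppressing groups too small to keep individuals anonymous."""
    counts = Counter(zone for _, zone in observations)
    return {zone: n for zone, n in counts.items() if n >= min_group_size}

obs = [(pseudonymize(e), z) for e, z in [("e1", "dock"), ("e2", "dock")]]
print(aggregate_by_zone(obs, min_group_size=2))  # {'dock': 2}
```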

While AI can be implemented as a powerful monitoring tool, the technology itself needs regular monitoring. Is it measuring what it is supposed to be measuring? Has any bias been introduced into the system? Is it actually providing benefits for employers and employees? "We always need some human judgment involved and awareness of what the potential downsides and bias that the AI is bringing us could be," says Cantrell.

Most enterprises are not building their own AI systems for worker monitoring. They are working with third-party vendors that offer employee monitoring tools powered by AI. Part of responsible use is understanding how those vendors are managing the potential risk and downsides of their AI systems.

"The way we're approaching it at Veriato is, just as you can imagine, being extremely thoughtful about what features we release in the wild and what we keep with beta customers and customer advisory panels that just really test and run things for a longer period of time than we would with some other releases to make sure that we have positive experiences for our partners," Harz shares.

Any innovation boom, AI or otherwise, comes with a period of trial and error. Enterprise leadership teams are going to find out what does and does not work.

Bell emphasizes the importance of keeping employees involved in the process. While many organizations are rushing to implement the buzziest tools, they could benefit from slowing down and identifying use cases first. "Start with the problem statement rather than the tool," she says. "Starting with the problem statement, I think, is almost always going to be the fastest way to identify where something is going to deliver anyone value and be embraced by the employees who would be using it."

Cantrell considers the use of AI in employee monitoring a goldmine or a landmine. "It can bring dramatic benefits for both workers and organizations if done right. But if not done right and not done responsibly, workforce trust can really diminish, and it can be what I call a landmine," she says.
