Some people see their managers as lacking intelligence, but what if that “manager” is artificially intelligent? What if the manager is an app? A recent study suggests AI may be ready to serve in management roles in certain situations — but the jury is still out on how fair it can be to workers.

The study, conducted by Lindsey Cameron, professor at the Wharton School of the University of Pennsylvania, looked at an existing example of workers overseen by an AI-powered manager: ride-hail drivers dispatched by apps such as Uber and Lyft. Beyond scheduling and payments, algorithmic management can cover a range of management tasks, Cameron observed in a related interview published by Wharton — “anything to do with hiring, firing, evaluating or disciplining workers.”

While mechanized management may seem inhuman and lacking in empathy, it works well for some roles. For example, ride-hail drivers, for the most part, actually enjoy working with their AI-driven apps.

Unlike human managers, AI-driven apps are in constant communication with workers. “In a typical shift, a ride-hail driver might only complete a dozen rides, but will have more than a hundred unique interactions with the algorithm,” she stated.

AI-driven managers also deliver the flexibility and responsiveness seen in gig work. “Surprisingly, many workers report liking and finding choice while working under algorithmic management,” Cameron wrote in her study. “When you talk to most people who are doing ride-sharing or other app-based work, most of them enjoy it or at least think it’s better than their alternatives,” she added in the Wharton interview.

Cameron’s findings draw on a seven-year qualitative study of ride-hail drivers, a workforce that has been managed by algorithms throughout that period. She found that these workers use two sets of tactics in the course of their work. “In engagement tactics, individuals generally follow the algorithmic nudges and do not try to get around the system. In deviance tactics, individuals manipulate their input into the algorithmic management system.”

Engagement and deviance tactics “both elicit consent, or active, enthusiastic participation by workers to align their efforts with managerial interests, and both contribute to workers seeing themselves as skillful agents,” she observed in the study. “However, this choice-based consent can mask the more-structurally problematic elements of the work,” she cautioned, calling this a “good-bad job” scenario.

For warehouse work that is algorithmically managed, for example, “workers are often pushed to their physical and emotional limits” without the empathy of a human manager. “How do you reason with an algorithm?” Cameron asked.

“Think about Amazon warehouse workers or the person at the checkout line at your grocery store,” she said. “There’s probably an algorithm counting how fast they’re scanning items and evaluating their performance. Think about the emails and text messages you get asking you to rate a worker you interacted with. And let’s not forget how we are asked to tip now after every service transaction—you can be sure that information is being recorded and used as a performance indicator.”

The advent of algorithmic managers extends well beyond manual-labor roles. “Algorithms are becoming embedded in work across professions, industries, skill levels, and income levels,” Cameron pointed out.

White-collar and professional workers are also increasingly subject to algorithmic management. “We’re seeing a broad sweep of new tools, technology, and digitization under the future of work,” said Cameron. Look no further than the surveillance of at-home workers during the Covid period, “with the introduction of tools that could track your keystrokes or whether you were active at your computer or Bloomberg terminal. If you do any kind of customer-facing job, an algorithm keeps track of your ratings and reviews. There are algorithms that scrape your email to make sure you’re not committing corporate espionage or telling offensive jokes.”

While the advance of AI into management roles is inevitable, Cameron urged keeping human oversight in all AI-driven actions. Importantly, as noted in her study, worker consent is needed. “Choice-based consent illuminates the importance of constant, even if confined, choice as a mechanism that keeps workers engaged, especially in jobs considered to be of poor quality,” she wrote.

“You’ve got to have a human in the loop. You can’t have hard and fast evaluation limits. In some companies, an algorithm can fire you if you’re not meeting your quota. Not only should that not happen, but there needs to be an appeals process when decisions are made.”

“Basically, don’t let the algorithm be stupid,” she urged.


