
Is It Too Late to Safeguard My Data?

After we endow robots with human-like senses, what will they know about their human workmates…and how will that knowledge be used?

Is privacy a goner?
The consensus is that robots will take over human jobs by the millions; well before that happens, however, workplace privacy may already be long gone.

Workplace privacy is already being threatened by privacy-invasive monitoring such as closed-circuit video monitoring, Internet monitoring and filtering, e-mail monitoring, instant-message monitoring, phone monitoring, location monitoring, personality and psychological testing, and keystroke logging.

“Smart” robots are soon to pile on.

As robots gain eyes, ears, and awareness, and AI romps through the data they collect, a new world of work will emerge. Some say it’s here already.

Will the workplace become a zone of trepidation and caution for human workers, where their every word and action is observed, overheard, captured, and analyzed? What will robots know…and what will algorithms and analytics pull from that digital deluge?

Mathematician and data scientist Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, is a data skeptic who’s not sold on algorithms: “Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair.” Her TED Talk was aptly titled The Era of Blind Faith in Big Data Must End.

Some disagree with her.

Robots as guardians?
On the one hand, an incident like Kobe Steel getting caught “falsifying data about the strength and durability of some aluminum and copper products” might have been nipped in the bud had robots been on the job, using their AI powers to snoop for falsified metals. Downstream, Toyota, Honda, Mitsubishi, and all the other trusting Kobe customers would have been spared the grief of second-rate goods. Similarly, robots might have been heroes in the Volkswagen emissions scandal.

In the future, maybe all contracts for manufactured goods will have to be accompanied by a robot audio transcript and surveillance footage of the manufacturing process and its human workers, which a customer’s AI will then scan and analyze for wrongdoing. Can we imagine a future where robots will have to certify all manufactured goods and products?

Humans intent on doing bad to other humans might be a bit more reluctant to attempt deceptions if their every action were scrutinized by a “smart” robot. Not to mention the potential billions of dollars in savings from robots thwarting human chicanery.

On the other hand, a robot overhearing and observing one worker confessing a bad marriage and admitting infidelities to a workmate might find that knowledge subpoenaed in divorce court.

Now imagine all the collected data from those countless millions of workplaces sliced and diced by algorithms from good guys and bad guys alike…and then used for good or ill or something in-between.

What would become of the workplace?

Uncertainty, fear, and worry in a workplace are hardly conducive to team bonding, high productivity, or corporate goodwill.

McKinsey’s James Manyika, in the recent What is the future of work?, comes down on the beneficial side of workplace automation and AI:

“There are all kinds of performance improvements from reducing error rates, being able to do predictions better, and being able to discover novel, new solutions or insights. The benefits in a use-case sense to businesses are hard to disregard and ignore.

“That’s going to drive and encourage businesses to adopt these techniques—and they are. The benefits to the economy are also clear, because we know that—associated with most automation technologies in the past, and even today, and in the future—automation of these systems improves productivity.”

Listing the hypothetical good, bad, and ugly in robot-workplace incidents could go on ad infinitum; daily news reporting is already rife with it. Suffice it to say that the robots are coming to workplaces everywhere—many are already there—and everyone is going to have to work through the consequences.

New realities in the digital workplace

Littler’s Garry Mathiason, co-chair of the law firm’s Robotics, Artificial Intelligence (AI) and Automation practice group, is flat-out blunt about workplace privacy: “The advances in machine learning and collaborative robotics have extinguished real ‘privacy’.”

Sobering.

Having watched workplaces evolve over the decades, including a stint at Singularity University to sharpen his tech skills, the veteran class-action litigator was quick to see the need for his new practice group. “We call our group a practice group,” he explains, “because we are identifying practical ways that transformative 21st century technology can enter the workplace and remain compliant with workplace laws enacted long before this technology existed.”

Coincidentally, the Supreme Court will soon be doing just that in Carpenter v. United States, No. 16-402, when it tries to square the 18th century’s Fourth Amendment, which bars unreasonable searches, with “collecting vast amounts of data from cellphone companies showing the movements of a man they say organized…robberies.”

To give everyone an early heads-up on AI and robotics in the workplace, Mathiason and his mates recently penned a book on what to look for and how to react as technology begins to disrupt employees, employers, and work. Legal Compliance Solutions for the Transformation of the Workplace Through Robotics, Artificial Intelligence, and Automation is the title, and two chapters home right in on “privacy concerns” in what they call the “BORDERLESS WORKPLACE.”

Interestingly, what robots and AI know might, according to Mathiason, be a game saver for humans: “Ethical and legal standards are the only limitations on the reach of technology into the most private activities in the workplace, home, and bedroom. Ironically, AI systems that audit and enforce privacy rules and ethical standards may be the last line of defense of human privacy.”

“From Siri and Alexa to facial recognition software,” says Mathiason, “there is little that can remain hidden. 

“Today’s manufacturing employer at a minimum needs to: (1) Adopt and circulate a privacy policy providing realistic expectations in the workplace; (2) Conduct a privacy review of robots and AI used in the workplace, limiting access to information that exceeds what is allowed under the privacy policy and applicable law, as well as providing full disclosure of what is not private due to the use of the technology; and (3) Secure personnel and personal information lawfully being collected through robotics and/or AI systems—while the gathering of private and personal confidential information may be permitted, the employer accepts a duty to maintain its privacy from third parties.” 

All that has the makings of a massive undertaking for any corporate HR department to manage adequately, for lawmakers to legislate, and for courts to judge. The tentacles of what robots will know and what AI will analyze seem ready to reach into every facet of work and life.

Robots keeping humans healthy
Many robots aren’t in factories making stuff; rather, they are in healthcare facilities and retirement homes, tending to patients or serving as companions to elderly residents. What do those robots know about the people in their care?

Hanging out with grandma all day, every day, robots will be confidants that can easily become privy to confidential information, inner secrets, or intimate knowledge that should stay private. To an algorithm, it’s all just data, not the passcode to granny’s bank account.

What should an HR department of an eldercare facility say to caregivers who regularly use robots as therapy and as companions to elderly residents?

Asked to weigh in on how an HR director might direct his or her staff to handle matters of privacy in a healthcare setting, Mathiason is quick to reply that there are many considerations, but offers the top three:

  • Learn applicable safety rules and standards—compliance will be essential;
  • Make sure you are up-to-date on training regarding how to use domestic care robots, and don’t use the robot for tasks that are not approved by the manufacturer or integrator; and,
  • Plan to use collaborative robots or prepare to become unemployed and obsolete—a major financial firm recently predicted that by 2025, every household will have a domestic robot.

“The prospect of caregivers working side by side with domestic robots is a certainty,” adds Mathiason, “at least until the robots no longer need human help. Studies in Japan have found that despite initial concern, elderly robotics users quickly develop deep attachments to their metal companions, often stronger than their relationship with human caregivers.”

With many of the world’s populations aging fast and massive healthcare needs mounting for those elderly populations, robot assistance is inevitable. Can privacy be protected when AI can touch anyone’s medical history and robots are best buddies with millions of the elderly?

Hunting the elusive
What will robots know? Virtually…everything! And all of that everything is hackable…or has been hacked…or will soon be hacked.

Meanwhile, the human response to it all, in the form of potential solutions, is only just beginning to take shape.

As AI technology outraces the means to control or even to understand it, humanity may soon be faced with a tough question: How important is privacy?

Humans may be out of a job because of robots, but they are not out of the voting booth.