
The Dark Side of AI and Big Data in 2018

We often talk about how Big Data and Artificial Intelligence are changing processes in our industry for the better. From scanning resumes to capture strong candidates more quickly to using data to start meaningful conversations with candidates, a lot of positive things are happening with Big Data and AI technology in the staffing industry.

But, as with all technology, the good always comes with at least some bad.

In our industry, we are embracing simple algorithms and already using bots in productive ways. But, just as in Disney cartoons, there is always a villain lurking in the background, finding ways to use an otherwise good thing in the worst way possible.

Here are a few stories from 2018 that show us the dark side of these powerful technologies.

 

When Big Data and AI Go Wrong 

 

Self-Driving Car Kills Pedestrian

One of Uber's self-driving cars (with an emergency backup driver behind the wheel) struck and killed a pedestrian in Arizona, prompting the company to suspend the testing efforts that were underway in other cities across the country. As of this writing, it hasn't been determined whether the crash in Arizona will lead other companies or state regulators to slow the rollout of self-driving vehicles on public roads.

 

Researchers Train AI to Be Psycho

A team of AI researchers at MIT strategically fed data to an AI they named "Norman," after Norman Bates in the classic horror film Psycho, in order to train it to be, well... psycho. Norman was "fed" only short captions describing images of people dying, found on the Reddit internet platform. The researchers then gave it an inkblot test; where a traditionally trained AI would see "two people standing close to each other," Norman saw in the same spot of ink "a man who jumps out a window."

"There is a central idea in machine learning: the data you use to teach a machine learning algorithm can significantly influence its behavior." - Pinar Yanardag, Manuel Cebrian and Iyad Rahwan, MIT

It makes us think twice about what can happen if this technology gets into the wrong hands.
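To make that idea concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn; none of this is the MIT team's actual code, model, or data) of how the same learning algorithm, fed differently skewed captions, reads the same ambiguous input in opposite ways:

```python
# Minimal, hypothetical sketch: not the MIT team's code or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train_caption_classifier(captions, labels):
    """Bag-of-words Naive Bayes classifier over short image captions."""
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(captions, labels)
    return model

# A balanced training diet (all captions and labels are made up)
standard = train_caption_classifier(
    ["two people standing close to each other",
     "children playing near a window",
     "a couple holding hands in the park",
     "a man attacks another man with a knife",
     "a person falls from a building"],
    ["benign", "benign", "benign", "violent", "violent"],
)

# A Norman-style diet: mostly captions describing people dying
norman = train_caption_classifier(
    ["a man jumps out a window",
     "two people shot dead near a window",
     "a man falls to his death",
     "a flower in a vase",
     "a bird on a branch"],
    ["violent", "violent", "violent", "benign", "benign"],
)

# The same ambiguous "inkblot" caption gets opposite readings
ambiguous = ["two people standing near a window"]
print("standard model sees:", standard.predict(ambiguous)[0])  # benign
print("norman model sees:  ", norman.predict(ambiguous)[0])    # violent
```

Both models run the identical algorithm; only the training data differs, which is exactly the point the Norman experiment was built to make.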


Chinese Government Plans to Launch Its Social Credit System in 2020

Like something out of a Black Mirror episode, China is developing the Social Credit System (SCS) to rate the trustworthiness of its 1.3 billion citizens. It is collecting data on things like credit history, personal characteristics, and fulfillment capacity (the ability to adhere to contractual obligations) and analyzing it in a system that ranks citizens and creates scores that can affect things like loan applications, renting a car, and visa applications. As of right now, participation in this system is voluntary, but in 2020 it will be mandatory.
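The real system's formula has not been made public, so any code can only gesture at the general shape. As a purely hypothetical sketch, a weighted-sum scorer over the kinds of inputs described above might look like this (every field name and weight here is invented):

```python
# Purely illustrative: the SCS's real inputs, weights, and formula are not
# public, so every field name and weight below is invented.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    credit_history: float        # 0.0-1.0, e.g. loan repayment record
    fulfillment_capacity: float  # 0.0-1.0, adherence to contractual obligations
    personal_traits: float       # 0.0-1.0, the vaguest input of all

# Hypothetical weights; the real weighting is unknown
WEIGHTS = {
    "credit_history": 0.5,
    "fulfillment_capacity": 0.3,
    "personal_traits": 0.2,
}

def social_score(record: CitizenRecord) -> int:
    """Collapse heterogeneous personal data into a single 0-1000 score."""
    raw = (WEIGHTS["credit_history"] * record.credit_history
           + WEIGHTS["fulfillment_capacity"] * record.fulfillment_capacity
           + WEIGHTS["personal_traits"] * record.personal_traits)
    return round(raw * 1000)

# A low enough score could then gate loans, car rentals, or visas
applicant = CitizenRecord(credit_history=0.4, fulfillment_capacity=0.9,
                          personal_traits=0.7)
print(social_score(applicant))  # 610 under these made-up weights
```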

 

AI Creates Believable Fake Photos 

Nvidia, the big-name computer chip maker that is investing heavily in artificial intelligence research, is developing software that makes very realistic fake celebrity photos. Researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same, but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects. So why be nervous? With so much attention on what constitutes fake media these days, we could soon face an even wider range of fabricated images than we do today.

“The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” said Tim Hwang, director of the Ethics and Governance of Artificial Intelligence Fund.
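The technique behind Nvidia's system is widely reported to be a generative adversarial network (GAN): two networks trained against each other, one forging samples and one learning to spot the forgeries. Here is a minimal toy sketch in PyTorch (not Nvidia's code) that fakes one-dimensional Gaussian samples rather than celebrity faces:

```python
# A toy sketch of the adversarial setup behind systems like Nvidia's. It
# learns to fake samples from a 1-D Gaussian rather than celebrity faces;
# the two-player training loop, not the scale, is the point.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a fake "sample"
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake)
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0  # "real" data: N(4, 1), standing in for real photos
    fake = generator(torch.randn(64, 8))

    # Train the discriminator to tell real from fake
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the (just-updated) discriminator
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The mean of the fakes should drift toward 4.0 as the forger improves
print(generator(torch.randn(1000, 8)).mean().item())
```

Nvidia's published approach scales this same tug-of-war up to high-resolution photographs by progressively growing both networks during training.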

 

IBM’s Watson Supercomputer Recommended ‘Unsafe and Incorrect’ Cancer Treatments

Watson for Oncology is software that uses AI algorithms to recommend cancer treatments for individual patients. Documents released this summer showed the danger of trusting this software completely, and doctors have voiced concerns over the diagnoses and treatments it suggests. In one example, a 65-year-old man diagnosed with lung cancer also appeared to have severe bleeding. Watson reportedly suggested the man be administered both chemotherapy and the drug bevacizumab. But bevacizumab can lead to "severe or fatal hemorrhage," according to a warning on the medication, so prescribing it to a patient who was already bleeding could have proved fatal.

 

In reading all of these stories, you might think that advancing AI and Big Data technology will lead to a bleak and dystopian future, but that is simply not true. For every story about an unsafe cancer treatment recommendation, there will be another about AI predicting Alzheimer's years before a diagnosis. We as a society should tread carefully with the power technology like this gives us, but despite some of the scary stuff we may read in the headlines, it's not as bad as one might think.

