Every few years, the technology world unearths a concept that will eventually do one of two things: fall flat on its face, costing investors millions (if not billions) of dollars; or gain enough traction and widespread adoption to catapult us into a new era. Generally speaking, the former fate is less a matter of misguided thinking than of a market unable to do what the technology asks of it.
Enter artificial intelligence. Alexa, Albert, Watson, Einstein. They have arrived to offer us a next-generation world previously imagined only by Hollywood. Rivaled only by virtual reality in terms of its Kim Kardashian-like status for geeks, AI is currently experiencing some mega-hype – as well as criticism – for its potential to eliminate bias, streamline workflow, and ultimately shift the way we work.
So what’s true and what’s not? Along with widespread popularity come your typical garden-variety misconceptions. Fret not. AI won’t save us and it won’t eradicate us either.
Let’s get to the bottom of it, shall we? Here are five misconceptions about artificial intelligence we can finally put to rest:
Misconception #1 – AI is a new concept
AI is getting a tremendous amount of airtime of late, particularly as giant tech firms rush to jump on the AI bandwagon. However, a Google Trends comparison of AI, Donald Trump, the NFL, and Kim Kardashian shows interest in AI holding steady since 2004. The point being: while AI is certainly on a steady incline and will likely remain at the center of technology-product roadmaps for the foreseeable future, the idea itself has captivated audiences for a very long time, just like Donald Trump, the NFL, and Kim Kardashian. And for the record, it didn’t need to get robbed at gunpoint, take home a trophy, or win an election to stake that claim.
Misconception #2 – AI has a conscience
Patrick Liang, chief architect at technology company AirPR, has spent the past five years building technology based on AI and machine learning. His feedback on the latest hype? “One of the greatest misconceptions about artificial intelligence is that it can think for itself, that it has some sort of conscience. On the contrary, and for the foreseeable future, humans control the inputs and define the specifications.”
In other words, says Liang, “The world isn’t in danger of being overtaken by robots who have somehow designed themselves into existence.”
Misconception #3 – AI can’t be smarter than the people who made it
This may seem counterintuitive given the point above. While AI does not have a conscience, and cannot “will” itself into being, that does not mean it can’t outsmart humans in certain circumstances. IBM’s Deep Blue beat reigning world chess champion Garry Kasparov in 1997 – the first time a computer defeated a world champion in a match – and since then machines have only gotten faster and smarter.
In fact, Matthew Lai, a computer science researcher at Imperial College London, recently published a master’s thesis in which a machine-learning system called Giraffe taught itself to play “at the International Master level of chess” in just 72 hours.
Realistically, it’s only a matter of time before systems like Giraffe move beyond chess and are applied to other complex problems that require human strategic thinking.
Misconception #4 – AI will take away our jobs
This is likely the loudest battle cry of Luddites across the world. Has AI changed our lives and our jobs? Of course. Will it continue to stretch humanity in a way that is uncomfortable at times, as we attempt to shift the nature of work and re-educate the job market to acquire relevant skills? Definitely.
Yet a 2014 Pew Research Center survey found expert opinion relatively split on whether AI would be a net job creator or job destroyer. Most in the latter camp cited a “lack of relevant education for necessary job skills” as the culprit. In essence, it isn’t AI destroying jobs so much as our failure to re-imagine the education system to appropriately equip future job seekers.
In the survey, JP Rangaswami, chief scientist for Salesforce.com, offered several arguments against AI as a job destroyer, including this: “Some classes of jobs will be handed over to the ‘immigrants’ of AI and robotics, but more will have been generated in creative and curating activities as demand for their services grows exponentially and barriers to entry continue to fall. For many classes of jobs, robots will continue to be poor labor substitutes.”
Misconception #5 – AI will be adopted in the near term
Those who prefer slow, iterative evolution to rapid disruption will be happy to know that revolutions take time. Compare the landline (nearly 40 years to widespread adoption) with the smartphone (less than a decade). AI won’t be fully adopted in the near term; it is likely five to 10 years away from widespread adoption, even after more than a decade on our radar.
In fact, according to the Gartner Hype Cycle for Emerging Technologies, AI currently sits at the “peak of inflated expectations” – which is to say, no one knows exactly how it will all shake out.
No matter your viewpoint, AI is certainly here to stay, and we can certainly be grateful for advancements in technology and the opportunity innovation affords.
Correct Perception – AI can help fight fraud and help credit unions better understand their members’ needs and preferences
To empower credit unions in the fight against fraud, CO-OP just announced that COOPER Fraud Analyzer, an advanced data-driven fraud mitigation tool, has entered pilot testing with four participating credit unions.
COOPER Fraud Analyzer uses advanced rules and decisioning to evaluate transactions based on type, amount, speed and other attributes, and then instantly detect anomalies, such as new account fraud, in-branch teller fraud, and fraudulent check deposits. By quickly identifying questionable activity and reporting it to credit unions, COOPER Fraud Analyzer helps protect account holders and build member trust in their credit union.
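To make the rules-and-decisioning idea concrete, here is a minimal sketch of rule-based transaction screening. Everything in it – the thresholds, the field names, and the `flag_anomalies` helper – is invented for illustration; it is not CO-OP’s actual rule set or API, just the general pattern of scoring transactions on type, amount, and speed and flagging anomalies.

```python
from datetime import datetime, timedelta

# Hypothetical rule thresholds - illustrative only, not CO-OP's actual rules.
AMOUNT_LIMIT = 5000                      # flag unusually large single transactions
VELOCITY_LIMIT = 3                       # flag a 4th transaction within the window
VELOCITY_WINDOW = timedelta(minutes=10)  # lookback window for the velocity rule
RISKY_TYPES = {"new_account_transfer", "remote_check_deposit"}

def flag_anomalies(transactions):
    """Return transactions that trip at least one rule, with the reasons attached.

    Each transaction is a dict with 'account', 'type', 'amount', and 'time' keys.
    """
    flagged = []
    history = {}  # account -> list of transactions already seen
    for tx in sorted(transactions, key=lambda t: t["time"]):
        reasons = []
        # Rule 1: single transaction far above the normal amount.
        if tx["amount"] > AMOUNT_LIMIT:
            reasons.append("large_amount")
        # Rule 2: transaction type associated with common fraud patterns.
        if tx["type"] in RISKY_TYPES:
            reasons.append("risky_type")
        # Rule 3: too many transactions on one account in a short window.
        recent = [t for t in history.get(tx["account"], [])
                  if tx["time"] - t["time"] <= VELOCITY_WINDOW]
        if len(recent) >= VELOCITY_LIMIT:
            reasons.append("high_velocity")
        history.setdefault(tx["account"], []).append(tx)
        if reasons:
            flagged.append({**tx, "reasons": reasons})
    return flagged
```

In a real system these rules would be tuned per institution and combined with statistical models of each member’s normal behavior; the point of the sketch is simply that “advanced rules and decisioning” reduces, at its core, to evaluating each transaction against a set of conditions and reporting the ones that match.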