
6 Facts About Artificial Intelligence That Nobody Told You About

February 5, 2019


Everyone has heard the phrase “artificial intelligence,” or “AI.” AI has gotten a lot of publicity in the last few years, as we talk about the “algorithms” Amazon, Google, and Twitter use to identify likely customers for advertising or potentially abusive Twitter accounts. But while everyone has heard the phrase, there are a number of ways in which people generally don’t understand what AI really is or what it really does. Let’s run through a few of them.

1. Artificial Intelligence Isn’t Really New

Artificial Intelligence in computer science is defined as:

the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Since it has become a popular topic again, people tend to imagine AI is new, but in fact what we now call AI was one of the first problems to which the new general-purpose computers of the 1950s were applied. This started when Allen Newell, Herbert Simon, and Cliff Shaw developed first the Logic Theorist and then the General Problem Solver (GPS), two programs that were intended to simulate Newell and Simon’s understanding of human reasoning. Their intention from the first was to use a computer to perform the same reasoning tasks as humans do, by simulating the way humans perform those tasks. At the time (1955-1960) Herbert Simon predicted that chess programs would beat all humans within 10 years.

It didn’t work out that way. It would take about 40 more years, until Deep Blue defeated Garry Kasparov in 1997, for that prediction to come true.

Commander Data

2. Artificial Intelligence is not what you think

There is no Commander Data now or on the horizon, no Terminator, not even the robotic Julie Newmar of the ’60s sitcom My Living Doll. And neither Siri nor Alexa really understands what you say, as convenient as it is for them to respond to spoken commands.

While they can respond to limited stimuli in limited ways, they aren’t doing what the barista does when you order an almond-milk latte at Starbucks.

Or, at least we don’t think they are. One of the problems AI researchers have is that no one really understands how humans do what they do either. AI that really does think and reason as humans do is called “strong AI”, and pretty much ever since Newell, Shaw, and Simon there have been arguments about whether strong AI is even possible, and how we’d know it if we built it.

Among the most famous is Alan Turing’s “imitation game”, now known as the Turing Test. Turing’s idea was that if you couldn’t tell the difference between a human and a chatbot in an extended chat, you must assume the computer is just as “conscious” as the human.

Other people aren’t convinced. John Searle’s “Chinese Room” thought experiment argues that “brains cause minds” and that “syntax does not suffice for semantics”: computers don’t have actual brains, and what they process is only syntax.

There is a gigantic body of argument about this, but the bottom line here is that AI as it stands today isn’t “intelligent” at all.

3. What AI Does is Model Human Processes

Basically, ever since the General Problem Solver, artificial intelligence research has focused on developing computer programs that can perform tasks similar to what humans do, whether the programs are really “intelligent” or not. This generally means doing one of five things:

3.1 Following Complicated Decision Rules

The last big Artificial Intelligence boom in the ’80s got started with expert systems. In an expert system, a human developer working with a human expert on some particular topic develops a collection of IF-THEN rules; the expert system then applies these rules to new inputs to come to conclusions. For example, a medical diagnostic expert system might have rules like:

IF HASFEVER(patient) AND HASCOUGH(patient) THEN
diagnose “Patient has a cold. Take two aspirin and call tomorrow if you don’t feel better.”

Using systems of rules like this at much higher levels of complexity, expert systems could often perform as well as human experts. Rule-based components survive today in places like self-driving cars, though modern autonomous vehicles combine such rules with many other AI techniques.
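For illustration, here’s a minimal sketch of such a rule engine in Python. The rules, predicates, and wording are invented for this example, not taken from any real medical system; production expert systems held thousands of rules:

```python
# A minimal rule-based "expert system" sketch: each rule pairs an IF-part
# (a predicate over the patient's facts) with a THEN-part (a conclusion).

RULES = [
    (lambda p: p["fever"] and p["cough"],
     "Patient has a cold. Take two aspirin and call tomorrow if you don't feel better."),
    (lambda p: p["fever"] and not p["cough"],
     "Fever without cough. Recommend a follow-up visit."),
]

def diagnose(patient):
    """Fire the first rule whose IF-part matches the patient's facts."""
    for condition, conclusion in RULES:
        if condition(patient):
            return conclusion
    return "No rule matched; refer to a human expert."

print(diagnose({"fever": True, "cough": True}))
```

Real systems added features like certainty factors and chains of rules that fire other rules, but the core idea is this simple match-and-conclude loop.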

3.2 Identifying Probable and Improbable Choices

When you talk to Alexa, what is really happening first is that the sound waves picked up by the microphone go to a computer program that has been “trained” to recognize words in those sounds, and then give an internal representation of the sounds as words to another component.

The trick is that humans don’t make very predictable sounds, but other humans understand them anyway. (For an example of this, pick up a smartphone and try its “dictation” feature.) Humans are actually very good at making approximate judgments, which is what recognizing speech requires. There are two basic ways computers can do this.

The first is using a Hidden Markov Model. Basically, this models speech as a big collection of sequences of words, along with the likelihood of a word being followed by other words. Looking at a sentence like “My cat eats mice.”, a Markov model would start with the word “My”. The next word is probably a noun, and sure enough “cat” is a noun. Then the most probable next words will be verbs, and “eats” is a verb. That’s followed by a direct object, “mice”, and bingo: we’ve recognized the sentence.

On the other hand, if you said “My blue eats mice,” the Markov model would recognize that “My blue” and “eats mice” are very improbable, and say “Sorry, I don’t know that one.”
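The word-sequence idea can be sketched with a toy bigram model in Python. The three-sentence “corpus” below is invented for illustration; real recognizers train on millions of sentences and use probabilities rather than raw counts:

```python
# A toy bigram (Markov) model: score a sentence by how many of its
# adjacent word pairs were ever seen in a tiny training corpus.

from collections import Counter

corpus = [
    "my cat eats mice",
    "my dog eats food",
    "the cat eats mice",
]

# Count every adjacent word pair, including a start-of-sentence marker.
bigrams = Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def score(sentence):
    """Count how many of the sentence's word pairs were seen in training."""
    words = ["<s>"] + sentence.lower().split()
    return sum(1 for pair in zip(words, words[1:]) if bigrams[pair] > 0)

print(score("My cat eats mice"))   # → 4 (every pair seen: plausible)
print(score("My blue eats mice"))  # → 2 ("my blue", "blue eats" unseen)
```

A real recognizer would convert these counts into probabilities and reject a candidate sentence whose probability falls below some threshold, which is why “My blue eats mice” earns a “Sorry, I don’t know that one.”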

The other popular approach is using neural networks, which we’ll talk about next.

3.3 Categorizing Inputs

There’s a step in this process we slid past, and that’s how Alexa turns the complicated wave forms picked up by its microphone into recognizable components of speech: how it turns a noise into a language element. The common way this is done is by using a neural network. Neural networks are one of the few areas in which AI actually tries to model the physical world: a neural network is a simplified model of how networks of biological neurons interact.

Neural networks are basically learning algorithms. You feed them inputs, and then let the system know if the outputs are what you expected. Each time you tell the network it’s getting warmer or getting colder, the network adjusts some numbers in the model so it will do better the next time.

Neural networks are used in many other places, such as in computer vision and big-data analysis, but the general effect is always the same: you train the neural network by showing it known inputs and praising or spanking it until it performs as you would expect.
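The warmer/colder training loop can be sketched with a single artificial neuron, a perceptron, learning the logical AND function. Real networks have millions of adjustable weights and more sophisticated update rules, but the adjust-after-feedback idea is the same:

```python
# A single artificial neuron (perceptron) learning logical AND.
# After each wrong answer, the weights are nudged toward the target.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # adjustable weights, one per input
b = 0.0          # bias term
lr = 0.1         # learning rate: how big each nudge is

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)   # the "warmer or colder" feedback
        w[0] += lr * error * x[0]     # nudge each weight toward the target
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

Deep networks stack thousands of such units in layers and use calculus (backpropagation) to decide which weights to nudge, but conceptually it is still praise-and-adjust until the outputs match the training data.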

3.4 Generating Outputs to Fit Models

Facial recognition is one of the areas that has a lot of new uses and where categorization adds value. Starting with human faces as input, it develops a model of faces that “expects” the nose between the eyes and mouth, “sees” blue eyes, and so forth. Once the recognition algorithm is sufficiently well trained, it can look at a stored photograph, compare it to another photo or live video of a person’s face, and reasonably and reliably determine whether they are the same person.
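In practice, the comparison step often works on “embeddings”: a trained model turns each photo into a vector of numbers, and matching reduces to measuring how close two vectors are. Here’s a sketch of that final step; the vectors are invented placeholders standing in for real model output, and the threshold is hypothetical (a real system tunes it on labeled data):

```python
# Face matching as vector comparison: two photos of the same person should
# produce nearby embedding vectors; different people should not.

import math

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

stored_photo = [0.91, 0.10, 0.33, 0.55]  # hypothetical embedding of an ID photo
live_capture = [0.89, 0.12, 0.30, 0.57]  # hypothetical embedding from the camera
someone_else = [0.05, 0.80, 0.62, 0.11]

SAME_PERSON_THRESHOLD = 0.95  # hypothetical; tuned on labeled data in practice

print(cosine_similarity(stored_photo, live_capture) > SAME_PERSON_THRESHOLD)  # True
print(cosine_similarity(stored_photo, someone_else) > SAME_PERSON_THRESHOLD)  # False
```

The hard part, of course, is the neural network that produces good embeddings in the first place; once it exists, the match itself is just arithmetic like this.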

As we discussed above, this is basically done using a model of possible inputs, in this case a model of what faces actually look like. But a model can be run the other direction as well.

Nvidia recently published a video demonstrating how very realistic-looking human faces can be generated using a model of what faces look like. The model was trained on photographs of real people, but no actual person’s face is used as input when generating: the AI faces it produces belong to no one at all!

3.5 Relentless Search

One of the earliest real successes for artificial intelligence was in solving puzzles. It turns out that many puzzles or games — anything from tic-tac-toe to the Japanese game of Go — can be solved by simply searching through all the possible moves and then choosing the best ones. Of course, as a game becomes more complicated, the number of possible moves gets very large. The recent successes with chess and Go are largely the result of computers getting much faster, and of companies like IBM and Google being willing to spend millions on game-playing research.
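The brute-force idea can be shown on tic-tac-toe, which is small enough to search completely. This is a minimax sketch: try every legal move, recurse, and assume both players play their best. Chess and Go need aggressive pruning and enormous hardware, but the principle is the same:

```python
# Exhaustive game-tree search on tic-tac-toe. A board is a 9-character
# string of "X", "O", or " ", read left-to-right, top-to-bottom.

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, to_move):
    """Score a position from X's viewpoint: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0  # board full, no winner: a draw
    scores = [minimax(board[:i] + to_move + board[i + 1:],
                      "O" if to_move == "X" else "X")
              for i, cell in enumerate(board) if cell == " "]
    # X picks the move with the highest score, O the lowest.
    return max(scores) if to_move == "X" else min(scores)

# Searching the entire game tree confirms the well-known result:
# tic-tac-toe is a draw under perfect play.
print(minimax(" " * 9, "X"))  # → 0
```

The same search on chess would have to consider roughly 10^120 move sequences, which is why Deep Blue needed special hardware and decades of algorithmic refinement rather than a bigger version of this loop.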

4. In AI, what seems easy is hard; what seems hard is often easy.

In the early days of artificial intelligence, there was lots of optimism that very soon AI would be able to do things like play good chess (we mentioned Simon’s 10-year prediction already), understand speech, and translate human languages. Instead, it has taken 60 years, and our artificial intelligence still doesn’t perform as well as we had expected at this point in time.

Those were all things that are relatively easy for humans to do; naively, AI researchers assumed that they would be easy for computers.

On the other hand, you could buy a computer that played pretty credible chess for $20 twenty years ago.

This is known in AI circles as the “AI effect” or as Tesler’s Theorem:

“Intelligence is whatever machines haven’t done yet”.

Once something is really solved, it stops being intelligence, at least to many people.

Amazon Alexa

5. You Use AI Every Day

While we argue over what is or isn’t really “intelligence”, we’re using real AI every day. Here are some of the ways I’ve been using it just today:

  • Gmail uses artificial intelligence to decide which of my emails are important, which ones I should see but are unimportant, and which ones are just spam, and it sorts them into the appropriate folders.
  • Alexa understands me well enough that I set my alarm through her last night and asked her for corned beef recipes this morning.
  • I’m dictating this through my Mac from handwritten notes, and the Mac is doing a very good job recognizing my words.
  • Yesterday I spent almost an hour using a chatbot on a service problem.

6. Even Small Companies Can Use AI Now

Whether artificial intelligence is really “intelligent” or not, it has become very useful. Big companies like Google are using it, but small companies can now leverage what the big companies have done for their own benefit. Many companies use chatbots for their first layer of online support, and many are using AI techniques to analyze “big data”, an approach pioneered by Amazon’s sales algorithms and Google’s advertising placements.

Alyss Analytics, a Flint Hills Group customer, had FHG develop a very interesting AI application.

Their application analyzes short videos of job applicants to evaluate their soft skills, and score them based on what they say and do and on facial recognition of their expressions.

Alyss Analytics is not a large company, but FHG was able to develop a solution for them by leveraging the AI capabilities available in the Cloud to automate a function that no one would have imagined could be automated 10 years ago.

FHG also built an impressive facial recognition clock-in/out system for Starkey Inc., a well-established community-based non-profit in Kansas, leveraging cloud facial identification AI. The solution documents employees clocking in and out by their faces and gives supervisors an audit log of exactly who clocked in. The system then interfaces with payroll giant ADP, feeding it Starkey’s payroll data to issue paychecks.

So What Comes Next?

We have hardly scratched the surface of what other AI solutions will show up in a few years. For example, there are autonomous vehicles, which are nearly good enough to be allowed on the roads without a safety driver. (But nearly good enough may be good enough: Elon Musk’s Boring Company wants to build tunnels and run autonomous vehicles in them, away from the crazy, unpredictable human drivers.) Other companies are using AI to do things like help control diabetes.

Even without Commander Data, AI is a big part of modern life, and sure to become bigger.

Would you like AI to become a bigger part of your Business Life? Give us a call and get a free estimate on your next project.

Charlie Martin
Consulting Software Engineer

Charlie Martin is a consulting software engineer and writer in Erie, Colorado, with interests in systems architecture, cloud computing, distributed systems in general and innovative applications of blockchains in particular. He is available for consulting through Flint Hills Group.



