
Artificial intelligence: what’s possible, why now?

David Wild

July 2018—When it comes to artificial intelligence, it can be difficult to distinguish hyperbole from reality. So to what extent can AI truly replace human tasks in society and, more specifically, in medicine?


Ajit Singh, PhD, who spoke at the Executive War College in May on what’s feasible today and the use of AI in medicine, said answering that question requires understanding what drives growth in AI—the discipline of how to make computers do things at which, for now, people are better—and the limits of its implementation.

Despite advances in AI, there will likely always be differences between “us and them,” Dr. Singh said of humans and computers. While physicians need to adapt to AI, the technology will “absolutely not” replace them.

“Emotion, understanding, con­sciousness, and creativity—you will never be able to hire a robot with these emotional quotient elements,” he said.

One of the most important drivers of AI and technological innovation is diversity, and nature can teach us a lot about its role in the growth of AI, said Dr. Singh, a partner at Artiman Ventures, a company in Palo Alto, Calif., focusing on early-stage technology and life science investments, and a professor in the Stanford University School of Medicine. Prior to joining Artiman, Dr. Singh was president and CEO of BioImagene, a digital pathology company acquired by Roche. Before that, he was with Siemens for nearly 20 years.

With the largest diversity of species on the planet, the Galapagos Islands provide an instructive example of how innovation can take place, Dr. Singh said. One reason the islands are so diverse is they offer a “remarkable opportunity for mobility.”

“Three ocean currents meet around the Galapagos Islands’ coastlines, bringing in and mixing life forms and large molecules from different parts of the planet.” With so many different life forms gathering in one location, “genetic experiments” happen constantly and organically, Dr. Singh said. In some cases, the new life forms fail; in other cases they succeed.

As in nature, human innovation is also based on diversity, but of a different type. “What helps us innovate is cognitive diversity,” he said, “or transdisciplinarity.” The latter differs from multidisciplinarity, which means “bubbles” of multiple disciplines that often work separately. “Transdisciplinarity is when disciplines intersect, when they’re able to discuss and discard failures,” Dr. Singh explained. “That’s when innovation happens.”

The origin of AI dates to 1950, when British scientist Alan Turing proposed the idea of a thinking machine (Mind. 1950;59:433–460).

His “Imitation Game,” later known as the Turing test, involved posing a series of logical questions to a human and a machine, both of which would be hidden from view. If the human interviewer could not distinguish between machine and human, the machine would be said to have passed the test and could be considered a thinking machine.

Other early AI innovators included Dietrich Prinz of the University of Manchester, who developed the first chess-playing program, and Allen Newell and Herbert Simon, who in 1955 created the Logic Theorist, a computer program that was able to prove 38 of the first 52 theorems in Principia Mathematica, a cornerstone text of mathematical thinking.

These and other critical devel­opments in computing led John McCarthy to coin the term “artificial intelligence” in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence.

With a history stretching back more than 60 years, why is there so much interest and growth in AI today?

“The basic ideas of AI and how to implement AI have not changed much,” Dr. Singh said. “The connection networks built in the 1980s are still the same, and the math has not changed. So what has changed?” Data, he said, which is now in abundance. “This has been the most critical thing we needed for AI to evolve to this point.”

Multiple academic disciplines now collaborate in the development of AI, he said, citing philosophy, mathematics, economics, neuroscience, psychology, cognitive science, computational engineering, control systems, and linguistics. “In order to make AI implement­able, we had to have that trans­disciplinarity coupled with large amounts of relevant data, and it took us several decades to get there,” Dr. Singh said.

The two dominant implementations of AI are symbolic and connectionist, Dr. Singh said, each with its strengths and limitations.

In the symbolic implementation, algorithms are used to mimic human intelligence. A connectionist implementation uses neural networks to build knowledge “from the bottom up.” The capacity of a machine to teach itself is limited by the complexity of its neural networks and by the amount of data it can learn from, Dr. Singh said.
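
To make the distinction concrete, here is a minimal, hypothetical sketch in Python (not taken from the article): the symbolic approach encodes a rule for logical AND by hand, while the connectionist approach lets a single artificial neuron, a perceptron, learn an equivalent rule bottom-up from labeled examples. The task, learning rate, and epoch count are arbitrary choices made purely for illustration.

```python
# Illustrative sketch only: contrasting the two implementations of AI
# described above, using a trivial task -- deciding whether both of two
# binary inputs are "on" (logical AND).

# Symbolic approach: intelligence is encoded top-down as an explicit rule.
def symbolic_and(a: int, b: int) -> int:
    if a == 1 and b == 1:   # the rule is written by a human
        return 1
    return 0

# Connectionist approach: a single artificial neuron (perceptron) starts
# with no rule at all and learns weights bottom-up from labeled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for a, b, target in examples:
            output = 1 if (w1 * a + w2 * b + bias) > 0 else 0
            error = target - output
            w1 += lr * error * a      # nudge the weights toward the data
            w2 += lr * error * b
            bias += lr * error
    return w1, w2, bias

data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # the "experience"
w1, w2, bias = train_perceptron(data)

for a, b, _ in data:
    learned = 1 if (w1 * a + w2 * b + bias) > 0 else 0
    print(a, b, "symbolic:", symbolic_and(a, b), "learned:", learned)
```

In the sketch, the learned weights play the role Dr. Singh assigns to connections: the knowledge lives in the strengths of those connections rather than in an explicit rule, and it improves only as the system sees more data.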

Currently, one central processing unit can contain up to roughly 10⁶ transistors, an amount that can facilitate about 1 billion simultaneous operations. A single machine can contain between 1,000 and 10,000 CPUs as well as up to 10⁹ bits of random-access memory, leading to a cycle time of 10⁻⁸ seconds, or 10 nanoseconds, to complete a task. In contrast, the human brain has 10¹² neurons, 10¹⁴ synapses, and a cycle time of 10⁻³ seconds, or one millisecond, to fire.
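
A rough, back-of-the-envelope calculation (an illustration, not one presented in the talk) shows how those figures compare. The sketch below divides each system's count of elementary units by its cycle time to estimate raw "events per second," under the simplifying assumption that every unit can act once per cycle.

```python
# Back-of-the-envelope sketch using the figures quoted above.
# Treating each unit (transistor or synapse) as able to act once per
# cycle is a simplifying assumption made purely for illustration.

machine = {
    "units": 10**6 * 10**4,   # ~10^6 transistors per CPU x up to 10^4 CPUs
    "cycle_time_s": 1e-8,     # 10 nanoseconds
}
brain = {
    "units": 10**14,          # ~10^14 synapses
    "cycle_time_s": 1e-3,     # ~1 millisecond to fire
}

for name, system in (("machine", machine), ("brain", brain)):
    events_per_second = system["units"] / system["cycle_time_s"]
    print(f"{name}: ~{events_per_second:.0e} events per second")

# Prints roughly 1e+18 for the machine and 1e+17 for the brain: on this
# crude measure raw speed is comparable, so the brain's edge lies in its
# interconnectivity -- the point Dr. Singh makes next.
```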

While we will soon have computer systems with as many “neurons” as the human brain, we are not likely to have computers with as many synapses, or interconnectivity, as the human brain, “not in the near future,” Dr. Singh said. Even if a human-level degree of neural interconnectivity could be built, it is unlikely that computers would achieve human-level AI.

“We don’t learn everything from scratch after birth. There’s a lot of genetic hardwiring that has taken place over several million years of evolution that has taught us to do certain things really well,” he said.

For example, humans can understand context, while computers at present cannot. That limitation presents a problem with an application like speech recognition, Dr. Singh said, noting that AI systems can understand individual words from small vocabularies with 99 percent accuracy but cannot truly understand speech.

“If I were in a restaurant and hurriedly said, ‘I’d like my coffee with dream and sugar,’ most waiters would bring me cream and sugar, even though I said dream and sugar,” he said. “Context is extremely important, and unless we’re able to input a lot of context into AI systems, no matter how much vocabulary we input, it won’t be able to understand speech.”

Computers also have a limited capacity to perceive, unlike humans, who can identify patterns, understand scenes, and recognize objects relatively well even with poor lighting and with objects occluding their line of sight.

“If you had to recognize individual faces as they were walking into a conference room one by one, you could likely do that with very high accuracy, and even if you had a cluttered environment where you had partial occlusions and different lighting, the success rate would still be high, whereas for an AI-based face recognition system, the success rate would drop to 30 to 60 percent,” Dr. Singh said.
