
Nobel Prize-Winner Daniel Kahneman Just Explained What He's Learned About A.I. Outsmarting Humans

When it comes to answering difficult questions, well-built artificial intelligence will always have us beat.

That was a key takeaway from a recent conversation between economist Daniel Kahneman and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, at the Conference on Neural Information Processing Systems (NeurIPS). The pair spoke during the virtual event about the shortcomings of human judgment and what we can learn from those shortcomings while building A.I.

Kahneman, a Nobel Prize winner in economic sciences and the author of Thinking, Fast and Slow, noted an instance in which humans use judgment heuristics--shortcuts, essentially--to answer questions they don't know the answer to. In the example, people are given a small amount of information about a student: She's about to graduate, and she was reading fluently when she was 4 years old. From that, they're asked to estimate her grade point average.

Using this information, many people will estimate the student's GPA to be 3.7 or 3.8. To arrive there, Kahneman explained, they assign her a percentile on the intelligence scale--usually very high, given what they know about her reading ability at a young age. Then they assign her a GPA in what they estimate to be the corresponding percentile.
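To make that shortcut concrete, here is a minimal sketch of the percentile-matching intuition Kahneman describes. The distribution parameters (a mean GPA of 3.0, a standard deviation of 0.5) and the 95th-percentile reading assessment are illustrative assumptions, not figures from the talk.

```python
from statistics import NormalDist

# Illustrative numbers only -- the talk gives no distribution parameters.
GPA_MEAN = 3.0   # assumed average GPA
GPA_SD = 0.5     # assumed spread of GPAs

def matched_percentile_guess(intelligence_percentile: float) -> float:
    """The intuitive shortcut: place the student at the same percentile
    of the GPA distribution as her assumed percentile of intelligence."""
    gpa = NormalDist(GPA_MEAN, GPA_SD).inv_cdf(intelligence_percentile)
    return min(gpa, 4.0)  # cap at a 4.0 scale

# Reading fluently at age 4 feels like roughly the 95th percentile of ability...
print(round(matched_percentile_guess(0.95), 2))  # ~3.8 -- the intuitive answer
```

Under these assumed numbers, the shortcut lands right around the 3.7 to 3.8 range that people typically report.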


Of course, the person answering the question doesn't consciously realize that they're following this process. "It's automatic," said Kahneman. "It's not deliberate. It's something that happens to you."

And the guess they offer isn't likely to be a particularly good one. "The answer that came to your mind is ridiculous, statistically," said Kahneman. "The information that you've received is very, very uninformative."

A student's reading ability at age 4, in other words, doesn't have a high correlation with their GPA 14 years later. But when we're faced with a question we can't answer, said Kahneman, we tend to answer a simpler one instead.

"We're rarely stumped," he said. "The answer to a related question will come to our mind, and we may not be fully aware of the fact that we're substituting one question for another."

In reality, the best way to estimate the student's GPA would be to start with an average GPA--say, 3.0 or slightly higher--and make a minor upward adjustment based on what we know about the student. But research shows that most people don't think this way. They tend to lean too heavily on the information they have (in this case, the student's reading ability at a young age) while failing to appreciate how much information they don't have.
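A rough sketch of that sounder approach is to shrink the intuitive guess back toward the average in proportion to how weakly the evidence predicts the outcome. The correlation value below (0.2 between age-4 reading and college GPA) is an assumption for illustration, not a figure cited in the talk.

```python
from statistics import NormalDist

# Illustrative numbers only; the correlation is assumed, not from the talk.
GPA_MEAN = 3.0       # assumed average GPA
GPA_SD = 0.5         # assumed spread of GPAs
CORRELATION = 0.2    # assumed weak link between age-4 reading and college GPA

def regressed_estimate(intelligence_percentile: float) -> float:
    """Shrink the intuitive percentile-matched guess toward the mean in
    proportion to how weakly the evidence predicts the outcome."""
    matched = NormalDist(GPA_MEAN, GPA_SD).inv_cdf(intelligence_percentile)
    return GPA_MEAN + CORRELATION * (matched - GPA_MEAN)

print(round(regressed_estimate(0.95), 2))  # ~3.16 -- a minor upward adjustment from 3.0
```

With these assumed numbers, the defensible estimate is only a little above the average, far below the 3.7 or 3.8 that the shortcut produces.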

A soundly engineered A.I. system, on the other hand, isn't likely to make the same mistake. Properly built A.I. will use all the data it has and won't overadjust on the basis of one piece of new information.

Engineers should keep this in mind when building A.I., said Tenenbaum. "If there are ways in which human thinking is a model to be followed, we should be following it," he said. "If there are ways in which human thinking is flawed, we should be figuring out how to avoid those in the A.I.s we build."

Source: Inc.
