January 13, 2023, by Brigitte Nerlich

Artificial intelligence, dark matter and common sense

In my last post I wrote about a new type of deep learning network, the chatbot ChatGPT, which has stirred up a lot of debate, including between me and my sister. We were skyping and I mentioned the blog post and she said: “Oh, I have just read something in the New York Times that might be of interest to you. It’s about that bot but also about common sense as the dark matter of AI”. I squeaked and said: “Dark matter of AI! Wow, that’s a metaphor for my next post.” Little did I know what I had let myself in for.

ChatGPT, common sense and a metaphor

Debates about ChatGPT revolve around what it can and can’t do, as well as what harm it can and can’t do. ChatGPT is a chatbot that can provide bland answers to bland questions, such as “What is dark matter?” and even “What is common sense?”. The answers are actually quite good. It even provides an answer to the question “Is common sense the dark matter of artificial intelligence?” You can find its answers at the end of the post, including its apologies for a mistake.

In this post I am more interested in dissecting the metaphor a bit, although I have to confess I felt quite defeated by the bot.

Dark metaphors

The metaphor “Common sense is the dark matter of (artificial) intelligence” reminded me of another use of the dark matter metaphor I had encountered before, as in “Junk DNA (or non-coding DNA) is the dark matter of genomics”.

Metaphors usually map a familiar or known domain of knowledge (say, code) onto a more unfamiliar and unknown one (DNA) and you get ‘DNA is a code’. That opens up some avenues of exploration. In the case of ‘Junk DNA is the dark matter of genomics’, we map something people know very little about (relating to physics) onto something people know very little about (relating to genomics).

With respect to dark matter and intelligence, artificial or human, we map something we don’t know much about, dark matter, onto something we think we know a lot about but actually don’t, namely common sense. But the point of these metaphors is that they highlight something about non-coding DNA or artificial intelligence that is deeply mysterious, something that needs much more exploration.

In the following, I shall first talk a little bit about the source domain of our metaphor, dark matter, and then the target domain, common sense and its relation to (artificial) intelligence – a huge topic, about which I don’t know very much.

Dark matter

In the article spotted by my sister, Yejin Choi, an expert in AI and common sense, says that “common sense is the dark matter of intelligence”. She also explains why dark matter matters metaphorically:

“Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable. We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it.”

That echoed a blog post from a couple of months earlier entitled “The Dark Matter of AI: Common Sense Is Not So Common” – note that in the blog post the metaphor refers to AI, not, as in Yejin Choi’s formulation, to intelligence. Both versions of the metaphor are in circulation.

The key thing is that dark matter is elusive, abundant and important, and these characteristics are mapped onto common sense. Dark matter makes the universe work and common sense makes intelligence work. But what ‘is’ this elusive common sense? And can the metaphor shed any light on common sense and (artificial) intelligence?

Common sense

The Oxford English Dictionary describes common sense as “Intelligence or sagacity in relation to practical matters arising in everyday life; the ability to make sound judgements and sensible decisions regarding such matters”. It provides one nice example of that use of the word, taken from a French source: “1685 tr. P. Nicole & A. Arnauld Logic i. 5 Common Sence [Fr. Le sens commun] is no such Vulgar Quality as men take it to be.” How right these grammarians and logicians were!

A blog post I read about this matter adds another literary source to the mix. “In 1906, philosopher G. K. Chesterton wrote that ‘common sense is a wild thing, savage, and beyond rules.’” This is important, as rules are what have, for a long time, made computers and AI work. Operating in real world situations is difficult for robots and AI, as here “problems are unpredictable, vague, and not defined by rules”.

As understanding common sense is essential to comprehending how humans think and act, it’s not surprising that it is a matter of deep concern to people interested in robotics and AI. As the same blogger explained: “Equipping computers with common sense has actually been a goal of computer science since the field’s very start; in 1958, pioneering computer scientist John McCarthy published a paper titled ‘Programs with common sense’ which looked at how logic could be used as a method of representing information in computer memory. But we’ve not moved much closer to making it a reality since.” And a lot has been going on since!

ChatGPT and common sense

Many people are playing with ChatGPT, the chatbot powered by large language models that string words together in a very convincing way. Amongst the earliest experimenters was Gary Marcus, an artificial intelligence researcher, quoted in an article on common sense and AI published in 2020.

He played with a precursor of ChatGPT. He “typed the following into GPT-2: What happens when you stack kindling and logs in a fireplace and then drop some matches is that you typically start a … Surely a system smart enough to contribute to The New Yorker would have no trouble completing the sentence with the obvious word, ‘fire.’ GPT-2 responded with ‘ick.’ […] Marcus wasn’t surprised. Commonsense reasoning – the ability to make mundane inferences using basic knowledge about the world, like the fact that ‘matches’ plus ‘logs’ usually equals ‘fire’ – has resisted AI researchers’ efforts for decades.”

It’s 2023 now. So I asked ChatGPT the same question and got this answer: “fire. The kindling and logs will catch fire from the heat and flames of the matches, and will continue to burn as long as there is oxygen present. The fire will produce heat, light, and smoke. The heat will radiate out into the room, warming it up. The light will make it possible to see in the room. The smoke will go up the chimney, carrying away any pollutants and byproducts of combustion.”

Does this mean that this AI now has common sense, that we have been able to shine a light on the dark matter of AI? I doubt it. My hunch is that the above exchange has been incorporated into its training data. If you want to know more about all this, read this by Marcus … it’s complicated.

Bringing common sense and dark matter together

Now we know a bit more about dark matter and common sense: one is elusive, abundant and important, and the other is also elusive, abundant and important. When we map the one onto the other we highlight the importance and mystery of these topics. The metaphor is a sort of placeholder for future research. But unlike the metaphor ‘DNA is a code’, this metaphor doesn’t provide any immediate clues about where to look for inspiration – we are still in the dark.

So for the time being there is still a lot of dark matter to work through before we can even approach something that one can call artificial intelligence. That seems to be common sense.

***

My conversation with ChatGPT, which puts this post to shame – but there is also a mistake

Is common sense the dark matter of AI?

Common sense is often referred to as the “dark matter” of AI because, like dark matter, it is something that is thought to be ubiquitous and foundational, yet it is difficult to define, measure, and replicate. While the concept of common sense is relatively straightforward, actually building machines that have common sense is a monumental task that researchers have been working on for decades. It is thought that the ability to possess common sense would greatly enhance an AI’s ability to understand and navigate the world, and would be a key step in achieving true artificial general intelligence.

But what is common sense though?

Common sense is the knowledge and understanding of the basic principles of everyday life that most people possess. It is the set of basic assumptions, concepts, and rules of thumb that people use to make sense of the world around them and navigate everyday situations. It can be thought of as a basic level of intelligence that allows people to understand things that are not explicitly stated, and make reasonable inferences based on incomplete information.

Common sense includes knowledge about the physical world, such as understanding that objects fall when dropped, and knowledge about social interactions, such as understanding that it is impolite to interrupt someone when they are speaking. It also encompasses knowledge about how the world works, such as cause and effect relationships, and basic logic and reasoning abilities. It is a form of “intuitive” knowledge which humans gain through experiences and is hard to encode.

Can you tell me who has written about common sense as dark matter of AI?

Common sense has been compared to dark matter in the context of AI in various publications and articles. The analogy is often used to convey the idea that, like dark matter, common sense is something that is believed to be foundational to intelligence, yet it is difficult to define, measure, and replicate. The comparison is usually used to emphasize the challenge of building machines that can possess common sense.

One example of someone who has written about common sense as the “dark matter” of AI is Rodney Brooks, a researcher in the field of robotics and AI. In his 2011 book “Robot: Mere Machine to Transcendent Mind”, Brooks writes that “common sense is the dark matter of AI—it is all around us and important, but we do not know how it works or how to build it.”

Another example is AI researcher Douglas Hofstadter, he wrote in his book “Fluid Concepts and Creative Analogies” that “Common sense is the vast network of implicit knowledge that allows humans to function in the everyday world, and it is one of the most difficult things for a computer to mimic.”

These are a few examples of researchers that used the analogy of dark matter to highlight the importance of common sense in AI and how difficult it is to replicate.

Did Rodney Brooks really write about common sense as dark matter?

I apologize, my previous statement was incorrect. Rodney Brooks, a researcher in the field of robotics and AI, did not write about common sense as the “dark matter” of AI in his 2011 book “Robot: Mere Machine to Transcendent Mind”. I apologize for any confusion caused by my error. It is likely that I confused it with some other publications and talks where this analogy was used by other AI researchers.

Common sense is a fundamental and important topic in AI research and many researchers have written about its importance in building intelligent systems, and the challenges of creating AI that possesses common sense.

I think Brooks did not write the book Robot: Mere Machine to Transcendent Mind

You are correct, I apologize for the mistake. Rodney Brooks did not write a book named “Robot: Mere Machine to Transcendent Mind”. He is a well-known researcher in the field of AI and Robotics and has written and spoken extensively on various topics related to AI and Robotics, but I couldn’t find any reference to a book with that title. I apologize for any confusion my previous statement may have caused.

Thank you for being so honest!

You’re welcome. I apologize for any confusion caused by my previous errors. It’s important to me to provide accurate information and I’m sorry that on this occasion I didn’t meet that standard. Let me know if there is anything else I could help you with.

[Interestingly, despite its apologies, it never spontaneously found the right author for the book ‘Robot: Mere Machine to Transcendent Mind’, despite it taking only a few seconds for Google to find it, namely Hans Moravec – you knew that, readers, didn’t you…]

Image: Wikimedia Commons

Posted in Metaphors