Building Commonsense in AI

It is often argued that what makes humans the most intelligent species is their innate capacity for commonsense reasoning. Humans use commonsense knowledge about the world around them to make appropriate decisions, and this turns out to be a necessary ingredient for their survival.

AI researchers have long thought about building commonsense knowledge into AI. They argue that if an AI system possesses the necessary commonsense knowledge, it will be a truly intelligent machine.

We will discuss two major commonsense projects that exploit this idea:

  • Cyc aims to build a comprehensive ontology and knowledge base of everyday commonsense knowledge that AI applications can use for human-like reasoning. Started in 1984, Cyc has come a long way. Today, OpenCyc 4.0 includes the entire Cyc ontology, containing 239,000 concepts and 2,093,000 facts, and can be browsed on the OpenCyc website - http://www.cyc.com/platform/opencyc/. OpenCyc is available for download from SourceForge under an OpenCyc License.

  • Never-Ending Language Learning (NELL) is a semantic machine learning system developed at Carnegie Mellon University that has been running continuously since the beginning of 2010. NELL browses millions of web pages looking for connections between different concepts, trying to mimic the human learning process. NELL achieves this by performing two tasks each day:
    • Reading task: extract information from web text to further populate a growing knowledge base of structured facts and knowledge.
    • Learning task: learn to read better each day than the day before, as evidenced by its ability to go back to yesterday’s text sources and extract more information more accurately.
You can browse the facts NELL has learned so far at http://rtw.ml.cmu.edu/rtw/.
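NELL's actual architecture couples many extraction methods, but the core idea of its reading task can be sketched with a single Hearst pattern ("X such as Y") that proposes new category facts from free text. This is only an illustrative toy, not NELL's real pipeline; the sentences, regex, and categories below are invented for this example.

```python
import re

# Minimal sketch of a NELL-style "reading task": mine the Hearst
# pattern "<category> such as <ProperNoun>" from raw text and add
# the proposed facts to a growing knowledge base of (instance,
# category) pairs. Real NELL uses many coupled extractors and a
# confidence model on top of patterns like this one.

text = ("Large cities such as Mumbai attract many startups. "
        "Programming languages such as Python are widely taught.")

# Non-greedy category phrase, followed by a capitalized instance name.
pattern = re.compile(r"([A-Za-z ]+?) such as ([A-Z][a-zA-Z]+)")

knowledge_base = set()
for category, instance in pattern.findall(text):
    knowledge_base.add((instance, category.strip().lower()))

for instance, category in sorted(knowledge_base):
    print(f"{instance} is a member of category '{category}'")
```

Running the extractor over yesterday's pages with a better pattern set than the day before is, loosely, what "learning to read better each day" means.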

Commonsense reasoning systems will be an essential element of question answering systems. You can build your own question answering system using either Cyc or NELL.
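Cyc exposes its knowledge base through its own APIs, but the flavor of ontology-backed question answering can be sketched with a toy triple store. The predicate names (`isa`, `genls`) echo Cyc's vocabulary; the concepts and assertions below are invented for illustration and are not drawn from the real Cyc KB.

```python
# Toy Cyc-style knowledge base: assertions are (predicate, subject,
# object) triples, and answering "is Fido a living thing?" requires
# chaining up the generalization (genls) hierarchy.

assertions = {
    ("genls", "Dog", "Mammal"),
    ("genls", "Mammal", "Animal"),
    ("genls", "Animal", "LivingThing"),
    ("isa", "Fido", "Dog"),
}

def generalizations(concept):
    """All concepts reachable from `concept` via transitive genls links."""
    found, frontier = set(), {concept}
    while frontier:
        current = frontier.pop()
        for pred, sub, obj in assertions:
            if pred == "genls" and sub == current and obj not in found:
                found.add(obj)
                frontier.add(obj)
    return found

def isa(instance, concept):
    """Answer: is `instance` an instance of `concept`, directly or via genls?"""
    direct = {obj for pred, sub, obj in assertions
              if pred == "isa" and sub == instance}
    return concept in direct or any(
        concept in generalizations(d) for d in direct)

print(isa("Fido", "LivingThing"))  # True: Dog -> Mammal -> Animal -> LivingThing
```

A question answering system backed by such a KB can answer questions the source text never stated explicitly, which is exactly what commonsense knowledge buys you.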

After 31 years, Cyc has been put to commercial use for the first time, by a company called Lucid, to develop its personal assistant. Drawing on Cyc's vast repository of commonsense knowledge can make a personal assistant more accurate at answering questions than assistants that lack such knowledge.


References:

[1] Panton, Kathy, et al. "Common sense reasoning–from Cyc to intelligent assistant." Ambient Intelligence in Everyday Life. Springer Berlin Heidelberg, 2006. 1-31.

[2] Carlson, Andrew, et al. "Toward an architecture for never-ending language learning." Proceedings of the AAAI Conference on Artificial Intelligence, 2010.

