
Buddha vs Child Paradigm



“Artificial Intelligence (AI) is like a child, who needs time to learn and adapt, whereas a typical IT system is like Buddha who knows everything about the problem it was supposed to solve.”   
          - Devesh Rajadhyax, Founder, Cere Labs.


Images: By Purshi - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9827197
By Shaun MItchem - Diggy starts to learn to walk, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=4762045 




In this post, let us elaborate on the Buddha vs Child Paradigm, which Devesh coined to differentiate between conventional IT systems and AI. Knowing this difference is essential because it helps in building the right kind of attitude towards implementing AI systems. A typical IT system, such as an ERP, does the job it was implemented for; it is assumed that it will solve the problem it was built to solve. Take, for example, an accounting system like Tally. It will help you manage your accounts in a highly accurate manner. Such a system is like a Buddha, enlightened from the start. We can't expect it to make any mistakes (apart from the initial software bugs, which get rectified). Or take a calculator. A calculator will always give you 1 + 2 = 3, no matter how many times you perform the operation, as long as the calculator is in working condition. Thus we expect an IT system to work perfectly, and indeed it does.

But take an AI system. A typical AI system learns from data. Initially it is like a child, finding it difficult even to crawl, but slowly and steadily it learns how to walk as it is exposed to more situations, which in the case of AI means more data. The maturity of an AI system comes with more and more exposure to data. In the calculator example, an AI system would be given many examples such as 1 + 2 = 3, and over a period of time it would learn how to perform addition. Initially it might fail to predict 3, but its answer will come closer and closer to 3.
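A minimal sketch of this idea (purely illustrative, not any particular Cere Labs system): a tiny linear model trained with gradient descent on addition examples. Its answer to 1 + 2 starts off wrong and drifts towards 3 as it sees more data.

import random

random.seed(0)

# Hypothetical training data: pairs of numbers and their sums.
examples = [(a, b, a + b) for a in range(10) for b in range(10)]

# The "child": a linear model y = w1*a + w2*b + bias with random initial weights.
w1, w2, bias = random.random(), random.random(), random.random()
learning_rate = 0.005

for epoch in range(50):
    random.shuffle(examples)
    for a, b, target in examples:
        prediction = w1 * a + w2 * b + bias
        error = prediction - target
        # Nudge the weights to reduce the error on this example.
        w1 -= learning_rate * error * a
        w2 -= learning_rate * error * b
        bias -= learning_rate * error
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}: model thinks 1 + 2 = {w1*1 + w2*2 + bias:.3f}")

# With enough exposure to data the prediction converges close to 3, but it
# remains an approximation learned from examples, not an exact built-in rule.

Unlike the calculator, which is "enlightened" from the start, this model only approaches the right answer through repeated exposure to data.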

As you may have noticed, there is a paradigm shift in understanding and using an AI system compared to an IT system. Let us summarize a few differences:


IT System | AI System
Mature enough to solve the problem it was built for right from the start. | Keeps maturing as it is exposed to more data.
Results are expected to be accurate. | Results keep improving.
Patience from IT executives is required only until implementation. | Patience is required throughout the lifetime of the system.
Mistakes in output cannot be tolerated. | Mistakes in output can be tolerated.
100% accuracy. | Accuracy approaches 100%, but remains mostly probabilistic.

AI systems are the only option for solving problems that are not well defined, such as prediction or anomaly detection. That is why people tolerate their childlike behavior.

Thus it is essential to understand this difference between an IT system and an AI system. It will help an executive support AI researchers and engineers in achieving great results over a period of time.

By,
Siddhesh Wagle,
Head of Research,
Cere Labs
