The Importance of F1 Score

At CereLabs, we are building various image classification systems. While building any kind of classification system, one is often challenged to test the trained models. One useful measure for testing such models is accuracy: the proportion of correct results among the total number of cases examined. Accuracy thus communicates how close the model comes to the correct result. For an image classification system, accuracy describes how well the trained model classifies the test image dataset. If we are trying to classify images of apples, accuracy measures how reliably the classifier detects an apple in an image.

Consider the four outcomes that make up a confusion matrix:

  • True Positive (TP): the image contains an apple and is correctly classified as an apple.
  • False Negative (FN): the image contains an apple but is not classified as an apple.
  • False Positive (FP): the image does not contain an apple but is classified as an apple.
  • True Negative (TN): the image does not contain an apple and is correctly classified as not an apple.
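If you keep the per-image labels around, these four counts can be read off with scikit-learn. Below is a minimal sketch; the tiny label arrays are invented purely for illustration:

    from sklearn.metrics import confusion_matrix

    # Hypothetical labels for six test images: 1 = apple, 0 = not an apple
    y_true = [1, 1, 0, 0, 1, 0]  # ground truth
    y_pred = [1, 0, 0, 1, 1, 0]  # model predictions

    # With labels=[0, 1], scikit-learn lays the matrix out as [[TN, FP], [FN, TP]]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    print(tp, fn, fp, tn)  # 2 1 1 2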

Suppose we use a model that is able to detect apples in an image. We test the model on 100000 images, out of which 1000 contain apples and 99000 do not. After testing we get the following confusion matrix:

                               Classified as apple    Classified as not apple
  Contains an apple            TP = 445               FN = 555
  Does not contain an apple    FP = 1024              TN = 97976

The formula to calculate accuracy is as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
         = (445 + 97976) / (445 + 97976 + 1024 + 555)
         ≈ 0.98
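To verify the arithmetic, here is a minimal Python sketch that recomputes accuracy from the four counts above:

    # Counts taken from the confusion matrix above
    tp, fn = 445, 555       # apples: detected / missed
    fp, tn = 1024, 97976    # non-apples: falsely flagged / correctly rejected

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"Accuracy: {accuracy:.3f}")  # 0.984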

Although accuracy can be a useful measure for testing a model, it can fail when the data is skewed, which is common in classification problems. In such a case, a high accuracy does not guarantee that the classifier is doing well; it may simply reflect the skew in the data. In our apple example, only 1% of the test images contain apples, so a model that labels every image as "not an apple" would still score 99% accuracy while never detecting a single apple. An accuracy above 98% tells us nothing about whether the model can actually recognize apples.
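A quick sketch makes this concrete: a degenerate "classifier" that labels every image as "not an apple" still scores 99% accuracy on our skewed test set.

    # Baseline that always predicts "not an apple" on the skewed test set
    n_apples, n_non_apples = 1000, 99000

    tp, fn = 0, n_apples        # every apple is missed
    fp, tn = 0, n_non_apples    # every non-apple is trivially correct

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"Accuracy: {accuracy:.2f}")  # 0.99, yet zero apples detected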

Thus the disadvantages of using accuracy are as follows:
  • Not able to distinguish between false positives and false negatives
  • Not a useful metric if the data is skewed
  • Accuracy Paradox: Predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy.

These disadvantages may turn out to be catastrophic if we use accuracy to measure the correctness of a model in cases such as fraud detection or cancer detection. In particular, we want to catch cases where a malignant tumour is classified as benign. Just imagine a model predicting that a person does not have a disease even when the person has it. The accuracy score can hide such cases, so we might never find out how good a model really is at detecting tumours or other diseases.

In such a scenario, the F1 score comes to our rescue. To calculate the F1 score we need to calculate Precision and Recall.

Precision (P): Of all the images classified as an apple, what fraction actually contain an apple? In other words, how many of the positive predictions were correct?

P = TP / (TP + FP)
  = 445 / (445 + 1024)
  ≈ 0.30

Here you can see that Precision has dropped severely because of the skewed data.
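The same counts give Precision in one line of Python:

    tp, fp = 445, 1024

    precision = tp / (tp + fp)
    print(f"Precision: {precision:.2f}")  # 0.30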

Recall (R): Of all the images that actually contain an apple, what fraction were correctly classified as apples?

R = TP / (TP + FN)
  = 445 / (445 + 555)
  ≈ 0.44
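And Recall likewise:

    tp, fn = 445, 555

    recall = tp / (tp + fn)
    print(f"Recall: {recall:.3f}")  # 0.445, reported as 0.44 above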

Both Precision and Recall are important measures, and the F1 score gives us a single value that incorporates both. The F1 score is the harmonic mean of Precision and Recall.
F1 = 2 * (P * R) / (P + R)
   = 2 * (0.30 * 0.44) / (0.30 + 0.44)
   ≈ 0.36
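Putting it all together in Python; in practice you would usually call scikit-learn's precision_score, recall_score, and f1_score on the label arrays instead of working from raw counts:

    tp, fn, fp = 445, 555, 1024

    precision = tp / (tp + fp)                          # ≈ 0.30
    recall = tp / (tp + fn)                             # ≈ 0.44
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    print(f"F1 score: {f1:.2f}")  # 0.36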

Thus the F1 score captures the balance between Precision and Recall, and a good F1 score indicates that a model classifies data well. If Precision and Recall are imbalanced, or both are low, the F1 score penalizes the classifier. The F1 score above shows that the model has failed miserably, which the accuracy score was not able to reveal.

We hope you are able to use the F1 score in your classification projects; the higher, the better. If you have any queries, please mention them in the comments.
