EVALUATION

INTRODUCTION TO MODEL EVALUATION

Evaluation is the fifth of the five stages of the AI Project cycle: Problem Scoping, Data Acquisition, Data Exploration, Modelling, and Evaluation.

WHAT IS EVALUATION?
Evaluation is the process of understanding the reliability of an AI model. It is based on the outputs received by feeding test data into the model and comparing those outputs with the actual answers.

Overfitting happens when a machine learning model has become too attuned to the data on which it was trained, and therefore loses its applicability to any other dataset.

MODEL EVALUATION TERMINOLOGIES:
CASE: Is it raining?

THE SCENARIO
PREDICTION | REALITY | CONCLUSION
Yes        | Yes     | True Positive (TP)
No         | Yes     | False Negative (FN)
Yes        | No      | False Positive (FP)
No         | No      | True Negative (TN)

CONFUSION MATRIX:
A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values and broken down by each class. This is the key to the confusion matrix: it shows the ways in which the classification model is confused when it makes predictions.

CONFUSION MATRIX
                  Reality: Yes | Reality: No
Prediction: Yes |      TP     |     FP
Prediction: No  |      FN     |     TN
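Below is a minimal Python sketch (plain Python, no libraries) of how the four confusion-matrix counts can be tallied from prediction/reality pairs, using the "Is it raining?" case above; the example lists are made up for illustration.

predictions = ["Yes", "No", "Yes", "No", "Yes"]   # what the model said
reality     = ["Yes", "Yes", "No", "No", "Yes"]   # what actually happened

tp = fp = fn = tn = 0
for pred, real in zip(predictions, reality):
    if pred == "Yes" and real == "Yes":
        tp += 1   # True Positive: predicted rain, and it rained
    elif pred == "Yes" and real == "No":
        fp += 1   # False Positive: predicted rain, but it did not rain
    elif pred == "No" and real == "Yes":
        fn += 1   # False Negative: predicted no rain, but it rained
    else:
        tn += 1   # True Negative: predicted no rain, and it did not rain

print("TP:", tp, "FP:", fp, "FN:", fn, "TN:", tn)   # TP: 2 FP: 1 FN: 1 TN: 1

Each count then fills one cell of the confusion matrix above.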
DEFINITION OF THE TERMS:
Positive (P): The observation is positive (example: that is a car).
Negative (N): The observation is not positive (example: that is not a car).
True Positive (TP): The observation is positive and is predicted to be positive.
True Negative (TN): The observation is negative and is predicted to be negative.
False Positive (FP): The observation is negative but is predicted to be positive.
False Negative (FN): The observation is positive but is predicted to be negative.

EVALUATION METHODS:
1. ACCURACY: Accuracy represents the number of correctly classified data instances over the total number of data instances.

   Accuracy = (Correct predictions / Total cases) × 100% = ((TP + TN) / (TP + TN + FP + FN)) × 100%

2. PRECISION: Precision is the percentage of true positive cases out of all the cases where the prediction is positive.

   Precision = (True Positives / All Predicted Positives) × 100% = (TP / (TP + FP)) × 100%

3. RECALL: Recall is the ratio of the number of correctly classified positive examples to the total number of positive examples.

   Recall = True Positives / (True Positives + False Negatives) = TP / (TP + FN)

4. F1 SCORE: The F1 score is the measure of balance between Precision and Recall.

   F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

PRECISION | RECALL | F1 SCORE
Low       | Low    | Low
Low       | High   | Low
High      | Low    | Low
High      | High   | High

Note: If the F1 score of a model is high, then the model has good performance.

ASSESSMENT
Q. 1] Calculate the following measures for the given confusion matrix (a worked check in Python follows below):
i) Accuracy   ii) Precision   iii) Recall   iv) F1 Score

Confusion Matrix     Reality: Positive | Reality: Negative
Predicted Positive |        100        |        45
Predicted Negative |         65        |       320

For more help: https://onlineconfusionmatrix.com/
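Here is a minimal Python sketch of the four measures, applied as a check on the assessment question above. Reading the question's matrix with the same Prediction-rows / Reality-columns convention as earlier gives TP = 100, FP = 45, FN = 65, TN = 320; this reading of the column headers is an assumption.

def evaluate(tp, fp, fn, tn):
    # Formulas exactly as defined under EVALUATION METHODS above
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Assumed reading of the assessment matrix: TP=100, FP=45, FN=65, TN=320
acc, prec, rec, f1 = evaluate(tp=100, fp=45, fn=65, tn=320)
print(f"Accuracy : {acc:.2%}")    # 79.25%
print(f"Precision: {prec:.2%}")   # 68.97%
print(f"Recall   : {rec:.2%}")    # 60.61%
print(f"F1 Score : {f1:.2%}")     # 64.52%

Since Precision and Recall are both moderately high here, the F1 score lands between them, as the balance table above suggests.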