**CHAPTER 2 BAYESIAN NETWORK**

**2.5. EVALUATING STRUCTURAL ACCURACY**

**2.5.2 CONFUSION MATRIX**

In supervised learning, the performance of a classifier on a test set is commonly evaluated in terms of four basic quantities, obtained by comparing the predicted class of each case with its actual class.

In predictive analytics, a table of confusion, better known as a confusion matrix, is in its simplest form a table with two rows and two columns that reports the numbers of true positives, false positives, false negatives, and true negatives. Each row of the confusion matrix corresponds to an actual class, each column corresponds to a predicted class, and each cell contains the number of cases falling in the intersection of those two classes. The structure of the confusion matrix is presented in Table 2.2.

**Table 2.2:** Confusion Matrix

| **Actual Class** | **Predicted Class: Yes** | **Predicted Class: No** |
| --- | --- | --- |
| Yes | *True Positive* (TP) | *False Negative* (FN) |
| No | *False Positive* (FP) | *True Negative* (TN) |

The entries of the confusion matrix are integer counts. Their sum, TP + TN + FP + FN = n, equals the total number of cases evaluated. The entries on the principal diagonal of the confusion matrix are the correct classifications, i.e., the true positives and true negatives; the remaining entries are classification errors. Several performance metrics can be obtained from the confusion matrix.
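As a minimal sketch of how these four counts arise in practice, the following Python snippet tallies them from paired actual/predicted binary labels. The label encoding (1 = positive, 0 = negative), the function name, and the example data are illustrative assumptions, not from the text.

```python
# Tally the four confusion-matrix cells from paired binary labels.
# Encoding assumed here: 1 = positive class, 0 = negative class.

def confusion_counts(actual, predicted):
    tp = fp = fn = tn = 0
    for a, p in zip(actual, predicted):
        if a == 1 and p == 1:
            tp += 1      # true positive: sick case flagged as sick
        elif a == 0 and p == 1:
            fp += 1      # false positive: healthy case flagged as sick
        elif a == 1 and p == 0:
            fn += 1      # false negative: sick case cleared as healthy
        else:
            tn += 1      # true negative: healthy case cleared as healthy
    return tp, fp, fn, tn

actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
print(confusion_counts(actual, predicted))  # (2, 1, 1, 2)
```

Note that the four counts always sum to n, the number of cases, as stated above.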

**2.5.2.1. ACCURACY AND ERROR RATE**

Accuracy is the ratio of correctly classified cases to all cases in the test set, i.e., (TP + TN)/(TP + TN + FP + FN). The error rate is defined as (1 − Accuracy). Several authors have noted that this metric fails to distinguish between the classes it considers: accuracy does not reflect the true performance of a classifier under a skewed class distribution, and in practice classifiers regularly face far more negative cases than positive ones [112], [113], [114], [115], [116]. Accuracy summarizes the classifier's performance on both classes in a single number, independently of the particular distribution of the target classes. A further weakness of the accuracy metric is that it can assign the same score to two quite different situations. For instance, the accuracy may be identical when, in one situation, classifying nearly all positives correctly comes at the cost of misclassifying nearly every negative, while, in the other situation, classifying nearly every positive correctly produces only about half as many false positives.
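The imbalance weakness described above can be made concrete with a small sketch. The counts below are hypothetical, chosen only to illustrate the point: a classifier that predicts "negative" for everything still scores high accuracy when negatives dominate.

```python
# Accuracy and error rate from confusion-matrix counts.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Roughly balanced test set (hypothetical counts):
acc = accuracy(tp=45, tn=40, fp=10, fn=5)        # 0.85

# Skewed test set: 95 negatives, 5 positives.  A degenerate
# classifier that always answers "negative" finds no positive
# case at all, yet its accuracy looks excellent.
acc_skewed = accuracy(tp=0, tn=95, fp=0, fn=5)   # 0.95
error_rate = 1 - acc_skewed

print(acc, acc_skewed, error_rate)
```

This is exactly why the class-specific metrics of the next sections (sensitivity, specificity, precision, recall) are preferred under skewed class distributions.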

**2.5.2.2. SENSITIVITY AND SPECIFICITY**

Sensitivity and specificity are statistical measures of the performance of a binary classification test, and are usually expressed as percentages. In clinical testing, the sensitivity of a medical test is the probability that it produces a positive result when the case is in fact positive, and its specificity is the probability that it produces a negative result when the case is in fact negative. An ideal predictor would achieve 100% sensitivity (i.e., predict every member of the diseased population as sick) and 100% specificity (i.e., predict no member of the healthy population as sick).

Consider a scenario in which cases are tested for an illness. The test result may be positive (sick) or negative (healthy), while the true health status of a case may differ from it. Four situations can occur:

• A sick case diagnosed as sick – a "true positive"

• A healthy case classified as sick – a "false positive"

• A healthy case recognized as healthy – a "true negative"

• A sick case classified as healthy – a "false negative"

In two of the above situations an error has occurred: when a healthy case is recognized as sick, and when a sick case is classified as healthy.

Statistical tests draw conclusions about a distribution on the basis of sample data; this framework is known as statistical significance testing. In such testing there is a "null hypothesis", which corresponds to a presumed default state of reality (e.g., that a person is free of infection). Opposed to the null hypothesis is an "alternative hypothesis", which corresponds to a different state. The purpose of the test is to determine whether the null hypothesis can be rejected in favor of the alternative. The outcome of the test may be positive (it may indicate infection) or negative (it appears to show no infection). If the outcome of the test does not match the actual state of reality, an error has occurred. Two kinds of error, referred to as "Type I and Type II errors", are distinguished according to which hypothesis corresponds to reality. A Type I error, also called a "false positive" or "α" error, is the error of rejecting the null hypothesis when it is true. A false positive means that a test claims something to be positive when it is not: for instance, a test indicating that a woman is pregnant when she is not. A Type II error, also called an "error of the second kind", a "false negative", or "β" error, is the error of accepting the null hypothesis when the alternative hypothesis is true. Table 2.3 summarizes these conditions:

Sensitivity is defined as:

$$\text{Sensitivity} = \frac{\text{Number of true positives}}{\text{Number of true positives} + \text{Number of false negatives}} \qquad \text{(Equation 2-46)}$$

A sensitivity value on its own does not indicate how well the test recognizes the other class (i.e., the negative cases). In binary classification,

**Table 2.3:** Test Result in the Confusion Matrix

| **Test Result** | **Condition Present** | **Condition Absent** |
| --- | --- | --- |
| Positive | True Positive | False Positive (Type I error) |
| Negative | False Negative (Type II error) | True Negative |

this corresponds to measuring the specificity, or equivalently the sensitivity for the other class.

Specificity is the proportion of true negatives to the sum of true negatives and false positives:

$$\text{Specificity} = \frac{\text{Number of true negatives}}{\text{Number of true negatives} + \text{Number of false positives}} \qquad \text{(Equation 2-47)}$$

Sensitivity and specificity are helpful in the medical domain for comparing new treatments and diagnostic tests against conventional therapy and against old, well-established and applied standards [117].

**2.5.2.3. PRECISION, RECALL AND F-SCORE**

In this section, we concentrate on three conventional performance metrics: precision, recall, and F-score. Suppose an experiment produces the counts of a confusion matrix. From these counts, precision (p) and recall (r) are calculated as follows:

$$p = \frac{TP}{TP + FP} \qquad \text{(Equation 2-48)}$$

$$r = \frac{TP}{TP + FN} \qquad \text{(Equation 2-49)}$$

The (weighted) harmonic mean of precision and recall produces the F-score [118]:

$$F_\beta = (1+\beta^2)\,\frac{pr}{r+\beta^2 p} = \frac{(1+\beta^2)\,TP}{(1+\beta^2)\,TP + \beta^2 FN + FP} \qquad \text{(Equation 2-50)}$$
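A short sketch of Equations 2-48 to 2-50, with β = 1 giving the usual F1 score; the counts are illustrative. The cross-check at the end confirms that the two forms of Equation 2-50 agree.

```python
# Precision, recall, and F_beta from confusion-matrix counts
# (Equations 2-48, 2-49, 2-50).

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_beta(tp, fp, fn, beta=1.0):
    # Count form of Equation 2-50.
    b2 = beta ** 2
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)

tp, fp, fn = 8, 2, 4
p = precision(tp, fp)    # 0.8
r = recall(tp, fn)       # 2/3
f1 = f_beta(tp, fp, fn)

# Cross-check against the precision/recall form of Equation 2-50:
assert abs(f1 - 2 * p * r / (r + p)) < 1e-12
print(p, r, f1)
```

Values of β above 1 weight recall more heavily, and values below 1 weight precision more heavily.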
**Table 2.4:** Sensitivity and Specificity in the Confusion Matrix

| **Test Result** | **Condition Positive** (Gold Standard) | **Condition Negative** (Gold Standard) | |
| --- | --- | --- | --- |
| Positive | True Positive | False Positive (Type I error) | → Positive Predictive Value |
| Negative | False Negative (Type II error) | True Negative | → Negative Predictive Value |
| | **Sensitivity** | **Specificity** | |

Both recall and precision have natural interpretations in terms of probability: precision is the probability that an item returned by the system is relevant, while recall is the probability that a relevant item is returned by the system.