ID3 Algorithm Step by Step Explanation

ID3 is a decision-tree-based classifier. As a decision tree learner, it copes reasonably well with imperfect training data, such as missing or erroneous attribute values.

In this tutorial, we will walk through an implementation of the ID3 algorithm step by step.

How to Select the Best Attribute?

Each attribute in the training data is evaluated when selecting the best attribute for the next tree node, i.e. the attribute that best classifies the examples. For this purpose, a statistical property called information gain is used. This property specifies how well an attribute separates the training examples.

Information gain is based on entropy, a measure from information theory:

$$ Entropy(S) = -p \log_2 p - q \log_2 q $$

Here, p is the proportion of positive examples in S and q = 1 - p is the proportion of negative examples.
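For instance, a minimal helper (a sketch, not part of the tutorial's code) makes the 0 * log2(0) = 0 convention explicit:

import math

def entropy(p):
    """Entropy of a binary split with positive-class proportion p."""
    q = 1 - p
    # By convention, 0 * log2(0) = 0, so zero terms are skipped
    return -sum(x * math.log2(x) for x in (p, q) if x > 0)

print(entropy(9 / 14))   # ~0.9403 bits for 9 Yes / 5 No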

Information gain is defined as the decrease in entropy when the data is split on an attribute:

$$ Gain(S, A) = Entropy(S) - Entropy(S, A) $$

where Entropy(S, A) is the weighted average entropy of the subsets produced by splitting S on attribute A.

We will use the classic play-tennis dataset of 14 examples (9 Yes, 5 No):

| Day | Outlook  | Temp | Humidity | Wind   | Decision |
|-----|----------|------|----------|--------|----------|
| 1   | Sunny    | Hot  | High     | Weak   | No       |
| 2   | Sunny    | Hot  | High     | Strong | No       |
| 3   | Overcast | Hot  | High     | Weak   | Yes      |
| 4   | Rain     | Mild | High     | Weak   | Yes      |
| 5   | Rain     | Cool | Normal   | Weak   | Yes      |
| 6   | Rain     | Cool | Normal   | Strong | No       |
| 7   | Overcast | Cool | Normal   | Strong | Yes      |
| 8   | Sunny    | Mild | High     | Weak   | No       |
| 9   | Sunny    | Cool | Normal   | Weak   | Yes      |
| 10  | Rain     | Mild | Normal   | Weak   | Yes      |
| 11  | Sunny    | Mild | Normal   | Strong | Yes      |
| 12  | Overcast | Mild | High     | Strong | Yes      |
| 13  | Overcast | Hot  | Normal   | Weak   | Yes      |
| 14  | Rain     | Mild | High     | Strong | No       |

# Importing libraries
import pandas as pd
import math

# Read the data; the Day column becomes the index
df = pd.read_csv('data.csv', index_col=0)
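If data.csv is not at hand, the same DataFrame can be built inline from the table above (a convenience sketch; the CSV layout is assumed to match):

# Alternative: build the dataset inline instead of reading data.csv
records = [
    (1,'Sunny','Hot','High','Weak','No'), (2,'Sunny','Hot','High','Strong','No'),
    (3,'Overcast','Hot','High','Weak','Yes'), (4,'Rain','Mild','High','Weak','Yes'),
    (5,'Rain','Cool','Normal','Weak','Yes'), (6,'Rain','Cool','Normal','Strong','No'),
    (7,'Overcast','Cool','Normal','Strong','Yes'), (8,'Sunny','Mild','High','Weak','No'),
    (9,'Sunny','Cool','Normal','Weak','Yes'), (10,'Rain','Mild','Normal','Weak','Yes'),
    (11,'Sunny','Mild','Normal','Strong','Yes'), (12,'Overcast','Mild','High','Strong','Yes'),
    (13,'Overcast','Hot','Normal','Weak','Yes'), (14,'Rain','Mild','High','Strong','No'),
]
df = pd.DataFrame(records,
                  columns=['Day','Outlook','Temp','Humidity','Wind','Decision']).set_index('Day')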

Entropy Computation

We need to compute the probabilities of Yes and No decisions.

First, we count the positive examples. In our data, there are nine instances where the target value is Yes.

# Positive examples
positive=df[df['Decision']=='Yes']
positive.reset_index(inplace=True)
print(positive)
print('\n Positive Examples :',positive.shape[0])
   Day   Outlook  Temp Humidity    Wind Decision
0    3  Overcast   Hot     High    Weak      Yes
1    4      Rain  Mild     High    Weak      Yes
2    5      Rain  Cool   Normal    Weak      Yes
3    7  Overcast  Cool   Normal  Strong      Yes
4    9     Sunny  Cool   Normal    Weak      Yes
5   10      Rain  Mild   Normal    Weak      Yes
6   11     Sunny  Mild   Normal  Strong      Yes
7   12  Overcast  Mild     High  Strong      Yes
8   13  Overcast   Hot   Normal    Weak      Yes

 Positive Examples : 9
# Negative examples
negative=df[df['Decision']=='No']
negative.reset_index(inplace=True)
print(negative)
print('\n Negative Examples :',negative.shape[0])
   Day  Outlook   Temp  Humidity     Wind  Decision
0     1   Sunny    Hot      High     Weak        No
1     2   Sunny    Hot      High   Strong        No
2     6    Rain   Cool    Normal   Strong        No
3     8   Sunny   Mild      High     Weak        No
4    14    Rain   Mild      High   Strong        No

 Negative Examples : 5
total=df.shape[0]                           # Total Number of records
ycount=df[df['Decision']=='Yes'].shape[0]   # Count of positive examples
ncount=df[df['Decision']=='No'].shape[0]    # Count of negative examples
p=ycount/total
q=1-p
entropy_s=-p* math.log(p,2)-q*math.log(q,2)

print('Total records     : ',total)
print('Positive examples :',ycount)
print('Negative examples :',ncount)
print('---------------------------------')
print(' Entropy computation \n')
print('p     = ',p)
print('q     =',q)
print('Entropy(S) = -p*log(p)-q*log(q)')
print('Entropy(S) = ',entropy_s)
Total records     :  14
Positive examples : 9
Negative examples : 5
---------------------------------
 Entropy computation 

p     =  0.6428571428571429
q     = 0.3571428571428571
Entropy(S) = -p*log(p)-q*log(q)
Entropy(S) =  0.9402859586706311
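As a quick sanity check (assuming SciPy is available; it is not used elsewhere in this tutorial), scipy.stats.entropy reproduces the same value directly from the class counts:

from scipy.stats import entropy as scipy_entropy

# Counts are normalized to probabilities; base=2 gives the result in bits
print(scipy_entropy([9, 5], base=2))   # ~0.9402859586706311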

First Node Selection

Now we need to select the root node: the attribute that gives the highest information gain. Let's consider the first attribute, Outlook. This attribute has three values: ['Sunny', 'Overcast', 'Rain'].

We compute the information gain using the following formula: $$ Gain(S, Outlook) = Entropy(S) - \sum_{v \in Outlook} p(v) \, Entropy(S \mid v) $$

# Values of the Outlook attribute
outlook_values=pd.unique(df['Outlook'])
print('Outlook Attribute Values' )
print('------------------------')
print(outlook_values)
Outlook Attribute Values
------------------------
['Sunny' 'Overcast' 'Rain']
# For Outlook==Sunny Value

sunny=df[df['Outlook']=='Sunny']
print('-'*50)
print('        Outlook=Sunny Data')
print('-'*50)
print(sunny)
print('-'*50)
ycount_sunny=sunny[sunny['Decision']=='Yes'].shape[0]
ncount_sunny=sunny[sunny['Decision']=='No'].shape[0]
p_sunny=ycount_sunny/(sunny.shape[0])
q_sunny=1-p_sunny
print('Summary')
print('-'*50)
print('Positive counts for Outlook=Sunny :',ycount_sunny)
print('Negative counts ----------------- :',ncount_sunny)
print('P(Decision=Yes | Outlook=Sunny)   :',p_sunny)
print('P(Decision=No  | Outlook=Sunny)   :',q_sunny)
print('Entropy (S|Outlook=Sunny) = -p*log(p)-q*log(q)')
print('-'*50)
print('-'*50)
print('Entropy (S | Outlook=Sunny) = - %3.1f * log(%3.1f) - %3.1f * log (%3.1f)'%(p_sunny,p_sunny,q_sunny,q_sunny))
entropy_sunny=-p_sunny*math.log(p_sunny,2)-q_sunny*math.log(q_sunny,2)
print('                            = ',entropy_sunny)
--------------------------------------------------
        Outlook=Sunny Data
--------------------------------------------------
    Outlook  Temp Humidity    Wind Decision
Day                                        
1     Sunny   Hot     High    Weak       No
2     Sunny   Hot     High  Strong       No
8     Sunny  Mild     High    Weak       No
9     Sunny  Cool   Normal    Weak      Yes
11    Sunny  Mild   Normal  Strong      Yes
--------------------------------------------------
Summary
--------------------------------------------------
Positive counts for Outlook=Sunny : 2
Negative counts ----------------- : 3
P(Decision=Yes | Outlook=Sunny)   : 0.4
P(Decision=No  | Outlook=Sunny)   : 0.6
Entropy (S|Outlook=Sunny) = -p*log(p)-q*log(q)
--------------------------------------------------
--------------------------------------------------
Entropy (S | Outlook=Sunny) = - 0.4 * log(0.4) - 0.6 * log (0.6)
                            =  0.9709505944546686

Similarly, we compute Entropy(S | Outlook=Rain) and Entropy(S | Outlook=Overcast).

# For Outlook==Rain Value

rain=df[df['Outlook']=='Rain']
print('-'*50)
print('        Outlook=Rain Data')
print('-'*50)
print(rain)
print('-'*50)
ycount_rain=rain[rain['Decision']=='Yes'].shape[0]
ncount_rain=rain[rain['Decision']=='No'].shape[0]
p_rain=ycount_rain/(rain.shape[0])
q_rain=1-p_rain
print('Summary')
print('-'*50)
print('Positive counts for Outlook=Rain :',ycount_rain)
print('Negative counts ----------------- :',ncount_rain)
print('P(Decision=Yes | Outlook=Rain)   :',p_rain)
print('P(Decision=No  | Outlook=Rain)   :',q_rain)
print('Entropy (S|Outlook=Rain) = -p*log(p)-q*log(q)')
print('-'*50)
print('-'*50)
print('Entropy (S | Outlook=Rain) = - %3.1f * log(%3.1f) - %3.1f * log (%3.1f)'%(p_rain,p_rain,q_rain,q_rain))
entropy_rain=-p_rain*math.log(p_rain,2)-q_rain*math.log(q_rain,2)
print('                            = ',entropy_rain)
--------------------------------------------------
        Outlook=Rain Data
--------------------------------------------------
    Outlook  Temp Humidity    Wind Decision
Day                                        
4      Rain  Mild     High    Weak      Yes
5      Rain  Cool   Normal    Weak      Yes
6      Rain  Cool   Normal  Strong       No
10     Rain  Mild   Normal    Weak      Yes
14     Rain  Mild     High  Strong       No
--------------------------------------------------
Summary
--------------------------------------------------
Positive counts for Outlook=Rain : 3
Negative counts ----------------- : 2
P(Decision=Yes | Outlook=Rain)   : 0.6
P(Decision=No  | Outlook=Rain)   : 0.4
Entropy (S|Outlook=Rain) = -p*log(p)-q*log(q)
--------------------------------------------------
--------------------------------------------------
Entropy (S | Outlook=Rain) = - 0.6 * log(0.6) - 0.4 * log (0.4)
                            =  0.9709505944546686
# For Outlook==Overcast Value

overcast=df[df['Outlook']=='Overcast']
print('-'*50)
print('        Outlook=Overcast Data')
print('-'*50)
print(overcast)
print('-'*50)
ycount_overcast=overcast[overcast['Decision']=='Yes'].shape[0]
ncount_overcast=overcast[overcast['Decision']=='No'].shape[0]
p_overcast=ycount_overcast/(overcast.shape[0])
q_overcast=1-p_overcast
print('Summary')
print('-'*50)
print('Positive counts for Outlook=Overcast :',ycount_overcast)
print('Negative counts ----------------- :',ncount_overcast)
print('P(Decision=Yes | Outlook=Overcast)   :',p_overcast)
print('P(Decision=No  | Outlook=Overcast)   :',q_overcast)
print('Entropy (S|Outlook=Overcast) = -p*log(p)-q*log(q)')
print('-'*50)
print('-'*50)
print('Entropy (S | Outlook=Overcast) = - %3.1f * log(%3.1f) - %3.1f * log (%3.1f)'%(p_overcast,p_overcast,q_overcast,q_overcast))
# q_overcast is 0; by the 0*log(0)=0 convention its term contributes nothing
entropy_overcast=-p_overcast*math.log(p_overcast,2)-(q_overcast*math.log(q_overcast,2) if q_overcast!=0 else 0)
print('                            = ',entropy_overcast)
--------------------------------------------------
        Outlook=Overcast Data
--------------------------------------------------
      Outlook  Temp Humidity    Wind Decision
Day                                          
3    Overcast   Hot     High    Weak      Yes
7    Overcast  Cool   Normal  Strong      Yes
12   Overcast  Mild     High  Strong      Yes
13   Overcast   Hot   Normal    Weak      Yes
--------------------------------------------------
Summary
--------------------------------------------------
Positive counts for Outlook=Overcast : 4
Negative counts ----------------- : 0
P(Decision=Yes | Outlook=Overcast)   : 1.0
P(Decision=No  | Outlook=Overcast)   : 0.0
Entropy (S|Outlook=Overcast) = -p*log(p)-q*log(q)
--------------------------------------------------
--------------------------------------------------
Entropy (S | Outlook=Overcast) = - 1.0 * log(1.0) - 0.0 * log (0.0)
                            =  -0.0
# Information gain of the Outlook attribute
gain=entropy_s-(sunny.shape[0]*entropy_sunny+rain.shape[0]*entropy_rain+overcast.shape[0]*entropy_overcast)/total
print('Information gain on Outlook attribute')
print('-'*50)
print('gain(outlook)=',gain)
print('-'*50)
Information gain on Outlook attribute
--------------------------------------------------
gain(outlook)= 0.24674981977443922
--------------------------------------------------
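The same weighted computation can be written more compactly with a pandas groupby (an equivalent sketch of the manual steps above, not the tutorial's original code):

def subset_entropy(sub):
    # Entropy of one subset of the data, with the 0*log2(0)=0 convention
    p = (sub['Decision'] == 'Yes').mean()
    return -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)

# Weight each Outlook value by its share of the records
weights = df.groupby('Outlook').size() / len(df)
cond_entropy = sum(weights[v] * subset_entropy(g) for v, g in df.groupby('Outlook'))
print('gain(outlook) =', entropy_s - cond_entropy)   # ~0.2467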
# Computing Information gain for each attribute


def information_gain(df,attribute):

    total=df.shape[0]                           # Total number of records

    # Computing Entropy(S)
    ycount=df[df['Decision']=='Yes'].shape[0]   # Count of positive examples
    ncount=df[df['Decision']=='No'].shape[0]    # Count of negative examples
    ps=ycount/total                             # Probability of positive examples
    qs=1-ps                                     # Probability of negative examples
    entropy_s=(-ps * ( math.log(ps,2) if ps!=0 else 0 ) -qs * ( math.log(qs,2) if qs!=0 else 0) )

    # Check whether the specified attribute is in the dataframe
    if attribute in df.columns:

        # All possible values of the current attribute
        attr_values=pd.unique(df[attribute])

        # Expected entropy after splitting on the current attribute
        entropy_attr=0.0

        # Iterating over each possible value of the current attribute
        for value in attr_values:

            # Subset where the current attribute equals the current value
            temp_df=df[df[attribute]==value]

            # Number of records for the current value
            attr_count=temp_df.shape[0]

            # Probability of positive examples in the subset
            p=temp_df[temp_df['Decision']=='Yes'].shape[0]/attr_count

            # Probability of negative examples in the subset
            q=1-p

            # Entropy of the subset: -p*log2(p) - q*log2(q), with 0*log(0)=0
            entropy_temp=(-p * ( math.log(p,2) if p!=0 else 0 ) -q * ( math.log(q,2) if q!=0 else 0) )

            # Weighted contribution of this subset
            entropy_attr += attr_count/total * entropy_temp

        # Information gain for the current attribute
        gain_attr=entropy_s-entropy_attr
        return gain_attr
    else:
        print('Not a valid attribute')
        return None


#########################################################
#   First Node Selection

# Empty dictionary to store information gain
gain={}

# Iterating over each attribute
for attr in df.columns:

    # Information gain for current attribute
    gain_attr=information_gain(df,attr)
    gain[attr]=gain_attr
print(gain)
{'Outlook': 0.24674981977443933, 'Temp': 0.02922256565895487, 'Humidity': 0.15183550136234159, 'Wind': 0.04812703040826949, 'Decision': 0.9402859586706311}
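The target column Decision trivially "predicts" itself, so it is excluded when picking the winner; a one-liner sketch:

# Highest-gain attribute, excluding the target column itself
best = max((a for a in gain if a != 'Decision'), key=gain.get)
print('Best attribute:', best)   # Outlook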

First Node: Outlook

[Figure: Outlook as the root node, with branches Sunny, Overcast, and Rain]

Based on the information gain computed in the previous step, Outlook, the attribute with the highest information gain, is selected as the root node of the decision tree. The root node has three branches: Sunny, Rain, and Overcast.

# dataset for Sunny node
print('Dataset for Sunny')
print('--------------------------------')
print(df[df['Outlook']=='Sunny'])
df1=df[df['Outlook']=='Sunny']
gain={}                               # Empty dictionary to store information gain
for attr in df.columns:               # Iterating over each attribute
    gain_attr=information_gain(df1,attr) # Information gain for current attribute
    gain[attr]=gain_attr
print (gain)
print('--------------------------------')


print('\n\nDataset for Rain')
print('--------------------------------')
print(df[df['Outlook']=='Rain'])
df1=df[df['Outlook']=='Rain']
gain={}                               # Empty dictionary to store information gain
for attr in df.columns:               # Iterating over each attribute
    gain_attr=information_gain(df1,attr) # Information gain for current attribute
    gain[attr]=gain_attr
print (gain)
print('--------------------------------')

print('\n\nDataset for Overcast')
print('--------------------------------')
print(df[df['Outlook']=='Overcast'])
df1=df[df['Outlook']=='Overcast']
gain={}                               # Empty dictionary to store information gain
for attr in df.columns:               # Iterating over each attribute
    gain_attr=information_gain(df1,attr) # Information gain for current attribute
    gain[attr]=gain_attr
print (gain)
Dataset for Sunny
--------------------------------
    Outlook  Temp Humidity    Wind Decision
Day                                        
1     Sunny   Hot     High    Weak       No
2     Sunny   Hot     High  Strong       No
8     Sunny  Mild     High    Weak       No
9     Sunny  Cool   Normal    Weak      Yes
11    Sunny  Mild   Normal  Strong      Yes
{'Outlook': 0.0, 'Temp': 0.5709505944546686, 'Humidity': 0.9709505944546686, 'Wind': 0.01997309402197489, 'Decision': 0.9709505944546686}
--------------------------------


Dataset for Rain
--------------------------------
    Outlook  Temp Humidity    Wind Decision
Day                                        
4      Rain  Mild     High    Weak      Yes
5      Rain  Cool   Normal    Weak      Yes
6      Rain  Cool   Normal  Strong       No
10     Rain  Mild   Normal    Weak      Yes
14     Rain  Mild     High  Strong       No
{'Outlook': 0.0, 'Temp': 0.01997309402197489, 'Humidity': 0.01997309402197489, 'Wind': 0.9709505944546686, 'Decision': 0.9709505944546686}
--------------------------------


Dataset for Overcast
--------------------------------
      Outlook  Temp Humidity    Wind Decision
Day                                          
3    Overcast   Hot     High    Weak      Yes
7    Overcast  Cool   Normal  Strong      Yes
12   Overcast  Mild     High  Strong      Yes
13   Overcast   Hot   Normal    Weak      Yes
{'Outlook': -0.0, 'Temp': -0.0, 'Humidity': -0.0, 'Wind': -0.0, 'Decision': -0.0}

We repeat the above process for each subset to find the next node with the highest information gain.

Second Node Selection

For the Outlook=Sunny subset, the highest information gain comes from the Humidity attribute. Therefore, a Humidity child node is created under the Sunny branch. The Humidity attribute has two values, High and Normal. Let's repeat the same process for these two values.

For Humidity=High, all data instances are negative examples, so this leaf is labeled with class No.

print('Dataset for Outlook=Sunny and Humidity = High')
print('--------------------------------')

df1=df[df['Outlook']=='Sunny']
df2=df1[df1['Humidity']=='High']
print(df2)


Dataset for Outlook=Sunny and Humidity = High
--------------------------------
    Outlook  Temp Humidity    Wind Decision
Day                                        
1     Sunny   Hot     High    Weak       No
2     Sunny   Hot     High  Strong       No
8     Sunny  Mild     High    Weak       No

For Humidity=Normal, all data instances are positive examples, so this leaf is labeled Yes.

print('Dataset for Outlook=Sunny and Humidity = Normal')
print('--------------------------------')

df1=df[df['Outlook']=='Sunny']
df2=df1[df1['Humidity']=='Normal']
print(df2)

Dataset for Outlook=Sunny and Humidity = Normal
--------------------------------
    Outlook  Temp Humidity    Wind Decision
Day                                        
9     Sunny  Cool   Normal    Weak      Yes
11    Sunny  Mild   Normal  Strong      Yes
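Both branches above end in pure subsets. A small helper (an illustrative addition, not in the original code) captures this stopping test:

def leaf_label(sub):
    """Return the single class label if the subset is pure, else None."""
    labels = sub['Decision'].unique()
    return labels[0] if len(labels) == 1 else None

print(leaf_label(df2))                          # 'Yes': Sunny & Normal is pure
print(leaf_label(df[df['Outlook']=='Sunny']))   # None: still mixed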

Third Node Selection

Now we select the third node, the child node under the Rain branch. We compute the information gain of each attribute on the dataset where Outlook=Rain.

df3=df[df['Outlook']=='Rain']
print(df3)
    Outlook  Temp Humidity    Wind Decision
Day                                        
4      Rain  Mild     High    Weak      Yes
5      Rain  Cool   Normal    Weak      Yes
6      Rain  Cool   Normal  Strong       No
10     Rain  Mild   Normal    Weak      Yes
14     Rain  Mild     High  Strong       No
gain={}                               # Empty dictionary to store information gain
for attr in df.columns:               # Iterating over each attribute
    gain_attr=information_gain(df3,attr) # Information gain for current attribute
    gain[attr]=gain_attr
print (gain)
print('--------------------------------')

{'Outlook': 0.0, 'Temp': 0.01997309402197489, 'Humidity': 0.01997309402197489, 'Wind': 0.9709505944546686, 'Decision': 0.9709505944546686}
--------------------------------

The Wind attribute has the highest information gain (we do not consider the target attribute Decision). Wind has two values, Weak and Strong.

Now we create two child nodes under the Wind node.

df3=df[df['Outlook']=='Rain']
df3=df3[df3['Wind']=='Weak']
print(df3)
    Outlook  Temp Humidity  Wind Decision
Day                                      
4      Rain  Mild     High  Weak      Yes
5      Rain  Cool   Normal  Weak      Yes
10     Rain  Mild   Normal  Weak      Yes

All instances are positive, therefore the leaf for Wind=Weak is labeled positive (Yes).

df3=df[df['Outlook']=='Rain']
df3=df3[df3['Wind']=='Strong']
print(df3)
    Outlook  Temp Humidity    Wind Decision
Day                                        
6      Rain  Cool   Normal  Strong       No
14     Rain  Mild     High  Strong       No

All instances are negative, therefore the leaf for Wind=Strong is labeled negative (No).

Final Decision Tree

[Figure: final decision tree. Outlook at the root; Sunny → Humidity (High: No, Normal: Yes); Overcast → Yes; Rain → Wind (Weak: Yes, Strong: No)]
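Putting the pieces together, here is a minimal recursive ID3 sketch (an illustration built on the information_gain function above, not the article's original code) that reproduces this tree as a nested dictionary:

def id3(data, target='Decision'):
    # Stop when the subset is pure: return its single class label
    labels = data[target].unique()
    if len(labels) == 1:
        return labels[0]
    # Candidate attributes: every column except the target
    attrs = [c for c in data.columns if c != target]
    # Fall back to the majority class if no attributes remain
    if not attrs:
        return data[target].mode()[0]
    # Split on the attribute with the highest information gain
    best = max(attrs, key=lambda a: information_gain(data, a))
    # Recurse into one branch per value of the chosen attribute
    return {best: {v: id3(g.drop(columns=[best]), target)
                   for v, g in data.groupby(best)}}

tree = id3(df)
print(tree)
# {'Outlook': {'Overcast': 'Yes',
#              'Rain': {'Wind': {'Strong': 'No', 'Weak': 'Yes'}},
#              'Sunny': {'Humidity': {'High': 'No', 'Normal': 'Yes'}}}}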