Ensemble Pruning through Simple Coalition Games


What Will I Learn?

  • You will learn what a Weighted Majority Game is
  • You will learn how the disagreement measure can be used as a measure of diversity within an ensemble
  • You will learn about the Banzhaf Power Index

Requirements

  • Machine Learning
  • Game Theory (Familiarity with Power Indices)

Difficulty

Advanced

Tutorial Contents

Ensemble learning is a machine learning paradigm in which multiple learners are trained to solve the same problem. It draws its inference from a set of hypotheses instead of a single hypothesis learned from the training data. Ensemble learning is used very widely because it can boost weak learners, which predict only slightly better than random guessing, into strong learners that make accurate predictions.

Base learners solve the same problem, but they can differ from each other in modelling technique, the population being studied, and various other factors.

In practice, constructing very good ensembles is possible for statistical reasons: when the available training data is too small, a learning algorithm can easily find several different hypotheses in the hypothesis space that achieve the same accuracy on that data, so the risk of choosing the wrong one is high. An ensemble takes a cumulative decision and thereby reduces the risk of betting on a single wrong classifier.

Another reason ensembles work is representational: by forming weighted sums of hypotheses from the considered finite hypothesis space, we expand that space and can therefore obtain a better approximation of the true hypothesis.

So, does including more base learners in the ensemble always lead to better performance? The answer is no: experiments show that selecting a subset of the available base learners can outperform using all of them. The process of selecting an optimal subset of the available base learners is called ensemble pruning.
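To make the definition concrete, the naive baseline is exhaustive search: score every non-empty subset of the ensemble on held-out data and keep the best one. A minimal sketch of my own (not from the original post; `models` is assumed to be a list of fitted classifiers, `X_val`/`y_val` a validation set) illustrates why this does not scale:

from itertools import combinations
import numpy as np

def exhaustive_prune(models, X_val, y_val):
    # Try every non-empty subset and keep the one whose majority vote
    # scores highest on the validation set -- O(2^n) in the ensemble size.
    best_subset, best_acc = None, -1.0
    for r in range(1, len(models) + 1):
        for subset in combinations(range(len(models)), r):
            preds = np.array([models[i].predict(X_val) for i in subset])
            # simple (unweighted) majority vote over the subset's predictions
            vote = np.apply_along_axis(
                lambda col: np.bincount(col).argmax(), 0, preds)
            acc = np.mean(vote == y_val)
            if acc > best_acc:
                best_subset, best_acc = subset, acc
    return best_subset, best_acc

The game-theoretic approach developed in this tutorial avoids this exponential search.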

We shall look into the problem of ensemble pruning, which aims to extract a sub-ensemble that offers high performance. We use a classification dataset to work on: the Breast Cancer Wisconsin (Original) dataset from the UCI repository, in which class 2 is benign and class 4 is malignant.

import numpy as np
import pandas as pd
from sklearn import neighbors, svm, tree
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Breast Cancer Wisconsin dataset: classes are 2 (benign) and 4 (malignant)
columns = ['code_num','thickness','uofcsize','uofcshape','adhesion','secsize','bnuclei','chromatinb','nnucleoi','mitoses','output']
data = pd.read_csv('breast-cancer-wisconsin.data', names=columns)
data.drop(['code_num'], axis=1, inplace=True)   # drop the id column
data.replace('?', -99999, inplace=True)         # mark missing values as extreme outliers
data = data.astype(int)
X = np.array(data.drop(['output'], axis=1))
y = np.array(data['output'])

# Deliberately tiny training set: 95% of the data is held out for testing,
# then 80% of the remainder is reserved for validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.95, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.80)

clf1 = neighbors.KNeighborsClassifier()
clf2 = svm.SVC()
clf3 = LogisticRegression()
clf4 = tree.DecisionTreeClassifier()

clf1.fit(X_train, y_train)
clf2.fit(X_train, y_train)
clf3.fit(X_train, y_train)
clf4.fit(X_train, y_train)

accuracy1 = clf1.score(X_test, y_test)
accuracy2 = clf2.score(X_test, y_test)
accuracy3 = clf3.score(X_test, y_test)
accuracy4 = clf4.score(X_test, y_test)

print(accuracy1,accuracy2,accuracy3,accuracy4)
#0.655639097744 0.655639097744 0.681203007519 0.87368421052
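These individual accuracies are weak by design: the splits above leave only a handful of training samples, which is exactly the regime in which ensembling, and pruning, pays off.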

To find a solution to this problem, we take a game-theoretic approach and formulate ensemble pruning as a non-monotone simple coalition game played among the ensemble members.

This game has two thresholds, and the sum of the weights of the coalition members should lie between them. This is enforced because ensembles perform better when diversity is neither too high nor too low.
To obtain each member's weight, we measure its diversity contribution: the disagreement measure between two classifiers is the fraction of instances on which their predictions differ, and a member's diversity is its average disagreement with the other members.

from sklearn.metrics import accuracy_score

def disagreement_measure(clf1, clf2, data):
    # fraction of instances on which the two classifiers disagree
    output_clf1 = clf1.predict(data)
    output_clf2 = clf2.predict(data)
    return 1 - accuracy_score(output_clf1, output_clf2)

def diversity_measure(ensemble_list, data, i):
    # average disagreement of member i with every other ensemble member
    ensemble_len = len(ensemble_list)
    diversity = 0
    for j in range(ensemble_len):
        if j == i:
            continue
        diversity = diversity + disagreement_measure(ensemble_list[i], ensemble_list[j], data)
    return float(diversity) / float(ensemble_len - 1)

Now that we have a diversity measure, we can assign weights to the classifiers in the ensemble. Each classifier's weight is the rank of its diversity: the number of other members whose diversity it matches or exceeds.

diversity_values = []
for i in range(4):
    diversity_values.append(diversity_measure([clf1, clf2, clf3, clf4], X_val, i))

# rank-based integer weights: weights[i] counts the other members whose
# diversity member i matches or exceeds (0 to 3)
weights = [0, 0, 0, 0]
for i in range(4):
    for j in range(4):
        if j == i:
            continue
        if diversity_values[i] >= diversity_values[j]:
            weights[i] = weights[i] + 1

Using these weights, we can compute the Banzhaf Power Index, which measures each player's probability of changing the outcome of a vote. It does so by enumerating all winning coalitions and counting the number of times a particular player is critical, i.e., the coalition wins with the player and loses without them.
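As a quick worked example (my own, not from the original post): take three voters with weights (2, 1, 1) and quota 3. The winning coalitions are {A,B}, {A,C} and {A,B,C}; A is critical in all three, while B and C are each critical once, so the normalized Banzhaf index is (0.6, 0.2, 0.2). A brute-force enumeration makes this concrete:

from itertools import combinations

def banzhaf_bruteforce(weights, quota):
    # For each player, count the coalitions in which removing that player
    # turns a winning coalition (weight >= quota) into a losing one.
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total < quota:
                continue
            for i in coalition:
                if total - weights[i] < quota:   # player i is critical here
                    swings[i] += 1
    total_swings = sum(swings)
    return [s / total_swings for s in swings]

print(banzhaf_bruteforce([2, 1, 1], 3))   # [0.6, 0.2, 0.2]

The implementation below counts the same swings with a generating-function (polynomial) trick, avoiding the exponential enumeration.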

def banzhaf(weight, quota):

    max_order = sum(weight)

    polynomial = [1] + max_order*[0]               # create a list to hold the polynomial coefficients

    current_order = 0                              # compute the polynomial coefficients
    aux_polynomial = polynomial[:]
    for i in range(len(weight)):
        current_order = current_order + weight[i]
        offset_polynomial = weight[i]*[0]+polynomial
        for j in range(current_order+1):
            aux_polynomial[j] = polynomial[j] + offset_polynomial[j]
        polynomial = aux_polynomial[:]

    banzhaf_power = len(weight)*[0]                                 # create a list to hold the Banzhaf Power for each voter
    swings = quota*[0]                                              # create a list to compute the swings for each voter

    for i in range(len(weight)):                                    # compute the Banzhaf Power
        for j in range(quota):                                      # fill the swings list
            if (j<weight[i]):
                swings[j] = polynomial[j]
            else:
                swings[j] = polynomial[j] - swings[j-weight[i]]
        for k in range(weight[i]):                                  # fill the Banzhaf Power vector
            banzhaf_power[i] = banzhaf_power[i] + swings[quota-1-k]

    # Normalize Index
    total_power = float(sum(banzhaf_power))
    banzhaf_index = map(lambda x: x / total_power, banzhaf_power)

    return np.array(list(banzhaf_index))
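Called on the same toy game as above, it agrees with the brute-force enumeration:

print(banzhaf([2, 1, 1], 3))   # [0.6 0.2 0.2]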

Once we have the power index, we need to set two thresholds to define what we mean by average diversity: above the upper threshold there is too much diversity, and below the lower one too little. A member's power in this interval game can be approximated by the difference of two Banzhaf indices, one at each quota. For example, to keep the total weight roughly between 3 and 5, we take the difference of the indices at quotas 4 and 6:

# weight threshold is [3,5]
double_banzhaf = banzhaf(weights, 4) - banzhaf(weights, 6)
print(double_banzhaf)
pruned_ensemble = []
pruned_weights = []

# greedily add the member with the highest double-Banzhaf power until the
# total weight of the selected coalition enters the allowed band
while sum(pruned_weights) <= 3:
    h = np.argmax(double_banzhaf)
    if sum(pruned_weights) + weights[h] > 6:
        break
    pruned_ensemble.append(h)
    pruned_weights.append(weights[h])
    double_banzhaf[h] = -144           # sentinel so member h is never picked again
print(pruned_ensemble)                 # members 1, 2 and 3 form the pruned ensemble

From this pruned ensemble, we can now perform weighted voting. Each member's vote is weighted by the log-odds of its local accuracy, estimated on the validation points nearest to the query:

def get_weights(p):
    p[p == 1.0] = 0.99        # clamp perfect scores to avoid infinite odds
    odds = p / (1 - p)
    return np.log(odds)
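As a sanity check (my example, not from the post): a coin-flip classifier gets weight 0, while a near-perfect one dominates the vote.

print(get_weights(np.array([0.5, 0.75, 0.99])))
# [0.         1.09861229 4.59511985]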

from sklearn.neighbors import NearestNeighbors
neigh = NearestNeighbors(n_neighbors=3)
neigh.fit(X_val)

def get_local_weights(test_point, n_neigh):
    # find the validation points nearest to the query and score every
    # classifier on that local neighbourhood (kneighbors expects a 2D array)
    nearest_indices = neigh.kneighbors([test_point], n_neighbors=n_neigh,
                                       return_distance=False)[0]
    X_verify = X_val[nearest_indices]
    y_verify = y_val[nearest_indices]
    score_pred1 = clf1.score(X_verify, y_verify)
    score_pred2 = clf2.score(X_verify, y_verify)
    score_pred3 = clf3.score(X_verify, y_verify)
    score_pred4 = clf4.score(X_verify, y_verify)
    acc_vector = np.array([score_pred1, score_pred2, score_pred3, score_pred4])
    return get_weights(acc_vector)

def get_weighted_prediction_ensemble(pruned_ensemble, sample_point):
    clf = {0: clf1, 1: clf2, 2: clf3, 3: clf4}
    weights = get_local_weights(sample_point, 3)
    predictions = [clf[i].predict([sample_point])[0] for i in pruned_ensemble]
    quota_weight = 0.0
    for idx, member in enumerate(pruned_ensemble):
        if predictions[idx] == 4:                 # class 4 = malignant
            # use the weight of the ensemble member itself, not of the loop index
            quota_weight = quota_weight + weights[member]
    if quota_weight >= np.average(weights):       # malignant votes reach the quota
        return 4
    else:
        return 2
#0.95037593985
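The figure above is the accuracy obtained when the weighted vote is applied to every test point. The original post does not show the evaluation loop, so here is a minimal sketch that reproduces it (assuming the variables defined earlier):

from sklearn.metrics import accuracy_score

y_pred = [get_weighted_prediction_ensemble(pruned_ensemble, x) for x in X_test]
print(accuracy_score(y_test, y_pred))   # ~0.95 on this split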

That's a serious performance gain. Keep in mind that we trained these algorithms on very few training samples, so the gain from pruning and local accuracy estimates is substantial.


