09. Binary Classification

Neural Network

  • Computing systems inspired by the biological neural networks that constitute animal brains. - Wiki
Image 1. Neural network
  • The input layer is a set of neurons that receives inputs and passes them to the hidden layer.

  • The hidden layer is a set of neurons that computes the hypothesis.

  • The output layer is a set of neurons whose output function transforms the outputs of the hidden layer into a format suitable for the problem.

  • Below is a simplified single neural network model used to explain our example.

Image 2. Single neural network
  • In this model, there are 3 layers and 4 nodes, and each node represents a neuron.
  • The bias neuron is drawn explicitly for clarity.
  • This model can describe any single neural network, such as linear regression and binary classification (see the sketch after this list).
  • Here, single means there is only one hidden layer, and that layer includes only one hypothesis neuron.
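
As a minimal sketch of the model above (in plain Python; the weight and bias values are illustrative, not from the post), the hidden neuron computes a weighted sum and the output neuron formats the result for the task:

w = 0.1   # illustrative weight
b = -0.1  # illustrative bias

# Hidden layer: one hypothesis neuron computing the weighted sum
hypothesis = lambda _x : w * _x + b

# Output layer: identity for regression, a threshold for binary classification
regression_output = lambda _h : _h
classification_output = lambda _h : 1 if _h >= 0.5 else 0

print(regression_output(hypothesis(7)))      # ≈ 0.6
print(classification_output(hypothesis(7)))  # 1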

Logistic Regression Classification

  • Logistic Regression = Logistic Classification
  • A regression model where the dependent variable is categorical. - Wiki
  • Covers binary (or binomial) classification and multinomial classification.

Binary Classification

  • The task of classifying the elements of a given set into two groups on the basis of a classification rule. - Wiki
  • The result is True or False: if the hypothesis satisfies a condition, the value is 1; if not, 0.
  • The output function checks whether the result of the hypothesis satisfies the condition and returns the result.

Binary Classification with Linear Hypothesis

import matplotlib.pyplot as plt

# Linear hypothesis: weighted sum with w = 0.1, b = -0.1
hypothesis = lambda _x : 0.1 * _x - 0.1
# Output function: 1 if the hypothesis is at least the threshold, else 0
threshold = 0.5
output = lambda _x : _x >= threshold

x = list(range(10))
_hypo = list(map(hypothesis, x))
y = list(map(output, _hypo))

plt.plot(x, y, "o")
plt.plot(x, _hypo)
plt.ylim(-0.5, 1.5)
plt.grid()
plt.title("Binary classification")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Image 3. Binary classification
  • In the example, the hypothesis is \( 0.1x - 0.1 \), and the output layer returns True if the result of the hypothesis is at least \( 0.5 \).
  • For the hypothesis to reach 0.5, x must be at least 6 (worked out just below).
  • Therefore, inputs smaller than 6 give 0, and the others give 1.
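
Solving the threshold condition directly shows where the boundary lies:

$$ 0.1x - 0.1 \ge 0.5 \iff 0.1x \ge 0.6 \iff x \ge 6 $$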

Problem of Binary Classification with Linear Hypothesis

  • If the result of the hypothesis is much larger than 1 or much smaller than 0, the cost function returns a very large value. This drags the weight and bias toward fitting that one input, which makes the model worse for the others.
  • The representative example is an input that lies far away from the other inputs (an outlier).
import matplotlib.pyplot as plt

hypothesis = lambda _x : 0.1 * _x - 0.1
threshold = 0.5
output = lambda _x : _x >= threshold

# Correct hypothesis
x = list(range(10))
x.append(20)
ans_hypo = list(map(hypothesis, x))
ans_y = list(map(output, ans_hypo))

# The result of the hypothesis for input 20 is 1.9,
#  while the others stay below 1.
#  Under MSE, the squared error of this point, (1.9 - 1)^2 = 0.81,
#  dominates the errors of the other points (at most 0.25).
#  Fitting this outlier pulls the weight down.

# Trained hypothesis
hypothesis = lambda _x : 0.09 * _x - 0.1
trn_hypo = list(map(hypothesis, x))
trn_y = list(map(output, trn_hypo))

plt.plot(x, ans_y, "ro")
plt.plot(x, ans_hypo,"r")
plt.plot(x, trn_y, "b*")
plt.plot(x, trn_hypo, "b")
plt.ylim(-0.5, 1.5)
plt.grid()
plt.title("Binary classification")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Image 4. Problem of linear hypothesis
  • Because of the new input 20, the weight becomes smaller, and input 6 is now classified incorrectly by the trained hypothesis (the sketch below evaluates the gradient).
  • Therefore, a simple combination of a weighted sum and an output function is not enough for this problem.
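
To see why the weight shrinks, we can evaluate the MSE gradient with respect to the weight by hand. This is a minimal sketch assuming MSE cost and plain gradient descent; the labels come from the original (correct) hypothesis:

hypothesis = lambda _x : 0.1 * _x - 0.1
label = lambda _x : 1 if hypothesis(_x) >= 0.5 else 0

# dMSE/dw = (2/n) * sum((hypothesis(x) - label(x)) * x)
grad_w = lambda xs : 2 / len(xs) * sum((hypothesis(i) - label(i)) * i for i in xs)

print(grad_w(list(range(10))))         # about -1.2: without the outlier, w would grow
print(grad_w(list(range(10)) + [20]))  # about 2.2: the outlier flips the sign, so w shrinks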

Activation Function

  • Defines the output of that node given an input or set of inputs. - Wiki
  • The activation function transforms the result of the weighted sum into a format for the next neuron.
Image 5. Activation function

$$ H(X) = A(S(X)) $$

  • These equations are written for matrices, but the scalar versions are very similar.
  • S() is the weighted sum, and A() is the activation function.
  • Now, the hypothesis is the combination of the weighted sum and the activation function.
  • There are many types of activation function, such as the sigmoid function, step function, linear function, Gaussian, and so on, but the sigmoid function is the representative one (see the sketch after this list).
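
As a quick illustration, here are textbook forms of a few of these (a minimal sketch; these particular definitions are standard, not taken from this post), all applied to the same weighted-sum value:

import numpy as np

step = lambda _s : 1 if _s >= 0 else 0        # hard threshold
linear = lambda _s : _s                       # identity; used for regression
sigmoid = lambda _s : 1 / (1 + np.exp(-_s))   # smooth, bounded to (0, 1)

s = 0.4  # an example weighted-sum value
print(step(s), linear(s), sigmoid(s))  # 1 0.4 0.598...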

Sigmoid Function

  • = Logistic Function
  • Returns a value between 0 and 1.

$$ g(x) = \frac{1}{1 + e^{-(Wx + b)}} $$

import numpy as np
import matplotlib.pyplot as plt

# Sigmoid over the weighted sum; w and b are globals set in the loops below
sigmoid = lambda _x : 1 / (1 + np.exp(-1 * (w*_x + b)))

x = [i * 0.01 for i in range(-700,700)]

b = 0
for i in range(1, 4):
    w = i * 0.5
    y = list(map(sigmoid, x))
    plt.plot(x, y, label="W={0}, b={1}".format(w, b))

plt.grid()
plt.title("Changing Weight")
plt.xlim(-7, 7)
plt.ylim(-0.2, 1.2)
plt.xlabel("x")
plt.ylabel("y")
plt.legend(loc="lower right")
plt.show()

w = 1
for b in range(-1, 2):
    y = list(map(sigmoid, x))
    plt.plot(x, y, label="W={0}, b={1}".format(w, b))

plt.grid()
plt.title("Changing Bias")
plt.xlim(-7, 7)
plt.ylim(-0.2, 1.2)
plt.xlabel("x")
plt.ylabel("y")
plt.legend(loc="lower right")
plt.show()
Image 6. Activation function changing weight
Image 7. Activation function changing bias

Applying Sigmoid Function

$$ S(X) = XW + b $$ $$ H(X) = A(S(X)) $$ $$ H(X) = \frac{1}{1 + e^{-(W^{T}X + b)}} $$

  • Here, the sigmoid function is used as the activation function.
  • The transpose is optional; it is there to make the matrices multiplicable.
  • In the previous example, the input 20 changed the weight dramatically. The sigmoid function prevents this by squashing the result of the weighted sum into the range 0 to 1, so no single input can produce a huge error (the sketch below illustrates this).
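A minimal sketch of that claim, reusing the outlier example from above:

import numpy as np

weighted_sum = lambda _x : 0.1 * _x - 0.1
sigmoid = lambda _s : 1 / (1 + np.exp(-_s))

# Squared error of the outlier (target 1) with the linear hypothesis
print((weighted_sum(20) - 1) ** 2)           # (1.9 - 1)^2 = 0.81
# With the sigmoid applied, the same point barely contributes to the cost
print((sigmoid(weighted_sum(20)) - 1) ** 2)  # about 0.017
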
import matplotlib.pyplot as plt

# Identity function: returns its input unchanged
identity = lambda _x : _x

x = [i * 0.01 for i in range(-700,700)]

y = list(map(identity, x))
plt.plot(x, y)

plt.grid()
plt.title("Identity Function")
plt.xlim(-7, 7)
plt.ylim(-7, 7)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Image 8. Identity function

Identity Function

  • The identity function is the activation and output function for linear regression (see the sketch below).
  • The identity function returns a value that is the same as its input.
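A minimal sketch of this role (the weight and bias values are illustrative): linear regression applies the identity function to the weighted sum, so the hypothesis is simply the weighted sum itself.

w, b = 0.1, -0.1  # illustrative values
weightSum = lambda _x : w * _x + b
identity = lambda _x : _x

# Linear regression hypothesis: identity activation over the weighted sum
hypothesis = lambda _x : identity(weightSum(_x))
print(hypothesis(7))  # ≈ 0.6, a real-valued prediction rather than a class
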
import numpy as np
import matplotlib.pyplot as plt

# Weighted sum without bias, to keep it simple
weightSum = lambda _x : w * _x
# Activation function
sigmoid = lambda _x : 1 / (1 + np.exp(-_x))
# Output function
output = lambda _x : _x > 0.5

# Input
x = [i * 0.01 for i in range(-800, 800)]
# Weight
w = 1

# Hypothesis: activation applied to the weighted sum
hypo = [sigmoid(weightSum(i)) for i in x]

# Output
y = list(map(output, hypo))

# Graph for hypothesis
plt.plot(x, hypo, label="Hypothesis")

# Graph for output
plt.plot(x, y, label="Answer")

plt.ylim(-0.2, 1.2)
plt.legend(loc="lower right")
plt.grid()
plt.show()
Image 9. Weighted sum & activation function
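
To close the section, here is a minimal sketch putting the pieces together on the outlier data set from earlier. The parameters w = 1 and b = -5.5 are hypothetical, chosen by hand rather than trained; with the sigmoid, the output threshold 0.5 corresponds to a weighted sum of 0, so the decision boundary sits at x = -b/w = 5.5.

import numpy as np

# Inputs 0..9 plus the outlier 20; the true label is 1 when x >= 6
xs = list(range(10)) + [20]

# Hypothetical parameters placing the boundary at x = -b/w = 5.5
w, b = 1.0, -5.5

sigmoid = lambda _s : 1 / (1 + np.exp(-_s))
hypothesis = lambda _x : sigmoid(w * _x + b)
output = lambda _h : 1 if _h > 0.5 else 0

# Every point, including the outlier, is classified correctly
print([output(hypothesis(i)) for i in xs])  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# And the outlier contributes almost nothing to the MSE cost
print((hypothesis(20) - 1) ** 2)  # on the order of 1e-13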
