Machine Learning A Probabilistic Perspective Pdf

In machine learning, a probabilistic classifier is a classifier that is able to predict, given a sample input, a probability distribution over a set of classes, rather than only outputting the most likely class that the sample should belong to. Probabilistic classifiers provide classification with a degree of certainty, which can be useful in its own right, or when combining classifiers into ensembles.


Types of classification

Formally, an "ordinary" classifier is some rule, or function, f that assigns to a sample x a class label ŷ:

ŷ = f(x)

The samples come from some set X (e.g., the set of all documents, or the set of all images), while the class labels form a finite set Y defined prior to training.

Probabilistic classifiers generalize this notion of classifiers: instead of functions, they are conditional distributions Pr(Y | X), meaning that for a given x ∈ X, they assign probabilities to all y ∈ Y (and these probabilities sum to one). "Hard" classification can then be done using the optimal decision rule

ŷ = arg max_y Pr(Y = y | X)

or, in English, the predicted class is that which has the highest probability.

Binary probabilistic classifiers are also called binomial regression models in statistics. In econometrics, probabilistic classification in general is called discrete choice.

Some classification models, such as naive Bayes, logistic regression and multilayer perceptrons (when trained under an appropriate loss function), are naturally probabilistic. Other models, such as support vector machines, are not, but methods exist to turn them into probabilistic classifiers.
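As a rough illustration (a sketch assuming scikit-learn is available; the dataset and model choices below are arbitrary), a naturally probabilistic model exposes a full distribution over the classes, while a plain support vector machine only yields scores:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:2]))      # each row is a distribution over the classes, summing to 1
print(clf.predict(X[:2]))            # the "hard" arg max prediction

svm = LinearSVC().fit(X, y)
print(svm.decision_function(X[:2]))  # signed distances to the hyperplane, not probabilities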


Generative and conditional training

Some models, such as logistic regression, are conditionally trained: they optimize the conditional probability Pr(Y | X) directly on a training set (see empirical risk minimization). Other classifiers, such as naive Bayes, are trained generatively: at training time, the class-conditional distribution Pr(X | Y) and the class prior Pr(Y) are found, and the conditional distribution Pr(Y | X) is derived using Bayes' rule.
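In other words, Bayes' rule multiplies the class prior by the class-conditional likelihood and renormalizes over the classes. A minimal numerical sketch (with made-up priors and likelihoods for a two-class problem, not taken from any real model):

import numpy as np

prior = np.array([0.7, 0.3])         # Pr(Y = 0), Pr(Y = 1) -- hypothetical values
likelihood = np.array([0.02, 0.08])  # Pr(x | Y = 0), Pr(x | Y = 1) at some fixed x

joint = likelihood * prior           # Pr(x, Y = y)
posterior = joint / joint.sum()      # Bayes' rule: Pr(Y = y | x)
print(posterior)                     # -> [0.368..., 0.631...]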


Probability calibration

Not all classification models are naturally probabilistic, and some that are, notably naive Bayes classifiers, decision trees and boosting methods, produce distorted class probability distributions. In the case of decision trees, where Pr(y|x) is the proportion of training samples with label y in the leaf where x ends up, these distortions come about because learning algorithms such as C4.5 or CART explicitly aim to produce homogeneous leaves (giving probabilities close to zero or one, and thus high bias) while using few samples to estimate the relevant proportion (high variance).
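The leaf-proportion estimate described above can be spelled out as a small helper (a toy sketch, not the code of any particular library):

from collections import Counter

def leaf_class_probabilities(leaf_labels, classes):
    # Estimate Pr(y | x) as the fraction of training samples in x's leaf that carry label y.
    counts = Counter(leaf_labels)
    n = len(leaf_labels)
    return {c: counts.get(c, 0) / n for c in classes}

# A nearly pure leaf built from only a few samples gives extreme, high-variance estimates:
print(leaf_class_probabilities(["spam", "spam", "spam", "ham"], classes=["spam", "ham"]))
# -> {'spam': 0.75, 'ham': 0.25}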

For classification models that produce some kind of "score" on their outputs (such as a distorted probability distribution or the "signed distance to the hyperplane" in a support vector machine), there are several methods that turn these scores into properly calibrated class membership probabilities.

For the binary case, a common approach is to apply Platt scaling, which learns a logistic regression model on the scores. An alternative method using isotonic regression is generally superior to Platt's method when sufficient training data is available.
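Both approaches are illustrated in the sketch below, which assumes scikit-learn is available: CalibratedClassifierCV wraps a linear SVM, whose decision scores are not probabilities, and learns a calibrator on held-out folds (the dataset here is synthetic and only for illustration):

from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, random_state=0)

# Platt scaling: a logistic regression fitted to the SVM's scores via cross-validation.
platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
platt.fit(X, y)
print(platt.predict_proba(X[:3]))    # calibrated class membership probabilities

# Isotonic regression, which tends to do better once enough training data is available.
isotonic = CalibratedClassifierCV(LinearSVC(), method="isotonic", cv=5).fit(X, y)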

In the multiclass case, one can use a reduction to binary tasks, calibrate each binary classifier with one of the univariate methods described above, and then recover multiclass probabilities with the pairwise coupling algorithm of Hastie and Tibshirani.


Evaluating probabilistic classification

Commonly used loss functions for probabilistic classification include log loss and the mean squared error between the predicted and the true probability distributions. The former of these is commonly used to train logistic models.
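For concreteness, the sketch below (assuming scikit-learn) computes both quantities for a handful of binary predictions; for binary outcomes, the mean squared error of the predicted probabilities is the Brier score:

import numpy as np
from sklearn.metrics import brier_score_loss, log_loss

y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.6, 0.8, 0.1])  # predicted Pr(Y = 1)

print(log_loss(y_true, p_pred))           # log loss (cross-entropy)
print(brier_score_loss(y_true, p_pred))   # mean squared error of the predicted probabilities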

A function that assigns a score to a pair of a predicted probability distribution and the actually observed outcome, so that different predictive methods can be compared, is called a scoring rule.

Source of the article: Wikipedia


