Feature Selection Package - Algorithms - Classifiers - Naive Bayes
Description
The naive Bayes classifier is a probabilistic classifier based on Bayes' rule. It makes the
"naive" assumption that, given the class, each feature is conditionally independent of the others.
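Concretely, under this independence assumption the predicted class is the one maximizing the
product of the class prior and the per-feature class-conditional probabilities. This is the
standard textbook form of the rule, not code taken from this package:

    \hat{c} = \arg\max_{c} \; P(c) \prod_{i=1}^{d} P(x_i \mid c)

where x_1, ..., x_d are the feature values of an instance.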
Usage
Method Signature:
[a] = bayes(a, trainX, trainY, testX, testY)
Output:
a:
This is the struct you passed in as a, with the trained classifier and its accuracy added as
fields. These fields are named classifier and tree_accuracy, respectively.
Input:
a:
A struct that has fields 'D' and/or 'K', with the field you would like the
classifier to use set to true. Using neither is fine as well, but you may
use at most one of them. It is ok if both fields are present in your struct,
so long as only one of them is set to true.
D means that you wish to use supervised discretization when processing
numeric attributes.
K means that you wish to use kernel estimation for modeling numeric
attributes rather than a single normal distribution (see the sketch after
this input list).
trainX: training data, each row is an instance.
trainY: training labels, each column is a class.
testX: testing data, each row is an instance.
testY: testing labels, each column is a class.
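As a rough illustration of the K option, the sketch below contrasts modeling a numeric
attribute's class-conditional density with a single normal distribution versus a kernel
density estimate. This is a stand-alone illustrative snippet, not the package's internal
code; the attribute values, the bandwidth choice, and the use of normpdf from the
Statistics Toolbox are all assumptions made for the example.

% Illustrative only: two ways of modeling P(x | class) for one numeric attribute.
% xs holds the training values of that attribute for a single class; x0 is a query value.
xs = [12.3 13.1 12.8 14.0 13.5];   % made-up attribute values
x0 = 13.0;

% Single normal distribution (the default behavior):
mu = mean(xs);
sigma = std(xs);
p_normal = normpdf(x0, mu, sigma);

% Kernel estimation (a.K = true): average of Gaussian kernels, one per training value.
% The bandwidth h below is just one simple choice for illustration.
h = sigma / sqrt(numel(xs));
p_kernel = mean(normpdf(x0, xs, h));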
Code Example
% Using the wine data set, which can be found at
% [fspackage_location]/classifiers/knn/wine.mat
load('wine.mat');          % assumed to provide the instance matrix X and class labels Y
a.D = true;                % use supervised discretization for numeric attributes
a.K = false;               % do not use kernel estimation
a = bayes(a, X, Y, X, Y);  % train and test on the same data for illustration
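After the call, the returned struct carries the trained classifier and its accuracy under the
field names documented in the Output section, for example:

disp(a.classifier)
disp(a.tree_accuracy)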
Paper
BibTeX entry for:
George H. John, Pat Langley: Estimating Continuous Distributions in
Bayesian Classifiers. In: Eleventh Conference on Uncertainty in
Artificial Intelligence, San Mateo, 338-345, 1995.
@inproceedings{John1995,
address = {San Mateo},
author = {George H. John and Pat Langley},
booktitle = {Eleventh Conference on Uncertainty in Artificial Intelligence},
pages = {338--345},
publisher = {Morgan Kaufmann},
title = {Estimating Continuous Distributions in Bayesian Classifiers},
year = {1995}
}