A noise eigenspace projection method is described to improve accuracy and parsimony in pattern classification. A set of 11 classifiers was employed with 10-fold cross-validation on 13 publicly available datasets. For each dataset, a “baseline” classification analysis was first performed using interclass principal component analysis (PCA) scores based on the signal eigenspace of the correlation matrix. Next, an intraclass eigendecomposition was run, and percentiles of feature values were projected onto the noise eigenvectors of each class to develop an m-tuple of information-to-noise ratio (INR) estimators. The top 5 INR estimators were then used for classification analysis. By projecting percentiles of feature values onto the class-specific noise eigenvectors, the INR estimators produced a 14 percentage point increase in mean ensemble classifier fusion accuracy across datasets, from 84% to 98%, compared with using signal PCs for class prediction. Unlike feature selection, which identifies features whose values differ greatly across class labels, the proposed approach exploits the fact that the noise eigenspace spanned by all input features differs across classes. We also used only the top 5 INR estimators, which is typically fewer than the number of input features selected through filter or wrapper methods. Lastly, for operational employment, we recommend the k-nearest neighbor (KNN) classifier because of its proximity to the ensemble classifier fusion results, its high accuracy, and its low computational cost. In conclusion, INR estimators based on class-specific noise can greatly improve classification performance and model parsimony without the use of input feature selection.
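To make the projection step concrete, the Python sketch below illustrates one plausible reading of the construction: noise eigenvectors are taken from the eigendecomposition of each class's intraclass correlation matrix, and vectors of feature percentiles are projected onto them. The function names, the specific percentiles, and the reduction of projections to scalar scores are illustrative assumptions, not the paper's exact INR estimator.

    import numpy as np

    def noise_eigenvectors(X_class, signal_dim):
        # Eigendecompose the intraclass correlation matrix. np.linalg.eigh
        # returns eigenvalues in ascending order, so the leading columns
        # span the noise eigenspace (smallest eigenvalues).
        R = np.corrcoef(X_class, rowvar=False)   # p x p correlation matrix
        _, eigvecs = np.linalg.eigh(R)
        p = X_class.shape[1]
        return eigvecs[:, : p - signal_dim]      # noise eigenvectors as columns

    def percentile_noise_projection(X_class, V, q=(10, 25, 50, 75, 90)):
        # Project the vector of q-th feature percentiles onto the
        # class-specific noise eigenvectors V, giving an m-tuple of
        # noise-space coordinates per percentile (illustrative percentiles).
        P = np.percentile(X_class, q, axis=0)    # shape (len(q), p)
        return P @ V                             # shape (len(q), p - signal_dim)

In this sketch, the resulting noise-space projections would still need to be summarized into scalar INR estimators and ranked so that only the top 5 enter the classifiers; the exact ratio definition and ranking rule follow the paper's detailed method rather than this stand-in.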