All Models¶
pyod.models.abod module¶
Angle-based Outlier Detector (ABOD)

class
pyod.models.abod.
ABOD
(contamination=0.1, n_neighbors=5, method='fast')[source]¶ Bases:
pyod.models.base.BaseDetector
ABOD class for Angle-based Outlier Detection. For an observation, the variance of its weighted cosine scores to all neighbors could be viewed as the outlying score. See [BKZ+08] for details.
Two versions of ABOD are supported:
Fast ABOD: uses k nearest neighbors to approximate the outlier score.
Original ABOD: considers all training points, with high time complexity of O(n^3).
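The angle-variance idea behind ABOD can be illustrated with a small NumPy sketch. This is a simplified, unweighted illustration of the principle, not pyod's implementation (the published ABOD additionally weights each cosine by the pair's distances):

```python
import numpy as np

def abod_score(point, others):
    """Variance of angle cosines between pairs of difference vectors.

    A low variance suggests an outlier: seen from far outside the data
    cloud, all other points lie in roughly the same direction.
    Simplified sketch; the published ABOD also weights by distances.
    """
    diffs = others - point  # difference vectors to all other points
    cosines = []
    for i in range(len(diffs)):
        for j in range(i + 1, len(diffs)):
            a, b = diffs[i], diffs[j]
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return np.var(cosines)

rng = np.random.RandomState(42)
cluster = rng.randn(30, 2)  # inlier cloud around the origin
inlier_score = abod_score(cluster[0], cluster[1:])
outlier_score = abod_score(np.array([10.0, 10.0]), cluster)
# The distant point sees all cluster points at similar angles,
# so its angle variance comes out much smaller.
```

Note the inversion: small variance means abnormal, which is why pyod flips the sign internally so that, as everywhere else in the library, higher `decision_scores_` means more outlying.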
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
n_neighbors (int, optional (default=5)) – Number of neighbors to use by default for k neighbors queries.
method (str, optional (default='fast')) –
Valid values for method are:
’fast’: fast ABOD. Only consider n_neighbors of training points
’default’: original ABOD with all training points, which could be slow

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1
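The relationship between contamination, threshold_ and labels_ can be sketched in a few lines. This illustrates the convention (a percentile cut of the training scores), not pyod's exact code:

```python
import numpy as np

decision_scores = np.array([0.1, 0.3, 0.2, 5.0, 0.25,
                            0.15, 4.0, 0.05, 0.12, 0.22])
contamination = 0.2  # expect roughly 20% outliers

# threshold_ sits at the (1 - contamination) percentile of training scores
threshold = np.percentile(decision_scores, 100 * (1 - contamination))
# labels_ follows by comparing each training score against the threshold
labels = (decision_scores > threshold).astype(int)  # 1 = outlier, 0 = inlier
# the two largest scores (5.0 and 4.0) are flagged
```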

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object
fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : numpy array of shape (n_samples,)
The ground truth (binary labels) of the input samples, used for evaluation.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
 Returns
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then evaluating separately with an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
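The 'linear' option amounts to min-max scaling of the raw scores by the training scores' range, clipped to [0, 1]. A sketch of the idea (not pyod's exact code):

```python
import numpy as np

def linear_proba(test_scores, train_scores):
    """Min-max scale test scores by the training scores' range, clip to [0, 1]."""
    lo, hi = train_scores.min(), train_scores.max()
    return np.clip((test_scores - lo) / (hi - lo), 0.0, 1.0)

train = np.array([0.0, 1.0, 2.0, 10.0])
proba = linear_proba(np.array([5.0, -1.0, 12.0]), train)
# → [0.5, 0.0, 1.0]: mid-range, below the training minimum, above the maximum
```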
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – For each observation, the probability of it being an outlier according to the fitted model, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.auto_encoder module¶
pyod.models.cblof module¶
Clustering-Based Local Outlier Factor (CBLOF)

class
pyod.models.cblof.
CBLOF
(n_clusters=8, contamination=0.1, clustering_estimator=None, alpha=0.9, beta=5, use_weights=False, check_estimator=False, random_state=None, n_jobs=1)[source]¶ Bases:
pyod.models.base.BaseDetector
The CBLOF operator calculates the outlier score based on the cluster-based local outlier factor.
CBLOF takes as an input the data set and the cluster model that was generated by a clustering algorithm. It classifies the clusters into small clusters and large clusters using the parameters alpha and beta. The anomaly score is then calculated based on the size of the cluster the point belongs to as well as the distance to the nearest large cluster.
Use weighting for the outlier factor based on the sizes of the clusters, as proposed in the original publication. Since this might lead to unexpected behavior (outliers close to small clusters are not found), it is disabled by default. Outlier scores are then computed solely based on the distance to the closest large cluster center.
By default, k-means is used as the clustering algorithm instead of the Squeezer algorithm mentioned in the original paper, for multiple reasons.
See [BHXD03] for details.
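The alpha/beta split into small and large clusters can be sketched as follows. This illustrates the rule described above (walk clusters from largest to smallest; cut where the running total covers alpha of the samples, or where one cluster is beta times larger than the next); it is not pyod's exact code:

```python
import numpy as np

def split_clusters(sizes, alpha=0.9, beta=5, n_samples=None):
    """Return the set of indices of large clusters per the CBLOF alpha/beta rule."""
    n_samples = n_samples if n_samples is not None else int(np.sum(sizes))
    order = np.argsort(sizes)[::-1]      # cluster indices, largest first
    sorted_sizes = np.asarray(sizes)[order]
    running = np.cumsum(sorted_sizes)
    boundary = len(sorted_sizes)         # default: all clusters are large
    for i in range(len(sorted_sizes) - 1):
        if running[i] >= alpha * n_samples or \
           sorted_sizes[i] / sorted_sizes[i + 1] >= beta:
            boundary = i + 1
            break
    return set(order[:boundary].tolist())

# Cluster sizes 50, 40, 5, 5: the first two cover 90% of 100 samples.
large = split_clusters([50, 40, 5, 5])   # → {0, 1}
```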
 Parameters
n_clusters (int, optional (default=8)) – The number of clusters to form as well as the number of centroids to generate.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
clustering_estimator (Estimator, optional (default=None)) –
The base clustering algorithm for performing data clustering. A valid clustering algorithm should be passed in. The estimator should have standard sklearn APIs, fit() and predict(). The estimator should have the attributes labels_ and cluster_centers_. If cluster_centers_ is not in the attributes once the model is fit, it is calculated as the mean of the samples in a cluster. If not set, CBLOF uses KMeans for scalability. See https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
alpha (float in (0.5, 1), optional (default=0.9)) – Coefficient for deciding small and large clusters. The ratio of the number of samples in large clusters to the number of samples in small clusters.
beta (int or float in (1,), optional (default=5)) – Coefficient for deciding small and large clusters. For a list of clusters sorted by size, C1, C2, …, Cn, beta = |Ck| / |Ck-1|.
use_weights (bool, optional (default=False)) – If set to True, the sizes of clusters are used as weights in the outlier score calculation.
check_estimator (bool, optional (default=False)) –
If set to True, check whether the base estimator is consistent with sklearn standard.
Warning
check_estimator may throw errors with scikit-learn 0.20 and above.
random_state (int, RandomState or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
n_jobs (int, optional (default=1)) – The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.

clustering_estimator_
¶ Base estimator for clustering.
 Type
Estimator, sklearn instance

cluster_labels_
¶ Cluster assignment for the training samples.
 Type
list of shape (n_samples,)

cluster_sizes_
¶ The size of each cluster once fitted with the training data.
 Type
list of shape (n_clusters_,)

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

cluster_centers_
¶ The center of each cluster.
 Type
numpy array of shape (n_clusters_, n_features)

small_cluster_labels_
¶ The cluster assignments belonging to small clusters.
 Type
list of cluster numbers

large_cluster_labels_
¶ The cluster assignments belonging to large clusters.
 Type
list of cluster numbers

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object
fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : numpy array of shape (n_samples,)
The ground truth (binary labels) of the input samples, used for evaluation.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
 Returns
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then evaluating separately with an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – For each observation, the probability of it being an outlier according to the fitted model, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.cof module¶
Connectivity-Based Outlier Factor (COF) Algorithm

class
pyod.models.cof.
COF
(contamination=0.1, n_neighbors=20)[source]¶ Bases:
pyod.models.base.BaseDetector
Connectivity-Based Outlier Factor (COF). COF uses the ratio of the average chaining distance of a data point to the average of the average chaining distances of its k nearest neighbors as the outlier score for observations.
See [BTCFC02] for details.
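Given precomputed average chaining distances, the COF score is just the ratio described above. A sketch with a hypothetical ac_dist input (computing the chaining distances themselves is the involved part and is omitted here):

```python
import numpy as np

def cof_scores(ac_dist, neighbor_idx):
    """COF score: a point's average chaining distance divided by the mean
    average chaining distance of its k nearest neighbors.

    ac_dist      : (n_samples,) precomputed average chaining distances (hypothetical input)
    neighbor_idx : (n_samples, k) indices of each point's k nearest neighbors
    """
    return ac_dist / ac_dist[neighbor_idx].mean(axis=1)

ac = np.array([1.0, 1.0, 1.0, 4.0])           # last point is far more "chained out"
nbrs = np.array([[1, 2], [0, 2], [0, 1], [0, 1]])
scores = cof_scores(ac, nbrs)                 # → [1., 1., 1., 4.]
```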
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
n_neighbors (int, optional (default=20)) – Number of neighbors to use by default for k neighbors queries. Note that n_neighbors should be less than the number of samples. If n_neighbors is larger than the number of samples provided, all samples will be used.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object
fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : numpy array of shape (n_samples,)
The ground truth (binary labels) of the input samples, used for evaluation.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
 Returns
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then evaluating separately with an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – For each observation, the probability of it being an outlier according to the fitted model, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.combination module¶
A collection of model combination functionalities.

pyod.models.combination.
aom
(scores, n_buckets=5, method='static', bootstrap_estimators=False, random_state=None)[source]¶ Average of Maximum (AOM): an ensemble method for combining multiple estimators. See [BAS15] for details.
First, divide the estimators into subgroups and take the maximum score within each subgroup as the subgroup score. Finally, take the average of all subgroup scores.
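A static-bucket version of AOM can be sketched in a few lines of NumPy (simplified: contiguous equal buckets, no bootstrapping):

```python
import numpy as np

def aom_static(scores, n_buckets=5):
    """Average of Maximum: max within each bucket of estimators, then average."""
    buckets = np.array_split(np.arange(scores.shape[1]), n_buckets)
    # one max-score column per bucket, then average across buckets
    bucket_max = np.column_stack([scores[:, b].max(axis=1) for b in buckets])
    return bucket_max.mean(axis=1)

scores = np.array([[1., 2., 3., 4.],
                   [4., 3., 2., 1.]])
combined = aom_static(scores, n_buckets=2)    # buckets {0, 1} and {2, 3}
# → [3., 3.]  (row 1: mean(max(1, 2), max(3, 4)) = mean(2, 4) = 3)
```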
 Parameters
scores (numpy array of shape (n_samples, n_estimators)) – The score matrix outputted from various estimators
n_buckets (int, optional (default=5)) – The number of subgroups to build
method (str, optional (default='static')) – {‘static’, ‘dynamic’}, if ‘dynamic’, build subgroups randomly with dynamic bucket size.
bootstrap_estimators (bool, optional (default=False)) – Whether estimators are drawn with replacement.
random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
 Returns
combined_scores – The combined outlier scores.
 Return type
Numpy array of shape (n_samples,)

pyod.models.combination.
average
(scores, estimator_weights=None)[source]¶ Combination method to merge the outlier scores from multiple estimators by taking the average.
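With NumPy, plain and weighted averaging of a score matrix are one-liners. A sketch of the operation, not pyod's exact code:

```python
import numpy as np

scores = np.array([[1.0, 3.0],
                   [2.0, 6.0]])   # two estimators' scores for two samples

combined = scores.mean(axis=1)    # → [2., 4.]
# weighted variant: first estimator counts three times as much
weighted = np.average(scores, axis=1, weights=[0.75, 0.25])  # → [1.5, 3.]
```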
 Parameters
scores (numpy array of shape (n_samples, n_estimators)) – Score matrix from multiple estimators on the same samples.
estimator_weights (list of shape (1, n_estimators)) – If specified, a weighted average is used.
 Returns
combined_scores – The combined outlier scores.
 Return type
numpy array of shape (n_samples, )

pyod.models.combination.
majority_vote
(scores, weights=None)[source]¶ Combination method to merge the scores from multiple estimators by majority vote.
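The voting idea can be sketched as follows, assuming binary (0/1) labels per estimator; this is a simplified, unweighted illustration, not pyod's exact code:

```python
import numpy as np

def majority_vote_sketch(labels):
    """Per-sample majority vote over estimators' binary labels (0/1)."""
    # a sample is flagged when more than half the estimators flag it
    return (labels.mean(axis=1) > 0.5).astype(int)

labels = np.array([[1, 1, 0],
                   [0, 1, 0]])
votes = majority_vote_sketch(labels)   # → [1, 0]
```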
 Parameters
scores (numpy array of shape (n_samples, n_estimators)) – Score matrix from multiple estimators on the same samples.
 weights : numpy array of shape (1, n_estimators)
If specified, a weighted majority vote is used.
 Returns
combined_scores – The combined scores.
 Return type
numpy array of shape (n_samples, )

pyod.models.combination.
maximization
(scores)[source]¶ Combination method to merge the outlier scores from multiple estimators by taking the maximum.
 Parameters
scores (numpy array of shape (n_samples, n_estimators)) – Score matrix from multiple estimators on the same samples.
 Returns
combined_scores – The combined outlier scores.
 Return type
numpy array of shape (n_samples, )

pyod.models.combination.
median
(scores)[source]¶ Combination method to merge the scores from multiple estimators by taking the median.
 Parameters
scores (numpy array of shape (n_samples, n_estimators)) – Score matrix from multiple estimators on the same samples.
 Returns
combined_scores – The combined scores.
 Return type
numpy array of shape (n_samples, )

pyod.models.combination.
moa
(scores, n_buckets=5, method='static', bootstrap_estimators=False, random_state=None)[source]¶ Maximization of Average (MOA): an ensemble method for combining multiple estimators. See [BAS15] for details.
First, divide the estimators into subgroups and take the average score within each subgroup as the subgroup score. Finally, take the maximum of all subgroup scores.
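MOA mirrors AOM with the two operations swapped: average within each bucket, then maximize. A simplified static-bucket sketch:

```python
import numpy as np

def moa_static(scores, n_buckets=5):
    """Maximization of Average: mean within each bucket of estimators, then max."""
    buckets = np.array_split(np.arange(scores.shape[1]), n_buckets)
    bucket_avg = np.column_stack([scores[:, b].mean(axis=1) for b in buckets])
    return bucket_avg.max(axis=1)

scores = np.array([[1., 2., 3., 4.],
                   [4., 3., 2., 1.]])
combined = moa_static(scores, n_buckets=2)    # buckets {0, 1} and {2, 3}
# → [3.5, 3.5]  (row 1: max(mean(1, 2), mean(3, 4)) = max(1.5, 3.5))
```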
 Parameters
scores (numpy array of shape (n_samples, n_estimators)) – The score matrix outputted from various estimators
n_buckets (int, optional (default=5)) – The number of subgroups to build
method (str, optional (default='static')) – {‘static’, ‘dynamic’}, if ‘dynamic’, build subgroups randomly with dynamic bucket size.
bootstrap_estimators (bool, optional (default=False)) – Whether estimators are drawn with replacement.
random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
 Returns
combined_scores – The combined outlier scores.
 Return type
Numpy array of shape (n_samples,)
pyod.models.feature_bagging module¶
Feature bagging detector

class
pyod.models.feature_bagging.
FeatureBagging
(base_estimator=None, n_estimators=10, contamination=0.1, max_features=1.0, bootstrap_features=False, check_detector=True, check_estimator=False, n_jobs=1, random_state=None, combination='average', verbose=0, estimator_params=None)[source]¶ Bases:
pyod.models.base.BaseDetector
A feature bagging detector is a meta estimator that fits a number of base detectors on various sub-samples of the dataset and uses averaging or other combination methods to improve predictive accuracy and control over-fitting.
The sub-sample size is always the same as the original input sample size, but the features are randomly sampled from half of the features up to all features.
By default, LOF is used as the base estimator. However, any estimator could be used as the base estimator, such as kNN and ABOD.
Feature bagging first constructs n subsamples by randomly selecting a subset of features, which induces the diversity of the base estimators.
Finally, the prediction score is generated by averaging or taking the maximum over all base detectors. See [BLK05] for details.
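The core loop can be sketched as follows, using a trivial z-score detector as a stand-in base estimator (zscore_detector is a hypothetical helper for illustration; pyod defaults to LOF):

```python
import numpy as np

def zscore_detector(X):
    """Stand-in base detector: per-sample max absolute z-score (hypothetical)."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return z.max(axis=1)

def feature_bagging(X, n_estimators=10, random_state=0):
    rng = np.random.RandomState(random_state)
    n_features = X.shape[1]
    all_scores = []
    for _ in range(n_estimators):
        # draw between d/2 and d features, without replacement
        k = rng.randint(n_features // 2, n_features + 1)
        cols = rng.choice(n_features, size=k, replace=False)
        all_scores.append(zscore_detector(X[:, cols]))
    return np.mean(all_scores, axis=0)   # the 'average' combination

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(50, 4), [[8., 8., 8., 8.]]])  # one obvious outlier
scores = feature_bagging(X)
# the appended point receives the highest combined score
```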
 Parameters
base_estimator (object or None, optional (default=None)) – The base estimator to fit on random subsets of the dataset. If None, then the base estimator is a LOF detector.
n_estimators (int, optional (default=10)) – The number of base estimators in the ensemble.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
max_features (int or float, optional (default=1.0)) –
The number of features to draw from X to train each base estimator.
If int, then draw max_features features.
If float, then draw max_features * X.shape[1] features.
bootstrap_features (bool, optional (default=False)) – Whether features are drawn with replacement.
check_detector (bool, optional (default=True)) – If set to True, check whether the base estimator is consistent with pyod standard.
check_estimator (bool, optional (default=False)) –
If set to True, check whether the base estimator is consistent with sklearn standard.
Deprecated since version 0.6.9: check_estimator will be removed in pyod 0.8.0; it will be replaced by check_detector.
n_jobs (int, optional (default=1)) – The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.
random_state (int, RandomState or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
combination (str, optional (default='average')) –
The method of combination:
if ‘average’: take the average of all detectors
if ‘max’: take the maximum scores of all detectors
verbose (int, optional (default=0)) – Controls the verbosity of the building process.
estimator_params (dict, optional (default=None)) – The list of attributes to use as parameters when instantiating a new base estimator. If none are given, default parameters are used.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object
fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : numpy array of shape (n_samples,)
The ground truth (binary labels) of the input samples, used for evaluation.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
 Returns
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then evaluating separately with an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – For each observation, the probability of it being an outlier according to the fitted model, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
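The `<component>__<parameter>` convention follows scikit-learn's BaseEstimator; for a flat (non-nested) detector it amounts to plain keyword updates. A sketch with a toy estimator standing in for a detector (the class and its parameters are invented for illustration):

```python
from sklearn.base import BaseEstimator

class ToyDetector(BaseEstimator):
    """Hypothetical stand-in exposing scikit-learn-style get/set_params."""
    def __init__(self, contamination=0.1, n_neighbors=5):
        self.contamination = contamination
        self.n_neighbors = n_neighbors

clf = ToyDetector()
clf.set_params(n_neighbors=10)   # returns self, so calls can be chained
print(clf.get_params())          # -> {'contamination': 0.1, 'n_neighbors': 10}
```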
pyod.models.hbos module¶
Histogram-based Outlier Detection (HBOS)

class
pyod.models.hbos.
HBOS
(n_bins=10, alpha=0.1, tol=0.5, contamination=0.1)[source]¶ Bases:
pyod.models.base.BaseDetector
Histogram-based outlier detection (HBOS) is an efficient unsupervised method. It assumes feature independence and calculates the degree of outlyingness by building histograms. See [BGD12] for details.
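To make the histogram idea concrete, a self-contained numpy sketch (not pyod's exact implementation): score each sample by the negative log density of its histogram bin, so samples falling in sparse bins score high.

```python
import numpy as np

# Toy 1-D data: 99 evenly spread inliers plus one planted outlier at 8.0.
x = np.concatenate([np.linspace(-1.0, 1.0, 99), [8.0]])

# Per-feature histogram; score each sample by the negative log density of
# its bin (the small constant plays the role of HBOS's alpha regularizer).
hist, edges = np.histogram(x, bins=10, density=True)
bin_idx = np.clip(np.digitize(x, edges) - 1, 0, len(hist) - 1)
scores = -np.log(hist[bin_idx] + 1e-9)

print(int(np.argmax(scores)))  # -> 99, the planted outlier
```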
 Parameters
n_bins (int, optional (default=10)) – The number of bins.
alpha (float in (0, 1), optional (default=0.1)) – The regularizer for preventing overflow.
tol (float in (0, 1), optional (default=0.5)) – The parameter to decide the flexibility while dealing with the samples falling outside the bins.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

bin_edges_
¶ The edges of the bins.
 Type
numpy array of shape (n_bins + 1, n_features)

hist_
¶ The density of each histogram.
 Type
numpy array of shape (n_bins, n_features)

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1
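The relationship between threshold_ and labels_ can be sketched in numpy (the scores and contamination value here are hypothetical; this is the idea, not pyod's exact code):

```python
import numpy as np

# Hypothetical training decision scores with one clear outlier,
# and a contamination of 0.1 (i.e. flag ~10% of samples).
decision_scores = np.array([0.2, 0.1, 0.3, 0.25, 0.15,
                            0.05, 0.4, 0.35, 0.12, 5.0])
contamination = 0.1

# threshold_ separates the n_samples * contamination most abnormal samples.
threshold = np.percentile(decision_scores, 100 * (1 - contamination))
labels = (decision_scores > threshold).astype(int)  # labels_: 0 inlier, 1 outlier
print(labels)  # only the 5.0 sample is flagged
```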

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (numpy array of shape (n_samples,)) – The ground truth of the input samples (labels).
scoring (str, optional (default='roc_auc_score')) – Evaluation metric: ‘roc_auc_score’ for ROC score, ‘prc_n_score’ for Precision @ rank n score.
 Returns
score
 Return type
float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., ROC AUC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.iforest module¶
IsolationForest Outlier Detector. Implemented on the scikit-learn library.

class
pyod.models.iforest.
IForest
(n_estimators=100, max_samples='auto', contamination=0.1, max_features=1.0, bootstrap=False, n_jobs=1, behaviour='old', random_state=None, verbose=0)[source]¶ Bases:
pyod.models.base.BaseDetector
Wrapper of scikit-learn Isolation Forest with more functionalities.
The IsolationForest ‘isolates’ observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. See [BLTZ08][BLTZ12] for details.
Since recursive partitioning can be represented by a tree structure, the number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node.
This path length, averaged over a forest of such random trees, is a measure of normality and our decision function.
Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees collectively produce shorter path lengths for particular samples, they are highly likely to be anomalies.
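A small sketch of this path-length intuition using scikit-learn's IsolationForest directly (pyod's IForest wraps this class; note that pyod flips the sign so that outliers receive larger scores):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), [[8.0, 8.0]]])  # inlier blob + one outlier

# score_samples is higher for inliers; negate it so that, as in pyod,
# larger scores mean more anomalous.
clf = IsolationForest(n_estimators=100, random_state=0).fit(X)
outlier_scores = -clf.score_samples(X)

print(outlier_scores[-1] == outlier_scores.max())  # the planted point wins
```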
 Parameters
n_estimators (int, optional (default=100)) – The number of base estimators in the ensemble.
max_samples (int or float, optional (default="auto")) –
The number of samples to draw from X to train each base estimator.
If int, then draw max_samples samples.
If float, then draw max_samples * X.shape[0] samples.
If “auto”, then max_samples=min(256, n_samples).
If max_samples is larger than the number of samples provided, all samples will be used for all trees (no sampling).
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
max_features (int or float, optional (default=1.0)) –
The number of features to draw from X to train each base estimator.
If int, then draw max_features features.
If float, then draw max_features * X.shape[1] features.
bootstrap (bool, optional (default=False)) – If True, individual trees are fit on random subsets of the training data sampled with replacement. If False, sampling without replacement is performed.
n_jobs (integer, optional (default=1)) – The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.
behaviour (str, default='old') –
Behaviour of the decision_function, which can be either ‘old’ or ‘new’. Passing behaviour='new' makes the decision_function change to match the API of other anomaly detection algorithms, which will be the default behaviour in the future. As explained in detail in the offset_ attribute documentation, the decision_function becomes dependent on the contamination parameter, in such a way that 0 becomes its natural threshold to detect outliers.
New in version 0.7.0: behaviour is added in 0.7.0 for back-compatibility purposes.
Deprecated since version 0.20: behaviour='old' is deprecated in sklearn 0.20 and will not be possible in 0.22.
Deprecated since version 0.22: the behaviour parameter will be deprecated in sklearn 0.22 and removed in 0.24.
Warning: only applicable for sklearn 0.20 and above.
random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
verbose (int, optional (default=0)) – Controls the verbosity of the tree building process.

estimators_
¶ The collection of fitted sub-estimators.
 Type
list of DecisionTreeClassifier

estimators_samples_
¶ The subset of drawn samples (i.e., the in-bag samples) for each base estimator.
 Type
list of arrays

max_samples_
¶ The actual number of samples.
 Type
integer

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (numpy array of shape (n_samples,)) – The ground truth of the input samples (labels).
scoring (str, optional (default='roc_auc_score')) – Evaluation metric: ‘roc_auc_score’ for ROC score, ‘prc_n_score’ for Precision @ rank n score.
 Returns
score
 Return type
float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., ROC AUC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.knn module¶
k-Nearest Neighbors Detector (kNN)

class
pyod.models.knn.
KNN
(contamination=0.1, n_neighbors=5, method='largest', radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=1, **kwargs)[source]¶ Bases:
pyod.models.base.BaseDetector
kNN class for outlier detection. For an observation, its distance to its k-th nearest neighbor could be viewed as the outlying score. It could be viewed as a way of measuring the density. See [BRRS00][BAP02] for details.
Three kNN detectors are supported:
‘largest’: use the distance to the k-th neighbor as the outlier score
‘mean’: use the average of all k neighbors as the outlier score
‘median’: use the median of the distance to k neighbors as the outlier score
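The three variants differ only in how the k neighbor distances are aggregated; a sketch using scikit-learn's NearestNeighbors (which pyod builds on) with synthetic data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# 50 clustered inliers plus one isolated point.
X = np.vstack([np.random.RandomState(1).randn(50, 2), [[6.0, 6.0]]])

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, _ = nn.kneighbors(X)
dist = dist[:, 1:]                       # column 0 is the zero self-distance

scores_largest = dist[:, -1]             # 'largest': distance to the k-th neighbor
scores_mean = dist.mean(axis=1)          # 'mean'
scores_median = np.median(dist, axis=1)  # 'median'

# Under all three variants the isolated point gets the top score.
print(scores_largest[-1] == scores_largest.max())
```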
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
n_neighbors (int, optional (default = 5)) – Number of neighbors to use by default for k neighbors queries.
method (str, optional (default='largest')) –
{‘largest’, ‘mean’, ‘median’}
’largest’: use the distance to the k-th neighbor as the outlier score
’mean’: use the average of all k neighbors as the outlier score
’median’: use the median of the distance to k neighbors as the outlier score
radius (float, optional (default = 1.0)) – Range of parameter space to use by default for radius_neighbors queries.
algorithm ({'auto', 'ball_tree', 'kd_tree', 'brute'}, optional) –
Algorithm used to compute the nearest neighbors:
’ball_tree’ will use BallTree
’kd_tree’ will use KDTree
’brute’ will use a brute-force search.
’auto’ will attempt to decide the most appropriate algorithm based on the values passed to
fit()
method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
Deprecated since version 0.7.4: algorithm is deprecated in PyOD 0.7.4 and will not be possible in 0.7.6. It has to use BallTree for consistency.
leaf_size (int, optional (default = 30)) – Leaf size passed to BallTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
metric (string or callable, default 'minkowski') –
metric to use for distance computation. Any metric from scikitlearn or scipy.spatial.distance can be used.
If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
Distance matrices are not supported.
Valid values for metric are:
from scikitlearn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’]
from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’]
See the documentation for scipy.spatial.distance for details on these metrics.
p (integer, optional (default = 2)) – Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances
metric_params (dict, optional (default = None)) – Additional keyword arguments for the metric function.
n_jobs (int, optional (default = 1)) – The number of parallel jobs to run for neighbors search. If -1, then the number of jobs is set to the number of CPU cores. Affects only kneighbors and kneighbors_graph methods.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (numpy array of shape (n_samples,)) – The ground truth of the input samples (labels).
scoring (str, optional (default='roc_auc_score')) – Evaluation metric: ‘roc_auc_score’ for ROC score, ‘prc_n_score’ for Precision @ rank n score.
 Returns
score
 Return type
float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., ROC AUC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.lmdd module¶
Linear Model Deviation-based Outlier Detection (LMDD).

class
pyod.models.lmdd.
LMDD
(contamination=0.1, n_iter=50, dis_measure='aad', random_state=None)[source]¶ Bases:
pyod.models.base.BaseDetector
Linear Method for Deviation-based Outlier Detection.
LMDD employs the concept of the smoothing factor, which indicates how much the dissimilarity can be reduced by removing a subset of elements from the data set. Read more in [BAAR96].
Note: this implementation has a minor modification to make it output scores instead of labels.
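A toy illustration of the smoothing-factor idea with the ‘var’ dissimilarity (the values are made up): removing the deviating element reduces the set's variance far more than removing any inlier.

```python
import numpy as np

x = np.array([1.0, 1.1, 0.9, 1.05, 10.0])  # one clearly deviating element

def smoothing_factor(values, i, dis_measure=np.var):
    """Reduction in dissimilarity when element i is removed."""
    return dis_measure(values) - dis_measure(np.delete(values, i))

sf = [smoothing_factor(x, i) for i in range(len(x))]
print(int(np.argmax(sf)))  # -> 4: dropping 10.0 shrinks the variance most
```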
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
n_iter (int, optional (default=50)) – Number of iterations, where in each iteration the process is repeated after randomizing the order of the input. Note that n_iter is a very important factor that affects the accuracy. The higher the value, the better the accuracy and the longer the execution time.
dis_measure (str, optional (default='aad')) –
Dissimilarity measure to be used in calculating the smoothing factor for points, options available:
’aad’: Average Absolute Deviation; ‘var’: Variance; ‘iqr’: Interquartile Range.
random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (numpy array of shape (n_samples,)) – The ground truth of the input samples (labels).
scoring (str, optional (default='roc_auc_score')) – Evaluation metric: ‘roc_auc_score’ for ROC score, ‘prc_n_score’ for Precision @ rank n score.
 Returns
score
 Return type
float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., ROC AUC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_probability – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
pyod.models.lof module¶
Local Outlier Factor (LOF). Implemented on the scikit-learn library.

class
pyod.models.lof.
LOF
(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination=0.1, n_jobs=1)[source]¶ Bases:
pyod.models.base.BaseDetector
Wrapper of the scikit-learn LOF class with more functionalities. Unsupervised Outlier Detection using the Local Outlier Factor (LOF).
The anomaly score of each sample is called Local Outlier Factor. It measures the local deviation of density of a given sample with respect to its neighbors. It is local in that the anomaly score depends on how isolated the object is with respect to the surrounding neighborhood. More precisely, locality is given by knearest neighbors, whose distance is used to estimate the local density. By comparing the local density of a sample to the local densities of its neighbors, one can identify samples that have a substantially lower density than their neighbors. These are considered outliers. See [BBKNS00] for details.
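A compact sketch with scikit-learn's LocalOutlierFactor, which pyod's LOF wraps (pyod negates sklearn's negative_outlier_factor_ so that outliers score higher):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Dense gaussian cloud plus one isolated point.
X = np.vstack([np.random.RandomState(0).randn(50, 2), [[5.0, 5.0]]])

clf = LocalOutlierFactor(n_neighbors=20)
clf.fit(X)
scores = -clf.negative_outlier_factor_   # higher = more outlying, as in pyod

print(scores[-1] == scores.max())  # the isolated point has the largest LOF
```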
 Parameters
n_neighbors (int, optional (default=20)) – Number of neighbors to use by default for kneighbors queries. If n_neighbors is larger than the number of samples provided, all samples will be used.
algorithm ({'auto', 'ball_tree', 'kd_tree', 'brute'}, optional) –
Algorithm used to compute the nearest neighbors:
’ball_tree’ will use BallTree
’kd_tree’ will use KDTree
’brute’ will use a brute-force search.
’auto’ will attempt to decide the most appropriate algorithm based on the values passed to
fit()
method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
leaf_size (int, optional (default=30)) – Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
metric (string or callable, default 'minkowski') –
metric used for the distance computation. Any metric from scikitlearn or scipy.spatial.distance can be used.
If ‘precomputed’, the training input X is expected to be a distance matrix.
If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
Valid values for metric are:
from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’]
from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’]
See the documentation for scipy.spatial.distance for details on these metrics: http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
p (integer, optional (default = 2)) – Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances
metric_params (dict, optional (default = None)) – Additional keyword arguments for the metric function.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the decision function.
n_jobs (int, optional (default = 1)) – The number of parallel jobs to run for neighbors search. If -1, then the number of jobs is set to the number of CPU cores. Affects only kneighbors and kneighbors_graph methods.
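As a small aside on the metric parameters above, the Minkowski distance and its p = 1 (Manhattan) and p = 2 (Euclidean) special cases can be sketched as follows (standalone illustration, not the scikit-learn implementation):

```python
# Minkowski distance for a given p, showing the p = 1 (Manhattan) and
# p = 2 (Euclidean) special cases described in the parameter list above.

def minkowski(x, y, p=2):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

d1 = minkowski([0.0, 0.0], [3.0, 4.0], p=1)  # Manhattan: |3| + |4|
d2 = minkowski([0.0, 0.0], [3.0, 4.0], p=2)  # Euclidean: the 3-4-5 triangle
```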

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on
contamination
. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying
threshold_
on decision_scores_.
 Type
int, either 0 or 1
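The interplay of contamination, threshold_, and labels_ can be sketched as follows (a simplified stand-in for pyod's internal thresholding; the names are ours):

```python
# Simplified sketch of pyod's thresholding rule: contamination fixes how
# many training samples end up flagged as outliers.

def threshold_and_labels(decision_scores, contamination=0.1):
    # roughly n_samples * contamination samples should score above threshold
    n_outliers = int(round(len(decision_scores) * contamination))
    ranked = sorted(decision_scores, reverse=True)
    threshold = ranked[n_outliers - 1] if n_outliers > 0 else float("inf")
    labels = [1 if s >= threshold else 0 for s in decision_scores]
    return threshold, labels

scores = [0.1, 0.2, 0.15, 0.9, 0.3, 0.25, 0.2, 0.1, 0.05, 0.4]
threshold, labels = threshold_and_labels(scores, contamination=0.1)
# with 10 samples and contamination=0.1, exactly one sample is flagged
```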

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
Use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)
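The 'linear' option above is essentially min-max scaling of the raw outlier scores into [0, 1]; a minimal sketch (simplified: pyod fits the scaler on the training scores, and the function name here is ours):

```python
# Min-max conversion of raw outlier scores to [0, 1] "probabilities",
# as in the 'linear' method (simplified sketch, not pyod's exact code).

def linear_proba(train_scores, test_scores):
    lo, hi = min(train_scores), max(train_scores)
    span = hi - lo if hi > lo else 1.0
    # clip so that test scores outside the training range stay in [0, 1]
    return [min(max((s - lo) / span, 0.0), 1.0) for s in test_scores]

train = [1.0, 2.0, 3.0, 4.0, 5.0]
probs = linear_proba(train, [1.0, 3.0, 6.0])
```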

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.loci module¶
Local Correlation Integral (LOCI). Part of the code is adapted from https://github.com/Cloudy10/loci

class
pyod.models.loci.
LOCI
(contamination=0.1, alpha=0.5, k=3)[source]¶ Bases:
pyod.models.base.BaseDetector
Local Correlation Integral.
LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters), and offers the following advantages and novelties: (a) It provides an automatic, data-dictated cutoff to determine whether a point is an outlier—in contrast, previous methods force users to pick cutoffs, without any hints as to what cutoff value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score. (c) It can be computed as quickly as the best previous methods. Read more in [BPKGF03].
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
alpha (float, default = 0.5) – The neighbourhood parameter measures how large a neighbourhood should be considered “local”.
k (int, default = 3) – An outlier cutoff threshold for determining whether or not a point should be considered an outlier.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on
contamination
. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying
threshold_
on decision_scores_.
 Type
int, either 0 or 1
Examples
>>> from pyod.models.loci import LOCI
>>> from pyod.utils.data import generate_data
>>> n_train = 50
>>> n_test = 50
>>> contamination = 0.1
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=n_train, n_test=n_test,
...     contamination=contamination, random_state=42)
>>> clf = LOCI()
>>> clf.fit(X_train)
LOCI(alpha=0.5, contamination=0.1, k=None)

decision_function
(X)[source]¶ Predict raw anomaly scores of X using the fitted detector.
The anomaly score of an input sample is computed based on the fitted detector. For consistency, outliers are assigned with higher anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit the model using X as training data.
 Parameters
X (array, shape (n_samples, n_features)) – Training data.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
Use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.lscp module¶
Locally Selective Combination of Parallel Outlier Ensembles (LSCP). Adapted from the original implementation.

class
pyod.models.lscp.
LSCP
(detector_list, local_region_size=30, local_max_features=1.0, n_bins=10, random_state=None, contamination=0.1)[source]¶ Bases:
pyod.models.base.BaseDetector
Locally Selective Combination in Parallel Outlier Ensembles
LSCP is an unsupervised parallel outlier detection ensemble which selects competent detectors in the local region of a test instance. This implementation uses an Average of Maximum strategy. First, a heterogeneous list of base detectors is fit to the training data, and a pseudo ground truth for each training instance is generated by taking the maximum outlier score.
For each test instance: 1) The local region is defined to be the set of nearest training points in randomly sampled feature subspaces which occur more frequently than a defined threshold over multiple iterations.
2) Using the local region, a local pseudo ground truth is defined and the Pearson correlation is calculated between each base detector’s training outlier scores and the pseudo ground truth.
3) A histogram is built out of the Pearson correlation scores; detectors in the largest bin are selected as competent base detectors for the given test instance.
4) The average outlier score of the selected competent detectors is taken to be the final score.
See [BZNHL19] for details.
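Steps 2–4 above can be sketched schematically (illustrative only: the local region and each base detector's training scores on it are assumed to be given, and all names below are ours, not pyod's):

```python
# Schematic sketch of LSCP's detector-selection steps for one test instance.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def lscp_score(local_scores, test_scores, n_bins=2):
    # local_scores[d][i]: detector d's training score on local-region point i
    # step 2: local pseudo ground truth = element-wise maximum across detectors
    pseudo_gt = [max(col) for col in zip(*local_scores)]
    corrs = [pearson(s, pseudo_gt) for s in local_scores]
    # step 3: histogram the correlations, keep detectors in the fullest bin
    lo, hi = min(corrs), max(corrs)
    width = (hi - lo) / n_bins or 1.0
    bins = [min(int((c - lo) / width), n_bins - 1) for c in corrs]
    fullest = max(set(bins), key=bins.count)
    competent = [d for d, b in enumerate(bins) if b == fullest]
    # step 4: average the competent detectors' scores on the test instance
    return sum(test_scores[d] for d in competent) / len(competent)

# two detectors that agree on the local region, and one that does not
local_scores = [[0.1, 0.2, 0.9, 0.3],
                [0.15, 0.25, 0.95, 0.35],
                [0.05, 0.5, 0.05, 0.05]]
score = lscp_score(local_scores, test_scores=[0.8, 0.6, 0.1])
```

Here the two detectors whose scores correlate with the pseudo ground truth land in the same histogram bin and are selected; the dissenting detector is excluded from the average.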
 Parameters
detector_list (List, length must be greater than 1) – Base unsupervised outlier detectors from PyOD. (Note: requires fit and decision_function methods)
local_region_size (int, optional (default=30)) – Number of training points to consider in each iteration of the local region generation process (30 by default).
local_max_features (float in (0.5, 1.), optional (default=1.0)) – Maximum proportion of number of features to consider when defining the local region (1.0 by default).
n_bins (int, optional (default=10)) – Number of bins to use when selecting the local region
random_state (RandomState, optional (default=None)) – A random number generator instance to define the state of the random permutations generator.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function (0.1 by default).

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on
contamination
. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying
threshold_
on decision_scores_.
 Type
int, either 0 or 1
Examples
>>> from pyod.utils.data import generate_data
>>> from pyod.utils.utility import standardizer
>>> from pyod.models.lscp import LSCP
>>> from pyod.models.lof import LOF
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=50, n_test=50,
...     contamination=0.1, random_state=42)
>>> X_train, X_test = standardizer(X_train, X_test)
>>> detector_list = [LOF(), LOF()]
>>> clf = LSCP(detector_list)
>>> clf.fit(X_train)
LSCP(...)

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
Use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.mcd module¶
Outlier Detection with Minimum Covariance Determinant (MCD)

class
pyod.models.mcd.
MCD
(contamination=0.1, store_precision=True, assume_centered=False, support_fraction=None, random_state=None)[source]¶ Bases:
pyod.models.base.BaseDetector
Detecting outliers in a Gaussian distributed dataset using Minimum Covariance Determinant (MCD): robust estimator of covariance.
The Minimum Covariance Determinant covariance estimator is to be applied on Gaussian-distributed data, but could still be relevant on data drawn from a unimodal, symmetric distribution. It is not meant to be used with multi-modal data (the algorithm used to fit a MinCovDet object is likely to fail in such a case). One should consider projection pursuit methods to deal with multi-modal datasets.
First, fit a minimum covariance determinant model and then compute the Mahalanobis distance as the outlier degree of the data.
See [BRD99][BHR04] for details.
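The scoring step above is the Mahalanobis distance from a location/scatter estimate. The sketch below uses the classical (non-robust) mean and covariance in a 2-D case to keep it self-contained; MCD substitutes robust estimates computed from the most concentrated subset of the data (names here are ours, not pyod's):

```python
# Squared Mahalanobis distance of each 2-D sample from the classical mean
# and covariance: d^T * Sigma^{-1} * d for deviation vector d.

def mahalanobis_2d(X):
    n = len(X)
    mx = sum(p[0] for p in X) / n
    my = sum(p[1] for p in X) / n
    # 2x2 covariance matrix entries
    sxx = sum((p[0] - mx) ** 2 for p in X) / n
    syy = sum((p[1] - my) ** 2 for p in X) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in X) / n
    det = sxx * syy - sxy * sxy
    # inverse of the 2x2 covariance
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    scores = []
    for px, py in X:
        dx, dy = px - mx, py - my
        scores.append(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)
    return scores

X = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (1.0, 0.0), (0.0, 1.0), (10.0, -10.0)]
scores = mahalanobis_2d(X)
# the point far from the bulk gets the largest distance
```

Note how the single extreme point also inflates the classical covariance, dampening its own distance; replacing the classical estimates with MCD's robust ones is exactly what makes such points stand out more sharply.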
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
store_precision (bool) – Specify if the estimated precision is stored.
assume_centered (bool) – If True, the support of the robust location and the covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
support_fraction (float, 0 < support_fraction < 1) – The proportion of points to be included in the support of the raw MCD estimate. Default is None, which implies that the minimum value of support_fraction will be used within the algorithm: [n_samples + n_features + 1] / 2
random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

raw_location_
¶ The raw robust estimated location before correction and reweighting.
 Type
array-like, shape (n_features,)

raw_covariance_
¶ The raw robust estimated covariance before correction and reweighting.
 Type
array-like, shape (n_features, n_features)

raw_support_
¶ A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and reweighting.
 Type
array-like, shape (n_samples,)

location_
¶ Estimated robust location
 Type
array-like, shape (n_features,)

covariance_
¶ Estimated robust covariance matrix
 Type
array-like, shape (n_features, n_features)

precision_
¶ Estimated pseudo inverse matrix. (stored only if store_precision is True)
 Type
array-like, shape (n_features, n_features)

support_
¶ A mask of the observations that have been used to compute the robust estimates of location and shape.
 Type
array-like, shape (n_samples,)

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. These are the Mahalanobis distances of the training set (on which fit is called) observations.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on
contamination
. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying
threshold_
on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being outlier. Two approaches are possible:
Simply use min-max conversion to linearly transform the outlier scores into the range of [0, 1]. The model must be fitted first.
Use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – Probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.mo_gaal module¶
pyod.models.ocsvm module¶
One-class SVM detector. Implemented based on the scikit-learn library.

class
pyod.models.ocsvm.
OCSVM
(kernel='rbf', degree=3, gamma='auto', coef0=0.0, tol=0.001, nu=0.5, shrinking=True, cache_size=200, verbose=False, max_iter=-1, contamination=0.1)[source]¶ Bases:
pyod.models.base.BaseDetector
Wrapper of the scikit-learn one-class SVM class with more functionalities. Unsupervised Outlier Detection.
Estimate the support of a highdimensional distribution.
The implementation is based on libsvm. See http://scikit-learn.org/stable/modules/svm.html#svm-outlier-detection and [BScholkopfPST+01].
 Parameters
kernel (string, optional (default='rbf')) – Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to precompute the kernel matrix.
nu (float, optional) – An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken.
degree (int, optional (default=3)) – Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.
gamma (float, optional (default='auto')) – Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’. If gamma is ‘auto’ then 1/n_features will be used instead.
coef0 (float, optional (default=0.0)) – Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’.
tol (float, optional) – Tolerance for stopping criterion.
shrinking (bool, optional) – Whether to use the shrinking heuristic.
cache_size (float, optional) – Specify the size of the kernel cache (in MB).
verbose (bool, default: False) – Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.
max_iter (int, optional (default=-1)) – Hard limit on iterations within solver, or -1 for no limit.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
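As a small illustration of the kernel parameters above, the default RBF kernel with gamma='auto' resolved to 1/n_features can be sketched as follows (standalone sketch, not pyod or scikit-learn code; the function name is ours):

```python
# RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), with gamma='auto'
# resolved to 1 / n_features as described in the parameter list above.
import math

def rbf_kernel(x, y, gamma="auto"):
    if gamma == "auto":
        gamma = 1.0 / len(x)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0])      # identical points -> 1.0
k_far = rbf_kernel([0.0, 0.0], [2.0, 0.0])       # gamma = 1/2, ||d||^2 = 4
```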

support_
¶ Indices of support vectors.
 Type
array-like, shape = [n_SV]

support_vectors_
¶ Support vectors.
 Type
array-like, shape = [n_SV, n_features]

dual_coef_
¶ Coefficients of the support vectors in the decision function.
 Type
array, shape = [1, n_SV]

coef_
¶ Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel.
coef_ is a read-only property derived from dual_coef_ and support_vectors_
 Type
array, shape = [1, n_features]

intercept_
¶ Constant in the decision function.
 Type
array, shape = [1,]

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on
contamination
. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying
threshold_
on decision_scores_.
 Type
int, either 0 or 1

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None, sample_weight=None, **params)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
sample_weight (array-like, shape (n_samples,)) – Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained sub-objects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0,1].
 Return type
numpy array of shape (n_samples,)
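The ‘linear’ option can be illustrated with a small self-contained sketch (a hypothetical helper, not pyod's internal code; it is one plausible reading of the description above, where the training-score range sets the scale and results are clipped to [0,1]):

```python
import numpy as np

def linear_probability(train_scores, test_scores):
    """Illustrative min-max conversion of raw outlier scores to [0, 1].

    Scores are shifted and scaled by the min/max observed on the
    training scores, then clipped so out-of-range test scores stay
    inside [0, 1].
    """
    lo, hi = train_scores.min(), train_scores.max()
    proba = (test_scores - lo) / (hi - lo)
    return np.clip(proba, 0.0, 1.0)

train = np.array([0.0, 2.0, 4.0, 8.0])
test = np.array([1.0, 8.0, 10.0])
print(linear_probability(train, test))  # [0.125 1.    1.   ]
```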

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.pca module¶
Principal Component Analysis (PCA) Outlier Detector

class
pyod.models.pca.
PCA
(n_components=None, n_selected_components=None, contamination=0.1, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None, weighted=True, standardization=True)[source]¶ Bases:
pyod.models.base.BaseDetector
Principal component analysis (PCA) can be used in detecting outliers. PCA is a linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space.
In this procedure, the covariance matrix of the data can be decomposed into orthogonal vectors, called eigenvectors, associated with eigenvalues. The eigenvectors with high eigenvalues capture most of the variance in the data.
Therefore, a low-dimensional hyperplane constructed by k eigenvectors can capture most of the variance in the data. However, outliers differ from normal data points, and this difference is more obvious on the hyperplane constructed by the eigenvectors with small eigenvalues.
Therefore, outlier scores can be obtained as the sum of the projected distance of a sample on all eigenvectors. See [BSCSC03][BAgg15] for details.
Score(X) = sum of the weighted Euclidean distances between each sample and the hyperplane constructed by the selected eigenvectors
 Parameters
n_components (int, float, None or string) –
Number of components to keep. If n_components is not set, all components are kept:
n_components == min(n_samples, n_features)
If n_components == ‘mle’ and svd_solver == ‘full’, Minka’s MLE is used to guess the dimension.
If 0 < n_components < 1 and svd_solver == ‘full’, select the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components.
n_components cannot be equal to n_features for svd_solver == ‘arpack’.
n_selected_components (int, optional (default=None)) – Number of selected principal components for calculating the outlier scores. It is not necessarily equal to the total number of the principal components. If not set, use all principal components.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
copy (bool (default True)) – If False, data passed to fit are overwritten and running fit(X).transform(X) will not yield the expected results, use fit_transform(X) instead.
whiten (bool, optional (default False)) –
When True (False by default) the components_ vectors are multiplied by the square root of n_samples and then divided by the singular values to ensure uncorrelated outputs with unit component-wise variances.
Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions.
svd_solver (string {'auto', 'full', 'arpack', 'randomized'}) –
 auto :
the solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient ‘randomized’ method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
 full :
run exact full SVD calling the standard LAPACK solver via scipy.linalg.svd and select the components by post-processing
 arpack :
run SVD truncated to n_components calling ARPACK solver via scipy.sparse.linalg.svds. It requires strictly 0 < n_components < X.shape[1]
 randomized :
run randomized SVD by the method of Halko et al.
tol (float >= 0, optional (default .0)) – Tolerance for singular values computed by svd_solver == ‘arpack’.
iterated_power (int >= 0, or 'auto', (default 'auto')) – Number of iterations for the power method computed by svd_solver == ‘randomized’.
random_state (int, RandomState instance or None, optional (default None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when svd_solver == ‘arpack’ or ‘randomized’.
weighted (bool, optional (default=True)) – If True, the eigenvalues are used in score computation. The eigenvectors with small eigenvalues come with more importance in the outlier score calculation.
standardization (bool, optional (default=True)) – If True, perform standardization first to convert data to zero mean and unit variance. See http://scikit-learn.org/stable/auto_examples/preprocessing/plot_scaling_importance.html

components_
¶ Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.
 Type
array, shape (n_components, n_features)

explained_variance_
¶ The amount of variance explained by each of the selected components.
Equal to n_components largest eigenvalues of the covariance matrix of X.
 Type
array, shape (n_components,)

explained_variance_ratio_
¶ Percentage of variance explained by each of the selected components.
If n_components is not set then all components are stored and the sum of explained variances is equal to 1.0.
 Type
array, shape (n_components,)

singular_values_
¶ The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.
 Type
array, shape (n_components,)

mean_
¶ Per-feature empirical mean, estimated from the training set.
Equal to X.mean(axis=0).
 Type
array, shape (n_features,)

n_components_
¶ The estimated number of components. When n_components is set to ‘mle’ or a number between 0 and 1 (with svd_solver == ‘full’) this number is estimated from input data. Otherwise it equals the parameter n_components, or n_features if n_components is None.
 Type
int

noise_variance_
¶ The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf. It is required to compute the estimated data covariance and score samples.
Equal to the average of (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X.
 Type
float

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1
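To make the scoring recipe above concrete, here is a numpy-only sketch of the idea: standardize the data, decompose the covariance matrix into eigenvectors, then accumulate eigenvalue-weighted squared projections so that deviation along small-eigenvalue directions dominates. This illustrates the principle only, not pyod's exact implementation; the synthetic data and weighting scheme are the author's assumptions here.

```python
import numpy as np

rng = np.random.RandomState(0)
# Three features with very different scales; feature 2 has tiny variance.
X = rng.randn(200, 3) @ np.diag([3.0, 1.0, 0.1])
X[0] += np.array([0.0, 0.0, 2.0])   # outlier along the low-variance axis

# standardization=True analogue: zero mean, unit variance per feature.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# Project each sample onto every eigenvector and weight the squared
# projection by 1/eigenvalue, so deviation in directions that explain
# little variance contributes more to the outlier score.
proj = Xs @ eigvecs
scores = np.sum((proj ** 2) / eigvals, axis=1)

print(int(np.argmax(scores)))   # the planted outlier scores highest
```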

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

property
explained_variance_
The amount of variance explained by each of the selected components.
Equal to n_components largest eigenvalues of the covariance matrix of X.
Decorator for scikit-learn PCA attributes.

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained sub-objects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

property
noise_variance_
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf. It is required to compute the estimated data covariance and score samples.
Equal to the average of (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X.
Decorator for scikit-learn PCA attributes.

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0,1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.sod module¶
Subspace Outlier Detection (SOD)

class
pyod.models.sod.
SOD
(contamination=0.1, n_neighbors=20, ref_set=10, alpha=0.8)[source]¶ Bases:
pyod.models.base.BaseDetector
Subspace outlier detection (SOD) schema aims to detect outliers in varying subspaces of a high-dimensional feature space. For each data object, SOD explores the axis-parallel subspace spanned by the data object’s neighbors and determines how much the object deviates from the neighbors in this subspace.
See [BKKrogerSZ09] for details.
 Parameters
n_neighbors (int, optional (default=20)) – Number of neighbors to use by default for k neighbors queries.
ref_set (int, optional (default=10)) – Specifies the number of shared nearest neighbors to create the reference set. Note that ref_set must be smaller than n_neighbors.
alpha (float in (0., 1.), optional (default=0.8)) – Specifies the lower limit for selecting subspace. 0.8 is set as default as suggested in the original paper.
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1
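The SOD idea (shared-nearest-neighbour reference set, low-variance subspace, deviation within that subspace) can be sketched for a single query point as follows. This is a deliberately simplified, numpy-only illustration of the scheme described above, not pyod's implementation; the helper name and the toy data are assumptions.

```python
import numpy as np

def sod_score(X, idx, n_neighbors=15, ref_set=8, alpha=0.8):
    """Simplified single-point SOD score (illustrative sketch).

    1. Find the query's k nearest neighbours.
    2. Keep as the reference set the neighbours sharing the most
       nearest neighbours with the query (SNN similarity).
    3. The relevant subspace = dimensions where the reference set's
       variance is below alpha * mean variance.
    4. Score = mean squared deviation from the reference-set mean,
       restricted to that subspace.
    """
    def knn_of(i):
        d = np.linalg.norm(X - X[i], axis=1)
        return np.argsort(d)[1:n_neighbors + 1]   # exclude the point itself

    knn = knn_of(idx)
    query_nn = set(knn)
    snn = np.array([len(query_nn & set(knn_of(i))) for i in knn])
    ref = knn[np.argsort(snn)[::-1][:ref_set]]    # reference set

    var = X[ref].var(axis=0)
    subspace = var < alpha * var.mean()           # low-variance dimensions
    if not subspace.any():
        return 0.0
    mean = X[ref].mean(axis=0)
    return float(np.mean((X[idx, subspace] - mean[subspace]) ** 2))

rng = np.random.RandomState(1)
X = rng.randn(60, 5)
X[:, 2] *= 0.01          # dimension 2 is nearly constant...
X[0, 2] = 3.0            # ...except for this planted outlier
scores = [sod_score(X, i) for i in range(len(X))]
print(int(np.argmax(scores)))   # the planted outlier
```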

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type
object

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained sub-objects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0,1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.so_gaal module¶
pyod.models.sos module¶
Stochastic Outlier Selection (SOS). Part of the code is adapted from https://github.com/jeroenjanssens/scikit-sos

class
pyod.models.sos.
SOS
(contamination=0.1, perplexity=4.5, metric='euclidean', eps=1e-05)[source]¶ Bases:
pyod.models.base.BaseDetector
Stochastic Outlier Selection.
SOS employs the concept of affinity to quantify the relationship from one data point to another data point. Affinity is proportional to the similarity between two data points. So, a data point has little affinity with a dissimilar data point. A data point is selected as an outlier when all the other data points have insufficient affinity with it. Read more in the [BJHuszarPvdH12].
 Parameters
contamination (float in (0., 0.5), optional (default=0.1)) – The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
perplexity (float, optional (default=4.5)) – A smooth measure of the effective number of neighbours. The perplexity parameter is similar to the parameter k in the kNN algorithm (the number of nearest neighbors). The range of perplexity can be any real number between 1 and n-1, where n is the number of samples.
metric (str, default 'euclidean') –
Metric used for the distance computation. Any metric from scipy.spatial.distance can be used.
Valid values for metric are:
’euclidean’
from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’]
See the documentation for scipy.spatial.distance for details on these metrics: http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
eps (float, optional (default = 1e-5)) – Tolerance threshold for floating point errors.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

threshold_
¶ The threshold is based on contamination. It is the n_samples * contamination most abnormal samples in decision_scores_. The threshold is calculated for generating binary outlier labels.
 Type
float

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1
Examples
>>> from pyod.models.sos import SOS
>>> from pyod.utils.data import generate_data
>>> n_train = 50
>>> n_test = 50
>>> contamination = 0.1
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=n_train, n_test=n_test,
...     contamination=contamination, random_state=42)
>>>
>>> clf = SOS()
>>> clf.fit(X_train)
SOS(contamination=0.1, eps=1e-05, metric='euclidean', perplexity=4.5)

decision_function
(X)[source]¶ Predict raw anomaly score of X using the fitted detector.
The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y=None)[source]¶ Fit detector. y is ignored in unsupervised methods.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
self – Fitted estimator.
 Return type

fit_predict
(X, y=None)¶ DEPRECATED
 Fit detector first and then predict whether a particular sample
is an outlier or not. y is ignored in unsupervised models.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')¶ DEPRECATED
 Fit the detector, predict on samples, and evaluate the model by
predefined metrics, e.g., ROC.
 X : numpy array of shape (n_samples, n_features)
The input samples.
 y : Ignored
Not used, present for API consistency by convention.
 scoring : str, optional (default='roc_auc_score')
Evaluation metric:
‘roc_auc_score’: ROC score
‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained sub-objects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)¶ Predict if a particular sample is an outlier or not.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X, method='linear')¶ Predict the probability of a sample being an outlier. Two approaches are possible:
simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
use unifying scores, see [BKKSZ11].
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
method (str, optional (default='linear')) – probability conversion method. It must be one of ‘linear’ or ‘unify’.
 Returns
outlier_labels – The outlier probability of the input samples, ranging in [0,1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it’s possible to update each component of a nested object. See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object

pyod.models.xgbod module¶
XGBOD: Improving Supervised Outlier Detection with Unsupervised Representation Learning. A semi-supervised outlier detection framework.

class
pyod.models.xgbod.
XGBOD
(estimator_list=None, standardization_flag_list=None, max_depth=3, learning_rate=0.1, n_estimators=100, silent=True, objective='binary:logistic', booster='gbtree', n_jobs=1, nthread=None, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, random_state=0, missing=None, **kwargs)[source]¶ Bases:
pyod.models.base.BaseDetector
XGBOD class for outlier detection. It first uses the passed-in unsupervised outlier detectors to extract richer representations of the data, and then concatenates the newly generated features to the original features to construct the augmented feature space. An XGBoost classifier is then applied on this augmented feature space. Read more in the [BZH18].
 Parameters
estimator_list (list, optional (default=None)) – The list of pyod detectors passed in for unsupervised learning
standardization_flag_list (list, optional (default=None)) – The list of boolean flags for indicating whether to take standardization for each detector.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
n_estimators (int) – Number of boosted trees to fit.
silent (bool) – Whether to print messages while running boosting.
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
n_jobs (int) – Number of parallel threads used to run xgboost. (replaces nthread)
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (int) – Minimum sum of instance weight (hessian) needed in a child.
max_delta_step (int) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each split, in each level.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights.
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights.
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) – Random number seed. (replaces seed)
missing (float, optional) – Value in the data which needs to be present as a missing value. If None, defaults to np.nan.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note: **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

decision_scores_
¶ The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
 Type
numpy array of shape (n_samples,)

labels_
¶ The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying threshold_ on decision_scores_.
 Type
int, either 0 or 1
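The relationship between decision_scores_, the contamination rate, and labels_ can be sketched in plain Python. This is a simplified illustration of the thresholding idea, not pyod's exact implementation:

```python
# Sketch: derive binary labels from outlier scores and a contamination
# rate, mirroring how threshold_ and labels_ relate to decision_scores_.
# Simplified illustration; pyod's internals may differ in details.

def labels_from_scores(decision_scores, contamination=0.1):
    """Flag the top `contamination` fraction of samples as outliers (1)."""
    n_outliers = int(len(decision_scores) * contamination)
    # threshold_ is the score of the n-th most abnormal training sample
    threshold = sorted(decision_scores, reverse=True)[n_outliers - 1]
    return [1 if s >= threshold else 0 for s in decision_scores]

scores = [0.2, 0.1, 0.9, 0.3, 0.15, 0.25, 0.05, 0.4, 0.35, 0.8]
labels = labels_from_scores(scores, contamination=0.2)
# the two most abnormal samples (scores 0.9 and 0.8) are labeled 1
```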

decision_function
(X)[source]¶ Predict raw anomaly scores of X using the fitted detector.
The anomaly score of an input sample is computed based on the fitted detector. For consistency, outliers are assigned with higher anomaly scores.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples. Sparse matrices are accepted only if they are supported by the base estimator.
 Returns
anomaly_scores – The anomaly score of the input samples.
 Return type
numpy array of shape (n_samples,)

fit
(X, y)[source]¶ Fit the model using X and y as training data.
 Parameters
X (numpy array of shape (n_samples, n_features)) – Training data.
y (numpy array of shape (n_samples,)) –
The ground truth (binary label)
0 : inliers
1 : outliers
 Returns
self
 Return type
object

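Unlike unsupervised detectors, XGBOD [BZH18] uses the labels y: it augments the original features with transformed outlier scores (TOS) produced by unsupervised detectors, then fits the XGBoost classifier on the augmented data. The feature-augmentation step can be sketched as follows (a conceptual illustration with hypothetical score lists; pyod builds the unsupervised detector list internally):

```python
# Sketch of XGBOD's feature augmentation: outlier scores produced by
# unsupervised detectors (e.g. KNN, LOF) are appended to each sample's
# original features before the supervised XGBoost stage is fitted.
# Conceptual illustration only, not pyod's implementation.

def augment_features(X, detector_scores):
    """Append one score column per unsupervised detector to X."""
    return [row + [scores[i] for scores in detector_scores]
            for i, row in enumerate(X)]

X = [[1.0, 2.0], [3.0, 4.0]]
knn_scores = [0.2, 0.8]   # hypothetical scores from a KNN detector
lof_scores = [0.1, 0.9]   # hypothetical scores from an LOF detector
X_aug = augment_features(X, [knn_scores, lof_scores])
# X_aug == [[1.0, 2.0, 0.2, 0.1], [3.0, 4.0, 0.8, 0.9]]
```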
fit_predict
(X, y)[source]¶ DEPRECATED
 Fit the detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score
(X, y, scoring='roc_auc_score')[source]¶ Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
y (Ignored) – Not used, present for API consistency by convention.
scoring (str, optional (default='roc_auc_score')) –
Evaluation metric:
’roc_auc_score’: ROC score
’prc_n_score’: Precision @ rank n score
 Returns
score
 Return type
float

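The 'prc_n_score' option corresponds to precision @ rank n, where n is the number of true outliers: take the n highest-scoring samples and measure what fraction of them are true outliers. A minimal stdlib sketch of that logic (an illustration; pyod's own scoring helper may differ):

```python
# Sketch of precision @ rank n ("prc_n_score"): with n = number of true
# outliers, take the n samples with the highest anomaly scores and
# compute the fraction of them that are true outliers.

def precision_n_score(y_true, scores):
    n = sum(y_true)  # number of true outliers
    # indices of the n samples with the highest anomaly scores
    top_n = sorted(range(len(scores)),
                   key=lambda i: scores[i], reverse=True)[:n]
    return sum(y_true[i] for i in top_n) / n

y_true = [0, 0, 1, 0, 1, 0, 0, 0, 0, 1]   # 3 true outliers
scores = [0.1, 0.2, 0.9, 0.3, 0.7, 0.2, 0.1, 0.4, 0.3, 0.5]
# top-3 scores (0.9, 0.7, 0.5) all belong to true outliers -> 1.0
```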
get_params
(deep=True)¶ Get parameters for this estimator.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Parameters
deep (bool, optional (default=True)) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
params – Parameter names mapped to their values.
 Return type
mapping of string to any

predict
(X)[source]¶ Predict if a particular sample is an outlier or not. This calls the xgboost predict function internally.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
 Return type
numpy array of shape (n_samples,)

predict_proba
(X)[source]¶ Predict the probability of a sample being an outlier. This calls the xgboost predict_proba function internally.
 Parameters
X (numpy array of shape (n_samples, n_features)) – The input samples.
 Returns
outlier_labels – For each observation, the probability that it is an outlier according to the fitted model, ranging in [0, 1].
 Return type
numpy array of shape (n_samples,)

set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
See http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html and sklearn/base.py for more information.
 Returns
self
 Return type
object
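The <component>__<parameter> convention can be illustrated with a small stand-alone sketch. This is a simplified re-implementation of the routing idea using a plain dict, not sklearn's actual code:

```python
# Sketch of the <component>__<parameter> convention used by set_params:
# a key like "clf__max_depth" is split on the first "__" and routed to
# the nested component's parameters. Simplified illustration only.

def set_params(params, **updates):
    for key, value in updates.items():
        if "__" in key:
            component, _, sub_key = key.partition("__")
            params[component][sub_key] = value  # update nested component
        else:
            params[key] = value                 # update top-level param
    return params

config = {"contamination": 0.1, "clf": {"max_depth": 3}}
set_params(config, contamination=0.05, clf__max_depth=7)
# config is now {"contamination": 0.05, "clf": {"max_depth": 7}}
```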
Module contents¶
References
 BAgg15
Charu C Aggarwal. Outlier analysis. In Data mining, 75–79. Springer, 2015.
 BAS15
Charu C Aggarwal and Saket Sathe. Theoretical foundations and algorithms for outlier ensembles. ACM SIGKDD Explorations Newsletter, 17(1):24–47, 2015.
 BAP02
Fabrizio Angiulli and Clara Pizzuti. Fast outlier detection in high dimensional spaces. In European Conference on Principles of Data Mining and Knowledge Discovery, 15–27. Springer, 2002.
 BAAR96
Andreas Arning, Rakesh Agrawal, and Prabhakar Raghavan. A linear method for deviation detection in large databases. In KDD, volume 1141, 972–981. 1996.
 BBKNS00
Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jörg Sander. LOF: identifying density-based local outliers. In ACM SIGMOD Record, volume 29, 93–104. ACM, 2000.
 BGD12
Markus Goldstein and Andreas Dengel. Histogram-based outlier score (HBOS): a fast unsupervised anomaly detection algorithm. KI-2012: Poster and Demo Track, pages 59–63, 2012.
 BHR04
Johanna Hardin and David M Rocke. Outlier detection in the multiple cluster setting using the minimum covariance determinant estimator. Computational Statistics & Data Analysis, 44(4):625–638, 2004.
 BHXD03
Zengyou He, Xiaofei Xu, and Shengchun Deng. Discovering cluster-based local outliers. Pattern Recognition Letters, 24(9–10):1641–1650, 2003.
 BJHuszarPvdH12
JHM Janssens, Ferenc Huszár, EO Postma, and HJ van den Herik. Stochastic outlier selection. Technical Report TiCC TR 2012-001, Tilburg University, Tilburg Center for Cognition and Communication, Tilburg, The Netherlands, 2012.
 BKKSZ11
Hans-Peter Kriegel, Peer Kröger, Erich Schubert, and Arthur Zimek. Interpreting and unifying outlier scores. In Proceedings of the 2011 SIAM International Conference on Data Mining, 13–24. SIAM, 2011.
 BKKrogerSZ09
Hans-Peter Kriegel, Peer Kröger, Erich Schubert, and Arthur Zimek. Outlier detection in axis-parallel subspaces of high dimensional data. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 831–838. Springer, 2009.
 BKZ+08
Hans-Peter Kriegel, Arthur Zimek, and others. Angle-based outlier detection in high-dimensional data. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, 444–452. ACM, 2008.
 BLK05
Aleksandar Lazarevic and Vipin Kumar. Feature bagging for outlier detection. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, 157–166. ACM, 2005.
 BLTZ08
Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In Data Mining, 2008. ICDM ‘08. Eighth IEEE International Conference on, 413–422. IEEE, 2008.
 BLTZ12
Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation-based anomaly detection. ACM Transactions on Knowledge Discovery from Data (TKDD), 6(1):3, 2012.
 BPKGF03
Spiros Papadimitriou, Hiroyuki Kitagawa, Phillip B Gibbons, and Christos Faloutsos. LOCI: fast outlier detection using the local correlation integral. In Data Engineering, 2003. Proceedings. 19th International Conference on, 315–326. IEEE, 2003.
 BRRS00
Sridhar Ramaswamy, Rajeev Rastogi, and Kyuseok Shim. Efficient algorithms for mining outliers from large data sets. In ACM Sigmod Record, volume 29, 427–438. ACM, 2000.
 BRD99
Peter J Rousseeuw and Katrien Van Driessen. A fast algorithm for the minimum covariance determinant estimator. Technometrics, 41(3):212–223, 1999.
 BScholkopfPST+01
Bernhard Schölkopf, John C Platt, John Shawe-Taylor, Alex J Smola, and Robert C Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
 BSCSC03
Mei-Ling Shyu, Shu-Ching Chen, Kanoksri Sarinnapakorn, and Li-Wu Chang. A novel anomaly detection scheme based on principal component classifier. Technical Report, Miami Univ Coral Gables FL Dept of Electrical and Computer Engineering, 2003.
 BTCFC02
Jian Tang, Zhixiang Chen, Ada Wai-Chee Fu, and David W Cheung. Enhancing effectiveness of outlier detections for low density patterns. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 535–548. Springer, 2002.
 BZH18
Yue Zhao and Maciej K Hryniewicki. XGBOD: improving supervised outlier detection with unsupervised representation learning. In International Joint Conference on Neural Networks (IJCNN). IEEE, 2018.
 BZNHL19
Yue Zhao, Zain Nasrullah, Maciej K Hryniewicki, and Zheng Li. LSCP: locally selective combination in parallel outlier ensembles. In Proceedings of the 2019 SIAM International Conference on Data Mining, SDM 2019, 585–593. Calgary, Canada, May 2019. SIAM. URL: https://doi.org/10.1137/1.9781611975673.66, doi:10.1137/1.9781611975673.66.