Low Order Moments
Computes basic dataset characteristics such as sums, means,
second-order raw moments, variances, standard deviations, etc.
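
For illustration, a minimal NumPy sketch of these quantities per
feature column (illustrative only, not the library's own interface):

    import numpy as np

    X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # rows = observations
    sums = X.sum(axis=0)
    means = X.mean(axis=0)
    raw2 = (X ** 2).mean(axis=0)        # second-order raw moments
    variances = X.var(axis=0, ddof=1)   # sample variances
    stds = X.std(axis=0, ddof=1)        # sample standard deviations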

Quantile
Computes quantiles, which summarize the distribution of data by
splitting it into equal-sized groups defined by quantile orders.
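
A small NumPy example of the idea, assuming quartile orders 0.25, 0.5,
and 0.75:

    import numpy as np

    data = np.array([7.0, 1.0, 3.0, 9.0, 5.0])
    # The quartiles split the sorted data into four equal-sized groups.
    q = np.quantile(data, [0.25, 0.5, 0.75])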

Correlation and Variance-Covariance Matrices
Quantifies the pairwise statistical relationships between feature vectors.
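
For example, in NumPy (illustrative only):

    import numpy as np

    X = np.random.rand(100, 3)           # 100 observations, 3 features
    cov = np.cov(X, rowvar=False)        # 3x3 variance-covariance matrix
    corr = np.corrcoef(X, rowvar=False)  # 3x3 correlation matrix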

Cosine Distance Matrix
Measures pairwise similarity between feature vectors using cosine distances.
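
A compact NumPy sketch of the pairwise cosine distances between rows:

    import numpy as np

    X = np.random.rand(5, 3)                      # 5 feature vectors
    norms = np.linalg.norm(X, axis=1)
    cos_sim = (X @ X.T) / np.outer(norms, norms)  # cosine similarities
    cos_dist = 1.0 - cos_sim                      # 5x5 cosine distance matrix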

Correlation Distance Matrix
Measures pairwise similarity between feature vectors using correlation
distances.
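
Correlation distances can be sketched similarly, as one minus the
Pearson correlation matrix:

    import numpy as np

    X = np.random.rand(5, 3)
    # np.corrcoef treats each row as a variable, so this yields the
    # 5x5 correlation distance matrix between the feature vectors.
    corr_dist = 1.0 - np.corrcoef(X)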

Cholesky Decomposition
Decomposes a symmetric positive-definite matrix into the product of a
lower triangular matrix and its transpose. This decomposition is a basic
operation used in solving linear systems, non-linear optimization,
Kalman filtering, etc.
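
A minimal NumPy illustration, including a solve of A x = b through the
factor:

    import numpy as np

    A = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric positive-definite
    L = np.linalg.cholesky(A)               # lower triangular factor
    assert np.allclose(L @ L.T, A)
    b = np.array([1.0, 2.0])
    y = np.linalg.solve(L, b)               # solve L y = b
    x = np.linalg.solve(L.T, y)             # then L^T x = y, so A x = b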

QR Decomposition
Decomposes a general matrix into the product of an orthogonal matrix
and an upper triangular matrix. This decomposition is used in solving
linear inverse and least-squares problems. It is also a fundamental
operation in finding eigenvalues and eigenvectors.
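
For instance, solving a least-squares problem through QR in NumPy:

    import numpy as np

    A = np.random.rand(6, 3)         # overdetermined system
    b = np.random.rand(6)
    Q, R = np.linalg.qr(A)           # A = Q R: Q orthogonal, R upper triangular
    x = np.linalg.solve(R, Q.T @ b)  # least-squares solution of A x = b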

Singular Value Decomposition (SVD)
SVD decomposes a matrix into the product of a matrix of left singular
vectors, a diagonal matrix of singular values, and a matrix of right
singular vectors. It is the basis of principal component analysis,
solving linear inverse problems, and data fitting.
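
An illustrative NumPy sketch of the factorization:

    import numpy as np

    A = np.random.rand(4, 3)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # U: left singular vectors, s: singular values, Vt: right singular vectors
    assert np.allclose(U @ np.diag(s) @ Vt, A)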

Principal Component Analysis (PCA)
PCA reduces the dimensionality of data by transforming input feature
vectors into a smaller set of mutually orthogonal principal components.
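
A minimal PCA sketch via SVD of the centered data (illustrative, not
the library's routine):

    import numpy as np

    X = np.random.rand(100, 5)
    Xc = X - X.mean(axis=0)          # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:2]              # two leading principal components
    X_reduced = Xc @ components.T    # project onto 2 dimensions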

K-Means
Partitions a dataset into clusters of similar data points. Each cluster
is represented by a centroid, which is the mean of all data points in
the cluster.
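
A bare-bones sketch of Lloyd's iteration in NumPy (empty clusters are
not handled):

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest centroid.
            d = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
            labels = d.argmin(axis=1)
            # Recompute each centroid as the mean of its assigned points.
            centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return labels, centroids

    labels, centroids = kmeans(np.random.rand(100, 2), k=3)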

Expectation-Maximization
Finds maximum-likelihood estimates of model parameters. It is used to
fit the Gaussian mixture model as a clustering method and can also be
applied to non-linear dimensionality reduction, missing-value problems, etc.
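
As an illustration, fitting a Gaussian mixture by EM with scikit-learn
(standing in for the library's own routine):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
    gmm = GaussianMixture(n_components=2).fit(X)  # EM fits the mixture
    labels = gmm.predict(X)                       # cluster assignments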

Outlier Detection
Identifies observations that are abnormally distant from other
observations. Either an entire feature vector (multivariate) or a
single feature value (univariate) can be considered when determining
whether the corresponding observation is an outlier.
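
A univariate sketch using a z-score rule (the 3-sigma threshold is an
illustrative choice):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0.0, 1.0, 100), [10.0]])  # 10.0 is an outlier
    z = (x - x.mean()) / x.std()
    outliers = np.abs(z) > 3.0   # flag values more than 3 std devs from the mean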

Association Rules
Discovers relationships between variables that hold with a certain
level of confidence.
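
A toy example computing the support and confidence of one rule in plain
Python (item names are invented for illustration):

    # Evaluate the rule {bread} -> {butter} over four transactions.
    transactions = [{"bread", "butter"}, {"bread"},
                    {"bread", "butter", "milk"}, {"milk"}]
    n = len(transactions)
    support = sum("bread" in t and "butter" in t for t in transactions) / n
    confidence = support / (sum("bread" in t for t in transactions) / n)
    # support = 0.5, confidence = 2/3: butter appears in 2 of 3 bread baskets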

Linear and Radial Basis Function Kernel Functions
Map data into a higher-dimensional space, enabling linear methods to
capture non-linear structure.
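
Both kernels can be sketched directly in NumPy (gamma is an
illustrative parameter choice):

    import numpy as np

    X = np.random.rand(4, 3)
    Y = np.random.rand(5, 3)
    linear = X @ Y.T                                # linear kernel: x . y
    sq_dists = ((X[:, None] - Y[None, :]) ** 2).sum(axis=2)
    gamma = 0.5
    rbf = np.exp(-gamma * sq_dists)                 # RBF: exp(-gamma ||x-y||^2)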

Quality Metrics
Computes a set of numeric values to characterize quantitative properties
of the results returned by analytical algorithms. These metrics include
confusion matrix, accuracy, precision, recall, F-score, etc.
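
A hand-rolled binary-classification sketch of these metrics in NumPy:

    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 1])
    y_pred = np.array([1, 0, 0, 1, 1, 1])
    tp = np.sum((y_pred == 1) & (y_true == 1))   # confusion matrix entries
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)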

In addition, the following machine learning functions are available:

Linear Regression
Models the relationship between dependent variables and one or more
explanatory variables by fitting linear equations to observed data.
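
For example, an ordinary least-squares fit with an intercept in NumPy
(the coefficients 3.0, -2.0, and 1.0 are chosen for illustration):

    import numpy as np

    X = np.random.rand(50, 2)                     # explanatory variables
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0       # dependent variable
    A = np.column_stack([X, np.ones(len(X))])     # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # coef -> [3.0, -2.0, 1.0]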

Naïve Bayes Classifier
Splits observations into distinct classes by assigning labels. Naïve
Bayes is a probabilistic classifier that assumes independence between
features. Often used in text classification and medical diagnosis, it
works well even when there is some level of dependence between features.
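
An illustrative scikit-learn sketch (standing in for the library's own
classifier):

    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    clf = GaussianNB().fit(X, y)   # assumes features independent within a class
    labels = clf.predict(X)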

Boosting
Builds a strong classifier from an ensemble of weighted weak
classifiers by iteratively re-weighting training observations
according to the accuracy measured for the weak classifiers. A decision
stump is provided as a
weak classifier. Available boosting algorithms include AdaBoost (a
binary classifier), BrownBoost (a binary classifier), and LogitBoost (a
multi-class classifier).
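
A scikit-learn sketch of AdaBoost over decision stumps (illustrative,
not this library's interface):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    # The default base estimator is a decision stump (a depth-1 tree).
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    labels = clf.predict(X)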

Support Vector Machine (SVM)
SVM is a popular binary classifier. It computes a hyperplane that
separates observed feature vectors into two classes.
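
For illustration, a linear SVM in scikit-learn:

    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)  # two classes
    clf = SVC(kernel="linear").fit(X, y)  # hyperplane separating the classes
    labels = clf.predict(X)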

Multi-Class Classifier
Builds a multi-class classifier using a binary classifier such as SVM.
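
For example, a one-vs-one scheme wrapping a binary SVM in scikit-learn
(illustrative only):

    from sklearn.datasets import load_iris
    from sklearn.multiclass import OneVsOneClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)  # three classes
    clf = OneVsOneClassifier(SVC(kernel="linear")).fit(X, y)
    labels = clf.predict(X)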