This is a toolbox for artificial neural networks,
based on the "Matrix ANN" book.
Current features:
- Only layered feedforward networks are supported *directly* at the moment
(for others use the "hooks" provided)
- Unlimited number of layers
- Unlimited number of neurons per layer (set independently for each layer)
- User-defined activation function (defaults to logistic)
- User-defined error function (defaults to SSE)
- Algorithms implemented so far:
* standard (vanilla) with or without bias, on-line or batch
* momentum with or without bias, on-line or batch
* SuperSAB with or without bias, on-line or batch
* Conjugate gradients
* Jacobian computation
        * Computation of the product between a "vector" and the
          Hessian
- Some helper functions are provided
For full descriptions, start with the top-level "ANN" man page.
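The default activation and error functions can be sketched as follows. This is a Python illustration of the usual conventions (the toolbox itself is Scilab code, and the exact normalization of its SSE may differ):

```python
import math

# Illustrative sketch of the toolbox defaults (assumed conventions,
# not the Scilab API): logistic activation and sum-of-squares error.

def log_activ(x):
    """Logistic activation: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def d_log_activ(y):
    """Derivative of the logistic, written in terms of its output y."""
    return y * (1.0 - y)

def sum_of_sqr(targets, outputs):
    """Sum-of-squares error with the common 1/2 factor, so that the
    derivative with respect to an output is simply (y - t)."""
    return 0.5 * sum((t - y) ** 2 for t, y in zip(targets, outputs))
```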
Functions:
ann_FF — Algorithms for feedforward nets.
ann_FF_ConjugGrad — Conjugate Gradient algorithm.
  ann_FF_Hess — computes the Hessian by finite differences.
ann_FF_INT — internal implementation of feedforward nets.
  ann_FF_Jacobian — computes the Jacobian by finite differences.
  ann_FF_Jacobian_BP — computes the Jacobian through backpropagation.
ann_FF_Mom_batch — batch backpropagation with momentum.
ann_FF_Mom_batch_nb — batch backpropagation with momentum (without bias).
ann_FF_Mom_online — online backpropagation with momentum.
  ann_FF_Mom_online_nb — online backpropagation with momentum (without bias).
ann_FF_SSAB_batch — batch SuperSAB algorithm.
ann_FF_SSAB_batch_nb — batch SuperSAB algorithm (without bias).
ann_FF_SSAB_online — online SuperSAB training algorithm.
  ann_FF_SSAB_online_nb — online SuperSAB training algorithm (without bias).
ann_FF_Std_batch — standard batch backpropagation.
ann_FF_Std_batch_nb — standard batch backpropagation (without bias).
ann_FF_Std_online — online standard backpropagation.
  ann_FF_Std_online_nb — online standard backpropagation (without bias).
  ann_FF_VHess — product between a "vector" V and the Hessian.
  ann_FF_grad — error gradient through finite differences.
  ann_FF_grad_BP — error gradient through backpropagation.
  ann_FF_grad_BP_nb — error gradient through backpropagation (without bias).
  ann_FF_grad_nb — error gradient through finite differences (without bias).
ann_FF_init — initialize the weight hypermatrix.
ann_FF_init_nb — initialize the weight hypermatrix (without bias).
  ann_FF_run — run patterns through a feedforward net.
  ann_FF_run_nb — run patterns through a feedforward net (without bias).
  ann_d_log_activ — derivative of the logistic activation function.
  ann_d_sum_of_sqr — derivative of the sum-of-squares error.
  ann_log_activ — logistic activation function.
  ann_pat_shuffle — randomly shuffles patterns for an ANN.
  ann_sum_of_sqr — calculates the sum-of-squares error.
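For orientation, what ann_FF_run and ann_FF_Std_online conceptually do can be sketched for the simplest possible net, a single logistic neuron with bias. This is a hypothetical Python illustration, not the Scilab implementation; all names and the learning rate are assumptions:

```python
import math

def run(w, x):
    """Forward pass: logistic of the weighted sum (w[0] is the bias)."""
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-s))

def std_online_step(w, x, t, lr=0.5):
    """One on-line standard-backprop update for SSE error E = (t - y)^2 / 2."""
    y = run(w, x)
    delta = (y - t) * y * (1.0 - y)   # dE/ds via the logistic derivative
    w[0] -= lr * delta                # bias update
    for i, xi in enumerate(x):
        w[i + 1] -= lr * delta * xi   # weight updates
    return w

# Train on a single pattern and watch the error shrink.
w = [0.1, 0.2, -0.3]
x, t = [1.0, 0.5], 1.0
e0 = (t - run(w, x)) ** 2 / 2
for _ in range(100):
    std_online_step(w, x, t)
e1 = (t - run(w, x)) ** 2 / 2
```

The batch variants accumulate the updates over all patterns before applying them; the momentum and SuperSAB variants modify only how the step is scaled, not how delta is computed.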