Neural Network Lab
Understanding Neural Network Batch Training: A Tutorial
There are two different techniques for training a neural network: batch and online. Understanding their similarities and differences is important for creating accurate prediction systems.
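To make the distinction concrete, here is a minimal sketch (not taken from the tutorial) that contrasts the two update schedules on a trivial one-weight model, y = w * x, trained with squared error; the class name, data, and learning rate are illustrative only.

using System;

// Contrast of online and batch training on a one-weight linear model y = w * x.
// The gradient of 0.5 * (w*x - y)^2 with respect to w is (w*x - y) * x.
class BatchVsOnlineDemo
{
    static void Main()
    {
        double[] xs = { 1.0, 2.0, 3.0, 4.0 };
        double[] ys = { 2.0, 4.0, 6.0, 8.0 };   // underlying relationship is y = 2x
        double learnRate = 0.01;

        // Online (incremental) training: update the weight after every training item.
        double wOnline = 0.0;
        for (int epoch = 0; epoch < 100; ++epoch)
            for (int i = 0; i < xs.Length; ++i)
            {
                double grad = (wOnline * xs[i] - ys[i]) * xs[i];
                wOnline -= learnRate * grad;
            }

        // Batch training: accumulate gradients over all items, update once per epoch.
        double wBatch = 0.0;
        for (int epoch = 0; epoch < 100; ++epoch)
        {
            double gradSum = 0.0;
            for (int i = 0; i < xs.Length; ++i)
                gradSum += (wBatch * xs[i] - ys[i]) * xs[i];
            wBatch -= learnRate * (gradSum / xs.Length);   // average gradient over the batch
        }

        Console.WriteLine("online w = " + wOnline.ToString("F4"));
        Console.WriteLine("batch  w = " + wBatch.ToString("F4"));
    }
}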
Neural Network Weight Decay and Restriction
Weight decay and weight restriction are two closely related, optional techniques that can be used when training a neural network. This article explains exactly what weight decay and weight restriction are, and how to use them with an existing neural network application or implement them in a custom application.
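As a rough illustration of how the two techniques are typically implemented (the article gives the authoritative details), the hypothetical UpdateWeights method below applies a small decay factor after every gradient step and then clamps each weight's magnitude to a fixed maximum.

// Weight decay pulls every weight slightly toward zero on each update;
// weight restriction keeps each weight inside [-maxMagnitude, +maxMagnitude].
// The gradient values are assumed to come from whatever training algorithm is in use.
static class DecayAndRestrictionSketch
{
    public static void UpdateWeights(double[] weights, double[] gradients,
        double learnRate, double decay, double maxMagnitude)
    {
        for (int i = 0; i < weights.Length; ++i)
        {
            weights[i] -= learnRate * gradients[i];   // ordinary gradient step
            weights[i] *= (1.0 - decay);              // weight decay (e.g. decay = 0.0001)

            if (weights[i] > maxMagnitude) weights[i] = maxMagnitude;        // weight restriction
            else if (weights[i] < -maxMagnitude) weights[i] = -maxMagnitude;
        }
    }
}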
Deep Neural Networks: A Getting Started Tutorial
Deep Neural Networks are the more computationally powerful cousins to regular neural networks. Learn exactly what DNNs are and why they are the hottest topic in machine learning research.
Neural Network Dropout Training
Dropout training is a relatively new algorithm that appears to be highly effective for improving the quality of neural network predictions. It's not yet widely implemented in neural network API libraries. Learn how to use dropout training if it's available in an existing system, or add dropout training to systems where it's not yet available.
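For orientation, here is a minimal sketch of the core dropout idea, assuming hidden nodes are dropped with probability 0.5; the class and method names are illustrative and not from any particular library.

using System;

// During training, each hidden node is randomly switched off for the current
// training item; after training, the hidden-to-output weights are scaled by
// (1 - dropProb) so the output layer sees the same expected signal it saw
// during training.
static class DropoutSketch
{
    static readonly Random rnd = new Random(0);

    // Decide which hidden nodes are dropped for one training item.
    public static bool[] MakeDropMask(int numHidden, double dropProb)
    {
        bool[] dropped = new bool[numHidden];
        for (int j = 0; j < numHidden; ++j)
            dropped[j] = rnd.NextDouble() < dropProb;
        return dropped;
    }

    // Zero out the outputs of the dropped hidden nodes during training.
    public static void ApplyMask(double[] hiddenOutputs, bool[] dropped)
    {
        for (int j = 0; j < hiddenOutputs.Length; ++j)
            if (dropped[j]) hiddenOutputs[j] = 0.0;
    }

    // After training is complete, compensate for the dropped nodes.
    public static void ScaleWeights(double[][] hoWeights, double dropProb)
    {
        for (int j = 0; j < hoWeights.Length; ++j)
            for (int k = 0; k < hoWeights[j].Length; ++k)
                hoWeights[j][k] *= (1.0 - dropProb);
    }
}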
Neural Network Cross Entropy Error
To train a neural network you need some measure of error between computed outputs and the desired target outputs of the training data. The most common measure of error is called mean squared error. However, some research results suggest that a different measure, called cross entropy error, is sometimes preferable to mean squared error.
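For concreteness, here is a small example (not from the article) that computes both measures for a single training item, assuming the computed outputs are probabilities (say, from a softmax output layer) and the target is one-of-N encoded.

using System;

// Mean squared error versus cross entropy error for one training item.
class ErrorMeasuresDemo
{
    static double MeanSquaredError(double[] targets, double[] outputs)
    {
        double sum = 0.0;
        for (int i = 0; i < targets.Length; ++i)
        {
            double diff = targets[i] - outputs[i];
            sum += diff * diff;
        }
        return sum / targets.Length;
    }

    static double CrossEntropyError(double[] targets, double[] outputs)
    {
        double sum = 0.0;
        for (int i = 0; i < targets.Length; ++i)
            if (targets[i] > 0.0)            // only the '1' target position contributes
                sum += -Math.Log(outputs[i]);
        return sum;
    }

    static void Main()
    {
        double[] targets = { 0.0, 1.0, 0.0 };     // the middle class is the correct one
        double[] outputs = { 0.10, 0.70, 0.20 };  // computed output probabilities

        Console.WriteLine("MSE = " + MeanSquaredError(targets, outputs).ToString("F4"));
        Console.WriteLine("CE  = " + CrossEntropyError(targets, outputs).ToString("F4"));
    }
}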
Neural Network How-To: Code an Evolutionary Optimization Solution
Evolutionary optimization can be used to train a neural network. A virtual chromosome holds the neural network's weights and bias values, and the error term is the average of all errors between the network's computed outputs and the training data target outputs. Learn how to code the solution.
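A minimal sketch of the two pieces just described, with stand-in names: the chromosome is a flat array holding all weights and bias values, computeOutputs represents whatever feed-forward routine the network exposes, and the simple mutate-and-select step shown here is only one of many possible evolutionary schemes.

using System;

static class EvolutionarySketch
{
    // Error of one chromosome (candidate weight set): mean squared error over all training items.
    public static double MeanSquaredError(double[] chromosome,
        double[][] trainX, double[][] trainY,
        Func<double[], double[], double[]> computeOutputs)
    {
        double sumSquaredError = 0.0;
        for (int i = 0; i < trainX.Length; ++i)
        {
            double[] outputs = computeOutputs(chromosome, trainX[i]);
            for (int k = 0; k < outputs.Length; ++k)
            {
                double diff = trainY[i][k] - outputs[k];
                sumSquaredError += diff * diff;
            }
        }
        return sumSquaredError / trainX.Length;
    }

    // One very simple evolutionary step: perturb a copy of the parent
    // chromosome and keep whichever of the two has the lower error.
    public static double[] MutateAndSelect(double[] parent, Random rnd,
        double mutateScale, Func<double[], double> errorOf)
    {
        double[] child = (double[])parent.Clone();
        for (int i = 0; i < child.Length; ++i)
            child[i] += mutateScale * (2.0 * rnd.NextDouble() - 1.0);
        return errorOf(child) < errorOf(parent) ? child : parent;
    }
}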
Learning to Use Genetic Algorithms and Evolutionary Optimization
Evolutionary optimization (EO) is a type of genetic algorithm that can help minimize the error between computed output values and training data target output values. Use this demo program to learn the method.
How To Standardize Data for Neural Networks
Understanding data encoding and normalization is an absolutely essential skill when working with neural networks. James McCaffrey walks you through what you need to know to get started.
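As a small illustration of two preprocessing steps that commonly come up, here is a sketch (not the article's code) of min-max normalization for a numeric column and one-of-N encoding for a categorical value; the right choices depend on the data.

using System;

static class StandardizeSketch
{
    // Scale every value in a numeric column to the range [0, 1].
    public static double[] MinMaxNormalize(double[] column)
    {
        double min = column[0], max = column[0];
        foreach (double v in column)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        double[] result = new double[column.Length];
        for (int i = 0; i < column.Length; ++i)
            result[i] = (column[i] - min) / (max - min);
        return result;
    }

    // Encode a categorical value as a one-of-N vector, for example "green"
    // among { "red", "green", "blue" } becomes { 0, 1, 0 }.
    public static double[] OneOfNEncode(string value, string[] categories)
    {
        double[] encoded = new double[categories.Length];
        encoded[Array.IndexOf(categories, value)] = 1.0;
        return encoded;
    }
}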
Neural Network Training Using Particle Swarm Optimization
Although mathematically elegant, back-propagation isn't perfect. Instead, consider using particle swarm optimization (PSO) to train your neural network; here's how.
Particle Swarm Optimization Using C#
Particle swarm optimization isn't usually seen as the first-choice technique for training a neural network but, as James McCaffrey demonstrates, it's a useful alternative.
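For reference, a minimal sketch of the core particle update used in PSO; each particle's position is one candidate set of weights, and the coefficient values shown (0.729 and 1.49445) are conventional choices rather than necessarily the ones used in these articles.

using System;

static class PsoSketch
{
    static readonly Random rnd = new Random(0);

    // One velocity-and-position update for a single particle.
    public static void UpdateParticle(double[] position, double[] velocity,
        double[] bestPosition, double[] swarmBestPosition)
    {
        const double inertia = 0.729;       // damps the previous velocity
        const double cognitive = 1.49445;   // pull toward this particle's own best position
        const double social = 1.49445;      // pull toward the swarm's best position

        for (int i = 0; i < position.Length; ++i)
        {
            double r1 = rnd.NextDouble(), r2 = rnd.NextDouble();
            velocity[i] = inertia * velocity[i]
                + cognitive * r1 * (bestPosition[i] - position[i])
                + social * r2 * (swarmBestPosition[i] - position[i]);
            position[i] += velocity[i];
        }
    }
}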
Understanding and Using K-Fold Cross-Validation for Neural Networks
James McCaffrey walks you through the whys and hows of using k-fold cross-validation to gauge the quality of your neural network model.
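A minimal sketch of the k-fold idea, with the actual training and accuracy measurement abstracted behind a placeholder delegate; the modulo-based fold assignment is just one simple scheme.

using System;
using System.Collections.Generic;

static class CrossValidationSketch
{
    // Each item belongs to fold (i mod k); every fold is held out once for
    // evaluation while the remaining folds are used for training, and the
    // k accuracy values are averaged.
    public static double KFoldAccuracy(double[][] data, int k,
        Func<double[][], double[][], double> trainAndEvaluate)
    {
        double sumAccuracy = 0.0;
        for (int fold = 0; fold < k; ++fold)
        {
            var trainItems = new List<double[]>();
            var testItems = new List<double[]>();
            for (int i = 0; i < data.Length; ++i)
            {
                if (i % k == fold) testItems.Add(data[i]);
                else trainItems.Add(data[i]);
            }
            sumAccuracy += trainAndEvaluate(trainItems.ToArray(), testItems.ToArray());
        }
        return sumAccuracy / k;
    }
}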
Neural Network Training Using Back-Propagation
James McCaffrey explains the common neural network training technique known as the back-propagation algorithm.
Neural Network Back-Propagation Using C#
Understanding how back-propagation works will enable you to use neural network tools more effectively.
Neural Network Data Normalization and Encoding
James McCaffrey explains how to normalize and encode neural network data from a developer's point of view.
Neural Network Activation Functions in C#
James McCaffrey explains what neural network activation functions are and why they're necessary, and explores three common activation functions.
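For reference, here are sketches of three activation functions that are typical choices in this setting, logistic sigmoid, hyperbolic tangent, and softmax; see the article for the exact three it explores and when each is appropriate.

using System;

static class ActivationSketch
{
    public static double LogisticSigmoid(double x)
    {
        return 1.0 / (1.0 + Math.Exp(-x));   // squashes any input to (0, 1)
    }

    public static double HyperbolicTangent(double x)
    {
        return Math.Tanh(x);                 // squashes any input to (-1, 1)
    }

    // Softmax converts a vector of raw output sums into probabilities that sum to 1.
    public static double[] Softmax(double[] sums)
    {
        double max = sums[0];                // subtract the max for numeric stability
        foreach (double s in sums) if (s > max) max = s;

        double[] result = new double[sums.Length];
        double denom = 0.0;
        for (int i = 0; i < sums.Length; ++i)
        {
            result[i] = Math.Exp(sums[i] - max);
            denom += result[i];
        }
        for (int i = 0; i < sums.Length; ++i)
            result[i] /= denom;
        return result;
    }
}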
The Neural Network Input-Process-Output Mechanism
Understanding the feed-forward mechanism is required to create a neural network that solves difficult practical problems, such as predicting the result of a football game or the movement of a stock price.
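A minimal sketch of that feed-forward computation for a fully connected network with a single hidden layer; the weight layout and the use of tanh here are illustrative assumptions rather than the article's exact design.

using System;

static class FeedForwardSketch
{
    public static double[] ComputeOutputs(double[] inputs,
        double[][] ihWeights, double[] hBiases,   // input-to-hidden weights and biases
        double[][] hoWeights, double[] oBiases)   // hidden-to-output weights and biases
    {
        // Hidden layer: weighted sum of inputs plus bias, then tanh activation.
        double[] hidden = new double[hBiases.Length];
        for (int j = 0; j < hidden.Length; ++j)
        {
            double sum = hBiases[j];
            for (int i = 0; i < inputs.Length; ++i)
                sum += inputs[i] * ihWeights[i][j];
            hidden[j] = Math.Tanh(sum);
        }

        // Output layer: weighted sum of hidden values plus bias.
        double[] outputs = new double[oBiases.Length];
        for (int k = 0; k < outputs.Length; ++k)
        {
            double sum = oBiases[k];
            for (int j = 0; j < hidden.Length; ++j)
                sum += hidden[j] * hoWeights[j][k];
            outputs[k] = sum;   // apply softmax or another output activation as needed
        }
        return outputs;
    }
}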
Classification Using Perceptrons
Learn how to create a perceptron that can categorize inputs consisting of two numeric values.
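For a sense of what such a perceptron looks like, here is a minimal sketch that trains on a tiny made-up data set using the classic perceptron learning rule; all of the values are illustrative.

using System;

// A two-input perceptron: output is +1 or -1 depending on the sign of
// (w0*x0 + w1*x1 + bias); the learning rule nudges the weights whenever an
// item is misclassified.
class PerceptronSketch
{
    static int ComputeOutput(double[] x, double[] w, double bias)
    {
        double sum = w[0] * x[0] + w[1] * x[1] + bias;
        return sum >= 0.0 ? +1 : -1;   // step activation
    }

    static void Main()
    {
        // Tiny training set: class +1 when x0 + x1 > 1.0, otherwise class -1.
        double[][] data = { new[] { 0.2, 0.3 }, new[] { 0.9, 0.8 },
                            new[] { 0.1, 0.6 }, new[] { 0.7, 0.9 } };
        int[] labels = { -1, +1, -1, +1 };

        double[] w = { 0.0, 0.0 };
        double bias = 0.0, learnRate = 0.1;

        for (int epoch = 0; epoch < 20; ++epoch)
            for (int i = 0; i < data.Length; ++i)
            {
                int predicted = ComputeOutput(data[i], w, bias);
                int error = labels[i] - predicted;   // 0 if correct, +2 or -2 if wrong
                w[0] += learnRate * error * data[i][0];
                w[1] += learnRate * error * data[i][1];
                bias += learnRate * error;
            }

        Console.WriteLine("prediction for (0.8, 0.7): " +
            ComputeOutput(new[] { 0.8, 0.7 }, w, bias));
    }
}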
Modeling Neuron Behavior in C#
James McCaffrey presents one of the basic building blocks of a neural network.