What is the difference between epoch and iteration when training a multi-layer perceptron?
Tags: machine-learning, neural-network, deep-learning, artificial-intelligence, terminology
1. An epoch is 1 complete cycle in which the neural network has seen all the data.
2. One might have, say, 100,000 images to train the model; however, memory might not be sufficient to process all the images at once, so we split the training data into smaller chunks called batches, e.g. a batch size of 100.
3. We need to cover all the images using multiple batches, so we will need 1000 iterations to cover all 100,000 images (100 batch size * 1000 iterations; see the sketch after this list).
4. Once the neural network has looked at the entire data set, that is 1 epoch (point 1). One might need multiple epochs to train the model (let us say 10 epochs).
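A minimal arithmetic sketch of points 2-4, using the hypothetical numbers from this answer:

n_images = 100_000    # hypothetical dataset size from point 2
batch_size = 100      # hypothetical batch size from point 2
epochs = 10           # hypothetical number of epochs from point 4

iterations_per_epoch = n_images // batch_size      # 1000 iterations per epoch (point 3)
total_iterations = iterations_per_epoch * epochs   # 10000 iterations across all epochs
print(iterations_per_epoch, total_iterations)      # -> 1000 10000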
Many neural network training algorithms involve making multiple presentations of the entire data set to the neural network. Often, a single presentation of the entire data set is referred to as an "epoch". In contrast, some algorithms present data to the neural network a single case at a time.
"Iteration" is a much more general term, but since you asked about it together with "epoch", I assume that your source is referring to the presentation of a single case to a neural network.
An epoch consists of a number of iterations; that is essentially what an 'epoch' is. Let's define an 'epoch' as one full round of iterations over the data set in order to train the neural network.
According to Google's Machine Learning Glossary, an epoch is defined as:
"A full training pass over the entire dataset such that each example has been seen once. Thus, an epoch represents N/batch_size training iterations, where N is the total number of examples."
If you are training a model for 10 epochs with a batch size of 6, given 12 samples in total, that means:
the model will see the whole dataset in 2 iterations (12 / 6 = 2), i.e. a single epoch;
overall, the model will perform 2 x 10 = 20 iterations (iterations-per-epoch x number-of-epochs);
loss and model parameters will be re-evaluated after each iteration!
I believe an iteration is equivalent to a single forward+backprop pass on one batch in batch SGD. An epoch is going through the entire dataset once (as someone else mentioned).
An epoch consists of iterations over subsets of the training samples, as in, for example, the gradient descent algorithm for a neural network. A good reference is: http://neuralnetworksanddeeplearning.com/chap1.html
Note that the page includes code for the gradient descent algorithm which uses epochs:
# A method of the book's Network class; relies on "import random" at module
# level. Modernized here for Python 3 (the book's original uses Python 2's
# xrange and print statement).
def SGD(self, training_data, epochs, mini_batch_size, eta,
        test_data=None):
    """Train the neural network using mini-batch stochastic
    gradient descent. The "training_data" is a list of tuples
    "(x, y)" representing the training inputs and the desired
    outputs. The other non-optional parameters are
    self-explanatory. If "test_data" is provided then the
    network will be evaluated against the test data after each
    epoch, and partial progress printed out. This is useful for
    tracking progress, but slows things down substantially."""
    if test_data:
        n_test = len(test_data)
    n = len(training_data)
    for j in range(epochs):
        # One epoch: shuffle the data and split it into mini-batches.
        random.shuffle(training_data)
        mini_batches = [
            training_data[k:k + mini_batch_size]
            for k in range(0, n, mini_batch_size)]
        # One iteration per mini-batch: a single gradient-descent update.
        for mini_batch in mini_batches:
            self.update_mini_batch(mini_batch, eta)
        if test_data:
            print("Epoch {0}: {1} / {2}".format(
                j, self.evaluate(test_data), n_test))
        else:
            print("Epoch {0} complete".format(j))
Look at the code. For each epoch, we shuffle the training data and split it into mini-batches that are fed to the gradient descent algorithm. Why epochs are effective is also explained on that page. Please take a look.
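For context, chapter 1 of that book drives this method roughly as follows. The network and mnist_loader modules come from the book's accompanying repository, so treat this as a sketch of the book's usage rather than code defined in this answer:

# Train a 784-30-10 network on MNIST using the book's repository code.
import mnist_loader
import network

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
net = network.Network([784, 30, 10])
# 30 epochs, mini-batch size 10, learning rate eta = 3.0
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)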
Epoch: "A full training pass over the entire dataset such that each example has been seen once. Thus, an epoch represents N/batch_size training iterations, where N is the total number of examples."
Iteration: "A single update of a model's weights during training. An iteration consists of computing the gradients of the parameters with respect to the loss on a single batch of data."
As a bonus, Batch: "The set of examples used in one iteration (that is, one gradient update) of model training. See also batch size."
Source: https://developers.google.com/machine-learning/glossary/
Typically, you'll split your training set into small batches for the network to learn from, and make the training go step by step through those batches, applying gradient descent on each one. All these small steps can be called iterations.
An epoch corresponds to the entire training set going through the entire network once. It can be useful to limit the number of epochs, e.g. to fight overfitting.
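A minimal sketch of that loop for a toy linear model (plain numpy; every name here is illustrative, not from any particular library):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))          # 1000 training examples, 3 features
y = X @ np.array([1.0, -2.0, 0.5])      # targets from a known linear rule
w = np.zeros(3)                         # weights to learn
batch_size, epochs, lr = 100, 5, 0.1

for epoch in range(epochs):             # one epoch = one pass over all data
    for start in range(0, len(X), batch_size):
        xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # MSE gradient on this batch
        w -= lr * grad                  # one iteration = one such update

Each inner-loop update is one iteration (10 per epoch here, since 1000 / 100 = 10); each pass of the outer loop is one epoch.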
In the neural network terminology:
one epoch = one forward pass and one backward pass of all the training examples.
batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
number of iterations = number of passes, each pass using [batch size] training examples. To be clear, one pass = one forward pass + one backward pass.
Example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.
FYI: Tradeoff batch size vs. number of iterations to train a neural network
The term "batch" is ambiguous: some people use it to designate the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term mini-batch.
Epoch and iteration describe different things.
An epoch describes the number of times the algorithm sees the entire data set. So, each time the algorithm has seen all samples in the dataset, an epoch has completed.
An iteration describes the number of times a batch of data has passed through the algorithm. In the case of neural networks, that means one forward pass and one backward pass. So, every time you pass a batch of data through the NN, you have completed an iteration.
An example might make it clearer.
Say you have a dataset of 10 examples (or samples). You have a batch size of 2, and you've specified you want the algorithm to run for 3 epochs.
Therefore, in each epoch, you have 5 batches (10/2 = 5). Each batch gets passed through the algorithm, therefore you have 5 iterations per epoch. Since you've specified 3 epochs, you have a total of 15 iterations (5*3 = 15) for training.
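The same counting as a small sketch (the numbers are the hypothetical ones from this example):

dataset = list(range(10))     # 10 examples, as above
batch_size, epochs = 2, 3
iterations = 0
for epoch in range(epochs):
    for start in range(0, len(dataset), batch_size):
        batch = dataset[start:start + batch_size]  # one forward+backward pass
        iterations += 1
print(iterations)  # -> 15 (5 iterations per epoch * 3 epochs)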
I guess in the context of neural network terminology:
In order to define iteration (a.k.a steps), you first need to know about batch size:
Batch size: You probably wouldn't want to process all the training instances in a single forward pass, as that is inefficient and needs a huge amount of memory. So what is commonly done is splitting the training instances into subsets (i.e., batches), performing one pass over the selected subset (i.e., batch), and then optimizing the network through backpropagation. The number of training instances within a subset (i.e., batch) is called batch_size.
Iteration: (a.k.a. training steps) You know that your network has to go over all training instances in order to complete one epoch. But wait! When you split your training instances into batches, you can only process one batch (a subset of the training instances) in each forward pass, so what about the other batches? This is where the term iteration comes into play:
Definition: The number of forward passes (i.e., the number of batches that you have created) that your network has to do in order to complete one epoch (i.e., go over all training instances) is called the number of iterations.
For example, if you have 10,000 training instances and you want to do batching with a size of 10, you have to do 10000 / 10 = 1000 iterations to complete 1 epoch.
Hope this answers your question!
To understand the difference between these you must understand the Gradient Descent Algorithm and its Variants.
Before I start with the actual answer, I would like to build some background.
A batch is the complete dataset. Its size is the total number of training examples in the available dataset.
Mini-batch size is the number of examples the learning algorithm processes in a single pass (forward and backward).
A Mini-batch is a small part of the dataset of given mini-batch size.
The number of iterations is the number of batches of data the algorithm has seen (or simply, the number of passes the algorithm has made over the dataset).
The number of epochs is the number of times a learning algorithm sees the complete dataset. This may not be equal to the number of iterations, because the dataset can also be processed in mini-batches, in which case a single pass covers only part of the dataset. In such cases, the number of iterations is not equal to the number of epochs.
In the case of batch gradient descent, the whole batch is processed on each training pass. Therefore, the optimizer converges more smoothly than with mini-batch gradient descent, but it takes more time. Batch gradient descent is guaranteed to find an optimum if one exists.
Stochastic gradient descent is a special case of mini-batch gradient descent in which the mini-batch size is 1.
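One way to see all three variants is as the same loop with different batch sizes. A hypothetical sketch, where update_step stands in for whatever computes gradients and updates the weights:

def run_epoch(data, batch_size, update_step):
    """One epoch; the gradient-descent variant is set by batch_size."""
    for start in range(0, len(data), batch_size):
        update_step(data[start:start + batch_size])  # one iteration

# Batch gradient descent: the whole dataset per update.
#   run_epoch(data, batch_size=len(data), update_step=step)
# Mini-batch gradient descent: e.g. 32 examples per update.
#   run_epoch(data, batch_size=32, update_step=step)
# Stochastic gradient descent: one example per update.
#   run_epoch(data, batch_size=1, update_step=step)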
To my understanding, when you need to train a NN, you need a large dataset that involves many data items. When the NN is being trained, data items go into the NN one by one; that is called an iteration. When the whole dataset has gone through, it is called an epoch.
You have a training data which you shuffle and pick mini-batches from it. When you adjust your weights and biases using one mini-batch, you have completed one iteration. Once you run out of your mini-batches, you have completed an epoch. Then you shuffle your training data again, pick your mini-batches again, and iterate through all of them again. That would be your second epoch.
Source: Stackoverflow.com