Mini-batch

{{see also|Machine learning terms}}
==Introduction==
In [[machine learning]], a [[mini-batch]] is a small, randomly drawn subset of the [[dataset]] used during [[training]]: the full dataset is divided into smaller [[batch]]es, and the [[model]] trains on one mini-batch per [[iteration]] instead of on the entire dataset. The number of [[example]]s ([[data points]]) in a mini-batch is called the [[batch size]]. Dividing the dataset this way speeds up training and improves [[convergence]], and computing the [[loss]] on a mini-batch of examples is far cheaper than computing it on the entire dataset.
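
As a minimal sketch (assuming the data are NumPy arrays; <code>make_minibatches</code> is a name invented for this illustration), randomly dividing a dataset into mini-batches can look like this:

<syntaxhighlight lang="python">
import numpy as np

def make_minibatches(X, y, batch_size=32, seed=0):
    """Shuffle a dataset and yield it as mini-batches."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))            # random order over all examples
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]  # indices of one mini-batch
        yield X[idx], y[idx]                   # batch_size examples (last batch may be smaller)
</syntaxhighlight>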


==Theoretical Background==


Typically, the batch size is chosen as a tradeoff between convergence speed and stability; values in the range of 32 to 128 are commonly observed in practice.
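
As a sketch of where the batch size enters a training loop, here is plain mini-batch [[gradient descent]] for least-squares linear regression (NumPy only; the function and its parameters are illustrative, not from any particular library):

<syntaxhighlight lang="python">
import numpy as np

def train_linear(X, y, batch_size=64, lr=0.01, epochs=10, seed=0):
    """Mini-batch gradient descent for least-squares linear regression."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])                     # model weights
    for _ in range(epochs):
        order = rng.permutation(len(X))          # reshuffle every epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            err = X[idx] @ w - y[idx]            # residuals on this mini-batch only
            grad = X[idx].T @ err / len(idx)     # gradient estimated from the batch
            w -= lr * grad                       # one parameter update per mini-batch
    return w
</syntaxhighlight>

Smaller batch sizes give noisier gradient estimates (faster but less stable progress), while larger ones give smoother estimates at a higher cost per update, which is the tradeoff described above.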

==What is mini-batch in machine learning?==
Datasets in machine learning are often too large for a model to process all at once, so the system divides them into smaller subsets called batches. A mini-batch is a batch that contains only a small number of samples.
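
For instance (the numbers are chosen only for illustration), a dataset of 10,000 examples with a batch size of 64 is processed as 157 mini-batches per [[epoch]], the last of which is smaller:

<syntaxhighlight lang="python">
import math

n_examples = 10_000
batch_size = 64
n_batches = math.ceil(n_examples / batch_size)  # 157; the final batch has only 16 examples
</syntaxhighlight>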


==Explain Like I'm 5 (ELI5)==
Imagine you have a large pile of candy that you want to share with your friends. To make sure everyone gets an equal share, you divide the pile into smaller portions and hand one portion to each friend, instead of giving the whole pile to one person. This keeps the sharing fair, and it is also faster than handing everything over in one go.


Machine learning works the same way. When a dataset is very large, training the model on all of it at once would take too long, so the dataset is divided into smaller piles and the model trains on one pile at a time. The model still learns from all the data, just in small portions, which makes each step of learning faster.




[[Category:Terms]] [[Category:Machine learning terms]] [[Category:Not Edited]]