Catastrophic Forgetting: Learning’s Effect on Machine Minds
What if every time you learned something new, you forgot a little of what you knew before? That sort of overwriting doesn’t happen in the human brain, but it does in artificial neural networks. It’s appropriately called catastrophic forgetting. So why are neural networks so successful despite this? How does this affect the future of things like self-driving cars? Just what limit does this put on what neural networks will be able to do, and what’s being done about it?
The way a neural network stores knowledge is by setting the values of weights (the lines in between the neurons …
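Since knowledge lives entirely in the weight values, sequentially training on a new task pulls those same weights toward the new objective. Here's a minimal sketch of that effect (the setup is my own toy example, not from the article): a one-weight linear "network" y = w·x is fit to task A, then to task B, and its error on task A is measured before and after.

```python
import numpy as np

def train(w, xs, ys, lr=0.1, epochs=200):
    # Plain per-sample gradient descent on squared error.
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x
            w -= lr * (pred - y) * x
    return w

xs = np.array([0.5, 1.0, 1.5])

w = 0.0
w = train(w, xs, 2.0 * xs)      # task A: learn y = 2x
error_A_before = abs(w - 2.0)   # near 0: task A is learned

w = train(w, xs, -1.0 * xs)     # task B: learn y = -x, using the SAME weight
error_A_after = abs(w - 2.0)    # near 3: task A has been overwritten

print(error_A_before, error_A_after)
```

With only one set of weights to hold both tasks, learning B necessarily destroys the solution for A; that is catastrophic forgetting in its simplest form. Real networks have many more weights, but sequential training drags the shared parameters away from old solutions in exactly the same way.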