Researchers have put forward an idea that contradicts a widely accepted account of how machine learning works. Scientists from MIT and Victoria University of Wellington examined how the information bottleneck theory holds up on typical classification problems.
According to the information bottleneck theory, deep neural networks encode information layer by layer. Early layers carry the raw signal relevant to classifying an item, while intermediate layers hold compressed representations that help map inputs to outputs. As raw data is transformed into these intermediate signals, the network compresses the input while preserving what it needs to predict the label. Through this squeeze, high-level concepts are said to form at the information bottleneck.
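To make that trade-off concrete, here is a minimal sketch of the information bottleneck objective on an invented discrete example: a stochastic encoder maps inputs X onto a small set of bottleneck states T, and the objective balances the compression term I(X;T) against the prediction term I(T;Y). The joint distribution, the encoder, the trade-off weight beta and the helper mutual_information are all illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the information-bottleneck objective on a toy
# discrete problem. All distributions and the beta value are assumptions
# made for illustration, not figures from the paper.
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits for a joint probability table p(a, b)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

# Toy data: 4 inputs X, 2 labels Y, with some label noise (non-deterministic).
p_xy = np.array([
    [0.20, 0.05],
    [0.22, 0.03],
    [0.04, 0.21],
    [0.05, 0.20],
])

# A stochastic encoder p(t | x) mapping the 4 inputs onto 2 bottleneck
# states T, roughly merging inputs that share the same most-likely label.
p_t_given_x = np.array([
    [0.9, 0.1],
    [0.9, 0.1],
    [0.1, 0.9],
    [0.1, 0.9],
])

p_x = p_xy.sum(axis=1)                  # marginal p(x)
p_xt = p_x[:, None] * p_t_given_x       # joint p(x, t) = p(x) p(t|x)
p_ty = p_t_given_x.T @ p_xy             # joint p(t, y), since T depends on Y only through X

beta = 4.0                              # assumed trade-off weight
compression = mutual_information(p_xt)  # I(X;T): how much of the input T retains
prediction = mutual_information(p_ty)   # I(T;Y): how predictive T remains
print(f"I(X;T) = {compression:.3f} bits, I(T;Y) = {prediction:.3f} bits")
print(f"IB objective I(X;T) - beta * I(T;Y) = {compression - beta * prediction:.3f}")
```

In the full theory, sweeping beta traces out how much input detail the encoder is willing to give up per bit of predictive power; the sketch only evaluates the two terms for one fixed encoder.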
ML May Group Labradors with Martini Glasses
However, despite its successful application elsewhere, recent findings challenge the information bottleneck account. The scientists observe that in common classification tasks each input has exactly one correct output, so the label is fully determined by the input and the trade-off between compression and prediction that the theory relies on largely collapses. The existence of ‘trivial’ ways of compressing the signals fed into the network complicates the study further. Hence, the weaker the representation, the wider the gap between compression and prediction becomes.
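The deterministic point can be illustrated with a few lines of toy code: when every input has exactly one correct label, the mutual information I(X;Y) equals the label entropy H(Y), leaving no slack between predicting the label and simply memorising it. The labelling below is invented purely for illustration and is not an example from the paper.

```python
# A minimal, self-contained sketch of the deterministic case: each input x
# has exactly one correct label y (as in typical image classification),
# so I(X;Y) = H(Y) - H(Y|X) collapses to H(Y). Toy numbers are assumed.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Deterministic toy labelling: 4 equally likely inputs, label = input index mod 2.
p_x = np.full(4, 0.25)
p_y_given_x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])

p_y = p_x @ p_y_given_x
h_y_given_x = float(sum(px * entropy(row) for px, row in zip(p_x, p_y_given_x)))  # = 0
i_xy = entropy(p_y) - h_y_given_x        # I(X;Y) = H(Y) - H(Y|X) = H(Y)

print(f"H(Y)   = {entropy(p_y):.3f} bits")
print(f"I(X;Y) = {i_xy:.3f} bits  # identical: no slack left to trade away")
```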
The researchers therefore argue that the information bottleneck does not capture compression in the way humans intuitively understand it. Yet the theory may still hold for less deterministic tasks. Predicting the weather from a large dataset, for instance, where the same inputs can lead to different outcomes, is the kind of problem the information bottleneck is likely to describe accurately.
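By contrast, a noisy task leaves genuine room for compression, as the following toy sketch suggests: when the label is only probabilistically related to the input, I(X;Y) falls strictly below H(Y), so a representation can discard input detail without losing predictive power. The 80% reliability figure and the distribution are assumptions for illustration only.

```python
# A small follow-up sketch of a stochastic task (e.g. features that only
# imperfectly predict rain / no rain). All numbers are assumed.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Noisy labelling: 4 equally likely inputs, each giving the "right" label
# only 80% of the time.
p_x = np.full(4, 0.25)
p_y_given_x = np.array([[0.8, 0.2], [0.8, 0.2], [0.2, 0.8], [0.2, 0.8]])

p_y = p_x @ p_y_given_x
h_y_given_x = float(sum(px * entropy(row) for px, row in zip(p_x, p_y_given_x)))
i_xy = entropy(p_y) - h_y_given_x        # I(X;Y) = H(Y) - H(Y|X)

print(f"H(Y)   = {entropy(p_y):.3f} bits")
print(f"I(X;Y) = {i_xy:.3f} bits  # strictly smaller: noise leaves room to compress")
```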
The findings are laid out in a paper titled ‘Caveats for information bottleneck in deterministic scenarios’. The scientists stress that the information bottleneck behaves counterintuitively on typical machine learning problems, and they hope the work will raise awareness of this in the machine learning community.