Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear exactly how they accomplish this. Two physicists from Basel have now derived mathematical expressions that allow the optimal solution to be calculated without training a network. The results are useful in two ways: they offer insight into how these learning algorithms work, and they could help discover unknown phase transitions in physical systems in the future.
Neural networks are loosely modeled on the workings of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.
For several years now, physicists have also been trying to use neural networks to detect phase transitions. Phase transitions are familiar from everyday phenomena, for instance when water freezes into ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.
“Neural networks have become quite good at determining phase transitions,” stated one of the physicists. However, how exactly they do it remains completely obscure. To change this, and to shed some light into the ‘black box’ of a neural network, the physicist duo looked at the special case of networks with an infinite number of parameters which, in principle, also go through infinitely many training rounds.
It has been known that the predictions of such networks always tend toward a certain optimal solution. The physicists took this as their starting point and derived mathematical formulas that allow one to calculate that optimal solution directly, without actually training a network.
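To make the idea concrete: in the infinite-width limit, the converged network's prediction can be written in closed form as a kernel regression over the training data, so no gradient-descent training is needed. The sketch below illustrates this with a generic RBF kernel as a placeholder; the article does not specify which kernel the Basel physicists use, so the kernel choice and function names here are illustrative assumptions.

```python
import numpy as np

# Sketch: an infinitely wide network trained to convergence makes
# predictions equivalent to kernel regression, so the "optimal solution"
# is a closed-form expression rather than the endpoint of training.
# The RBF kernel is a stand-in; the actual kernel from the physicists'
# derivation is not given in the article.

def rbf_kernel(A, B, length_scale=1.0):
    # Pairwise squared distances between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * length_scale**2))

def closed_form_prediction(X_train, y_train, X_test, reg=1e-6):
    # Mean prediction of the converged infinite-width model:
    # f(x*) = K(x*, X) [K(X, X) + reg * I]^(-1) y
    K = rbf_kernel(X_train, X_train)
    K_star = rbf_kernel(X_test, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return K_star @ alpha

# Toy data: recover y = sin(x) with no gradient-descent training at all.
X = np.linspace(0, 2 * np.pi, 20)[:, None]
y = np.sin(X).ravel()
X_new = np.array([[1.0], [4.0]])
print(closed_form_prediction(X, y, X_new))
```

The key point is that `closed_form_prediction` involves only a single linear solve over the training set, replacing the iterative training loop entirely.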