The statement is not entirely accurate. Marvin Minsky, a pioneering figure in artificial intelligence, contributed significantly to the development of AI, but he did not mathematically prove that feedforward neural networks are incapable of computation. He did, however, highlight limitations of simple neural networks in his 1969 book Perceptrons, co-authored with Seymour Papert.
In Perceptrons, Minsky and Papert showed that single-layer neural networks (perceptrons) can only compute linearly separable functions, and therefore cannot solve problems like XOR, whose two classes no single linear boundary can separate, as the sketch below illustrates. This critique contributed to a decline in neural network research for some time, as many concluded that neural networks were incapable of solving complex problems.
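To make the XOR limitation concrete, here is a minimal sketch (not anything from the book itself; all names and values are illustrative) of the classic perceptron learning rule applied to XOR. Because XOR is not linearly separable, no setting of the weights and bias classifies all four inputs correctly, so accuracy never reaches 4/4 no matter how long it trains:

```python
# Single-layer perceptron trained on XOR with the perceptron learning rule.
# XOR is not linearly separable, so no weights can get all four cases right.

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]  # XOR targets

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Threshold unit: fires if the weighted sum exceeds zero
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(100):
    for x, target in zip(X, y):
        # Perceptron update: nudge weights toward the target on each error
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

correct = sum(predict(x) == t for x, t in zip(X, y))
print(f"correct on XOR: {correct}/4")  # at most 3/4, never 4/4
```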
However, multi-layer neural networks, backpropagation, and other advances in the 1980s and beyond addressed these limitations: adding even a single hidden layer makes XOR computable, and feedforward networks with one hidden layer are, in fact, universal function approximators. Modern deep learning, which uses multi-layer (deep) networks, has shown that feedforward networks can perform complex computations effectively, including image recognition, language processing, and more. The hand-wired network below shows the simplest case.
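Here is a minimal sketch of that fix (weights set by hand for illustration, rather than learned by backpropagation): a feedforward network with one hidden layer that computes XOR exactly.

```python
import numpy as np

# One hidden layer is enough for XOR: hidden unit 1 computes OR of the
# inputs, hidden unit 2 computes AND, and the output fires for
# "OR but not AND", which is exactly XOR.

def step(z):
    # Hard threshold activation: 1 where z > 0, else 0
    return (z > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

W1 = np.array([[1.0, 1.0],   # weights into the OR unit
               [1.0, 1.0]])  # weights into the AND unit
b1 = np.array([-0.5, -1.5])  # OR fires above 0.5, AND above 1.5

w2 = np.array([1.0, -2.0])   # output: OR minus twice AND
b2 = -0.5

h = step(X @ W1.T + b1)      # hidden activations
out = step(h @ w2 + b2)      # network output

for x, o in zip(X, out):
    print(x, "->", int(o))   # prints 0, 1, 1, 0: exactly XOR
```

The key design point is the non-linear hidden layer: composing two linear threshold units through a second non-linearity expresses exactly the kind of function a single-layer perceptron cannot.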
So, Minsky's work highlighted the limitations of early neural networks, but it didn't "prove" that they were incapable of computation in general. Instead, the evolution of neural network architectures and learning algorithms has overcome those earlier limitations.