I had a weird dream last night. I’m sure you’ve said that many times. But why? From Sigmund Freud to Carl Jung to Calvin Hall, dreams have been researched ad nauseam and there’s still no good answer to what shapes them. Unconscious desires? Keeping neurons active? Memory consolidation? Psychosexual wish fulfillment? No one knows. Even Freud supposedly noted, “Sometimes a cigar is just a cigar.”
But clues are starting to emerge from artificial intelligence and its use of brain-mimicking neural networks. In 1968, Philip K. Dick published the novel “Do Androids Dream of Electric Sheep?”—on which the 1982 movie “Blade Runner” was based. Turns out the answer is yes.
First, a quick and oversimplified primer on neural networks. They contain digital layers of interconnected nodes, modeled on the human brain’s neurons and synapse connections, which are great at finding patterns. Based on the numerical weightings in each node after scanning millions of photos, a layer might conclude that a roundish object with eyes, nose and mouth is a face. Then that data moves on to the next layer, which looks for patterns in those patterns. Perhaps it determines whether it’s a cat’s face or a dog’s. Then another layer finds patterns of patterns of patterns until, for example, it can identify the dog’s breed as a Siberian Husky. And so on.
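To make the layer idea concrete, here is a bare-bones sketch in Python of signals flowing through stacked layers. The layer sizes and weights are stand-ins I made up for illustration, not anything learned from real photos.

```python
import numpy as np

def relu(x):
    # Simple activation: keep positive signals, drop negative ones
    return np.maximum(0, x)

def forward(pixels, layers):
    """Pass an input through successive layers of weighted nodes.

    Each layer multiplies its inputs by weights and applies an
    activation, so later layers see "patterns of patterns".
    """
    signal = pixels
    for weights, bias in layers:
        signal = relu(weights @ signal + bias)
    return signal

# Toy example: a 16-pixel input flowing through three tiny layers.
# The weights here are random placeholders for values a real network
# would learn from millions of photos.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(8, 16)), np.zeros(8)),  # low-level patterns (edges)
    (rng.normal(size=(4, 8)), np.zeros(4)),   # patterns of patterns (eyes, nose)
    (rng.normal(size=(2, 4)), np.zeros(2)),   # final scores (say, cat vs. dog)
]
scores = forward(rng.normal(size=16), layers)
print(scores)  # two scores; with trained weights, the larger would be the guess
```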
Neural networks were first theorized in 1943, and trainable ones have been around since 1958, when psychologist Frank Rosenblatt demonstrated the Perceptron machine at the Cornell Aeronautical Laboratory. But only in the past decade has AI scaled to the masses and gained the ability to understand your voice or recognize faces in photos.
What changed? Faster processors and cheaper memory certainly helped, but neural networks in the past often got stuck “overfitting” the data to a conclusion, such as determining that everything with feline-shaped eyes is a cat. Or concluding that dinosaurs built Stonehenge. We do that too when we knock on wood or wear our lucky socks because our team won last time we did. Computers can be superstitious like us.
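Here is a toy illustration of overfitting, using a simple curve fit rather than a full neural network; the data points and the model degrees are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# A handful of noisy training points drawn from a simple straight line
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(scale=0.2, size=8)

# New points from the same line, never seen during fitting
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test

simple = np.polyfit(x_train, y_train, deg=1)          # matches the real pattern
superstitious = np.polyfit(x_train, y_train, deg=7)   # memorizes the noise

def test_error(coeffs):
    # Average squared error on data the model has never seen
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print("simple model error: ", round(test_error(simple), 4))
print("overfit model error:", round(test_error(superstitious), 4))
# The degree-7 fit hugs the training points perfectly but typically does
# worse on new data: the numerical version of lucky socks.
```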
In 1970 Finnish student Seppo Linnainmaa proposed “back-propagation”—again oversimplifying here—sending errors backward through neural-network layers to generalize the weightings and better find patterns.
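Here is a bare-bones sketch of that idea on a tiny two-layer network in Python. The task (the classic XOR pattern), the layer sizes and the learning rate are all illustrative choices of mine, not anything from Linnainmaa’s work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: 2 inputs -> 4 hidden nodes -> 1 output
W1, b1 = rng.normal(scale=0.5, size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(1, 4)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# The XOR pattern, a classic test that a single layer cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 2.0
for step in range(20001):
    # Forward pass: signals flow from input to hidden to output
    h = sigmoid(X @ W1.T + b1)      # hidden activations
    out = sigmoid(h @ W2.T + b2)    # predictions

    # Backward pass: start with the error at the output...
    err_out = (out - y) * out * (1 - out)
    # ...then send it backward through the hidden layer
    err_hidden = (err_out @ W2) * h * (1 - h)

    # Nudge every weight slightly in the direction that shrinks the error
    W2 -= lr * err_out.T @ h / len(X)
    b2 -= lr * err_out.mean(axis=0)
    W1 -= lr * err_hidden.T @ X / len(X)
    b1 -= lr * err_hidden.mean(axis=0)

    if step % 5000 == 0:
        print(f"step {step:5d}  mean error {np.mean((out - y) ** 2):.4f}")
# The printed error shrinks as corrections flow backward through the
# layers and the weightings adjust to better find the pattern.
```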