A few days back, someone I follow on Curius added a link to this video on their profile:
This reminded me of the principle described by John Gall:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.1
If you think about it, you see it everywhere: from the evolution of life, to technological progress, to the idea behind the MVP.
But the thing is that we live in times where technology has gotten so sophisticated that it is becoming harder and harder for us to imagine simpler technology that works.
Feynman once gave an example of this: when he was a kid, people used to open up radios and could see all the component parts. When you opened up a device back then, what you got was essentially a diagram of how it worked. As technology evolved further, it became more and more opaque, and you no longer got that diagram when you opened something up.
But the underlying working principles haven’t changed much. We have probably uncovered more of physics in the last 50 years, but the kind of physics needed to build or understand most elementary things is not really new; only its implementations have become more sophisticated.
And I don’t mean to say that this sophistication is inherently bad. Technology was bound to get more sophisticated; there was no other way. What I am pointing out is that, from the technologies we see around us, we now have a longer route to retrace to get back to first principles.
Similarly, we look at computer systems and software, and it’s so daunting. Just look at the graphics of any modern video game: it seems unimaginable that humans could program something like that. But that’s because we have forgotten that we once used to play Prince of Persia, Mario, Pac-Man, Tetris and the like.
We look at people accomplished in certain areas, say, writers, and think the same, and fail to realize that it’s not as if a person one day randomly decides to write something and comes up with something like Macbeth2. I’m not saying that one can’t one day randomly decide to do something he hasn’t done before and still come up with good results. I think quite the opposite: if someone’s the right kind of person for the job, he is highly likely to come up with good results in his early attempts, provided that he starts with a simple version. In early iterations, a writer doesn’t need to come up with a very sophisticated insight, nor a scientist a complicated discovery, nor an engineer a complex invention; he only needs to come up with something that works3.
Now, this seems to be gravitating toward very clichéd ideas, so let me turn it around.
Even though all complex systems that work evolve from simpler systems that also worked, starting from a simple system that works does not by itself guarantee that it will evolve into a complex system, or that it will keep working as it evolves.
What makes a simple system evolve into a complex system that still keeps working is a strong feedback loop.
Systems that don’t have a very strong feedback loop to guide their evolution will either have to slow down (stop evolving) if they are to stay working, or will fail at some point if they keep iterating on whims without a feedback loop to guide them.
A strong feedback loop is one that is strong at both of its ends, i.e., it is (i) highly perceptive at the sensory end, and (ii) highly precise and moderately4 fast in its execution.
I think systems get more and more risk-averse as they increase in complexity, and that is why most of them stop evolving, and hence we end up at local maxima, never seeing what a global maximum looks like.
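As a toy illustration of this point (my sketch, not from the original sources), here is the same iterate-by-small-changes process in Python, with and without a feedback loop that rejects changes which make the system worse. The fitness function and step size are arbitrary assumptions for illustration.

```python
import random

def evolve(start, fitness, steps=1000, use_feedback=True, seed=0):
    """Iterate small random changes on a system (here, a single number).
    With feedback, keep a change only if it doesn't make the system
    worse -- i.e., every intermediate version still 'works'."""
    rng = random.Random(seed)
    current = start
    for _ in range(steps):
        candidate = current + rng.uniform(-0.1, 0.1)
        if not use_feedback or fitness(candidate) >= fitness(current):
            current = candidate
    return current

# An assumed, simple fitness landscape with a single peak at x = 1.
def fitness(x):
    return -(x - 1.0) ** 2

guided = evolve(0.0, fitness, use_feedback=True)     # settles near the peak
unguided = evolve(0.0, fitness, use_feedback=False)  # a random walk; no guarantee
```

If the landscape had several peaks, the guided version would settle on whichever peak it happened to reach first: the local maximum mentioned above.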
- This is popularly known as Gall’s law. Source: John Gall (1975) Systemantics: How Systems Really Work and How They Fail, p. 71 ↩︎
- I haven’t read Macbeth and don’t really know what’s so great about it, so I probably shouldn’t use this example, but I can’t think of another for now. I am just assuming that if mathematicians use a literary work as a benchmark example in their theories, it must be good enough. ↩︎
- I had earlier written “come up with something useful,” but usefulness is context-dependent, and the term “works” contains the context in itself. For instance, if someone writes something funny to amuse someone, and it actually amuses them, then it works whether or not anyone finds it useful. ↩︎
- I say moderately fast because there seems to be a tradeoff between precision and speed of execution. ↩︎