Princess Josh asks:
Do we have any idea when we will run out of science?
I mean, the rate of scientific progress is generally considered to have increased exponentially over the last six to eight millennia, to the point where we’re learning more about physics and the universe each decade than we did in the entire first millennium AD (that may be only approximately true, but you get the idea). Does the study of human scientific progress have an answer to the question of what the limits of that progress might be? Is there a risk of a dead end that humans, no matter how ingenious, will never be able to circumvent?
This is a tricky one for two reasons.
One: Attempts to predict the future of science are almost invariably hilariously inaccurate.
Thomas Kuhn came up with the concept of scientific paradigms and paradigm shifts in his 1962 book The Structure of Scientific Revolutions (a rather dense work, but worthwhile if you want to understand how science works). A paradigm can essentially be boiled down to an accepted scientific worldview – that is, we think the world works this way, and we come up with a whole host of scientific theories that use the rules and assumptions of that paradigm. However, those theories are also implicitly and constantly testing the paradigm, and as soon as we make a scientific observation that’s demonstrably inconsistent with our accepted worldview we have to either modify the paradigm to accommodate the anomaly, or else toss it out entirely and come up with a new one. This is a paradigm shift: the transition from one scientific paradigm to another.
Previous famous examples of paradigm shifts include the shift from geocentrism (the celestial model that had the Earth at the centre of the universe) to heliocentrism (replacing the Earth with the Sun) provoked by Nicolaus Copernicus, as well as Einstein’s theory of general relativity usurping more than two centuries of Newtonian mechanics[1]. The new theory is always more accurate than the one preceding it – and we know this because it will usually explain everything the old theory explains plus some new stuff – but we have no way of knowing if it is 100% accurate. In the case of general relativity it’s almost certainly incomplete, since attempts to reconcile general relativity (the science of the very large) with quantum mechanics (the science of the very small) have so far only resulted in untestable quasi-scientific ideas like M-theory, but even theories that have so far stood up to every test science can throw at them – conservation of energy, for example – would have to be thrown out and replaced with something better[2] if we found just one definite occurrence of them being violated.
This is the basic principle behind the scientific method, and it’s carried the human race a very long way over the last few centuries. However, it has one tiny snag: every scientific theory has the potential to be wrong, but you don’t know that it is wrong until you have the evidence sitting in front of you. Until that happens you’re in the rather frustrating position of having to cross your fingers and hope for the best. Amongst other things, this makes it incredibly difficult to see a paradigm shift coming more than a decade or so in advance. Scientists are therefore a little wary of making predictions about the future prospects of science these days. Nobody wants to be viewed by posterity as another Lord Kelvin, who is said to have asserted that “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” A few years later Einstein tore down Newtonian mechanics with his paper on special relativity, and the following decades saw the emergence of quantum mechanics and particle physics as scientific disciplines. Kelvin was a smart man and a fine scientist, but he fell into the trap of thinking that there was one absolute way of thinking about the universe. There isn’t. There is only the current paradigm.
Two: It is difficult — if not impossible — to disentangle the mutual effect that scientific progress and technological progress have on each other.
Science improves technology, and technology improves science. They’re two sides of the same coin; a new scientific theory allows new technologies to be developed, and no matter what your new technology is – a new computer chip, a new material, a new way of generating a laser – the chances are that someone somewhere will be able to use it in an experiment to test, refine or come up with another scientific theory. Computers are probably the most dramatic example here; pretty much none of the work being carried out in modern scientific laboratories would even begin to be feasible without their modelling capabilities. Sixty years ago computers were just small enough to fit into several rooms, and the president of IBM is reputed to have predicted there would be no need for more than five of them in the world. Today we have computers that fit in the palm of a hand and have thousands of times more computing power than those room-filling behemoths, and there are billions of them.
As with scientific paradigm shifts, nobody saw this technological shift coming, and yet today’s society simply couldn’t function without technologies that were in their infancy just two decades ago. The same is true of science. Leaving aside the science-fiction future of Ray Kurzweil’s singularity, while it is impossible to predict the future with any degree of accuracy, the last few decades suggest that the pace of scientific and technological change will continue to increase. There are some distinct physical barriers that have to be overcome – dwindling resources, data storage and processing limits, and human brains being inefficiently squishy collections of neurons that have trouble comprehending the latest scientific theories without at least ten years of dedicated study – but it would be very unwise of me to say that one of these barriers is going to bring humanity’s technological progress to a screeching halt. When confronted with an obstacle, science tends to provide its own solution in the form of a new technology. Nanotechnology and fusion could fix the resource problem. Quantum computing could give us more processing power than we would ever reasonably need. And if anyone ever develops a true AI, and its first reaction upon becoming self-aware and seeing the human race isn’t to send legions of Arnold Schwarzeneggers to kill us all, that AI could instantly grasp concepts that it would take a human years to get to grips with.
The operative word here is could. Some of these things could happen; equally, none of them might. We could all be living in the nerd rapture a couple of centuries from now, or the human race could be reduced to a last few pitiful survivors squabbling over the final can of dog food in a Mad Max-esque apocalyptic future. What I am trying to get across here is that attempting to predict the long-term future of science and technology is a loser’s game. Most people who try get it wrong, and those who get it right do so more out of serendipity and sheer dumb luck than because they saw which way the wind was blowing. The best thing we can do is stop worrying about what might lie beyond this scientific event horizon and instead focus on what we can do now[3]. If we don’t, then we’ll never even get there.
[1] Newtonian mechanics still functions perfectly adequately as an approximation for literally every single body that you as a human being are likely to encounter in your lifetime (a rough worked number on this follows these footnotes), which is why you still get taught Newton’s laws in secondary school. It also has the advantage of being relatively simple and easy to understand, whereas general relativity requires graduate-level study to grasp fully.
[2] Conservation of energy is pretty much the most solid scientific principle there is, and it is incredibly unlikely ever to be proven wrong. If it were, though, it would be very bad news for science, since conservation of energy acts as the foundation for myriad other theories and assumptions. You’d pretty much have to dismantle all of physics and rebuild it from the ground up.
[3] Where “now” means “the next half-century”. Long-term planning horizons are something that often eludes today’s politicians, but I’m not seriously advocating we go into the future blind just because it’s a little difficult to tell where humanity is going to wind up in a hundred years’ time.
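To put a rough number on footnote [1]’s “perfectly adequately”: the size of general-relativistic corrections to Newtonian gravity is governed by the dimensionless ratio GM/(rc²). A back-of-the-envelope sketch (using standard textbook values for the Earth, purely as an illustration) puts it at under a part in a billion at the Earth’s surface:

```latex
% Weak-field parameter at the Earth's surface: GM/(R c^2)
\frac{G M_\oplus}{R_\oplus c^2}
  \approx \frac{(6.67\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})\,(5.97\times 10^{24}\,\mathrm{kg})}
               {(6.37\times 10^{6}\,\mathrm{m})\,(3.00\times 10^{8}\,\mathrm{m\,s^{-1}})^{2}}
  \approx 7\times 10^{-10}
```

Corrections that small are utterly invisible for bridges, cars and cricket balls, which is why Newton is all the mechanics most of us will ever need.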