The Fermi paradox is the apparent contradiction between the lack of observational evidence for advanced alien civilizations and estimates suggesting that such civilizations should be numerous in the Universe. The latest results from the Kepler space observatory suggest that about one-fifth of Sun-like stars should host an Earth-like planet, while there should be, on average, more than one planet (not necessarily rocky) per star. It is estimated that, in the Milky Way alone, about 40 billion Earth-like planets should lie within the so-called habitable zone, i.e., the region around a star where water can exist as a liquid on a planet’s surface.

A second interesting fact is that life on Earth emerged very quickly after the formation of our planet 4.5 billion years ago. A recent study published in the journal Nature earlier this year reported the identification of micro-fossils suggesting that life might have emerged a mere 100 million years after Earth’s formation. If this is typical behavior, and life really does emerge so quickly given the right conditions on a planet, then the obvious implication is that life might be very common in the Universe.

The huge number of Earth-like planets and the rapid emergence of life make the Fermi paradox even more difficult to resolve. There have been countless articles and discussions on possible solutions to this problem, some of which are very intriguing. Here I will not discuss yet another possible solution, but rather focus on what the paradox can tell us, given our current technological development.

We are currently living in a very special time: the third wave of artificial intelligence (AI), which has seen the flourishing of narrow-intelligence machines. It has been suggested that artificial narrow intelligence (ANI) will keep improving and might soon lead to the birth of artificial general intelligence (AGI). Artificial general intelligence is roughly defined as an agent capable of performing a large variety of tasks with a performance similar to that of a human being. In other words, general intelligence is not limited to a specific narrow task, as ANI is today. Considering the current trend and extrapolating into the future, it is estimated that the first AGI might be created in the next few decades. This estimate is of course very uncertain, since many assumptions are involved, but there seems to be a general consensus that this will occur in the near future.

Once this step has been achieved, predictions about what will happen next become extremely uncertain. Since an AGI would have the same capacity as a human to solve any problem, but with far superior speed in both learning and reasoning (for example, the neurons in the human brain “fire” signals at a rate of about 200 Hz, whereas a machine can go well beyond that limit), it is possible that, on a very short timescale (from a few minutes to years), it might improve itself exponentially and become what is known as a super-intelligence. A machine with artificial super-intelligence would vastly outperform any human being in any task.
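
To get a feel for the gap, here is a rough back-of-envelope comparison; the 2 GHz figure is simply an assumed clock rate for an ordinary modern processor, used here only for illustration.

```python
# Rough switching-speed comparison between biological neurons and silicon,
# using the ~200 Hz firing rate quoted above and an assumed 2 GHz CPU clock.
neuron_rate_hz = 200      # approximate peak firing rate of a biological neuron
cpu_clock_hz = 2e9        # assumed clock rate of an ordinary consumer CPU

speedup = cpu_clock_hz / neuron_rate_hz
print(f"Raw switching-speed ratio: {speedup:.0e}")  # about 1e7
```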

It has been argued that super-intelligence might mark the dawn of a new era (or doom) for humanity. Even if this scenario appears far removed from our daily experience, there seems to be no clear reason why it should not happen, since it rests on a single, simple assumption: that intelligence is nothing but a (very) complex computational system. If super-intelligence truly appears in the next few decades, it should become possible to solve problems that today seem impossible to crack, or to make revolutionary discoveries in fields where progress is slow or stagnant. It could also lead to a complete transformation of civilization as we know it.

These considerations on super-intelligence matter here because our civilization is much closer to super-intelligence than it is to interstellar travel or the ability to build megastructures. Interstellar travel technology seems to be at least a century away. It also appears reasonable to assume that if a civilization wants to achieve interstellar travel capabilities, its technological development must almost certainly pass through the development of super-intelligence. Indeed, space flight on interstellar scales requires, as a necessary condition, machines capable of traveling autonomously and reacting to unexpected events in real time without calling home, which, given the finite speed of light, would take several years at best. Developing smart machines therefore seems to be a prerequisite for any attempt to send a probe to another star or to build colossal engineering structures. Our own civilization appears set to develop super-intelligence within a century or two of the invention of the first calculators, so it seems justified to assume that, for an advanced civilization, super-intelligence is a technology that can be achieved relatively quickly after the first calculators are built.
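
To make the “several years at best” figure concrete, here is a minimal sketch of the round-trip signal delay to Proxima Centauri, the nearest star, at roughly 4.25 light-years.

```python
# Minimum round-trip communication delay to the nearest star, assuming
# signals travel at the speed of light; with distances in light-years
# the calculation is trivial.
distance_ly = 4.25                  # approximate distance to Proxima Centauri
round_trip_years = 2 * distance_ly  # out and back
print(f"Minimum round-trip signal delay: {round_trip_years:.1f} years")  # ~8.5 years
```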

We can thus shift the discussion of the Fermi paradox to another level. Instead of asking why we don’t see evidence of other civilizations (assuming that interstellar travel is feasible for advanced alien species), we can ask why we don’t see evidence of artificial super-intelligence in the Universe, and what kind of constraints this imposes on super-intelligence itself.

Of course, the considerations that follow apply only if intelligence appears on other planets and advanced civilizations are a relatively common outcome of biological ecosystems operating under the pressure of Darwinian evolution.

Before continuing, let’s recap the assumptions made here:

  1. Intelligence and advanced civilizations are common in the Universe
  2. A space-faring civilization develops super-intelligence before it becomes capable of either interstellar travel or building megastructures
  3. Super-intelligence leads to the quick development of technologies that allow interstellar travel and/or the construction of megastructures
  4. Super-intelligence can be controlled and does not lead to the self-destruction of the civilization that develops it

We also need to discuss how solid the lack of evidence (so far) for civilizations in our own galaxy really is. This lack is based on relatively weak observational data, since our ability to scout planets around other stars with current telescopes is very limited. For example, civilizations like ours would be undetectable with current instrumentation even if they existed at a distance of just a few tens of light-years. This situation might change rapidly in the near future, when a new generation of telescopes will be able to probe the chemical composition of many exoplanetary atmospheres and find evidence of chemistry at least compatible with life. Other facilities, like the Square Kilometre Array (a radio telescope of unprecedented sensitivity), might also significantly increase our chances of detecting radio signals from other planets. The ongoing Breakthrough Listen project will also significantly expand the number and variety of stars sampled in the search for intelligent life.

However, what we can certainly dismiss is the possibility that there are many such advanced civilizations able to carry out massive engineering projects and build megastructures. Two years ago, a curious discovery made by the Kepler mission spurred a great deal of discussion on what could be causing the strange dimmings in the light emitted by the so-called Tabby’s star. One possibility, considered extremely unlikely by astronomers and treated more as a last resort, was that the dimmings were produced by a megastructure like a Dyson sphere, i.e., a gigantic sphere able to absorb a significant fraction of the power emitted by the host star. The Kepler telescope has so far monitored about 150,000 stars and has found no megastructures (including around Tabby’s star). Therefore we know that, if any such structures exist, there must be fewer than one for every 100,000-200,000 stars. Considering that many other stars have been observed multiple times with other telescopes (less sensitive than Kepler, but certainly able to catch a variation of a few tens of percent in a star’s brightness), we can probably safely say that there is less than one Dyson-sphere-like structure per million stars.

As we said earlier, there are about 40 billion Earth-like planets in the habitable zone; for the sake of simplicity, let’s assume that each of these planets is hosted by a different star (so no systems with multiple habitable planets, unlike the recent TRAPPIST-1 discovery; this is a crude simplification, but it’s good enough for our purposes). If each habitable planet gave rise to an advanced civilization with super-intelligence, which in turn unlocked the ability to build one megastructure around the host star, then we would expect one star in ten to show signs of megastructures (since the Milky Way contains about 400 billion stars). But we know that there is less than one such megastructure per million stars, and therefore our hypotheses imply that there should be fewer than one advanced civilization with super-intelligence for every 100,000 habitable planets. If this is not true, then hypotheses 2 and/or 3 must be wrong.
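
The chain of numbers above can be reproduced in a few lines; this is just a back-of-envelope sketch using the round figures quoted in the text.

```python
# Back-of-envelope version of the megastructure argument, using the
# round numbers quoted above.
stars_in_galaxy = 400e9        # total stars in the Milky Way
habitable_planets = 40e9       # Earth-like planets in the habitable zone
megastructure_limit = 1 / 1e6  # less than one megastructure per million stars

# If every habitable planet produced a megastructure-building civilization:
expected_fraction = habitable_planets / stars_in_galaxy
print(f"Expected fraction of stars with megastructures: {expected_fraction:.1f}")  # 0.1, i.e. 1 in 10

# Maximum number of megastructures allowed by the observational limit:
max_megastructures = stars_in_galaxy * megastructure_limit
print(f"Upper limit on megastructures in the galaxy: {max_megastructures:,.0f}")   # 400,000

# Implied upper limit on super-intelligent civilizations per habitable planet:
civs_per_planet = max_megastructures / habitable_planets
print(f"Less than one civilization per {1 / civs_per_planet:,.0f} habitable planets")  # 100,000
```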

Hypothesis 2 (super-intelligence comes before space travel) could be wrong if, for some reason, interstellar travel and/or the engineering capability to build megastructures could be achieved without artificial intelligence. Considering what was said earlier, i.e., that our civilization is on the verge of developing super-intelligence, building megastructures or performing interstellar travel without first developing computers seems impossible. Indeed, any megastructure would need careful planning and massive calculations, and it would almost certainly require numerical simulations to proceed with construction without incurring major disruptions. Even if a civilization consisted of 100 billion individuals all working in perfect coordination on such a project, it could not match the computational power of even our modern computers. A similar objection would hold even if such an alien civilization were made up of individuals vastly more intelligent than us. Indeed, it is virtually impossible for any biological computational system to even remotely match what a machine is capable of, since machines are limited only by the laws of physics rather than by those of evolution and biology. Our own brain is an example: it is limited in size (due to evolution) and in the speed of signal transmission (due to its biological nature). And of course, one could legitimately ask why such an incredibly intelligent individual, assuming it exists, would not be capable of developing computers. Another possibility is that super-intelligence might bring such enormous advantages and vast knowledge to a society that exploring other planetary systems becomes nothing more than a boring hobby. However, any society that runs a super-intelligence and has managed to control it for its own purposes and goals will eventually run into the problem of exhausting its resources, or will need more power to sustain its growth. This again would lead either to interstellar travel or to the building of megastructures. Hypothesis 2 therefore seems unlikely to be wrong, unless we have overlooked some fundamental element in this discussion.
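
As a crude illustration of this point, suppose (purely as an assumption for this sketch) that each individual can carry out about one arithmetic operation per second; even 100 billion perfectly coordinated individuals would then fall short of a single modern processor by a couple of orders of magnitude.

```python
# Crude comparison between a hypothetical civilization of 100 billion
# individuals and a single modern machine. The one-operation-per-second
# figure per individual is an assumption made only for this illustration.
individuals = 100e9
ops_per_individual = 1.0                              # assumed operations per second per individual
civilization_ops = individuals * ops_per_individual   # ~1e11 ops/s for the whole civilization

modern_gpu_flops = 1e13                               # order of magnitude for a current high-end GPU
print(f"Civilization: {civilization_ops:.0e} ops/s vs single GPU: {modern_gpu_flops:.0e} ops/s")
print(f"The machine is roughly {modern_gpu_flops / civilization_ops:.0f}x faster")    # ~100x
```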

Hypothesis 3 (super-intelligence enables space travel and megastructures) is perhaps the most interesting one, since it might truly be telling us something about the features a super-intelligence must have. If achieving super-intelligence does not unlock the ability to travel between stars, or to manipulate the environment in ways visible from other stars, then there must be something wrong with our expectations of what super-intelligence would look like and perhaps, taking a step back, with our understanding of what intelligence itself truly is.

The lack of evidence for megastructures and interstellar travel also challenges hypothesis 1 (life and civilizations are common), but I will refrain from discussing its consequences in detail, since there is ample material in the literature and at the moment I do not have anything new to add to the topic. The only case worth mentioning is the one in which just one out of the 40 billion potentially habitable planets hosts an advanced civilization that achieves super-intelligence (besides ours, of course). In that case, the civilization must have built fewer than 400,000 megastructures in our galaxy (i.e., the roughly 400 billion stars in our galaxy divided by the one million stars per megastructure allowed by the observations). This number, however large it might seem, is actually exceedingly small. Suppose a civilization takes a long time, say a million years, to colonize a new planet, build a megastructure, and then send out a daughter colony to seek the next planet. If the same growth process that creates the need for a new planet continues at the same pace indefinitely, each newly colonized planet will in turn produce a daughter colony every million years. In short: 1 colony after 1 million years, 2 colonies after another million years, then 4, 8, 16, and so on for each following million years. Anyone familiar with the wheat and chessboard problem knows that this is a geometric progression with a common ratio of 2, which explodes after a few iterations. To reach 400,000 colonies one needs only 19 doublings, and thus such a civilization should become visible (to us at least) within about 20 million years. This is of course a tiny timescale compared to the age of our galaxy (about 13 billion years).
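
The doubling argument is easy to verify with a few lines of code; this sketch simply reproduces the arithmetic above.

```python
import math

# Number of million-year doublings needed for a single colony to grow
# into 400,000 colonies, as in the argument above.
target_colonies = 400_000
doublings = math.ceil(math.log2(target_colonies))   # 19
print(f"Doublings needed: {doublings}")
print(f"Visible within roughly {doublings + 1} million years")  # ~20 million years

# The same progression written out step by step: 1, 2, 4, 8, ... colonies.
colonies, steps = 1, 0
while colonies < target_colonies:
    colonies *= 2
    steps += 1
print(f"{colonies:,} colonies after {steps} doublings")  # 524,288 after 19
```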

Finally, if hypotheses 1-3 are correct, then hypothesis 4 must be wrong, which leads to the doom scenario. This last hypothesis has also been discussed in several books and in a large body of literature on existential risk, so I will not repeat those arguments here (see, for example, the Great Filter argument). It is worth mentioning, however, that given our current understanding of technology and of the presence of life in the Universe, hypotheses 1 and 4 seem the most likely to be wrong among the four. Since this topic is vast and has many ramifications, I will try to publish some further discussion and analysis in a future post.