Will we engineer our own downfall?


Staff member
The rate of progress in gene technology (anyone have the right term for this?) suggests to me we will inevitably create our own unique lifeforms. In fact, it could be argued that just modifying our own genes means this is happening already.

However, what happens when things go astray? Could we create something that takes us down? We talk about artificial intelligence being a danger. Well, what about a biological form of artificial intelligence, where we have engineered a super-intelligence with a huge brain, nasty octopus-like legs, and a venomous sting? I am sure there are plenty of malicious people out there who would be happy to create such an organism.

Are we on the verge of breakthroughs in science that will ultimately kill us all?


Founding Member
Genetic engineering has already created genetically modified organisms in the plant kingdom. Take a look at various grain-based snacks that tout "Non-GMO" as one of their merits. These GMO life forms are examples of playing around with genetic code. In other threads we have discussed two issues related to this process: The engineering admonition of "if it ain't broke, don't fix it" and the more philosophical "Idle hands are the devil's workshop" thread.

There have already been plenty of protests over whether GMOs are a threat. I honestly don't know enough about that level of biochemistry to talk about the actual process, but the result is easier to understand. If you can identify a particular detrimental gene and replace it with a benign one, or identify a particularly desirable gene and use it to replace a neutral one, the net result should be good. The problem is that since we don't fully understand how the genetic code functions at that level, we don't know whether the replacement has side effects. And it is the side effects that are the bête noire of this process. The catch is that we won't know about the side effects until the genetic engineering has already occurred.

Therefore, in direct answer to your question, I must reply: a distinct "Maybe." :confused:
Are we on the verge of breakthroughs in science that will ultimately kill us all?
We don't know the answer, of course, but there's a concept called the Great Filter that deals with a similar question: why discovering alien life could actually be bad news for us.

The concept originates in Robin Hanson's argument that our failure to find any extraterrestrial civilizations in the observable universe implies something may be wrong with one or more of the arguments, drawn from various scientific disciplines, that the appearance of advanced intelligent life is probable. This observation is conceptualized as a "Great Filter," which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species with advanced civilizations actually observed (currently just one: humans). This probability threshold, which could lie behind us (in our past) or ahead of us (in our future), might work as a barrier to the evolution of intelligent life, or as a high probability of self-destruction. The main counter-intuitive conclusion is that the easier it was for life to evolve to our stage, the bleaker our future chances probably are.
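The "filter" idea can be illustrated with a toy calculation. Note that every step name and probability below is invented purely for illustration; nothing here comes from Hanson's actual estimates. The point is only that even fairly generous per-step odds multiply down to almost nothing:

```python
# Toy illustration of the Great Filter: the number of candidate sites
# shrinks multiplicatively at each evolutionary step. All step names
# and probabilities are hypothetical, chosen only for illustration.

steps = [
    ("habitable planet forms",      0.1),
    ("abiogenesis occurs",          0.01),
    ("complex cells evolve",        0.1),
    ("intelligent life arises",     0.01),
    ("technological civilization",  0.1),
]

sites = 1_000_000_000  # hypothetical starting pool of candidate sites

for name, p in steps:
    sites *= p
    print(f"{name:28s} -> {sites:,.0f} sites remain")
```

Starting from a billion sites, these made-up odds leave roughly a hundred civilizations, and slightly harsher odds at any single step leave none. That is the shape of the argument: one sufficiently improbable step, wherever it sits, is enough to explain the silence.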
There's also a YouTube video on this subject (9 minutes long).