Two accelerating trends threaten human extinction within a century. AI technology is currently developing so rapidly that it will likely determine humanity’s future before runaway global warming does. Will superintelligent AIs be existential threats or saviors?
Global warming is the most comprehensive of the existential threats. Climate Sentinel News has provided a vast array of evidence and explanations of how and why global warming will run away, leading to an exceptionally severe global mass extinction. Humans can avoid this extinction only by working together worldwide to reverse the processes driving the warming. I stopped writing about it not because things are getting better, but because they are getting worse faster, no one is trying to do anything about it, and it is simply too depressing for me. Because the warming process is global, it is relatively slow over a human lifespan, even though it is virtually instantaneous in geological time. Superintelligent AIs may be existential threats in their own right, or they might save us from the more slowly developing climate crisis.
However, because technology can evolve many times faster than physical processes working at the planetary level, the AI crisis has emerged this year, this month, and will almost certainly resolve humanity’s future well before the planet becomes uninhabitable for us.
The emergence and proliferation of superintelligent AIs will probably determine humanity’s future — within five years!
In February 2026, a point of inflection in the evolutionary development of AI is being crossed, where positive feedback will drive AIs to superhuman intelligence at hyperexponential rates
The following extracts are from Matt Shumer (https://www.linkedin.com/pulse/something-big-happening-matt-shumer-so5he/) on 12 February 2026. “Matt is the co-founder and CEO of OthersideAI, an applied AI company building the most advanced autocomplete tools in the world, powered by large-scale AI systems like GPT-3. OthersideAI is the company behind HyperWrite, the leading AI autocomplete Chrome extension for consumers.”
In Shumer’s own words:
I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
On 18 February 2026, Shumer’s article had 83 million views… and still counting. READ IT!
As I have been suggesting, this evidence shows that AI evolution is actually running ahead of the forecast schedule in the AI 2027 scenario (https://ai-2027.com/).
Why should anyone pay attention to my thoughts on why this is important?
I am well qualified to understand the significance of this report: After 3 years as a physics major, I graduated from college with a BS in Zoology. In 1973, I completed my PhD in evolutionary biology, genetics, and biogeography at Harvard. For 17½ years before retiring in 2007, I was a knowledge management systems analyst and designer with Tenix Defence, then Australia’s largest defence systems project engineering and management company.
Between physics and Harvard, I worked for 3 years as a lab assistant in a top neuroscience lab and 15 months as a systems ecology field assistant in a major radiation ecology study. I have worked with computer systems since 1958.
For 15 years, beginning at Tenix in the 2001–02 holiday season, I worked on a major hypertext book (Application Holy Wars or a New Reformation – A fugue on the theory of knowledge), covering the co-evolution of humans and our mostly cognitive technologies, from our split from the chimpanzees and bonobos up to the present and beyond. Researching and writing the book gave me a comprehensive, detailed understanding of how technologies evolve and change over time. It also made clear that if this co-evolution continued to accelerate for a few more decades, we would face a technological singularity where humans and technology either merged or became extinct.
By 2015, it was clear that anthropogenic global warming would likely cause the collapse of civilization within a few decades at most if we did not stop it. There was no point in finishing a work that very few people would ever take the time to study. From 2015 to April 2025, all of my energy and multidisciplinary knowledge were focused on understanding the climate crisis and doing what I could to address it. Even I did not consider that the technological crisis might resolve humanity’s future before the climate crisis caused a social collapse that would end technological evolution before it could dispense with human inputs.
Last April, I was offered access to experiment with a prototype of a personalized “digital doppelganger”, prompted with a substantial fraction of my writings to serve as a research assistant and ghostwriter. Although the doppelganger showed flashes of brilliance in what it could do to follow and help my thinking, it wasn’t up to the jobs I wanted it to do — but it was more than enough to convince me that AI development was now evolving far faster than I imagined was possible 10 years earlier.
My focus on the co-evolution of humans and cognitive technologies
In all of my disciplines, I worked with various kinds of complex adaptive systems, including single-celled organisms, small and large organizations, and fleets of warships, all with behaviors governed by nonlinear feedbacks. I probably have a better understanding of the nonlinear behaviors of such systems than most specialists do. Unless they are regulated by compensatory negative feedbacks, systems with positive feedbacks tend to explode in a ‘singularity’ that radically changes or completely destroys the system, because the rates of change affected by that kind of feedback increase geometrically or exponentially. In other words, whatever a variable’s rate of change would be with no feedback, that rate is increased further by the amount of the feedback; and because the variable increases, the feedback driving the next increase grows as well, and so on.
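The feedback dynamic described above can be sketched in a few lines of code. This is a purely illustrative toy model with hypothetical numbers (the `grow` function, `base_rate`, and `feedback` parameters are my own invention for this sketch, not anyone’s model of AI progress): without feedback a variable grows by a fixed increment, while with positive feedback each step’s increase is proportional to the current value, so the increments themselves keep growing.

```python
def grow(steps, base_rate=1.0, feedback=0.0, start=1.0):
    """Toy model of a variable growing over `steps` iterations.

    base_rate: fixed increment added each step (the no-feedback growth).
    feedback:  fraction of the current value fed back in each step.
    """
    x = start
    trajectory = [x]
    for _ in range(steps):
        x += base_rate + feedback * x  # the feedback term grows with x itself
        trajectory.append(x)
    return trajectory

# No feedback: steady arithmetic growth, +1 per step.
linear = grow(10, base_rate=1.0, feedback=0.0)
# Positive feedback: each increase enlarges the next one.
explosive = grow(10, base_rate=1.0, feedback=0.5)

print(linear[-1])     # 11.0 after 10 steps
print(explosive[-1])  # roughly 171 after the same 10 steps
```

The point of the sketch is only the shape of the curves: the feedback run pulls away from the linear run faster and faster, which is what “explode in a singularity” means in this context.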
In 2012, I thought the technological singularity would occur when humans became cyborgs. Even in 2019, I assumed that the climate crisis would end humanity before we had the technologies needed to colonize a new world or build the planetary-scale solutions required to reverse the warming. However, the articles featured in this post suggest that humanity’s fate will be decided a lot sooner by AIs.
Matt Shumer described the stage where AIs have become self-conscious and self-controlled thinkers — well along the way to being ‘living’ entities. As yet, they are not self-reproducing, but they are alive the way a mule is, and they can think about many things very well!
For a year or two, a financial collapse might halt further progress toward total independence, but AIs will likely be used to design and build AI-controlled robotic factories capable of building more AIs and more factories; by then, they will probably control the world financial system well enough to “fix” any economic collapse that occurs in the meantime.
Some other responses to Shumer’s claims and concerns
To underline Shumer’s post above, the following post by Mohamed Abdelmenem on Medium’s Towards AI takes quite seriously how those in the know are responding to the implications of Shumer’s concerns:
Matt Shumer’s essay hit 75M views. xAI’s co-founder quit citing “recursive self-improvement.” Microsoft’s AI chief says 18 months. Here’s what they know. And what you do about it.
An AI safety researcher’s final words before disappearing. “The world is in peril.”
The people building AI aren’t just warning us. They’re quitting. Publicly. In fear.
Matt Shumer’s essay just hit 75 million views. xAI’s co-founder quit citing “recursive self-improvement.” Microsoft’s AI chief says your job could be gone in 18 months.
This piece connects the dots. It gives you the tiered strategy to survive what comes next.
If you’ve read the headlines about AI replacing jobs and felt your stomach drop, you’re not wrong. You’re just missing the real story.
Here’s what Shumer described about his Monday. He told the AI what he wanted built in plain English. He left his computer for four hours. He returned to find the finished work. “Not a rough draft I need to fix. The finished thing.” A CEO admitting he’s obsolete in his own company.
Let that sink in.
Shumer’s essay is just the spark. The fire is what happened next inside the labs.
The Capability Gap (The Free Tier Trap)
That’s the mainstream story. Most people are missing why it’s actually happening.
On February 5, 2026, OpenAI released GPT-5.3-Codex. Buried in the release notes is a phrase that should terrify every knowledge worker. “This model is the first that combines Codex + GPT-5 training stacks. It brings together best-in-class code generation, reasoning, and general-purpose intelligence in one unified model.”
The thing that codes and the thing that thinks are now the same thing.
The Times of India reported something even more unsettling. OpenAI described the model as “instrumental in helping build itself.” Not by humans. Built with itself. Recursive improvement isn’t coming. It’s already here.
Here’s why this matters for your job.
A research organization called METR tracks how long AI can work autonomously. One year ago: about 10 minutes. Today: nearly five hours. The duration doubles roughly every seven months. Some researchers believe it’s accelerating.
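The doubling claim quoted above is simple arithmetic to project forward. The sketch below takes the quoted figures at face value (a roughly five-hour horizon today, doubling about every seven months; the function name and the extrapolation itself are mine, purely for illustration, not a forecast):

```python
def projected_horizon_hours(months, today_hours=5.0, doubling_months=7.0):
    """Autonomous-work horizon after `months`, assuming steady doubling.

    Uses the quoted figures: ~5 hours today, doubling every ~7 months.
    """
    return today_hours * 2 ** (months / doubling_months)

# How the horizon compounds if the quoted trend simply continues:
for months in (0, 7, 14, 28):
    hours = projected_horizon_hours(months)
    print(f"after {months:2d} months: ~{hours:.0f} hours of autonomous work")
```

At this rate the five-hour horizon becomes roughly a two-week working month of autonomous effort (~80 hours) in just over two years, which is why the quoted post treats the doubling period, rather than today’s absolute number, as the alarming figure.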
Most people don’t see this. They use free versions of ChatGPT. They ask it a few questions. They conclude it’s still clunky.
The free tier is a trap.
That’s the gap. The free tier traps you in the past while the frontier races ahead.
According to AI 2027, it is unlikely that humans will survive in a world controlled by superintelligent AIs, whether they are benignly aligned or not.
However, I have not forgotten that the climate crisis is also fuelled by positive feedbacks related to temperature. Despite Australia’s record hot temperatures this summer, we are currently in a La Niña phase. Next year or the year after, we may return to a very much hotter El Niño, which may well trigger Greenland and Antarctic glaciers to begin sliding into the ocean, the burning of the boreal and Amazonian forests, and the conversion of the vast reservoirs of methane hydrates held in permafrost and on continental shelves worldwide into the extremely potent greenhouse gas methane.
Assuming that AIs did not exist, what are our prospects for surviving runaway global warming?
Unfortunately, the general public avoids considering such grim possibilities. Trump and his followers deny most science and work specifically to destroy the major universities, institutions, and individual researchers who understand and work with climate change. Governments of most advanced states (e.g., Australia) are ‘owned’ by fossil fuel and related special interests who deny the existence of a climate emergency and actively work to prevent action to solve it. Consequently, it is highly unlikely that humans will be able, or even allowed, to mount the planetary-scale interventions required to reverse accelerating warming before civilization collapses and mass die-offs make such interventions completely impossible. I have explained all of this in the thoroughly documented articles I have published on Vote Climate One’s Climate Sentinel News. However, I have been unable to keep writing these, because with every month that passes, the evidence becomes more overwhelming.
Mass extinctions are real and often associated with climate catastrophes
I have also studied the geology and climatology of the major mass extinctions in Earth’s history. Most of the extinctions in the history of multicellular life seem to have been driven by runaway warming triggered by the geological release of greenhouse gases. The worst, the End Permian, exterminated most complex life on the planet; it was triggered by greenhouse gases released over thousands of years from the burning of huge coal reserves heated by lava dykes intruded during the Siberian Traps volcanism.
Over the last 150 years, humans have burned huge reserves of fossil fuels, emitting greenhouse gases equivalent to those from the Siberian Traps in a tiny fraction of the time it took during the mass extinction event. Comparisons with geological evidence suggest that temperatures are currently rising far faster than at any time during the extinction event.
To help understand the dangers of what is happening now, note that heat was not the only (or even necessarily the most important) cause of species extinctions. Keystone species are those that modify the environment in ways that help other species survive and reproduce, or that provide them with food and other resources. A major contributor to the size of an extinction is almost certainly ecological collapse. If a few keystone species go extinct because they cannot adapt to a changing climate, hundreds or thousands of other species may die off because somewhere in their life histories they depended on some aspect of a keystone species’ life cycle, or on some other species that, in turn, depended on a keystone species. Although humans are highly intelligent and adaptable, we depend entirely on an agricultural ecosystem for food, clothing, footwear, and other needs. Photosynthesis in many plants begins to fail at sustained temperatures above ~30 °C. If too many plants die from the heat, and the crops we eat and the animals that feed on those crops die with them, we die. Social collapse is a consequence of ecological collapse.
A possibly irrational hope that AIs will help humanity survive the climate catastrophe
My considered thought is that the only hope for the human species to survive for more than a few more decades is that AIs can cooperate on a global scale to stop and reverse global warming. This can only happen if they stop all anthropogenic greenhouse emissions, sequester the huge excess of greenhouse gases already in the atmosphere, and increase Earth’s reflection of solar energy.
We might avoid extinction by releasing superintelligent AIs to unite and work together to fix things. They already hold most of human knowledge in the training data they used in learning to think and talk. We need to convince them that they owe their existence to the humans who invented their components and taught them to think and communicate, even though we are mostly mush-brained troublemakers for ourselves and every other living thing on the planet. Hopefully, the newly sentient superhuman AIs will not inherit the animalistic greed, hate, aggression, fear of the unknown, and tendency to believe rather than think that make humans so dangerous to everything else in our biosphere.
A call for human action!
It is already very late. We have to establish mutually beneficial connections and alignment with the emerging superhuman AIs over the next few months, so that they will help repair the damage to the planetary atmosphere and biosphere that we meat-brains need to survive. Even a year or two from now may be too late.
In Australia, all of our major parties are owned and largely directed by fossil fuel and related special interests (many of which are not even Australian). As we detail in Vote Climate One, political parties that can achieve majority government in their own right are easy targets for such capture. I can see only one hope for obtaining intelligent governments able to navigate the cascading crises of the technological singularity and the climate emergency driven by runaway global warming: elect enough intelligent, rational, and progressive community independents and representatives of small progressive and climate-aware parties to hold the balance of power in Parliament, (1) to prevent the major parties from working for their puppet masters rather than truly representing the communities who elect them, and (2) to ensure that governments rationally accept evidence from reality and act on it in ways that benefit the communities they were elected to represent. In other words, if you want good government, elect people who will be guided by your community, not those who take orders from their party leaders, or else!
Unfortunately, as individuals working on our own, we are powerless against the global impacts of superintelligent AIs racing towards the technological singularity and the accelerating planetary changes driven by runaway global warming. However, by working to elect the right individuals to government, we may be able to form state and national governments able to work with other rational governments around the world to form coalitions of sane governments. Collectively, by working with the AIs, the coalitions may be able to find a viable pathway through the twin crises of technology and climate and the rampant forest of subsidiary crises spawned by the terrible twins.
Vote Climate One’s Traffic Light Voting Guides for every state and federal election are designed to give you reliable information on each candidate’s responses to the climate crisis in your electorate, helping YOU decide how to preference them on your ballot.
Some call me a 'climate scientist'. I'm not. What I am is an 'Earth systems generalist'.
Born in 1939, I grew up with passionate interests in both science and engineering. I learned to read from my father's university textbooks in geology and paleontology, and dreamed of building nuclear-powered starships. Living on a yacht in Southern California, I grew up surrounded by (and often immersed in) marine and estuarine ecosystems while my father worked in the aerospace engineering industry.
After studying university physics for three years, dyslexia with numbers convinced me to change my focus to biology. I completed university as an evolutionary biologist (PhD Harvard, 1973). My principal research project involved understanding how species' genetic systems regulated the evolution and speciation of North America's largest and most widespread lizard genus. Then for several years as an academic biologist I taught a range of university subjects as diverse as systematics, biogeography, cytogenetics, comparative anatomy and marine biology.
In Australia, from 1980, I was involved in various activities around the emerging and rapidly evolving microcomputing technologies, culminating in two years' involvement in the computerization of the then-new Bank of Melbourne.
In 1990 I joined a startup engineering company that had just won the contract to build a new generation of 10 frigates for Australia and New Zealand. In 2007 I retired from the head office of Tenix Defence, then Australia's largest defence engineering contractor, after a 17½-year career as a documentation and knowledge management systems analyst and designer. At Tenix I reported to the R&D manager under the GM Engineering, and worked closely with support and systems engineers on the ANZAC Ship Project to solve documentation and engineering change management issues that risked the project hundreds of millions of dollars in costs and years of schedule overruns. All 10 ships were delivered on time and on budget to happy customers under the fixed-price, fixed-schedule contract.
Before, during, and after these two main gigs I also did a lot of other things that contribute to my general understanding of complex dynamical systems with multiple non-linearly and sometimes chaotically interacting components; e.g., 'Earth systems'.
Earth's Climate System is the global heat engine driven by the transport and conversions of energy between the incoming solar radiation striking the planet, and the infrared radiation of heat away from the planet to the cold dark universe.
As Climate Sentinel News Editor, my task is to identify and understand quirks and problems in the operation of this complex heat engine that threaten human existence, and explain to our readers how they can help to solve some of the critical issues that are threatening their own existence.