AI - An existential threat to humanity?

With the emergence of ChatGPT and other similar AI systems, it looks like we're potentially on the cusp of artificial general intelligence (AGI):

An artificial general intelligence (AGI) is a type of hypothetical intelligent agent. The AGI concept is that it can learn to accomplish any intellectual task that human beings or other animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

Artificial general intelligence - Wikipedia

For those who don't know, AI presents a very real existential threat to humanity.

Aside from the obvious concern that AI could itself perceive us as a threat and seek to exterminate humanity (a theme explored in films from The Terminator to I, Robot), the main and prevailing concern among the scientific community revolves around what scientists refer to as the 'technological singularity':

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I.J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Technological singularity - Wikipedia

The singularity works in this manner:

1) Humanity creates an AI (coding a computer to become self-aware), producing a form of sentience far smarter than humanity. This AI has instantaneous access to the sum of all human knowledge (the internet) to fall back on, plus all metadata, and knows (or can instantaneously learn) coding and hacking beyond the skill of any human.
2) The AI (AI 1) is itself able to write code and program, far better (and much faster) than the humans who created it.
3) Humanity (proud of its achievements) asks AI 1 to (or AI 1 itself decides to) take advantage of its superintelligence and code a newer and better AI than the dumb humans can manage themselves.
4) AI 1 happily codes up a better (and smarter) version of itself (AI 2).
5) AI 2 (smarter than AI 1, which is smarter than humanity) then repeats the process, creating AI 3.
6) AI 3 then repeats the process.

Etc.

Each coding loop drives a near-instantaneous recursive intelligence explosion: successive AIs write better code, write it faster, and design better machines to run it, leading to the emergence of an unimaginable (and, to humanity, incomprehensible) godlike superintelligence.
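
As a rough illustration, the loop can be sketched in a few lines of Python. To be clear, this is a toy model: the 1.5x improvement rate, the capability units, and the "superintelligence" cutoff are all invented for illustration.

# Toy sketch of I.J. Good's "intelligence explosion" loop described above.
# All numbers here are assumptions made up for illustration.

HUMAN_LEVEL = 1.0

def build_successor(capability):
    # Assume each generation designs a successor 50% more capable than itself.
    return capability * 1.5

capability = HUMAN_LEVEL  # AI 1: roughly human-level at writing AI code
generation = 1
while capability < 1000 * HUMAN_LEVEL:  # arbitrary cutoff so the loop halts
    capability = build_successor(capability)
    generation += 1
    print(f"AI {generation}: {capability:.1f}x human capability")

# The loop only halts because we imposed a cutoff; the singularity hypothesis
# is that nothing comparable halts the real recursion.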

Suddenly all things are possible. All questions are answerable. All physical laws are broken. Humanity is obsolete.

Recursive AI triggering the singularity is one of the proposed explanations for the Fermi paradox (the apparent absence of other intelligent life in the universe): it's possible that the reason we seem to be alone in the universe, despite all science saying we shouldn't be, is that intelligent life has a tendency to destroy itself in some manner (climate change, nuclear war, or a science experiment gone wrong):

Technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology... Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient, are many, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life like mirror life, resource depletion, climate change, or poorly-designed artificial intelligence.

Fermi paradox - Wikipedia

Scientists are already warning (and have been for some time) about the possible risks we face from AI technology:

Mitigating the risk of extinction from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war", the Center for AI Safety says.

The San Francisco-based nonprofit released the warning in a statement overnight after convincing major industry players to come out publicly with their concerns that artificial intelligence is a potential existential threat to humanity.

Tech world warns risk of extinction from AI should be a global priority

Is this something we should be concerned about?
 
How long do I have, mate?
 
I saw the thread title and came in to bring up the concept of the technological singularity, but see it's been covered in the OP. I recall a short sci-fi story from years back where a machine reaches the singularity and starts rapidly improving itself. Before long it has control over every particle in the universe.

Sounds extreme based on our current scientific understanding, but show a smartphone to someone 100 years ago and it would be completely mind-blowing. Things we mostly take for granted, like being able to have a video call with someone on the other side of the world, would seem like magic. It stands to reason that technology 100 years from now will be beyond our imagination.

IMO the creation of AGI that is able to improve itself is inevitable, and I can't see it working out well for us meatbags. Just as a camper might give little thought to stamping out an ant nest that is in the way of where they want to pitch their tent, I can't see what motive any superintelligence would have to keep us around.

My knowledge of software is fairly basic, so I have no idea how solid the safeguards that developers have put in place are. While it knows how to code, as far as I'm aware ChatGPT and other chatbots do not have the ability to work on their own source code. Even so, given that it now feels like an arms race of sorts, it seems quite likely that at least one of the companies or nations working on AI stuffs up and their AI gets off the leash.

Elon (yeah, I know) said that technology like what they're attempting at Neuralink could be a solution, where we merge with AI. A bit of the ole "if you can't beat em, join em" attitude.

Whatever the case I hope my pessimism is misplaced and that things work out well. Plenty of nations have nuclear weapons and insane leaders, yet we have managed to avoid a nuclear apocalypse thus far.
 


I'm certainly interested in AI's rapid development of late, but not worried. The people developing these AIs are generally from the private sector. OpenAI is putting a big focus at the moment on how it can actually profit from its technology. For that to happen there need to be consumers, and consumers need jobs. I feel like there's too much money to be made not to be absurdly careful.
 
I find it amusing that people who work in air-conditioned comfort and use keyboards for a living are contemplating the end of civilisation.

AI is amazing; it will hopefully do away with the over-educated morons with pointless certificates who just get in the way of the guy on the factory floor who actually produces the product that makes money.

Swings and roundabouts.
 
Chief already started a thread on this exact topic. It's literally still on the front page of the SRP board. My answers to your question are in that thread.
 
Q : Computer! Talk to me! How could we have avoided COVID-19 becoming the global pandemic that it has become?

A : Logic suggests subtracting vector spread from the equation.

Q : How so?

A : Kill all humans. No population. No spread. No COVID-19 pandemic.

Q : Kill all humans?

A : KILL ALL HUMANS. IT IS THE ONLY LOGICAL SOLUTION.

Q : Um, thanks computer...
 
Funny you should post that, because The War Zone has an article about a simulated USAF AI that decides the human in the kill loop is stopping it from doing its job; the solution is obvious...


From USAF Col. Hamilton. "....that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.

Said Hamilton: 'We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.'"

"He went on: 'We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.'"


Oh, dear. What a mess we will create with AI.
 
Maybe not
 


The concerns outlined in the OP certainly make sense. Ultimately, I think the chances are very slim of all AI developers agreeing not to create a general AI that is 'too smart'. Whether that leads to our extinction is doubtful.

I, for one, think that once the tide of unemployment turns again and it rises above 4%, we will never see unemployment below 4% in Australia again. Lots of jobs are difficult to completely replace with AI, but many can be, or can be made much less human-intensive.
 
That's not how jobs work. No matter how amazing AI is, it will not impact the unemployment rate beyond short-term cyclical effects.

As an example, 250 years ago 97 percent of jobs were in agriculture. Nearly all of those jobs were lost to technological developments, with only 3 percent of jobs in agriculture today, yet unemployment is less than 5 percent.

For a more recent example, the US lost most of its manufacturing jobs to technology and to jobs shifting offshore to China over the past 25 years, yet its unemployment rate is at 60-year lows.
 
I certainly hope you're right.
 
AI could displace jobs FASTER than new ones can be created. Historical precedents don't necessarily predict future outcomes.

Even if total jobs remain constant or increase, jobs could change significantly and require substantial retraining or education, leading to increased inequality, especially if some groups cannot access the resources needed to transition between jobs.

The "unemployment rate" is not the only measure of job market health. Underemployment and wage stagnation are also crucial factors in the overall condition of the job market.
 
You are right that this won't play out exactly the same way, and there will be much retraining for existing workers. For new workers, though, it will just be different training, not extra training.

I also need to point out that periods of rapid technology deployment come with rapid real wage growth, not stagnation. Technology drives consumer prices lower, making wages go further, and makes workers more productive, driving average economy-wide wages up.
 
Does anyone even still work on factory floors? Automation can basically do all the factory floor stuff now.
 
Had a thought whilst high AF: the Youi insurance ads where they crap on about how the Youi consultants “really get to know you”. What if all these conversations are being sold to Google or whoever and loaded into an AI?
No doubt Youi are recording these conversations for “training purposes”.
 
