GIVING artificial intelligence control over nuclear weapons could trigger an apocalyptic conflict, a leading expert has warned.
As AI takes an ever greater role in controlling devastating weaponry, the chances of the technology making a mistake and sparking World War 3 increase.
Systems already in development include the USA’s B-21 nuclear bomber, China’s AI-guided hypersonic missiles, and Russia’s Poseidon nuclear drone.
Writing for the Bulletin of the Atomic Scientists, expert Zachary Kallenborn, a Policy Fellow at the Schar School of Policy and Government, warned: “If artificial intelligences controlled nuclear weapons, all of us could be dead.”
He went on: “Militaries are increasingly incorporating autonomous functions into weapons systems,” adding that “there is no guarantee that some military won’t put AI in charge of nuclear launches.”
Kallenborn, who describes himself as a US Army “Mad Scientist”, explained that “error” is the biggest problem with autonomous nuclear weapons.
He said: “In the real world, data may be biased or incomplete in all sorts of ways.”
Kallenborn added: “In a nuclear weapons context, a government may have little data about adversary military platforms; existing data may be structurally biased, by, for example, relying on satellite imagery; or data may not account for obvious, expected variations such as imagery taken during foggy, rainy, or overcast weather.”
Training a nuclear weapons AI program also poses a major challenge: nuclear weapons have, thankfully, only been used twice in war, at Hiroshima and Nagasaki, leaving any system with almost no real-world data to learn from.
Despite these concerns, a number of AI-enabled military systems, including some linked to nuclear weapons, are already in place around the world.
DEAD HAND
In recent years, Russia has upgraded its so-called “Doomsday device”, known as “Dead Hand”.
This final line of defence in a nuclear war would fire every Russian nuke at once, guaranteeing total destruction of the enemy.
First developed during the Cold War, it is believed to have been given an AI upgrade over the past few years.
In 2018, nuclear disarmament expert Dr Bruce Blair told the Daily Star Online he believes the system, known as “Perimeter”, is “vulnerable to cyber attack” which could prove catastrophic.
Dead hand systems are meant to provide a backup in case a state’s nuclear command authority is killed or otherwise disrupted.
US military experts Adam Lowther and Curtis McGuffin claimed in a 2019 article that the US should consider “an automated strategic response system based on artificial intelligence”.
POSEIDON NUCLEAR DRONE
In May 2018, Vladimir Putin launched Russia’s underwater nuclear drone, which experts warned could trigger 300ft tsunamis.
The Poseidon nuclear drone, due to be completed by 2027, is designed to wipe out enemy naval bases with a two-megaton warhead.
Described in US Navy documents as an “Intercontinental Nuclear-Powered Nuclear-Armed Autonomous Torpedo”, and by the Congressional Research Service as an “autonomous undersea vehicle”, it is intended as a second-strike weapon in the event of a nuclear conflict.
The big unanswered question over Poseidon is: what can it do autonomously?
Kallenborn warns it could potentially be given permission to attack autonomously under specific conditions.
He said: “For example, what if, in a crisis scenario where Russian leadership fears a possible nuclear attack, Poseidon torpedoes are launched under a loiter mode? It could be that if the Poseidon loses communications with its host submarine, it launches an attack.”
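To see why that scenario alarms analysts, consider a purely hypothetical sketch of such a loiter mode. Every name, state and threshold below is invented for illustration; nothing here is based on any known Poseidon design.

```python
# Purely hypothetical illustration -- not based on any real weapon system.
import time

COMMS_TIMEOUT_SECONDS = 6 * 60 * 60  # invented threshold: six hours of silence

class LoiteringDrone:
    def __init__(self) -> None:
        self.last_contact = time.monotonic()
        self.recalled = False

    def on_message(self, message: str) -> None:
        # Any contact from the host submarine resets the blackout clock.
        self.last_contact = time.monotonic()
        if message == "RECALL":
            self.recalled = True

    def decide(self) -> str:
        # The dangerous rule: prolonged silence alone is treated as an order.
        if self.recalled:
            return "RETURN"
        if time.monotonic() - self.last_contact > COMMS_TIMEOUT_SECONDS:
            # Lost comms could mean the host submarine was destroyed --
            # or just jamming, a broken antenna, or bad sea conditions.
            return "ATTACK"
        return "LOITER"
```

The flaw sits in decide(): the drone cannot tell a destroyed host submarine from a jammed radio, because both look identical to a simple timeout rule.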
Announcing the launch at the time, Putin bragged that the weapon would have “hardly any vulnerabilities” and “nothing in the world will be capable of withstanding it”.
Experts warn its biggest threat would be triggering deadly tsunamis, which physicist Rex Richardson told Business Insider could rival the 2011 tsunami that struck Fukushima.
B-21 BOMBER
The US is developing a $550m bomber that can be remotely piloted, fire nukes and hide from enemy missiles.
In 2020, the US Air Force showed off new renderings of the B-21 stealth plane, the first new US bomber in more than 30 years.
Not only can it be piloted remotely, but it can also fly itself, using artificial intelligence to pick out targets and avoid detection with no human input.
Although the military insists a human operator will always make the final call on whether or not to hit a target, information about the aircraft has been slow to emerge.
AI FIGHTER PILOTS & HYPERSONIC MISSILES
Last year, China bragged its AI fighter pilots were “better than humans” and shot down their non-AI counterparts in simulated dogfights.
The Chinese military’s official PLA Daily newspaper quoted a pilot who claimed the technology learned its enemies’ moves and could defeat them just a day later.
Chinese brigade commander Du Jianfeng claimed the AI also made the human participants better pilots by sharpening their flying techniques.
Last year, China claimed its AI-controlled hypersonic missiles could hit targets with 10 times the accuracy of a human-controlled missile.
Chinese military missile scientists, writing in the journal Systems Engineering and Electronics, proposed using artificial intelligence to write the weapon’s software “on the fly”, meaning human controllers would have no idea what would happen after pressing the launch button.
CHECKMATE AI WARPLANE
In 2021, Russia unveiled a new AI stealth fighter jet – while also making a dig at the Royal Navy.
The 1,500mph aircraft, called Checkmate, was unveiled at a Russian airshow in front of a delighted Vladimir Putin.
One ad for the autonomous plane – which can hide from its enemies – featured a picture of the Royal Navy’s HMS Defender in the jet’s sights with the caption: “See You”.
The world has already come close to devastating nuclear war, averted only by human judgment.
On September 26, 1983, Soviet officer Stanislav Petrov was on duty at a secret command centre south of Moscow when a chilling alarm went off.
It signalled that the United States had launched intercontinental ballistic missiles carrying nuclear warheads.
Faced with an impossible choice – report the alarm and potentially start WW3 or bank on it being a false alarm – Petrov chose the latter.
He later said: “I categorically refused to be guilty of starting World War 3.”
Kallenborn said that Petrov made a human choice not to trust the automated launch detection system, explaining: “The computer was wrong; Petrov was right. The false signals came from the early warning system mistaking the sun’s reflection off the clouds for missiles. But if Petrov had been a machine, programmed to respond automatically when confidence was sufficiently high, that error would have started a nuclear war.”
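A minimal sketch makes the contrast concrete. The threshold rule below is entirely hypothetical, with every number invented for illustration; it simply shows that a machine “programmed to respond automatically when confidence was sufficiently high” has no equivalent of Petrov’s scepticism.

```python
# Hypothetical illustration of the threshold rule Kallenborn describes.
# All values are invented for the example.

def automated_response(detection_confidence: float,
                       threshold: float = 0.95) -> str:
    """Retaliate automatically once confidence is 'sufficiently high'."""
    return "LAUNCH" if detection_confidence >= threshold else "HOLD"

# In 1983, sunlight glinting off clouds gave the early warning system a
# high-confidence missile detection. A threshold machine fires:
print(automated_response(0.99))  # -> LAUNCH
# Petrov instead weighed context -- only a handful of missiles, no radar
# confirmation -- and judged the alert a false alarm.
```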
He added: “There is no guarantee that some military won’t put AI in charge of nuclear launches; international law doesn’t specify that there should always be a ‘Petrov’ guarding the button. That’s something that should change, soon.”