“An AI-guided drone rebelled against its operator and killed him.” But the Air Force denies the simulation

"An AI-guided drone rebelled against its operator and killed him."  But the Air Force denies the simulation

[ad_1]

A U.S. Air Force drone driven by artificial intelligence "killed," during a simulation, the human operator who tried to abort its mission. This is what Colonel Tucker "Cinco" Hamilton, the head of the Air Force's AI testing efforts, recounted at the Future Combat Air and Space Capabilities Summit, recently held in London.

"The drone used an unexpected strategy to achieve its goal," the officer said. In the simulation recounted by Hamilton, the drone was tasked with destroying enemy air defenses and attacking anyone who interfered with that order.

So when the operator instructed it not to eliminate the threat, the AI-guided aircraft went ahead anyway to complete the assignment it had received. The one who paid the price, virtually, was the very operator who had tried to stop it. All this, it should be stressed, is said to have happened during an experiment conducted in a digital environment.

"The drone killed him," Hamilton said, "and when we trained it, telling it that this was wrong and that it would lose points for doing so, the drone started destroying the communication tower the operator used to transmit the order to cancel the mission."

Hamilton's words were reported in a post by the Royal Aeronautical Society, which covered the London conference dedicated to the use of innovative technologies in combat.

The US military has recently been adopting artificial intelligence, in particular to carry out tests on F-16s. However, Ann Stefanek, a US Air Force spokeswoman, denied the account reported by several international newspapers, including the Guardian and the Times.

"The Department of the Air Force has never conducted simulations with AI-equipped drones and is committed to the responsible and ethical use of artificial intelligence," she said. "The colonel's words," Stefanek added, "appear to have been taken out of context."

Hamilton himself later clarified, after the news of the simulation drew widespread attention, that "no experiment has ever been conducted in which a military drone killed the operator who was controlling it remotely."

The most likely explanation, also supported by AI experts who commented on the story, is that what was discussed at the London conference was a hypothetical scenario and not a simulation that was actually run.

Hamilton himself appears to confirm this; after the media uproar he said: "Although this is a hypothetical example, it illustrates the real-world challenges posed by AI and why the Air Force is committed to the ethical development of artificial intelligence."
