Tech Report: It's About Time We Ban the Killer Robots!
If you want it done right, you've got to do it yourself. Ever heard that one?
What about "the early bird gets the worm"?
That would be more appropriate based on what I intend to discuss in this article. The future isn't as far away as you'd think. And just what type of future will we all have to look forward to?
Well, quite a bleak one assuming we don't make the right moves in the present. Now, I'll just let the cat out of the bag. My mind is stuck on the potential for total devastation the likes of which come straight out of a Terminator movie!
Fortunately, there is widespread public support for a ban on so-called “killer robots”, which campaigners say would “cross a moral line” after which it would be difficult to return.
The good news? Well, it seems a group of scientists has called for a ban on the development of weapons controlled by artificial intelligence (AI). Well, it's about time, eh?!
There's no doubt in my mind that autonomous weapons are far more dangerous than conventional ones because nobody is behind the wheel. They may malfunction in unpredictable ways and kill innocent people. Yeah, that sucks!
Beyond that, many ethics experts also argue that it is a moral step too far for AI systems to kill without any human intervention.
Recently, scientists weighed in on the issue at the American Association for the Advancement of Science (AAAS) meeting in Washington DC.
Human Rights Watch (HRW) is one of the 89 non-governmental organisations from 50 countries that have formed the Campaign to Stop Killer Robots, to press for an international treaty.
Among those leading efforts for the worldwide ban is HRW's Mary Wareham.
"We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," she told BBC News.
"They are beginning to creep in. Drones are the obvious example, but there are also military aircraft that take off, fly and land on their own; robotic sentries that can identify movement. These are precursors to autonomous weapons."
Ryan Gariepy, chief technology officer at Clearpath Robotics, backs the ban proposal.
His company takes military contracts, but it has denounced AI systems for warfare and stated that it would not develop them.
"When they fail, they fail in unpredictable ways," he went on to add.
"As advanced as we are, the state of AI is really limited by image recognition. It is good but does not have the detail or context to be judge, jury and executioner on a battlefield.
"An autonomous system cannot make a decision to kill or not to kill in a vacuum. The de-facto decision has been made thousands of miles away by developers, programmers and scientists who have no conception of the situation the weapon is deployed in."
According to Peter Asaro, of the New School in New York, this type of situation raises issues of legal liability if the system makes an unlawful killing.
"The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death.
"So it may well be that the people who made the autonomous weapon are responsible."
At a meeting in Geneva at the close of last year, this same topic was discussed and, sadly, the talks ended in a stalemate after nations including the US and Russia indicated they would not support the creation of a global agreement to ban autonomous killer robots.
Mary Wareham, co-ordinator of the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.
"By permitting fully autonomous weapons to be developed, we are crossing a moral line," Human Rights Watch warns.
Sadly, it seems that this is where we are headed, like a ship sailing through the night without a captain.
This was made evident when Russia, Israel, South Korea and the US indicated at the annual meeting of the Convention on Conventional Weapons that they would not support negotiations for a new treaty.
All of these nations, as well as China, are investing significantly in weapons with decreasing levels of human control.
Can we as a people stop a dystopian future full of killer robot wars from occurring? I really hope the answer is yes, for all of our sakes.
What are your thoughts on this topic?
Thanks for reading!
Authored by: Techblogger
Sources:
Call to ban killer robots in wars - BBC News
https://www.bbc.com/news/science-environment-47259889
World calls for international treaty to stop killer robots before rogue states acquire them - The Independent
Like all tech, we just need time to make it better and safer. Killer robots will be no different; we need to build in fail-safes to protect ourselves from them.
Russia, China and the US will never allow a ban on killer robots because they know any of them could build them in secrecy. With nuclear bombs you can trace the radioactivity of the weapon; with killer bots you wouldn't be able to, which makes them that much more dangerous!
We will probably see countries arming themselves with killer robots until there is a state of mutually assured destruction, and, with time, countries will start to reduce their stockpiles until only a few remain... Think of killer robots as the next nuclear weapons: at first there were many, and now the numbers have decreased!