System safeguards for A.I. are unlikely to make any difference

A decade ago, AI research wasn’t nearly as hot as it is now. But right now, in 2016, AI is very much a profitable endeavor. Many now argue that AI carries the risk of (a) mass unemployment, (b) mass political destabilization (for instance, mass abuse of intelligent drones by terrorists), or even (c) a hard take-off of self-improving AI triggering a so-called “singularity”, which, put very briefly, we might simplistically describe as “a point beyond which we don’t have a clue what happens next”. Specialists in the field agree: (a) is already happening and will get much worse, (b) is likewise already here or at most a decade away, whereas we shouldn’t be overly concerned (yet) that (c) will happen in the next 10 years.

Many people in the AI field are nonetheless ringing alarm bells that we should do something. The most commonly quoted argument (in particular by MIRI and the Future of Humanity Institute) is that we should invest a lot of resources in so-called “friendly AI”. I tend to agree, but looking at the world as it exists right now, there is ample evidence that even safety mechanisms designed to protect the very most vulnerable fail completely and publicly.

Here’s a very sad example:
– a situation where everyone agrees there is a serious problem
– a situation where the bad actor clearly and openly conspires against the legal system or framework
– a situation where all parties conspire to not deal with the problem
– a situation where the problem is widely exposed in the media
– a situation where expressing anger at the problem can get you jailed
– … and even then nothing happens, and we all proudly declare “the system works” while everyone else sees that the system has completely failed.

The problem with AI systems is that they are extremely profitable in the short run, and their profits tend to accrue to people who are already obscenely powerful and affluent. That essentially means we enter a RoboCop scenario where corporate control will almost certainly implement protections against loss of revenue. Take for instance the TPP and risk (a) above: imagine an automation corporation that offers other corporations vast benefits from robotization (and consequently lays off most workers as redundant). Under the Trans-Pacific Partnership legal framework, it is conceivable that if a country restricts such automation in favor of keeping more of its citizens employed, the corporation providing the automation service could sue that country for lost revenues.

I conclude that there are next to no reliable ways to protect against major calamities with AI. All existing systems are already openly conspiring against any such mechanism or infrastructure.

I suppose we’ll know before 2030 how things go, but looking at just how corrupt academia, legal systems, governments, and NGOs have become worldwide in the last few decades, I am not holding my breath.

And in case you were wondering about the example interjected above, look here (lousy grammar warning). Let them know what you think – and in doing so you’ll see a clear indication of how to address the hard take-off scenario: more people should know, start giving a damn, and speak out against it. Apathy can get us all into major trouble.