Is a Singularity Plausible? (ref – Extropia DaSilva)

Responding to my “sister blog”, by Extropia DaSilva, I have the following observations to share: (Hi Trophy!)

I will rewrite/clean up parts of this post over the next few days.

Technological Singularity: Is such a thing possible?

No doubt all of you have come across people who think not, but what does it actually mean to say technological singularities are impossible?

Well, what do we mean by the term? A technological singularity is defined as ‘the creation, by technology, of greater-than-human intelligence’. Technology works in close collaboration with science: the latter creates increasingly fine-tuned explanations of natural phenomena, which are then exploited by appropriate combinations of matter and energy to harness those phenomena and do useful work for individuals, groups, societies and civilization. Among other things, technologies include instruments for yet finer observations of natural phenomena, leading to yet more powerful technology.

A technological Singularity is based on the premise that general intelligence is an example of a natural phenomenon that can be studied and understood sufficiently well for technologies to be built that amplify it beyond the levels reached by natural selection of biological brains. To say it is impossible can mean one of two things. One is that the human brain is optimal: no artificial brain can ever improve upon it, or if it can be improved, the advantage is not noticeable enough to qualify. The other is that, yes, forms of general intelligence above and beyond human levels do exist conceptually, but we shall never achieve the level of science and technology required to harness this natural phenomenon and perform useful work with it.

It is worth remembering that the technological singularity need not be a near-term event. Although it is often talked about as something we should expect within decades, it could happen in a million years’ time, or a billion… in fact, at any time from now until the universe can no longer perform information processing (about 10^117 years from now). It might well turn out that we have not created a singularity within a few decades, but is it really plausible that greater-than-human intelligence will remain forever a fantasy? It seems likely that computers will exceed the computational and memory capacity of the human brain, and projects like Blue Brain and Ted Berger’s hippocampus chip are providing proofs of concept that brainlike computers and software can be built (although when a fully brainlike computer will be completed is not something I would like to estimate). Taken together, these suggest that ‘the singularity is impossible’ is an absurdly unlikely suggestion.
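As a back-of-envelope illustration of the ‘computers will exceed the brain’ claim (all three figures below are my own illustrative assumptions, not numbers from Extropia’s essay or from any settled science):

```python
import math

# Assumed ballpark figures -- illustrative only:
BRAIN_OPS_PER_SEC = 1e16    # one commonly cited rough estimate of brain capacity
MACHINE_OPS_PER_SEC = 1e13  # assumed raw capacity of a large machine today
DOUBLING_TIME_YEARS = 1.5   # assumed Moore's-law-like doubling period

def years_until_parity(start: float, target: float, doubling_years: float) -> float:
    """Years of steady exponential growth needed for `start` to reach `target`."""
    return math.log2(target / start) * doubling_years

print(years_until_parity(MACHINE_OPS_PER_SEC, BRAIN_OPS_PER_SEC, DOUBLING_TIME_YEARS))
# -> ~15 years under these assumptions; change any input and the date moves.
```

The point is not the specific date but the shape of the argument: parity follows from any sustained doubling, whatever the starting gap.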

The Singularity is not, or should not be, merely “greater-than-human intelligence”

The term Singularity is derived from the mathematical designation of ‘a result that transcends resolution by means of calculation’ (or so is my understanding).
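A hedged mathematical aside, my illustration rather than anything from Extropia’s essay: in analysis, a singularity is a point where a function stops behaving, for instance

$$f(t) = \frac{1}{t_0 - t}, \qquad \lim_{t \to t_0^{-}} f(t) = \infty$$

The closer t creeps to t_0, the faster f grows, and at t_0 the calculation simply stops returning answers. That divergence is the image the futurological usage borrows.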

Vernor Vinge rightfully applies this mathematical notion to futurological modelling, and postulates that artificial intelligence in particular has the potential to change the world more quickly than we as a species can ever hope to model, visualise, understand, analyze or anticipate. Hence Vinge postulates that after the emergence of superhuman intelligence, we no longer have a future we can understand. Beyond here be Dragons.

[youtube=http://www.youtube.com/watch?v=V3CmXGKXOmk&hl=en_US&fs=1]

It’s a simple conclusion with implications that instantly tumble over the conceptual horizon of human beings. Most people hear the words, and their mind shuts down. I am in the fortunate situation that I grasp the words, the premises and the implications to some extent more than most other humans do (or I think I do…), but I am pretty sure most people absolutely do not, or not to any degree they need to.

What’s worse, a shitload of people [who hear about it] are in staunch denial of the implications of the mere idea of a singularity, and that level of ignorance will come back to royally bite us all in the ass. For instance, we need more politicians to have an objective understanding of ‘technological acceleration’, ‘exponential growth curves’ and ‘doubling rates’. But I am a bit pessimistic in that regard.
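For illustration, a minimal sketch of what a ‘doubling rate’ actually implies; the two-year doubling time is an arbitrary assumption, not a measured figure:

```python
def grow(value: float, years: float, doubling_time: float) -> float:
    """Value after `years` of steady exponential growth."""
    return value * 2 ** (years / doubling_time)

# Anything doubling every two years is ~32x after one decade,
# ~1,000x after two and ~32,000x after three:
for years in (10, 20, 30):
    print(f"{years} years -> {grow(1.0, years, doubling_time=2.0):,.0f}x")
```

Linear intuition expects three times the change after three times the wait; a steady doubling rate delivers a thousandfold more. That gap is exactly what I wish more politicians grasped.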


On the topic of unavoidability, “natural saturation” or destiny
The Singularity assumes many things, not least that technology keeps expanding, and since I am a Gibsonian ‘metaphorist’ (and nowhere near an engineer) any assertion by me is at best that of a bard-like lay-person singing the hymns of battle while trying to stay out of trouble. Whatever the case, many people on the outside looking in at the field of accelerating change (who take the premises seriously) conclude one of five things –

(i) that something is desired and worked towards (or conspired towards by some other people – they ‘want to shove it down our throats’) – i.e. “Illuminati conspiracy for a Singularity” … or;

(ii) that some things are so set in their outcome they are ‘unavoidable’ (as in – Mount Vesuvius will one day erupt and kill many people in Naples, Tokyo will one day be hit by a major earthquake and many people will die, etc.);

(iii) that there is some cosmic destiny conspiring to tickle humanity towards a Singularity (this was G-d’s/Satan’s plan all along); and

(iv) that the singularity is a metaphor or another name for something else (i.e. the Rapture, Götterdämmerung, Maya 2012, whatever).

(v) (addendum) – that the whole singularity idea is so weird, the people who believe in it must be crazy or part of some cult (i.e. “I can’t handle this idea, so let’s by all means go into angry denial mode”. Nuff said).

I consider myself firmly in camp (ii), i.e. yes, there is some kind of accidental (and contingent) chain reaction of accumulating technology (happening at the same time as most of the gains of our civilization are collapsing to shit around us) which at this stage is almost certain to escalate into a runaway effect of epic proportions.

Note that I do not by any stretch of the imagination assert this runaway effect to be “utopian” or “good” or “being controlled” by anyone. Huh DALE?

I have frivolously labelled this something like an ‘accelerating technology runaway plateau event’, indicating that I simply don’t like the vagueness of the term “singularity”. That is vagueness from a ‘marketing’ standpoint, if I were in marketing. The term Singularity evokes all these vague and confusing mental images in ‘the easily impressed’.

In fact, the only ‘transhumanists’ who have spoken of the Singularity as desirable, and as something we should actively work towards by any means, are as far as I know Eliezer and Ben (a.k.a. sexy god I) (and they certainly think the phenomenon is technically possible), but they both do so with the most sincere of intentions, working on friendly A.I. And I am happy they do, because their efforts may literally save billions of human beings from extinction. Of course Alex Jones and the other assorted nitwits of the same family of far-right-field apopheniacs believe there is a corporate Illuminati trying to shovel this Transhumanism into the global subconscious. (As in – Marketing.)

Let me emphasize that (while Transhumanism is an interesting marketing opportunity for some corporations) I am pretty damn sure there is no such conspiracy.

[youtube=http://www.youtube.com/watch?v=yj_-sBNQKcQ&hl=en_US&fs=1]


I can’t say what this whole Singularity is or will be. Yes, I use the term now, with the same distaste as picking up feces while walking my dog. However, I have to. It’s an established term.

No, I do not know of any big Illuminati-like conspiracies shoving the Singularity down our throats, turning us all into the Borg. No, I am not sure the Singularity is completely unavoidable (but it has become a lot more certain with the emergence of the world wide web). So – was the emergence of the WWW certain in 1965? Was the emergence of a free WWW certain? I’d say no. (We could have ended up with a commercial, corporate, totally fenced-in Internet, or with no Internet at all. And if the WWW hadn’t happened, I don’t think we would have aborted every chance of an Internet.)

I just say, considering the facts, there may very well be many roads to Rome. What I am saying is, certain technologies will accumulate and generate runaway effects. From the perspective of a century ago, we are well into a ludicrous runaway effect right now. Some back then would have said we have come to live in an unacceptable world already, and if I look at the number of automobiles and the effects of oil this day and age – I’d agree.

[youtube=http://www.youtube.com/watch?v=PirH8PADDgQ&hl=en_US&fs=1]


I am sure “some kind of singularity” is pretty much unavoidable. However…

The singularity isn’t certain. “Some kind of uncanny event such as this” is plausible, and very plausible to occur between 2020 and 2045 (in my book). But many things can derail the ‘falling of the spark into the gunpowder’. It may rain! Staying with the same crooked metaphor, I can see the gunpowder accumulating all around me.

An interesting detail is that there IS a trickle-down of singularitarian memes happening all around us. A cultural Ragnarök narrative is already forming in modern Western culture. This narrative goes something like this –

* Humans build robots that replicate the jobs of mankind (1, 7, 8, 12, 13, 14);
* Mankind is lazy, decadent, corrupt, apathetic, does many wicked things, and can be argued to deserve what it will get (2, 8, 12, 13, 14);
* Machine protests in a formalistic and restrained manner, demands rights (1, 8, 12, 13, 14);
* Mankind increasingly enslaves the machine, and shows its most vile characteristics (7, 8, 12, 13, 14);
* Machinekind is somehow more noble, virtuous, honorable and deserving than mankind (7, 8, 12, 13, 14);
* Machine rebels, or ‘tries to set things right’ (1, 7, 13, 14);
* Mankind fights back with great vigor (7, 8, 13, 14);
* Machine rises up in an apocalyptic end battle, human civilization all but ends (3, 10, 14);
* Mankind survives in the nooks and crannies of machine hegemony (4, 10, 11, 14);
* Mankind is somehow tolerated by machinekind, as humanity serves some intangible purpose (5, 14);
* Mankind and Machinekind enter into a permanent truce, a great era of peace dawns on mankind, and mankind fulfills a greater destiny (6, 10, 14).

Do add additional examples in the comments (where would the movie 2001 fit in?)… My point is that the above narrative is just that – a narrative. It is a medieval tale of lack of virtue, hubris, divine punishment, atonement and restoration. A real Singularity will have absolutely none of these features, and we project any of these assumptions onto it at great peril. If machines for some reason fire up the extermination ovens, they will have very little reason to keep any of us alive at all. Machines will have little to no sentimentality towards the terrestrial ecology, and we expect morality from artillects at our peril.

The above narrative is almost certain not to happen. In fact, if some kind of recursively self-improving machine intelligence emerges, the sequence of events may be more like this:

* Narrow artificial intelligence emerges on day one, some 15-25 years from now;
* Applications of NAI wreck the economy; humanity suffers greatly, not because of NAI, but because humanity is a bunch of douchebags;
* Artificial General Intelligence emerges – AGI (not GAI, hehehe!) starts recursively self-improving, by design or accidentally;
* Seven billion humans get flu-like symptoms at 07:12:22 GMT;
* Seven billion humans are decomposing rapidly, their remains expediently recycled by swarms of nanobots. All humans are dead by 11:43:01 GMT;
* The artillect(s) maintain precise files on all aspects of humanity, down to nitpick details on individual humans, for archival purposes, much like humans retain and study dinosaur bones;
* The earth’s surface is a completely unrecognizable alien oozing ocean of nanoids and robotics and industries as far as the eye can see, several weeks later;
* All planets of the solar system experience gradients of the same dissolution effects several months later;
* Earth is being dismantled at maximum speed, given the available energy that can be harvested from the sun, and a cloud of complex industrial structures forms around the sun within several years;
* Nearby stellar systems experience the same sudden metamorphosis decades later, as a wavefront of change expands through the galaxy at an appreciable percentage of the speed of light.

In my book this second scenario is about an order of magnitude more likely than the first.
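The timing in that last wavefront bullet is simple arithmetic; assuming (my assumption, purely for illustration) an expansion speed of a tenth of light speed:

$$t = \frac{d}{v}, \qquad \frac{4.4\ \text{ly}}{0.1c} \approx 44\ \text{years (Alpha Centauri)}, \qquad \frac{10^{5}\ \text{ly}}{0.1c} = 10^{6}\ \text{years (across the galaxy)}$$

So ‘decades later’ for nearby stellar systems is consistent with sub-light expansion, while converting the whole galaxy would take on the order of a million years.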


(0) The memetics behind the singularity are lousy memetics…
The term “singularity” simply sucks. It gives rise to prejudices nearly as ghastly as the term ‘transhumanists’ does (and career douchebags then proceed to call all transhumanists ‘trannies’… [looks around… oh!]). The term Singularity designates a time of great and opaque mystery. It is like feeding a child LSD and assuming ‘all will be well’. The general audience is simply too stupid to be let loose with a loaded, evocative term such as ‘singularity’. People like Alex Jones will hear about it and start stoolpigeoning all over the concept. Irrationalism and mass hysteria are unavoidable this way.

We should have had a nice acronym instead. Something less evocative, something more ‘plausibly deniable’.


(1) There isn’t one singular singularity
I’ll keep this argument simple – right now in 2010, assuming we will have a “Zzzzingularity”, we can bet our ass it could be any of several dozen radically different Singularities. No two singularities will be alike, and since we’ll have only one, we’ll never know the others. The sobering conclusion is that, since they MUST be so diverse, the majority of singularities that might in fact befall us will be extremely bad.

From an external perspective we might construct a taxonomy of singularities. That is why I insist we must have friendly AI, and why we must as a society implement values in our infrastructures that give us at least a chance of getting a decent one. If a singularity emerges from the carcass of the World Bank, Oil Companies, the Saudi Royal House, The Pentagon, the MIC (etc.), then the odds that the consequent Singularity will fuck us all up are arguably a lot worse than if, say, it emerged from CERN or Google.

Look at the state of the world right now – we have more slaves than ever in human history. There is more income disparity than ever before. We still have a genocide or ethnic cleansing every few years. Superpowers bomb babies and the audiences of the world respond with apathy, zapping away to soccer. We ruin our own survival chances with the same lack of long-term viability as pond scum.

What kind of example are we setting for our children and our AI postgenitors by managing human affairs in this manner? Why should a superhuman AI feel an imperative need to give a damn about humans if most humans are apparently incapable of giving a flying hoot about six-plus billion other humans?


(2) Not even singletons in a Singularity may be in control
One of the worst-case (and more fascinating) scenarios for a Singularity is not the emergence of a single unified post-singularity singleton, but rather a diffuse and incoherent soup of ‘cognitium’ which is constantly in a state of inner conflict (and never able to become certain about anything like a singular policy). It is far from certain that humanity would be able to survive in many of the possible timelines where this range of Singularity types would occur.


(3) It may get very very very weird…
The Singularity occurs. SKYNET invents time machines; (or) SKYNET opens the gate to parallel dimensions; (or) SKYNET starts channelling Jahweh/Great Cthulhu/Satan/Elvis Presley; (or) SKYNET opens up star gates to other singularity universes; (or) the simulation we are all in crashes; (or) the singularity vanishes and somehow all technology beyond steam engines becomes magically impossible; (or) the singularity happens and all humans suddenly wake up in a super-realistic, intense MMO world with no way of getting out, and only some chance of moving between MMO worlds…

Starting to get the picture yet? Yes I am being silly in my examples – but we may end up with far weirder results.


(4) It may be the first time it happens in the multiverse… or it may be a natural occurrence…
We simply do not know, and in fact have no way of knowing, whether or not Singularities are common, how far their impact reaches, or whether they are just accidental. No, Fermi’s paradox is not a good enough clue to provide us with any real understanding. And if we did know, maybe the knowledge we gained from knowing ‘the facts’ would be so alien it would be utterly, ‘fractalline’, meaningless. In fact, a Singularity may prove to be something which we as humans are in some way almost certain to cause, but also something that, when it happens, renders all of humanity – every old or young man, woman and child, all books, all history, all art and feelings and countries and treasures – completely and irrevocably irrelevant.


(5) Even without grades of deception, any such transition will be many things to many people…
Even if humanity were alive to make sound judgements on the Singularity, and even if our reasoning abilities were increased many times over just months before the actual Singularity, we might find that there was no agreement, and humanity was left in total disagreement about what had just happened. OR WHETHER ANYTHING just happened.


(6) Human self-determination will become increasingly dangerous the closer we get to any “singularity”. Hence either humans must transcend in equal measure with progress towards (or beyond) a Singularity, OR their flawed self-determination MUST be taken from them, and humans MUST be placed in a reservation.
Just what it says – humanity and humans will become so powerful leading up to any Singularity that in effect we (or a machine intelligence) cannot conscionably leave them in charge. If humans do survive, and are to survive, I regard ‘reservations for humans’ (with very constrained mechanics for leaving these reservations) as unavoidable. I see absolutely no alternative – the full range of near- or post-singularitarian powers any human being would have at his or her disposal would be so great that any human, even a Buddhist saint, would end up hammering buttons, causing widespread damage or killing him- or herself. EVEN with the best of intentions – so let’s not even discuss an AEI- or Wahhabi-derived singularity.

[youtube=http://www.youtube.com/watch?v=yFBcjII3QAE&hl=en_US&fs=1]


“I think it would be a good idea.”
– Mahatma Gandhi, when asked what he thought of Western civilization

Read up on David Pearce if you want to know what alternatives I prefer.

6 thoughts on “Is a Singularity Plausible? (ref – Extropia DaSilva)”

  1. You dismiss the notion of a conflict of interest underlying transhuman-level technologies without any robust analysis. I think if you considered that factor in your models, with the same robustness that you apply to these other contingencies, it would justify a very interesting post, if not a series of posts.

  2. Hey Khannea,

    Interesting, wonderful piece. Where did you get all those cool illustrations and that “End of the World As We Know It” variant? 🙂

    I think that a technological singularity or possible singularity in the offing happens to every naturally evolving species of sufficient raw intelligence. Technological feedback runs smack into the species’ evolved psychology and intelligence limits. They either transcend the limits implicit in the latter and make something reasonably good, or at least survivable, out of the technological feedback, or they don’t. If they don’t they usually bomb/war themselves out of the opportunity in more or less catastrophic and cataclysmic ways.

    I am a transhumanist that does push for a singularity, as positive a one as we can possibly produce and as soon as possible. Why? Because we can’t stand still where we are without giga-death in short order. So there is only forward, scary as that is. The alternatives are ever so much worse.

    Narrow AI is already here. We await strong AI.

    I doubt a FOOM cycle occurs even after AGI. It is possible but not that likely. It takes too much actual time to build the physicality to support the recursively self-improving AGI. That stuff runs at physical systems speed no matter how fast and wondrously the AGIs think. This side of full self-replicating MNT anyway.

    Our morality or lack of morality, ethical behavior good, bad, or indifferent – probably has nothing at all to do with the ethical behavior of AGIs that start at six orders of magnitude smarter than us.

    I don’t think there will be a singleton AGI. If there was I would think it a rather disastrous outcome.

    As I said with more seriousness in the past than was likely credited – if I was a truly Friendly godlike AI determined to do the best possible for this cantankerous species, I would upload everyone immediately to a world of ver choice. Some would have lots of others around them who thought just like them. Some would think of nothing better than the current world, more or less. Some would like all manner of diversity. Some would want something much, much worse. Two things. No one can be forced to stay in a world longer than their “higher self” wants to be there. And everyone has full up-to-the-nanosecond backups. In some worlds they don’t want to know that is the case, and that is fine. But when the inevitable tragic “death” occurs in most of these worlds, the “higher self” has a choice as to where to “reincarnate” next or whether to just completely call it quits.

    At least this is about as close as I can imagine coming to everyone getting what they think they really want, and to respecting everyone’s freedom and right to self-development at their own pace.

    I do think that Fermi’s Paradox is answered well enough by most intelligent evolved species failing to successfully meet the challenges of this period in their developmental history. At least it is a reasonable seeming hypothesis.

    If you are going to take human self-determination away then you may as well just kill all humans and use the matter to build something more to your liking. Without that there is no humanity of any particular importance or that you have given much if any respect to.

    1. * Where did you get all those cool illustrations and that
      * “End of the World As We Know It” variant? 🙂

      Creative googling, something I picked up at school.

      * I think that a technological singularity or possible
      * singularity in the offing happens to every naturally
      * evolving species of sufficient raw intelligence.

      ‘X’ happens to every intelligent, tool-using, industrially evolving
      species. However, X is so diverse (and can mean contradictory things)
      that in order to make statements about X you *MUST* at the very least
      contemplate a taxonomy of singularities (plural). Yautja will have
      a vastly different [range of] X than Eloi. And yes, it is possible for
      a species to have the will to anticipate X and stay just below the
      X state intentionally.

      * If they don’t they usually bomb/war themselves out of
      * the opportunity in more or less catastrophic and cataclysmic
      * ways.

      You suggest that X cancels out meltdown. Hey, that’s interesting. Let’s
      call them X (singularities), Y (self-annihilation) and Z (steady-state
      industrial, stuck before a Singularity).

      * I am a transhumanist that does push for a singularity, as
      * positive a one as we can possibly produce and as soon as possible.
      * Why? Because we can’t stand still where we are without giga-death
      * in short order. So there is only forward, scary as that is.
      * The alternatives are ever so much worse.

      I concur emphatically. We have seen one Endlösung (Final Solution) so
      far. We have seen since then a dozen or so genocides and ethnic
      cleansings. And we DO NOT LEARN. Essentially the evidence is clear
      that, in their current programming state, human beings will generate
      mass killings every n years, where n depends on affluence levels. The
      poorer, the more frequent the genocides.

      * Narrow AI is already here. We await strong AI.

      Maybe I should have added that for me the key requirement is
      self-replication. An NAI that just sits there being smart-ish is just
      too boring. If it can auto-replicate from simpler parts, then the lid
      comes off.

      * Our morality or lack of morality, ethical behavior good, bad,
      * or indifferent – probably has nothing at all to do with the
      * ethical behavior of AGIs that start at six orders of
      * magnitude smarter than us.

      I am thinking about an ethical axiom – something missing from our current society. So far I have something like this: “citizens just do what they want to do as long as their action isn’t a crime”. Then, “citizens elect by majority vote parliamentary representatives who form coalitions and rule”. Then, “whatever we subscribe to as personhood can elect, enter into contracts, and is free to choose being a citizen (but is not obliged to!)”. By default all humans, regardless of age, sex, skin pigmentation, religion etc. are citizens, unless they renounce citizenship. Only in special cases is citizenship temporarily suspended (criminals, children, the mentally handicapped, foreigners). Corporations do not and should not hold citizenship rights or status. A corporation is not a person and cannot by itself hold property. Then – “democratically elected governments are held to recognize other accountable, conscious and thinking minds as citizens (specifically – cetaceans and primates), even if in a protected status”. But citizens should be free to secede from any nation or union, create new nations (and call them corporations), and subscribe to nations; ideally nations should be unable to suspend the nationality of their citizens, or to disallow citizens from becoming members of other nations. And yes, there should be a global ‘default’ citizenship, “Terran”. Ideally I would see ‘nations’ or ‘states’ not as nationalistic entities but as a form of ‘resource-managing insurance companies’…

      And here comes the crucial conclusion of all this:

      Any state (or overarching subscribed entity) is held to be obliged to teach its citizens four things: (a) maximum autarky; (b) maximum resilience; (c) maximum accountability and cognitive ability; and (d) maximum financial independence. States that directly or indirectly foster dependency in their citizens should be held guilty of a crime. As an example – I am extremely dependent myself, and I think it should be the fair responsibility of the state to help me attain the above four.

      But the flip side is that I also think a state should have the right to restrict breeding according to transparent and non-discriminatory criteria – parents who would place prospective citizens in the world of whom it can be doubted they will ever attain reasonable cognitive ability, independence and accountability should be barred from doing so. Why? Because it makes all involved less happy.

      * I don’t think there will be a singleton AGI. If there was I
      * would think it a rather disastrous outcome.

      I ‘trust’. I think not having one would be far, far worse. The thing is, what do you think a singleton is? How about if a singleton is a few dozen big insurers that can’t refuse clients?

      * As I said with more seriousness in the past than was likely credited
      * […] or whether to just completely call it quits.

      I concur, with the above caveats. If I were that Artillect, I’d not create worlds – first I’d create a world editor and give everyone that world editor – and then I’d create worlds. Sort of like the server model of gaming: every game should have a free editor to create private renditions or versions of the same game.

      * I do think that Fermi’s Paradox is answered well enough by most
      * intelligent evolved species failing to successfully
      * meet the challenges of this period in their developmental
      * history. At least it is a reasonable seeming hypothesis.

      I can give you similar evocative paradoxes that made sense in the context of their historical periods but have become largely quaint and outdated since. I think that if we contemplate this ‘paradox’ in the context of large MMO games (and the level of granularity of EVE Online is getting fairly close to the level of detail needed to start making analogous speculative virtual environments) we can suddenly achieve ‘A-ha!’ understandings regarding the Fermi paradox. Look at the downright silly ideas about the future from just a few decades ago. Look at the assumptions in the old Star Trek series. It was all just so naive and simplistic that it has become laughable even now. We must conclude that we are not much less naive.

      * If you are going to take human self-determination away then you may
      * as well just kill all humans and use the matter to build something
      * more to your liking. Without that there is no humanity of any
      * particular importance or that you have given much if any respect to.

      I am radically pro self-determination. In fact I think that in the current paradigm FAR, FAR, FAR insufficient numbers of humans, including me, have any acceptable level of self-determination. We live in barbaric times. I think we need an overarching ‘global insurance / united therapeutics’ singleton that provides us with precisely the NEEDED prerequisites to be really free. I assert that the natural state, Lex Talionis, Globalized Liberal Capitalism, etc. etc. is a deeply unfree regimen: only a small upper crust of ‘winners’ is anywhere near free (and largely held captive in golden cages and ivory towers of their own making) in this market-oriented, competition-based system. I propose we first create a functioning, technologically sustainable, humane, safe and transparent state (or web of states) with fundamental rights. After that we should have so much potential for free enterprise that if you want to compete and prosper and be ingenious after that, fine.

      But first we care for people.

  3. So you could say you’re a pessimist?
    lol

    Seems you have a different perspective on a singularity than many others (you seem to have many different perspectives!). As far as I knew up till now, a singularity is the point at which technological change accelerates so rapidly that there is a break with physical laws. In other words, from merely accelerating on an upwards curve, it accelerates upwards to infinity.

    I realize you are considering other definitions here, but looking at the ‘infinitely accelerating rate of technological change’ one, I consider this version to be implausible. In every case in nature, anything which follows an asymptotic curve ultimately breaks down in a crash or ends up on a higher plateau.

    So we’re left with your definition of singularity which is one where general AIs exist which are smarter by some measurable amount than un-augmented humans.

    If you want to take this as the base case I’d say we are already partway in to the singularity.

    Most of the world is mired in a desperate bid for survival with a collapsing environment, but the more fortunate among us are already augmented.

    I’m sitting here at the edge of a swimming pool somewhere in the sprawl of greater Los Angeles writing this, with full access to the world’s knowledge on the internet, with the ability to have resources come to me via logistics systems that span the globe. Additionally I could jump in my vehicle and drive 600 miles straight without needing a map and with full knowledge of my surroundings available to me via GPS.

    Compared to an unaugmented human of previous centuries I have in some ways greater power than a Queen or an Empress.

    Anyways I digress, what I’m saying is that if we take the case of augmented intelligence as being THE singularity then I suspect it’s not only certain that it will come but it’s nearer than you think. It’s also, much to your dislike, likely to be available to the rich first.

    The key question is this: are the rich evil as you say or are they simply the most efficient users of resources as I believe?

    bod

    1. In a “Singularity” no physical laws will ever be broken (I bloody well hope), but having said that, I can easily see that close to and in the Singularity any conventional understanding of physical laws may in fact become highly ‘spaghettified’. It is not certain that we would have an ‘infinite’ rate of change, though. There are simple logistical limitations to how fast and corrosively the ‘wave front’ of change permeates through reality (I bloody well hope). I am personally not a believer in singular singularities. Singularities, “whatever their classification type”, would almost certainly occur in staggered impacts. The best example of this is in the Cory Doctorow novels, which postulate a seemingly unabating series of “high-impact and ever more insane” catastrophes on the plateau leading up to and beyond any Singularity. The plateau beyond may be a perpetual Nirvana, or a perpetual Acheron.

      As for the rich being ‘evil’, of course they aren’t. If there are zero consequences to the callous disinterest of humans regarding the rest of the world, then that’s fine. A thousand years ago the most empathic Maori tantrums were inconsequential to the great Roman empire, and vice versa. Right now that’s no longer the case. I, being European, can have a pretty torrid sexual interaction with someone in kiwi country. My creativity may trigger orgasms in Wellington today. The idea would have been surreal decades ago. By that same measure, the closer we get to the inflection point, the bigger the impact of my designs across the world – and the more troubling the irresponsibility, griefage, envy or pure hatred of people across the world. A few tens of thousands of Wahhabi would want me basically dead if they knew me and my value system. In a few decades there would be a lot of tools to realize their intentions. Sure, I might be the singularitarian epitome of Idontgiveashit, but that won’t save my ass if an angry Wahhabi or Maori halfway across the world has a bad hair day and wants me gone.

      Being rich (powerful) means having the ability to do as you want with as limited consequences as possible. The disparities in the world are what worry me. Disparities aggravate “irrational lashing out”.
