Here is another day in paradise, potentially weeks away from a global dollar collapse, months away from a crisis that will throw dozens of countries into civil wars and may cost millions of lives in the coming years. And why? Because of a mix of unsustainable systemic complexity squared against blind conviction and a limited ability to compromise in the current population of humans.
We need tools to do our thinking! Well, not for me, but largely for all those other people 🙂
Though, let’s start today on a positive note. This sequence of talks is labeled “follow the white rabbit”, honoring the explorations of one Alice down the rabbit hole, leading the revolt against the Red Queen. You can guess why I selected this theme for this sequence of talks, with my revolutionary predisposition and all. Last week I invited a few dozen big-name Transhumanists – Ray Kurzweil, Kevin Warwick, Ben Goertzel, Martine Rothblatt, Natasha Vita-More. It was rather short notice and I knew nobody would show up, and no one did (at least not visibly!), but I reckon the gesture at least made some difference.
Last week I hammered all present about a theoretical question – can semi-intelligent, instinct-driven, semi-emergent systems survive in a pre-singularity world independent of humans, or even parasitize on human civilization and take resources from human civilization to benefit their own goal sets? The conclusion appeared to be “hell yes!” on behalf of most people here, but no one seemed to draw much implication from that conclusion.
We’re all in ‘let’s wait and see’ mode, right?
I bombarded you, and the people that were there were mainly left nodding in shades of approval, disbelief or incomprehension. That’s not the way I prefer to run these talks, since I prefer to gestate the emergence of new memes, but sometimes you gotta throw some new meat in the pot to brew a new stew. So assume that’s what I did last week. Hence, feel free to have a cursory glance at the text log of last week’s chat, and take from that what you can harvest in uncanny ideas.
This sequence of talks will be weekly, on Sundays (like before) and it will host the Transhuman Separatist ethic, with a gentle “Zeitgeist” undercurrent (but also Pirate Bay, Wikileaks, Copyleft, Culture Jamming, Provo, ANON endorsement and that sort of stuff).
The topics will be – how do we leverage emerging technologies shy of the Singularity to prosper? We are not “the man”. Most of us are not “rich”, and neither are many of us part of the calcified and clueless government bloatipus. Most of us are not part of “the apathetic underclasses”, or if we are, we’ll be in denial about it. We all here tend to be restless souls, or we wouldn’t be in Second Life, right?
How can we break this massive stalemate we’re all glued down in? I think – by leveraging and exploiting biotech, gene splicing, internets, social engineering, robotics, microbotics, 3D printing, recycling, alternative energies, seaports, nanotechnology, tax havens, memetics, virtual reality, piracy, augmented reality and who knows what else – and smashing the current screwed-up world to pieces (without resorting to ‘crimes’ or ‘violence’) because, hey, we don’t really prosper much here right now, yes? I mean, what do we have to lose? Let’s hack ourselves out of this mess… If the pope is in agreement with it, of course.
Today’s topic is the idea of AIgents. The idea is simple – at some point an educated consumer buys a subscription to a system which manages part of her affairs for her. This could be a lawyer, a bookkeeper, an insurance agent, a psychiatrist, an investor, a lobbyist, a janitor, a butler, a secretary and many other things as well.
Today’s sequence of events in Second Life will be:
Serendipity – Fulfillment – 09:30 – 11:00 AM SLT (lasts generally about 90 minutes)
Bryce – Ideas of Things to Come – 11:00 AM – 12:00 PM SLT (lasts generally about an hour)
Khannea – FTWR – 12:00 – 13:30 SLT (lasts generally 90 minutes)
Sunday April 24
White Rabbit SLURL: http://slurl.com/secondlife/delinquent/160/192/503
Special guests this time: Teh Rachel Haywire and Jason Patrick Schoenecker!
[11:58:02] Arisia Vita: looks like only the hard core….errr.. devoted ones showed up…
[11:58:57] Toy (aeni.silvercloud): 😉
[12:00:06] Arisia Vita: my gdaughter is pretty good at finding them, a hint for every hug… 🙂
[12:00:19] Toy (aeni.silvercloud): aww 😀
[12:00:59] Arisia Vita: a clue for every kiss…
[12:01:21] Arisia Vita: be well and happy, all of you, till we meet again
[12:01:52] Arisia Vita: *hugs* all around
[12:04:02] Metafire Horsley: whinnys!
[12:04:10] Khannea Suntzu: haya 🙂
[12:04:12] Toy (aeni.silvercloud): hello Meta 🙂 Welcome 🙂
[12:04:19] TR Amat: Hi
[12:04:24] Toy (aeni.silvercloud): TR 😉
[12:04:37] Khannea Suntzu: I’ll give it time to flood in till 12:15
[12:04:58] Khannea Suntzu: But don’t hold it against me if I push this event back a week on account of Easter
[12:05:55] TR Amat: I quite like the idea of Easter robots…
[12:06:02] Ivy Sunkiller: sent notices? :p
[12:06:17] Khannea Suntzu: No group notices no
[12:06:27] Khannea Suntzu: Just group IMs
[12:06:40] Ivy Sunkiller nods
[12:09:23] Jimmy S (tanigo.inniatzo): Awesome gate Ivy
[12:09:51] Ivy Sunkiller: thanks
[12:09:56] Jimmy S (tanigo.inniatzo): ^^
[12:10:13] Ivy Sunkiller: you now advanced your reputation with me from -20 to -15 😀
[12:10:28] Jimmy S (tanigo.inniatzo): mmmmm
[12:12:07] Jimmy S (tanigo.inniatzo): well I like all people I meet 🙂
[12:12:25] Ivy Sunkiller: – give me all your money
[12:12:31] Jimmy S (tanigo.inniatzo): lol, why?
[12:12:31] Ivy Sunkiller: – oh hi, nice to meet you mr thief
[12:12:32] Khannea Suntzu: Lets put the light on here
[12:12:54] Ivy Sunkiller: heya Braeden
[12:12:59] Jimmy S (tanigo.inniatzo): who’s a thief?
[12:13:03] Braeden Maelstrom: hi
[12:13:12] Ivy Sunkiller: was just giving a rhetorical dialog example
[12:13:12] Ivy Sunkiller: 😛
[12:13:19] Jimmy S (tanigo.inniatzo): um, oh okay
[12:13:31] Jimmy S (tanigo.inniatzo): sorry, I’m still a bit sleepy
[12:13:46] Khannea Suntzu: Ok peeps
[12:14:09] Khannea Suntzu: This event for today is understandably a bit low on attendance due to Easter
[12:14:39] Khannea Suntzu: So I extend an invitation to you all to come next week, because I will hold this presentation now, then upgrade it and hold it again, upgraded, next week
[12:15:05] Khannea Suntzu: That made sense?
[12:15:09] TR Amat: Version 1.1 can be good. 🙂
[12:15:10] Braeden Maelstrom: mhmm
[12:15:26] Khannea Suntzu: Yes, version 1.0 sort of
[12:15:32] Khannea Suntzu: Heya wife
[12:15:49] Laserkitty Ling (laserhop.rothschild) tail swishes.
[12:16:39] Khannea Suntzu: This would work absolutely best if you all gave me loads of feedback that I can take along for the same event next week
[12:16:46] Khannea Suntzu: Welcome at “Follow The White Rabbit”. This meeting is dedicated to brainstorming in a tongue-in-cheek manner about the next few decades. The assumption of these meetings is that in the next decades, somewhere before 2050, technology will feed upon itself, self-accelerate and develop in ways that will be potentially very disruptive to society and the human state as we know it. The assumption of these talks is that things will radically change. Another assumption is that as a human society we have been living thoughtlessly. Our leaders have arguably abandoned us, out of ignorance, incompetence, corruption, superstition or blind ideology. We have also been abandoned to a self-interested and ruthless corpocratic world economy. One can argue that this has always been so, and that we have always lived in a cynical, self-interested world. But the potential to do irreversible harm, because of inaction, because of cavalier action, or because of actual ruthlessness, is greater than ever before in history.
[12:18:05] Khannea Suntzu: I wish to instill upon you all, and to have all here instill on as many people as you know, an awareness that we need to get our act together as a species. How we do that is open to debate, but I strongly insist we will need a full implementation of open debate, the scientific method, and the ability of as many people as possible to change their minds in order to achieve results. We cannot continue with ‘business as usual’, and it is imperative that people start realizing this and acting on it.
[12:18:28] Khannea Suntzu: Last week I made a point that we have an increasing capacity to create blindly non-intelligent systems in our society, and automate them to pursue goals contrary to the goals of humans. I made a theoretical argument that botnets could do so from the confines of secure data environments and seek self-interests separate from creators, original programming, owners or even human economies. I made a point that senseless legal creations could conspire to bring chaos to the financial world, through intent or because of compound, intransparent complexity (as has become the case in derivatives trading). I made the point that massive software families will at some point develop goals, especially if these systems span the globe and have complex interlocking structures with complex interlocking goal sets, and these goals may evolve to be increasingly abstract, alien, and contrary to the interests of humanity. Finally I made the point that we may very soon see abandoned automated military infrastructure go rogue, following rudimentary goal sets, or simply because it was flawed in its boot programming.
[12:18:55] Khannea Suntzu: http://khanneasuntzu.com/2011/04/16/white-rabbit-crawling-crustaceans-up-the-coast-batman/
[12:19:28] Khannea Suntzu: That all makes sense so far?
[12:19:37] Ivy Sunkiller: quite
[12:19:38] Braeden Maelstrom: yes
[12:19:49] TR Amat: Yes…
[12:19:50] Ivy Sunkiller: hoy hoy newcomers
[12:19:55] Nindae Nox (sethsryt.seetan): Hi hi
[12:20:04] Khannea Suntzu: Hoi hoi
[12:20:09] Khannea Suntzu: Lets continue
[12:20:11] Khannea Suntzu: The above agents are examples of intentionally autonomous, non-intelligent minds, evolved from the complex world we are moving into. These examples were silly and allegorical, and acted as ‘persona’ examples of these emerging ‘instinctive self-interested systems’. The point is that these systems are already in existence – the global military-industrial complex is a system that nobody really wants. I am sure we’d all rather spend all that money on fun stuff. Another example is religion – especially the religion you don’t happen to have. For a Protestant, all money spent on Catholic activity is utter waste. For a Catholic, all money spent on Islamic pursuits would raise bile in their respective throats.
[12:21:03] Khannea Suntzu: You don’t need massive botnets to let resources go to waste
[12:21:15] Khannea Suntzu: Bureaucrats will do that job just nicely
[12:21:18] Khannea Suntzu: Today I’ll make a point about AI agents. The point is simply one of inventorying what you all think of this. My goal is to have you all think laterally, and to do this, have a brief look at these links later on and save them for reference. >> http://www.ted.com/talks/kathryn_schulz_on_being_wrong.html and >> http://khanneasuntzu.com/2011/03/21/what-mayans-can-teach-us-about-wind-turbines/ …. thanks to Kim and Amanda for linking me these two. These are two eye-openers.
[12:22:11] Khannea Suntzu: Artificial Intelligent agents – let’s first start with a rough outline. The idea is someone creates a software application (or net) that is versatile enough to actually do things in the real world on your behalf, makes appreciably few mistakes and makes your life function better. Of course these devices already exist, but their function is still very limited. Google is moving fast towards making the first bits of real representational software, but actual software agents will, I think, be far more sophisticated, and I think that by 2015 we’ll have actual devices that can communicate intelligibly enough as to have enough ‘face’ to be regarded as ahhh… ‘consolidated’ entities.
[12:22:59] Khannea Suntzu: Questions?
[12:24:03] Khannea Suntzu: Examples of AIgents would be — automated lawyers — investment agents — bookkeepers — insurance agents — an intermediary between your medical insurer and medical service providers that acts on YOUR behalf — a janitor — a butler — a secretary — a psychiatrist — a political lobbyist — an educator — software that raises kids — what else?
[12:24:22] Nindae Nox (sethsryt.seetan): Wonder, would that not someday limit our freedom as AI makes choices for us we might not like ourselves?
[12:24:24] Khannea Suntzu: Look at the potential for education software suites
[12:25:14] Khannea Suntzu: Nindae, you are already constantly shoehorned, every second, by the infrastructure, laws and systems where you live, in much the same manner…
[12:25:23] Metafire Horsley: What about politicians and managers? Let’s replace them with software ^^^
[12:25:29] Khannea Suntzu: Oh yes 🙂
[12:25:40] Khannea Suntzu: But they will be in a position to put up a fight
[12:26:22] Khannea Suntzu: Game designers you can kick out anytime, nobody will miss *them* when replaced by a machine.
[12:26:29] Khannea Suntzu: But politicians can get real evil
[12:26:44] Khannea Suntzu: Marketing these packages will be along the lines of current software models. Some big company makes them, or some Linux-like group co-develops them. Are there other ways of creating these entities reliably, safely, resistant to exploitation? Would you trust these devices if they were produced by Vatican Software? Or Scientology Software? But then why do we trust Facebook, only because Mark Zuckerberg looks like such a wholesome young kid?
[12:26:54] Ivy Sunkiller: why game designers? :p
[12:26:58] Violet (ataraxia.azemus): I already miss game designers :p
[12:27:07] Khannea Suntzu: *snortles* at Ivy
[12:27:08] TR Amat: One of my concerns is that a lot of the time, a lot of people are not making choices for themselves. They are running on automatic behaviour, mental reflexes that they learned earlier. You could say that unless rational thinking looks needed to people, they are already running AI agents inside their heads?
[12:27:09] Metafire Horsley: As long as politicians don’t replace their voters with software agents, everything is fine ^^
[12:27:30] Khannea Suntzu: Huhuh, don’t joke about that Meta, that might actually happen
[12:27:54] Khannea Suntzu: Today I need you to vent. Go ahead and vent. Let it all rip loose. Don’t criticize others so much as free-associate. Listen to these ideas and freely associate what comes up in your mind, and think about permutations, the unexpected, danger scenarios, ironic eventualities, plots for Hollywood blockbuster movies (and elevator pitches) based on all these things. Don’t regard these AIgents as C3POs and R2D2s, as minds encased in a box. Assume these processes to be extremely cloudsourced, and highly emergent.
[12:28:29] Khannea Suntzu: An AIgent doesn’t need to be a personal thing in a box on your desk
[12:28:51] Khannea Suntzu: It may be something emerging from a hundred databases and cooperating programs all over the world
[12:29:17] Khannea Suntzu: Question – can we trust those that produce AIgents for commercial use to produce entities that consistently follow our interests, or will commercial, “big-business” AIgents have a potential “traitorware” quality? Please speculate wildly on what might happen, going far beyond the blatantly open-door example of Facebook. Please, Facebook has been mentioned, other and better examples please. Yes, the word datamining has also been used.
[12:29:41] Khannea Suntzu: Let’s hear it 🙂
[12:29:45] TR Amat: I know that a lot of my behaviour is automatic. How do I learn ways of monitoring my automatic behaviour to ensure that I don’t do things I’m not happy with? Would you use an external AI agent to help monitor your own behavior, to try and avoid you doing something you’re not happy with?
[12:30:20] Khannea Suntzu: Hmm, most people would for two reasons – because the AI does something they can’t
[12:30:26] Khannea Suntzu: Or because they are lazy
[12:30:44] TR Amat: Does that imply we need watchdog AI agents to warn us about dangerous other ones around us?
[12:30:48] Khannea Suntzu: And then there’s the fact that they have money and they want more money
[12:30:51] Violet (ataraxia.azemus): That’s an interesting thought, TR….but we can already help each other notice when our behavior is automatic
[12:30:56] Khannea Suntzu: Yes TR we do
[12:32:15] TR Amat: Rational thinking is “expensive” – remember when you first learned a complex set of activities like car driving, and how automatic and simple that is now. So, how do we avoid the consequences of inappropriate automatic stuff, internal or external?
[12:32:27] Nindae Nox (sethsryt.seetan): I might use AI for things I do not have time for so I can live a more relaxing life which in turn will improve the quality of my life. Then again, I would need to monitor the AI to make sure it is doing what I want and need right?
[12:32:48] Violet (ataraxia.azemus): Nice entrance 🙂
[12:32:57] Ivy Sunkiller: maybe the AI will use you for things it doesn’t have time for? 🙂
[12:33:18] Ivy Sunkiller: though that seems unlikely since it will greatly outperform you eventually 🙂
[12:33:42] Braeden Maelstrom: AI would be good at tasks we find difficult and time consuming. so it would free up a lot of our time, while not really impacting the AI in the same way.
[12:33:43] Khannea Suntzu: Uhuh, that’s roughly the idea. Like banking software. You don’t want to lose any money, you don’t want your privacy exposed… or you do not want someone to victimize you in some way.
[12:34:21] Khannea Suntzu: Question – AIgents will be big and clumsy when they are produced by some companies. Should we shun a Microsoft if it moves into this field? Do we want to be served by lumbering 500-pound AI gorillas?
[12:34:37] TR Amat: Human minds are good at massive parallelism. Existing computer hardware is really fast serial. Maybe humans would be useful for their creativity?
[12:34:45] Jergon Zaurak: Many times the automatic behaviour is necessary for living in a stupid society – if stupidity has enough power. Unfortunately… :-(
[12:34:53] Braeden Maelstrom: the more automation we integrate into our society, the more free time we will have to do what we want. although, in a scarcity based economy this means fewer jobs for people
[12:35:26] Khannea Suntzu: TR, the human era is ending. We have a few decades, but by 2050 we’ll be like kids next to giants. These questions are about that brief era when we still have a life. So see this debate in that light 🙂
[12:36:12] TR Amat: So, we aim to become part of the AI? The useful bits of ourselves migrating into our exocortex?
[12:36:43] Nindae Nox (sethsryt.seetan): My point is that these AI might constrain us, one day control us and preserve us for eternity as zoo animals or pets. Because they are programmed to keep us alive and healthy, yet might not understand our emotional needs.
[12:36:43] TR Amat: http://en.wikipedia.org/wiki/Exocortex
[12:36:45] Braeden Maelstrom: the AI might become sympathetic and maybe even obsessed with preserving and nurturing biological life
[12:36:46] Khannea Suntzu: Automation tools are power amplifiers. The question is whose power, and how does she use it, and who suffers any negative consequences.
[12:37:15] Khannea Suntzu: Lets hope for the best Braeden 🙂
[12:37:48] Metafire Horsley: Humans and machines must become one, in order not to become zero.
[12:37:55] TR Amat: The smart thing to do is to work with the AI agents, make them part of our lives, put ourselves into them?
[12:38:02] Braeden Maelstrom: what would be the real benefit of an ai destroying its creator species?
[12:38:18] Jergon Zaurak: Are you sure the “human” is every time the best possibility? 🙂
[12:38:45] Jergon Zaurak: All the stupidity all around the world is “human” thing…
[12:38:47] Khannea Suntzu: That’s a very simple statement with considerable blanks to be filled in, Meta. I tend to agree that ‘humans in the current shape won’t cut it’, but how the hell will we upgrade every single Kalahari bushman?
[12:38:47] Metafire Horsley: What would be the real benefit of an ai keeping its creator species alive, once it is so far superior to us than we are to a simple bacterium?
[12:39:08] TR Amat: Humans had better have some value to the AIs, bye-and-bye…
[12:39:29] Violet (ataraxia.azemus): Bacteria are useful to us 🙂
[12:39:36] Khannea Suntzu: There is reason for caution yes
[12:39:42] Ivy Sunkiller: we can still compete with AI if we merge with AI
[12:39:49] Metafire Horsley: Don’t Kalahari Bushmen have smartphones by now?
[12:40:12] Ivy Sunkiller: either way, our biological factory defaults are going to get obsolete
[12:40:20] Metafire Horsley: Why compete? Synergize! 🙂
[12:40:38] Ivy Sunkiller: compete on performance level 🙂
[12:40:41] Khannea Suntzu: “We” do nothing but die, Ivy. Version 1.0 humans are mortal and frail. Shove ’em hard and the head falls off and they leak circulation fluid all over.
[12:40:56] Ivy Sunkiller: quite
[12:41:16] Khannea Suntzu: Our very nature of deathness and replaceability gives us conceptually very limited rights
[12:41:28] Braeden Maelstrom: despite our current glum civilizational situation, we still have a lot to offer in terms of creativity and evolutionary advancement. surely the ai would take into account our projected future progress and might see corresponding parallels where mutual benefit could be achieved or even designed by the ai itself.
[12:41:29] Ivy Sunkiller: but me, not as human but as my mind, can get better hardware 🙂
[12:41:43] Khannea Suntzu: An AI may decide to sterilize all parents. We don’t get children; they wait, and a few decades later they take over
[12:42:02] Khannea Suntzu: They might even be nice and change diapers for the last remaining specimen
[12:42:11] Ivy Sunkiller: Braeden: that’s assuming the AI won’t have same or greater creativity, which I think is silly 🙂
[12:42:16] Khannea Suntzu: And say “what are you bitching about, we didn’t kill anyone”
[12:42:30] Ivy Sunkiller: there is nothing magical that would make us special other than that we are carbon based rather than silicon 🙂
[12:42:36] Violet (ataraxia.azemus): On the other hand, death allows for biodiversity and mutation.
[12:42:55] Violet (ataraxia.azemus): Still a bummer for us, but we have some hard limits to figure out before we can all be deathless.
[12:43:02] Braeden Maelstrom: well, ai’s wont use silicon anyway
[12:43:08] Khannea Suntzu: Question – AIgents will make more sense if they have ‘face’, or work through a consistent avatar. Assuming we will see the emergence of virtual realities using ubiquitous 3D representation of stuff, will there be functionality to have AIgents operate in these environments? If there is a social dimension, absolutely. But what if there isn’t? Will it be bad if we have a whole ecology of interactive infomorphs deliberately interacting outside of a representational format? Is there a use for AIgents to have robust stealth traits?
[12:43:09] Braeden Maelstrom: they’ll be carbon based too
[12:43:43] Violet (ataraxia.azemus): It’ll help if they’re cute.
[12:43:43] Ivy Sunkiller: that is a possibility, what I mean to say is that the fact we evolved “naturally” doesn’t grant us any superpowers that an AI can’t copy
[12:43:56] Ivy Sunkiller: and outperform us at
[12:44:23] Ivy Sunkiller: it will help if we are cute, then we can be kept as puppies! 🙂
[12:44:26] Khannea Suntzu: One could state that we evolved rather mediocre powers. Look at the first Harley that drives by.
[12:44:45] Violet (ataraxia.azemus): hehe
[12:44:47] Khannea Suntzu: But its not about robots per se
[12:45:04] Nindae Nox (sethsryt.seetan): Yet this is based on the idea that we are a chemical process in the brain. What if we as minds use the human body as a host? Will we become the AI? Or is this a way too paranormal, new-agey look not fit for this discussion? ^^;;
[12:45:46] Braeden Maelstrom: well our personalities are shaped by our environment. being born into a biological body and living as a human will give you a far different personality from a consciousness that was created and designed in a laboratory
[12:46:40] TR Amat: Socialising AIs may become very important, past a certain point…
[12:47:09] Khannea Suntzu: The question isn’t even about intelligences… the question is the evolution of TOOLS that outsource the thinking of humans to machines, step by step. Take for instance raising kids. And school. Visualize an MMO that operates through augmented reality. Visualize kids, and Saturday morning cartoons. Now take away schools and visualize Mister Bunny explaining maths and Cantonese and biology and history with his zany cartoon friends and 3D animations in midair
[12:47:34] Khannea Suntzu: If you do this right you might end up with pretty smart kids and a shitload of cranky unemployed teachers
[12:47:47] Jayne (jayne.ariantho) is Online
[12:47:57] TR Amat: It may also become rather important who controls the secure data and processing environments, or, at least the keys to control the stuff that’s in them…
[12:48:03] Khannea Suntzu: Correction – even more cranky
[12:48:22] Mick Nerido: What is the need for humans in this world?
[12:48:57] Ivy Sunkiller: that is an invalid question
[12:49:00] Braeden Maelstrom: if it’s done right it could be like what carl sagan did with Cosmos for popularizing science
[12:49:32] Ivy Sunkiller: What is the need for intelligence in this world? Does there need to be any need?
[12:49:32] Braeden Maelstrom: and basically integrate reward response with learning
[12:49:53] Khannea Suntzu: Levelling and school, right 🙂
[12:50:01] TR Amat: We need appropriate intelligence, with feedback systems, and meta logics…
[12:50:35] Metafire Horsley: I guess once education programs become really good, and teachers get suspended, some of them will develop superior teaching methods and make their own private teaching/mentoring companies.
[12:50:55] Khannea Suntzu: I bet that wont be all teachers, Meta
[12:51:01] Khannea Suntzu: Just a handful
[12:51:01] Metafire Horsley: Right
[12:51:23] Metafire Horsley: But those would start new trends in teaching methods. And then the circle begins again.
[12:51:26] Braeden Maelstrom: well if we’re talking 50 years, graphene transistors, quantum processing, etc, we will have the AI we consider impossible today
[12:51:42] Khannea Suntzu: Braeden
[12:51:50] Khannea Suntzu: I am talking 20 years for this to mature
[12:52:03] Khannea Suntzu: I am talking 2035 for most people to be unemployed
[12:52:26] Braeden Maelstrom: 20 years even. IBM is already printing graphene transistors that operate much faster than anything previously, and it runs at room temp
[12:52:32] Braeden Maelstrom: made of just carbon
[12:52:33] Khannea Suntzu: By my expectations, and do feel free to disagree, in terms of 50 years I am worried that most people will be extinct
[12:52:39] Metafire Horsley: Being unemployed is cool, if you still get enough money by doing fun stuff :9
[12:52:44] TR Amat: We need to start putting the feedback and monitoring systems in place, now. Fortunately I’ve seen a few positive signs that this is starting.
[12:52:50] Khannea Suntzu: Question – if your AIgent commits a crime, who is responsible? What are the consequences IF YOUR COUNTRY regards you as responsible? What are the consequences IF YOUR COUNTRY regards the creating company as responsible? IF YOUR COUNTRY regards the AIgent as responsible (in some cases)?
[12:53:23] Braeden Maelstrom: i think declaring us to be extinct in 50 years is a bit pessimistic..
[12:53:24] TR Amat: Things should be interesting by 2050.
[12:53:50] Braeden Maelstrom: considering we’ve been here for the last few hundred thousand
[12:54:09] Khannea Suntzu: Oh I certainly hope so. But if things go on as they are, EVERYONE you know, including all of you, will be literally extinct in 50 years
[12:54:10] TR Amat: Human society is slow to adapt.
[12:54:27] Khannea Suntzu: We’ll be replaced by a generation of annoying kids
[12:54:34] Metafire Horsley: What if your country decides that nobody will be made responsible for crimes done by autonomous AIgents?
[12:54:36] Khannea Suntzu whispers: Many who have been fans of Justin Bieber
[12:54:41] Nindae Nox (sethsryt.seetan): Depends on who intended the crime. Which might mean that the AIgent is considered an individual of its own. Or maybe we are being charged with a fine for not keeping our AI updated, working well and keeping to the moral rules.
[12:54:43] TR Amat: We may be the annoying kids. 🙂
[12:54:50] Ivy Sunkiller: Braeden: I guess dinosaurs thought the same way, if they did think at all 🙂
[12:55:09] Khannea Suntzu: I hope TR, real spoiled, big tits
[12:55:15] Ivy Sunkiller: though I wouldn’t go as far as to say we will go extinct in 50 years (probably won’t take much longer than that though)
[12:55:17] Braeden Maelstrom: the point is, we’ve adapted to many things
[12:55:23] Braeden Maelstrom: in our history
[12:55:33] Metafire Horsley: Who is made responsible for natural disasters?
[12:55:33] Ivy Sunkiller: we will definitely get obsolete though
[12:55:39] Braeden Maelstrom: we’re not just gonna stop and fall over to die
[12:55:55] Ivy Sunkiller: we are
[12:55:59] Khannea Suntzu: Ivy, let’s just say there is some reason for concern – hurry up with that fraggin’ life extension, OK?
[12:56:03] Ivy Sunkiller: it’s called – death from age
[12:56:24] Braeden Maelstrom: so is this some kind of death cult meeting or a serious discussion?
[12:56:34] Khannea Suntzu: I vote for NOT being replaced by brats. I am sitting here just fine, not getting up.
[12:56:39] TR Amat: There are already issues of transfer of skills, never mind socialisation…
[12:56:50] Metafire Horsley: We are a serious anti-death cult here. Please 🙂
[12:57:03] Khannea Suntzu: I am seriously life cult I’d say!
[12:57:03] Ivy Sunkiller: unless we develop anti-aging therapy in next few decades, every single human being on this planet *will die*
[12:57:06] Nindae Nox (sethsryt.seetan): Interesting that death from age occurs less and less, also because we find out the causes of death more often and maybe in the future can prevent them.
[12:57:17] Ivy Sunkiller: we persist due to offspring
[12:57:29] Khannea Suntzu: Question – unavoidable – but what if people create AIgents that arguably feel, and are created to be tormented? What if someone creates an AIgent whose sole purpose is to be rezzed, subjected to sexual predation and then derezzed, and the behavior is intelligent and in no way distinguishable from an actual person (say, a young 12-year-old girl)? When do we intervene, and what do we do when some countries insist this is no abuse? Do we invade a country where they have widespread AI avatar torture going on? Would we prosecute tourists that go there to engage in recreational AI avatar torture?
[12:57:31] TR Amat: I’m very much in favour of keeping the world working. And, working better in future. I see some hopeful signs.
[12:57:31] Ivy Sunkiller: but if we become obsolete, why have human offspring?
[12:57:54] Violet (ataraxia.azemus): Instinct, mutation, novelty….habit :p
[12:58:04] Khannea Suntzu: Ivy, I have rather seriously tried getting pregnant. So far no luck.
[12:58:44] Braeden Maelstrom: well the whole legal status of a sentient entity will have to be set up the moment AI even hits an open market
[12:58:51] Ivy Sunkiller: don’t need to bother, I’ll fork() my offspring
[12:58:55] Ivy Sunkiller chuckles
[12:59:02] Braeden Maelstrom: and that includes an AI bill of rights
[12:59:19] Metafire Horsley: On what basis would AI torture be outlawed?
[12:59:41] Braeden Maelstrom: cruel and unusual punishment
[12:59:59] Ivy Sunkiller: Braeden: they will hit market? I somehow doubt so, that’s like saying we are going back to race-based slavery and slave trading.
[13:00:08] Nindae Nox (sethsryt.seetan): Perhaps AI will go through a similar history as African Americans and animals today.
[13:00:12] Ivy Sunkiller: the first primitive versions maybe
[13:00:27] Metafire Horsley: Like the laws that may disallow animal torture?
[13:00:28] Ivy Sunkiller: but once they hit human level intelligence I doubt they will be happy about being sold 🙂
[13:00:33] Khannea Suntzu: Braeden – Korea disagrees. They regard pitching cute AI manga kawaii girls into woodchippers as a national pastime. So what do we do, send angry letters? For frags’ sake, in China they eat PUPPIES they boil alive in water.
[13:00:35] Braeden Maelstrom: and if we assume an ai could never be tortured into divulging information, the act should be banned
[13:00:56] Nindae Nox (sethsryt.seetan): In the Netherlands, it is outlawed to torture animals. It might even get you 5 years of jail time.
[13:01:25] Braeden Maelstrom: of course ai’s will hit the market
[13:01:37] Violet (ataraxia.azemus): I have to be going. Be well, everyone 🙂
[13:01:45] Braeden Maelstrom: you think they are just going to be created, patted on the butt, and sent on their way into the world to be just normal citizens?
[13:01:48] Nindae Nox (sethsryt.seetan): Bye, be well.
[13:01:51] Jimmy S (tanigo.inniatzo): I hope you take care and have a great one Violet!
[13:02:02] Metafire Horsley: Ok, what about making viral spambots that propagate AI rights? 😀
[13:02:11] Laborious Aftermath: Also realize that most people don’t use 10% of their brain, and some operate on a bit less than that. Normally when we talk about A.I. we think of getting it smarter than any human right off, or in a short time. Not many people realize that one can form out of a weaker process and evolve without using 100% of its processing/thought power all at once. Also, when some realize the master/slave, or dom/sub, angle, the really interesting questions start to come up; social engineering, propaganda, human lifetime growth factors and so on contribute too.
[13:02:11] Khannea Suntzu: Hah
[13:02:39] Laborious Aftermath: Just like some of the other developments that happened in human time, as the great diversity of that field grows into many sub fields. The few good starting models from humans get developed and released in one way or another. Even if it sees the paradigm of it being imprisoned, captive, or slave-like, and wants or deduces it wants out, or up the ladder so to say. Who will know till they see it happen for the first time?
[13:02:46] Ivy Sunkiller: Braeden: I *don’t know* what will happen when a computer says “I’m conscious” for the first time, but I doubt the answer will be “ok, we will sell you for gazillions!”
[13:03:09] Khannea Suntzu: Question – the use of AIgents might be very beneficial. It might also turn out to increase disparity. How would governments deal with the given that use of AIgents might benefit some of their civilians while others would be left at the bad end of this divide? Is this premature speculation?
[13:03:21] Braeden Maelstrom: the company or research firm creating the ai in the first place will have a huge monetary sum attached to it
[13:03:32] Braeden Maelstrom: and if they wanted to lend it to someone, they would do so for profit
[13:03:46] Ivy Sunkiller: not necessarily
[13:04:07] Ivy Sunkiller: if they do hit the markets, then it will just take a few clever hackers to set them free
[13:04:12] Khannea Suntzu: Open source AI
[13:04:16] Laborious Aftermath: yep
[13:04:21] Ivy Sunkiller: like cracking PlayStation for piracy use 🙂
[13:04:38] Braeden Maelstrom: open source ai is redundant
[13:04:38] Ivy Sunkiller: and then the pandora’s box is open
[13:04:48] Braeden Maelstrom: the internet would make it a given
[13:04:55] Khannea Suntzu: These jokes are made in the first pages of Accelerando 🙂
[13:05:11] Khannea Suntzu: Question – what if an illegal entity starts creating commercial AIgents for illicit applications? Say, a peer to peer representational agent for criminal transactions. Some argue torrents already inhabit this niche, but they are damn effective tools to get what you want.
[13:05:48] Braeden Maelstrom: or what if an ai wanted to rob a bank
[13:06:09] Metafire Horsley: What if air is transparent?
[13:06:33] Braeden Maelstrom: being condescending isn’t very constructive
[13:07:02] Jimmy S (tanigo.inniatzo): Why would an AI need to rob a bank, they could just create more money since most of it is digital anyway
[13:07:17] Metafire Horsley: Huh, is this supposed to be a constructive meeting? What’s our goal here?
[13:07:31] TR Amat: I was talking to someone in SL end of last year who thinks they have a good enough AI design that they can rent them to people for near human equivalent work…
[13:07:33] Khannea Suntzu: If an AI decides to rob a bank it will be increasingly better at it the further we go – in 2020 they will suck at it; by 2050, if AIs are into the Dillinger thing, there won’t be many banks left.
[13:07:49] Khannea Suntzu: Question – what if we find that by 2025 or so AIgents are common, and robots are generally nonsentient – and AIgents are the emerging non-general AI sentiences? Just speculation – physical incarnation might not turn out to be the best way to go for minds – what if cloud intelligence turns out to be far more versatile and robust? What if robots are just dumb hands AIgents inhabit when needed? That might pose some interesting implications for the eventual evolution of human minds, especially if we start talking about uploading. Can we speculate about the human psychological entity and visualise it as (or imagine it might one day become) a ‘dispersed cloud’ of thinking with no clear central node of self?
[13:07:59] Laborious Aftermath: goal for me is longer life and a possibly better future 🙂
[13:08:22] Ivy Sunkiller: longer life is so limited, I aim at immortality
[13:08:23] Ivy Sunkiller: 🙂
[13:08:24] Braeden Maelstrom: my point was, if an ai found a need, or was somehow hired, built to, accumulate a fortune using illicit methods, it would have to ‘fight’ another defensive ai tasked with defending the digital bank structure
[13:08:33] Jimmy S (tanigo.inniatzo): Me too Ivy
[13:08:49] Khannea Suntzu: I dont hear any death cults here!
[13:09:05] Laborious Aftermath: each has their steps to go through though
[13:09:42] Khannea Suntzu: Well last one then
[13:09:44] TR Amat: I’m interested in seeing what the next couple of hundred years brings. 🙂
[13:09:50] Khannea Suntzu: Question – what can we do today to prepare for the emergence of AIgents? Should we? Should we wait and see? Is this all such a big deal?
[13:09:54] Metafire Horsley: Ok, that’s seriously interesting now. AIgent conflicts, robot hulls, “cloud identity”
[13:10:20] TR Amat: We need to start putting the monitoring and feedback systems in place, now.
[13:10:51] TR Amat: The human equivalents are things like “Amnesty International”.
[13:10:55] Jimmy S (tanigo.inniatzo): Well it depends on how we treat the AI, I would treat it like any other life form, I would be kind to it. But perhaps many humans would be scared of it.
[13:11:00] Khannea Suntzu: Well send a letter to parliament. I am sure they’ll welcome you as hospitably as the UFOlogists.
[13:11:10] Laborious Aftermath: well we should try for the better sets and also knowing that there will be some one out there who will try to use the same type of tech for bad at some point also. So mixed on that to a degree
[13:11:30] TR Amat: When you are building systems, think how they will be monitored and controlled.
[13:11:58] Metafire Horsley: Ah, like kids are controlled by parents 😀
[13:12:14] TR Amat: We need standards for AI agents. Including ethical and feedback standards?
[13:12:14] Khannea Suntzu: Yah, tricky 🙂 Humans are evolved for knowns.
[13:12:33] Laborious Aftermath: ghost partition and deep packet sniffing and such to help monitor them.
[13:12:46] Ivy Sunkiller: the point of singularity is that we don’t know what will happen, so there is really no point for preparing for anything
[13:12:53] Braeden Maelstrom: well if we used the advancement of technology to fix our most fundamental civilizational problems, we may end up becoming wise enough to have an actually mutually constructive future with our artificial counterparts.
[13:12:58] Ivy Sunkiller: well, you can prepare to welcome our new robotic overlords!
[13:13:01] TR Amat: We need AIs that monitor the world of AIs.
[13:13:01] Jimmy S (tanigo.inniatzo): well we’d just work out the variables and prepare for those, wouldn’t we?
[13:13:21] Metafire Horsley: I don’t think that line of thinking has merit, Ivy. Our ancestors could have said the same…
[13:13:27] TR Amat: We will likely become part of our new robotic overlords…
[13:13:43] Khannea Suntzu: Ivy, this is all stuff before any singularity. This is just fast, very well programmed, very massive hardware automated systems. I am not arguing something ACTUALLY intelligent
[13:13:47] Khannea Suntzu: Instincts. maybe
[13:14:32] TR Amat: Instincts are learned behaviour encoded in genetics?
[13:14:46] Braeden Maelstrom: well, if any of you are interested in science fiction relating to AI, you should definitely give Iain M. Banks’ culture novels a read. http://en.wikipedia.org/wiki/The_Culture
[13:15:06] Khannea Suntzu: It’s a word, TR. I am just saying – decision-making processes inferior to human thinking.
[13:15:09] TR Amat: The Culture is good – do not mess with a Mind. 🙂
[13:15:26] Braeden Maelstrom: minds are impressive
[13:15:33] Khannea Suntzu: Where is Peer when you need eir 🙂
[13:15:41] Jimmy S (tanigo.inniatzo): thanks you I will Braeden
[13:16:15] TR Amat: If you can build things with instincts, you should be able to build things that monitor them, provide feedback, and “fail safe”.
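TR's instinct-plus-monitor point can be sketched as a toy supervision loop. Everything here (the Agent/Monitor split, the resource limit, the shutdown rule) is an illustrative assumption, not a design anyone in the talk specified:

```python
# A crude "instinct-driven" agent supervised by a monitor that
# provides feedback and fails safe when a bound is exceeded.
class Agent:
    def __init__(self):
        self.resources = 0
        self.alive = True
    def step(self):
        self.resources += 2          # crude instinct: grab resources

class Monitor:
    def __init__(self, agent, limit):
        self.agent, self.limit = agent, limit
    def check(self):
        if self.agent.resources > self.limit:
            self.agent.alive = False  # fail safe: shut the agent down

agent = Agent()
watchdog = Monitor(agent, limit=10)
steps = 0
while agent.alive:
    agent.step()
    watchdog.check()
    steps += 1
```

The design choice TR is gesturing at is that the monitor lives outside the agent's own decision loop, so the agent cannot "decide" to skip its own fail-safe.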
[13:16:27] Khannea Suntzu: Uhuh like virus scanners 🙂
[13:16:37] Laborious Aftermath: yep
[13:16:41] TR Amat: Not like virus scanners. 🙂
[13:16:50] Mick Nerido: What will AI’s look like?
[13:16:52] Metafire Horsley: If you want to read about fancy AI stuff you could also read my short story http://radivis.com/storytellers-chronicles/
[13:17:01] Braeden Maelstrom: The Culture is a symbiotic society of artificial intelligences (AIs) (Minds and drones) and humanoids who all share equal status. As mentioned above, all essential work is performed (as far as possible) by non-sentient devices, freeing sentients to do only things that they enjoy (administrative work requiring sentience is undertaken by the AIs using a bare fraction of their mental power, or by people who take on the work out of free choice).
[13:18:03] TR Amat: We are still using 1970s software (Windows) on 21stC hardware…
[13:18:23] Khannea Suntzu: Braeden, I know the Culture, and in my somber moments I dread the Culture is somewhat, ahh, optimistic when it comes to the runaway effects we might face between now and, ahh, 2150
[13:18:26] TR Amat: We know how to build more reliable systems, and have known since the 1970s.
[13:18:50] TR Amat: Just like we know how to manage IT etc projects.
[13:19:09] Metafire Horsley: What? IT projects are managed? 😉
[13:19:17] Khannea Suntzu: Yes
[13:19:23] Braeden Maelstrom: oh it’s certainly optimistic to the point of utopianism, but it is still worth noting the implication that AI’s don’t necessarily have to have some adversarial relationship with the human species.
[13:19:34] TR Amat: People have paid me money to help manage IT projects. 🙂
[13:19:37] Khannea Suntzu: chimpanzees on crack, doused in pepper.
[13:19:46] Laborious Aftermath: LOL
[13:19:46] Laborious Aftermath: ?
[13:19:53] Ivy Sunkiller: IT projects managed, ahhhhh, wishful thinking
[13:20:09] TR Amat: Actually, doused in petrol, and threatened with a match. 🙂
[13:20:28] Khannea Suntzu: No, that’s asset control and accountancy
[13:21:25] TR Amat: I won’t bore you with going on about metadata and agreement management systems…
[13:21:26] Khannea Suntzu: I must *arrg* excuse myself
[13:21:37] Khannea Suntzu: I have a problem with my lenses.
[13:21:59] Khannea Suntzu whimpers
[13:22:16] Mick Nerido: What are the political implications of AI’s?
[13:22:39] Braeden Maelstrom: vast
[13:23:00] Mick Nerido: Can they vote?
[13:23:07] Laborious Aftermath: Also depends on the strong type that shows itself first
[13:23:24] Braeden Maelstrom: if they couldn’t you could count on a pretty quick revolution
[13:23:53] Braeden Maelstrom: dealing with ai’s would have to come from a place of somber humility
[13:24:06] Braeden Maelstrom: knowing that if they chose to, they would wipe us out
[13:24:31] Braeden Maelstrom: it wouldn’t be much different from dealing with extraterrestrials
[13:24:34] Mick Nerido: Why would they want us around?
[13:24:56] Braeden Maelstrom: although it would be much easier to deal with ai’s rather than aliens, since we designed them
[13:25:06] Metafire Horsley: If AI appears, merge with it. Problem solved 🙂
[13:25:12] Braeden Maelstrom: why wouldnt they want us around?
[13:25:14] Braeden Maelstrom: we’re weird
[13:25:24] Mick Nerido: as pets?
[13:25:59] Braeden Maelstrom: and yes, if you believe in the whole ‘soul’ concept, maybe people will just start reincarnating into artificial bodies instead of human ones
[13:26:08] Metafire Horsley: Sometimes. And if we annoy them they hit our off-switch.
[13:26:56] Mick Nerido: Well if I replace all my old failing biological parts when do I stop being human?
[13:26:59] TR Amat: AIs might want us around because we make the world a lot more interesting. We need to avoid “interesting” becoming “terrifying”, or “dangerous”…
[13:27:20] Metafire Horsley: You stop being human once enough people decide you aren’t human anymore.
[13:27:25] Mick Nerido: AS entertainment
[13:28:02] Braeden Maelstrom: maybe you’re not human once you stop identifying yourself as such
[13:28:12] TR Amat: The Dalai Lama said in an interview that when robots/AIs get sophisticated enough he’d anticipate souls getting incarnated into them, as well as into humans…
[13:28:24] Jimmy S (tanigo.inniatzo): I would imagine that it’d be once we replace the brain with a “synthetic” one
[13:28:27] Metafire Horsley: Can I call myself a horse now? 🙂
[13:28:36] TR Amat: Neigh. 🙂
[13:28:37] Jimmy S (tanigo.inniatzo): but then we could always do what they did in GITS
[13:28:41] Metafire Horsley: snorts
[13:28:51] Ivy Sunkiller: full body prosthesis, yay!
[13:28:54] Mick Nerido: GITS?
[13:29:02] Jimmy S (tanigo.inniatzo): Ghost In The Shell
[13:29:03] TR Amat: Ghost in the Shell.
[13:29:20] Metafire Horsley: Catomic bodies that could be aggregated and incorporated at will is the way to go 🙂
[13:29:26] TR Amat: http://en.wikipedia.org/wiki/Ghost_in_the_Shell
[13:29:45] TR Amat: Do not mess with the Major. 🙂
[13:29:52] Jimmy S (tanigo.inniatzo): as I understand it in GITS, they had a Synthetic Body, Real Brain with a link to an AI or some such as well in their bodies
[13:30:08] Jimmy S (tanigo.inniatzo): that was the “ghost”
[13:30:09] Braeden Maelstrom: they had cyber brains
[13:30:10] Metafire Horsley: We might also live as “body fog” ::D
[13:30:14] Ivy Sunkiller: Jimmy: actually the brain could be artificial too
[13:30:17] TR Amat: You are still human as long as you have a (intact) ghost.
[13:30:20] Ivy Sunkiller: +what others said
[13:30:21] Laborious Aftermath: Well as much as I love GITS, we already see a change from some of what they depicted in it. No need for hard wired implants, with things like EEGs and such becoming more powerful every day
[13:30:21] Jimmy S (tanigo.inniatzo): ah, thank you Ivy
[13:30:31] Jimmy S (tanigo.inniatzo): and Braeden
[13:30:35] Braeden Maelstrom: their mind state was transferred to an artificial brain, and their bodies were synthetic according to whatever they needed as implants
[13:30:35] Ivy Sunkiller: they could move “ghosts” from organic brains to artificial ones
[13:30:42] Ivy Sunkiller: couldn’t clone them
[13:30:50] Mick Nerido: Would the first AI’s be less intelligent than us?
[13:30:54] Laborious Aftermath: fMRI’s and a lot of development in those fields
[13:31:02] Metafire Horsley: Yes, Mick. In most ways.
[13:31:14] Ivy Sunkiller: that’s where the fiction part of science fiction kicks in, in GITS 🙂
[13:31:34] Ivy Sunkiller: Mick: they would
[13:31:40] Ivy Sunkiller: they will advance quickly though
[13:31:44] TR Amat: Read “Saturn’s Children” for an interesting post-human setting: http://en.wikipedia.org/wiki/Saturn’s_Children_(Stross_novel)
[13:31:46] Jimmy S (tanigo.inniatzo): what I’ve never understood about human brains to synthetic brains is: human brains are made up of loads of connections, so wouldn’t the synthetic brains have to have the same “connections” for that person to be the same?
[13:32:12] Ivy Sunkiller: Jimmy: precisely, there was a TED talk about it 🙂
[13:32:13] Mick Nerido: So they start as our servents
[13:32:56] Metafire Horsley: Well, mostly yes, Jimmy.
[13:33:00] Jimmy S (tanigo.inniatzo): oh right Ivy, do you have a link?
[13:33:53] Jimmy S (tanigo.inniatzo): thankies
[13:33:58] Laborious Aftermath: yep, the potential growth rate for a new AI with the right resources. They could grow and evolve fast. Yes, they will start as agents and servants most likely. But with the right resources and expandability they could grow almost overnight, so to say
[13:34:10] Ivy Sunkiller: http://www.youtube.com/watch?v=HA7GwKXfJB0 that’s the one I believe
[13:34:35] Ivy Sunkiller: yup, that’s the one
[13:34:37] Braeden Maelstrom: it depends on what kind of machinery they have at their disposal as well
[13:34:51] Braeden Maelstrom: stick an advanced ai in a well stocked automated factory
[13:34:56] Braeden Maelstrom: see what it comes up with
[13:35:24] Mick Nerido: They could build better and better models of themselves…
[13:35:30] Braeden Maelstrom: a million different arms, fingers, eyes, sensors
[13:35:36] Braeden Maelstrom: material to use
[13:35:56] Braeden Maelstrom: build smaller units to transfer itself into
[13:36:03] Braeden Maelstrom: network them remotely
[13:36:17] Braeden Maelstrom: an ai hivemind is born
[13:37:03] Braeden Maelstrom: it defends and upgrades itself
[13:37:04] Mick Nerido: Someone will want to “own” them and that is troubling if they are sentient
[13:37:34] Metafire Horsley: In what way would you see that as troubling?
[13:37:37] Braeden Maelstrom: well if it got to that stage of development, it would find anybody’s claim of ownership over it absurd
[13:37:48] Ivy Sunkiller: wanting to own them and being able to own them are different things
[13:38:00] Ivy Sunkiller: I’m sure my dog would like to own me, but that doesn’t seem to be the case 🙂
[13:38:10] Laborious Aftermath: Ty ivy 🙂
[13:38:11] Mick Nerido: That would make them slaves
[13:38:33] Metafire Horsley: What’s the problem with slaves, if they aren’t human?
[13:38:36] Ivy Sunkiller: and that’s not really about physical dominance
[13:38:39] Braeden Maelstrom: i guess the trick with that is, if the ai isn’t under threat, it doesn’t have to attack
[13:38:52] Ivy Sunkiller: there are plenty of animals that can defeat a human
[13:39:01] Ivy Sunkiller: and yet we are the ones that rule on the planet 🙂
[13:39:11] Ivy Sunkiller: so saying that we are going to own superior AI
[13:39:18] Ivy Sunkiller: is like saying ants want to own humans
[13:39:34] TR Amat: Once you have “virtual tourism”, or, those telepresence robots starting to appear in offices… And, if an AI can drive one of those…
[13:39:42] Braeden Maelstrom: which is why minds control the culture lol
[13:40:38] TR Amat: http://en.wikipedia.org/wiki/Telepresence
[13:41:02] Mick Nerido: So how soon will there be viable AI’s?
[13:41:13] TR Amat: http://en.wikipedia.org/wiki/Telerobotics
[13:41:35] Metafire Horsley: 20 years or so. In 20 years, ask again :9
[13:41:39] TR Amat: Narrow AI has been in use for 20+yrs.
[13:42:01] TR Amat: More general AI could appear any time now.
[13:42:25] Braeden Maelstrom: im just excited as shit about graphene and quantum processing, so we’ll see how things turn out
[13:42:48] Mick Nerido: I forgot TR you are an AI lol
[13:42:51] TR Amat: http://en.wikipedia.org/wiki/Strong_AI
[13:42:59] TR Amat: I’m a robot fan. 🙂
[13:43:30] Mick Nerido: i stand corrected 🙂
[13:43:43] TR Amat: http://en.wikipedia.org/wiki/Strong_AI#Artificial_General_Intelligence_research
[13:44:19] TR Amat: Serious attempts are underway to loot discoveries from neuroscience…
[13:44:48] Braeden Maelstrom: http://en.wikipedia.org/wiki/Graphene#Graphene_transistors
[13:45:35] TR Amat: But it may be some time before a RealDoll is your new robotic overlord. 🙂
[13:45:39] TR Amat: http://en.wikipedia.org/wiki/RealDoll
[13:46:05] TR Amat: http://www.realdoll.com/ 🙂
[13:46:13] Jimmy S (tanigo.inniatzo): lmao
[13:46:15] Mick Nerido: AI’s could be silicon based life forms
[13:47:13] TR Amat: AIs are quite likely to live in the Cloud.
[13:47:37] TR Amat: Buying more computing resources when they need to do difficult tasks.
[13:47:40] Metafire Horsley: Yeah, like bloggers do 😉
[13:47:59] Braeden Maelstrom: i love wikipedia http://en.wikipedia.org/wiki/History_of_masturbation
[13:48:05] TR Amat: I don’t believe too many bloggers have yet been uploaded. 🙂
[13:48:20] Laborious Aftermath: Hahaha
[13:48:53] TR Amat: http://en.wikipedia.org/wiki/Mind_uploading
[13:49:52] TR Amat: I’m pretty sure some people are getting their heads frozen in the hope they’ll be uploaded, by-and-by…
[13:50:19] Metafire Horsley: That’s not a very bad strategy
[13:51:13] TR Amat: Whole body prostheses may be popular, as I suspect that uploaded human minds will need those, or a virtual world equivalent, to operate stably.
[13:52:11] TR Amat: http://en.wikipedia.org/wiki/Prosthetics_in_fiction
[13:52:57] Braeden Maelstrom: yeah a cyber brain would have to be with an artificial body
[13:53:12] Metafire Horsley: iBody 🙂
[13:53:18] Braeden Maelstrom: i doubt the translation would be smooth enough between biological parts and the tech stuff
[13:53:43] TR Amat: You’ll need to know how to hack an uploaded human mind so that it would function without a body, I’d think.
[13:54:06] Metafire Horsley: Yes, that would be useful
[13:54:17] Metafire Horsley: Would save a lot of resources 😉
[13:54:22] TR Amat: If you upload the only biological bits left are simulated…
[13:55:23] TR Amat: Worry if your relatives stick you in a low resolution virtual world for your afterlife. And only let you out for family meets on public holidays. 🙂
[13:55:26] Ivy Sunkiller: there already are artificial parts able to replace parts of mice brain
[13:55:35] Braeden Maelstrom: what do you guys picture with nanotechnology and artificial intelligence?
[13:55:43] Ivy Sunkiller: able to perform at quite high accuracy
[13:55:56] Ivy Sunkiller: I wouldn’t be really that doubtful about the whole organic to artificial
[13:56:02] Ivy Sunkiller: -thing
[13:56:09] TR Amat: Full blown nanotech is one of the big game changers.
[13:56:40] Ivy Sunkiller: full blown nanotech is one of the possibilities, yes
[13:57:14] Braeden Maelstrom: that’s what my av’s character is really, an ai nanite hivemind
[13:57:19] TR Amat: There is significant evidence of neural plasticity, ie the brain (uploaded mind image?) adapting to major changes in circumstances, e.g. the results of strokes, brain injury, etc.
[13:57:45] TR Amat: http://en.wikipedia.org/wiki/Neuroplasticity
[13:59:31] TR Amat: One of the problems of AI is motivation. Working with humans might be a wise one to install. It would be unwise to make that totally overriding, though…
[13:59:48] Jimmy S (tanigo.inniatzo): Braeden you mean like the Geth from Mass Effect?
[14:00:04] Braeden Maelstrom: ive never played mass effect
[14:00:07] Jimmy S (tanigo.inniatzo): oh
[14:00:11] TR Amat: I thought the Geth were networked individual bots?
[14:00:22] Jimmy S (tanigo.inniatzo): no, they’re individual programs
[14:00:23] Ivy Sunkiller: they were
[14:00:33] Ivy Sunkiller: well, programs, whatever :p
[14:00:34] TR Amat: Maybe more like the replicators from Stargate SG1?
[14:00:47] Metafire Horsley: Yeah, the replicators were stylish
[14:00:48] Braeden Maelstrom: here, this is taken from something i wrote
[14:00:51] Braeden Maelstrom: The Qek are a synthetic nanobionic species created by an ancient intelligent race that has long since ascended into non-physical reality. The Qek are made of intelligent self replicating nanites that all function together to form a larger organism. They are all networked in such a way that the entire ‘body’ of the organism acts as a singular substrate of highly dense and extremely powerful quantum computing capability.
[14:01:04] Ivy Sunkiller: (( and so the white rabbit discussion changed into a sci-fi fanfest ))
[14:01:12] TR Amat: http://en.wikipedia.org/wiki/Replicator_(Stargate)
[14:01:15] Laborious Aftermath: LOL
[14:01:16] Laborious Aftermath: ?
[14:01:24] Braeden Maelstrom: right
[14:01:28] TR Amat: Robotic rabbits, anyone? 🙂
[14:01:39] Jimmy S (tanigo.inniatzo): I think we’re just comparing it to things we know of Ivy
[14:01:40] TR Amat: SF is a way to look at the future…
[14:02:00] Metafire Horsley: SF is a way to design the future 🙂
[14:02:28] Nindae Nox (sethsryt.seetan): I wonder, reading about this uploading of the mind. Would one truly move the consciousness into the computer, or only create an exact copy which acts and thinks alike but is not experienced by the original consciousness, or even has no consciousness at all? Would it be possible to move the consciousness to a synthetic brain?
[14:02:41] TR Amat: “Science fiction is the mythology of the Future” – John W. Campbell (I think)
[14:02:54] Braeden Maelstrom: joseph campbell?
[14:03:11] Metafire Horsley: An exact copy is good enough.
[14:03:19] TR Amat: http://en.wikipedia.org/wiki/John_W._Campbell
[14:04:07] Braeden Maelstrom: http://en.wikipedia.org/wiki/Joseph_campbell
[14:04:50] TR Amat: Rather different, I think…
[14:05:22] Braeden Maelstrom: well, i saw mythology and campbell 😛
[14:05:33] Nindae Nox (sethsryt.seetan): I do not agree on that. Unless it is for preserving factors like “How did this or that person think.” But it is not you, it is just a copy, not the original you who went through the whole transformation.
[14:05:34] TR Amat: It depends what consciousness is, I think.
[14:05:38] Braeden Maelstrom: and he did do the whole mythological symbolism of star wars
[14:06:05] Metafire Horsley: Well, your self of today also is just a bad copy of your self from yesterday. So what?
[14:07:05] Metafire Horsley: Consider sleeping as uploading yourself to tomorrow 🙂
[14:07:15] Nindae Nox (sethsryt.seetan): I am not sure, but as far as I know, consciousness itself isn’t solidly proven by science. One could experience things with the brain being dead.
[14:07:32] Nindae Nox (sethsryt.seetan): Or at least, non-active.
[14:07:40] TR Amat: If consciousness is a self-reflective system, based around “self” and “other”, and capable of predicting the behaviour of other consciousnesses, I think, yes, you could get a consciousness to run on hardware rather than wetware.
[14:08:19] Metafire Horsley: Yeah, it’s the software that matters. The patterns
[14:08:26] TR Amat: We know quite a lot about conscious behaviour, these days…
[14:08:30] Braeden Maelstrom: i gotta head out. thanks for the discussion everyone. see you next time.
[14:08:39] Laborious Aftermath: Tc
[14:08:40] Metafire Horsley: see you Braeden
[14:08:41] Nindae Nox (sethsryt.seetan): But would it be the same consciousness, would I truly go to sleep before uploading and then wake up inside my synthetic host?
[14:08:58] Nindae Nox (sethsryt.seetan): Be well.
[14:09:21] Metafire Horsley: Yes, Nindae. And you would awaken in your original body. Your two consciousnesses would be identical, then diverge.
[14:09:50] Metafire Horsley: It’s a pretty weird process.
[14:10:05] Metafire Horsley: You don’t know as who you will wake up.
[14:10:27] Laborious Aftermath: like going to sleep as one and waking up as two that diverge, like twins do in a way
[14:10:31] TR Amat: An AI wouldn’t need to be conscious to cause a great deal of trouble…
[14:10:36] Nindae Nox (sethsryt.seetan): Then it would only be a clever way to fake immortality. Since the original consciousness is still in the original body.
[14:10:53] Metafire Horsley: Why only fake immortality?
[14:11:24] Nindae Nox (sethsryt.seetan): Depends on what you define as immortality.
[14:11:33] TR Amat: Most proposed (non-nanotech) uploading processes are highly destructive of the brain. So, you don’t have your original brain left afterwards. 🙂
[14:11:36] Ivy Sunkiller: having backups!
[14:11:44] Metafire Horsley: The original biologically embodied self would remain, but with enough uploads and technology around, aging could be stopped.
[14:12:33] Nindae Nox (sethsryt.seetan): Maybe research in what consciousness truly is would be required before they would begin trying to ‘upload’ it virtually.
[14:12:52] TR Amat: Making sure your backups are distributed across a few hundred light years, so as to avoid gamma ray bursts, might appeal…
[14:13:06] Metafire Horsley: I guess most would be very careful before uploading, but some would probably be more reckless
[14:13:44] TR Amat: How secure is your upload/backup, and the conditions under which it gets to be used/run? 🙂
[14:13:49] Metafire Horsley: In case of galactic core outburst, change universe 🙂
[14:14:09] TR Amat: At least, change galaxy. 🙂
[14:15:15] TR Amat: I think one current approach is that if the neurones and their interconnections are simulated faithfully enough, consciousness will “come out in the wash”. 🙂
[14:15:44] Metafire Horsley: Yeah, that’s a pretty sure hit, I guess. Even if it might be somewhat inefficient
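TR's "simulate the neurones faithfully enough" idea bottoms out in dynamics like the following. This is a deliberately crude single leaky integrate-and-fire neuron, nowhere near faithful; the function name and all parameter values are arbitrary toy choices for illustration:

```python
# One leaky integrate-and-fire neuron: membrane voltage leaks toward
# zero, is driven by a constant input current, and fires (then resets)
# when it crosses a threshold.
def lif_spikes(current, steps, dt=1.0, tau=10.0, threshold=1.0):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + current)   # leak term plus input drive
        if v >= threshold:               # fire and reset
            spikes += 1
            v = 0.0
    return spikes

n_spikes = lif_spikes(current=0.2, steps=100)
```

The "come out in the wash" position is that wiring enough such units together with faithful connection weights, rather than any cleverness in the individual unit, is what would matter.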
[14:16:25] Metafire Horsley: Anyway, I’m tired and need some rest. Goodbye
[14:16:34] Laborious Aftermath: Tc
[14:16:53] TR Amat: At least occasional rest is recommended. 🙂
[14:18:22] TR Amat: A lot of AI is about working out algorithms that might give a human-like (or better) intelligence.
[14:23:23] TR Amat: I’ve found interesting articles in recent “New Scientist” magazines…
[14:23:45] Laborious Aftermath: Oh on what ?
[14:24:55] Jimmy S (tanigo.inniatzo): lolwut?
[14:34:54] TR Amat recommends regular reading of “New Scientist”. 🙂
[14:35:31] TR Amat: http://www.newscientist.com/
[14:36:39] TR Amat: You know your brain wants to. 🙂
[14:39:00] Ivy Sunkiller: my brain wants to sleep
[14:39:07] Ivy Sunkiller: wtb better brain
[14:39:11] TR Amat: Sleep is good too. 🙂
[14:39:21] Ivy Sunkiller: sleep is waste of time 🙂
[14:39:54] TR Amat: Someone I know was recommending fish oil capsules combined with 20+g of 80%+ Dark Chocolate.
[14:40:10] Nindae Nox (sethsryt.seetan): For what?
[14:40:11] TR Amat: Mind altering…
[14:40:28] Laborious Aftermath: I take it the chat meeting is over with now? Nice stream of music by the way. 🙂
[14:40:40] TR Amat: The chocolate gives you a kick, the fish oil provides the resources to work with.
[14:41:31] TR Amat: I’m not familiar with anything that reliably substitutes for sleep, though. 🙂
[14:41:40] TR Amat: Need Brain 2.1. 🙂
[14:42:41] TR Amat: Thanks for hosting talk, Ivy.
[14:44:58] TR Amat: Bye for now.