  #10  
Old 12-07-2012, 05:01 PM
lisarea
Solitary, poor, nasty, brutish, and short
 
Join Date: Jul 2004
Posts: XVMMVIII
Blog Entries: 1
Re: Ensign Steve waxes philosophical on the Singularity, a thrad by Ensign Steve

I tend to agree that it's not going to be some headline event. We're not going to wake up and have our newsfeeds announce that the Technological Singularity has arrived or anything; and I do think that, depending on your definition, we're already there.

We already have systems that make decisions with minimal human intervention. That is why the stock market has flash crashes. It's not that the systems are inarticulable or anything. That's, I think, a pretty common misperception: that if we don't know exactly how something works, it must be some kind of woo or magic. It's not, though.
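(Just to illustrate the feedback-loop part, here's a totally made-up toy sketch in Python of how a pile of automated sell rules can trip each other and cascade into a flash crash. None of the numbers or rules come from any real trading system; it's only the shape of the idea.)

[code]
# Toy sketch (purely illustrative, not any real trading system): each "bot"
# dumps its holdings if the price falls below its stop-loss threshold, and
# every automated sale pushes the price down further, tripping more bots.

import random

random.seed(42)

price = 100.0
# Hypothetical bots with randomly chosen stop-loss thresholds.
bots = [{"threshold": random.uniform(90, 99), "sold": False} for _ in range(50)]

history = [price]
price -= 1.5  # a small, ordinary dip starts things off

for tick in range(30):
    sellers = [b for b in bots if not b["sold"] and price < b["threshold"]]
    for b in sellers:
        b["sold"] = True
    # Each round of automated selling drives the price down a bit more.
    price -= 0.4 * len(sellers)
    history.append(price)
    if not sellers:
        break

print("price path:", [round(p, 1) for p in history])
[/code]

No human decides anything after the first dip; the cascade is just the rules reacting to each other.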

It's like human intelligence. Also not woo or magic, but it's too complex to fully articulate, sort out, and explain. You need incredible brain plasticity to take in all the knowledge required to use language optimally, and people cannot even explain what they're doing when they do it. And computers work the same way brains do.

So here are two fun freaky facts.

One. Transient global amnesia. This is an apparently common condition where part of your brain experiences some kind of malfunction and starts rebooting itself over and over, running a POST process: the person goes through a script, asking the same specific questions, getting their bearings and gathering information about recent events. Each cycle runs a little longer, getting a little further through the process, but while it is happening, they always start at the same place.

Here is a video of a lady with TGA, rebooting, from this episode of Radio Lab (which, listen to it). The doctor is really cool, all planning how, if it happens to him, he's going to see if he can break out of the POST process.
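(If it helps to picture the loop: here's a silly little toy sketch, nothing remotely medical about it, of a process that restarts from the same place every time but gets a tiny bit further with each pass. The "script" lines are just made up.)

[code]
# Toy illustration of the reboot pattern: every pass starts from the top of
# the same script, and each pass gets one step further before it resets.

script = [
    "Where am I?",
    "What day is it?",
    "How did I get here?",
    "Wait, did I already ask you that?",
]

progress = 1
while progress <= len(script):
    # Every "reboot" starts from the very beginning of the script...
    for question in script[:progress]:
        print(question)
    print("-- reset --")
    # ...but each pass gets one step further than the last one did.
    progress += 1
[/code]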

Two. The other thing. So you know how people with schizophrenia often have delusions that there are machines controlling or reading their thoughts? Those are called influencing machines, and they way, way predate computers. Long before we had anything anyone would really recognize as a thinking machine, people with schizophrenia were imagining thinking machines. And not just randomly, either. The first illustration at that link is of an influencing loom. Loom! Looms are the precursors of the first machines we recognize today as computers. So there is something about the way the machines we design function that we recognize, at some level, as being like the way we function. Which makes sense, of course: humans would design systems that mimic our own systems, whether consciously or not.

This is why this works.

(Which brings us to the problem of privilege and the 'digital divide,' about which, remind me to argue sometime that children who have too-easy access to prepared consumer content might be at a fair amount of disadvantage, too, as opposed to kids who have to learn how to make their computers work.)

So other computers, just like us, learn best not through prescriptive instruction, but through immersion. And they do. After a certain amount of rudimentary instruction, artificial intelligence systems observe and make associations in order to mimic and interpret human language and other knowledge.
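(Here's a tiny illustration of that idea in Python: a toy bigram model that picks up word associations just from "reading" a few made-up sentences, with no grammar rules handed to it. Real systems ingest vastly more data and use much fancier statistics, but the learning-by-exposure idea is the same.)

[code]
# A minimal sketch of "learning by immersion": the only instruction is the
# text itself; the model just counts which word tends to follow which.

from collections import defaultdict
import random

corpus = (
    "i had toast for breakfast . "
    "i had coffee for breakfast . "
    "the cat sat on the mat . "
    "the cat slept on the couch ."
)

# Learn associations purely by observation: record what follows each word.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def babble(start, length=8):
    """Generate text by following the associations picked up from the corpus."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

random.seed(0)
print(babble("the"))
print(babble("i"))
[/code]

Nobody told it that "cat" goes with "sat" or "breakfast" follows "for"; it soaked that up from exposure, which is the whole point.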

And what with the internet now being so gigantic and so rich with trivial details, it's a goldmine. There is information out there about even the most banal and abstract topics that is just ready for gleaning. So not only do machines have access to encyclopedic information about specific topics, but they have tweets about what people had for breakfast, Facebook discussions illustrating human communications and relationships, Pinterest boards demonstrating human aesthetics and associations.

So according to a lot of people, seed AIs are the big tipping point, like that's when we can officially announce the Technological Singularity! But maybe that's not really so much a big disruptive event. We already have some pretty complex and unarticulated heuristics out there doing stuff as it is, mostly in relatively limited domains as far as I know, and, as far as I know anyway, they still have off switches if nothing else.

But we already have, and have had for a while, computers that learn, and not at all unlike the way we learn. And for technology to advance much further, those systems need to be able to learn independently, without direct human intervention.

Just a note, too, that we blew past the First Law of Robotics a long, long time ago. I'm talking to you, But.
 