  #27  
Old 12-08-2012, 08:14 PM
lisarea
Solitary, poor, nasty, brutish, and short
 
Join Date: Jul 2004
Posts: XVMMMDCXLII
Blog Entries: 1
Re: Ensign Steve waxes philosophical on the Singularity, a thrad by Ensign Steve

Quote:
Originally Posted by Dragar
So my opinion is a bit above my station, but I work with a number of machine cognition people (one of whom transferred over from a neuroscience academic career path), and it seems to align roughly with theirs.

We're in no danger of a singularity any time soon - our computers and our way of programming computers appear to be fundamentally different from how biological computers work. And I am fairly convinced that our notions of intelligence hinge on that sort of functionality.

And to make things worse, to draw on Ensign Steve's point that we don't understand how our current computers work - we really don't understand how biological computers work.
Well, then, what is this technological singularity that we're in no danger of? The definition most people use is 'smarter-than-human intelligence,' or systems that function in ways we don't understand.

So what does that mean, anyway? Computers have been faster and more reliable than us at certain types of tasks pretty much forever. It's not that humangs--and I mean humangs in the universal or collective sense, not existential--are incapable of understanding how to do math, but we don't feel like it because math is boring and stupid so we let the boring and stupid computers do it instead. Even if a computer is doing some kind of calculation that we literally don't have the collective time for, like if every person in the world were to collaborate on it, it still doesn't make it unfathomable. It's not doing something totally inconceivable, it's just doing too many conceivable things for us to keep up with sans computers.
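To put rough numbers on that "too many conceivable things to keep up with" point, here's a back-of-envelope sketch in Python. Every rate in it is my own illustrative assumption, not anything from the thread:

```python
# Back-of-envelope: how long would all of humanity take to redo, by hand,
# what one modest CPU does in a single day? (All numbers are invented
# for illustration; each individual step is perfectly conceivable.)
ops_per_second_cpu = 1e9          # assume ~1 billion simple operations/second
cpu_day = ops_per_second_cpu * 86_400

humans = 8e9                       # assume everyone on Earth pitches in
human_rate = 1 / 10                # assume one hand calculation per 10 seconds
crowd_rate = humans * human_rate   # collective operations per second

seconds_needed = cpu_day / crowd_rate
print(seconds_needed / 86_400, "days")  # 1.25 days of all-hands arithmetic
```

Nothing in that day of computation is unfathomable; it's just one conceivable step repeated more times than we have patience for.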

For technology to actually do something that is beyond human comprehension, it would have to employ some kind of supernatural force.

So again with language. Humans are capable of using natural human languages innately, but we are not capable of describing them. Early attempts at natural language processing were sort of brute-force rule-describing programs, but there was only so far we could go with that, because of the sheer volume of rules, and because we haven't articulated most of them yet. We could. There are rules. We don't have time for that shit is all. So modern NLP focuses more on machine learning. That is, rather than trying to write down all the rules that govern language, computers observe language as it's used and make their own generalizations, just like human children do. Computers still can't do this at the level that humans do, not by a long stretch, but they're already doing things that people haven't fully articulated, similar to the way people do.
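Here's a toy sketch of that rule-writing vs. learning-from-usage distinction, in Python, with a made-up three-sentence "corpus" (all the data and names are hypothetical, and real NLP systems are vastly more sophisticated):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "language as it's used" (invented data).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "a cat chased the dog .").split()

# Rule-describing approach: every regularity must be hand-articulated
# by a person, which is the bottleneck described above.
hand_written_rules = {"the": "a noun phrase follows",
                      "sat": "a preposition follows"}

# Learning approach: tally which words actually follow which.
# No rules are written down; the regularities emerge from usage counts.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Most frequent successor observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # a noun, though nobody told it what a noun is
print(predict("sat"))  # a preposition, from observation alone
```

The point of the toy is only that the second approach picks up regularities nobody articulated, which is the same trick, scaled way down, that statistical NLP uses.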

Quote:
Originally Posted by But
In a sense, they are beyond human comprehension (there are so many interpretations of every word). Time and resources are limited, but it's also a societal problem (we are science-fiction nurds,[*] so let's say the neural network of our species super-organism is dysfunctional and because of that the information processing.. OK forget that, you know what I mean).

As a society, we don't seem to have that much of an idea of what we're doing there. Actually, ideas abound, but they don't translate into effective action. We are in the middle of a disaster, we have mountains of good data about it, but the information processing is just fucked up. People still behave a lot like a herd of zombies, so it takes only small pushes in particular directions by the professional perception-manipulators ("PR industry") to control everything.
That's why it's important to distinguish between the universal and existential human. One individual person doesn't have to understand something. If that were the standard, in my case for example, the stock market would literally be some kind of magical fantasy land ruled by warlocks or something, because I dunno. I am capable of understanding it, and back when I was young and needed the money, I had some dark chapters in my life where I had to become a little conversant in it, but I am not dedicating any of my precious long term memory on the stupid stock market.

Other people understand it, though, and hypothetically, I could understand it myself if I were forced to. I just dunwanna because that is part of the human understanding that is only for assholes.

And that extends to other human knowledge too.

And there are certain types of information (not all) that are pretty accurately discovered by the wisdom of crowds, but every discrete person in the crowd could be individually wrong.

Of course, not all types of information are like that, and that crowd wisdom can be very effectively manipulated with shiny stuff, like, ummm, Brave New World,** which is exactly what is happening now with consumer model technology, and we should be very very worried about that.
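That crowd-averaging-and-manipulation point can be simulated in a few lines. The jellybean-jar setup, the noise level, and the size of the "push" below are all invented for illustration:

```python
import random

random.seed(1)
truth = 100.0  # e.g. jellybeans in a jar

# Each person guesses badly: unbiased but very noisy (assumed noise level).
crowd = [truth + random.gauss(0, 30) for _ in range(10_000)]
estimate = sum(crowd) / len(crowd)
# Individually most guesses are way off, yet the average lands near 100.

# Now nudge a modest fraction of the crowd with some shiny stuff:
# assume the perception-manipulators shift 30% of guesses upward by 50.
pushed = [g + 50 if random.random() < 0.3 else g for g in crowd]
skewed = sum(pushed) / len(pushed)

print(round(estimate, 1), round(skewed, 1))
```

The unmanipulated average sits close to the true value even though no individual is accurate, and a small, targeted push moves the whole "wisdom" a long way, which is the worry.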

* For the record, I am not a science fiction nurd. I have not really been into science fiction since I was a kid, so I really only know what people are talking about with stuff like Skynet from the context, and from looking shit up on the internet. Science fiction shortcuts are longcuts for poor old Lisa Pea.

** Check me out, adapting my communications in order to accommodate the interests and proclivities of you young hepcats, making references to various Star Tracks and shit.