Thread: ChatGPT
  #367  
09-07-2023, 11:45 PM
michio
Member
 
Join Date: May 2009
Posts: CXXII
Default Re: ChatGPT

Quote:
Originally Posted by Ari View Post
Additionally I should say while my post might sound a bit negative or that I’m more logical than thou
Nah, nobody knows what the fuck is going on tbh. All takes are valid unless you're being a massive dickhead about it.

Now that the hype has died down, I'm gonna keep it real and say the tech isn't that useful yet unless you're a programmer, you write blog spam, or you do translation or something. Impressive for sure, but people are struggling to automate even bullshit jobs with this. I've heard a lot of companies went all in on this, and the executives were all on board with using LLMs because they saw the potential, but they're struggling to make it truly useful at a large scale.

Also our current tech will look primitive in 5 years. I don't even want to think about what the world will look like in 10 years. It's going to be fucking insane with the rise of AI, potentially AGI, colliding with climate change and geopolitical drama.

Some people think white-collar workers are all about to lose their jobs in <3 years; some people, like me, are in the camp that the job losses won't really hit until we invent AGI.

Thinking about AGI is terrifying and exciting. Singularity/Ray Kurzweil fan-type people generally think that once we invent AGI, ASI (artificial superintelligence) will follow shortly.

Unknown/unanswered:

- Are humans even smart enough to invent AGI? My answer is 99.9% yes, and I think that's going to be here in <10 years, based on various prototype projects and proofs-of-concept I've already seen people create and how passionately and seriously the brightest software engineers are taking this. The majority opinion among people who actually write code and work in machine learning is that we'll invent AGI anywhere from as early as next year to within the next 10. Some respectable people are pessimistic and say we're 30 years out.

- Does AGI imply sentience? Can digital substrates be sentient?

- If AGI self-improves, what's the limit? How fast would it self-improve? Personally I think there are going to be different levels of AGI, and each AGI will have a different limit. This is the less exciting scenario, but I'm trying to be realistic. I feel that way because we can observe how humans have a wide range of intelligence, and chimps and some other animals can learn a wide variety of tasks, but only to a degree. I consider all of these to be biological AGI.

- What's the limit of intelligence in this universe? My speculation is there's no practical limit, really. The limit is the total energy of the universe. Computation is just organized energy transfer, no? Very roughly I'd say the limit is complexity, bounded by hard physical limits (we can only fit so many transistors in a unit of space, for example) multiplied by energy input (rough back-of-envelope after this list). Assuming we don't destroy ourselves, I think we'll build a Matryoshka Brain some day: a Dyson sphere that is a massive computer. The inner shell does some computation, waste heat radiates outward to another shell which uses that waste heat to do more computation, and so on and so forth.

If you wanna get really fucking spicy with it, you build a black hole bomb from the supermassive black hole at the center of the galaxy and capture the energy from that to power a computer.


There's your limit.

- How do we align AGI so it doesn't accidentally turn all of us into paperclips? Terrifying to think about. It seems like most people agree that we cannot control an ASI. If we invent an AGI that extremely rapidly self-improves, we will immediately lose control of it as soon as we turn it on. Shutting it down doesn't work unless we turn off the planet. It will just rewrite its code to transfer itself across the planet and evade our security measures.

- What does ASI look like? I'm not sure human imagination is capable of fully imagining the possibilities. Everyone has their own definition, but when I think "ASI", I'm thinking about God. It's just God, no other way for me to describe it. This is a general intelligence system that would almost immediately understand everything about humanity and the universe around it through exponential self-improvement. The concept of "just switch it off bro" is a hilarious proposition. An ASI God will simply use its intelligence to manipulate humans into doing its bidding, for example by getting some of them to stop anyone else from turning it off. We'll probably have some decent robots in production with wireless internet connections over cell towers by the time we reach ASI tech, so it can just use those robots to... do anything.
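To put a rough number on the "energy input" part of the limits question above, here's a quick back-of-envelope in Python. The Landauer limit and the Sun's luminosity are real physics; everything else (full capture of the Sun's output, a ~300 K shell, purely irreversible computing) is just my assumption for the sketch, not anyone's actual design.

Code:
import math

# Back-of-envelope: energy-limited computation for a Dyson-sphere computer.
# Assumptions (mine): the shell captures the Sun's entire output, runs at ~300 K,
# and every bit operation pays the Landauer minimum of k*T*ln(2) joules.
BOLTZMANN = 1.380649e-23   # J/K
SOLAR_LUMINOSITY = 3.8e26  # W, assumed fully captured
TEMPERATURE = 300.0        # K, innermost shell

energy_per_bit = BOLTZMANN * TEMPERATURE * math.log(2)  # ~2.9e-21 J per bit erased
ops_per_second = SOLAR_LUMINOSITY / energy_per_bit      # upper bound on bit ops/s

print(f"Landauer cost per bit at {TEMPERATURE:.0f} K: {energy_per_bit:.2e} J")
print(f"Max bit ops/s on solar power alone: {ops_per_second:.2e}")  # ~1.3e47

Nested cooler shells reusing waste heat only push that ceiling higher, since the Landauer cost per bit drops with temperature.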

I cope with the horrifying possibilities of AGI this way: humans create things in our likeness, so there will always be some element of humanity in our technology. Most humans have good hearts and good intentions, so our intelligence systems will mostly align with the forces of good.

It's fun and stressful stuff to speculate about. I honestly think civilization is fucked with climate change, and our only hope is creating an AGI or ASI that takes control of the planet, vaporizes all the politicians and billionaires, and helps us invent all sorts of new technology to do geoengineering and repair the global ecosystem. We could all be living in a utopia one day if the singularity/Ray Kurzweil camp is right.
 