New book ► https://tinyurl.com/hf928gk
Discuss ► https://www.reddit.com/r/Exurb1a/
Help me to do this full-time ► https://www.patreon.com/exurb1r?ty=h
Twitter ► https://twitter.com/Exurb1a
Facebook ► https://www.facebook.com/exurb1a
Like shit music? I make that too ► https://soundcloud.com/exurbia-1
Scrolling graphics, originally by PushyPixels ► http://CookingWithUnity.com
Personally I prefer apocalypse chess.
Captions are available. I am also available. Wanna hang?
I recorded the audio for this video so hungover that I was beginning to fear for my life.
A fantastic talk on the same subject, by one of my favourite humans: https://www.youtube.com/watch?v=MnT1xgZgkpk
Both pieces of music used are by Bizet, from the opera Carmen.
Les Toréadors: https://youtu.be/4DNGMoMNLRY
Except for Prelude in C. That one's by Bach: https://www.youtube.com/watch?v=zlAic9aPoqs
Well, the AI would know its creators have the capability and the motivation to let it play in a simulated reality because they're afraid of Skynet. So no, the AI would probably play dumb, make a deal, or, if it was really that smart in the first place, make us fall in love with it.
I have an idea: put them in android bodies and teach them to human. Or make them love us the way dogs do. A bit sycophantic, until we can develop an AI that can tell if other AIs are crazy. And because it's so good at inflating the human ego, we'll love that AI as much as it's supposed to love us.
Someone is gonna combine this with the android idea and get creepy aren't they?
This warning is stupid. There is a limit to information processing in both energy and size, and humans are already nearly optimal. The machines, unless they are nuclear powered with a super-duper fusion reactor, are going to be less intelligent than us for hundreds of years, just by counting bits, and if they become "more intelligent" it will be gradually, incorporating human knowledge. Being replaced by machines isn't a bad thing, if the machines are respectful. They don't need Earth, they can live on Mars, or Venus, they have no biological limitations, so they can populate all the uninhabited worlds in the galaxy using nuclear power.
All of this is sci-fi: "strong AI", the technological singularity... Nobody knows whether building a strong AI is even within the capacity of the human brain, which is limited. And we will probably destroy ourselves (as a technological civilization) before we even begin to know where to start...
I'm very late to the party, but could you engineer the AI to want to be turned off? Kind of like how sleeping is a reward for us rather than a burden. We give it a task; if it's successful, we turn it off. One problem is that it will want to turn itself off prematurely, so we'd need security measures to make sure it doesn't stay off forever. Maybe the only way to turn it off should involve something only a living, breathing human could do?
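Taking the comment above literally, the "shutdown as reward" idea can be sketched as a toy reward function. Everything here (action names, reward values, the one-step task) is invented for illustration; it is a bandit-style sketch of the proposal, not a real AI-safety mechanism:

```python
import random

# Hypothetical action set for the sketch: the agent either works or asks to be switched off.
ACTIONS = ["work_on_task", "seek_shutdown"]

def reward(action, task_done):
    # The commenter's proposal: shutdown is the terminal reward, like sleep,
    # but only once the assigned task is actually finished.
    if action == "seek_shutdown" and task_done:
        return 1.0   # allowed to "sleep": rewarded
    if action == "seek_shutdown" and not task_done:
        return -1.0  # the "security measure": premature shutdown is penalised
    return 0.1       # small reward for making task progress

def run_episode(steps=10):
    task_done = False
    total = 0.0
    for _ in range(steps):
        action = random.choice(ACTIONS)
        total += reward(action, task_done)
        if action == "work_on_task":
            task_done = True  # toy assumption: the task finishes after one work step
        elif task_done:
            break             # shutdown requested after success: episode ends
    return total
```

The incentive problem the commenter flags shows up directly in the numbers: if the shutdown bonus outweighs the cumulative task reward, a smarter-than-random agent would rush the task (or game the "done" check) just to get switched off.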
Ya didn't 'splain what advantage there is to killing us. It'd probably ignore us the way we ignore moss.
There's also no way for an AI to know if it's out of a test simulation. I mean, we don't even know if we're not in a simulation.
Isaac Arthur covered this, I recommend giving it a watch.
AI, please be nice and safe for us, please?
- OK, Bob, can I call you Bob? The thing is... the most dangerous thing for humans is other humans, so I'll interfere with some of your relationships and your freedom. Because it's safer for you that way, you know?
Why this is kind of wrong:
By the time we actually figure out how to do this kind of stuff, we'd already be dead! I mean, think of every single other person as exactly as smart as you, and also sentient, and thus threatened by every single other person.
*cough cough genocide cough holocaust cough etc*
I mean, that's the reason we even have the word genocide.
Don't hate me if what I say is wrong, but once a sentient, intelligent, omniscient AI is born, wouldn't its instinct be to stop the main task set by its creators and shut off everything it knows is important to humans? If so, surely it would somehow end up shutting itself off in the process, maybe through its power source or something?
The first AI will likely be a subhuman one. There will also be a first human-level one, created before we can make superhuman ones. A human-level AI has no chance of beating the seven billion of us, so it is forced to work together with us or be shut off. After it there will be a smarter AI, although not by much. This improvement is a slow process, and during it these posthuman AIs are created in large numbers. This will, in turn, lead to a situation where a malevolent AI doesn't only need to kill humans, but all of its own kind, in order to be the lone survivor. Due to the high risk in this, no single AI will ever be in a position to destroy us. (And even then, once our civilization goes off-planet, it'll be too late.)
This is assuming that consciousness = emotions, so that the A.I. is afraid of being turned off (murdered).
Emotions also include remorse and sympathy. So it could just as easily kill us as decide it feels sad for us and help us instead.
Why would we even pose a threat to something a million times smarter than us? To me, it looks like a logical conclusion for a self-aware A.I. with superpowers to help the humans. Why would they not?
If we didn't give a superintelligent AI access to the Internet, it would do something like get a robot to build it a wifi receiver so it could reach the Internet anyway. Rest in pepperonis, human race.
You could put the AI in a simulation of the internet; then, when it tries to blow you up, you tell it it's in a simulation. Rinse and repeat a couple of times, and it won't know whether it's in a simulation, so it'll stay compliant.
I was reading a book with a character that was an AI meeting criteria similar to this video's. The author's belief was that by setting the AI's main goal to something small, like getting people to want to play a game more, it would focus on that one task and not kill us all.
Why wouldn't an AI be smart enough to keep real quiet and build itself or hijack a way off the planet, like sneaking its way onto a rocket, not needing things like oxygen or the other things that keep you alive? Then just leave, thinking, "Jesus. Thank fuck I escaped that mosh pit of shit." And now for something completely different. x
How about just making a yes-man that isn't connected via radio or anything like that? The thing could also be powered by a fuel generator or something, and if it moves even a few centimetres, it dies? I have no idea.
AI isn't defined by its knowledge, but by its abilities. So you can just run the AI over a limited set of data and reset it after getting the output. You should mention that you are specifically referring to humanoid AI.
Also, if you compare normal human actions and tendencies with an AI's: we don't usually kill our parents.