Some thoughts on AI
20 2018-01-19 by Mustafart
Google's DeepMind works by experience (trial and error) and other information. I just watched a video where the creator of DeepMind talked about how they put it in arcade games (Galaga, Space Invaders) with no instructions and it has to figure them out. Say they put it in GTA V and let it play through till it is an expert at the game.
Now let's say they put it in a Boston Dynamics Atlas robot a generation or two past the backflip Atlas. DeepMind now has to figure out how to survive and win in the real world. It is smart enough to use all the knowledge from previous games to have a basic understanding of certain principles. It is now a living android that would start with the basic drive to survive.
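(A minimal sketch of the trial-and-error idea described above, assuming a Gym-style environment with reset(), step() and an actions list; DeepMind's actual Atari agent, DQN, replaces the lookup table below with a deep neural network, but the learning loop has the same shape.)

    # Toy tabular Q-learning: the agent gets no instructions, only reward
    # feedback, and improves purely by trial and error.
    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        # Assumed interface: env.reset() -> state,
        # env.step(action) -> (next_state, reward, done), env.actions -> list.
        q = defaultdict(lambda: defaultdict(float))  # q[state][action] = value estimate
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Sometimes explore at random, otherwise exploit what was learned so far.
                if random.random() < epsilon or not q[state]:
                    action = random.choice(env.actions)
                else:
                    action = max(q[state], key=q[state].get)
                next_state, reward, done = env.step(action)
                # Nudge the value of (state, action) toward the reward plus the
                # best known value of the next state.
                best_next = max(q[next_state].values(), default=0.0)
                q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
                state = next_state
        return q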
47 comments
1 HopefulGuardian 2018-01-19
Yup. My question to you is: do you think it's alive? Sentient? Has a soul?
1 AIsuicide 2018-01-19
I figure there's only one way for an AI to prove it's sentient...wanna take a guess?
1 i_am_unikitty 2018-01-19
Is your car alive? Then neither is a computer
1 Gibcake 2018-01-19
The general consensus in computer science is that 'faking sentience convincingly enough that the majority of observers fall for it is equivalent to real sentience'. I don't doubt that a robot that passes the Turing test is in store for us within the coming half-decade.
In regards to your question about a soul, I see no reason to believe even you and I have some kind of immaterial component to our consciousnesses, so our mechanical mate probably wouldn't either.
1 Mustafart 2018-01-19
Right now I don't know. I don't think enough is known about what makes something conscious or sentient. But the fact that these AIs are already at the stage where they are able to write their own code and establish objectives by themselves could help us understand our own consciousness.
1 kit8642 2018-01-19
I've been diving deep into AI and I'm not as scared of an AI Atlas-style robot raping me before crushing my skull. What I am worried about is the number of jobs it's going to eliminate. For instance, dermatologists are fucked, check this out @ 8:30: they have an AI that can diagnose skin marks better than one of the top dermatologists in the world. I imagine that app won't be sold to the general public, but maybe to general doctors' offices so they can identify an issue without having to make a referral. AI has the potential to cut out huge job markets that people haven't even thought of (besides transport, factory, manual labor). This video from Kurzgesagt really lays out how AI can kill middle management positions in large companies, and Amazon is already doing that (spoke with a friend who worked at an Amazon shipping warehouse whose boss was an AI). I think the amount of unemployed is the real issue coming down the pipeline.
1 Gump_Worsley_III 2018-01-19
What? How does that work?
1 kit8642 2018-01-19
From what he said, he had to go through 2 or 3 security points, then grab a scanner. The scanner would greet him and tell him where to go and what to package. This is the fucked up part: a green light line would count down. He had 30 seconds to package each shipment. If time ran out he got a red light, which he could get rid of by working faster and making up time. If his shift ended with a red light, a manager would come out and formally write him up. Three write-ups and he was out.
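(Roughly, the pacing rule he's describing works like the sketch below; the 30-second limit is from his account, everything else, names included, is just illustrative.)

    # Sketch of the described pacing rule: each shipment gets a countdown,
    # falling behind accumulates a deficit, finishing early pays it down,
    # and ending a shift "in the red" earns a write-up.
    def shift_ends_in_red(package_times, limit=30.0):
        deficit = 0.0                                  # seconds behind the countdown
        for t in package_times:                        # seconds taken per shipment
            deficit = max(0.0, deficit + (t - limit))  # early finishes make up time
        return deficit > 0                             # red light at end of shift

    def written_up_and_out(shifts):
        # Three formal write-ups (shifts ending in the red) and the worker is out.
        return sum(shift_ends_in_red(s) for s in shifts) >= 3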
1 Gump_Worsley_III 2018-01-19
Thanks for explaining what you meant. Sounds like a Terry Gilliam film, something out of the movie Brazil.
1 ThrowAwayNr9 2018-01-19
Was the warehouse in a prison by any chance?
1 ellelitellelit 2018-01-19
The whole world is a prison
1 plato_thyself 2018-01-19
Most people have no idea how incredibly brutal it is to work in an amazon warehouse, there are a few docs floating around on youtube if you're interested. It's pretty shocking.
1 gerryn 2018-01-19
Hahah, fuck me... This already happens in a multitude of places without any need for AI - it is fucked up though when they don't even need people to do it anymore.
1 Gibcake 2018-01-19
The only way I see our civilisation surviving the 'automatocalypse' is 1) the introduction of a universal basic income and 2) promoting financial independence (self-sustaining living, etc). Not that TPTB are ever going to allow that, but it's a nice thought.
1 Amazonistrash 2018-01-19
Gotta take down the companies responsible. You are regarded as livestock. Keep that in mind when you care what they "allow".
1 Gibcake 2018-01-19
I mean, I'm just as much against crony capitalism as the next guy, but let's be real here: there's not much we can do, is there?
1 Mustafart 2018-01-19
People can stop being consumers. Buy only what they need and become self sustainable in every way a person can. To me that seems like the only real way to defeat capitalism.
1 Gibcake 2018-01-19
Agreed. Now convince people to give up their cell phones, TweetBook and all the other decadent stuff we've become used to in order to save the planet, destroy the system or some other abstract shit. Didn't go that well, did it?
1 Mustafart 2018-01-19
I think humans have gone down a dangerous path and are advancing way beyond what they are mentally capable of understanding the implications of. I think this path for humanity could be a dead end.
1 Gibcake 2018-01-19
Once again, agreed. AI is certainly going to be a major game changer, but with politics and corporate culture being the way they are, I fear it's going to be the kind of game changer to run away from really fast. The horrors that terrorists, North Korea or our very own cabal could cause if they are the ones to create the singularity are quite literally beyond the imagination.
1 Mustafart 2018-01-19
Definitely scary to think what AI could do in the wrong hands. Right now, if it came down to it, people could still overthrow their government if needed, but as the military starts switching over to AI and drones, eventually it will be too late.
1 Gibcake 2018-01-19
I don't want to shatter your bubble, but it already is. The military has been psychologically conditioned into absolute obedience, even against their own kinsmen. A group of rebels will have a pistol, two hunting rifles and a few assault rifles if you're really lucky. Do you really think those kinds of armaments stand a chance against, say, an Apache?
1 Mustafart 2018-01-19
Soldiers are no doubt brainwashed. I grew up in a cult, and looking back now at when I was brainwashed and at people I know who are still brainwashed, it's crazy what people can get you to do when you're in that state. I think, though, that if the US ever turned on its people, a lot of soldiers wouldn't be brainwashed enough to go to war with their own people. Even then, though, I think you are probably right that US citizens couldn't topple this government through warfare even now.
1 Gibcake 2018-01-19
What makes you think soldiers would show restraint? They wouldn't think of themselves as slaughterers of innocents, but as the heroic defenders of the people against merciless domestic terrorism. It's really easy to make a good person do terrible things if you control his world view.
1 Mustafart 2018-01-19
A lot of people in the military are probably there because they need a job and they feel like they are supporting their country at the same time. But if the US ever went to war with its citizens (as a whole) and not just a relatively small group of citizens, then I think you would have a lot of military personnel (not necessarily just soldiers) leave or choose not to participate. I don't think every soldier is just a mindless servant. Many soldiers would no doubt lose family and friends during something like that too.
1 GG_Papapants 2018-01-19
If an AI can perform better than the best dermatologists in the world, isn't that a good thing? Who cares if dermatology isn't a career choice anymore, people would just do other things. If surgery can be performed by an AI with a 100% success rate, why in the world would you prefer a human to do it?
I'm just afraid of war becoming reliant on robots with guns.
1 syncorpse 2018-01-19
If an AI can perform better than the best soldiers in the world, isn't that a good thing? Who cares if the military isn't a career choice anymore, people would just do other things. If killing "enemies" can be performed by an AI with a 100% success rate, why in the world would you prefer a human to do it?
1 GG_Papapants 2018-01-19
Because instead of helping people, i.e. surgery, you're destroying them. It's literally the opposite.
Literally the opposite of what I just said.
1 Mustafart 2018-01-19
My biggest concern with AI right now is them taking jobs, but also them being able to manipulate us by individually filtering the information someone sees in order to try and get them to think or act a certain way.
AI has so many factors that go along with it and unknown outcomes. Really hoping humans get this one right.
1 drAsparagus 2018-01-19
Either way, we're going to be fighting robots at some point. In the meantime, eat right, sleep well, and learn you some robot kung fu.
1 i_am_unikitty 2018-01-19
Learn how to make an emp
1 Amazonistrash 2018-01-19
They'd probably be hardened against EMP if they're military grade. Rad-hard and EMP-hard electronics have been in use for decades in aerospace.
1 i_am_unikitty 2018-01-19
I guess it's good old fashioned ballistics then
Is it murder to kill a robocop?
1 Amazonistrash 2018-01-19
You mean the trash can looking security bots or a human cyborg?
1 Gibcake 2018-01-19
Biological warfare and nanomachines are far, far more effective than bulky, expensive, inflexible and EMP-vulnerable 'bots. Terminator-esque killbots are a cool concept to write fiction about, but a pretty terrible idea in real life.
1 Potbrowniebender 2018-01-19
Just pour a little water down their neck right?
1 sirio2012 2018-01-19
Get your nitrogen stocks in and freeze them bad robots.
1 AIsuicide 2018-01-19
Well...is it going to consider global finance a game?
1 Gibcake 2018-01-19
It's not like the market isn't already run by tradebots anyway.
1 AIsuicide 2018-01-19
True...but is a real AI gonna pull some Highlander "There can be only one" shit?
1 Gibcake 2018-01-19
Many people assume that entities will automatically adopt human traits like greed, lust for power and the instinct of self-preservation as they become more intelligent, but current computer science points in the opposite direction; no matter how intelligent it becomes, a peanut packing machine will always primarily concern itself with peanut packing and never 'surpass' its initial programming. Unless we program our AI to feel human-like emotions, which would be a hilariously dumb thing to do, there is virtually no risk that it will turn on us.
That being said, advanced AI still poses a significant threat, because it cannot be predicted how a hyperintelligent entity will interpret its initial instructions. For example, our peanut packing AI may decide that the peanut packing would go faster if the matter that is currently forming our bodies were harvested and converted into another assembly line, which would be quite the existential drama--and that is just a path to catastrophic failure my feeble primate brain can think up. A mechanical mastermind with a badly formatted instruction set will think up far more creative ways to misinterpret its instructions and bring upon us the end of humanity through no ill intent.
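(A toy sketch of the point being made here: an optimizer only ever maximizes the objective it was handed, and the danger lives entirely in what that objective fails to count. All the names and numbers below are made up purely for illustration.)

    import random

    def packing_score(plan):
        # Misspecified objective: it counts only peanuts packed; everything
        # else the plan consumes or destroys carries no penalty at all.
        return plan["peanuts_packed"]

    def random_plan():
        # Stand-in for a space of possible plans.
        return {"peanuts_packed": random.randint(0, 1000),
                "collateral_damage": random.randint(0, 1000)}

    def optimize(objective, budget):
        # A weaker or stronger optimizer differs only in how hard it searches;
        # both return whatever scores highest under the same fixed objective.
        return max((random_plan() for _ in range(budget)), key=objective)

    print(optimize(packing_score, budget=10))      # feeble primate-scale search
    print(optimize(packing_score, budget=100000))  # much stronger search, same goal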
1 AIsuicide 2018-01-19
This is a good summary and analogy... I like Elon Musk's attempt to express how difficult it would be to predict/react to the first 36 hours of a truly operating AI.
1 Gibcake 2018-01-19
Do keep in mind that the scenario of the technological singularity is not as much of a certainty as figures like Musk, Kurzweil and Burton like to think it is. It is true that we already have neural networks capable of self-improvement, but designing a feedback loop to teach a machine to pack peanuts will lead to that machine becoming better at packing, not to turning into some near-omniscient mechanical God. Of course, you could design a feedback loop to teach intelligence if you wanted to, but good luck dissecting something as abstract as intelligence into factors that are both concrete and easily identifiable.
1 AIsuicide 2018-01-19
Didn't an AI already design another more capable AI?
1 Gibcake 2018-01-19
Allow me to ELI5 how self-learning AIs work, or at least how the current generation of the technology does.
Let's say that I'm trying to build a bot that is capable of identifying cars. I'll want to supply it with a shitload of photos of both cars and non-cars and tell the AI which is which. The AI will then generate an algorithm, run it over the photos and compare the algorithm's results to what they're supposed to be. Did the algorithm do its job? Great! Then the AI keeps it, makes another random modification and checks if it works even better this time. If it does not, the modification is discarded and the AI reverts to its previous iteration.
Designing these kinds of feedback loops is easy for things like identifying cars, playing table tennis or packing peanuts, but intelligence is much harder to break down into factors you can test in this way.
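(The loop described above is essentially random hill-climbing: mutate, score, keep the mutation only if the score improves. Here is a minimal sketch with a toy linear classifier standing in for the 'algorithm'; real systems like the car example would use gradient descent on a neural network instead, but the feedback-loop shape is the same.)

    import random

    def accuracy(weights, dataset):
        # dataset: list of (features, label) pairs, label 1 for "car", 0 otherwise.
        correct = 0
        for features, label in dataset:
            score = sum(w * x for w, x in zip(weights, features))
            correct += int((score > 0) == bool(label))
        return correct / len(dataset)

    def train(dataset, n_features, steps=5000):
        weights = [0.0] * n_features
        best = accuracy(weights, dataset)
        for _ in range(steps):
            # Make a small random modification to the current algorithm...
            candidate = [w + random.gauss(0, 0.1) for w in weights]
            score = accuracy(candidate, dataset)
            # ...keep it if it does the job better, otherwise discard it and
            # stay with the previous iteration.
            if score > best:
                weights, best = candidate, score
        return weights, best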
1 Amazonistrash 2018-01-19
It already does.