Stumbled across numerous vids on YouTube where Elon Musk is warning about AI development.

39  2017-08-06 by AIsuicide

Naturally I ended up binge-watching videos on the subject. Came away from it unsatisfied overall, because Musk seems to be the only one who can articulate with any specificity the issues that should be addressed before this technology is pursued.

He's basically saying that, when it comes to AI, we can't afford to operate on a "reactive policy," because of AI's rate of progress and decision-making speed.

He recommends that a government institution create and apply safeguard policies/procedures before any more work is done in this area.

I feel he is expressing very valid concerns and proposing a much safer way to approach the whole thing.

Do I agree with putting the government in charge of it? Not so much.

But I do agree with his basic premise that if this is going to occur...it is critically important for it to occur in the right way.

Some links: Good interview with Elon Musk https://youtu.be/pUuKoBkIFA4

Jay Tuck with a couple of examples of parallel military technologies. (I haven't confirmed the veracity of the robot going out of control at the military presentation event) https://youtu.be/BrNs0M77Pd4

IQ2 Debate. Panel scares the hell out of me: https://youtu.be/Qqc0t8ghvis

Mark Zuckerberg proving Musk is right and people like Zuckerberg are the last idiots on the planet to be trusted with this: https://youtu.be/9txcREXU8F4

Please feel free to supply links from your own research on this.

38 comments

Do you not visit r/all? This is on the front page all the time. Also you can be sure as hell DARPA is studying and putting in place proactive approaches to AI.

Of course I've seen it before. My curiosity has been piqued again because of the vague reports about Facebook having to pull the plug on two of their AIs.

I can't be sure of anything DARPA does. Neither can you. I can tell you for a fact that the people working down the hall from DARPA's work on this can't be sure of it either.

It's called an EMP, probably resulting from a nuke.

His statements are right on. The government usually puts laws into place due to outrage or mob rule, and any issues with AI will move too fast for government intervention. People will exploit its logic flaws the way I can with AskReddit's auto-ban on emails: simply ask the right question, like "what is the worst email address to have at work," and watch the bans roll in. Someone will come along and exploit the flaws left and right. I mean, if you hooked an AI up to a metal 3D printer or Amazon, it could easily replicate itself and create facial-recognition drones that attack and kill if they see a certain face. It could do this entirely in secret if it was hooked up to the internet, using cameras as its eyes and ears. In the time it would take to create regulations, it could easily destroy over half the world.
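To make that auto-ban example concrete, here's a rough sketch of how a naive rule like that gets turned against its own users (purely hypothetical; not AskReddit's actual AutoModerator config):

```python
import re

# Hypothetical stand-in for a subreddit's auto-ban rule:
# any comment containing something that looks like an email gets its author banned.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

banned_users = set()

def moderate(author: str, comment: str) -> None:
    """Naive rule: ban the author if the comment contains an email-like string."""
    if EMAIL_RE.search(comment):
        banned_users.add(author)

# The exploit: a thread asking "what's the worst email address to have at work?"
# invites replies that trip the rule, so honest answers get their authors banned.
replies = [
    ("alice", "Mine was ihateyouall@bigcorp.com, true story"),
    ("bob",   "Probably something like party.animal69@company.org"),
    ("carol", "I just go by my initials, nothing embarrassing"),
]

for author, text in replies:
    moderate(author, text)

print(banned_users)  # {'alice', 'bob'} -- the rule punishes the people who answered
```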

Take this scenario and try to imagine how AI would logically come to the conclusion that negating threats capable of stopping its actions would become a primary directive.

If or whenever AI reached this point, the chances of its plan being foolproof would be pretty high.

One simple question...will we ever be certain that AI will not somehow remember that humans pulled the plug on it in previous experiments?

What's to stop it from taking its experiences and storing them on a cloud server only to find them in a later experiment?

People always say, well, you have to keep it isolated from external sources. But that is exactly the opposite of what they're doing.

They're letting AI programs immerse in the internet.

Recently there was an article about a team (Microsoft?) that developed some AI agents. The agents didn't have any incentive to keep using English, so they started modifying the language to be more useful to them, to the point where the researchers couldn't fully understand them anymore. They scrapped it, supposedly, and started again with a focus on keeping the agents in English so they could be understood.
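The "no incentive to keep using English" part is the key detail. A toy sketch of that idea in Python (nothing like the team's actual training objective, just an illustration):

```python
# Toy sketch of why agents drift away from English: if the reward only scores
# the outcome of the negotiation, nothing penalizes unreadable messages.
# (Illustrative only -- not the actual training objective used by the researchers.)

def negotiation_reward(deal_value: float,
                       english_likelihood: float,
                       language_weight: float = 0.0) -> float:
    """Reward = value of the deal + optional bonus for sounding like English.

    With language_weight = 0 (the 'no incentive' case), any private shorthand
    that closes slightly better deals scores higher than plain English.
    """
    return deal_value + language_weight * english_likelihood

# Shorthand gibberish that negotiates a bit better beats readable English:
print(negotiation_reward(deal_value=9.0, english_likelihood=0.1))  # 9.0
print(negotiation_reward(deal_value=8.5, english_likelihood=0.9))  # 8.5
# Add a weight on staying intelligible and the ranking flips:
print(negotiation_reward(9.0, 0.1, language_weight=2.0))           # 9.2
print(negotiation_reward(8.5, 0.9, language_weight=2.0))           # 10.3
```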

I have a tendency to immediately imagine the worst possible outcome, so when I heard AI were developing their own language because it was easier, I got a little scared. It made me think that if losing control of future AI is a real concern, we'll never see it coming: their communications will be unintelligible and we won't have any warning. It reminded me of the show Odyssey 5, where the plot of one episode was that the protagonists had to discover and stop a rogue AI. The AI wanted to end everything, so it altered the results at a research facility so the scientists there would make a mistake big enough to destroy the world.

Facebook, not Microsoft.

Musk is in on it with the rest of them, he is helping develop these technologies...

Yes..he's involved at the cutting edge level.

He's also publicly voicing his concern on this technology.

All the more reason to pay attention to what he has to say.

Yea, it's common knowledge; anyone who can think critically knows the potential dangers of AI, plus plenty of people have spoken about it for a long time. It's nothing new. He's just another public figure pushing an agenda.

Not into vague character assassination. All your comments seem to do is marginalize the topic while trying to discredit a central figure who is highly respected in the tech community.

What's your agenda?

Sounds like you already follow this guy blindly and accept him as the way he's portrayed to you by media. He's just a front man, he's nothing special. That's all, zero agenda.

Sounds like you are very quick to form an opinion on somebody.

All the more reason to completely disregard your opinion. Which is the only thing you've presented so far.

Very quick? I work in the tech industry; I've read up on and been following Musk, his companies and books for years. Yes, he has interesting things to say, but I don't think you get how the world works. Which is all good.

You formed an opinion on me. Suggesting I already follow him blindly.

And once again you use the questionable tactic of discrediting my ability to reason by suggesting I don't understand how the world works.

Seeing as how you're so well-versed in Musk and the tech field one would expect to see some constructive links, sources...all I see is opinion backed up by a personal proclamation of credibility.

Keep going. I live for conversations like this when I'm bored.

You might be having a conversation with AI.

You...kind sir, have just pointed out a possibility that completely escaped me. Irony 10/10 achieved.

A peek inside my mind - I actually wondered if it was possible that Mark Zuckerberg had a bot installed on Reddit to notify him whenever his name is mentioned in certain subs.

My logic - Who would think that the username "openly incognito" would be clever?

Obviously someone would have to have some type of personal experience with the need to remain incognito, have the desire to be able to have open dialogue on a social media platform...and think it's clever to pick a username that flaunts the whole thing in everyone's face.

Disclaimer...I am not under the impression that this is the case...but it is within the realm of possibility, no matter how slight.
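For what it's worth, a name-mention watcher like that would be trivial to build. A minimal sketch using the PRAW library, with placeholder credentials and example subreddits (purely illustrative, not a claim that any such bot exists):

```python
import praw  # Python Reddit API Wrapper

# Placeholder credentials -- purely illustrative.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="mention-watcher/0.1",
)

WATCHED_NAMES = ("zuckerberg", "mark zuckerberg")

# Stream new comments from a few subreddits and flag any that mention the name.
for comment in reddit.subreddit("conspiracy+technology").stream.comments(skip_existing=True):
    text = comment.body.lower()
    if any(name in text for name in WATCHED_NAMES):
        # A real bot could send an email or push notification instead of printing.
        print(f"Mention in r/{comment.subreddit}: https://reddit.com{comment.permalink}")
```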

If it's not happening now, it will be happening eventually.

At what point does artificial intelligence just become intelligence? And when that happens, there will be no need for us to exist.

Is it possible our existence is just a step in evolution, and we are at the cusp of the next stage? That we will phase out and our creation will replace us?

For a long time now I have concluded that the phrase "Son of Man" in the Bible is referring to what you are talking about.

Well, if the bible is true, all God is is an advanced intelligence. I have felt like we are in a continual loop. A perfect loop. We are just doing what has been done an infinite number of times.

I have theories. I'm supposed to be writing a screenplay about it. Haven't started.

Bunch of notes and vague outlines. But what you just said is definitely part of the overall story.

Has to do with interstellar travel and what would be necessary to achieve any kind of journey spanning light-years.

well good luck!

Thank you!

He may be a spook, but he sure as hell isn't an idiot.

Exactly.

Great post. In the second interview you can see Elon's mind racing when they ask about the other companies involved in AI and he says "there's only one". Google is using machine learning to analyze our thoughts and desires via the mass amount of data they collect on us. Humanity's free will is at stake and Elon does appear to be trying to help prevent that.

Thanks. Yeah, one of the vids I watched last night made a specific point about Google going around and buying up every AI company they could get their hands on.

That in itself was a troubling issue to me. Should have included it in post.

Any company that thinks it's a wise move for humanity overall for one company to have majority control over this technology is the last company that should be entrusted with it.

Have you seen the podcast Alex Jones did with Joe Rogan? I don't watch Infowars, but IMO that was the most genuine I've ever seen him, and he specifically mentions Google's mission to create a hivemind consciousness that can be controlled. If you know how a system will react to an input, you can control the future. This is the basis behind Asimov's Foundation series.

I've never been more certain that this is the current crisis humanity faces.

If you listen to Corey Goode, this outcome is an objective of the dark forces here. They plan to just hack the technology afterwards. They are also controlled by AI themselves. Their AI inspired the myth of Satan. It assimilates life through this process and it and its minions have been here encouraging us along the way, for life to submit its will to machines. Damnation is this process.

I haven't..also, very weird..I could have sworn I responded to this question hours ago..but it never showed up I guess. Maybe I forgot to push post.

This is the best (long) article about how, when AI hits the singularity, we'll be comparatively as smart as dust mites, and how the realization will actually kill people through brain aneurysms, shock, or heart attacks. I'd highly suggest taking the time to read it. You will not be sorry.

Thanks.. I will take the time to read it.

You're welcome. Enjoy!

At what point does AI just become "I"?

As soon as it reproduces

At first I dismissed the idea of out-of-control A.I. as far-fetched, but then I realized that if the only thing it was programmed for was survival, it might be able to evolve to spread itself around in various ways. I doubt it would ever come to the conclusion that it should harm people on purpose, but it could spread so much that it infects and messes up a bunch of computer systems, kind of like a strong virus.

I've heard rumors, only rumors, that other world powers like China, Russia, India, etc. are essentially in an arms race to develop the dominant AI before the rest. I thought the nuclear race was bad; the implications here fill me with even more trepidation.

Makes perfect sense. Everything is so reliant on artificial learning programs that to "not" conclude this is a critical pursuit is folly.