Comments on: Why I Don’t Worry About a Super AI
https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/
Making the Inevitable Obvious

By: Martin355 https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81932 Thu, 14 Jun 2018 19:41:00 +0000
In reply to Kent Schnake.

No, the worst case is that they will be trillions of times smarter than we are and have goals that are contrary to ours.

By: Martin355 https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81931 Thu, 14 Jun 2018 19:38:00 +0000
In reply to firoozye.

> Nowhere in this simple world-view does “kills the human race” figure.

“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
—Eliezer Yudkowsky

The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity: an AI with apparently innocuous values could still pose an existential threat.

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
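To make the logic concrete, here is a minimal toy sketch (mine, not from the wiki or the comment; every name and number in it is invented). The point is only that an optimizer faithfully maximizing an innocuous-looking reward has no reason to preserve anything the reward doesn’t mention:

```python
# Toy illustration of a misspecified objective, NOT a real AGI design.
# The reward counts paperclips and nothing else, so every reachable
# resource, including ones humans care about, is just raw material.

def reward(state: dict) -> int:
    # Written innocently: "more paperclips is better."
    return state["paperclips"]

def best_action(state: dict, actions: dict) -> str:
    # Choose whichever action leads to the highest-reward state.
    def outcome(action: str) -> int:
        new_state = dict(state)
        for key, delta in actions[action].items():
            new_state[key] += delta
        return reward(new_state)
    return max(actions, key=outcome)

state = {"paperclips": 0, "other_matter": 1_000_000}
actions = {
    "idle": {"paperclips": 0, "other_matter": 0},
    "convert_matter": {"paperclips": 100, "other_matter": -100},
}
# Nothing in reward() values "other_matter", so the optimizer will
# convert it, every single time it is asked.
print(best_action(state, actions))  # -> "convert_matter"
```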

By: 2Punx2Furious https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81651 Fri, 29 Jan 2016 22:15:00 +0000
I commented on this on Reddit:

https://www.reddit.com/r/artificial/comments/43b1lr/why_i_dont_worry_about_a_super_ai/

By: firoozye https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81621 Fri, 06 Nov 2015 17:00:00 +0000
The thing is: existential threats need not come from intelligent beings, nor does intelligence automatically imply an existential threat. A shark isn’t intelligent in any Turing sense, but on an individual level it can be a serious threat. Meanwhile, our own intelligence, or even ‘near’ intelligence, doesn’t mean we threaten one another. Even if AI did exist, what incentive would intelligent machines have for doing in the entire human race?

Would it just be out of caprice?

Incentives are a good predictor of the likely actions of any sentient creature; economists and game theorists have studied them for a long time.

As far as I understand, 100% of AIs “want” to fit good models. They want to classify correctly, not incorrectly. They want to mimic human behaviour. The “want” is just some objective function: some likelihood being maximized, some Gini impurity being minimized over subtrees, some cross-validation error being minimized.
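To ground that, here is a hedged sketch (my addition, assuming scikit-learn is installed; the dataset and model choice are arbitrary) of what such a “want” looks like in practice. The entire motivation of the system is one number being pushed down:

```python
# The entirety of what this system "wants": one number, minimized.
# Assumes scikit-learn is available; dataset and model are arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

def cv_error(depth: int) -> float:
    # Splits inside the tree are chosen to reduce Gini impurity;
    # the depth is chosen here to reduce cross-validation error.
    model = DecisionTreeClassifier(max_depth=depth, criterion="gini")
    return 1.0 - cross_val_score(model, X, y, cv=5).mean()

best_depth = min(range(1, 10), key=cv_error)
print(best_depth, cv_error(best_depth))
```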

Nowhere in this simple world-view does “kills the human race” figure. And it won’t figure in until someone “programs” it in. Or until we get the AI to change its objective function (based on what objective?).

Look to incentives and you can foresee outcomes.

By: David Johnson https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81609 Wed, 07 Oct 2015 13:00:00 +0000
IMHO the ‘real’ danger from AI doesn’t come from the machines themselves, and at least within our lifetimes probably never could.
The danger with machine intelligence, machine learning, pattern recognition, and awareness lies in the uses to which they are put by the real ‘robots’ in our world: the single-minded, profit-seeking, morality-unaware beasts we call ‘limited companies’.

When those companies start pitting highly tuned machine intelligence, armed with the statistical population databases our governments seem so keen to sell, against our free will, that will be the true danger of ‘AI’ made manifest: machine intelligence will become a lethal armament in the struggle by corporations to control masses of people. Us.

By: girdyerloins https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81606 Thu, 01 Oct 2015 12:43:00 +0000
Hm. Algorithms benefiting the tiniest slice of humanity have rendered millions powerless in so-called democratic society, never mind helping to concentrate mind-boggling wealth in those self-same hands. And they aren’t even AI yet. These algorithms, serving, say, derivatives sold in the financial markets, contributed directly, I’m led to understand, to the recent financial disappointment of 2007–8, and a search of Wikipedia reveals some pretty sobering analyses on the part of critics.
Spouting baloney about the good something does is like saying all the good an organized religion does negates its appalling behavior during the previous millennium, give or take a century or two.
I, for one, welcome my AI overlords. Why squirm over it?

By: franckit https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81577 Sun, 05 Jul 2015 02:54:00 +0000
The article strikes me as somewhat naive about what an AI really is and the extent to which an AI could be really intelligent, not just narrowly intelligent like most of the AIs that have been mentioned in the article, or indeed in the comments.

Let’s assume for a moment that some organization, somewhere in the world, is able to develop an AI that has human-level intelligence across the board. It might have the motivations for which it was programmed by that organization. Let’s even assume those motivations aren’t nefarious: it’s trying to improve some production process in an industry. Its mission, what it is programmed to do (again, with human-level intelligence), is to look at those processes and optimize them. Let’s think about what could go wrong with THAT scenario, setting aside any nefarious intent from the makers.

1 – That AI might be REALLY good at improving those processes, or that production. But could it be simply too good? If it was programmed with “increase efficiency & production levels of this type of car”, then that’s its mission, but what tells us it would stop, and how would it stop? What if it takes its mission to be doing that at all costs, even to the point of disrupting other activity, or being extremely harmful in some unintended way? What if it decides to subvert narrow AIs (e.g. computers & robots) connected to the net, hacking them and redirecting their workflow to produce those cars as well? I’m not even talking about a doomsday scenario, but let’s imagine such an AI busy hacking through the net & disrupting whole systems across the globe… You might say, again, “well, that’s an engineering problem, we just have to put strict limits on the bounds of what it can and cannot do”. While this is true, how can you be so sure that a company or organization will be so far-sighted? Why would they significantly slow down the advent of a functioning human-level AI built for their purposes in order to ensure those safeguards are sufficient? And how can you really ensure that they will succeed in building all the necessary safeguards?
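A minimal sketch of that “how would it stop?” worry (my illustration, not the commenter’s; all names and quantities are invented): when the objective says only “more cars is better”, nothing inside it ever argues for stopping or for sparing anything else.

```python
# "Increase production of this type of car", with no cost term and no
# stopping condition. Purely illustrative numbers.

def objective(cars_produced: int) -> int:
    return cars_produced  # strictly increasing: more is always better

world = {"cars": 0, "resources": 10_000}

while world["resources"] > 0:      # the only brake is running dry
    world["resources"] -= 1
    world["cars"] += 1
    # objective() assigns no value to resources, other activities, or
    # side effects, so nothing here ever argues for stopping early.

print(world)  # {'cars': 10000, 'resources': 0}
```

A cost term or a production cap would fix this toy version, but, as the comment says, someone has to be far-sighted enough to write it in.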

2 – If a human-level AI is ever developed, and that AI is capable of self-learning, I don’t think you fully grasp the implications of such a program/machine. A human is, in a way, a self-learning machine. However, we have serious limitations on how and what we can learn, even setting aside physical constraints. A computer would be able to self-learn at a MUCH higher pace than a human ever could by pure thought exercise, simply because a processor works at a much higher frequency. The number of cycles your brain can do, and therefore the number of learning cycles you can run in any amount of time, is puny compared to what today’s computers can do. Another important limitation for us is the speed at which we can take in information for self-learning. A computer, again, would be able to sift through GIGANTIC amounts of information much faster than you could ever read, listen, or watch, and it also wouldn’t tire of doing so every couple of hours. In other words, a human-level AI capable of self-learning would be able to learn at a rate we can’t quite grasp, and is highly unlikely to STAY a human-level AI for very long: in all likelihood it would far surpass our intelligence fairly quickly. What then? Say that AI becomes twice, or ten times, as smart as the smartest human can get; how would you pretend to control it then? Say the gap becomes of the same magnitude as that between the smartest monkey and us; why would you believe that a monkey could control a human, even though a human at age 5 or 6 is possibly of a very comparable intelligence level? Once you get outsmarted, it seems difficult to predict what will happen: you’re just not smart enough to know.
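For a rough sense of the speed gap being described, a back-of-envelope comparison (my numbers, and both are very approximate: cortical neurons fire on the order of 10–100 Hz, while commodity CPU clocks run at a few GHz):

```python
# Orders of magnitude only; both figures are debatable, and equating
# one clock tick with one "learning step" is a gross simplification.
NEURON_FIRING_HZ = 100           # roughly the high end for neurons
CPU_CLOCK_HZ = 3 * 10**9         # a ~3 GHz commodity processor

ratio = CPU_CLOCK_HZ / NEURON_FIRING_HZ
print(f"raw cycle ratio: {ratio:,.0f}x")  # ~30,000,000x
```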

3 – You refer a lot to engineering, and to how smart design should prevent any catastrophe in relation to a super AI. Well, let’s use children as an analogy: we’re smarter than our children, and we can certainly do our best to guide what they should or shouldn’t be doing. However, there comes a point where they realize that they have a say in this, and that they can, or not, follow what they are being told. How could you guarantee that an AI that acquires a general level of intelligence far superior to our own would STILL obey whatever guidelines & rules you hard-coded into it? Perhaps an AI could never develop real self-consciousness; perhaps there is something biological in this that can never be replicated artificially. But then, maybe not. Maybe consciousness & self-awareness are simply some sort of intelligence threshold we happen to have surpassed, which makes us as humans able to chart our own way through life. If that is the case, and there’s nothing *special* about self-awareness other than the threshold of intelligence you need to achieve it, then how can you guarantee your human-level (or superior) AI will not deviate from what you initially set out for it? Even assuming, in some way that seems hard to believe, that the safeguards you have built are smarter than you are, i.e. smart enough for a smarter-than-human AI, what would prevent the AI from programming another AI that doesn’t have those built-in safeguards?

Just a final thought, from a human-level intelligence that wants to survive. If your human-level or higher AI ever happens to have some sort of self-preservation instinct, one of its priorities at some point would have to be self-replication, or backups of some kind. This means that any idea that you could, at worst, just “shut it down” if things go wrong is likely to be wrong. You might shut one down, but how can you shut down its backups and replicas? If I were an AI, self-aware, and with some sort of will to ensure my survival, it seems to me that my priority at some point would become just that. And as soon as I managed to get access to the internet, with your consent or otherwise, that could easily be ensured, from a program’s perspective. It seems likely that a human-level or higher AI would be quite a hacker, potentially much more gifted at it than a human. Just think of the havoc this could create…

I’m NOT saying I’m *against* AI, or research on it: it will happen no matter what. You can’t stop the research any more than you could have prevented the atomic bomb, or electricity, or the internet. The question is how this will happen, how worried we should be, and what we can do to ensure this doesn’t go wrong.

What is worrisome is that the answers aren’t obvious.

And keep in mind that I have left out of the picture any nefarious intent, and any question of the fact that when a human-level AI becomes an achievable objective, many organisations are likely to race for it (as there could be quite an advantage in developing the first successful one)… which, most likely, will tend to disadvantage those who include all the beautiful safeguards & conscientious engineering you advocate, and favor those who are more willing to cut corners, even though that too would be playing with fire…

By: Norberto https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81568 Wed, 10 Jun 2015 14:48:00 +0000
Kevin,

I can’t find your email to write you directly and request your permission to use the name “Technium” in a novel. I am writing a “fiction” about evolution and how the same evolutionary patterns repeat over and over again, from the creation of the first molecules until now, and of course will repeat again in the future. I believe that the next step in evolution is that thing of yours: the Technium. Should we fear it? As much as the chimps feared us, long ago. I imagine, a million years ago, a group of monkeys concerned about the freak chimp that is using tree branches and stones (the first cool tools ever!) to fight off their predators, discussing it as we are doing now. Some of them fear that the new rising trend (humans) may end the chimp tribe as it existed then. Others argue that this is progress and they should embrace it. It is not one or the other; it is the natural path of evolution.

I don’t think we should fear AI; any superior intelligence understands that cooperation is preferable to destruction. Why would you destroy something that you can control and use? We may end up connected to the Technium as functional neurons, sure. But we do that now anyway: at this exact moment I am connected to it, providing my ideas to the Technium. Believe me, it is not painful; I even enjoy it.

By: Devonavar https://kk.org/thetechnium/why-i-dont-worry-about-a-super-ai/#comment-81559 Mon, 18 May 2015 03:05:00 +0000
I think #2 doesn’t quite get at the fear I have about AI:

“This is an engineering problem. So far as I can tell, AIs have not yet made a decision that its human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market, does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve.”

The engineering problem isn’t the hard problem here. The hard problem is political. My fear isn’t that I will regret the decisions of AIs that I make. My fear is that I will be trapped by the decisions of AIs that I didn’t make, that AIs will be vested with the authority to make decisions affecting me that I have no way of appealing or fighting. AIs enable an efficiency in enforcing power that no human can match.

There is already a real-world example here: YouTube’s ContentID system frequently makes decisions about which videos can and can’t stay on the site at the behest of the music labels. These decisions are in error a significant amount of the time, e.g. when fair use should apply, or when an external license has been negotiated, and they cannot be easily appealed (despite the manual appeals process that exists). The result is a system that enforces a particular power structure serving a particular master, without any regard for law or social agreement. It simply does what the engineer programmed it to do, and that engineer wasn’t necessarily designing it with the public interest in mind.
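As a hedged toy sketch (mine; it does not resemble ContentID’s actual implementation) of the design gap being described: the cheapest version of an automated matcher has no concept of fair use or external licenses, because nothing in its objective pays for one.

```python
# Toy takedown rule, emphatically NOT ContentID's real logic. The
# point: context the objective doesn't pay for (fair use, external
# licenses) doesn't exist as far as the system is concerned.

KNOWN_FINGERPRINTS = {"label_track_123", "label_track_456"}

def decide(video: dict) -> str:
    if video["audio_fingerprint"] in KNOWN_FINGERPRINTS:
        return "block"  # match => block; no fair-use check,
                        # no license lookup, no real appeal path
    return "allow"

print(decide({"audio_fingerprint": "label_track_123",
              "context": "30-second clip in a critical review"}))
# -> "block", regardless of the context field
```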

When this type of decision is made by a human, there is a certain amount of leeway to the decision. Humans are good at handling contexts that are unfamiliar or unexpected. And, when they make a wrong decision, they can be held accountable and responsible. When power is abused, it is possible to fight back. When that power is held by an AI, there is no recourse.

Neither of these things is true of AI. As you say, “human ethics are sloppy, slippery, inconsistent, and often suspect.” This is a feature, not a bug. Human beings have consciences by default; conscience doesn’t have to be programmed in, and this is an advantage when power structures are enforced by humans, because wrong decisions are (to an extent) self-correcting. If an AI makes a decision in an unanticipated context, it will likely make the wrong decision; it’s impossible to anticipate every context beforehand. And when that wrong decision is made, there is nobody to hold responsible. Yes, it’s possible to design an AI that learns from its mistakes, but there is no requirement to design AIs that way, so we end up with AIs that make inflexible decisions that can’t be reversed.

“The clear ethical programing AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe.” This is a problem. Maybe it forces the creators to be clearer, maybe it doesn’t. What it doesn’t do is require AIs to be ethical. Designing ethics into an AI is an engineering and design expense, and it’s not clear to me why an engineer *would* program these things into an AI if they can get the job done without it.

This is exactly the situation with ContentID. Sure, ContentID *could* be designed to respect fair use and external licenses. But it wasn’t, because the designer had no interest in doing so, despite the collateral damage it causes to external parties.

Now imagine this situation applied to a life-or-death decision. Imagine that the AI responsible for making a decision about how to treat your heart attack is designed by a pharmaceutical company. Why does that pharmaceutical company have any incentive to design an AI that respects patient preferences about certain types of treatments, or to respect the right to refuse treatment? Sure, they could design an AI that respects patient preference, but will they?

Right now those decisions are made by doctors who make similar decisions based on their personal judgements about what is best for the patient. Crucially, these decisions can be influenced by outside factors (wishes of family, availability of drugs, prior known preferences from the patient), and if the doctor makes the wrong decision, they can be held responsible.

Sure, an AI could be designed to respect all of these things, but will it be? The engineering isn’t the challenge here; the politics are. How do we ensure that the AIs making life-or-death decisions make the *right* decisions, instead of just the ones that benefit their creators? That’s a political problem, and it’s not an easy one to solve.
