James A. Lindsay -- Sep 29, 2016

I'm Concerned About AI Risk, but Maybe Less than Sam Harris

James A. Lindsay -- @goddoesnt

Sam Harris's new TED Talk on the risks inherent in artificial intelligence is, as he insists, something well worth watching. The reason is straightforward: we should have as many qualified, intelligent people thinking seriously about the prospect of advanced artificial intelligence as soon as we can. I quite agree, although I have at least one reason to count myself somewhat less concerned about the prospect than he is.

Harris may ultimately be less concerned than he lets on, but his talk is remarkably pessimistic about the potential risks inherent in developing artificial intelligence to the point of rivaling and eventually surpassing human intelligence. I won't attempt to rehash his case -- partly because I want you to go watch his talk for yourself -- but the arguments that he presents paint a compelling picture of a potentially scary eventuality.

If we don't destroy ourselves in the meantime, Harris makes it clear, we will eventually develop broad machine intelligence that matches our own capacity for intelligence, though with far greater speed. The result will be something of a "singularity," as it is sometimes called, in which machines are able to progress from intelligent to unfathomably superintelligent in a remarkably short time, proceeding along a self-reinforcing vector of self-improvement that seems constrained only by the limits physical law places on information processing and by the material resources the machine can harness to itself.

Harris also phrases his concern about risks poignantly, comparing the gulf between potential machine superintelligence and human intelligence to that of our own intelligence towering above that of ants. More saliently, he points out that we do not (most of the time) go out of our way to be particularly malevolent toward, or even all that concerned with, ants -- except in cases in which their goals and ours are at odds. If large numbers of ants invade our homes, for instance, not only do we lose no sleep over their wholesale destruction, we feel relief, if not outright glee, over what, in a way, is a genocidal solution to our annoyance. In fact, we often pay exorbitantly and potentially risk our health to rid ourselves of the problem. It cannot be overlooked that we call it by the very telling name "pest control."

Harris's analogy is clear: it is distinctly possible that superintelligent machines, above us on the spectrum of intellect in a way comparable to how we stand above ants or chickens, will only have our best interests in mind when they happen to, and when they don't, we're, to put it coarsely, totally screwed. He makes little or no mistake in comparing the development of advanced AI to building a god.

Hopefully we can all agree with his broad point: we should take this alarming possibility far more seriously than we do, and we need the attitude shift long before we get the first superintelligent machine going.

Now, using one of Harris's own assumptions, I want to articulate one reason that I'm not as concerned about the development of advanced AI as he seems to be, with one caveat, which I'll address nearer the end. The operative assumption in question is pretty bland in its statement: intelligence is a matter of information processing.

Here's why I'm less concerned: ethics, also, is a matter of information processing.

Harris is very clear that we often take pains to avoid causing undue grief to ants, and we seem to take greater pains when it comes to chickens, and so on, up the line of presumed richness of sentient experience. How, though, could we have arrived at such a state of ethical thought? Quite simply, we became more ethical by being able to identify and process the relevant information.

That we slaughter our meat, for whatever failures remain in that industry, far more gently than any lioness makes the point fairly vividly. Understanding that lifeforms other than ourselves live finite lives and have experiences, some of which are negative, and recognizing that it is in fact possible to avoid causing much unnecessary suffering is all it takes for us to have ethics at the level that we do.

Now consider a superintelligence, a true one, the result of self-improvements being made that equate to millions of years of human industry, innovation, and thought. If Harris is right, a machine with intelligence equal to that of a typical team of researchers at a top university, by virtue of nothing other than the inherent differential in processor speeds between electronic circuits and biochemical wetware, would achieve unfathomable progress in about a year -- without engaging in self-improvement. Ethics is one of the great human questions that we would certainly turn to an advanced superintelligent AI to solve, and we have every reason to believe, on the assumption that ethics, too, is merely a matter of information processing, that the answers a superintelligence would give to ethical questions are likely to be superethical as well.
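To get a feel for the scale of that differential, here is a rough back-of-the-envelope sketch in Python. The million-to-one speedup is an assumption (it simply mirrors the circuits-versus-wetware comparison above), and the outputs are illustrative arithmetic, not predictions.

```python
# Back-of-the-envelope sketch (illustrative only). The speedup factor is an
# assumption modeled on the circuits-versus-wetware comparison, not a
# measured quantity.
SPEEDUP = 1_000_000        # assumed electronic-to-biochemical processing speed ratio
HOURS_PER_YEAR = 365 * 24  # ignoring leap years; a rough figure is all we need

def human_years_of_thought(machine_years: float) -> float:
    """Human-equivalent years of thought covered in the given machine time."""
    return machine_years * SPEEDUP

def machine_hours_needed(human_years: float) -> float:
    """Machine hours needed to cover a given span of human-level thought."""
    return human_years * HOURS_PER_YEAR / SPEEDUP

print(f"{human_years_of_thought(1):,.0f}")  # one machine-year ~ 1,000,000 human-years
print(f"{machine_hours_needed(350):.1f}")   # ~350 years of human thought in ~3 hours
```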

So imagine such a machine brooding on matters of human interest, including ethics, for a year, which is worth something like a million years of the best and hardest human thought. Our science fiction, like Star Trek, specifically, presumes that we will have left behind most of our biggest ethical quandaries within a few centuries without superintelligent AI. The first superintelligent machine, set to the task now, would put thought past the year 2364 (in which Star Trek: The Next Generation begins) in about two hours. Pause to appreciate that it could then go on to chug through a million years' worth of top-notch ethical thought in its first year. And what would it conclude?

I don't know, and you don't either, but we can guess.

We can guess that just as we are able, at times, to come up with remarkably humane solutions to our problems that respect and honor the quality of experience for sentient life (and maybe even nonsentient life, if that's a thing), a superethical superintelligent machine could do so vastly more successfully than we can. Whatever capacity we have to identify and compromise upon solutions to our dilemmas, such a machine would have potential far outstripping humanity's to figure something out that works well for every party, or at least rationally optimizes the possibilities for them.

So, if a superintelligent AI were to come across a situation in which its (superior) values do not align with ours, we would have good reason to trust that it would understand our needs and the complexities of the problem in ways that are impossible for us to fathom, and that it would navigate optimal solutions in real time even around our apish blunders -- even predicting them in advance. It would be as though we were building a house on top of an ants' nest and, rather than merely calling an exterminator, incorporated a perfectly good, low-cost way to make the structure and its access points both ant-proof for its inhabitants and an improvement for the ants themselves.

It is therefore difficult to imagine the problems presented by superintelligence in terms of a machine possessing values that differ from ours and thereby deciding to treat us with disregard or contempt. Harris's analogy to creating such a machine being like building a god may, in fact, be truly apt, only without any of the usual mental contortions theologians call "theodicy" to explain away its gross ethical failures. Such a superintelligence should be able to make short work of many of our mundane, yet difficult, ethical challenges, and it should, in fact, be able to easily solve new ones as they arise. To me, the biggest concerns we display about superintelligent advanced AI conflicting with us in values are often projections of our own apishness onto machines that ethically would be, to H. sapiens, like we are to ants.

Well... almost. I mentioned a caveat, and I think the truly scary problems with advanced AI lie in the short term, during the tumultuous transition phase in which we shift to life with a superintelligence. Regarding such a transition, Harris's points about (1) the way human actors will respond to rumors of advanced AI and (2) the insufficiency of our current economies to accommodate advanced AI are both immediately concerning.

We don't have good solutions to those problems, and they would be made worse by the fact that our apish selves wouldn't trust emerging superintelligence in the first place, when it finally got around to solving real problems. (I once tweeted something to the effect that, should we ever invent a superintelligent machine that is never wrong, the first thing people will do when it tells them something they don't want to hear is say that it's wrong.)

A larger problem lies in the space between the emergence of superintelligence itself and a superintelligence sufficiently super to know fully how to manage itself. It is eminently plausible that as superintelligence first awakens, there will be a period of time during which the most important concern will be preventing the beast from doing something stupid, which it very well may have the power to do, before it is intelligent enough to know better. A miscalculation by such a machine could well prove disastrous in ways our own mistakes rarely could be, and, worse, we have good reasons to expect that we'd have no way to realize the machine had messed up until it was too late. The best I can say to these problems is a mere hope that the transition would be fast -- that truly highly intelligent machines would, indeed, become superintelligent in a matter of months.

The risks of advanced AI are, as Harris points out clearly, very well worth thinking about, deeply and now. There are very real, very legitimate concerns with the prospect of developing machines that can drastically out-think us in literally every regard, but I don't think we have much reason to be too concerned with its ethics, so long as we don't build it as a sociopath.

@imperfectidea -- Sep 30, 2016
Great post. Your last sentence encapsulates the whole problem, in my view. We are still very uncertain as to how our moral intuitions came about, evolved, and function. We haven't figured out a perfect moral code or system to operate in any given situation for humans. It's very unlikely that we will be able to figure that out before we build a superhuman artificial intelligence, so it will be impossible for us to codify that perfect moral algorithm into the machine. Without a moral foundation of intuitions to evolve from, how will the superintelligence be able to develop a perfect ethical system on its own? Merely being intelligent doesn't guarantee that it will "discover" and understand and agree with our naturally evolved and flawed moral system. Essentially, this superintelligence would be like a psychopath: perfectly rational but unable to fathom basic emotions and to develop and use them. That's the very likely possibility that scares Harris, and me, I think.
Keep up the good work ;-)