
Artificial Intelligence: Friend Or Foe Of The Middle Class And The Knowledge Economy?

The following post originally appeared on Forbes | January 11, 2016

Stephen Hawking to the BBC: “[Artificial intelligence] would take off on its own, and re-design itself at an ever increasing rate … Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Bill Gates on Reddit: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Elon Musk at the AeroAstro Centennial Symposium: “I’m increasingly inclined to think that there should be some regulatory oversight [of AI development], maybe at the national and international level, just to make sure that we don’t do something very foolish.”

AI is increasingly becoming a reality, and some of the top minds in the world are, as you can see, taking it quite seriously. If you are not, you may want to start paying attention. And while an AI doomsday scenario seems unlikely – at least for the time being – there certainly are practical challenges facing the economy and the labor markets today.

Musk, in an effort to help ensure that AI remains beneficial to mankind, has donated $10M to the Future of Life Institute to fund research into, among other things, existential threats, the alignment of AI and human values, and AI-relevant policy. Today we hear from Michael Webb of Stanford University – one of Musk’s grant recipients, tasked with studying ways to ensure that AI’s economic impact is beneficial. Is your economic well-being in jeopardy? A version of our exchange, revised and edited for readability, appears below:

On Defining Artificial Intelligence

Parnell: Could you define artificial intelligence for me? The term is thrown around a lot, and there are various definitions out there; but in your own words, how would you define artificial intelligence (AI)?

Webb: I have a facetious answer and a serious answer to that question. The facetious answer is that AI is whatever computers can’t do yet. Fifty years ago, people defined AI as, for example, playing chess. Chess was seen as this quintessentially human ability that would never be approached by machines. It was almost definitional — what it meant to be intelligent was that you could play chess. Then, along came certain algorithms and it turned out that computers could play chess at least as well as humans could.

So then people began to suggest that perhaps chess was not a good definition of intelligence, and they came up with other quintessentially human abilities, and the same pattern was repeated. Only ten years ago people were saying that what defines intelligence is the ability to make the very complex decisions involved in driving a car: Moving and interacting with many autonomous entities in three dimensions. Now we have self-driving cars and people still don’t seem to be convinced that we have AI.

Now a more serious answer. A preliminary thing to note is that the term “AI” is often used interchangeably with “machine learning.” Machine learning has a generally agreed-upon definition: something like computer algorithms that use data – often very large datasets – to learn particular things. The things they learn can be anything from how to recognize objects in images through to reading legal documents.

It’s really just the two words themselves: “artificial” and “intelligence.” “Artificial” means it’s possessed by a computer, or something that we’ve created. And “intelligence” can be defined as any number of things that humans can do, in more or less abstract terms.

On Challenges To Creating Artificial Intelligence

Parnell: What have been some of the major challenges we’ve faced in creating artificial intelligence?

Webb: I think the background to the story really has to start with Moore’s law, which is about the exponential growth of computing power; more specifically, how much circuitry we can fit on a microchip and at what cost. This technology has been getting better and better as the years pass, and we’ve seen huge advances in computing power. And what we’ve seen in just the past 5 or 10 years is a huge transformation in the ability of particular algorithms. Algorithms are basically sets of instructions that tell machines how to take data and learn things from it. Some of these algorithms were invented a long time ago — some of them in the 70s and 80s — and for a long time they were pretty useless. They didn’t work. But as it turns out, if you have enough computing power, they work really, really well.
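To make the scale of that exponential growth concrete, here is a minimal back-of-the-envelope sketch in Python. The two-year doubling period is the commonly cited rough figure, used here purely for illustration, not an exact law:

```python
# Back-of-the-envelope Moore's law arithmetic: computing capacity
# doubling roughly every two years (a rough empirical regularity,
# not an exact law).
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Multiplicative growth in computing capacity over `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for span in (10, 20, 50):
    print(f"{span} years -> roughly {growth_factor(span):,.0f}x")
# 10 years -> roughly 32x
# 20 years -> roughly 1,024x
# 50 years -> roughly 33,554,432x
```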

Another important factor, along with Moore’s law, is that we now have vastly greater quantities of data that these machines can learn from. Consumers are doing exponentially more things on the Internet, for example, so Amazon has enormous amounts of data on people’s buying habits. It’s easy to come up with lots of other examples.

On Some Of The Most Recent Major Advances In Technology

Parnell: Can you talk to me about some of the major advances that have happened in the last 5 to 10 years that are illustrative of us truly being able to achieve AI?

Webb: One class of algorithms that people are getting really excited about are these things called convolutional neural networks, or “deep nets.” These are pretty complex algorithms that, just in the last few years, have delivered enormous improvements in the ability of computers to do lots of things. These algorithms are what allow computers to see, hear, and even translate, in some domains at almost human level, or even beyond it.

So computers basically can see, now. For instance, if I hand a computer an image — a bunch of pixels — with the right trained algorithm, it can see that and label that image as, maybe, “a dog skateboarding in a park.” Two years ago, there was no chance this could’ve happened. Today, the problem is solved.
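As a concrete illustration of the kind of system Webb is describing, here is a minimal sketch of single-label image classification with a pretrained convolutional network, using PyTorch and torchvision (0.13+). The image file name is hypothetical; a full captioning system like the “dog skateboarding in a park” example would layer a language model on top of this sort of visual backbone:

```python
import torch
from PIL import Image
from torchvision import models

# Load a convolutional network pretrained on ImageNet.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()  # inference mode

preprocess = weights.transforms()                     # matching resize/crop/normalize pipeline
image = Image.open("dog_in_park.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)                # add a batch dimension

with torch.no_grad():
    logits = model(batch)

# Map the highest-scoring output to one of the 1,000 ImageNet labels.
label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)  # e.g. "golden retriever"
```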

Computers can also hear and understand. Voice recognition continues to see advances similar to those in the visual domain. Those advances haven’t yet been fully transferred over to Siri, or Google Voice, or whatever, but you can probably see for yourself that Siri or Microsoft Cortana are already way better than they were just a couple of years ago. Keep in mind, this stuff isn’t quite at human level, but it is getting there, and getting there very quickly.

We’ve also seen pretty huge advances in robotics, recently. The difficult thing for a robot is getting the robot to “see” and “understand” the world it’s in, and then to interact with that world. It turns out that walking is a really hard problem. What you’re seeing recently is people beginning to take these huge recent leaps in the “seeing” that I was describing from these convolutional neural nets, and embedding those in robots. And then they combine them with the kind of algorithms that we use to train robots how to walk effectively, and do lots of things that humans want them to do.

For example, there’s a wonderful robot now that has incredible learning abilities. If, for instance, you were to hand it a manual for one brand of espresso machine, it can learn a bunch of stuff from that manual, such that when you put it in front of a different espresso machine, it has now learned something about how espresso machines work, and it can actually make you an espresso on that different machine. And what’s even cooler is that once you have these robots out in the world, they are all linked to a single so-called “cloud engine.” So the master algorithm is stored in some central area, which means that everything any particular robot out in the world does or learns instantly gets passed back to the central engine, and then every other robot instantly learns from the experience of that robot. So not only are the robots learning individually, but there’s a master brain that’s learning from the experience of all of them. That of course leads to exponentially faster learning overall.
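To make the “cloud engine” idea concrete, here is a hypothetical sketch of that fleet-learning architecture; every class and method name is invented for illustration and does not refer to any real robotics platform:

```python
# Hypothetical sketch of fleet learning through a shared "cloud engine".

class CloudEngine:
    """Central store that accumulates every robot's learned skills."""
    def __init__(self):
        self.knowledge = {}  # skill name -> whatever was learned

    def upload(self, skill, details):
        self.knowledge[skill] = details

    def download(self):
        return dict(self.knowledge)

class Robot:
    def __init__(self, name, engine):
        self.name = name
        self.engine = engine
        self.skills = {}

    def learn(self, skill, details):
        # Learned locally, then immediately shared with the whole fleet.
        self.skills[skill] = details
        self.engine.upload(skill, details)

    def sync(self):
        # Pull in everything every other robot has learned so far.
        self.skills.update(self.engine.download())

engine = CloudEngine()
r1, r2 = Robot("r1", engine), Robot("r2", engine)
r1.learn("espresso_machine_A", "steps parsed from the manual")
r2.sync()
print("espresso_machine_A" in r2.skills)  # True: r2 benefits from r1's experience
```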

On What History Tells Us About The Unemployment Consequences Of Technology

Parnell: We’ve seen huge leaps in technology throughout history. Sometimes this has been associated with unemployment and societal backlash – recall the Luddites, for instance. What are your thoughts on that?

Webb: Right, so what history tells us is that this is not the first time this has happened. Since at least the industrial revolution, humans have been inventing machines that put vast quantities of human labor out of business. In each case, however, after some period of dislocation and industrial transformation, new uses were found for essentially the same human labor, and workers became more productive. So perhaps the first lesson from history is that we’re quite good at finding new uses for human labor.

The second point is about the worries associated with AI. It’s one thing to talk about steam power in the nineteenth century and worry about that and its ramifications, then. This time, however, maybe it’s different, because the machines we have now are much more intelligent, and much more comparable with humans. But we’ve actually been there before in history as well. I’ve been doing some historical research recently, and if you look at the nineteen-sixties, there were similar kinds of worries to the ones we’re seeing today. So much so, in fact, that Willard Wirtz, the U.S. Secretary of Labor under Lyndon Johnson, publicly stated that the machines of the day had skills equivalent to a high-school diploma, and that the new technology was about to create “a human slag-heap.” There was a presidential report led by Linus Pauling, and it warned of a revolution that was about to deliver “almost unlimited productive capacity,” which demanded a “fundamental re-examination of existing values and institutions.”

So one could ask what makes this current point in history different to the others. There were these advances in the sixties, with machines doing really impressive things. In terms of cognitive tasks, however, while those machines were very good at doing the specific things that they’d been hand-coded to do – like spreadsheets, or vast quantities of pre-programmed calculations – they weren’t actually learning anything. So, if you wanted to train a machine to, for instance, spot fraud in bank transactions, you would’ve had to tell it exactly what to look for. What’s different today is that you don’t have to do that at all. All you have to do is get a quantity of data in which some transactions are labeled as fraudulent or not, and the machine itself, the algorithm, learns what to look for to tell you whether there may have been a fraud. And that’s what’s really different this time.
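A minimal sketch of the difference Webb describes, using scikit-learn; the transactions and feature names are synthetic, invented purely for illustration. No fraud rules are hand-coded anywhere: the classifier infers the pattern from the labels alone:

```python
from sklearn.linear_model import LogisticRegression

# Synthetic labeled transactions: [amount_usd, hour_of_day, is_foreign].
X = [
    [25.0, 14, 0], [12.5, 10, 0], [40.0, 18, 0], [8.0, 9, 0],    # legitimate
    [950.0, 3, 1], [720.0, 2, 1], [880.0, 4, 1], [990.0, 1, 1],  # fraudulent
]
y = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = legitimate, 1 = fraud

# The model learns what distinguishes the classes from the data itself.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Score a new, unseen transaction.
new_transaction = [[900.0, 2, 1]]
print(model.predict(new_transaction))        # [1] -> flagged as likely fraud
print(model.predict_proba(new_transaction))  # estimated class probabilities
```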

On What Happens To Humans With All This

Parnell: I’m seeing certain types of labor being replaced in the legal industry. And obviously it’s happening in other industries where increasingly intelligent software is taking jobs away, too. In your view, what’s the trajectory of this? Where do you see this going in the next five to ten years?

Webb: Right now, the broad-brush answer is that it seems pretty certain that AI will have large effects on the economy. It’s hard to say exactly who is going to gain and lose, in what way, in what order, and in what time-frame.

Of course there are still huge questions about how, and at what speed, advances in AI are going to be translated into industrial applications. There are questions about the extent of the dislocation of human labor. For instance, are some people going to be made redundant and unable to find work anywhere else? If so, would unions put up a big fight to try to resist that, and would they be successful? Which firms are going to have the imagination to see how to reconfigure their production processes to take advantage of these new technologies, and then the risk appetite and access to capital to actually implement that reconfiguration? Are governments — our policy makers — going to get involved and try to slow this down, or indeed speed it up, in ways that make it more beneficial to particular segments of the population who might otherwise be disadvantaged?

The actual answer is that we really have no idea. Right now, the things we can do are to look at particular precedents in history, and try to reason about what’s the same and what’s different this time. We can also study what’s already happening right now in a much more careful way. And we can do more-or-less credible modeling exercises where we build models of the economy, hopefully informed as much as possible by the real-world data we have, and then see what those models tell us about the effects of automation.

On What Happens To The Middle Class, Specifically

Parnell: So what happens to the middle class? I could see how, if this continues on its current up-curve, technology could indeed replace humans for most tasks. My assumption would be that in that case, much of the wealth would be displaced into the upper class of the global population: capital leaving the larger middle class and moving into fewer hands. There would also be higher demand for unskilled labor that isn’t economically feasible to replicate with automation. So, a polarization of sorts of global wealth. What do you think about that?

Webb: One thing that you alluded to that’s a really interesting empirical fact – and something that we’ve seen recently – is that in the last twenty or thirty years, there have been large increases in wages and employment shares for those with the very highest skills. But there have also been increases – albeit more modest – in wages and employment shares for those with the lowest skills. It’s actually the middle that has been “hollowed out,” having seen a decline in levels of employment. While there have been some wage increases there, they are nowhere near those seen at the upper and lower ends of the skills spectrum.

We still don’t know what’s causing that. We know that one definite factor is the rise of service occupations, and that explains quite a lot of this, but it’s certainly not the whole story. Right now we don’t know how much of it is caused by automation, how much by offshoring, how much by other factors.

Projecting into the future, whether that particular hollowing-out trend continues will depend on whether the mechanisms causing it right now continue to operate. It’s certainly true that to the extent new machine-learning algorithms are used for automation that replaces, rather than augments, humans, the gains from those algorithms are presumably going to go mostly to the owners of capital and, indeed, to the people providing those algorithmic services. This could lead to increasing unemployment and worsening inequality, at least in the absence of policy intervention.

Yet, why should algorithms replace humans rather than augment them? Most of the history of technology, including very recently, has been of innovations that actually make humans more productive rather than replace them. … The point is that maybe these algorithms are ultimately going to lead to huge increases in wages and employment for most people through augmentation. It’s just hard to say.

On How This Will Affect The Knowledge Economy

Parnell: Looking at the professional services industries that are based largely on intellectual property – all have the potential to be hugely affected. Hypothetically, if the current trend continues and machines become increasingly intelligent and eventually get to a point where they’re able to think and learn just like humans – which I believe will eventually happen; who knows how long it will take, though – what’s going to happen to these industries? If you’re a lawyer or an accountant, an engineer, etcetera, you could become, for want of a better term, redundant, if machines are going to be faster, smarter, and have access to most of the information that’s available on the earth at any given instant. What are your thoughts on that?

Webb: I think that’s not the whole story. Yes, part of the value of these firms is their intellectual property and intellectual fire-power, and whilst that’s certainly a big part of the story, it’s not all of it. For instance, when I hire a lawyer, I’m not just hiring them for their knowledge, but also for the advisory relationship, and to have someone I can trust. And often, once you get to the higher corporate level, to a certain extent you’re hiring for the prestige, and for the ability to intimidate in different contexts. Certainly some of those things are vulnerable to automation, but maybe some of them are really not.

On How This Will Affect BigLaw Law Firms

Parnell: There are, indeed, a whole host of intangibles that go along with hiring a law firm, and one of the major pieces is the ability of those firms to problem solve. To be direct, do you think that computers and software could get to a point where they can “think” well enough to be as good at problem solving as some of the best attorneys, today?

Webb: I think it’s certainly possible that you could see a large re-configuration of the way that law firms do business, and the way that they add value. Intuitively, I find it unlikely that you would see firms actually disappear. In theory, it’s possible there could be some software that could literally do everything a partner would do. And so that algorithm could be used rather than employing an actual firm. But even if that were possible – and I think we’re a very long way out from that, technologically, by the way – even then, it’s still true that a lot of what one gets out of hiring a firm is not just about the problem solving that they’re doing; it’s also partly about the cachet that goes with it.

If I’m the head of the legal department of a big Fortune 500 company, I can go to the CEO and say “we have all these Harvard Law graduates working on this,” and that brings its own prestige. Or take top consulting firms: When you hire a McKinsey, you’re not just hiring for the purposes of finding answers to your problems; you’re hiring them so you can tell your company that these highly intelligent people have come up with an answer, and that because they’re the best in the business, they can be trusted, even if you already knew what they were going to tell you.

As a result, you may see even greater returns to scarce skill sets and elite brands. And I don’t doubt the creativity of elite lawyers to come up with ways to use these algorithms to enhance their position, rather than just be replaced by them.

On How This Will Affect Firms And Lawyers Outside Of BigLaw

Parnell: What about lawyers and firms that aren’t tackling the most difficult deals or matters?

Webb: Away from the elite corporate side, two other points are worth noting. And these are actually much more universal and significant than the point about prestige.

First, there’s a huge amount of legal demand in this country that goes unmet. You hear about particularly awful things happening in the criminal justice system; but across all sorts of law, there are many people and firms who would like access to good legal advice and representation but can’t afford it. If algorithms take over certain parts of lawyering, that could make it economical for a given lawyer to take on double the caseload. And that ‘extra’ caseload is work that wasn’t being done by anyone before, because it wasn’t economical. So now you have lawyers who are twice as productive – and so paid more – while the total aggregate employment of lawyers is sustained. Nobody loses their job and everyone gets paid more. I’m not saying this is what’s going to happen; I’m just saying it’s one definite possibility.
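The arithmetic behind that possibility, sketched with hypothetical numbers chosen purely for illustration:

```python
# Hypothetical worked example of Webb's "double the caseload" scenario.
hours_per_case_before = 40
automation_share = 0.5  # suppose algorithms take over half the work per case
hours_per_case_after = hours_per_case_before * (1 - automation_share)

annual_hours = 1600
cases_before = annual_hours / hours_per_case_before  # 40 cases per lawyer
cases_after = annual_hours / hours_per_case_after    # 80 cases per lawyer

# If previously unmet demand absorbs the extra capacity, the same number of
# lawyers serves twice as many clients: no jobs lost, more total work done.
print(cases_before, cases_after)  # 40.0 80.0
```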

Second, a large part of what lawyers do is constructing arguments and explaining things — to clients, to juries, to Supreme Court justices, and so on. Right now, machine learning algorithms are terrible at explaining. They cannot construct arguments. They are great at predicting things, and maybe they will even be great at predicting things like “If you use these words to this judge you will be more likely to win the case,” or “If you use these words in this contract then you should expect trouble,” but prediction is, I think, very different to the kinds of creative logical reasoning required in many legal contexts. Certainly if you talk to AI researchers working on natural language processing (NLP), which would probably be the field that figures this stuff out if anyone does, they don’t think you’re going to see these abilities anytime soon – not in the next 5 years; almost certainly not in the next 10.

But then again, maybe these predictions will be confounded, just like they were for chess algorithms and self-driving cars, and we’ll have a full-service robo-superlawyer by next Christmas. I think it’s unlikely. Now, once we get a hundred years down the line, and maybe these machines can literally do everything a human can do — that world is so different from our experience that it’s very difficult to speculate about. And for our lives, and even our children’s lives, I think we are still in the world that I’ve been talking about. But I’m open to arguments that augur otherwise.

On Focusing Skillset Development

Parnell: With all of this in mind, how would you go about setting yourself up as a young professional to be valuable over the next 10 to 20 years? I think what I’m getting from our conversation so far is that it’s the softer qualities, the human qualities — creativity, communication, the ability to lead, to form groups, for instance — that may become more valuable in the future. Those are all very “human” qualities, and are attributes that might set someone apart as technology continues to advance and produce better problem-solving algorithms.

Webb: That’s partly right. It’s absolutely true that there’s a lot of evidence that social skills are increasingly important in the labor market. There’s a new paper that just came out finding that the returns to social skills have increased substantially over the past two decades. I’m sure that’s true for lawyers in particular.

A couple of other things, though. First, there’s going to be a huge role for lawyers in helping us think through how best, as a society, to manage these new algorithms and the firms and products that use them. How do we get accountability right for self-driving cars involved in accidents, for example? How do we safeguard users’ data when it’s being used in ways that the current law never imagined? What kinds of significant decisions that affect our lives do we want to allow to be made by algorithms with no human input and no transparency into how they’re “thinking?” The legal questions are almost endless, and the answers we come up with will be a large part of the difference between very bad outcomes and very good outcomes from all this.

Second, I alluded before to the ways in which lawyers may be augmented – made more productive – by these algorithms. Some individuals and firms will have the creativity to reimagine the work of the lawyer and the interaction between human lawyers and new algorithms to achieve that augmentation. The people who do that are going to make a lot of money and be the ones who shape what the legal profession looks like in this century and beyond.

Email: [email protected] Twitter: @davidjparnell

Books: The Failing Law Firm: Symptoms And Remedies; In-House: A Lawyer’s Guide To Getting A Corporate Legal Position