Podcast: Ken Goldberg & Eric Brewer – AI, Automation and the Future of Work
- Ken Goldberg
- Eric Brewer
- Multiplicity: Are AI and Robots a Threat…or an Opportunity?
- AI may not be bad news for workers
Full Transcript of Steve Chen’s Interview with Ken Goldberg and Eric Brewer
Steve: Welcome to the NewRetirement Podcast. Today we’re going to be talking with Ken Goldberg, a robotics professor from Berkeley, and Eric Brewer, a VP at Google and professor of computer science at Berkeley as well, about artificial intelligence, robots, automation, and the future of work. Both Ken and Eric live here in Mill Valley, and I continue to mine this town for talent. So, we are live in studio.
Ken’s daughter and my youngest son are in the same elementary school. And Eric and I have played Sunday morning soccer in the same pickup game for the last 10 years. Eric and I compete for the title of least skilled player among the Americans, while the Turkish, Jamaican, Guatemalan, Argentinian, Italian, and Mexican guys that we play with tolerate our presence. Both of us head the ball too much, but Eric’s cost per header is probably 50 times mine.
So, Eric and Ken, welcome to our show. It’s great to have you join us.
Ken: Thank you. Thank you, Steven.
Steve: So, Ken, why don’t we start with you. I just wanted to, if you could tell us in your own words a quick background, and what you’re working on, and what you find most interesting today?
Ken: Well, I’ve been really working on the same problem for most of my life, which is how to not drop things. I was a clumsy kid. But my father was an engineer, and came up with this idea of building a robot when I was a kid. So, I’ve always been fascinated by robots.
When I was in college, I started working in a robot lab. And then, basically trying to solve the problem of how to make a robot less clumsy. It turns out that it’s extremely difficult. Humans are very good at this, most humans. But it is very hard. We’ve been trying different approaches for many years. And very recently, we’ve made some very interesting progress. Haven’t solved the problem, but we’ve made a significant improvement.
Steve: And you’re thinking about, or you have started a company in the space, as well?
Ken: We did. We just started a company that’s called Ambidextrous Robotics.
Steve: Nice. But solving this problem probably has pretty profound impacts if you can suddenly make robots as dexterous as humans?
Ken: They’ll never be that dexterous, I don’t believe. But the big driving application for it is e-commerce. I think we’ve all noticed that there are just more and more boxes appearing on our doorsteps every morning. It’s really changing the way we shop. It’s growing at the rate of 12% a year. And the challenge is, how do you get these products into the boxes fast enough? Places like Amazon, Walmart, and JD are building warehouses every week. They cannot hire enough people to do this job. So, that’s where there’s a real opportunity for robots to come in.
Steve: Wow. Before I move on to Eric, do you see them penetrating the labor market in a huge way in the next 10 years?
Ken: Well, let’s come back to it, because I do think there’s the technological issues, and then there’s also the economic or social issues. The reality is, I think we’re going to have a shortage of workers for the foreseeable future.
Steve: Wow. Okay. All right. We’re going to dive into these robotics and singularity and stuff, but I want to let Eric give us his background really quickly, if you want to.
Eric: Sure. Happy to be here. I am a systems guy, which basically means I like to design large-scale systems that run all the time. The most obvious modern version is cloud. But starting in the ’90s, I worked on things like how do you make a search engine, how do you find all the documents on the web, how do you distribute content all over the world. They’re all variations of the same problem, which is large-scale distributed systems. They are a foundational thing for the future, in the sense that they make computing essentially free. Everyone now gets to have their own personal supercomputer.
Steve: Right. I think specifically you run infrastructure at Google. Is that right?
Eric: I do. Software infrastructure. I’m essentially the kind of senior designer for the software in Google Cloud.
Steve: I think at one point, is it true that you are the only VP at Google that has no direct reports?
Eric: I believe that is still true.
Steve: By the way, Google Home is listening to us right now.
Ken: It heard what you mentioned. Okay.
Steve: There’s what, I think five or six SVPs, and 160 VPs at Google, which has what, tens of thousands of employees.
Eric: Those numbers are all about right. I don’t actually track them very closely. But yes, there are on the order of a few hundred VPs, and quite a few fewer Senior VPs. Cloud alone probably has more than a dozen VPs by now.
Steve: Wow. So you have a very unique skillset. All right. And also, you founded Inktomi, as well.
Eric: I did. 1996. Different era.
Steve: Different era.
Eric: In the internet bubble and burst.
Steve: Yep. I was reading your Wikipedia page, and it said you were a paper billionaire at one point.
Eric: That is also true. Almost paper two billionaire.
Steve: Nice. I hit 20 million on paper back in the dotcom glory days. But, here we are. I mean, we’re still doing fine. Before I move on, can you just tell me how many servers Google runs? And also, any amazing stats that the average person might not appreciate about just how big Google is and what they’re doing.
Eric: We don’t actually talk about the details, but I would say this: it’s tens of data centers and on the order of a million servers. When Inktomi was big, we thought 1,000 servers was big, or maybe 10,000 servers would be big. But it’s certainly way larger than that now.
Steve: Wow. And how fast are you bringing them online?
Eric: The only public thing I can tell you is, we spend about $5 billion per quarter on capex, which is basically data centers, computers, concrete.
Eric: That’s a very large number. A lot of servers for that.
Steve: That’s amazing. How does that compare to Amazon, Facebook, and Microsoft?
Eric: It’s hard to know exactly. I believe Google still has the largest total collection of infrastructure, including networking and servers. That’s in part because Google now has, I think, seven or eight services that have a billion users each.
Eric: Gmail has more than a billion users. Maps has more than a billion users. And so, each of those needs their own very large cloud just for them in some sense.
Steve: You’re responsible, we were talking about this before we got started, Ken was mentioning this, for making sure that all Google Docs, all photos uploaded to Google, never disappear and are always available. Right?
Eric: Yes. It’s a lot of responsibility, actually.
Steve: Do you sleep well at night? Do you worry about this?
Eric: I actually do sleep well at night. Of course, it’s not a job that an individual can do. My role is to build systems of systems, really, such that we’re robust to all kinds of different failures. So, I’ll work on things like how far apart data centers need to be, so that if a tornado takes out one data center, the other data center’s fine. What we depend on is not that disasters don’t happen; it’s that they happen in uncorrelated ways, so that we can bound the risk of any given accident or facility.
For example, we had a fire in one facility at a point. Took out a bunch of stuff. Mostly, people didn’t notice. That’s because we had copies of data other places. The fire was limited to a section of a data center, not even the whole thing. And so, you can bound these risks over time.
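The "uncorrelated failures" argument can be sketched as a toy calculation. The failure probabilities below are made-up illustrative numbers, not anything Google publishes:

```python
# Toy illustration: why uncorrelated replicas bound risk.
# If each facility fails independently with probability p per year,
# the chance that all n replicas fail in the same year is p**n.

def prob_all_replicas_lost(p: float, n: int) -> float:
    """Probability that every one of n independently failing replicas is lost."""
    return p ** n

# Hypothetical numbers: a 1% annual chance of losing any one facility.
p = 0.01
for n in (1, 2, 3):
    print(f"{n} replica(s): {prob_all_replicas_lost(p, n):.0e}")

# Correlated failures (e.g., two data centers in the same tornado's path)
# break the independence assumption, which is why replica *placement*
# matters as much as replica *count*.
```

The multiplication only holds while failures really are independent, which is the point of spreading copies across distant facilities and jurisdictions.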
And by the way, it’s the same thing with government. You want to have data in different governments for different reasons. That’s another kind of risk. What data do you want to get subpoenaed. If a customer asks you to delete their data, how do you know it’s actually been deleted. You have to find all the copies.
So, all that kind of stuff is very interesting to me. And it’s really, you have to plan on people making mistakes, systems breaking, things not doing what they’re supposed to, and still try to be resilient despite all that.
Steve: How much data is being added to Google’s network every day or month?
Eric: The biggest single source is YouTube. The last I checked, which was more than a while ago, it was about 500 hours of video uploaded per hour.
Eric: Which means you couldn’t watch the video that’s arriving; if you started now, you wouldn’t live long enough.
Steve: You would never catch up.
Eric: You wouldn’t catch up.
Steve: When you build your networks out, are you staying a certain amount in front of the data getting added, or are you accelerating away from it?
Eric: You can’t really accelerate away from it on an exponential scale. You have to basically just try to keep up. And the network actually hasn’t been that bad a problem. It’s not, I would say, trivial. Its biggest problem is it takes a lot of lead time. You have to guess far in advance how much networking you need in different locations. And of course, locations are not interchangeable, so you have to guess correctly for every location.
On the other hand, you have to figure out how many discs to buy and what type. You have to actually get multiple suppliers. You have to have suppliers that make enough in advance that they can actually meet your needs.
Steve: Right. So you’re just having Amazon drop-ship servers to your doorstep every day. No, it’s not like that.
Ken: But you should also ask him about, Eric is famous for a theorem about this.
Steve: All right.
Ken: It’s a mathematical result that’s actually quite beautiful. You should describe it.
Eric: Sure. Sometimes it’s called Brewer’s Theorem, but I didn’t want to call it that. So, I called it the CAP Theorem. It says that there’s three properties you’d like to have in these giant systems, and you only get to have two of them. The three properties are that it’s consistent, which means, for example, in Google Docs, if you have two people looking at the same doc, they should see the same thing.
Another is that it should be highly available. Even in the presence of failures, you should still be able to make updates to your doc. And the third one is that, and this is a little more complicated, is that you’d like it to be tolerant of partitions. Which means if you are to cut some fiber lines, and the two documents are on different sides of that cut, you’d like the system to keep working.
Steve: And why can you only have two of three?
Eric: Well, that’s why it’s a theorem. It turns out the short answer is if you look at two sides of the cut, if you want to make an update on both sides, you have two choices. You can update one side and then the other side will be inconsistent. So, you’ve lost the consistency. Or, you can not do the update until later, in which case you’ve lost availability, because now I can’t do updates.
You can both read it, by the way. Reading documents on both sides is fine. But updating documents on both sides won’t work. And it turns out for many years people tried to design systems that had all these properties, and they were not able to. And that’s because it’s not possible.
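The two-of-three trade-off Eric describes can be sketched in a toy model. The class names and structure here are illustrative, not any real system's design; during a simulated partition, a write forces a choice between refusing the update (consistent but unavailable) and accepting it on one side (available but inconsistent):

```python
# Toy sketch of the CAP trade-off: two replicas of one document,
# with a flag simulating a network partition between them.

class Replica:
    def __init__(self, name: str):
        self.name = name
        self.doc = "hello"

class TwoReplicaSystem:
    def __init__(self, prefer_consistency: bool):
        self.a, self.b = Replica("A"), Replica("B")
        self.partitioned = False
        self.prefer_consistency = prefer_consistency

    def write(self, replica: Replica, text: str) -> bool:
        if not self.partitioned:
            # Healthy network: replicate synchronously to both sides.
            self.a.doc = self.b.doc = text
            return True
        if self.prefer_consistency:
            # CP choice: refuse the write -> we lose availability.
            return False
        # AP choice: accept the write locally -> replicas diverge.
        replica.doc = text
        return True

cp = TwoReplicaSystem(prefer_consistency=True)
cp.partitioned = True
assert cp.write(cp.a, "edit") is False   # unavailable, but consistent
assert cp.a.doc == cp.b.doc

ap = TwoReplicaSystem(prefer_consistency=False)
ap.partitioned = True
assert ap.write(ap.a, "edit") is True    # available, but inconsistent
assert ap.a.doc != ap.b.doc
```

Reads on either side still work in both modes; the theorem only bites when updates must be visible across the cut.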
Steve: Wow. When did you come up with this theorem?
Eric: While teaching in 1998.
Eric: I was teaching this topic. I’m like, “You know, I know this is not possible. We should be able to just clarify why.” And I had a simpler version at first, but eventually I realized this nicer version: you can have any two of the three properties you want, but you can’t have all three.
Steve: Nice. Well, appreciate that. It’s good to get this history.
Ken: Isn’t that cool? It’s very cool.
Eric: It applies to people. It applies to all systems.
Ken: People are inconsistent. Also, I have to tell you, I want to thank you on this program. I’ve recently been using Google Docs to write this big document with about 15 authors all editing at the same time. And I was editing it today in a Lyft ride, desperately trying to catch up on it. I was offline during the drive, and then when I got there, it synced up with all the 15 people and all the changes and everything else. I was like, “How is this going to work?” But sure enough, it did.
Ken: That is not trivial to get right.
Steve: So, a big doc. Other people were editing at the same time.
Ken: Yeah. And I was not editing. And then suddenly, a whole burst of my edits come in, which are inconsistent with what they were doing. So, it has to basically figure out all that and make it reasonable.
Eric: It’ll almost always do the right thing.
Ken: Yes. That was it. It wasn’t 100%, but I wasn’t expecting even that.
Eric: Roughly what it’s doing, this is not that hard to understand, is it’s making a little list of all your edits, like an edit list. It’s not actually sending your version of the document, it’s sending-
Ken: The changes.
Eric: Your list of individual changes. And it turns out it’s easier to merge lists of changes from different people, especially because they typically work on different parts of the document. And then only if two people edit the same part of the document at the same time, now you have to decide what to do among those edits. And the short answer is, you just do the union. Meaning that if you added a word and someone else added a word, we’ll just add them both. We might’ve added the same word or something, it doesn’t matter.
Ken: Yeah. That’s where the few snags were. Exactly that. But overall, it was amazingly well organized.
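The edit-list merging Eric describes can be sketched in a simplified form. Real collaborative editors use operational transformation or CRDTs, which also handle concurrent position shifts; this toy version just takes the union of word-level insertions against a common base, as described above:

```python
# Toy sketch of merging lists of edits rather than whole documents.
# An edit is a (position, word) insertion into a shared word list.

def merge_edits(base: list[str], edit_lists: list[list[tuple[int, str]]]) -> list[str]:
    """Apply several users' (position, word) insertions to a base word list.

    When two users insert at the same position, keep both (the union),
    even if the words are identical, as in the conversation above.
    """
    # Gather all users' edits, then apply from the highest position down
    # so earlier insertions don't shift the positions of later ones.
    all_edits = [e for edits in edit_lists for e in edits]
    result = list(base)
    for pos, word in sorted(all_edits, key=lambda e: e[0], reverse=True):
        result.insert(pos, word)
    return result

base = ["the", "robot", "grasps"]
alice = [(1, "clumsy")]   # "the clumsy robot grasps"
bob = [(3, "firmly")]     # "the robot grasps firmly"
print(merge_edits(base, [alice, bob]))
# → ['the', 'clumsy', 'robot', 'grasps', 'firmly']
```

Because most collaborators touch different parts of the document, merging change lists usually succeeds cleanly; the "snags" Ken mentions correspond to edits landing at the same spot.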
Steve: I think Google Docs obviously nailed it by making real-time collaboration the core functionality. We use it, and it is amazing. It’s pretty interesting to see how email’s getting replaced with messaging. Everyone’s using Google Docs because it’s all real-time collaborative. Every system is going in that direction for human-to-human communication.
Anyway, I want to move on a little bit. Just quick background on where we are in the state of the world with AI and machine learning. One of the things that I saw today was that Waymo, Google’s self-driving car project, rolled out in Phoenix. They were going to go live with fully autonomous vehicles, but they have left human co-pilots in there, at least temporarily, though it doesn’t seem like they’re needed.
That kind of really makes it real for me, to see this. Even though I’m in the technology space, it’s kind of amazing to me that I can click a button like calling Uber, but a completely autonomous vehicle could show up. I could get in it. It could take me somewhere. Navigate the streets and traffic and a completely chaotic world, and do it safely.
Ken: Well, unfortunately, that’s science fiction. I’m very, very skeptical about this. I think that leaving humans in the car is not accidental or just temporary. I don’t think we can solve this problem fully in our lifetime.
Eric: In our lifetime. That’s a stronger statement.
Ken: Yeah. Well, listen. I’m not as young as I used to be, but I’ve really been saying this will take many decades to come. We’ll solve it on freeways. I think we can do a lot better than human drivers there. That’s coming. I think we’re already very close.
But trying to drive, for example, the way I drove to your house tonight, is very, very complicated and perilous. Especially a truck trying to drive through these kinds of neighborhoods. It is amazingly complex. The corner cases crop up fairly commonly. So, someone is double-parked, and you have to cross into the other side of the street. That usually requires a very complex negotiation and navigation.
So, we may disagree on this, but I have a lot of friends in the business and even at Google, who feel the same way. That we’re not close. It’s very premature. This pilot study in Phoenix, I think is going to run into problems.
Steve: Yeah. I know that, I think an Uber self-driving car killed somebody.
Steve: Right? Was that completely autonomous, or did that have a human driver in it?
Ken: It had a human, but the human wasn’t paying attention.
Steve: Right. And it actually killed somebody.
Steve: So Google had said, they communicated even, I think, within the last month, when this goes live, it’s going to have no humans in the car. And then when they went live today, they said we’re going to leave humans in the car.
Ken: The other thing that I believe they’re doing, I don’t know if this is official yet, but their approach is there’s not a human in the car, but there’s a human remotely monitoring the situation.
Steve: So like a drone, a drone flyer.
Ken: Like a drone. Exactly.
Eric: There’s definitely some of that going on. I don’t know the details. But, I think they’re also limiting the routes they’re taking.
Ken: Right. I would also say, if you do this in an environment like a retirement center, where you have a very well-controlled environment with very clearly labeled intersections, and you have beacons and things like that, you may be able to do it. There’s a city in China being built right now where the entire city is being designed from the beginning to have no human drivers. Think of it like a tram system. A lot of subways run this way. You don’t really need a human’s input.
Steve: It’s all public transportation, except cars.
Ken: Right. It’s the cars. And the pedestrians, it gets really tricky.
Steve: Yeah. Okay.
Eric: It’s also bikes and animals.
Ken: Right, right.
Eric: It’s a free for all.
Ken: Yes. And that’s the thing. In suburban or urban settings, so much of it is that a driver, like an Uber or Lyft driver, has to do basically illegal maneuvers constantly. Pulling over at intersections, dodging into them. They have to find the client. That’s always a very complex maneuver. It all requires human skills that are very complex.
Steve: So you’re living up to your skeptic’s title.
Ken: Yeah. I’m sorry. I’m sorry. I don’t mean to disappoint people. But, I feel like Cassandra sometimes on this thing. We wanted our jet packs. We wanted teleportation and time travel. I mean, science fiction is talking about those, too.
Steve: Totally. Yeah, actually in preparation for this, I was thinking about this. I was like, all right, I was googling failed future predictions. Everyone’s like, flying cars and jet packs, teleportation. Right? But then there’s stuff like it seems like we’re starting to do fancy stuff with genetic engineering. We’ve got these smart phones that are doing real time translation. We’re able to get access to the whole world’s information at our fingertips. And communicate with anybody. We’re sitting here in the garage in Mill Valley, and I regularly talk to people in Europe or whatever through video calls that are amazing. That’s happening. There’s a lot of amazing things that are happening.
Ken: Absolutely. I’m not trying to say that these kind of miracles don’t happen. I mean, technological progress has been amazing. The history of everything from air transportation to automobiles, to X-rays, nuclear power. There’s a huge number of miraculous breakthroughs. I perfectly believe a number of them will happen.
But this is one category where I think we need to be really careful. Sometimes people call it Moore’s Curse: because Moore’s Law has been so successful in driving this growth in the capacity of computation and memory, people believe that anything is possible. And this myth of exponential technologies is really a distortion, because in so many other technologies, it doesn’t happen like that.
Ken: Air travel is a perfect example. Air travel hasn’t really improved much since the 1960s. Actually, some people may argue it’s gotten worse.
Steve: Sure. Yeah. My sense is that the world is amazing me at a faster rate these days.
Ken: I know.
Steve: I’m seeing headlines-
Steve: And you’re kind of like, “This stuff’s happening.” Right?
Ken: Right. That’s why I want to say I understand that, and I agree with you. I don’t want to rain on the parade, because I think that, look, I’m a technologist. We both are. I think we want to be optimistic about the future of various technologies. So, we are very excited about everything that’s happening and will happen.
It’s just this idea of AI reaching human-equivalent intelligence, artificial general intelligence, that I really feel it’s important to be more realistic about. On that one, we haven’t made much progress.
Eric: That one I definitely agree with.
Ken: Yeah. You’ll hear this from many academics. By the way, like understanding the human brain. That’s another one. If you talk to a neuroscientist, there’s a question of if understanding the human brain is like walking a mile, how far have we gotten so far? The answer is three inches. Okay?
Ken: Yeah. I would say it’s very similar for general AI.
Steve: Yeah. I think, Eric, we were talking about this, right? Artificial general versus artificial specific intelligence. How would you define these things in machine learning? How do you-
Eric: I think of it this way: on a narrow, well-defined task, machine learning in general should be able to do excellently, and in general, better than humans. But the problem is, it’s a narrow, well-defined task. You could even have 100 of those tasks, and collectively say that’s a very smart thing. But it really just does 100 things well, and that’s still not general intelligence. It’s just a collection of narrow intelligences. We’ll see many, many variations of that. That’s really-
Steve: Can you give me an example of this?
Eric: Well, one recent one is machine learning is better at looking at cataract images to figure out if you have a disease in your eye or not.
Steve: Right. So, radiology and stuff like that.
Eric: Yes. Those are basically image analyses that humans do pretty well, but the best humans do much better than average well-trained humans. And machine learning can do as well as the best humans. But it’s a narrow task. You have to rebuild the models for every different narrow task.
Ken: But I would also say, what’s important to keep in mind there, is that it doesn’t mean that those doctors are going to be unemployed.
Eric: Absolutely not.
Ken: Right. With radiologists, there’s this fallacy: we can analyze an X-ray better than humans. We can run a bunch of trials, and show a bunch of human doctors the same X-rays to see if they spot the cancers as well. And yes, you can train a system, but that doesn’t mean in any way that you’re going to put these doctors and radiologists out of work. What it means is, you’re going to have better tools to make them more effective and efficient at what they do.
Steve: Yeah. Does that mean you’re going to need less of them though?
Ken: I don’t think so. I mean, the three examples I always come back to are: ATM machines. Everyone thought those were going to wipe out bank tellers. There are now huge numbers of bank tellers out there. The second one was spreadsheets. People thought those were going to eliminate accountants. Not at all. And the third one was barcodes. People thought we’d just have no more cashiers.
Eric: Right. That might still happen.
Steve: That might happen. Amazon’s got their store now, right?
Ken: Yeah, right. The thing is, it’s very interesting. Generally, when you’re given the choice, do you go to the cashier or do you check yourself out, people go to the cashier. It’s just like, “Do I really have to do this myself? It’s very annoying.” So, I think there’s this phenomenon that many times the innovations are great because they actually make our jobs, us as workers, better.
Ken: And that doesn’t eliminate jobs.
Eric: I have a good related story on this. How I got into cataract images is, I’m doing work with a very large hospital in Southern India. They do thousands and thousands of cataract surgeries per year. In fact, their problem was simply they don’t have enough doctors. In fact, they can’t even train enough doctors. Because if you train a doctor in these rural hospitals, they’ll eventually move to an urban area where they can make more money or have more options for their kids and things like that.
So in fact, when we were able to give them tools that helped, like video conferencing and image analysis, it really meant that they spent less time doing things they didn’t want to do, like being in the van going to rural areas, and more time being doctors. What it eventually led to is a system where the best cataract surgeons only did cataract surgery. There was one guy that could do about seven an hour. Because all of the other stuff, including the pre and post operation stuff was done by somebody else. The paperwork was done by somebody else. Nurses did all of the kind of things you’d think of as bedside manner.
That’s kind of a Model T approach to medicine. But when you have such a great need in a place like rural India, it ended up being critically important that we take things off their plate.
Steve: Right. I think one of the things that is definitely happening is, prior to this, I was looking at the poverty numbers. In the 1820s, 90% of the world population lived in what we consider poverty. And by 2015, that’s 10%. We’re also bringing billions of people online onto the internet, where they can get educated. Their educational ramp is going to be much steeper than it would’ve been 20 years ago. They can get access to the whole world’s information. And potentially, become students at Berkeley or whatever. The talent can rise to the top, and contribute to make things get better faster. That’s the good scenario. Right?
Ken: It is. And I think historically, if you look back over the last hundred years or further, there used to be a huge fraction of the population that worked on farms. That is now very small, 2%-
Eric: In all countries, including the United States, in the 1700s they all looked the same. They were all 99%-
Ken: Agrarian. Yeah.
Ken: Right. What happened is, historically time and time again, that new jobs emerge. And people get more efficient. This is this productivity idea. Then people say this stuff is different because it’s happening much faster. But, it happened relatively fast for the time when air travel was introduced. That radically changed. Automobiles, same thing.
So, there are disruptions. And temporarily, a lot of people do lose their jobs. Blacksmiths and others, there’s no doubt. We’ll see analogous things in the future. But the idea of this widespread unemployment I think is a myth.
Steve: Yeah. I know you wrote about that for The Economist. I mean, it definitely feels like it’s accelerating. Technological change is accelerating, and we’re all much more productive. We can do many more things with fewer people. But you think that we’ll innovate even more quickly and soak up all this excess labor?
Ken: I do. I think we’re going to have … Look at the predictions about truck drivers, and taxi drivers, bus drivers. I’ve heard these predictions of millions of angry white men unemployed. What are we going to do about this? Well, the fact is, truck driving is actually not a great job, but it’s actually a pretty well-paying job. Some people really like it, because they’re free. There’s freedom. You’re not sitting in a factory.
It turns out there’s a shortage of truck drivers. Massive shortage. They cannot fill those jobs. So, I think that’s going to probably persist. And it’s not that we’re going to completely eliminate those jobs. Their job will get better. When they get on the freeway, they can switch to autopilot, maybe take a nap. That’s going to be great. They’ll be fresher and better drivers when they get off the freeway.
Steve: Right. Or, they can do other things. They can be computer programmers driving trucks.
Ken: Right. Right. Yeah. They’ll do other things, like spend time thinking about their next route, and being more optimal, more productive. In so many fields I think this is true. I mean, look at education, how much the internet has radically changed the way we teach. My kids, our kids, they’re all using this resource amazingly. I think it’s making them smarter. I mean, would you guys agree? I think kids are smarter.
Eric: I would go a step farther and say that I think for me one of the biggest impacts of the internet is the ability to teach yourself.
Eric: The number of things I learn, the number of things I’m willing to try to learn that were kind of unapproachable before, is really unlimited.
Ken: See, that’s interesting. That’s actually an area I’ve been thinking about lately. One of the things is, I feel like we’re in this very fragmented stage right now, where there are all these resources out there, but they’re all spread out, and it’s very hard to find them. I mean, you can search.
But it would be nice, what I would love to do is have something that would know my skills, know my time availability, and it would say, “Listen. I know you have a layover on this flight. I’ve found this great mini course in three hours that’s going to teach you this skill that I think you’ll like.” I would love tools like that.
Because you’re absolutely right, I think we are, and all of us can benefit from essentially continuous education. Learning new skills. And especially for certain people whose skills will become less in demand. I’m still optimistic that they can be retrained and do new things.
Steve: Sure. Yeah.
Eric: I just find it empowering. I don’t know. I recently made brisket, which is actually hard to make really, really well. I’ve made it like three times now, and now I think I make it really, really well.
Ken: That’s one of my skills, actually. I can teach you that. My grandmother taught me.
Eric: But it wouldn’t have occurred to me to try to learn how to make a great brisket 20 years ago.
Ken: And you found it online?
Eric: I did. I got the start there. I actually taught myself cooking in the past. I’ve taught myself a huge number of things about building our house.
Eric: Including everything from codes to details about materials. It’s all out there. It’s just a matter of feeling like it’s approachable, or whether you’re confident enough to try and take on some of these things.
Steve: Yeah. I’d be curious to see, as we sit here, you two are definitely in the top 0.1%, 0.1 of 1%, whatever, in terms of intelligence and probably curiosity. Do average Americans embrace this and say, “Hey, it’s a great opportunity for me to learn the guitar,” or whatever it is, pursue my passions? Or do they see it as more of a threat: “Wow, it’s going to take my job.”
Ken: Right. I’m glad you brought that up, Steven. I think that’s actually a big issue, which is that there’s what’s sometimes called automation anxiety out there. People’s morale is being weakened by a lot of this discussion. This is why I have a problem with the word singularity. I just feel it’s very counterproductive. It appeals to certain people who shall not be named. But there are certain people who just love to envision this kind of science fiction hypothetical point when we’re going to supersede humans, and humans will be left in the dust.
But it sounds interesting if you’re in the tech world. But for many people, if you’re in the Rust Belt, you hear that and you start believing it. It’s demoralizing. I talk to drivers on Lyft, for example. I always ask, “What do you think about this self-driving car?” They’re always like, “It’s coming. I’ve got probably a year left, and then I’m out of work. I’ve got to work for something else.” But it’s not true.
I think that’s where this idea of multiplicity I’ve been advocating, which is really the future is going to be much more about humans and machines working together. And in a cooperative, positive way. And, actually it’s going to make people better at their jobs, more enjoyable. They’ll enjoy their jobs more.
And also, I hope that it’ll lead to more of this mobility, where people will feel comfortable learning in the same way that Eric just described: okay, I want to learn how to do drywall or plumbing or some skill like that. By the way, if you’re a plumber, you’ve got job security for life.
Eric: I do think it’s a confidence thing, though. I feel like our kids will grow up in a generation that expects that they can do whatever they want at any time. That’s super empowering.
Steve: I do think it’s empowering. But as a parent of two teenagers and an elementary school kid, I also see how much these devices encroach on their lives and influence their thinking. They almost drive us. This information is always available. There’s always more stimulus out there. And software, social media, games, and so forth, is designed to appeal to our habits and brains, to give us dopamine hits, and to create habits and loops. There’s a huge amount of money being spent to keep people doing more and more on their computers, consuming more and more, and being in front of their screens all the time. Which I think has, or could have, bad long term effects.
Eric: It’s certainly a risk for adults. And I think a big risk for children. We actually intentionally have our kids at a relatively low tech school. It’s not that we’re anti-tech, quite the opposite. It’s that I want to control the introduction of technology over time.
The way I think about it is, by the time they’re 18 they have to have full command and they’re on their own. But you have about 10 years before that where you can incrementally, in a guided way, say, “Let’s start with email.” Email is actually quite a bit safer than social media on many dimensions. There’s a record. You can see what happened. It’s not urgent. The delay is actually more like regular human interaction. So, that’s a good place to start. Oh, and the audience is relatively limited, too. At least on basic new accounts, you don’t get a lot of interaction with people you’ve never heard of. It’s not as public.
So, that’s an example. Another example actually is, I think it was worthwhile having my older son have a dumb phone for a while. Let’s talk about the phone and the phone habits, and simple texting. There’s no internet. There’s no games. It’s just can you manage a phone, and can you show that we can trust you with a phone at all. Then we can talk about a more advanced phone.
Steve: I just think that, that’s what we did with my oldest son. He had a text only phone for a long time. I think he has a healthier relationship with technology. We have iPhones in our house. And then the middle son got a phone earlier. I think it’s had a pretty profound impact on him. It’s difficult to unwind. You can’t unwind it. I think most parents don’t appreciate just how addictive these devices are.
Ken: Yeah. There’s a term I’ve heard recently: digital obesity. I do think this is a real danger. Tiffany, my wife, is actually writing a book right now. The strategy is that we take one day a week off from screens. All of us. It’s based on Shabbat. We’re not fanatics, so we’re not orthodox, in the sense that we still drive and turn lights on, things like that. But we don’t use screens. This has been a way for our daughters and us to have this ability to separate from it. It actually turns out to be really interesting. They are capable of putting screens down and not feeling that compulsion. I agree with you. Getting at that early is really important.
Steve: One of the things that I’ve definitely learned recently is that 20% of the population deals with depression at some point. And I think that these devices amplify those behaviors. If you have a certain type of mental health profile, you can manage things healthily. It’s probably just like alcohol. I can drink a beer; I’m not an alcoholic. But some people can’t drink just one beer, because it’s going to be a problem.
Ken: No one likes to go on and see that there’s been this fabulous party and you weren’t invited. That sucks no matter how well adjusted you are. That is what so much of social media is about. Here, look at this place I’m at, or this party I’ve been invited to. People just feel miserable. I do worry about this. This is a whole other issue. We can talk about this for three hours.
I do think that it would be interesting to see how this evolves. Because right now, the driving forces are really around commerce, and the idea that whatever behavior creates more clicks and more eyeballs is going to be rewarded. So, we have to really think about this. Is there any alternative?
Eric: I can say that this topic is not my area, but it’s taken very seriously by Google at this point. The name of the program is Digital Wellbeing. The idea is that there are ways your device can help you at least monitor your usage, and help you set limits on you or your children in a way that you believe in. But a lot of it actually is even if it’s just feedback, like, “You’ve been on for four hours. Is this what you want?”
Ken: Right. Right.
Eric: I don’t know where that’s going to go. It’s a very nascent space. But, it’s needed.
Steve: Totally. I think as adults, you can say, “Oh, yeah. I’ve been on for four hours. Maybe that’s not a good idea.” But I think with developing kids, developing brains, yeah, big deal. “I’ll play more video games.” They don’t have the same judgment and executive function.
Eric: Yeah, that’s literally true. The early teenage years are among the most addictive. And it has to do with the fact that there’s plenty of the physiology of addiction, but you don’t have the developed frontal cortex yet that is the higher reasoning that lets you say, “Hm. What is this behavior doing to me? Can I assess what I’m doing and how it’s going?” That actually doesn’t come in until your 20s. Even at 18 you don’t have great executive judgment on top of your lizard brain.
Eric: That’s actually basically the same problem AI has. AI is a giant lizard brain. It’s lizard brain for image recognition. Lizard brain for song recognition. It’s missing the executive function.
Steve: Right. Yeah. Hopefully it emerges in a way that is good for humanity.
Eric: I actually think it’s worse than that. In the short term, the executive function is the programmers and product managers. In some extent the users. Humans are going to have to be the limiting factor on how these things get used in the short term, because there is no general AI that’s going to figure this out.
Ken: And I think that’s actually a great point. Understanding what the skills are: image recognition, speech recognition, translation. Those are very subtle problems, but they can be addressed with deep learning, the newest wave of technology around AI. And the results are astounding. The same goes for game playing; with AlphaGo, the techniques that are being developed there are really interesting. But again, they’re solving these very specific problems.
In the case of image recognition, we also know there are a number of flaws: we can systematically create counterexamples that cause these systems to fail. So, they’re vulnerable to attacks or exploits from hackers.
Steve: That’s good. Humans can still outsmart machines.
Ken: That’s right. That’s right. Humans, I think humans have many good years left.
Steve: Right. So I guess, Ken, you haven’t gone down and hung out at the Singularity University and done the-
Ken: Oh, yes. I should say this, I always forget. I’m not against the Singularity University. I actually like that program. I think it’s really interesting, and I don’t want to sound like I’m bashing that. Because, people I know there, everyone of them has said, “We’re not a university. And we’re not about singularity.”
Steve: Maybe they should change the name.
Ken: That’s what I say. Why not just change it? Because if I see that on somebody’s resume, I always think, “Oh.” But, it’s an interesting program, because it attracts really good people. It’s now, everybody gets funded completely. Like any great program, you get really great people together. And so, a lot of interesting results have come out of it. I’m not opposing that.
Steve: So before I move on, I just want to touch on this whole singularity idea and just define it. I was looking at some data. By 2023, the average $1,000 computer is supposed to have enough processing power, the equivalent processing power of a human brain. Does that sound about right? Or does that sound crazy? That’s only five years from now.
Eric: It sounds early to me. Plus, the rate of improvement in processing has gone down significantly. For a long time, it was 2X every 18 months. That lasted almost 50 years. It’s been pretty flat the last few years.
Steve: Because we’re hitting the limits of physics. Right?
Eric: We are hitting limits that are at least harder to get around. And there are also economic limits, in the sense that what you have to invest to build a new factory for a new generation of equipment is a very large number. And it’s growing exponentially with each generation. It might be that it’s too expensive to make significantly faster chips now.
I think many changes are in progress. I don’t know if we’ll get back on the nice curve we were on. I think we’re going to be stuck with a slower curve for a while. But we do have another decade, probably, of tricks we can play. It is going to slow down. In fact, this is a half-baked idea, but I’ll put it out there. If you look at the plot of improving computing over time, it’s been a straight line on an exponential graph for a long time. And now it’s curving flat. So you would say it’s approaching an asymptote.
I’ve been wondering, maybe that asymptote is the singularity. Meaning maybe there are some limits that we’re not going to get past by any mechanism, human or otherwise. But I don’t know where that limit is, and I certainly don’t know if it’s way above or way below what a human brain can do in this kind of computing. I would expect it might still be above, so you could still have a singularity in the sense that was originally intended. But it is interesting to me that our ability to improve is slowing down right as we’re starting to ask this question.
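Eric’s asymptote idea is essentially the difference between an exponential curve and a logistic (S-shaped) one. This toy sketch, with an entirely made-up ceiling and doubling time, shows how the two agree early on and then diverge as the logistic curve flattens:

```python
# Toy comparison: unbounded exponential growth vs. logistic growth that
# flattens toward a ceiling (the "asymptote"). Units and the ceiling
# value are arbitrary, chosen only for illustration.
CEILING = 1000.0   # hypothetical hard limit on compute


def exponential(t, doubling=1.5):
    """Moore's-law-style curve: 2x every `doubling` years."""
    return 2 ** (t / doubling)


def logistic(t, doubling=1.5):
    """Same early growth, but saturating at CEILING."""
    x = exponential(t, doubling)
    return CEILING * x / (CEILING + x)


# Early on the two curves agree; much later the logistic one flattens.
print(round(exponential(6), 1), round(logistic(6), 1))    # 16.0 15.7
print(round(exponential(30), 1), round(logistic(30), 1))  # 1048576.0 999.0
```

Whether real computing hits such a ceiling, and where it sits relative to a human brain, is exactly the open question Eric raises.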
Ken: This is the other issue that if you look at, for example, Kurzweil, who’s very compelling when you listen to him. He’s a magnificent speaker, and he really sweeps you along. But what’s really interesting if you look carefully is, he’s very selective about the results he shares. And the way he tells his story is very, very, I would call it biased. It’s not evenhanded in the sense that he’s really looking at all the different possibilities.
And just simply being able to match some measure of the processing speed of the human mind. Which, by the way, is very subject to debate, because we’re analog systems. There’s a huge range of chemical processes that go on, in addition to the electrical processes in the brain. We don’t understand this at all. So trying to estimate the number of computations happening in the brain in some way is just speculation.
So we can probably still increase the speed by several orders of magnitude. But what’s interesting is, if you look at the device in your pocket, it has a huge amount of computation onboard. It also has the network, which means it can access Eric’s data centers. So in your pocket, you essentially have access to a staggering amount of computation, even though it’s not all living in your pocket.
Ken: All these numbers are very easy to manipulate. And that’s why this idea, people have been predicting, by the way, that computers will do things, robots will do things, take over human capabilities for a long, long time. It goes back before the word robot was invented.
Steve: Yeah. Well, I’ll tell you that personally my expectations are rising. I have this Google Home thing on my desk here that Google sent me. And I find myself getting frustrated. Now that I actually have this thing, I’m thinking to myself, “I should be able to talk to it in a conversational way.” And I can’t do that. But it does strange things. I’ll be on a podcast or a phone call, and Siri will think I’m talking to it and start talking to me. I can’t get it to shut up. But if it does become conversational, I think that will be completely game changing.
Ken: Don’t hold your breath. That’s the Turing test. We’re not close to that. We can do it for a little bit of time. You can fool some of the people. In other words, people will look at a stuffed animal and think it’s real, it’s alive. So, people will project a lot onto devices. I do think that Google Home and Alexa in particular are really capable. They’re impressive. I mean, I remember this show in the ’60s, was it The Munsters?
Ken: Or the Addams Family, where they had this thing called The Thing that was on the table. It would just light Morticia’s cigarette. She would say, “Thank you, Thing.” It would just kind of go back into its box. Remember that? I loved that. I think of Alexa kind of like that. It’s just this thing that you can ask simple things. How do you spell embarrassment? You get the answer. It’s beautiful. But that’s so far from being able to hold a conversation like we’re having right now.
Steve: Right. The idea that, there definitely seems to be more awareness around these devices. I find myself, I’ll be looking at one thing on say Apple News, and then I’ll go to Google to search for something, and Google will have a sense for what the next word that I’m going to be typing in there is. I feel like it’s aware of what is happening out here. So just on the singularity side, processing power is plateauing, or it’s not accelerating nearly as fast. I guess neither of you think, “Hey, we need to be worried about some super AI taking over and destroying humanity,” like Elon Musk is.
Eric: It’s certainly not imminent. I think by the time it becomes imminent we’ll have some semblance of what the real risks are.
Ken: Yeah. I think it’s very interesting, because people like Elon and Stephen Hawking and others are great minds, and they’re very much thinking several steps ahead. Maybe hundreds of years ahead. So, that’s certainly interesting to think about. But it’s the way it gets interpreted: this existential risk, and it’s coming, and it’s near. That’s where the real problem comes in. There’s no clear path. We can speculate about it, but there’s no clear path for how we’re going to get there.
Things like 5G are very clear. We know we’re going to get there. There’s no science fiction there. Maybe you can say more, but we have to install the transmitters, we have to upgrade our equipment. And that’s going to happen. We know how to get there.
Eric: That’s relatively straightforward.
Ken: Yeah. That’s fine. I think there’s a lot of technologies that I would say we probably both agree that they’re very exciting, that are coming.
Steve: Yeah. I definitely, that’s what I wanted to dive in next. Yeah, 5G, AR/VR. What you see happening with robots, solar energy, biology genetics, 3D printing, stuff like that. Any particular things that jump out at you as like, hey, this is going to be the next big game changing thing that will affect most people?
Ken: What do you think?
Eric: There’s a lot of them. I still feel there’s a whole bunch of stuff still coming around collaboration and empowerment. I don’t have many specifics. But for me, the first time I had this realization of some kind of collaborative power actually goes all the way back to Evite, probably the late ’90s. People were talking about what became the cloud. We have all this extra computing power. We can do a search engine. We can do things that you couldn’t do before, things that don’t fit on one computer, in particular. So that’s one kind of thing you get by moving into the cloud.
But actually a more important one in the long term has actually proven to be that in the cloud you have shared state, meaning the state of the system is actually now in one place versus on everybody’s desktop individually. Now it’s very easy to say, “Oh, I want to have a group of friends, and I want to know if they can make this same date.” That’s a collaborative application. That shared state is actually super enabling.
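Eric’s shared-state point is easy to sketch: once everyone’s availability lives in one place, finding a date the whole group can make is just a set intersection. A minimal illustration (the names and dates here are invented):

```python
# Toy illustration of an Evite-style shared-state scheduler. All
# availability lives in one central structure, so a collaborative
# question ("what date works for everyone?") becomes a set intersection.
availability = {
    "ana":  {"2024-06-01", "2024-06-08", "2024-06-15"},
    "ben":  {"2024-06-08", "2024-06-15"},
    "cleo": {"2024-06-01", "2024-06-08"},
}


def common_dates(people):
    """Dates every named person can make, per the shared state."""
    sets = [availability[p] for p in people]
    return set.intersection(*sets) if sets else set()


print(sorted(common_dates(["ana", "ben", "cleo"])))  # ['2024-06-08']
```

On everyone’s individual desktop this question would require passing messages around; with the state centralized, it is one query.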
It’s the same thing that enables Google Docs you talked about earlier. But also, it’s the thing that enables Uber and Lyft. It’s not actually anything except the fact that there’s a centralized way to track all the drivers and track all the passengers. That’s the value, actually.
Ken: And it’s been incredible. I took that today, and it’s just great, because I can get work done while I’m getting back and forth to Berkeley. I love it. Here’s an example. About 10 years ago, you remember there was this huge fear about spam.
Ken: And these spambots were everywhere. Our inboxes were getting a deluge. It was a real fear. What are we going to do? We can’t stay ahead of it. It’s actually interesting, because you forget that it just kind of got fixed. It’s just not there anymore.
Ken: And what was it? It was really Google. I give you a lot of credit for figuring out that you could distribute that problem. As soon as someone flags something as spam, it percolates, so it gets recognized by all the systems. Brilliant. That was able to solve that problem.
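The collective flagging Ken describes can be sketched in a few lines: user flags percolate into shared state, and every inbox consults that same state. The threshold of three flags and the message fingerprints below are arbitrary choices for illustration, not how any real filter is tuned:

```python
# Toy sketch of collective spam filtering. Once enough independent
# users flag a message fingerprint, every mailbox filters it.
FLAG_THRESHOLD = 3   # arbitrary, for illustration only
flags = {}           # shared state: fingerprint -> set of reporting users


def report_spam(fingerprint, user):
    """One user's flag 'percolates' into the shared state."""
    flags.setdefault(fingerprint, set()).add(user)


def is_spam(fingerprint):
    """Every user's inbox consults the same shared flag counts."""
    return len(flags.get(fingerprint, set())) >= FLAG_THRESHOLD


for user in ["ana", "ben", "cleo"]:
    report_spam("cheap-pills-v1", user)

print(is_spam("cheap-pills-v1"))     # True
print(is_spam("family-newsletter"))  # False
```

Tracking a set of users rather than a raw count means one user flagging the same message repeatedly counts only once, which is the kind of detail that makes crowd signals harder to game.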
I think something very analogous can happen with what’s now being called fake news, and the way social media is being manipulated to distort and send people messages that are specifically designed to inflame them. I think we can address that. I don’t know if it’s in the interest of the social media companies yet to do it, but I think that’s starting to change. They’re going to be able to figure it out. I’m actually pretty confident about that one.
Eric: It’s also a kind of shared state application.
Ken: Definitely. Exactly.
Eric: You need to know what is your bubble. That’s actually visible in the cloud in some sense, in a way that should be able to say, “Ah, this group is getting vastly different information than this other group.” That is something that is at least detectable.
Ken: Collectively, yeah. For example, I don’t think that we could just have an AI system that would be able to look at a message and say that was fictitious or not. But collectively, with humans in the loop, and this is multiplicity, that I think we’ll be able to solve.
Steve: Right. I think the point about the spam filter for fake news is a great idea. I totally agree that social media, a lot of these firms are not interested in fixing it because it entices and engages half their audience, or a big chunk of their audience. They’re like, “Hey, I can sell more clicks, or sell more ads. Good for me.”
Ken: Right. You notice, on all these systems it’s fairly easy to set up a bot to generate lots of accounts. Twitter has known about this for a long time. They know which accounts are the bots. But of course, they want to show how many new accounts are created every month, so it hasn’t been in their interest to eliminate them until recently. I think this is where, as Eric said, we start realizing this power of the collective.
Ken: That’s really strong. We can do even more with that. Here’s another thing, in terms of safety and driving. I think we’re going to see this relatively soon: all cars will be equipped with systems that help keep you in your lane and help anticipate and avoid collisions. Just like airbags, which have greatly reduced the number of fatalities, we’re going to have other technologies like that, that are going to make driving much, much safer.
Eric: That’s a good example of specific intelligence versus general intelligence.
Eric: It’s really easy to make a good braking system. You don’t have to have a fully intelligent system to do that. You just need to solve the very narrow problem of, “I’m about to hit an obstacle, maybe I should stop.”
Ken: Right. And so, we’ll still have humans in the car. In other words, you’ll still have let’s say Lyft or Uber drivers, humans, but they’ll be safer. And their job will be less stressful.
Ken: One thing I would love to see right now, and this doesn’t exist yet, is being able to switch on autopilot when you’re in heavy traffic. That’s the most aggravating. Right? You’re stuck on this bridge, and you’re like, “Ugh.” You’re slow moving there, so that’s actually something I think we could address. But the biggest reason it can’t be done yet is that merging, which happens a lot in these kinds of situations, is a very hard problem. Because geometrically, it’s impossible. There’s nowhere to merge into this line of cars. But we do it all the time. How do we do it? We nudge, we indicate, we wave to the person. We’re like, “Can you?”
Steve: We negotiate.
Ken: We negotiate. Exactly.
Steve: Right. Yeah. Just listening to this, the whole … I’m back on this fake news thing. Some of this comes down to human, so much of this comes down to humans and do they want to change. I think many people, they have these strongly held beliefs despite prevailing data. But they’re like, “Hey, I don’t care. I don’t believe in climate change. I don’t believe just because 99% of scientists say that this is actually happening. Who cares?”
Ken: Yeah. Listen, we’re all prone to our confirmation biases. The one about AI is one example where I think there’s a lot of very intelligent people probably listening to this show who will say, they’ll totally disagree. Because I run into this a lot, really, really smart people, and they say, “You’re an idiot. You don’t know what you’re talking about. We’re going to have AI. And it’s coming very soon. You’d better wake up to it.”
I’m like, “How do you know?” And they’re like, “I know. I know.” They’re like, “How are you disagreeing with Elon Musk and all these geniuses?” I’m like, “I happen to work in this area.” Then they still don’t care. That doesn’t make any difference. It’s a perfect example. They don’t want to take in any information that goes against their assumptions.
Ken: And we’re all the same way. I mean, I’m guilty of this, too.
Steve: Right. That’ll be a big technological advance, is finding a way to work with the humans to convince them.
Ken: Oh, my god. That’s a hard problem.
Steve: Depending on how, regardless of what their perspective is. You have to feed them information a certain way. I actually think-
Eric: Yeah, there’s actually a whole field called persuasion.
Steve: Yeah. Google, Facebook, I’m sure could spoonfeed people information a certain way. They’ll understand how they think. And you’re not going to go from this to this, from this position to this position overnight, but you might be able to get them to take a series of steps to bring them around.
Ken: That’s right. Understanding that, it’s probably going to happen, as you said, in small steps. Not contradicting someone, but actually helping them see, making a very specific case study that helps them walk through why a scenario doesn’t work the way they think it will. I actually think this brings us to the issue of diversity, which I’ve been thinking a lot about in terms of intellectual diversity or cognitive diversity. There’s this idea of IQ that was created about a hundred years ago. I somewhat blame our competitor institution to the south for this, because of the Stanford-Binet IQ test. Right? But it’s not really all their fault.
But it was this idea of projecting the complex, multidimensional phenomenon of intelligence down into one dimension. They used principal component analysis, statistical methods, eigenvectors. But it was a great disservice to what intelligence really is. It was useful for classifying soldiers and students, but it doesn’t capture the multitude of nuances in any group of people; everyone has strengths and weaknesses in so many different ways. In terms of music, spatial ability, numbers, verbal reasoning, all of those are different. We don’t want to project everything down. We should go back to thinking of intelligence as a very high dimensional phenomenon.
And that way, thinking about artificial intelligence as just certain other dimensions. Computers are very good at doing certain kinds of calculations. Absolutely. Certain kinds of pattern recognition. But it doesn’t mean they’re going to do well in all these other ones. I think it’s important to recognize that. And then it comes back to this idea of what we’re talking about here, of cooperation and collaboration. That it so benefits from having people with different perspectives.
Steve: Right. I can tell you as a father of three kids that are very different in terms of how they think and how they learn, I’m hopeful. Because, I have a traditional child who’s like hey, normal. Our standard public school education has been great for him and he’s gotten a ton out of it. He’s going to go hopefully to a great college and everything else. And I have a couple other kids that think very differently. But, they have their own capabilities and skills, and they can definitely add to teams.
Steve: And having a world that embraces that and say, “You’re great at art,” or, “You’re great at creativity,” you think about the world in a different way, as good.
Ken: Yeah. Exactly. We’ve seen so many creative geniuses who did not fit into the conformist standards of what-
Eric: Einstein was slow.
Ken: And what?
Eric: Einstein was slow.
Ken: Yeah. Absolutely. Realizing that when you get groups together, you should have them be diverse. You need to learn how to negotiate, to use your word. That diversity is really important for kids. We shouldn’t be saying, “Okay, this is the good kid. This kid is bad,” or something. We should be realizing they’re thinking differently or learning differently. And then, how do we get them together and work together, because they’re going to complement.
There are these new results now with collective IQ, collective intelligence, where they do experiments: they bring people together and give them problem solving or creative tasks. A homogeneous group always underperforms compared with a heterogeneous group that has diversity. We’re not talking about just gender or age or racial diversity. Those are important and vital. But those also lead to cognitive diversity. And you can have cognitive diversity just with two people who look identical.
Steve: I did see that Google, for instance, had a program where they were actively trying to recruit different kinds of thinkers.
Steve: People that might be on the autism spectrum or something, because many of them are maybe great at math or something. But for different reasons. Do you see that?
Eric: I think it’s important to try to figure out for each individual how they can contribute. That usually happens at Google after the people have been hired, not before. Because, hiring is an imperfect process. But the principle is the same, which is this person’s clearly bright. They might have a challenge in their current situation, but I’ve had many successes moving people around, changing roles, changing what we call ladders, which is kind of job description. I feel like it’s a huge boon to Google that we’re able to give people this kind of mobility.
Steve: Nice. Any other big technological changes that you see on the horizon, or even 10 years from now, that you think will affect what the world is like? I mean, I think about this a lot. 10 years ago when the iPhone was created, I don’t think I would’ve imagined what it could do. Now in some ways, it’s kind of mundane, but in some ways it’s amazing what the thing does and how it connects us, and how different the world is because of it.
Ken: Yeah, and it’s hard to imagine that was only 10 years ago. 2007, right? I agree. That was, the smartphone, amazing. I wouldn’t want to go back and give that up. I do think there’s a few things I think, coming back to the robot grasping. We’re making some very interesting progress being able to pick things up, and to sort products in warehouses. There’s a group of companies now that are really pushing that. I think we’re going to make some progress there. I’m kind of optimistic about that one.
Here’s where I think it may impact everyone, beyond just getting orders delivered faster and less expensively. I think we will see, and I know this is a little risky, so I’m going to hedge my bet, but I would say within the next decade we will have a decluttering robot. We have Roomba now. It works fairly well just sweeping or vacuuming the rug. But a decluttering robot would be something that just goes around and picks things up off the floor. Not folding your laundry or anything. It’s not going to be like Rosie the Robot. But it would just keep your floor relatively straightened.
Steve: Pick up water glasses that are left around the house and plates and stuff like that, and bring them around.
Ken: Yeah, and clothes that are on the floor. Just put things into bins.
Eric: Bins are good.
Ken: Yeah. It’ll sort them. Such a thing would be very important for senior citizens. First, their eyesight isn’t so good. Second, even if they see something on the floor, they may not stop and pick it up, because it’s a hassle to reach down. And third, if they slip and trip on something, it could actually be fatal. If you can say, for let’s say $5,000, I can put this thing in my mother’s home and it will just keep the floors clear, I’m probably going to buy that.
Steve: It’s interesting how demographics are driving this. In Japan, they have a rapidly aging population. They’re living a long time, and they’re not having as many children. So you have a huge boom of older folks. They’re in the forefront of robotics for seniors and elderly, for companionship, for I think self-care, stuff like that.
Ken: Definitely. And this is interesting for your audience in particular. I think there’s a huge bulge … By the way, this is not speculation. This is demographics. We know the birth rate. We know the death rate. We can actually see that there’s this huge bulge of senior citizens coming. And there’s going to be a shortage of caretakers. Again, this is why I’m not worried about unemployment. There are always going to be those kinds of jobs.
But, I think people are going to want something in the house that’s going to help them. It’s going to take different forms. And again, it’s not going to do everything. Here’s another task that I predict we can address. One that nobody likes doing. Making your bed.
Ken: Here’s the thing. We started working on this actually last year. The reason is that, making a bed is not time critical. You just generally would like to come home and have the bed made. It can happen over a period of hours. Second, it doesn’t have to be done perfectly. It’s fault tolerant.
Eric: It rarely is.
Ken: It rarely is perfect. And I’m not talking about replacing that kind of bed making in a hotel. That’s really got to be done right. That’s very hard. But just making the bed, straightening up so the sheets are pulled up and things reasonably. We actually have a robot in the lab that’s doing that.
Ken: It’s slow, but it’s getting there. There’s a mathematical problem of how you find the pick point on the cover to pull, to increase what we call coverage, which is how much of the bottom sheet is covered by the top sheet. It’s a nice problem, because the way that cover has been arranged is a high dimensional manifold. If you just look at the data points, it’s not obvious where you should pick it.
So, it’s actually very ripe for deep learning. We’ve been giving it a lot of examples, and it seems to be learning some structure to find the points to pick that cover, and it’s able to actually do reasonably well already. It’s just too slow. But I think we can get there. I think that would be something people would also want to have.
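To make the setup Ken describes concrete, here is a minimal sketch of learning a pick point from examples. It substitutes a toy nearest-neighbor learner for the deep network he mentions, and the “cover state,” the labels, and the coverage objective are entirely synthetic, invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real problem: the "cover state" is a flattened
# observation vector, and the label is the (x, y) pick point that
# (we pretend) maximized coverage in simulation. Both are synthetic.
def make_example():
    state = rng.uniform(0.0, 1.0, size=8)      # 4 cover corners, (x, y) each
    pick = 0.5 * state[:2] + 0.5 * state[2:4]  # pretend "best" pick: a midpoint
    return state, pick

train = [make_example() for _ in range(200)]
X = np.stack([s for s, _ in train])
Y = np.stack([p for _, p in train])

def predict_pick(state, k=5):
    """Nearest-neighbor regression: average the pick points of the
    k training states closest to the observed cover state."""
    dists = np.linalg.norm(X - state, axis=1)
    nearest = np.argsort(dists)[:k]
    return Y[nearest].mean(axis=0)

state, true_pick = make_example()
pred = predict_pick(state)
err = float(np.linalg.norm(pred - true_pick))
```

The structure mirrors the conversation: collect many examples of arrangements and good pick points, then generalize to a new arrangement; the real system swaps the nearest-neighbor step for a learned deep model.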
Eric: It brings me back to, or into I guess, a generalization of this idea which I’ve been thinking about a lot, which is the kinds of things we should be doing are what I call the repetitive toil tasks. Making a bed is not a terrible example, but in some senses it’s a repetitive toil task. Humans are not good at those tasks for a variety of reasons.
And this actually comes up in system design for things like cloud, because it turns out if you have a system that fails rarely, you can’t actually have a human monitor. Because if it fails rarely, they will not take the monitoring seriously. Then when it actually fails, they’re not prepared to do anything. In fact, this is the nature of the Uber crash you mentioned before. You can’t actually ask a human driver to do nothing for a long time, and then expect him to do something on very short notice that’s urgent and important and participate completely. It’s not going to happen. It’s against what humans are good at.
So, you see this in many other fields. This is where checklists come from for pilots or surgeons. If you don’t have the checklist, these rare things that only happen once in a while don’t get covered, because it hasn’t happened in a long time, like the time you didn’t have the right blood type. Usually you do have the right blood type. The checklist is a human mechanism to deal with the fact that infrequent failures are hard to handle. The checklist causes you to be more systematic.
But really, if you want to be systematic on something, use a computer. They’re really good at that. If you want to do something like the classic KP thing, cleaning off potatoes. Right? Pure toil. No human needs to be doing potato cleaning except as punishment.
Ken: Yeah. What I don’t like is snapping the ends off the peas, the string beans. I would love to have a machine that would do that.
Eric: I would eat more green beans if I didn’t have to do that.
Ken: Exactly. But you know, here’s one place where our two areas come together, which is the idea of the cloud for things like robots. That’s where I do think there’s also some new innovation that’s going to happen, which is where robots will start to communicate and learn from each other. In the scenario I was describing, a decluttering robot in your home is probably going to make mistakes. But over time, if it’s sharing its mistakes centrally, and we’re able to retrain, then you can download new versions and the system will be able to get better over time.
Steve: Right. That’s one thing I see with Tesla and Waymo, pretty much everything. You have these intelligent Internet of Things. The car becomes a thing, or whatever it is. They’re learning collectively. And all that learning is being centralized. That’s been true in computer science for a long time. When we’re building stuff, especially in open source, you only have to write an algorithm one time. Someone uploads it to GitHub and makes it available, and everyone else can use it, and no one has to spend their time on that ever again.
Eric: This is absolutely fundamental to self-driving cars. I think that the robotics part of the car is super interesting and powerful. But, I would say equally important is the shared map system, the million miles of shared training and learning that you get from these systems.
The maps we use for self-driving cars are much more sophisticated than the maps that you think of on your phone. A much higher level of detail in terms of where the lanes are, where the signs are. We’ve actually started finding all the signs in the video coming from the cars. You see, oh, that’s a sign that gives us the speed limit. Now we know the speed limit for that part of the road, which is not documented anywhere. There’s no database of where all the speed signs are. So you have to learn all that, essentially, in these super advanced maps.
Steve: Right. And then it can be stored and shared and updated in real time. The world changes, but the next driver that drives by that speed sign says, “Okay, this speed limit got updated in Mill Valley.” So now all cars know that you can go 20 miles an hour instead of 25, or whatever it is.
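The update loop Steve describes can be sketched as a tiny shared-map service: one car reports a newly observed sign, and every other car’s next lookup sees the change. All class names, road names, and the versioning scheme here are invented for illustration; real fleet map systems are far more elaborate:

```python
from dataclasses import dataclass, field

@dataclass
class SharedMap:
    """Hypothetical central map shared by a fleet of cars."""
    speed_limits: dict = field(default_factory=dict)  # segment id -> mph
    version: int = 0

    def report_sign(self, segment, mph):
        """Called by any car that detects a speed-limit sign."""
        if self.speed_limits.get(segment) != mph:
            self.speed_limits[segment] = mph
            self.version += 1  # other cars can sync when the version changes

    def limit_for(self, segment, default=25):
        """Called by any car approaching a road segment."""
        return self.speed_limits.get(segment, default)

fleet_map = SharedMap()
fleet_map.report_sign("mill-valley-main-st", 20)   # car A sees the new sign
print(fleet_map.limit_for("mill-valley-main-st"))  # car B now gets 20
```

The point of the sketch is the asymmetry Eric highlights: a single observation by one car becomes shared knowledge for the whole fleet, which a self-contained system could never match.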
Ken: Right. I often say, and this is just my speculation, that part of the reason Google got involved in self-driving cars is that they saw it as a cloud issue. It was a task that could hugely benefit from having all this streaming information coming in, where you have maps, you have traffic information, you have weather information, and it’s constantly being updated.
As you approach an intersection, I don’t know if this is what they do exactly, but as you get closer to that intersection, they’re probably downloading a new policy, the latest policy for that intersection given that traffic condition, that time of day, that weather condition, et cetera. You want to be constantly sharing data over the cloud. If you just have a self-contained system, it’s really at a disadvantage.
Steve: Right. This is where I do think some of this stuff is going to emerge and be faster than we anticipated. Because we have all these humans that are now wired up all the time to these devices, educating them, speaking into them, looking stuff up, asking questions. And Google’s, I’m sure, storing every single question. I did hear something amazing, that something like 15% of the queries every day are new, and there’s some gajillion number of queries every day. It was something shocking: queries that had never been seen before.
Eric: Yeah, there’s a surprising number of new things every day. Yeah. You would think there’d be no new things today compared to yesterday, really. But there are every day.
Steve: But I do see Google, I mean I see it when I’m doing searches: it anticipates what I want to ask better and better. I also have this sense about these devices, people say it’s baloney, but you’ll be in a car ride, and you’ll maybe have a conversation that’s completely random about some very random topic. And then you’ll do a search, or not even a search, you’ll just see presented to you in some stream or another something related to that. And you’re like, “How did that get in here? Because I only talked about it.”
Eric: That may also be confirmation bias. But I think there’s another thing going on there, too. Which is, current events are given, relatively speaking, a lot of weight. If you were talking about something, it’s likely that other people were talking about it, too, and they’ve actually been doing that search recently, that day, in unusual numbers. You definitely see effects like that, where, “How did it know I was going to ask about this fire?” Actually it’s because lots of people were asking about the fire, even though it’s a new topic that wasn’t there yesterday.
Ken: Right. The other thing is that this is constantly changing. If you’d searched for yellow vests in Paris two weeks ago, it would’ve been very different than searching it today. So, that’s another example where these systems constantly need to be fed with human activity. We are still going to be needed in the loop.
Steve: Sure. Well, it’s interesting. My wife was in Berkeley recently, and she said she saw those small ice-chest-sized tracked robots that deliver things slowly cruising around. These things are making their way. Now we have some autonomous cars out and about. We’ve got these tracked delivery vehicles. We have drones seeding for us, and drones flying around in different ways. They’re starting to be present in our lives in bigger ways.
Eric: I do worry about drones, and I do worry about IoT. These are all things that I think are, fortunately, going to evolve some. But I don’t think we’re ready as a society for all the consequences. For example, if your IoT device is taken over, is that your problem, is it the manufacturer’s problem, is it nobody’s problem? What’s the back pressure on systems that are badly behaved? What causes them to not be badly behaved in the future? We don’t have a lot of mechanisms. We have liability. We have brand. We have regulation. Those are all complicated mechanisms, and they all add to the cost. So, we’ll see how this works out.
Steve: It was what, a year, 18 months ago, when we had that giant DNS attack from smart fridges or something like that.
Eric: Yeah, cameras mostly.
Eric: 100,000 cameras attacking the internet. Very preventable. But in general, to make these things safe, you would need to be able to remotely upgrade the software in a reliable fashion, like we do with Chromebooks, so that when you find security bugs you can fix them. In general, IoT devices aren’t fixable in the field, but they’re still network devices that are an attack vector. That’s not sustainable. Something there is going to change. I don’t know what it’s going to be yet.
Steve: All right. Just as we wrap up here, my thought is, you see self-driving, maybe not completely self-driving cars, but you see these more and more autonomous things, more intelligence being built into the network, more learning quickly from all these shared resources and the humans attached to them. Many jobs and things, maybe they won’t be completely automated. I think, Ken, I’m coming down on your side, that they’re not going to be completely automated, but they’re going to get a lot better a lot quicker. And they’re also going to amplify what any individual can do. So, you may need less human capital.
One thing that I’ve been thinking about is that the idea of a 40-hour work week is actually a relatively recent invention. Back in the early 1900s, people were working all the time, and you worked until you dropped. Kids were working in factories. Now we have 40-hour work weeks, and maybe it’s going to go to 35 hours. Maybe we will work less or have more control, or ideally choose the kind of work that we do.
Eric: You see variants of that now, besides the obvious one of France with its 35-hour work week. And there’s some shared funding in, I believe, Alaska, and I believe Finland is the other place.
Steve: The universal basic income idea?
Eric: I think that certainly has a role to play, if nothing else as a safety net. And as a long-term consequence of being more productive, things like that are on the table. That doesn’t mean people won’t want to work, but I do feel like our goal should be to make work more fun, actually. Let’s take the toil out of work and get to the creative part. Let everyone be creative and work on the things they want to work on. That, I think, is an achievable goal.
Ken: Absolutely. Here’s one that I would love to see automated, which is scheduling. The amount of time I spend trying to schedule things is just enormous. And it drives me crazy. I would love to have a real digital assistant.
Eric: You could just do half of the stuff.
Ken: Maybe. I’ll tell you, there’s Clara and these various things out there that are artificial artificial intelligence. You know about this?
Ken: They’re supposed to be AI, but there are actually people doing it. That’s something I would love to see done, and I do think it may be doable. You’d have to essentially learn my preferences. You’d have to have access to these networks, calendars and things. But it’d just be great if you could just say, “Hey, we want to get together,” and it figures it out.
Steve: I feel like, I’m sure Google’s working on this. It wouldn’t surprise me if that emerged in the next three to five years as a pretty bulletproof thing.
Eric: I think it will emerge. But I think one of the things that makes this tricky is to do it well, you really know, as Ken called it, the preferences of the people being scheduled. And that’s actually much harder than you think. Because many times they can’t express the preferences if asked. They don’t realize that, “Oh, wait. I never really want to have a lunch meeting on Friday because sometimes I do stuff with friends on Friday at lunch.” That’s something they would know if you asked them specifically about Friday lunches. But if you ask them in general about scheduling, it’s not going to come up.
Which means that if the system’s going to work, you’re going to infer a whole bunch of preferences over time by saying, “Whenever I schedule stuff on Friday, you never pick that one. You don’t seem to want me to schedule stuff on Friday in the city.” That requires some repeated interaction, which means people have to trust the system and use it for a while, while it learns those preferences. And we’re not that good at using systems that don’t work the first day and get better.
Eric: Which might be a good reason to use humans for the first days, if the humans can do a better job until we learn the preferences, and then it could be done more automatically.
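Eric’s point about inferring unstated preferences from repeated feedback can be sketched as simple counting over an accept/decline history: flag a slot only once it has been declined often enough. The history data, slot names, and thresholds below are all invented for illustration:

```python
from collections import Counter

# Synthetic history of (proposed slot, did the user accept?) pairs.
history = [
    ("Mon 12:00", True), ("Fri 12:00", False), ("Wed 12:00", True),
    ("Fri 12:00", False), ("Thu 12:00", True), ("Fri 12:00", False),
]

declines, offers = Counter(), Counter()
for slot, accepted in history:
    offers[slot] += 1
    if not accepted:
        declines[slot] += 1

def avoid(slot, min_offers=3, decline_rate=0.8):
    """Flag a slot the user seems to dislike, once there is enough evidence."""
    return offers[slot] >= min_offers and declines[slot] / offers[slot] >= decline_rate

print(avoid("Fri 12:00"))  # True: declined 3 of 3 Friday lunch offers
print(avoid("Mon 12:00"))  # False: only one offer, not enough evidence
```

The `min_offers` threshold captures the trust problem Eric raises: the system has no basis for the inference until the user has interacted with it for a while.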
Steve: But these devices do know where I’m traveling and what I’m doing. Many of us have a rhythm to our lives. If it sees that I’m riding my bike around this time and that time, and I’m going here or there, it might understand. And if it had access to my Google calendar and all my emails: “Oh, here’s what you might like to do, or where you might like to eat, and where you might like to meet somebody.”
Eric: Yeah, I’ve looked into that some. I mean, I think it’s a good direction. We should definitely pursue it. But if I looked at your calendar, it’d be hard for me to tell on every hour of every day where you actually intend to be. Even if you know this meeting is in San Francisco, you didn’t actually put the address or location in the meeting. It’s in your head. Maybe I can infer that, because you went last time, and last time it was in San Francisco. But what’s actually in the calendar is much more incomplete than you think it is.
Ken: But if anyone’s going to do it, it’s going to be Google. And Google’s been pretty good about doing things like this. I don’t know if you would tell us if you’re working on it. But Google Now, and things like that are really sophisticated, and they can learn by looking at lots of history for us. And look, it would’ve made it easier for us to get together, the three of us, tonight.
Steve: I know.
Ken: My emails didn’t take.
Steve: It only took us eight months of waiting.
Ken: And we all live in the same city.
Steve: Right. All right. Before we wrap up here, I want to put my hope in here. This is one thing I’ve talked about with my wife: vacation planning. We say, “Hey, listen. We have this block of time. We want to go somewhere.” It will know, and this to me seems doable, kind of what you’re like, what you might like, what other people have liked. It’s like the Netflix of vacations. And then it optimally finds how you’re going to get there, where you’re going to stay. It puts together a rough itinerary. It does all the pricing. To me, that feels like it could happen.
Eric: I think it could give you a menu of options.
Steve: Right, give you a shortened list. Like Netflix. Say it’s a 99% match. You’re probably going to like this, because you liked that. Yeah, I think it’s coming.
Eric: Having worked with travel professionals, a lot of what they do, I think, is actually help the traveler figure out what it is they actually want. You might say, “Oh, I want to do all this stuff in Tuscany. Do some painting and things like that.” But then you realize, “Actually, my kids will be with me. They’re not going to want to do that. That would actually be a terrible trip. Let’s figure out what holistically is going to work for the family.” Those are things that people self-discover as they go through the process.
Steve: And that problem exists. To loop back to what we do, we’re trying to help people build financial plans for their future. And say, “How do you optimally use the resources that are available to you? Not just your savings, your investments and what’s in your bank, but your human capital, home equity, insurance, where you want to live, how you want to live, health, all that stuff.” I can totally see it: computers can look at patterns.
I had this discussion with our dev team today. I was like, “Can we look across and learn from the thousands of users on our site, and learn more quickly? And then use that to help other people in aggregate?” But still, in delivering this, humans want to talk to other humans. And you quickly sense where someone’s at, because when you look at somebody, you’re getting so much information. Ken’s like, “I’ve got to get out of here and work on some paper that’s due tonight. This is going way too long.” Computers can’t do all that stuff yet.
Eric: One thing I think we do know is that to complete a task, it’s actually okay if the computer suggests multiple steps, as long as the steps are generally in the right direction. Like say, “Do you want to do things with your family on this part of the trip?” Even if the answer is no, or it’s not a good question, it’s at least a reasonable question. Which means the dialog will continue, and the traveler stays engaged. So, I think you can present options which in a dialog way navigate to a decent solution. We’re not good at that yet, either. But there’s absolutely better work on that coming, as well.
Steve: Right. Well, I’m looking forward to robotic decluttering. That would be awesome. As long as it’s one robot. I don’t want to think I have like five robots in the house doing different things. And then also, digitally, same thing. Digitally decluttering my life would be awesome.
Eric: Try Google Photos.
Steve: Yeah. I mean, there’s so many other things. I think we have all this ability to do so many things, and then our lives are super complicated.
Eric: The feature I love in Google Photos that maybe people don’t know about is that you can search for people by their face, like your kids. It can find all the pictures that my kids are in, including all the way back to their birth pictures, even though they don’t look the same. It can actually extrapolate over time to say, “This person is the same as this person.” Even though 10 years apart they don’t look at all the same, because it has some intermediate data, it can connect them all.
Eric: Before that, there was no good way to search for pictures of my kids.
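The “intermediate data” idea Eric describes resembles transitive linking: no single comparison matches a birth photo to a ten-years-later photo, but chains of near-matches through intermediate years connect them. Here is a hedged sketch using union-find over one-dimensional stand-in “embeddings”; real systems use learned face embeddings and far more careful matching, and all the data here is invented:

```python
class UnionFind:
    """Minimal union-find for grouping photos into identity clusters."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

# Pretend embeddings: one number per photo, drifting as the child ages.
# The last photo is a different, unrelated person.
photos = [0.0, 0.9, 1.7, 2.4, 3.1, 9.0]
uf = UnionFind(len(photos))
for i in range(len(photos)):
    for j in range(i + 1, len(photos)):
        if abs(photos[i] - photos[j]) < 1.0:  # only near-identical faces match
            uf.union(i, j)

# The birth photo (index 0) and the latest photo (index 4) differ by 3.1,
# far beyond the match threshold, yet the chain of intermediate photos
# links them into one cluster.
same_person = uf.find(0) == uf.find(4)
print(same_person)               # True, via the intermediate photos
print(uf.find(0) == uf.find(5))  # False: the unrelated face stays separate
```

The design point is that the intermediate photos do the work: delete them from the list and the chain breaks, which matches Eric’s observation that the history is what makes the ten-year connection possible.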
Steve: And it will put together these highlight reels and stuff like that, that are pretty cool.
Ken: Google Assistant. Yeah.
Steve: I’ve seen some awesome stuff. All right. I know we’ve gone long here. As we wrap up, any last things you guys want to share about good resources for people to look at to learn more about that stuff? Or, do you want to send them to your company or anything like that? I don’t know, anything you want to call out?
Eric: I’d say feel empowered. You can teach yourself whatever you want to know.
Ken: I’ll end on something I saw someone else posted recently, which was it said, “May the person who invented Auto Correct burn in hello.”
Steve: Nice. That’s funny. Thanks, Ken and Eric, for being on our show. Thanks, Davorin Robison, for being our sound engineer. Anyone listening, thanks for listening. Hopefully you found this useful. Our goal at NewRetirement is to help anyone plan and manage their retirement so they can make the most of their money and time. And if you’ve made it this far, I encourage you to check out Ken Goldberg’s and Eric Brewer’s work at Berkeley. You can look them up online.
You can also check out our site at newretirement.com. And you can join our Facebook group, or follow us on Twitter @newretirement. And finally, we are trying to build the audience for this podcast, so if you have a chance and can leave us a review on iTunes or Stitcher or anywhere, we’d appreciate it. We read them and try to improve our show based on the feedback. All right. Thank you, and have a great day.