Transcript: The Future of Interfaces

by Tim Wright

This is the transcript for The Dirt episode, "The Future of Interfaces."

Tim: Hello and welcome to episode 6 of “The Dirt.” I am your host, Tim Wright. And today, I’m here with Steve Hickey and Mark Grambeau. Steve is a faux-movie buff because he’s a huge Nicolas Cage fan. Steve, would you like to reply to that?

Steve: I refuse to apologize for the things I enjoy. And I’m just sad that you can’t appreciate such a fine actor.

Tim: You think “Con Air” is a good movie?

Steve: Prove me wrong.

Tim: Miserable. I’m also here with Mark Grambeau. Mark is actually a hand puppet for Glenn Beck.

Mark: Funny story. That’s not true. But speaking of Glenn Beck, just what America needs at this time of bipartisan strife. Glenn Beck has released his own line of blue jeans. I am not kidding, look it up and watch the ad.

Tim: Are these in Target next to the Wranglers?

Steve: They’re not going to be in Target. Let’s be real. They’re going to be in Walmart.

Mark: They’re going to be right next to the Cash For Gold section.

Tim: So, you have the Glenn Beck jeans and the Faded Glory. And then some used shoes?

Mark: That’s right. All right, so I think we’re off to a good start.

Tim: So, today, we’re going to talk about the future of interfaces. Or future of interface design. And we want to talk about some of the keys to short-term success with pushing the limits of interfaces. Some of the long-term ones. And then, we’ll get into some more specifics.

We’ll talk about Touch, which is very prevalent right now. Voice commands, some eye-tracking and some really cool Iron Man-esque stuff that Steve wants to get into, which is pretty amazing.

So, some of the short-term keys, I think, to pushing these limits with the interface, at least from the standpoint of the Web, is device access, I think. And currently, we don’t really have that. We have it to an extent right now in the browser. But it’s not quite where it needs to be.

But we can build in certain ways so that we can take advantage of some of the more advanced APIs that are coming down the line. Like the WebAPI work that Mozilla is doing to get device access. And Boot to Gecko.

Steve: There are a lot of cool things in our near future. The problem being that you sort of have that chicken-and-egg problem again: where it's not implemented, it doesn't have good support, so we can't build a lot of things with it.

But unless we build a lot of things with it, it’s not going to be adopted more widely.

Tim: With these interfaces, I think the support is twofold. There's browser support from our standpoint, because we work in the browser. But once we break out of the browser, I think there's a lot more opportunity to do true user experience, I guess. When you're not just experiencing a screen, it's a whole experience–

Mark: For lack of a better word. No, it's true. Because to me, the future of all this…and it's already starting to happen in some really interesting little ways. Up until now and over the last few years, we've interacted with a computer. But if you're going up to a thermostat, or a lock on the door, any of these things, they're analog. Your refrigerator, your toaster, all these things.

And there was talk for a long time, the holy grail of this is home automation. This is the future of interface because everything is connected. It’s a great, palpable example.

Tim: In my “Jetsons” world.

Mark: But what we're actually starting to see, thanks to some of the recent advances in smartphones, the shrinking of sensors, the increases in power efficiency and the shrinking of batteries, is that we don't need one big, massive system that controls all of these various things. And that's what's really exciting to me.

And what we're getting now is the smartphone or the Web as just a soft hub for a bunch of distributed, individual devices that each handle their one function really well.

Steve: Mark and I were lucky enough to be able to attend Luke Wroblewski’s workshop after an event [inaudible at 00:04:16] Boston earlier this year. And he had an entire section of his presentation where he talked about what basically amounts to a collection of sensors attached to our smartphones or other mobile devices.

We talked about the Fitbit, we talked about the Nike FuelBand. And essentially, the only purpose these devices have is to gather data; something else is then responsible for taking that data and turning it into something useful and measurable.

Mark: Right. So, what that does that’s really fascinating is you don’t need to put the computational load on the tiny little sensor device. The tiny little sensor device just looks and listens and feels. And whether it’s your smartphone or a Web app, that handles the computation. And that’s where we can get great learnings.

And it lets these things get smaller and smaller. Especially with things like Bluetooth 4, for example. The 4.0 spec has a very low-energy mode, so you can have a tiny little device that barely ever has to be recharged and is still communicating wirelessly and passing information.

Tim: Is that the current Bluetooth, the 4?

Mark: Yeah, that’s what you’re starting to see. It showed up in the iPhone 4s last year. And is now proliferating through a bunch of other devices. It’s not on every new thing.

Tim: I was struggling with Bluetooth on my iPhone 4s last night, actually. It’s infuriating. So, I think, right now, we’re at the stage where we have devices and we kind of search inside of them for features. And then, we build interfaces around them.

Like, a few weeks ago, we were talking about is it the gyroscope or the accelerometer in the MacBooks?

Mark: It's an accelerometer. This came up when we were talking about it. Apple started building in an accelerometer, correct me if I'm wrong, around 2005, which actually would have been iBooks and PowerBooks at that point, to detect free fall. If the laptop was falling, it would park the head of the hard drive so that it wouldn't scratch the platter.

So, a really great feature to prevent data loss, but there are ways to use it beyond that, of course.

Steve: Are those still built into some of the newer machines that have just solid–

Mark: Solid states? That's a great question, I'm not entirely sure. One great way to check–

Tim: They do have accelerometers in them. I had a solid state drive not at the last place I worked, but the one before that. And there was a demo that came out. I feel like Paul Irish maybe did a demo using the accelerometer on your laptop. You could actually take the laptop and move it around, and the little widget in the browser would move with you. I thought it was really cool.

There are features in the devices that maybe we don't have access to, but they exist. And so, right now, the devices are really leading the way.
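For reference, here is a minimal sketch of the kind of browser-based motion access that demo relied on; the standard deviceorientation event reports tilt angles on hardware and browsers that expose them, and the widget element is a hypothetical stand-in:

```javascript
// A minimal sketch (not the original demo): the standard deviceorientation
// event reports tilt angles on hardware and browsers that expose them.
// "widget" is a hypothetical element standing in for the on-screen object.
var widget = document.getElementById('widget');

window.addEventListener('deviceorientation', function (event) {
  // beta: front-to-back tilt in degrees; gamma: left-to-right tilt in degrees.
  var x = event.gamma || 0;
  var y = event.beta || 0;

  // Nudge the widget around as the machine is tilted.
  widget.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
});
```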

Mark: What I find really neat about that, it’s all these sensors and functions and hardware. Not just defining functionality, but straight-up interface, straight-up design experience. I think that’s pretty neat.

Tim: The reason I like coming at it from a more browser or Web-based angle is because it’s not Apple defining what we can do with our devices. It’s not Samsung or Google or anyone building these features into a hard device.

It’s kind of the community or maybe an independent third party. I don’t know if I’d call it an “independent third party.” But certainly community and W3C and people pushing for specifications and more access to the stuff.

Steve: It’s like the standards are being massaged by the invisible hands of the free market.

Tim: I don’t know how to react to that.

Mark: I think we just need to know that Steve is, in fact, Glenn Beck’s puppet and not me.

Tim: Coming at this from the Web, when you really look into what we’re getting into with this advanced device access, it’s not so much features of the phone itself. There are things that the phones have that we need to access. But we can use them in different ways.

Like, you can access the camera and you can figure out how you want to use the camera. So, it’s a little bit limiting towards the device.

But there’s also light sensors. There’s a new ambient light API where you could dim the interface based on how much light is coming in.
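A quick sketch of the idea Tim describes, using the devicelight event that some browsers (notably Firefox) implemented from the W3C Ambient Light draft; the 50-lux threshold is an arbitrary illustration, not a value from the spec:

```javascript
// A sketch of the ambient light idea: the devicelight event (implemented in
// some browsers, notably Firefox, from the W3C Ambient Light draft) reports
// the surrounding light level in lux.
window.addEventListener('devicelight', function (event) {
  // Below roughly 50 lux, assume a dark room and switch to a dim theme.
  // The threshold is an arbitrary illustration, not a value from the spec.
  if (event.value < 50) {
    document.body.classList.add('dim');
  } else {
    document.body.classList.remove('dim');
  }
});
```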

Steve: Or if you detect there’s a very low amount of light in the room, you could automatically switch over to the Marvin Gaye playlist on your iTunes.

Tim: Yes, absolutely. And there’s also proximity. There’s a proximity sensor API that’s coming out pretty soon, which is awesome. And I think that’s along the lines of NFC from the browser. Which is amazing and will not be on an Apple device, apparently.

Mark: The future is long, we’ll see.

Tim: I’m sure we’ll all be super excited, I was going to say ’12, but then I realized that it is currently 2012. 2018, when there’s finally NFC technology in an iPhone.

Mark: This is an interesting discussion in terms of…as we talk about the sensors and we talk about the technologies that get put into these devices. And I don’t mean to take this as a right or wrong; it’s Apple’s decision to put in or not put in NFC at this time.

But there are some technologies that will get developed. And you spend all this time and money and years developing it. And all of a sudden, it’s an Occam’s razor scenario. You run into a significantly simpler way of doing it.

Again, I'll put it out there: is NFC the simple answer? Or is it something that actually gets better served by location data, Bluetooth and WiFi awareness of what's going on around you? Or is NFC the right way because every single thing has its own near-field signature?

Steve: Location sensing, when you're using things like GPS, is only accurate to about 22 feet. So, NFC does have a huge advantage. There are direct-proximity things that you really need to have access to.

A friend of mine worked for a company where they were working on basically redefining what an ATM was. And part of that was that the entire interface is on your phone. The only part of the ATM that stays in the physical location is a spout where the money comes out and a pad that senses when your phone is right next to it. It's an interesting idea.

Mark: Yeah. And I think Square sort of pioneered an interesting little way of looking at this that is GPS-enabled. Pay with Square is not defined by NFC at all, of course. It is GPS. You walk into a vendor. You are in the building of this restaurant. And you already have a Square account.

When you go to buy, you don’t even take your phone out. You don’t wave it over a little magic pad. You say “Put it on Mark Grambeau.” And they say “Okay. Pay with Square” and they see my face and they see my name.

Tim: Dumping coffee all over you.

Mark: Just put it on me. It’s a tab that doesn’t require an interface. And to a point, what’s interesting to me about this and why I bring it up is because it completely abstracts what it means to have a UI.

So, for us, a UI, we’re always talking about the code, the visuals, the touch. But here, the UI is all around us. It’s the sensors, it’s the location. We are the UI. But it’s still all happening. This digital transaction is still happening without ever touching a digital device on the consumer’s end.

Steve: That actually lines up pretty well. Somebody I follow on Twitter said something last week about how as interaction designers, we should first design the experience. And then, once we know what the experience is, then design the interface itself. In that case, the experience involves no interface. Which is ideal for the user.

Tim: I think with these APIs that are coming out, we know that the Samsung phones have NFC and we want to be able to do something like that. And we build this proximity sensor API saying "Yeah, you can use NFC, but we really don't care what you use."

But we want a proximity sensor in these devices. And Android, whatever, Samsung. You have it, so that’s kind of our guiding thing. And we’re going to build it on the Web.

And the browsers are likely to have it before the iPhone has it. Which is kind of us, from the Web angle, pushing technologies into the device. Whether it's called NFC or not, and whatever sensor they put in there, we kind of make it broad enough that our requests are being forced upon these devices. And I think that's an interesting model.

Steve: So, what you’re saying is that the hardware implementation doesn’t matter so much as the ability to use software to accomplish the same task. If there are multiple methods of accessing the same interface functionality with hardware, then as long as the device has at least one of those, the API should work appropriately.

Tim: It doesn't matter what sensor they put in there, as long as the device has a proximity sensor and it feeds information into the browser that we can use through a JavaScript API. It's kind of covering yourself.

We're not saying "NFC API," we're saying proximity sensor. And the proximity sensor is actually really sensitive. It detects electromagnetic waves, and it can actually get messed up at certain temperatures.

It's actually in the W3C specification that this may get messed up at certain temperatures.

Steve: So, could you correlate that with location data to determine the ambient temperature in that area? Actually, I suppose that only really works if you’re outside.
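For reference, a small sketch of the W3C Proximity Events draft being discussed; in browsers that implement it, the events report a distance regardless of which sensor the hardware actually uses:

```javascript
// A sketch of the W3C Proximity Events draft: browsers that implement it fire
// deviceproximity with a distance in centimeters, no matter which sensor the
// hardware actually uses to measure it.
window.addEventListener('deviceproximity', function (event) {
  // event.value is the current distance; event.min/event.max are the sensor's range.
  console.log('Object at ' + event.value + ' cm (sensor range ' +
              event.min + '-' + event.max + ' cm)');
});

// There's also a coarser userproximity event with a simple near/far flag.
window.addEventListener('userproximity', function (event) {
  console.log(event.near ? 'Something is close to the device' : 'Nothing nearby');
});
```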

Tim: I mean, just look at the way we're doing feature detection right now with things like Modernizr and basic jQuery, detecting for things like Touch. And Touch, I mean, maybe isn't a next-gen sort of thing at this point because we've been doing it for a long time. But it's the same model of having a feature: it's still hardware-based, but we're detecting it from the browser.

We have to detect if there are Touch capabilities. And if there are, then do some Touch stuff. Like touch the device or throw the device at somebody, I don't know.
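A minimal sketch of that detect-then-branch model; Modernizr.touch was the flag in the Modernizr 2.x line current at the time, and the manual ontouchstart check is a common fallback when Modernizr isn't on the page:

```javascript
// A sketch of the detect-then-branch model described above.
var hasTouch = (window.Modernizr && Modernizr.touch) ||
               ('ontouchstart' in window);

if (hasTouch) {
  // Do some Touch stuff: wire up touch handlers instead of hover states.
  document.addEventListener('touchstart', function (event) {
    var touch = event.touches[0];
    console.log('Touched at', touch.clientX, touch.clientY);
  });
} else {
  // Fall back to mouse events on non-touch devices.
  document.addEventListener('mousedown', function (event) {
    console.log('Clicked at', event.clientX, event.clientY);
  });
}
```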

Steve: You can use the accelerometer to detect the acceleration of tossing an iPad at a person like a Frisbee.

Tim: Or rubbing it with your nose.

Mark: And then you could actually have the screen react right when it knows. It can calculate the distance to the person it's being thrown at, using the accelerometer, using the camera. And then it can just flash a scary face. I mean, it's already scary enough, but when it's like 2 inches away, it's like "Ahh!"

Tim: So, it’s like a super-advanced grenade, actually. You could put this in grenade technology. When you throw it and it starts to get towards your target. It’s not on a timer, it’s on like when I’m 10 feet away, I’m going to explode.

Steve: It’s like the serial killer side of interface design.

Tim: I [inaudible at 00:15:13] contribute to society from the Web.

Mark: I was going to say that it would contribute to pranks, but you went straight to the military-industrial complex. That's fine.

Tim: I need to tease the line. I need to find where the line is and then just put my foot over it.

Steve: It’s a very Tim thing.

Tim: So, we’re building all these Touch-sensitive interfaces. And because we’re exploring the features of the device, we’re really pushing the limits of the capability. We want more and more access.

And one of the things that I noticed, it must have been a year ago. I don't know. A Web year feels like five years. So, when you go to Google in Chrome and the little voice icon shows up, that's a Webkit-specific thing. So, this is just browser-based.

But obviously, if you have a microphone, you can use it. But it’s a Webkit speech attribute on the input field. And I think we’re getting into different input methods.

So, in the beginning, we had Click. And then Click and Drag, and then Hover, and all those. And then, we moved into using our hands more, with Touch and swiping. And now, we're getting into, actually, speech.
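For reference, a rough, historical sketch of the WebKit speech attribute Tim mentions. It was a Chrome-only, prefixed feature (since removed in favor of the Web Speech API), and exactly which event fires when dictation finishes is treated here as an assumption, so a plain change listener is used:

```javascript
// A rough, historical sketch of the prefixed WebKit speech attribute. It only
// worked in Chrome and has since been removed in favor of the Web Speech API.
var search = document.createElement('input');
search.type = 'text';
search.setAttribute('x-webkit-speech', ''); // shows the little microphone icon
document.body.appendChild(search);

// Chrome fills the field in when dictation finishes; which event fires is an
// assumption here, so a plain change listener is used.
search.addEventListener('change', function () {
  console.log('Heard:', search.value);
});
```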

Steve: So, what would you call that if you had to create a JavaScript event? Would that be like on vocalization or something?

Tim: I don’t know. When somebody starts talking? They have to click the icon, I think, to–

Steve: Right now, they have to click the icon. How do you create a method of having the device always listening, but not drawing significant power consumption and targeting when it should start?

So, for example, we’re talking about the future of UI design here. Let’s talk about “Star Trek.” Whenever anybody starts interacting with the computer in “Star Trek,” they start by saying “Computer” and then give it a command.

It’s sort of a simplistic way to think about all of this. Because you probably say “Computer” a lot without wanting the computer to start listening to whatever your command is. So, then you get into the idea of things being anticipatory.
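One hedged way to approximate the "Computer" wake-word idea in the browser is Chrome's prefixed webkitSpeechRecognition; this sketch ignores the power-consumption problem Steve raises and just illustrates keyword-triggered commands:

```javascript
// A rough sketch of the "always listening for a wake word" idea using Chrome's
// prefixed webkitSpeechRecognition. It doesn't address the power-consumption
// problem raised above; it only illustrates keyword-triggered commands.
var recognition = new webkitSpeechRecognition();
recognition.continuous = true;      // keep listening across utterances
recognition.interimResults = false; // only act on final results

recognition.onresult = function (event) {
  for (var i = event.resultIndex; i < event.results.length; i++) {
    var phrase = event.results[i][0].transcript.trim().toLowerCase();

    // Only treat the phrase as a command if it starts with the wake word.
    if (phrase.indexOf('computer') === 0) {
      var command = phrase.slice('computer'.length).trim();
      console.log('Command:', command); // e.g. "what's the temperature outside"
    }
  }
};

recognition.start();
```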

Tim: I was watching an episode of “30 Rock.” I watch it on Netflix, I’m not sure what season it was in. But Tracy Jordan was in his dressing room. And I forget what the situation was, but someone wasn’t doing something for him. And he wanted to turn the TV on.

And he just yells “Television, pornography.”

Mark: We have a clean tag, Tim.

Steve: There’s actually another episode where Jack thinks he’s invented a proper method of voice interaction with television. And it has all of the problems that I feel like the “Star Trek” implementation should have.

Mark: Right. So, back to the real world. As much as I'd love to talk about "Star Trek" and "30 Rock" all day, don't get me wrong. So, in the browser, yeah. In Chrome, we click on the microphone. On an iPhone or iPad, we hold down the Home button and Siri is activated.

On a Mac, dictation is now built in. I believe the default in Mountain Lion is that you double-tap the Function key, if I remember correctly. I don't know off the top of my head. That brings up dictation.

So, what it gets to, is we do have a future of this, right? Where we do talk to our computers. And we already now have natural language processing and it’s an easy target to laugh at how Siri doesn’t understand me sometimes. I say “Make an appointment” and Siri says “You want to hide a body.” “No, Siri. I did that yesterday.”

But in truth, it’s only going to get better, the more data we have. In fact, there are studies that show that far and away, most of the data that the human race has ever produced has been produced in the last handful of years. And I mean, like, two years.

So, our ability to understand gestures, our ability to understand voice is only going to get significantly more sophisticated very quickly.

Tim: Well, for the voice information and data collection, Google had been doing it for years, and they recently stopped with GOOG-411. I don't know if you guys used that, but it was a free 411 service. And what it really was was a voice-commanded Google search.

And it would just search for businesses or search Google Maps or something to find phone numbers. And the whole time, they were just collecting data. So, they could do something like Siri, and they did. And it failed.

And then Siri came along and we made fun of it and I don’t know if it’s succeeding. But, yeah. We’re collecting data constantly. And no one really realized it, I think, with GOOG-411, because it was such a great service. And it kind of clicked with me when they stopped, when they discontinued the service. I was like “Huh. They must have enough data.”

Steve: There are even some available commercial applications of voice technology where the actual device in question is learning, over time, to better understand the individual speech patterns of the most frequent user.

Mark: And that’s the new standard. Remember, a few years ago, if you picked up a copy of “Dragon Dictation,” for example, you had to spend an hour training the thing to recognize your specific voice patterns, your accent, your tonality. And now, you just jump right in.

And it uses the data from everybody, as well as learning yours very quickly. So, what do we see about this in the future? We’ve got voice, as we talked about. But there are some other ways that computers are going to understand us.

I think the most salient point is if we look really realistically ten years down the road. Not 50 years down the road where we're all in our personal spaceships, but–

Tim: We’re all just voices.

Mark: We’re all just heads in jars. But ten years down the road, when all this stuff has matured, my feeling on the matter is we’re not going to just be doing speech. We’re not going to be just doing Touch work. We’re going to do all of these things all at once naturally.

And as the sensors improve, as the software improves, our computers are going to be aware of all them simultaneously.

Steve: This sort of brings up this idea of what is the ideal way to combine all of these things? A lot of people got really excited when the movie “Minority Report” came out. They saw this crazy interaction with this really high-tech advanced computer in the pre-crime division headquarters where they were just processing vast amounts of information in a pseudo-physical space in order to figure out what they were doing.

Tim: I think you’re thinking of “Goonies,” actually.

Steve: I think you’re completely wrong about that.

Mark: No, it’s “Risky Business.” That’s the Tom Cruise movie he’s talking about.

Steve: Anyways, people got really excited about that. I never found it all that exciting because it didn’t look very intuitive or interesting to me. I think that the peak representation in popular culture of this combination of interface paradigms is actually the “Iron Man” movies.

And that’s for a very simple reason. I remember reading an interview where they said the way they design the interfaces in that movie; they didn’t spend hours and weeks and months coming up with these ideas and then teaching Robert Downey, Jr. how to do it and then having him actually implement it.

What they did was they rolled camera and said “Okay, just do what you think you should do.” So, he just started throwing his hands around the air and saying things and touching stuff, throwing things.

And then, later on, the special effects guys sat down. They took all of this film and said “Okay, now let’s design the interface that would accompany these actions.” And I think that probably made the whole thing seem much more intuitive and useful. Because Robert Downey, Jr. just did what he thought he should be doing and they designed around it.

Tim: I’m not sure how I feel about a future designed by Robert Downey, Jr.

Steve: Do you not like fun?

Tim: Maybe Robert Downey, Jr. ten years ago.

Mark: In “Minority Report’s” defense, it predates “Iron Man” by, I want to say off the top of my head, about six years.

Steve: It’s actually more like ten years, almost.

Mark: It can’t be. I want to peg “Minority Report” around 2000, 2001.

Steve: Let’s get the average, eight.

Tim: I think this is interesting in terms of how to combine these technologies. Because right now, these interfaces are interacting with data, but they're completely separate, almost. Like Siri is voice, and I interact with data. Touch interacts with data. But they're not really syncing everything up.

We have Touch, voice, eye-tracking, motion, all these things. And I see them kind of siloed off right now. And it’s going to be cool to see when they actually come together.

Steve: I just described what I think, “Iron Man” being the ideal convergence of that. Now, let’s talk about a situation where you actually have a useful separation of those paradigms as opposed to the sort of artificially enforced separation we have now.

Let’s go back to “Star Trek” again. I see two types of computing applications in there.

Tim: Oh, my God. “Star Trek” again.

Mark: We’re doing an episode on “The Future of Interfaces.” It’s going to get heavy with science fiction films no matter what we do. But also [inaudible at 00:24:18]

Tim: I’ve seen two “Star Trek” movies and nothing else.

Mark: Which ones?

Tim: “Wrath of Khan” and the one where whales come into the future.

Mark: I’m not getting into this rat hole. But see the 2009 J.J. Abrams “Star Trek” film. Because–

Tim: I like that one.

Mark: That’s a great film. Back to Steve’s point before we lose all of our five listeners.

Steve: Just watch “The Next Generation.” All of it. Every episode. There’s 177 of them, if you have nothing else going on in your life.

Mark: I’m really trying really hard not to–

Steve: Anyways, back to the point I was trying to make. Which is that you have users in those episodes and in the movies attempting to accomplish tasks that are very much based on requirements of physical manipulation. Steering the ship, doing complex research, reconfiguring systems. And so, those require Touch interfaces.

And then, you have things like "Computer, what's the temperature on the surface?" The only thing you need is a piece of information, and the fastest way to get it is to just ask for it as if you were in a conversation.

So, I wake up in the morning and I want to know what the temperature is outside before I leave. I don’t want to boot up my computer or pull out my phone, put in my little unlock password and wait for the weather application to load. I want to say “What’s the temperature outside?” And have an answer.

Mark: And where I see this becoming a reality is in the devices we're seeing crop up already. I mentioned these at the top of the show. Devices that are the persistent sensors around us. And then, our computers, our networks, our local WiFi networks, what have you, are the processing engines.

Something like Lockitron, which is a device that mounts over a standard deadbolt. And it’s motorized and it uses WiFi and Bluetooth 4. And it communicates with your local wireless network and when you walk up to your door, it detects your phone and it automatically unlocks it.

And you can even set user access. You don’t even have to have a smartphone. You can activate it by texting. And the battery lasts a seriously long time, it connects over WiFi. So, it’s a sensor that connects to your network.

The Nest Thermostat was pioneered by a fellow who is considered the father of the iPod, a former Apple employee. The Nest Thermostat, again, is a network-connected device. And it's using a variety of sensors to detect motion, to detect temperature and adjust all these things on the fly.

I think what we’re going to see in the home…you could have a little microphone system, as crazy and creepy as it sounds. But maybe it’s your lamp. Your lamp is the light sensor and temperature sensor and microphone. And these little things all interact with each other to share the data. From a common source, they maybe share some sort of API.

Tim: So, what happens when the battery on that lock dies?

Mark: The nice thing is, because it's network connected, much like when your wireless mouse battery is getting low and it says "Hey, battery is getting low," it gives you a warning in advance. And it can shoot you a text, it can e-mail you.

And in the end, this isn't replacing the lock. It's basically a little hand that sits on top of your deadbolt and moves on a motor. At any time, even when it's fully functioning, you could still use a standard key to open your door.

Tim: It’s only for deadbolts, then?

Mark: Right, it’s for deadbolts. But this is what’s available now. This is shipping in early 2013. Ten years down the road, this could be on any given lock, on any given door, on any given thermostat, any given refrigerator. You know what I’m saying? We’re going to get these simple interfaces.

Tim: That’s exactly what I need. My refrigerator to open easier.

Mark: But what you might want your refrigerator to do is detect when your milk is sour and automatically order it from the Amazon of the future and have it delivered in two minutes.

Steve: I know what I want is a refrigerator and a washing machine that I can play "Angry Birds" on. That's the future. And that sounds like a joke, but there are actually companies making that. It's ridiculous.

Mark: Absolutely. Look at Hewlett-Packard's interest in webOS, which they bought from Palm.

Tim: I am so disappointed with what they did with that. That is such a great operating system, and they're just relegating it to printers.

Mark: Yes. And it was really depressing. It made me very sad. However, look at this notion of taking advanced interfaces, advanced back-ends, advanced operating systems and putting them on standard devices like printers and down the road, toasters, what have you.

The fact is, with sensors that work on a local network and interact with each other, we're going to have an immense change in our lives. And what I want to bring up is what that does for us as UI designers.

So, if the UI is not just your phone, it’s not just your TV. It’s not just your computer, it’s everywhere and everything. What is responsive? What are we designing? What are we doing on a daily basis?

Tim: That’s great. What is responsive design targeting? It’s targeting a browser width, really. And there’s this concept of environmental design. Where you’re actually looking at everything that’s going on. You’re looking at the time, the weather, the location. All these external things to build out your entire experience. Not just your experience on this little screen.

And I agree with myself.

Steve: Of course you do.

Tim: There was actually a demo that Paul gave in the office for this new JavaScript method called getUserMedia. And what it does is let you access the camera.

Now, there's a camera API that came out a little while ago that lets you upload photos from your camera in the browser. And when I was first researching this API, I thought it was going to be that you get to display a live image of yourself in the browser. And that's not quite the case. It just uploads photos.

You can upload them from your hard drive or you can upload them from your camera. And I was really disappointed.

And I don't know how recent it is, but recently, we've been looking into the getUserMedia method. And it's super cool. It's exactly what I wanted the camera API to be. It lets you get camera access and pump it through the browser live.
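For reference, a minimal sketch of piping the camera into the page with getUserMedia; at the time of this episode the call was vendor-prefixed (navigator.webkitGetUserMedia), and this uses the later promise-based form:

```javascript
// A minimal sketch of piping the camera into the page with getUserMedia,
// using the later promise-based form of the API.
var video = document.createElement('video');
video.autoplay = true;
document.body.appendChild(video);

navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    // Attach the live camera stream to the video element.
    video.srcObject = stream;
  })
  .catch(function (error) {
    console.error('Camera access was denied or unavailable:', error);
  });
```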

And we were doing some really cool interactions with that. Paul actually built it. You guys know Paul. He was silently sitting in a chair a couple weeks ago.

Mark: You met him as he sat here silently on an audio-only podcast.

Tim: Yes. But he was doing some really cool stuff where you can actually do facial recognition or face detection at this point. He’s way better at math than me, so we let Paul do that stuff.

And it was very cool. And I think that could actually lead us into this “Iron Man” interface. Where instead of detecting for your face, you’re detecting for either hands or you’re interacting with an object that’s on the screen. But you’re interacting with air, basically, or interacting with a hologram.

Mark: Yeah. What I find interesting as we're going forward, also, is that the screens we're outputting to are changing. So, already, you look at iPhone apps that can use AirPlay, Apple's technology for streaming simultaneous video onto an external display.

So, you've got flight games where you download the game and you can play it on the phone or play it on your TV. It's exporting to a second screen. I think a lot of what we're going to be doing in the future is designing a hell of a lot more second screens for every interaction.

Whether it's a TV or something like Google Glass or whatnot, an app is not limited to the one device you're viewing it on.

Tim: I think we need to break out of these devices, really. It's not just this imaginary Internet thing that we need to deal with. We need to deal with a lot more stuff and make it super flexible and actually craft the experience.

Mark: What I was going to say is that what we also have to think about, of course, as we’re wrapping up, is we’ve got to think about the privacy implications. So, the pervasive listening, pervasive watching.

When you have the camera on on a Mac of any kind, that little green light comes on to show that the camera is currently active. And you really need this stuff in place. Because we're talking about the very utopian view, where the world is functioning beautifully around us.

But from a very negative point of view, look at the controversy that came up a couple years ago, where a school district was distributing laptops to its students and was actually enabling the cameras remotely. The kids had no idea; the kids were being recorded in the privacy of their own homes.

And the school district got, of course, into a lot of trouble because it’s atrocious. And as we’re building this in the future, devices are going to get smaller. Our wallpaper could be just littered with processors, literally. But it means we have to be that much more careful about the privacy of our information.

Tim: I think that’s great and I think we should work on that. So, we’re going to work on that and–

Mark: We’ll have a solution for you next week.

Tim: Next week we’ll have a solution.

Steve: Solved it.

Mark: Solved it.

Tim: And we definitely thank you for listening and we will try to do better next time.

About Fresh Tilled Soil

Fresh Tilled Soil is a Boston-based user interface and experience design firm focused on human-centered digital design.