
Better, Faster User Research Using AI and Machine Learning

Featuring Paul and C. Todd of Vempathy

Who should talk to users? The answer is EVERYONE should talk to users. That means designers, product managers, sales, marketing, and yes, engineers. Without user research, how will you know you are building products that people will want?

But on small, cross-functional teams there isn’t always a full-time researcher. And even on large product teams, sadly, talking to users doesn’t always receive the resources and attention it deserves. Someone needs to step up to the plate and tackle the challenge. Paul Cheek, the founder and CEO of Vempathy, is a software engineer by training who did exactly that – he talked to users. And he was left with hours and hours of usability testing and screen recordings to review, and little time to review them. He knew he was really looking for the short clips in which his users became confused or frustrated – the clips that would highlight where he should focus his time and energy to make the product better. So he decided to do just that – he analyzed the videos to identify those insightful, thirty-second clips. And it worked! He was able to condense over ten hours of video into under five minutes that would influence the design process.
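
To make that concrete, here’s a minimal sketch of that kind of clip extraction, assuming you already have a per-second frustration score for each video. The scoring model, the 0-to-1 scale, and the threshold are all assumptions for illustration, not Vempathy’s actual pipeline:

```python
# Minimal sketch: surface ~30-second clips where a frustration score spikes.
# `scores` is assumed to be one frustration value per second, in [0, 1],
# produced by some upstream emotion-detection model (hypothetical here).

def find_frustration_clips(scores, threshold=0.7, clip_len=30):
    """Return (start, end) second offsets of clips worth reviewing."""
    clips = []
    i = 0
    while i < len(scores):
        if scores[i] >= threshold:
            # Center a fixed-length clip on the spike, clamped to the video.
            start = max(0, i - clip_len // 2)
            end = min(len(scores), start + clip_len)
            clips.append((start, end))
            i = end  # jump past this clip to avoid overlapping duplicates
        else:
            i += 1
    return clips

# Ten hours of video can reduce to a handful of half-minute highlights.
scores = [0.1] * 3600                  # one mostly-calm hour of video
scores[1200:1210] = [0.9] * 10         # a simulated moment of frustration
print(find_frustration_clips(scores))  # -> [(1185, 1215)]
```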

Vempathy, which stands for ‘video empathy’, provides next-generation customer experience research software that shows companies how their customers feel, using a webcam and artificial intelligence analysis. Their proprietary emotion detection software analyzes over 4,200 data points every minute to provide valuable insights that help you improve your customer experience. I sat down with Paul and C. Todd Lombardo, VP of Product and User Experience, to talk about:

  • Quantifying qualitative user research using AI and machine learning
  • Breaking down the wall that exists between Engineering and user research
  • Reducing bias and blind spots in user research
  • The biggest challenge they’ve faced so far at Vempathy
  • How the company’s product and vision have been shaped by their own user research and feedback over the years

Sound interesting? Try it out using code “THEDIRT”.

Podcast Transcript:

Heath: Well, I’m here today with Paul Cheek, founder and CEO of Vempathy. Did I get that right, the saying?

Paul: Yeah, that you did.

Heath: Okay, and C. Todd Lombardo is VP of Product and Experience. Welcome guys.

Paul: Yeah, thanks so much for having us. Good to be here.

Heath: I’ll just start off kind of like I always do, just asking you to tell me a little bit about your backgrounds. How did you get to where you are today?

Paul: Yeah, absolutely. I’m a serial tech entrepreneur and software engineer. I just love building cool things. I’ve started a few companies in the past and found this trend of user research to be really, really interesting. It’s something that I wound up having to do as an engineer, and while that’s not my full-time job, I found a lot of problems with it, and I love to solve problems and build companies around those problems. On this journey I’ve been lucky enough to meet C. Todd, quite randomly actually. We sat down, we had coffee, and oddly enough we both walked in thinking nothing would come of it. Here we are today. It’s been great working with him, great story, and yeah, happy to be able to go build some cool stuff with him.

C. Todd: I think I’m no stranger to most of your audience. I think they know me from my days as key design strategist at Fresh Tilled Soil but-

Heath: You’ve even been known to host this show on occasion.

C. Todd: I know. I’m a former show host I guess.

Heath: When I twist your arm strong enough.

C. Todd: Right. So, yeah, I’ve been a product and design guy for many years, and sort of jumped ship from Fresh Tilled Soil. Not that I didn’t love you guys, I do, still do. But I’m kind of enjoying the idea of rolling my sleeves up and getting back into the thick of it, so I’m excited to be working in the nitty-gritty of product with Paul. It’s great.

Heath: You talked, Paul, about having done user research before as an engineer. I haven’t heard that very often. Certainly engineers are, and/or should be, intensely interested in the user, but in my experience – and I’m definitely not saying this is a good thing – there tends to be this, maybe “old school” is a good way to say it, wall between engineers and the users. I think that’s a bad thing, but it sounds like you came at it from the jump, being close to it.

Paul: Yeah, absolutely. There’s generally a huge wall between engineering, product design and experience, and the end customer. I’ve always worked on small cross-functional teams building products. In those scenarios there’s generally not a full-time researcher, meaning that somebody needs to step up to the plate and tackle that challenge of figuring out what the user really wants. In a lot of cases I just kind of had to jump into it, which I love doing, because I’d rather be building a product that I know people will use as opposed to a product that nobody really cares for.

Heath: It’s interesting, because way back in the day when I was running certain products as a product manager, there was this real, or perceived, idea that you don’t want your engineers to talk to the users. I don’t know if it was that we didn’t want the engineers to be distracted by things we didn’t consider important – you know, “Stick to the roadmap. We already know what the user wants,” which is kind of hilarious, and sad, when you think about it. Or we thought for some reason that our customers and our users didn’t want to talk to our engineers which, again, is also kind of dumb. I found whenever it happened, even if it was by accident, I would have engineers come to me with a brilliant solution to what they were hearing directly from the user. They’d say, “Well, they’re complaining about these three things,” or “They’re asking for these three things” – I’ll take the positive. They would say, “You know, we can do that very easily. I can do this, this, and this.” I’d say, “Okay, cool. Go do that.” It never was clear to me why we created this wall, but it’s interesting that you had that from the jump.

C. Todd: I think there are a couple reasons for that. One is that engineers are not necessarily trained to be interfacing with customers, interacting, and learning how to distill feedback. As in your example of “Well, the customer asked for this,” an engineer will sometimes jump on it and say, “Oh, I can build that,” but that may not be the right thing, because customers don’t necessarily know exactly what to ask for. Customers aren’t always trained in knowing how to distill what they think they want versus what they actually want. So, I think that’s one reason that happens.

Another reason is that sometimes engineers don’t know exactly what to ask. They’ll just say, “Tell me what you want.” And I think that’s historical, because now we’re starting to see teams that are much more cross-functional, like Paul referenced – when you hear about our friends at Pluralsight with directed discovery, bringing the whole team in and participating in that user research. We’re starting to see more and more of that cross-functional work, and so that’s been kind of the way of the future – or actually the way of the present and the future, right?

Paul: Yeah, I think that’s a good point. You’re right, what’s absent is the background, experience, and training to understand how to peel that onion. I think the key is we’ve gone from a model of “this person shalt not talk to users and this person must” to “you know what, everyone needs to talk to them and everyone needs to hear from them” – preferably together, so there can be discussion about it. Let’s dispense with this notion that it’s only the domain of this person or that person.

C. Todd: I agree.

Heath: For sure. You know, if you look at it from the other side, people come to me all the time and they say, “How do I convince my engineering team to do X, Y, or Z and not put up a stink about it?” Well, the way to do it is to give them some evidence. Show them what users are saying. Show them what the customers are saying. Even if the engineering team is not doing the research themselves, even if they’re just getting the report on the research, they can see what customers are saying. They’re gonna be much more driven to go build that feature or make that change.

C. Todd: I can add onto that point. It’s not even just showing evidence to engineers, it’s showing evidence to people who may be higher up in an organization, depending on how big your organization is. I remember an example at Constant Contact where we had an executive with a very strong-willed idea, and some data to back it up. It was anecdotal data, but it was still data nonetheless. It was about a particular mobile app idea, and rather than going off and building it – she was really ready to just fund the project and get it going – we said, “Hey, let’s do a design sprint first and get some evidence.” We video recorded the tests and showed them some clips of, like, “Hey, here’s what they said when they saw this, interacting with this prototype.” It really helped change her mind, like, “Oh, right. This isn’t the right idea. This isn’t the right thing,” and helped build that empathy for the end user where there wasn’t any initially.

Heath: So, you said the word empathy. I’m going to throw a V in front of it and ask the question now. What is Vempathy? The word itself, how’d you come up with it? What does Vempathy, the company, do?

Paul: Yeah, absolutely. Well, Vempathy stands for video empathy. We’re really focused on video analysis and helping companies build empathy with their users, because ultimately if you can understand your customers’ emotions, that’s gonna help you better cater to their needs. You know, their emotions drive their decisions, and if you can change those decisions then you’re ultimately gonna have a great impact on your bottom line. What we’re doing is basically analyzing video files – usability testing video files – to show you how your customers feel and when they feel that way, so that you can then react and make changes accordingly. We’re doing this by quantifying qualitative data – by giving you numbers and figures that are a little bit more palatable to other stakeholders, whether it’s an engineer, a designer, or an executive within your organization. If we give you numbers that back up why certain decisions need to be made around your product, then you’re gonna have everybody much more on board than you would otherwise.

C. Todd: I think another thing it does is help reduce bias. A lot of this type of research is very qualitative in nature. We’re kind of quantifying the qualitative, as Paul said, but also you may have a researcher, or even a team, that’s just looking for a particular kind of data. With qualitative data you can cherry-pick and pull anecdotes to bias your story one way or the other, and this can help reduce that, because we’re actually quantifying things. This is how frustrated they were. This is how happy they were, how delighted they were. This is where they were delighted. So, we have a way to reduce that bias. It’s a little bit more neutral for the researcher to then present that data elsewhere, and they can actually get deeper, better insights.

Heath: Is the bias reduction because you’ve got it on video or because you’re somehow translating not just what they say but how they say it, and what they look like when they’re saying it, to say, “Okay, yes they said they like this, but we’ve noted that they’re really just okay with it?”

C. Todd: Yeah, when you’re doing UX research you will see that somebody might be shaking their head yes or no but saying the exact opposite. So they’ll be saying, “Oh, this is totally great,” while shaking their head no. That’s something that you want to detect. You might be able to physically see that as a researcher watching a video, but we’re able to detect things like that and say, “Oh, actually, the tone of voice is different.” So, we look at their facial expression – that’s one part of the algorithm; Paul can probably talk in a little bit more detail about this. There’s the tone analysis – what’s the inflection of their tone of voice. We look at the context of all the different words they say in a sentiment analysis. We pull those things together and say, “Okay, here’s how confident we are that this person is frustrated, happy, sad, et cetera,” and we can put a numerical value to it.
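
As a rough sketch of the cross-channel check C. Todd describes, the snippet below flags moments where spoken sentiment and facial affect disagree. The channel scores, the [-1, 1] scale, and the disagreement cutoff are invented for the example; Vempathy’s actual models and thresholds aren’t described here:

```python
# Sketch: flag moments where what a tester says disagrees with how they look.
# Each channel score is assumed to be in [-1, 1] (negative = unhappy).
# Real scores would come from upstream models; these values are invented.

def flag_contradictions(moments, gap=1.0):
    """Yield (timestamp, spoken, facial) where the two channels diverge."""
    for t, spoken, facial in moments:
        if abs(spoken - facial) >= gap:
            yield t, spoken, facial

observed = [
    (12.5, 0.8, 0.7),    # says it's great and looks happy: consistent
    (47.0, 0.9, -0.6),   # says "totally great" while shaking their head no
    (63.2, -0.4, -0.5),  # sounds and looks frustrated: consistent
]

for t, spoken, facial in flag_contradictions(observed):
    print(f"{t}s: spoken {spoken:+.1f} vs facial {facial:+.1f} - review this clip")
```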

Paul: Ultimately, I think one of the big takeaways I had from UX Fest was that bias creates blind spots, and that’s what we’re really focused on – trying to reduce those blind spots. It’s a huge problem in UX research, and when you can, even subconsciously, use that bias to, like C. Todd said, cherry-pick from those findings, ultimately you may be making the wrong product decision.

Heath: What is the problem, or the problems, that you’re trying to solve? My guess is it’s not just to give companies, and product owners, and product teams the ability to do user testing – it’s more specific than that?

Paul: Yeah, it’s definitely more specific than that. Anybody can go out and do user testing now. The problem that we’re solving is we’re helping make user testing faster, more relatable for the other stakeholders involved besides the researcher, and we’re also solving the problem of bias involved in qualitative research.

Heath: How are your audiences currently either solving, or attempting to solve, this now?

C. Todd: So, a lot of what UX research does – and this is no secret to most of your audience – is that you might create a prototype, or even just use your product, and either do some remote testing, perhaps sending somebody a link and asking them to perform a handful of different tasks, or you could do in-person, moderated or unmoderated testing. You might watch the videos, watch the screencasts, and then draw some conclusions based on, “Did they complete the task, did they not complete the task? How frustrated did they look? Where did they get stuck trying to complete these tasks?” Oftentimes it’s very manually intensive and takes a lot of time and energy and effort from the researchers. So, we’re trying to automate some of that, and saying, “Okay, you could do this unmoderated and we’ll actually give you a report from that.” Rather than having to watch, say, 10 ten-minute videos, you can just send out the link, get the report back, and say, “Oh, here are the three places where they’re most frustrated.”
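
A toy version of that kind of rollup might look like the following, which averages per-session frustration scores by task and prints the top three pain points. The field names and scores are illustrative, not Vempathy’s actual report schema:

```python
# Toy report: average per-session frustration by task and surface the
# top three pain points. Field names and scores are illustrative only.
from collections import defaultdict

sessions = [
    {"task": "sign up",         "frustration": 0.2},
    {"task": "import contacts", "frustration": 0.8},
    {"task": "import contacts", "frustration": 0.7},
    {"task": "send campaign",   "frustration": 0.5},
    {"task": "sign up",         "frustration": 0.3},
]

by_task = defaultdict(list)
for s in sessions:
    by_task[s["task"]].append(s["frustration"])

ranked = sorted(
    ((task, sum(vals) / len(vals)) for task, vals in by_task.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for task, avg in ranked[:3]:
    print(f"{task}: mean frustration {avg:.2f}")
```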

Heath: So, these are unmoderated?

Paul: They can … Yeah, they’re unmoderated. You could use them moderated, too, but we have it set up so you can actually set prompts and do unmoderated.

Heath: You can just go with the report, or if something intrigues you, go in and actually watch it yourself. But to your point, you can have 10 hours of video that you could watch, or not watch, at your leisure?

Paul: Yeah, that’s exactly right. You can look at the high-level reporting, or you can dig deep and watch the individual video files. That’s actually where this really started. I was working on a project where I had just about 10 hours of video that I personally did not have the time, or patience, to sit and watch, but it turns out I did have the time and patience to build software that would watch it for me.

Heath: That’s usually what I do – I just build something when I don’t have the time to do it. What was I gonna say? So, it sounds like one of the biggest barriers today, without something like Vempathy, is the time element?

Paul: Time is a key, key factor and bias tends to play in as well.

Heath: When companies and product teams opt not to do user research, you would say time is one of the factors. Bias is another one. Obviously, you’re not gonna overcome the issue of companies and product teams that just don’t believe in it, which would be kind of odd, but-

C. Todd: Yeah, there are plenty of teams that have that ego-driven development rhetoric: “I know what to build. I know what my users need and want. I’m gonna go build that thing.” That still happens to this very day. We see it all the time. We saw it a lot when we were at Fresh Tilled Soil. We had companies come to us and say, “Build us this thing.” We’d be like, “Why do you need that thing?” They’d start explaining it, we’d ask them why three, or four, or five times, and they’d realize their initial idea was wrong even just in that conversation. With something like Vempathy and doing this more quantified user research, you now have evidence. So you can turn around and make your case to somebody else, like an executive who’s really, really intent on doing this thing, and this can help prevent you from building the wrong product.

Paul: I think that ties into one of the other problems we’re trying to solve, and that is executive buy-in. Even if a product team knows that they need to be conducting user research, a lot of times they won’t have executive buy-in. One of the reasons for that is actually because it’s qualitative research, meaning that an executive is not going to get key insights out of it – they’re basically going to get anecdotal evidence. But when they have numbers that back up what the qualitative research is showing, which is what we’re trying to provide in our quantified reporting, then all of a sudden the executives have a very solid reason why their team should be conducting research using our platform.

Heath: I think a marriage of the two, or a combination of the two – qualitative and quantitative – is ideal, although the qualitative is often easier to gather. It sounds like you’re continuing to enable the collection of the qualitative, but you’re marrying that with something pretty easy to translate into quantitative, as a way to break down these traditional barriers to user research – this idea that, “Well, yes, we’d love to do user research, but you can’t get anything quantitative, so why bother?” Well, no, let’s do bother, because the qualitative on its own is certainly valuable, but we can also get the quantitative out of it without having to get users to fill out a 10-question scaled survey.

C. Todd: Exactly, and I think the other thing is we now have AI and machine learning algorithms that are robust enough to use, that have had millions, if not billions, of data points run through them. We’ve seen all different colored faces, with beards, without beards, with glasses, without glasses, different races, genders, et cetera, and we’ve now got a sense of, “Yep, we know that this is what happiness looks like. We know that this is what frustration looks like. We know that this tone of voice means delight; this tone of voice means disappointment.” We have enough data in those algorithms to then spit that back on a more quantifiable scale, whereas previously we didn’t have that technology available. So, now we’re trying to bring this technology to UX research.

Heath: Are there particular questions, or insights, that are more appropriate for, or more robust when using, AI? Should you avoid using AI for some things? Is AI more applicable for certain types of user research over others, or does it really not matter?

Paul: You know, I really don’t think it matters. Ultimately, if you’re gonna ask someone a question, or ask them to complete a task, their emotional response is going to be really valuable. I don’t think it necessarily makes a difference. I think it really just comes back to best practices in terms of conducting research – asking the right questions for the result that you’re looking for, as opposed to designing a question because you know that you’re using AI.

C. Todd: The technology can empower you to do something you maybe hadn’t done before, but it can’t fix the basic research question. If you don’t have a good basic research question, it’s not like this technology is gonna suddenly make you a better researcher or make you ask better questions. It might give you some insights you haven’t had before, or surface some things you didn’t necessarily think about when asking the question, but if you ask a really terrible question you may not get the greatest results you can. That’s something we can’t control for. Technology can’t solve for that.

Heath: Right, you can’t solve for stupid questions.

C. Todd: But, there really are no stupid questions, they’re just less effective.

Heath: Exactly, less effective. So, obviously a lot of the talk, ad nauseam, is about agile versus waterfall and what’s the right one – and agile-ish, and scrumfall, and all these made-up words. But for some reason integrating proper user research can be a challenge in an agile shop, or an agile-ish shop. Why is that the case, do you think?

C. Todd: I think it goes back to a couple of things. One is time. Two, I think, is the ability to interpret the results in a way that’s actionable. What a lot of my experience has shown is that to do really solid UX research you might need a good week, two weeks, even more, to formulate questions, gather your participants, gather your data, and then interpret the data. We just spoke to somebody at Wayfair the other day. She talked about really three points: as soon as they’ve done the research, they have a debrief routine. Then they do another debrief with an extended set of stakeholders to talk through things, and then they actually do a workshop to finally present it to others to see what they can do with that research.

So, that’s a pretty time-intensive thing. I think that when you’re a smaller company, or a company that’s moving really fast, you might not have the time to do that, and so we’re trying to help solve that by saying, “Look, you can do this in a faster way, and we can help you shave off some of that time. You still need to allocate some time to it, but you don’t necessarily have to allocate as much.”

Heath: Are you licensing technology and then wrapping it around your engine or your platform, or is it soup to nuts – you guys have built it?

Paul: Yes, we use a combination of different partners, basically to pull together those different data streams that C. Todd spoke about earlier. What we realized is that if we look at one stream of data individually, we don’t get actionable data – it’s almost worthless. What we’ve done is build our own algorithms that look at these multiple streams of data, look for overlap between them, and then use those overlaps to say with a very high level of certainty what emotion is present. So, if you look at one data stream individually, your accuracy level is going to be very, very low. What we’ve developed is much higher-accuracy emotion detection software that’s specifically applied to user research.
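
In the spirit of what Paul describes, here’s a hedged sketch of agreement-based fusion: each stream votes for an emotion with its own confidence, and an emotion is only reported when enough streams overlap. The labels, vote format, and confidence formula are assumptions for illustration, not Vempathy’s proprietary algorithm:

```python
# Hedged sketch of agreement-based fusion: face, voice tone, and word
# sentiment each vote for an emotion with a confidence, and an emotion is
# only reported when at least `min_agreement` streams overlap.

def fuse(votes, n_streams=3, min_agreement=2):
    """votes: list of (emotion_label, confidence) pairs, one per stream."""
    by_label = {}
    for label, conf in votes:
        by_label.setdefault(label, []).append(conf)
    # Keep only emotions that enough streams agree on.
    agreed = {l: cs for l, cs in by_label.items() if len(cs) >= min_agreement}
    if not agreed:
        return None  # no overlap: a single noisy stream isn't trustworthy
    label, confs = max(agreed.items(), key=lambda kv: sum(kv[1]))
    # Combined confidence rises with both score and number of agreeing streams.
    combined = min(1.0, sum(confs) / n_streams + 0.1 * len(confs))
    return label, combined

# Face and tone agree on frustration; sentiment alone reads neutral.
print(fuse([("frustrated", 0.6), ("frustrated", 0.7), ("neutral", 0.5)]))
# -> ('frustrated', 0.633...)
```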

Heath: Is there a minimum number of questions and/or participants that will render the analysis accurate within a certain level of confidence, or is it that you can have one user and that’s all it takes, because you’ve got this mass amount of data on the backend?

Paul: Yeah, we would never want you, as a researcher, to make a decision or come to a conclusion based on what one user says, so we generally recommend a group of about five test participants. But in terms of the number of questions you ask them, that’s totally flexible. You can ask as many questions as you’d like, and we just measure the emotion for each question.

C. Todd: I think it’s… Best practice in research still applies – we’re getting five to ten users to do a handful of tasks. While the algorithms from our partners have been time-tested and, as Paul said, individually they might give you a certain level of confidence, we combine these three to four different data streams and can then give you a much higher level of confidence, and tell you things that no one individual stream can tell you.

Heath: Yeah, I guess I wasn’t think- It was a clumsily asked question, for sure. I mean, how many people, or users, should you interview to come to worthwhile conclusions? It was more about the algorithms that are calculating the emotions, or quantifying them – do you need a certain denominator? But it sounds like you’ve already got a vast data set that has learned how to interpret emotions and quantify them, and so you can run one individual through it. It’s not that, “Oh, we need at least 10 individuals to calculate in real time what the emotion is.”

C. Todd: Each of the algorithms we’re using has, I think, been time-tested with – you probably know more about this than I do – millions and billions of data points over the course of years and years and years.

Heath: We had you guys at UX Fest, and it sounds like, hopefully, you got to talk to a lot of people. There, and elsewhere, what’s been the early response?

Paul: The early, early response a year ago was mixed, and that’s when we went back to the drawing board, revised these algorithms, and pulled in a few more data streams to make this much more accurate, and we just keep putting it in front of people, testing and iterating on that feedback. Of course, C. Todd’s come in and turned what was, well, not the most beautiful product into something that we can both be pretty proud of, I think. The feedback at UX Fest was awesome. We got a lot of really actionable feedback that we could take back to the drawing board and just continue iterating on. That’s kind of the goal.

Heath: What have been the biggest, like, “Holy crap, just being able to do this is enough for me to say I’m in,” or “Hey, these three things are really amazing for me – I would love to have this” moments?

Paul: The thing I heard was… The emotional reaction was, “Wow, this is really cool.” I think a lot of UX people don’t realize you can quantify these emotions – you can put a layer of quantitative analysis on this qualitative data, thanks to things like machine learning and algorithms. I think there’s that sort of aha realization, like, “Whoa, okay, this is possible,” and I think the other realization is, “Wait, this can save me some time.” That’s not a conclusion they come to immediately. The early one is, “Whoa, I didn’t realize it could do that.” Then they start to ask questions and realize, “Oh, okay.” One of the clients we talked to was like, “Wow, this can actually really reduce the bias that we have here.” That was something we didn’t initially think about until they actually told us, “Holy crap, this can reduce all the bias from our different researchers.” If they’ve got researchers A, B, and C working in a really large organization, they may come to different conclusions based on their qualitative data sets. This helps them reach a more consistent conclusion as well.

Heath: I don’t know if you guys coined this phrase, or if it’s what you’re using, or if it’s common in the industry, but emotion measurement technology. Is that-

C. Todd: Yeah, I think that’s just kind of a generic term that’s used in a lot of the new artificial intelligence emotion recognition software.

Heath: Are there other ways, and other types of research, that that’s being used, either through you guys or elsewhere in the industry?

C. Todd: Yeah, absolutely. It’s being used all over the place for all sorts of stuff, and that’s what’s really exciting. We look at the algorithm we’ve developed around pulling in these different streams of data for more accurate emotion detection, and we realize that where we’re applying it in UX research is really just the beginning. There are so many other places this can be applied.

Heath: Is this like the next wave polygraph test?

C. Todd: Maybe, but I don’t think a day goes by where Paul and I don’t think about, or talk about, an idea – an application of this technology in a different industry – and we have to sort of shake ourselves, like, “Oh, we’re focusing on UX and research right now.”

Heath: That’s what I was gonna say.

C. Todd: You can really apply this to a lot of places, and I think one of the reasons we think UX research is great is because it’s an unregulated environment. There aren’t a lot of medical regulations that have to come in. We talk a lot about medical applications, especially with my partner being a surgeon – there are a lot of medical applications that could benefit from this – but right now we’re like, “Let’s focus on what we know we can make work, where we aren’t overburdened by regulations, and where we know we can add value to this user group.”

Heath: Having spent a previous lifetime and career in healthcare technology and biopharma I would highly encourage you to stick with that gut decision to hold off as long as you can. It’s very-

C. Todd: I was in biotech two years ago.

Heath: It’s very enticing because there are loads of problems to be solved, but man is it just fraught with regulation, and lack of speed, and all that stuff.

C. Todd: And the amount of capital and time we would need to enter that market would be far more than we have today.

Heath: Filled, also, I should say, with incredibly smart people, which is partly the frustrating part – you want to tap into that, and there are so many people who are more than willing and receptive and, in fact, craving smarter solutions, but they just can’t access them.

C. Todd: Oh, I hear that every day from my partner. She tells me all the time: this and this and this. Oh, well, we could solve that, we could solve that – but then it becomes a matter of prioritization, like, what can I actually do with the bandwidth that I have?

C. Todd: Paul, how has the company or product changed? I mean, obviously, we’ve only been together for about three or four months. You’ve probably had a year or so before that of trial and error and iteration. What’s been the sort of pivot and change, and how has the vision for Vempathy changed since you started, and where are you going?

Paul: The vision started really as just a way to record users which, as it turns out, already exists on the market. There are plenty of software platforms out there to conduct usability testing and record people. It’s really changed in the sense that it’s become more about reporting and analytics on those usability tests. So we’re looking to some of the usability testing platforms and looking at ways to integrate, or take videos from their platforms and plug them in, so that we can provide our advanced emotion reporting that researchers can take back to their stakeholders. That’s been much more of a priority recently – knowing that there are already existing platforms, trying to tap into those has become a priority for sure.

One of the other ways I think it’s really changed is around how we quantify the data. You know, the first time we met, you were giving a talk on peanut butter and jelly, which I didn’t even know I was walking into – I thought we were all gonna sit around and have sandwiches. You were talking about how data can be helpful for designers, and there were a lot of designers there who probably didn’t understand the data science side of things, because they’re not necessarily equipped to put together all those numbers. That’s where this whole reporting aspect has become really important, because a designer needs to be able to look at numbers and understand what they mean without having to put them into Excel or crunch those numbers themselves. So we’re changing the focus from just qualitative – “here’s a video, watch it,” maybe put a comment on it – to “here is all of the data that we think is important to you, that you can take back to your team and say decisively, ‘Here is what we need to do to improve the product in the next week.’”

C. Todd: One person might be frustrated with one particular task, or one aspect of your product, but let’s say the other five aren’t. You might be able to say that anecdotally, but we can actually say, “No, here’s the reason why.” You might have somebody who’s very passionate about fixing this one thing, because they saw one user, or two users, struggle with it. “Yeah, but they’re really not as frustrated as everyone else.” We can actually have that insight and say, “We don’t really need to fix that right now. Let’s go on to something that has a higher priority and a higher ROI than trying to fix this button, or this form, if it’s not really affecting us as much as we think.” Really, it’s tying those two things together: data science plus design. I think the older, or former, question was, “Should designers learn how to code?” The newer question is, “Should designers learn data science?” I think the answer is yes. Do you have to be a practitioner of data science? No. But do you have to understand numbers and understand basic data science principles? Absolutely, because that’s gonna make you a better designer.

Heath: Or, in Paul’s case, should engineers become user researchers?

C. Todd: I think the better question is, “Should engineers become designers?” The answer is definitely no.

Heath: What’s been the biggest challenge thus far for you guys? I mean, early-stage company, starting with a great idea – it sounds like you’ve done a lot of research and gotten early feedback. What’s been the biggest challenge?

C. Todd: I think it’s always continuing the feedback loop, making sure we’re interpreting that feedback well, eating our own dog food by actually running these studies on our own product, and just getting more people to use it and tell us what works and what doesn’t, even if they just do a simple Vempathy demo. Maybe, actually, we should give you a link at the bottom of your podcast, like, “Hey, try it out, try this demo, and see what it’s like to actually be a participant.”

Heath: I definitely want to do that – we’ll put that up for people.

C. Todd: Yeah, that’d be kind of fun. Anyone can just go through and test it out. I think that’s probably the biggest challenge right now – just getting more and more people to actually try it. Everyone’s strapped for time, and sometimes people don’t quite see the immediate value, but once they start to dig in, it’s like, “Oh, wait, this really can help me.” So, that’s probably the biggest challenge right now.

Heath: All right, guys. This sounds like really cool technology that could have a lot of benefits for all kinds of product teams – those that struggle with doing user research at all, or those who feel like there must be a better way but just can’t seem to access it and get others on board in their organization. I want to thank you guys for joining us on the show.

Paul: Oh, yeah.

C. Todd: Thank you.

Paul: Yeah, thanks, it’s been a pleasure to be here.

C. Todd: Good to be back.

Heath: Welcome back C. Todd.

C. Todd: This is how I kind of got into it – starting to interview Paul as if I was the host.

Heath: No, that’s good. You’re the insider, so you know the questions to ask that Paul’s gonna want to answer. It’s cool. All right, guys, take it easy.

Paul: All right, thank you.

Heath: Thank you.

Author: Heath Umbach

Heath is an avid cyclist and runner who brings athletic rigor to everything he touches. With over 15 years of experience building and marketing digital products, he has a deep passion for solving our clients’ greatest challenges.
