Michelle Niedziela - Powering Consumer Research with New Data Sources
Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!
Dr. Michelle Niedziela is a behavioral neuroscientist experienced in both academia and industry. Michelle began her career as a post-doctoral fellow at Monell Chemical Senses Center working on functional foods research. She continued her career at Johnson & Johnson, where she specialized in innovation technologies for consumer products, and at Mars Chocolate, where she worked on global sensory projects. In her current role as VP of Research and Innovation at HCD Research, Michelle focuses on integrating applied consumer neuroscience tools with traditional sensory methods to measure consumer response, with the goal of providing a comprehensive account of consumer decision making.
Transcript (Semi-automated, forgive typos!)
John: Michelle, welcome to the show.
Michelle: Thank you so much, John.
John: So, Michelle, as you know, this podcast is about new technologies and how they're impacting sensory and consumer science. And what we like to do with this podcast is interview industry experts like yourself who are seeing all sorts of things happening on the different sides of sensory and consumer science. One of the things that I've found most interesting about your research since we met (and I believe we first met at Pangborn this year, just a chance encounter, a reminder of why it's good to go to conferences)...
Michelle: Yeah. Absolutely.
John: Is that you're looking at forms of data collection that aren't traditionally seen as part of sensory science. And I find that to be really interesting, especially given my background in psychology and neuroscience. I think there are a lot of ways of collecting data that we should be leveraging in sensory. So I'd like to start the show with you talking about some of the different types of data that you're collecting, and then how you all at HCD Research are using those data, combined with other, maybe more traditionally collected sensory data, to help guide your clients to more informed decisions.
Michelle: Yeah, absolutely. So, you know, it is really fun and exciting when you start thinking about all the different options that are out there, and there certainly are quite a few. And it's all based on the idea that different methodologies provide different sorts of data. So being open to all sorts of different types of technologies, whether it's actual technology or a different psychological approach that might exist in some form of questionnaire, just understanding that you can get a wealth of information that you may not have gotten from your standard survey or your more traditional approaches to sensory research. You know, HCD has actually been around for about 35 years or so, basically doing traditional survey-type research, and so it has a real core competency in doing stuff like that. But then about 15-20 years ago it started adding in different types of technology to really understand better what was going on in the survey. So, like, when somebody says they like something, what are some of the other aspects that are leading them toward that liking? We started with eye tracking: well, what are they looking at? What's really drawing their attention? And that was a really good entryway, but then we started looking at physiological response. So if you're looking at heart rate or skin conductance or facial EMG or EEG, there are so many different options out there that will tell you a little bit more about what's going on in the consumer experience, to better understand why they said they like it.
John: And so for me, I guess the question is that we have all this information. I mean, the first question is how correlated do you think these different types of data are, or how many different dimensions do you think we're really getting at? When we start to integrate all the information together, is there really just a single hedonic dimension, or do you feel like there are multiple dimensions that we have to explore? You know...
Michelle: I think there are multiple dimensions, you know, because even liking alone doesn't really tell you a lot about, for example, what's going on in the consumer experience. There are so many drivers of behavior. For us, it's really looking at the decision making. When the client comes to us, they really want to know what's going to make people buy this product again. Certainly a good part of that is liking. But that hasn't turned out to be a very good determinant of purchase intent, right? So I think there are a lot of different dimensions that need to be looked at, really understanding what some of the drivers of different behaviors or decision-making processes are. And there have been some research studies out there that have looked into whether different types of physiological measures are actually related to ultimate purchase intent when it comes to ad testing, you know, outside the realm of sensory. And honestly, it's been very difficult to pinpoint. But I think that's just the nature of the very complicated business of understanding consumer behavior when it comes to purchasing.
John: Yeah, I agree. It's very complicated. I think there's also the question of actual purchase behavior. I mean, that's where you see that some of the tech companies (and of course, surveillance capitalism is its own topic right now) are starting to gather enormous amounts of data, including actual purchase behavior. A company like Amazon has a big leg up in terms of seeing what people actually buy. The Walmarts and Krogers of the world have advantages as well. But for us in consumer research, maybe you could take us through a typical project that you might be involved in that involves alternate forms of data collection. What might that look like? What are some of the technologies that you might use?
Michelle: Well, the first step is trying to figure out what the actual research question is. And that sounds super basic. But I think when you come at it from a technology standpoint of being like, oh, we have this really cool technology, this gear to slap on someone's head, that's the wrong approach, right? So instead, it's more important to say, "Okay, what do we ultimately want to figure out," and then work from there to decide, "Okay, what's the best technology to do that?" So often the first step is having that conversation with the client and saying, "Okay, what is it that you are looking for? What is it that you're not getting? What's your research pain point that you're not solving with what you're doing right now?" When we start there, then it becomes very easy to figure out, okay, maybe you need to do something like an implicit association test, or maybe you need to look at EEG, or maybe you need to do a combination of all these things with eye tracking and heart rate or whatever it might be. Maybe it's behavioral coding, to really figure out how the person's interacting with the product. They all answer separate things. So my focus has always been to really focus on what the research goals are and then match the technology to that.
John: Can you give us an example of a specific question? I mean, obviously, we don't want to get into, you know, early information or client information. But if you can give an example of a specific question and how you match that with a specific data collection technique, that would be very interesting, I think.
Michelle: Sure. So a really common thing is differentiation, right? When I was working on the client side, one pain point that came up a lot, and when I have conversations now with clients this still seems to be a persistent problem, is that a lot of times you're measuring very similar prototypes. So, for example, five strawberry flavors or seven clean fragrances, right? They're all very similar, but there are slight nuances that differ. And so when they do their traditional consumer test, looking at strength and liking and these very typical hedonic measures, they often don't differentiate, right? So maybe you have a top two or three performers. And ultimately what ends up happening in that situation is that the brand manager says, "Well, you know, I like prototype number two the best," and so we're going to go forward with that. And our stance has really been that you can do better than that and have more data-driven business decision making on which one to move forward with. So instead of saying, well, the brand manager really liked prototype number two of all the eight that we looked at, instead say, okay, we have these top three performers, let's look maybe at the physiological differences. What might be driving that liking? Why are these liked more than the others? Which one is actually a better fit to the brand, or maybe a better fit to the concept? And that could be something with implicit testing; often we do a combination of the two. Perceptually, I think implicit testing can be really informative in that sense, to say, okay, these are all strawberry flavors, but this one's actually perceived as being more healthy, right? And that could be really important when you're thinking about moving, say, a brand in a new direction.
So you have these new flavors that you're trying to make because you're trying to establish this feeling of health, or something that's actually very difficult to establish when you're talking about a fragrance, right? Because when you're talking about moisturization and a fragrance, that's something that's very difficult for a consumer to answer when you ask them outright. But when you do it through these other measures, maybe it's an implicit study, or maybe it's looking at, okay, how does this image of a concept of moisturization fit with the fragrance experience? You can look at some physiological measures to see whether they're a good fit, or whether they're kind of jarring or surprising.
John: Okay, that's very helpful. Maybe we can go through those two examples you just gave: implicit association and physiological measures. Some of our listeners are probably familiar with implicit association tests; this is common in cognitive psychology, but I think it's very useful here. Most of my postdoc was in computational neuroscience, and I felt like there are a lot of these measures, like reaction time measures and other ways of collecting response information, that we should be using more in sensory science. So can you tell our audience what a typical implicit association test looks like, as you use it in your consumer research? And then how do you use that to get insights as far as...
Michelle: Sure. So the basic idea of an implicit reaction time test is that you have these semantic associations in your brain. The more familiar you are with two concepts being matched together, the faster you are at matching them. So the example I like to use is: if you had to say whether Jennifer was male or female, you'd be really quick at saying that Jennifer is a female name; you've heard that combination many times. But if I were to use the name Taylor, it might take you longer to say it's a female name, because it can be used in both situations. So your association with it being female is not as strong as for Jennifer, right? That's how reaction time works: you have a stronger connection between those two ideas. When it comes to consumer testing, one of the things you said when you were talking about it just a second ago was that there are multiple methodologies out there. And I think that's really important for people to realize: there isn't really one totally typical test. If you were at Pangborn this past summer, you probably saw all sorts of different types of implicit tests going on. Some were really good and some were not, I would say, validated. And some were a little bit too academic to be used in consumer science, I think. So that's one thing to realize: there are lots of different types. The type that we tend to use, and that we see a lot out there, is the go/no-go implicit test. In a traditional academic test, you have multiple concepts being tested against one another. So not only is Jennifer either male or female, but she could also be a scientist or a homemaker, right? You're trying to put together these two ideas: the gender of the name as well as the occupation. Well, that doesn't actually work very well in a consumer test, I think.
It becomes a little too challenging for the consumer participant to really do. And also, you have to come up with these multiple concepts to test in your consumer study, and often that doesn't really work. So instead of doing that, you can do what's called a go/no-go test, which was also established by the same people that developed the original traditional test. In that case, just one thing comes up, and you press the space bar if you agree it's a match, right? It's either a go or a no-go. So you can have an image come up on the screen. Is that female? Yes. Is that a color? Yes. Or Apple the company, right? Are Apple computers innovative? Is this fragrance healthy? So you can go from there.
John: And so then you're getting the reaction time: how quickly the space bar is pressed.
Michelle: And the faster they respond, the stronger that association is, and then you can start to do a lot of different analyses on the back end of that. So you not only get their reaction time, but if you run a sufficient number of people, then you can also get sort of the certainty of the response, meaning: what's the percentage of people that actually agreed? Because you're getting agreement and non-agreement, right? So you can say that not only is that a strong association, because it was a very fast response, but also 80% of the people agree that it's a match, right? That would mean it's pretty strong, and that can be really informative when you're looking at innovation. So if you're looking at new ideas and trying to explore spaces, and you have a really weak association but a high number of agreements, say 70% of respondents agree but the association is slow, well, that's room for improvement, I would say. You can strengthen an association. And so I feel like one really informative thing about implicit is that you can tease apart that data and use it to develop something better. So if it's a low association for healthy for this brand, but a lot of people agree that it is healthy, then maybe you can strengthen that association through marketing.
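The two go/no-go metrics Michelle describes, speed of response and percent agreement, can be sketched in a few lines. This is a hypothetical illustration, not HCD's actual analysis: the function name, the sample data, and the 600 ms / 70% cut-offs are all invented for the example.

```python
from statistics import mean

def gonogo_summary(responses, fast_ms=600, agree_threshold=0.7):
    """Summarize go/no-go trials for one attribute pairing.

    `responses` is a list of (reaction_time_ms, agreed) tuples, one per
    respondent. Thresholds are illustrative, not industry standards.
    """
    agree_rate = sum(1 for _, agreed in responses if agreed) / len(responses)
    go_rts = [rt for rt, agreed in responses if agreed]  # only "go" trials have RTs
    mean_rt = mean(go_rts) if go_rts else None
    fast = mean_rt is not None and mean_rt <= fast_ms
    if agree_rate >= agree_threshold and fast:
        verdict = "strong association"
    elif agree_rate >= agree_threshold:
        verdict = "agreed but slow: room to strengthen via marketing"
    else:
        verdict = "not associated"
    return {"agree_rate": agree_rate, "mean_go_rt_ms": mean_rt, "verdict": verdict}

# e.g. "is this fragrance healthy?": 7 of 10 agree, but they respond slowly
trials = [(850, True)] * 7 + [(700, False)] * 3
print(gonogo_summary(trials))
```

With this toy data the pairing comes out as "agreed but slow", the opportunity case Michelle describes: most people accept the match, but the association is not yet automatic.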
John: That measures some of the interaction between what you might call system one and system two, fast and slow. At the beginning you've got these slower cognitive processes that eventually train up into faster, maybe cortical, responses, where there's not really much processing involved in the response once you've trained it.
Michelle: Yeah. Yeah. These are totally learned associations, right? So the more exposure you have, the stronger those associations become. So that's what we're trying to help with. When you get a low association, it's not the end of things, right? It could mean you have an opportunity.
John: Right. And that means more marketing, because really good marketing is just education, right? Good marketing is education.
Michelle: And you know, something I saw more in my industry life is that you do have this disconnect between marketing teams and R&D teams. A lot of the time they work in silos. And this is a good way for them to connect. We found, when we're working with clients, and I think we were talking about this before, John, about how to get sensory a seat at the table, that when you're able to interpret results in a way that really brings in marketing, it's like you're understanding something about the sensory, but you're also bringing something to marketing that they can do something with, too. And so we found that to be really powerful in helping the communication between these silos that don't always agree.
John: Yeah, that's fascinating. Now, let me just ask, this is always on the cutting edge, but I'm kind of curious about your opinion. Something I've gotten involved in recently myself is surveys on Alexa, where you've got Alexa administering the survey. And I think the metadata there is very interesting, because you're going to get how long it took someone to answer the question, right? So I think there's a lot of potential; there's kind of an implicit measure that you're getting. And you're eventually going to get vocal tone, right? Because they're starting to get into emotion recognition. So you're going to detect whether or not someone is actually happy when they're telling you that it's, you know, a seven out of nine or whatever on liking. So, yeah, it's an exciting time for sure. So what's the range of new technologies that you're using? What are your facilities there? Do people come into a central location, or how do you conduct these tests?
Michelle: Yeah, we don't have a central location. We have offices where our analysts and myself and other members of the team work, but we partner with central location facilities, or with our clients' facilities, to field our research. The great thing about that, though, is that it means we're not really limited to any particular place. We take our equipment anywhere. We've been all over the world. We have offices in Europe. We have labs in Asia. So anywhere that you can find participants and a fairly quiet room, we can test people.
John: With these new data collection techniques, though, and I'd like to talk about the physiological measurements in a second, are you pretty much in that central location paradigm? Or are you doing home use testing as well, would you say?
Michelle: So that's an interesting question, and we see a lot of this more recently. Everybody wants to do home use tests. When you're doing physiology, though, you really have to be mindful of environmental noise. And what I mean by that is, if you're testing in people's homes, you have no control over any of the things going on around them. You don't know if somebody is cooking something, or if there's a dog barking or a new baby in the background, right? You don't know what's going on in their homes, and we have no control over it. So in most cases, particularly in sensory, we prefer a central location testing facility where we can control the environment a little bit better. But we have done studies that use wearables, for example. And a lot of people do want to do that, because it's so easy to make that leap about a technology: oh, it's technology, it can do all these things. We trust technology when it says that it's measuring heart rate, that it's really measuring heart rate. But that's not necessarily true. Even some of the top wearable products out there, you know, there's the Wahoo, there's the Empatica, and those can cost over ten thousand dollars; when you get into the really expensive realm of wearables, we really start to trust the data, right? But having been on the side where we explore a lot of these technologies, we found that they don't handle movement very well, and people at home move throughout the day. You get up, you move room to room. The second that happens, the signal actually drops. And so that heart rate you're seeing on any of your wearables is not your heart rate; it's an approximation. You're missing probably 80% of the data. So that's definitely something we've had to deal with, and there are ways to work around it.
But the reality of the situation is that the technology isn't there yet for home use testing with any of these neurophysiological measures, to really get at the psychological understanding that a lot of the more academic-grade products provide. Wearables aren't there yet, for sure.
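One simple way to see the data-loss problem Michelle describes is to screen a wearable heart-rate stream for motion artifacts and count what survives. This is a minimal sketch; the function name, thresholds, and data are invented for illustration, and real artifact-rejection pipelines are considerably more sophisticated.

```python
def usable_hr_fraction(hr_samples, lo=35, hi=200, max_jump=20):
    """Fraction of heart-rate samples (bpm) passing two naive checks:
    physiologically plausible range, and no implausible jump relative
    to the previous accepted sample. Thresholds are illustrative only."""
    kept = 0
    prev = None
    for hr in hr_samples:
        in_range = lo <= hr <= hi
        smooth = prev is None or abs(hr - prev) <= max_jump
        if in_range and smooth:
            kept += 1
            prev = hr
    return kept / len(hr_samples)

# a stretch of sitting-still data interrupted by motion artifacts
stream = [72, 74, 73, 75, 180, 40, 76, 190, 74, 73]
print(usable_hr_fraction(stream))
```

Even this toy filter discards 30% of the hypothetical stream, which is the point: what a wearable reports during free-living movement is an approximation built from however many samples survive screening.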
John: Right. Okay. So now this leads us to the second question I was going to ask, which is about the neurophysiological measurements. You had those really interesting examples where you talked about implicit association: how you might look at the different metrics coming out of the association task and give guidance to marketing, or have some sense of what you should do to strengthen an association that's in the direction you want but needs strengthening. You know, that's an operational problem. So maybe you can talk about some physiological or neurophysiological measurements that you like to use, and how you incorporate those data into your recommendations.
Michelle: So I think one case where this has been really useful is with claims, right? There's a whole legal issue that goes along with that, but it's being able to say that, say, a shampoo is actually invigorating. You can use different physiological measures to show that it's actually arousing physiologically. Or the opposite: a facial lotion that is physiologically relaxing. There are ways to look at heart rate, for example, to get at relaxation. Galvanic skin response can give you physiological arousal; interestingly enough, it's not very good at measuring relaxation, it only goes up, not down. For relaxation, or maybe more emotional-cognitive processes, you'd look at heart rate variability. Again, it has to be in a very controlled environment to be able to do something like heart rate variability. But those are some of the cases. We do this also with package testing. When you combine it with something like eye tracking, you can show, okay, when the person was looking at the logo, for example, what was their physiological response? And this can be a really good way of getting at the true consumer experience, because you're not interrupting them to ask, or having them recall. You're looking at their first look at, say, a shelf, and being able to see, with a variety of different physiological measures: are they excited by it? Are they distracted? Is there confusion or frustration going on? So things like that have been really useful. Of course, in ad testing as well, which isn't as applicable to sensory, although in the workshop we did at Pangborn, we were showing how well a fragrance matched an advertisement for the perfume.
So looking at the perfume experience physiologically, and then seeing how the experience of watching the advertising matched up to that. You really see: is it a fit? Are the sorts of emotions and physiology that you're evoking through the fragrance a match to what you're doing in your marketing? My boss, Glenn Kessler, always says: does the product meet the promise?
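Heart rate variability, which Michelle mentions for relaxation and emotional-cognitive measures, is often summarized with time-domain metrics such as RMSSD, computed from the inter-beat (RR) intervals. A minimal sketch follows; the RR series are hypothetical, and this is only one of several standard HRV metrics, not necessarily the one HCD uses.

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive differences between
    inter-beat (RR) intervals, a standard time-domain HRV metric.
    Higher values are commonly read as more parasympathetic
    (relaxed) activity, though context and controls still matter."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# hypothetical RR series (ms), both near a 75 bpm baseline
calm = [800, 840, 790, 850, 795, 845]      # beat-to-beat timing varies
stressed = [800, 802, 799, 801, 800, 801]  # metronome-like, low variability
print(rmssd(calm) > rmssd(stressed))       # relaxed series shows higher HRV
```

The comparison illustrates why HRV is used this way: the two series have almost the same average heart rate, so mean heart rate alone would not distinguish them.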
John: Yeah, that's right. I was at a very nice workshop at Pangborn with MMR, where the idea really was the promise. I think this is what we strive for: a brand is a promise, right? You've got marketing that promises some experience is going to happen, and then you need to actually fulfill that promise through the sensory experience. That's what I love about sensory science: it is the science of, you know, the experience of life.
Michelle: Yeah, and that's why that communication has to happen between marketing and R&D, right? Because I think a lot of the failure of new products in the market comes from second purchase, not first purchase. People buy the product, and then they become disappointed because it doesn't meet that promise. When expectations are not met, liking decreases and they're not going to repurchase it.
John: That's interesting. So do you have a favorite example when it comes to the neurophysiological measurements, where you feel like some insight was obtained that would've been very hard to obtain with traditional methods, that you could explain to our audience? I wish more people made use of these technologies.
Michelle: Yeah, we were working with a personal care product, a bathing product, and doing a matchup of the behaviors people engaged in while using it with their physiological, emotional responses throughout that experience. So we behaviorally coded what happened during the experience, took the physiological measures as well, and matched them up. We were able to find the product pain points, the moments when they had the more negative emotional responses, and that was really informative to the end client. They were actually able to create a new product from that, which was pretty cool. So we've done that sort of paradigm, behavioral coding with physiological measures, and it provides innovation opportunities. Not only did they come up with a new formulation, but they were also able to come up with some instructional material for the packaging that would lead to a better, more positive experience. And we were able to test those changes to show that they really did improve things. So that's pretty cool.
John: That's fascinating. Something really neat I'm getting out of this is the idea that these data can be used in multiple ways, like with the eye tracking data. I had really only thought about, okay, we want to get people to look at our package, so what do we need to do to draw their attention to certain places? I hadn't really thought about simply using eye tracking to keep track of the moment when they first look at the thing, and then correlating that with other measurements we're taking at the same time. That is something where I think we can do a much better job in sensory science: providing this comprehensive, 360-degree view of the consumer experience, or as you call it, the consumer decision-making process. I think that's where we have the chance to offer something that you could never get by looking at, say, Amazon reviews. I think there's a bit of a danger right now in the rise of a lot of AI-based, crystal-ball-type technologies that promise that simply by surfing through publicly available data sets, you'll never need to run research again. And, you know, I'm not completely negative about publicly available data sets. I just think you should be aware of what, you know, the kind of...
Michelle: It's informative for sure. Yeah.
John: But you should also remember everybody else is collecting that information, too. It's publicly available: if you can get it, other people will get it. And so it's not going to be as special as these well-designed, careful scientific studies with insights that are unique to you. So I think that you're going really deep with your research. I think that's...
Michelle: We do call it digging deeper. And, you know, it definitely is customized research. It's not just a generic thing that everybody's going to have. Often it's very much catered to whatever research situation we're in, with different types of products, different types of research teams, different types of budgets. So there are all sorts of pieces that go into it. But, yeah, I think the whole field is developing, and all these different inputs are definitely important. As part of innovation and research at HCD, I'm always looking into the different methodologies that are coming up. So, like you were saying, looking at natural language processing with the different reviews that people might be posting, et cetera. And it is really interesting, but it also brings in a lot of questions sometimes, too. The quality of the data in is the quality of the data out. So you do have to be cautious. Just think about the psychology behind doing reviews at all: are those people going to be more pessimistic? Taking into account things like that is, I think, often very similar to the issue with the wearables. People just automatically trust it: oh, it's this new technology, it's so cool.
John: Right, right, exactly. You have to watch out for that, for sure. Okay, so we just have a few minutes left here, so I'll try to wrap things up. I'd like to ask a little bit about data integration. One thing you've talked about is taking one form of data and allowing it to set the context for the interpretation of other data. I think that's really interesting. Now, where are you when it comes to... I mean, with machine learning, it's possible to take different inputs, process them, and extract the best features; or sometimes you build two different models and form a blended model, or you might stack things. How important do you think the kind of high technology of machine learning that's starting to become popular is, versus something more scientifically grounded? There are kind of two approaches to science right now, in my experience. There's the computational one: collect a lot of data and then try to find insights by unleashing a lot of computing power. You might take lots of different data sources, throw them all into a big pile, and let your algorithms run on it. That is an approach that can yield interesting insights. But what I've been hearing from you has been more the carefully designed research, where you've got different data streams, but you already have a psychological model of how they interact with each other. Where do you find yourself falling on that: the more computational approach, or the more, how would I say this, model-driven approach, where you've got a good mental model of what's driving the response?
Michelle: Honestly, we do both. You know, I'm not a stats guru by any means, nor am I an AI or machine learning guru. So my approach is automatically to think about what things fit together to tell a story: a very hypothesis-driven approach. For example, I really like the power of starting with something traditional like MaxDiff, getting the rankings of consumer needs, and then following it up with something like implicit to get an understanding of the gaps. Then you have the need gaps; combining those two methodologies and looking at how they correlate can give you a lot of information. But we also do the other approach as well. It often depends on the client we're working for. Because if you're working for an end client, not a flavor and fragrance house but an end client, they have just a wealth of data. I mean, the fragrance houses do as well, but a wealth of data on purchase, on trained panels, all sorts of background consumer studies they've done. They have machine data on rheology, et cetera, on, you know, lotion or whatever it might be. And then being able to use some sort of modeling approach has definitely been very informative for us. So we've definitely used things like a Bayesian approach to look at predictions: if you were to change the bubble size on a foam, what's it going to do to liking? That's definitely been very informative in some cases, and it's not something I probably would have been able to figure out very well without it. So I think both approaches are really good. It just depends on what you've got, right? If you have all that data, then great, by all means, everybody should be doing it.
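The MaxDiff-plus-implicit pairing Michelle describes can be sketched as a simple gap analysis: needs that consumers rank as important (MaxDiff) but don't yet associate with the brand (implicit) are the gaps worth working on. All attribute names, scores, and the scoring scheme below are hypothetical, a toy version of the idea rather than HCD's actual method.

```python
# Hypothetical MaxDiff importance scores (0-1, from need rankings) and
# implicit association strengths (0-1, derived from reaction-time data)
# for one brand.
maxdiff_importance = {"healthy": 0.9, "indulgent": 0.4, "convenient": 0.7}
implicit_strength = {"healthy": 0.3, "indulgent": 0.8, "convenient": 0.65}

def need_gaps(importance, strength):
    """Rank attributes by gap: consumers say the need matters (MaxDiff)
    but don't yet associate the brand with it (implicit). Large positive
    gaps are candidates to strengthen via product work or marketing."""
    gaps = {attr: importance[attr] - strength[attr] for attr in importance}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

print(need_gaps(maxdiff_importance, implicit_strength))
```

With these toy numbers, "healthy" tops the list: it matters most to consumers but has the weakest implicit association, exactly the kind of need gap the combined design is meant to surface.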
John: Right. Right. There's the computationally intensive, data-heavy approach, where you have a lot of data and you've got to get insights out of it. Maybe you have a lot of noise too, but you can kind of overwhelm the noise with computational power and sheer data quantity. That's one approach. And then at the other end, you might have a carefully designed study where you don't actually have a lot of data, but it's a very well-designed study with a clear hypothesis, a clear model of what you think is going on, and you get your insights from that. Yeah, I do think that we need to be skilled in, or at least aware of, the range of approaches available to us, and the range of data sources. So maybe that brings us back to where we started, and I hope our listeners appreciate now that there are different sources of data available. If they should have questions, Michelle, how should our listeners get in touch with you? Where can they find you, and where can they find HCD Research?
Michelle: Feel free to check out our website, it's hcdi.net, or email me at firstname.lastname@example.org. Follow us on Twitter at HCD Neuroscience. We post a lot of different technological and scientific research studies there, so feel free to follow us.
John: And you're also on LinkedIn, right? People can find you there. This is great. These are really exciting times, and I think there are lots of ways to approach these questions. I think we can all agree that we have to do more than we've done in the past in order to really figure these things out.
Michelle: Why not? We can.
John: Yeah, we can do it. And it's exciting times. The best time in the history of the world to be a researcher.
Michelle: It's fun.
John: Yeah, it's great. Okay. So that's it. I hope everyone enjoyed the call. Thank you, Michelle.
Michelle: Thank you, John.
John: If you enjoyed this show, please remember to give us a positive rating on whichever platform you're listening on, and to subscribe to AigoraCast. And we will see everyone next time. Alright. Thanks. Thanks, Michelle.
Michelle: Thank you.
John: Okay. That's it for this week. Remember to subscribe to AigoraCast to hear more conversations like this one in the future. And if you have any questions about any of the material we discussed or recommendations about who should be on the show in the future, please feel free to contact me on aigora.com or connect with me through LinkedIn. Thanks for listening. And until next time.
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!