Rebecca Bleibaum - We’re All Individuals
Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!
Becky Bleibaum, M.A., is president and co-founder of Dragonfly SCI, a sensory science consultancy with a mission to help entrepreneurially spirited teams, and their products, become more successful in the marketplace.
Prior to founding Dragonfly SCI, Bleibaum was the Chief Sensory Officer at Tragon Corporation, working alongside Herbert Stone and Joel Sidel for nearly 30 years, beginning as their first intern while a student at UC Davis under the guidance of Rose Marie Pangborn and Howard Schutz.
She is also co-developer and instructor of UC Davis Extension’s web-based “Applied Sensory & Consumer Science Certificate Program”, serves on their Leadership Board for Agricultural and Environmental Sciences, teaches sensory science in the Master, Professional, and Intensive Brewing Programs, and in 2016 was awarded the Outstanding Service Award for her contributions and dedication to the University's continuing education programs.
Bleibaum is co-author of Sensory Evaluation Practices, Fourth Edition (2012), A Practical Guide to Comparative Advertising: Dare-to-Compare (est. 2018), along with a variety of chapters, papers, and presentations on sensory and consumer science. She is the past Chair for ASTM E-18 on Sensory Evaluation and has been active in ASTM since 1985. She is a founding member of the Society of Sensory Professionals and was conference chair for the second meeting in Napa Valley, in 2010. Bleibaum has received multiple ASTM awards for contributions, has spoken at numerous workshops and events, and has given hundreds of presentations on sensory science over the years on a wide variety of FMCG products.
Transcript (Semi-automated, forgive typos!)
John: Becky, thank you very much for being on the show. You're almost too qualified.
Rebecca: Yeah. Thanks, John. Yeah. Happy to be here. Absolutely. I'd love to be part of the conversation.
John: Great. So, Becky, one thing that we were talking about before this call is that AigoraCast is about how new technologies are impacting sensory and consumer science. And of the many ways that you've kind of been involved in that impact, I think chief among them has been your pioneering program at UC Davis, the online learning program there. So maybe you can first talk to our listeners who aren't familiar with the program about its details, and then talk about how technology has enabled you to reach a much wider audience than you would have otherwise. Plus, whatever else you want to talk about.
Rebecca: Well, I'm glad we can start there, because, you know, when I think about the things in my career that I'm most proud of — that program was really the brainchild of Howard Schutz. He decided early on, when online learning first started around 1998, let's do an online sensory program, because we're such a small field — how do we train people around the world in this system? So, you know, Davis was looking at all of that, and in about 2001 we launched the program. It's really got four courses. Course one covers the fundamentals, the foundations of where sensory started — psychophysics and measurement and all of those things. Course two is all about sensory methods, in as much detail as we can go into. Course three is consumer methods. And then my favorite, what I really love to do and why Howard brought me in, was to teach applications — really, the business applications we've seen, how companies use sensory science to build brands or launch products. And I think we really give it a nice structure. It's a year long; we take it on the quarter system, so it's a rigorous program. We've taken a lot of people through it. They have assignments and quizzes and discussion boards and all sorts of things. And yeah, it's been a lot of fun. We've met a lot of great people, and I think we've had some impact in raising the level of awareness of this science.
John: Definitely. And, you know, do you know how many graduates you've had at this point?
Rebecca: Yeah. You know, when we started, we were hoping we would get 25 people every year to take this program — we'll do it for 5 years and be done. Then we took it up to 40. Now we're up to 60 students per cohort.
Rebecca: So, you know, it's still not high numbers. We're talking about a thousand plus, maybe, through our school. But think about the number of people that attend Pangborn — it's about a thousand.
Rebecca: So I think we've trained a fair number.
John: Yeah, that's fantastic. And what have you seen in terms of the topics that you've covered? Maybe we should think about how things have changed. So, thinking about the business applications you were mentioning a few minutes ago — have you seen a change in the sorts of business applications that are relevant in sensory, especially in light of all the new technology that's changing society?
Rebecca: Well, I think there's tremendous change. And, you know, in this field, people just take these methods and apply them. I'm fascinated by the discussion boards and the ways people are applying these things. But it's still, when it comes down to it — basically there's a whole innovation side: how do you get your ideas to develop products in the first place? So we cover that, the whole innovation side, and observational work and things like that. But then, you know, a lot of product-based research is still discrimination. Can people tell the difference? If they can tell the difference, what kind of difference is it? Does it matter? And to whom does it matter? So I think the biggest change we've seen lately is how we deliver the material, because, you know, the course is 20 years old, and how we used to deliver it was very text based. We're really making a huge improvement, a huge redo. We always update it, but over the next couple of years we're completely redoing how we're delivering the content of the material.
John: Is there video now?
Rebecca: Yeah. There's a lot of video, a lot more demonstration, a lot of very interactive things and smaller segments. Yeah, people don't want to listen to long things, so we're trying to make it very user friendly.
John: I see. And have you thought about a partnership with Coursera or any of these, you know, MOOCs? I guess what you have is basically a MOOC — maybe not exactly massive, but it's definitely an online course.
Rebecca: We have. Davis does a lot of Coursera materials, and they are trying to pull us in. And I'll tell you, I don't want to get involved in another program at the moment — I think we have to focus on delivering what we do very well. We've thought about a fifth course, you know, a graduate-level course, because there is a lot of new material and a lot of new things, especially with context and getting out of the lab. And how do you help these craft people? We do a lot of craft work, and they don't use the same types of tools and techniques — or they just don't have time to do it, they don't have the resources to do it. So trying to find new ways to engage them in this whole science is very interesting.
John: That is interesting. And on a related note, it would also be interesting to hear how your company has kind of followed those same trends. Like, when did you found Dragonfly SCI? You and Heather Thomas founded Dragonfly SCI, correct?
Rebecca: We've got three ex-Tragon folks at Dragonfly — "Tragon" and "fly." And I didn't come up with that; somebody else came up with that. But that was pretty funny. But, you know, we decided that part of our big thing — we do product research, but we also have a training program, because we want to help these companies. We're working with a company right now that has a product that they still make in small batches, right? They really make small batches — I'm talking about, like, a few gallons of product. And now they've scaled up; the consumers loved this product, and they cannot continue making these small batches. They really have to commercialize, and they bring in very talented people to take it from that small batch and make sure that the sensory experience doesn't change as they scale up. And that's where descriptive analysis comes in — quantitative descriptive analysis is one of our key strengths. The product developer, the guy that they brought in, said, "I couldn't do this without descriptive. I need the feedback loop to understand, as we make changes with everything that they're doing, can they still deliver that same sensory experience to the consumer?"
John: That's fascinating. And so, I know that you partner with DraughtLab fairly regularly. Is that part of this sort of research, where they're using app-based data collection for this? Or is this more — what is the right way to say this — is the panel more centralized? Can you kind of take us through what a typical project for you looks like?
Rebecca: Well, yeah. The system that we used to use in quantitative descriptive analysis was kind of a long process. You would spend about 8 to 10 hours in your language development, and then you'd go into data collection with replication. It would take 3 or 4 days to collect data on products — you know, it's an intensive program. And we've sped that up a little bit. Now we're using culinary experts who come to us with fantastic language. They have known sensory acuity. And we can get through a language session and data collection within a short amount of time and really deliver results within a session or two. They can walk away with some fantastic learning from a quantitative analysis. We're not partnering with DraughtLab, but I love DraughtLab's mission of quality, because, you know, the whole thing about how you can't make the same product twice is very true in quality. Most companies can't get two control products to pass a discrimination test. And that's not necessarily bad — that's just reality, right? So you want to know: where are your benchmarks, where are your goalposts for your product production? And anything that you change — does it fit inside that window, that framework, that sensory profile, or is it outside?
John: And do you tend to prefer that quantitative descriptive analysis to difference testing, for example? I mean, do you ever really use difference testing, or do you mainly have some sort of, you know, idea of what's in spec or out of spec for a sample? What's your kind of take on how to match a sample? What would you recommend?
Rebecca: Well, I think discrimination testing has its place. I'm not anti-discrimination testing, but we use it for screening, for sensory acuity. We may use it for packaging, as you change packaging from one to another — you know, how does that impact the product. But I think it's a little misguided, for us, to look at it from a QC standpoint, because products vary — every product varies. You're trying to have your sensory signature, and at what point does it go outside of that? So, you know, to me, that's what DraughtLab is helping companies do: understand what your sensory signature is. What are you trying to produce, and how do you rate attributes — not just is it different, but at what point does it become too much or too little of something?
John: Right. And then do you lose that signature?
Rebecca: Do you lose that signature and how does that impact your brand?
John: Fascinating. Now, another topic along this line that I know you have some opinions about that I'd like to hear is individual differences in descriptive analysis. The fact that people aren't identical — that when you have a panel of people, there are individual differences, biological or genetic differences, that have an impact. How do you see that playing into quantitative descriptive analysis?
Rebecca: Well, that's a great topic. And I think that as a science, we have to embrace the individual differences that people demonstrate. You know, people aren't the same, and trying to calibrate them to be the same goes against the science of what we know about behavior, about physiology. We just had an example with a company we were working with. They have something that's really a food safety issue — an antimicrobial that gets sprayed on a product, and it interacts with the fat in some way that we don't quite understand, but the chemicals it produces are ones people are genetically sensitive to, or not. And so we're trying to find real applications for these techniques, and where descriptive analysis plays in is, we took it to the panel and we looked: some people did not get the sensation at all; other people got it very strongly. And it's like, well, we're looking for patterns of behavior — the panel represents a certain set of the population, so some consumers will get it and some won't. But for the ones that do, how serious of an issue is this? So, you know, we went beyond the mean values. We went to the individual, the raw data, because you really have to drill down and understand the quality of the data and what each person contributes. And that's why we have the panel, and that's why we do three replications most often, sometimes four.
John: So you wouldn't say, then, that someone's ability to agree with the rest of the panel is the measure? What is a good panelist? I'd like to hear your definition. How do you know when someone is a good panelist? What are the criteria that you look for?
Rebecca: Well, you know, Stone and Sidel came up with some very specific statistical analyses to determine that — one-way analysis of variance, the variances we look at. And I'll tell you, RedJade is another software program that we rely on heavily, because it builds in the Stone and Sidel way to look at panel performance measurements. They've got page after page of ways to look at it: is the panel performing as a unit? And if not, are people crossover interactors or magnitude interactors? And, you know, we're not looking to throw people out. We're looking to say, are they giving us some general information and reacting in a similar way? And if they're not, what's the reason for that? It's looking at panel performance measures over time — quantitatively looking at the data.
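[Editor's aside] The panel-performance checks Rebecca describes rest on analysis of variance over a panelist-by-sample design with replication. As a rough, illustrative sketch — not RedJade's actual implementation; the data are invented and the choice to test samples against the panelist-by-sample interaction (panelists treated as random) is an assumption — the core F ratios can be computed in plain Python:

```python
# Sketch of a panelist x sample ANOVA with replication, as used in
# panel performance checks. Data values are invented for illustration.

def panel_anova(data):
    """data[p][s] is a list of replicate ratings by panelist p on sample s."""
    P, S = len(data), len(data[0])
    R = len(data[0][0])
    all_vals = [x for panelist in data for cell_reps in panelist for x in cell_reps]
    grand = sum(all_vals) / len(all_vals)
    # Cell, panelist, and sample means.
    cell = [[sum(data[p][s]) / R for s in range(S)] for p in range(P)]
    pan_mean = [sum(cell[p]) / S for p in range(P)]
    samp_mean = [sum(cell[p][s] for p in range(P)) / P for s in range(S)]
    # Sums of squares for samples, panelist-by-sample interaction, and error.
    ss_samp = P * R * sum((m - grand) ** 2 for m in samp_mean)
    ss_int = R * sum((cell[p][s] - pan_mean[p] - samp_mean[s] + grand) ** 2
                     for p in range(P) for s in range(S))
    ss_err = sum((x - cell[p][s]) ** 2
                 for p in range(P) for s in range(S) for x in data[p][s])
    ms_samp = ss_samp / (S - 1)
    ms_int = ss_int / ((P - 1) * (S - 1))
    ms_err = ss_err / (P * S * (R - 1))
    # Panelists treated as random: samples are tested against the
    # interaction; the interaction is tested against pure error.
    return {"F_sample": ms_samp / ms_int, "F_interaction": ms_int / ms_err}

# Three panelists, two samples, two replications (invented numbers).
ratings = [
    [[6.0, 6.5], [3.0, 3.5]],
    [[7.0, 6.5], [4.0, 4.5]],
    [[5.5, 6.0], [2.5, 3.0]],
]
print(panel_anova(ratings))
```

A large sample F with a small interaction F is the pattern you want: the panel separates the products and does so as a unit. A large interaction F flags the crossover or magnitude interactors Rebecca mentions.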
John: This is so fascinating to me, because a big thing for me right now in my work is machine learning, right? A typical project is trying to predict liking, but it could be any other measure of interest — there are a lot of measures being nominated as maybe more business relevant than liking. But whatever it is that you're trying to predict, what oftentimes happens is you have things you know about the samples and things you know about the people, and you want to put that all together and, for a person-sample pair, make some sort of reasonable prediction of how that person's going to respond to that sample. That's the general machine learning enterprise: collect data about people and about samples, put it together, and make predictions about how the people are going to appreciate the samples. And the thing is, usually the average values from the panels are used, right, to describe the products. But I think it's a lot more interesting to look at all the data, not just the averages. Maybe the mins and maxes are actually more informative — if you've got some product where not all of the panelists are responding to it a certain way, but some of the panelists are, then there may be some people in the population for whom that's not going to be a good product, and for others it can be a great product. And I think that's a huge opportunity for us: to start to analyze the full set of panel data, not just the averages, in terms of trying to build these more predictive models.
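[Editor's aside] To make John's point concrete — with entirely made-up numbers, not any real panel data — here is a minimal sketch of summarizing raw panel scores by their spread as well as their mean, so a product that splits the panel doesn't look the same as one the panel agrees on:

```python
# Illustrative sketch (invented data): describe an attribute per product by
# mean, min, max, and range across panelists, so disagreement is visible
# instead of being averaged away.

panel_scores = {  # hypothetical bitterness ratings, one value per panelist
    "product_A": [5.1, 5.0, 4.9, 5.2],  # panel agrees
    "product_B": [1.0, 8.5, 1.2, 8.0],  # panel splits: some barely detect it
}

def summarize(scores):
    return {
        "mean": round(sum(scores) / len(scores), 2),
        "min": min(scores),
        "max": max(scores),
        "range": round(max(scores) - min(scores), 2),
    }

for name, scores in panel_scores.items():
    print(name, summarize(scores))
```

The two products have similar panel means, but the range column immediately separates them — exactly the information a model built only on means never sees.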
Rebecca: Are you thinking about segmentation? Or, you know, looking for patterns, for groups of people that do similar types of things — or are you looking at the individual?
John: We're looking at individuals. Yeah. So there's a bunch of approaches that you could take. Like, Thierry Worch and I published on the ideal profile method, which is this method where you ask people: how sweet is this product, and how sweet would you like it to be? What you can kind of infer from that is whether people in general are wanting sweeter products. And then — what we need to know, if we're going to model you and I, is our individual differences: you are looking for something different than I am, right? The exact numbers on those questions — how sweet is this, how sweet would you like it to be, how sour is it, how sour would you like it to be — those exact numbers aren't that important. What's interesting is the relative differences between you and I. If it's the case that I'm saying I want things that are less sweet while you're saying you want things that are more sweet, that's informative to a model, because it indicates there's some individual difference: we're getting the same samples, but we want things changed in different directions. And I have to be careful here, because there's some client stuff that we're working on, so I shouldn't give everything away — but I can talk about the ideal profile method because Thierry and I published it. The point is that if you know a fair amount about the people, in terms of what they seem to be looking for — and there are different ways you can infer this from the survey data — and, interestingly, the nice thing about a machine learning model is that if you have enough data, you can put lots of variables into the model, right? You can try to figure out what's informative. That's where you can start to put in behavioral information or attitudinal information or psychographic data. You can put lots of variables into the model.
You can see what's interacting with what, right? And I'm interested in this topic of the raw panel data, because, you know, there are genetic differences in the population that are leading to differences. Like, I really believe that when people taste things, they don't actually experience the same thing.
Rebecca: Right. Absolutely.
John: When it's in your mouth, you experience different things. And that's one of the reasons why I think data scientists have so much trouble modeling sensory data — they just see it all as numbers. They don't know about all these twists and turns that we know about, you know. But I think it's not an unsolvable problem. I think that if you know enough about the people, and you have that level of detail that you're describing from the panel — not just the means, but really appreciating the differences — then you might be able to make some headway.
Rebecca: Well, I think it's up to sensory science — anybody that's collecting quantitative descriptive data. And I would encourage people: you know, two replications gets you a straight line. In sensory, we're looking for replication. But another part of this is we're trying to get more of the product represented through that model, through the system, right? We've seen this in some claims data — we were working on some really nice class action lawsuit work with descriptive data. How do you demonstrate it? One of the lead counsel attorneys said it's an elegant piece of evidence, because you have a group of people and you're not teaching them anything — they're actually screened and qualified to be there. They create the vocabulary, the language, to describe these products. We say the proof is in the booth. Once you go in and you start measuring things on a repeated basis: are you providing the same kind of information? Are you consistent amongst yourselves? I don't need you to be like me, but I need you to replicate yourself. And that doesn't mean hitting exactly the same place on the scale, because products vary, people vary — all this variability. And that's why the analysis of variance is such a workhorse for looking at the quality of the data, because we are working to feed this into AI models, and the data has to be solid and robust before you can do that. And, you know, now, if you've got a panel and they come in on multiple occasions, you can combine this data and look across the different studies that they've been involved in. Are they still really participating and contributing like we would expect them to?
John: Yeah. That's another thing, you know — you see a lot of these AI startups that are collecting their own sensory data, and it's a giant mess because they really don't know what they're doing.
Rebecca: Well, let's talk about the data quality that you're looking at. I think that's fantastic — it's a great question to ask. And people need to be very open about that and make sure that, as they start using this data, it really can be predictive. You know, I say it's so easy for sensory to lose credibility and difficult to gain it. And we as a science really want to gain that credibility and be able to contribute to these larger models.
John: Yeah, I totally agree with that. So now we're starting to run out of time, and there's a lot of topics I wanted to get to, Becky. One of them is software, because, you know, you worked at Tragon, and then Tragon kind of spun off, I guess, RedJade. Is RedJade what Tragon became?
Rebecca: Oh, my gosh. Well, let me tell you a little bit about software and Tragon, because, you know, I started at Tragon when I was a student at Davis, right? Back in the 80's. At that time, if we were running descriptive analysis or any of that, we'd run down to Stanford Research Institute — I actually still have the old tape that we used at Stanford Research Institute. You know, we wrote software, and Tragon sold that software over many, many years, just for quantitative descriptive analysis. And then I remember the first time we had plotting software — when I was an intern, we had a guy who used to work for Apple Computer, and he developed plotting software that we could use with the Hewlett-Packard system, the old HP plotter. And then, for entering data, we developed digitizing software, so you didn't have to measure line scales — you could actually digitize, and it would collect the input electronically. And then, when we were doing a lot of consumer work, we were looking at systems out there, and nothing quite fit the model that we wanted. So in about 2006-2007, we put a big emphasis on developing our own system. All of the descriptive analysis modeling came as a whole package in the RedJade system, and then the balanced block designs and everything else that we really wanted to incorporate. And then Tragon was purchased by a venture capital firm, and I think there were three huge components: a fielding component, a consulting component, and the software component. We all have different homes there now. But yeah, RedJade is absolutely Tragon.
John: That's the software. Okay. And so — I'm really interested, because as you know, I'm really interested in new modes of data collection — it would be nice to have you just talk about, like, what do you see as the current state of sensory data collection software? What do you think is needed? What improvements would you like to see? If we have some listeners out there who are maybe working on their own software, what should they do? What do we need in sensory science?
Rebecca: Well, you know, for data collection, I really love the handheld — you know, do it on your phone. We were talking earlier about this: we're part of a discussion group with Oregon, California, Washington and Australia to talk about the impact of smoke on, say, the wine industry, for example. And so you have people out in the field looking at: where was the fire? How close was the fire? What age were the berries when you tasted them? I would love to get to the point where we have people out in agriculture, in the field, tasting and entering ratings, so you can track those products through the distribution channel, through the food chain. What happens as you scale that up? What kind of intensities did you see and observe at one point — and really lock that in with a quantitative measure? And then, as you go through the system, how does that change, and what kind of decisions do you make? Put these tools in the hands of people that are really on the front lines. You can give them some training, but, you know, they have to be our eyes and ears.
John: You know, what you're making me think of is that now you've got, like, the Amazon glasses coming that have built-in Alexa, right? So imagine you've got someone in the field wearing these glasses, and they're tasting things and reporting them hands-free — the glasses are giving them a little questionnaire and they're answering, giving their ratings. Maybe that's coming soon; we will see. But, yes — I did a shampoo survey, I don't know if I played this for you, but I did a shampoo survey in my shower on Alexa recently. And it actually works. So we're gonna see all these things come along. And I think about data collection at the edge — Lindsay Barr is speaking at IFT as part of a symposium that I'm organizing on data collection at the edge.
Rebecca: She's been a great one. I think she's really shaking things up — making these tools inexpensive, easy to use, and really suited to the researchers themselves. I mean, some of the systems out there are kind of overbuilt for what she's trying to go after, and I think hers is a very elegant way to do this. So I'm a huge fan of what they're doing. It's gonna be exciting to see where they take it, and with Alexa — who knows where it's going to go.
John: It's amazing right now. One of the themes in technology is that everything that used to be binary is becoming continuous. Everything that used to be, like, okay, it's central location or it's home use — it's getting to be a continuum. Quantitative and qualitative are coming together. There's all this blending of online and offline. That's the theme that I see emerging.
Rebecca: Let me tell you one more thing. You know, we use a lot of JAR scales — just-about-right scales. We all use them, but it's so difficult. I think our goal is really to help the product developer understand the product experience, and just-about-right scales can be misleading, for sure.
John: I think they're useful — useful as a diagnostic. Like, if you and I are in the same study and we taste the same products, and on average my JAR score for sweetness is lower than yours, I think that says something. Suppose three is JAR, right? And you average four and I average two. I think that's diagnostic, because what it means is, on average, I found these products to be not sweet enough and, on average, you found them to be too sweet. That kind of information is diagnostic for finding individual differences. So I don't know if the individual JAR responses are that informative, but I think when you take the averages across a common set of products and compare them — and especially over several attributes — those differences are useful as inputs to a model. I wouldn't trust them by themselves.
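[Editor's aside] A minimal sketch of the diagnostic John describes, with invented ratings: average each consumer's JAR scores over a common product set and look at the signed shift from the just-about-right point. The names and numbers are purely illustrative.

```python
# Sketch (invented data): per-consumer mean JAR shift for sweetness on a
# 5-point JAR scale, where 3 means "just about right".

JAR_MID = 3  # 1 = not nearly sweet enough ... 5 = much too sweet

jar_sweetness = {  # hypothetical scores from two consumers on four products
    "consumer_1": [2, 2, 3, 1],
    "consumer_2": [4, 5, 3, 4],
}

def mean_jar_shift(scores):
    """Positive: products too sweet for this person; negative: not sweet enough."""
    return sum(scores) / len(scores) - JAR_MID

for person, scores in jar_sweetness.items():
    print(person, mean_jar_shift(scores))
```

Consumer 1 averages a full point below JAR (wants sweeter products) and consumer 2 a full point above (wants less sweet) on the same samples — exactly the opposed-direction signal John suggests feeding into a model, rather than the individual responses themselves.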
Rebecca: I agree. But I think, you know, sweetness is hardly ever the one key driver — it's so much more complex. And, you know, with the questions we ask consumers, there's a lot of halo effect: if you like it, you tend to say it's just about right on most things, and if you don't like it, you find fault through whatever questions you've given them to respond to. It's not nearly comprehensive enough, to me, for the product developers to really be successful in making changes based on that.
John: 100% agree. For me, it only has value in the context of a much bigger model with lots of other inputs, and then the important things will emerge — because maybe sweetness won't actually be an important variable in the model, but maybe some other attributes will be, and they'll be interacting with other attributes. So, yeah, I'm with you for sure. I think that we definitely need more nuanced analysis. Well, Becky, we have, as predicted, blown through half an hour quite easily. A couple of things that we didn't get to — I was hoping to talk to you a little bit more about developing lexicons. Do you have any last thoughts on that topic? I know you work with your culinary experts. Do you have advice for people trying to develop their own lexicon? I mean, besides that they should hire Dragonfly SCI.
Rebecca: Oh, sure. We want to put these tools in people's hands. And, you know, it really comes down to allowing a group of people that have known discrimination abilities, right? Once you have a screened, qualified group of people, and they're familiar with the product, allow them to come up with the language. It's an iterative process, how you develop a vocabulary. You can pull things off the shelf — we do a lot of olive oil work, right? So there's published olive oil language, there's published wine language, published beer language. But for what you're doing, that may not really work for the samples that you want to study. So you need to make sure that the words you're selecting and using apply to the products you're studying. And trying to understand how consumers, how people, talk about the product is also very enlightening for most research teams. You pick up things — like, you know, we're talking about plant-based proteins. There's a tremendous amount of work in that area that we're involved in, and it is critical there. What does a real hamburger taste like versus some of these other products, or a chicken? And that varies, too. I mean, what is hamburger? The questions get very large, and so you have to select samples that reflect what you want to study.
John: Yeah — the philosophical questions that keep us going in sensory: what is hamburger? Okay. Well, this has been great. So, a couple of things just to wrap up. What advice do you have for a young sensory scientist, someone who's just graduated and coming out into the field? What should they be focusing on for the next couple of years?
Rebecca: Well, you know, as I was saying: know your history. Understand your history — when we talk about fundamentals or foundations of what sensory is. What drives me crazy is having people come to meetings and speak up when they really don't know their literature. Know your literature. The things going on in the field are changing rapidly, but ask questions and be curious and be part of the conversation — you always have to participate. You know, I say ASTM is a great little nerd society.
John: Totally agree. We wouldn't be here if it wasn't for ASTM.
Rebecca: Right. It's a lot of fun, and there are a lot of those topics. And always ask questions. I mean, I always try to look at each study and learn from one study to the next. You know, just be a good scientist — be a good experimental psychologist.
John: That's great. Alright. And if someone wants to get in touch with you, what's the best way for them to reach out and connect with you? LinkedIn?
Rebecca: Yeah, I'm on LinkedIn. And then, you know, Dragonfly SCI site at dragonflysci.net or at UC Davis, ucdavis.edu.
John: Okay, I'll put the links in the podcast notes. So, it's been great, Becky. Anything else you want to say?
Rebecca: I don't know — it's been a lot of fun, John. I'm glad you're doing these things; I think it just helps our field grow. And, yeah, we're part of the conversation, so keep it up.
John: Sounds great. Thanks a lot.
Rebecca: Alright. Thank you.
John: Okay. That's it. Hope you enjoyed this conversation. If you did, please help us grow our audience by telling a friend about AigoraCast and leaving us a positive review on iTunes. Thanks.
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!