Dr. Jeff Crukley – Understanding Audiology Through Better Research
Dr. Jeff Crukley is the self-proclaimed Geek-in-Charge at Data Science & Statistics, where he helps clients analyze and utilize their data to its fullest potential. Dr. Crukley was an industry scientist who switched gears and now serves his clientele as a freelance statistician. During his time as an audiologist, he became well-acquainted with the common shortcomings of data collection and started his business to solve the issues that have been plaguing teams for so long.
Dr. Crukley also serves as an Adjunct Assistant Professor in the Department of Psychology, Neuroscience and Behaviour at McMaster University and an Adjunct Lecturer in the Department of Speech-Language Pathology at the University of Toronto. He continues to contribute to the med-tech industry, where his research has played a key role in modern innovations.
Here’s a glimpse of what you’ll learn:
- Dr. Jeff Crukley explains how he got into audiology and research
- Why is it so hard for children to hear at school?
- The logarithmic scale of hearing loss and how patients can interpret the data
- How functional testing is more accurate for patients than audiograms
- What are Bayesian hierarchical models and how are they used?
- Understanding patients both as individuals and as a demographic
- The difficulties with audiology terminology and how it can be improved
- The greatest influences on Dr. Crukley’s career
In this episode…
There are tried-and-true methods that audiologists have used for years to great success. These methodologies have long served as the starting point for diagnosis and treatment — but they may not be conclusive. Recent research has revealed gaps in the audiological information available to doctors as well as patients, and many audiologists may need a new approach to collecting and analyzing data. All of this raises the question: how accurate is the data from conventional audiology tests?
Dr. Jeff Crukley is one of the strongest proponents for rethinking medical data, having worked in both medical and data sciences. Having a foot in both worlds has allowed him to see where old models have failed to provide us with the solutions we need. Now he’s started his freelance business as a consultant and contractor, helping his clients better understand and leverage the information that’s already there. So how do you apply that new approach to your practice?
On this episode of the ListenUp! Podcast, Dr. Jeff Crukley, the self-proclaimed Geek-in-Charge at Data Science & Statistics, talks with Dr. Mark Syms about how to gather, interpret, and apply data in the field of audiology. Dr. Crukley goes over Bayesian hierarchical models and how they fill in the gaps that other approaches leave behind. He also goes through the problems he’s noticed in children’s hearing at school, audiological terminology, and the logarithmic scale.
Resources mentioned in this episode
- Arizona Hearing Center
- The Listen Up! website
- Listen Up!: A Physician’s Guide to Effectively Treating Your Hearing Loss by Dr. Mark Syms
- Dr. Mark Syms on LinkedIn
- Dr. Jeff Crukley on LinkedIn
- Dr. Jeff Crukley, Data Science & Statistics
- Boys Town National Research Hospital
Sponsor for this episode…
This episode is brought to you by the Arizona Hearing Center.
The Arizona Hearing Center is a cutting-edge hearing care facility providing comprehensive, family-focused care. Approximately 36 million Americans suffer from some sort of hearing loss, more than half of whom are younger than the age of 65. That’s why the team at the Arizona Hearing Center is focused on providing the highest-quality care using innovative technologies and inclusive treatment plans.
As the Founder of the Arizona Hearing Center, Dr. Mark Syms is passionate about helping patients effectively treat their hearing loss so that they can stay connected with their family and friends and remain independent. He knows first-hand how hearing loss can impact social connection and effective communication. By relying on three core values—empathy, education, and excellence—Dr. Syms and his team of hearing loss experts are transforming the lives of patients.
So what are you waiting for? Stop missing out on the conversation and start improving your quality of life today!
Welcome to the ListenUp! podcast where we explore hearing loss, communication, connections, and health.
Dr. Mark Syms 0:10
Hey everybody, it’s Dr. Mark Syms here, host of the ListenUp! Podcast, where I feature top leaders in healthcare. Past guests include Dr. Drew Dundas and Dr. Rodney Perkins of Earlens. This episode is brought to you by the Arizona Hearing Center, where I help patients effectively treat their hearing loss so they can interact better with their friends and family and remain independent. The reason I’m so passionate about helping patients is because I lost my brother Robbie twice: first to hearing loss from radiation for his brain tumor, and then again when he passed away. I only care for ears; I’m the ear of ENT. I’ve performed over 10,000 ear surgeries over the past 20 years. I’m the founder of the Arizona Hearing Center and the author of Listen Up. Go to listenuphearing.com to learn more about the book, or go to www.azhear.com and contact us with any questions or email customer support. Today I’ve got a great guest: Dr. Jeff Crukley. He has a PhD in health and rehabilitative sciences from Western University and has worked as a researcher in audiology in both academia and industry. Now, as a data scientist, he helps people analyze data and represent it. I’m really excited to have you here. Dr. Crukley, welcome to the show, Jeff.
Dr. Jeff Crukley 1:24
Thank you very much.
Dr. Mark Syms 1:26
So tell me your path to hearing sciences. I always find it fascinating how people ended up where they are now in terms of audiology and data science.
Dr. Jeff Crukley 1:37
It’s, uh, you know, my path is a long and circuitous one. I first learned of audiology during my undergraduate degree, which was in biology and psychology. I did some volunteer hours in a hospital-based audiology clinic and thought it looked like a very interesting field. After being out of school and working in a couple of biotech companies and some other research contexts, I went back to school for my master’s in audiology, in a combined program of clinical training and research training. And the research side of things really caught my interest. My master’s work was looking at localization ability with and without hearing aids. From there, I went on to pursue my PhD in hearing science. That was particularly interesting: I was looking at how to best accommodate children with hearing loss in school-based environments. So that work focused not only on treatment options through digital signal processing, but also on assessment options. I built out my own mobile assessment lab, and I did all of the research, speech testing, and loudness discomfort levels within the schools themselves, in real classrooms. I also looked at soundscapes — what things look like from an auditory perspective over the course of school days for kids, and particularly for kids with hearing loss. I spent time in daycares, elementary schools, and high schools over a period of several months, tracking the ability to assess and then best support kids and their listening needs where they spend most of their time, which is in school. After that work, I went on to do a postdoctoral fellowship in medical biophysics, and in that context I was looking at auditory fMRI in a very different segment of the population.
So I was doing auditory cortical imaging for premature babies, looking at developing some ability to predict long-term developmental trajectories based on response patterns to auditory stimuli through fMRI. That was fascinating work, and it was my first real introduction to data processing, number crunching, and some relatively complicated statistical analysis. From there, I went on to industry, where I worked in a research capacity for, I guess, close to a decade. And that is where I really found the need for flexible, intuitive, out-of-the-box ways of analyzing experimental data. I found, at least when looking at the results of experiments or trying to design them in an industry context, that we were particularly curious about performance outside of a constrained lab environment. How are things going to look for real individuals in a real-world context? And that’s just not something that is easy to assess with the typical approach to statistics, like t-tests and things like that; you need to be a little bit more flexible. And that’s what led me to where I am now, which is focusing primarily on custom-coded, custom-created models that are both evaluative and generative. So what I do now through analysis is understand what happened in a study, but my models also work in the opposite direction, so I can use them to predict and create data. They work as data simulators as well as data analyzers. And the flexibility there has proven to be really valuable, both for understanding research data in some contexts and for providing a backbone for development support in tech companies.
Dr. Mark Syms 5:57
Yeah, that’s really great stuff. I mean, the stuff that comes to mind clinically — and I think what you’re getting at — are two things. One is I’m not sure people realize how much background noise there is in schools, and it sounds like your PhD looked at that, so you could speak to that. And the other thing, I think, is what many patients complain about: “Well, you know, I hear great when I’m in the audiology booth, but then when I go out in the real world, I don’t hear nearly as well as I do in the office.” Right? So what you’re trying to model is real-world situations as compared to that. And just touching on your PhD thesis: I always say to patients’ parents that the hardest listening environment for hearing-impaired children is the school environment. How hard is it? Can you just speak to that? I know it’s just touching on your PhD, but how hard is it?
Dr. Jeff Crukley 6:59
It’s very challenging. You know, what was really interesting about the work is that it’s not easy for any kids. There are published guidelines for what the levels of background and ambient noise should be, and for the kind of sound reinforcement or suppression that should be present in classrooms. Through my work, I did full acoustic measurements in a large portion of rooms in a number of different schools, and almost none of them met those criteria.
Dr. Mark Syms 7:30
What are the criteria?
Dr. Jeff Crukley 7:31
Well, the guidelines actually recommend a signal-to-noise ratio for children of +15 dB; that means the teacher’s voice should be 15 decibels higher than the background noise. And that was almost never achieved. And this is where I redefined things and coined the term non-quiet listening. I think people think about speech in noise as, you know, trying to speak in front of an air conditioner, or in a busy restaurant or busy bar, when things get up into shouted-level speech or louder, at around maybe 80 dB SPL or something like that. But what I found was that it’s almost never quiet. So although the overall levels might be where you would say average-level speech is happening, at around 65 dB SPL, it’s not a single speech source. There are multiple sources contributing to that overall level, and disentangling the information you want from the information you don’t want is challenging for many people, especially kids with hearing impairments. Hearing aids are still not quite at the point where they know what you’re trying to listen to. We can identify what is speech and what is not speech, but we can’t yet do a great job of identifying which speech you want to hear and which you don’t. So it becomes very challenging.
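Because decibels are already on a logarithmic scale, the signal-to-noise ratio discussed above is a simple subtraction of levels. A minimal sketch of checking a classroom against the +15 dB guideline (function names and the example levels are illustrative, not from any published standard):

```python
def snr_db(speech_level_db, noise_level_db):
    """Signal-to-noise ratio in dB. Decibels are logarithmic, so the
    ratio of two intensities is just the difference of their levels."""
    return speech_level_db - noise_level_db

def meets_classroom_guideline(speech_level_db, noise_level_db, required_snr_db=15.0):
    """Check the +15 dB SNR guideline for children mentioned above."""
    return snr_db(speech_level_db, noise_level_db) >= required_snr_db

# A teacher at roughly average conversational level (~65 dB SPL) over a
# 55 dB SPL classroom gives only +10 dB SNR -- below the guideline.
print(meets_classroom_guideline(65, 55))  # False
print(meets_classroom_guideline(65, 48))  # True (+17 dB SNR)
```

This also shows why "non-quiet" classrooms fail so often: the noise floor only has to creep within 15 dB of the teacher's voice.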
Dr. Mark Syms 9:05
And that’s a lot of brain work, to filter out the voice you want to hear from the ones you don’t — and when you’re not getting the signal, that makes it even more difficult. Yeah. That’s a fascinating study. I mean, I’ll just share with you, we were talking about this in the pre-interview: all of the schools I went to had hard tile floors, and all the schools my children go to have carpeting. So they did learn something in terms of decreasing the background noise.
Dr. Jeff Crukley 9:33
Yeah, I mean, some of the things we can do are very cost-efficient: carpet, mats, curtains on the windows. Drop ceilings are good. Even simple things like some memory foam tucked in the corners to stop some of those early reflections, just to suppress the reverb a little bit. The signal of interest gets smeared out really quickly when a bunch of children start speaking at the same time.
Dr. Mark Syms 10:04
Yeah. And that’s why, as they call it in the teaching world, classroom control is so important for the teachers, right?
Dr. Mark Syms 10:11
Yeah, that’s great stuff, and really great work, because it’s highly impactful on our normal-hearing kids, our regular kids, as well. So one of the things I was talking to another guest about, which I think is interesting: looking at the audiogram, we typically just talk to people about their loss, how much we’ve determined the volume to be on the dB scale. The thing is, it’s a logarithmic scale. And so, you being a data scientist, I’m putting you on the spot to help the listeners understand what that means. Sometimes we say, “Well, you’ve gone from a 10 to a 20 dB loss, that’s not a big deal,” but I’m not sure people really understand the magnitude of those types of changes across the logarithmic scale. Can you speak to that?
Dr. Jeff Crukley 11:00
I can. You know, it’s an interesting time to be talking about the log scale, as we’re hopefully coming out of the COVID-19 pandemic, where there was a lot of talk of flattening the curve. Of course, the curve we were trying to flatten was on the logarithmic scale, and even when the logarithmic scale is flat, on the linear scale we’re still seeing more cases every day. So while the growth slows, it’s still growing. And I think we see the same kind of contrast in thinking about increases in decibel-based thresholds, which are on that log scale, when we compare them to the clinical audiogram, which is at least presented as a linear scale. We have our categories of mild, moderate, moderately severe, and it seems like there’s this linear relationship. But I think what is difficult to grasp is this: when we typically look at a graph on a usual scale, going from one tick mark to the next on the axis is additive. A one-unit increase gives you the next tick mark — one becomes two. That’s the linear scale. On the log scale, it’s not additive, it’s multiplicative. So although they are still single tick marks, you’re multiplying by 10 between each tick mark: the first tick mark is one, the next is 10, the next is 100. You see this exponential growth. So moving one tick mark on the scale actually implies a more significant reduction in sensitivity than might be depicted when we think of the audiogram on that 5 dB linear scale. The impact to sensitivity is actually multiplicative, and significantly greater than we’re easily able to convey on the graph itself. And so that’s where I think things get a little bit challenging to understand.
But I think the framing — the color coding or shading of mild, moderate, moderately severe, and orienting patients to those categories — is a helpful thing for them to understand what their expectations should likely be. And I think setting expectations is a very important thing early in the hearing aid, or early in the hearing loss, experience.
Dr. Mark Syms 13:26
Yeah, and so correct me if I’m wrong, but what you’re telling me is that the further down you go, the bigger the change each dB represents. Is that correct?
Dr. Jeff Crukley 13:37
Absolutely, yeah. It’s because of that multiplicative effect: the amount grows between each subsequent tick mark. So the spacing looks similar, but that’s more of a visual aid, because the axis would be too tall if we were to actually represent that distance.
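The multiplicative tick marks Dr. Crukley describes can be made concrete with one line of math: every +10 dB multiplies sound intensity by 10. A quick sketch (the function name is illustrative):

```python
def db_to_intensity_ratio(db):
    """Convert a level difference in dB to a ratio of sound intensities.
    Each +10 dB multiplies intensity by 10, so equal-looking steps on
    the audiogram axis are multiplicative, not additive."""
    return 10 ** (db / 10)

# A 10 dB step and a 30 dB step look like one and three tick marks,
# but the intensity ratios they represent grow exponentially.
print(db_to_intensity_ratio(10))  # 10.0
print(db_to_intensity_ratio(20))  # 100.0
print(db_to_intensity_ratio(40))  # 10000.0
```

This is the sense in which "the paper would need to be as tall as the Empire State Building" to draw the true distances.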
Dr. Mark Syms 13:55
The y-axis is not long enough. Exactly. I mean, you’d need something the size of the Empire State Building as it goes up. Yeah. I was talking to Ryan McCreery — he’s at Boys Town — and his whole thing was also about how those divisions of mild, moderate, and all of that: somebody just kind of took the numbers and broke them up, but they don’t actually represent function. So they’re finding people at the lower part of normal, you know, 15 to 25 dB, are actually having marked functional impact. It’s kind of interesting. I can tell you from a physician’s point of view, we’ve just integrated it into our discussions with patients: “Well, you’re less than 25 dB, you’re normal” — and it really isn’t normal. It’s just that somebody divided it up and said, well, we’ll make zero to 25 normal.
Dr. Jeff Crukley 14:43
And I think, you know, we’re seeing more and more work come out on synaptic pathology and some of the other things that are helping us explain why somebody might have a normal-looking pure-tone audiogram but still experience significant difficulty when trying to understand speech in a competing environment, even if it’s not that loud. That signal-to-noise ratio is difficult for folks, even if they have what looks like a normal audiogram. And that’s one of the challenges: it’s hard to fit a hearing aid when that’s the issue, because hearing aids are still primarily amplification devices. It’s not so much that making things louder is going to help someone when the differentiation between speech and noise is the problem.
Dr. Mark Syms 15:37
Yeah, it’s almost like athleticism. I think there are some people whose brains are better at it than others.
Dr. Jeff Crukley 15:44
Absolutely. And there are, you know, types of trauma that show up differently. We see, in a lot of veterans who may not even qualify for amplification, significant auditory impairment, because even at reasonable levels they have a very difficult time extracting speech of interest from noise of non-interest, and there are tinnitus concerns and things like that. It’s not all captured very well with the pure-tone audiogram.
Dr. Mark Syms 16:15
Is that from concussive forces or-
Dr. Jeff Crukley 16:20
I think so. You know, it can be blast effects, or even just physical trauma: concussions, being thrown from vehicles, and things like that.
Dr. Mark Syms 16:32
The brain is not designed to be shaken. It’s really not. So, you know, one of the things that is fascinating to me is how much more important functional testing has become as I’ve progressed in my career. I’ve really realized that the audiogram is just the beginning. I tell patients the audiogram is like the EKG, and functional testing is like the stress test of the heart, right? It’s really trying to figure out how you’re functioning.
Dr. Jeff Crukley 17:02
And related to that, where I have taken some work is taking that functional testing out into the real world, where people actually want to be able to hear, with an eye to how we can make sense of the data. So how do I fit someone with some technology, have them go about their typical daily life, and determine whether they’re doing better, worse, or the same? Whether they are feeling more confident, whether they’re actually objectively hearing better? How do you capture that in a data format that lets you make some recommendations and conclusions and draw some inferences from that type of investigation? And it’s not easy to do.
Dr. Mark Syms 17:48
How do you do that?
Dr. Jeff Crukley 17:50
Well, one of the most promising things now is adaptations of ecological momentary assessment. Now that everyone is walking around with a smartphone, and those smartphones can be connected to ear-level technology, we’re better able to basically have people be walking experiments. If their environment changes, or their listening device suspects the environment has changed, we can prompt them on their phone to answer some questions for us: “It seems like things just got louder — is that correct?” That’s a way of gauging whether the signal processing and acoustic analyzer in the listening device are accurate, or of calibrating them to human perceptual experience. Or we can ask, “Are you engaged in a conversation right now? With how many people? How would you say you’re doing?” Or we can set up the phone to present some speech stimuli when the devices, or the phone, think they’re in some background noise; we can do some mobile speech-based testing and have them respond either visually or verbally, repeating back or marking which words they’ve heard in a presented utterance. So with the advancements in technology, our ability to accelerate development and assessment simultaneously has become much easier and much more accessible.
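The prompting logic Dr. Crukley describes — "it seems like things just got louder, is that correct?" — can be sketched as a simple trigger over the device's estimated sound levels. This is a hypothetical illustration; the threshold and function names are made up, not from any real device API:

```python
def ema_prompts(level_readings_db, jump_threshold_db=10.0):
    """Ecological momentary assessment (EMA) trigger sketch: emit a
    prompt whenever the estimated environment level jumps by more than
    jump_threshold_db between consecutive readings."""
    prompts = []
    for previous, current in zip(level_readings_db, level_readings_db[1:]):
        if abs(current - previous) >= jump_threshold_db:
            direction = "louder" if current > previous else "quieter"
            prompts.append(f"It seems like things just got {direction}. Is that correct?")
    return prompts

# Quiet office, then a busy restaurant, then back outside:
# the two big level jumps each trigger a question to the wearer.
print(ema_prompts([45, 47, 62, 63, 50]))
```

The wearer's answers can then be compared against the device's own classification, which is the calibration loop described above.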
Dr. Mark Syms 19:23
That’s really cool. I mean, great stuff, right? So it’s basically in-situ testing, or in-situ querying. That’s really cool.
Dr. Jeff Crukley 19:33
And it gives us a lens into how these things are working outside of the booth. You know, I haven’t worked clinically for a very long time, but I do recall many patients saying, “I do well with the beeps in the sound booth, but the only time I’m in the sound booth is when I’m here for my appointment.” So-
Dr. Mark Syms 19:56
Right, they all say, “I do great.” I mean, you know, one of the sources, I think, of hearing aid dissatisfaction is poor counseling, when it’s not explained to people: “Look, we’re doing this in a booth; when you go outside, you should expect it to be more difficult, and that’s really where we want you to assess your hearing performance.” Some people will say, “Well, I got it, and I heard at the office, but then as soon as I left, it never worked well.” And that was a counseling issue to some extent.
Dr. Jeff Crukley 20:21
Yep. I think identifying, and kind of anchoring, those expectations is a very important piece. You know, we’re not yet at the point — though who knows what the next few years will hold — where we have a full-on, completely artificially intelligent ear-level device that will solve all of these problems for us. So we still have the actual human intelligence component to educate and train.
Dr. Mark Syms 20:50
Yeah, that’s interesting. I mean, when you say artificial intelligence, will they be networked, to be able to utilize some sort of algorithms outside of the device itself?
Dr. Jeff Crukley 20:59
I think that’s one possibility. We’re seeing some great advancements in the ability to use things like deep neural nets to extract speech of interest, or to even try to decode and understand what people are trying to pay attention to. We’re still a little ways away from being able to run that on a small ear-level device powered by a tiny battery. But in offline processing, or even in phone-based algorithms, we’re seeing some great progress in those areas.
Dr. Mark Syms 21:38
Now, that’s great stuff. So one of the things I wanted to ask you about: I was looking at your website, and you speak passionately about Bayesian hierarchical models. Can you explain for the listeners what those are and when they’re used?
Dr. Jeff Crukley 21:50
Certainly. So, to contextualize, depending on people’s familiarity with what’s happening in the analysis world right now: over the last 10 years or so, mixed models have become more common in the literature across a number of different fields. Mixed models are just a way of introducing what are referred to as varying, or random, effects. “Mixed model” is essentially synonymous with “hierarchical model”: we have a group of individuals, and we want to treat each of those individuals as an individual, but also have the estimates for that individual’s performance influenced by the group-level performance. Probably the best way to explain this is with something like testing speech recognition with an omnidirectional microphone versus a directional microphone. The older-school way of doing that is: take a group of 10 people, test them and find their SNR for 50% correct with an omni mic, then test that same group with a directional mic and find their SNR for 50% correct that way. Then you take the average with omni and the average with directional and see if they’re different. The challenge there is that humans just aren’t carbon copies of one another, and hearing-impaired listeners even less so. We see a lot of varied performance. If you can imagine a group of 10 people with exactly the same audiogram, their performance on a speech test is likely to be very different, and their benefit from a directional mic is also very likely to differ from one person to the next. So what a hierarchical model does is essentially treat each person as their own control.
So I take subject A, I look at their performance with omni, and then I test that subject again and look at their performance with directional microphone technology. And it’s that difference for that person that I’m interested in. I do consider the whole group’s performance, but I’m also looking at that individual benefit, and that is why this technique is so powerful. Now, the other distinction between what I do and what has, at least historically, been more conventional, is that I do things under a Bayesian framework versus a frequentist framework. A Bayesian framework is really just looking for the most likely, or most probable, solution to a statistical problem. So the most likely answer for the benefit of a directional microphone is around 5 dB: you’ll see an improvement of 5 dB in the signal-to-noise ratio required for 50% correct. If with an omni microphone you’re getting 50% correct at 5 dB, you’re likely to be getting 50% correct at 0 dB with a directional mic. The difference is that in a Bayesian system, we are treating the data that you collect as the ground truth. To put it in maybe more accessible terms, a Bayesian looks at a statistical problem and asks, “What is the probability that I have the right model, given these data?” The frequentist approach, which is what you’ll see with t-tests or analysis of variance, is more, “How likely are these data, given that my model of no effect is true?” And that’s what made me change my direction several years ago: the understanding that in the frequentist paradigm, you’re assuming no effect and seeing if your data are unexpected given that there’s no effect. As somebody who’s been in research for quite a while, I would never design a study to look at something I thought had no effect. So why would I model my data as though there’s no effect there?
Instead, I say: based on the literature, it looks like a reasonable range of effects for directional mics is somewhere between 2 and 6 dB, and then I look at the data to tell me — does my model fit with that, or should I fit a different model that better fits those data? But I treat the responses I get from people as the ground truth, and I question my model, versus trusting my model and questioning the data I got from people.
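Dr. Crukley's point that his models are both generative (data simulators) and evaluative (each subject as their own control) can be sketched in a few lines. Everything below is hypothetical — the 5 dB group benefit echoes his example, but the sample size, spreads, and variable names are illustrative, not from any real study:

```python
import numpy as np

rng = np.random.default_rng(42)

n_subjects = 10
group_benefit_db = 5.0    # group-level mean directional-mic benefit
between_subject_sd = 1.5  # subjects genuinely differ from one another
measurement_sd = 1.0      # test-retest noise on each measured SRT

# Generative direction: use the hierarchical model as a data simulator.
# Each subject gets their own true benefit, drawn from the group level.
true_benefit = rng.normal(group_benefit_db, between_subject_sd, n_subjects)
true_omni = rng.normal(5.0, 2.0, n_subjects)  # SNR for 50% correct, omni mic
omni_srt = true_omni + rng.normal(0, measurement_sd, n_subjects)
directional_srt = true_omni - true_benefit + rng.normal(0, measurement_sd, n_subjects)

# Evaluative direction: each subject is their own control, so the
# quantity of interest is the within-subject paired difference,
# not a comparison of two unpaired group averages.
per_subject_benefit = omni_srt - directional_srt
group_estimate = per_subject_benefit.mean()
print(f"estimated group benefit: {group_estimate:.1f} dB")
print("per-subject benefits:", np.round(per_subject_benefit, 1))
```

Because the subject-level baseline (`true_omni`) cancels out of the paired difference, the varied baseline performance he describes no longer obscures the microphone effect.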
Dr. Mark Syms 26:59
Let me see if I understand. So the old model was: there’s no effect, and we’re going to measure the change from no effect to effect. And the model you’re talking about is: we’re measuring the change in effect, assuming there is a baseline effect in each individual, and kind of controlling out the other stuff. So for instance, hearing is a lot more than just your ears — it’s all this other stuff. But if you look at the change in each individual, then you’ve almost controlled for that other stuff, whatever it is. Am I getting that right?
Dr. Jeff Crukley 27:31
You’re getting it exactly right. And some of these models can get pretty complex, but it’s that complexity and flexibility that I have found generally necessary. I don’t know how often people look at the underlying assumptions of some of the typical statistical tests we do, but when trying to model more complicated data, especially with hearing-impaired individuals, I found that in trying to find the right statistical test, I was never really meeting any of those assumptions. The models were assuming a lot of things that I just couldn’t assume about my data. So I needed a way to be more flexible: build things essentially from scratch, customize based on what I know about the data and the technology, and analyze those data in a way that speaks to the true nature of how those data are generated. It’s become very fun. It’s challenging, and it gives you estimates of uncertainty, which is also something that’s lacking in the frequentist paradigm. When we’re thinking about technology developments, or benefit from technology, or even just how someone performs on a self-assessment — a listening confidence questionnaire or something like that — conclusions should always, in my opinion, come with your uncertainty about that conclusion. So if I say a directional mic will give you 5 dB of benefit, but that’s plus or minus 2 dB, that uncertainty is the very important piece, because it really depends on the individual and on how precise my estimate is. If I can say you’ll see 5 dB of benefit with a directional mic plus or minus 0.5 dB, then you can really expect a noticeable effect. But if that confidence is 5 dB of benefit plus or minus 3, well, then some people might notice a lot, but others might not notice anything at all.
And that is something to be aware of when counseling a patient and setting those expectations. Or, if you’re in the early stages of development, it lets you weigh whether your eventual end user is going to notice, appreciate, and gain maximum benefit from that technology. So having that estimate of uncertainty is really an important piece of both clinical understanding and industry-based technological development.
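To make the idea of an estimate with uncertainty concrete, here is a minimal sketch (not from the episode; the benefit scores, noise level, and prior are hypothetical numbers chosen for illustration) of a conjugate normal Bayesian update that turns a handful of directional-mic benefit measurements into a posterior mean with a credible interval:

```python
import math

def posterior_benefit(measurements, sigma=2.0, prior_mean=0.0, prior_sd=10.0):
    """Conjugate normal-normal update for a mean benefit (in dB).

    sigma is an assumed, known measurement noise; the prior is weakly
    informative. Returns (posterior mean, posterior standard deviation).
    """
    n = len(measurements)
    prior_prec = 1.0 / prior_sd ** 2          # prior precision
    data_prec = n / sigma ** 2                # precision contributed by the data
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_mean * prior_prec + sum(measurements) / sigma ** 2)
    return post_mean, math.sqrt(post_var)

# Hypothetical benefit scores (dB) from five listeners
scores = [4.1, 6.3, 5.2, 4.8, 5.9]
mean, sd = posterior_benefit(scores)
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
print(f"benefit about {mean:.1f} dB, 95% interval ({lo:.1f}, {hi:.1f})")
```

The point of the sketch is exactly the counseling distinction above: a narrow interval supports a confident prediction for the individual patient, while a wide one says some users may notice nothing at all.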
Dr. Mark Syms 30:17
Yeah, it’s interesting, because the clinical context of what you’re talking about is something I talk about with patients. That’s what I always say: hearing is not just your ears, it’s your brain, right? So you’re trying to measure the incremental change from some part of the system chain, an omnidirectional mic, bone conduction, something like that, and you’ve got all of these other aspects that vary from patient to patient. So you can’t put everybody in the same bucket and say, well, we saw an overall improvement in this group. We need to look at each person, because, going back to the athletic analogy, some of them are highly athletically impaired and some of them are the LeBron James of hearing. If you put both of those people in one group, that LeBron James might skew the whole group, so you’re kind of controlling for them. That’s really interesting. So is this commonly used in the hearing science space or not?
Dr. Jeff Crukley 31:15
It’s less so in the hearing science space, but it’s becoming more prominent. There is a large community in the psychology space, motivated significantly, I think, by some of the replication crises and challenges that have happened in the psychology literature. There’s been a noticeable shift toward folks using a Bayesian paradigm, which makes things a little bit more robust, a little bit more repeatable, and gives you these estimates of uncertainty and what to expect. So we’re seeing it; I think one of the major challenges is this: Bayesian inference and analysis predates all of the frequentist stuff. Bayes’ theorem goes back to 1763. It’s been around a very long time; it just hasn’t been really approachable or accessible until now, when we have very affordable computing platforms.
Dr. Mark Syms 32:20
So, the power to do it, right?
Dr. Jeff Crukley 32:22
The power to do it. I mean, some of the models I fit now would have taken a roomful of mainframes and a month to run 20 years ago. Now, with some of the fantastic developments in computing languages, as well as hardware you can have in your home, I can run a model in a couple of hours or a day that previously would not have been approachable at all. So one of the things to be aware of is just that now we can. I think we’ve known for a long time that this form of analysis and inference…
Dr. Mark Syms 32:57
Just wasn’t practical.
Dr. Jeff Crukley 32:58
Just wasn’t practical. Although powerful and entirely desirable, it just wasn’t approachable, or even possible.
Dr. Mark Syms 33:04
But now it is. There was a time when computations were done by “computers” who were human beings, right? A thousand people doing calculations for several weeks to complete one of these analyses. That was a huge investment of resources.
Dr. Jeff Crukley 33:21
And to try and contextualize that: the language that I use is called Stan, and it became public in 2013. So it’s really new; it hasn’t been around that long. It’s at the bleeding edge of things, and it’s informed by a large community of computational statisticians. We’ve got a lot of power here, and it’s a really exciting time to be doing this kind of work.
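To illustrate what a sampler like Stan’s is doing under the hood, here is a toy random-walk Metropolis sampler in plain Python (the data and model are hypothetical, and Stan itself uses the far more efficient Hamiltonian Monte Carlo, so this is a conceptual stand-in, not how Stan works in practice):

```python
import math
import random

def metropolis(log_post, n_samples=5000, start=0.0, step=0.5, seed=1):
    """Random-walk Metropolis: draw samples from a posterior given only
    its (unnormalized) log density. A toy illustration of MCMC."""
    random.seed(seed)
    x, lp = start, log_post(start)
    samples = []
    for _ in range(n_samples):
        prop = x + random.gauss(0.0, step)       # propose a nearby value
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept with MH probability
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Hypothetical benefit data (dB): normal likelihood (sigma = 2), weak normal prior
data = [4.1, 6.3, 5.2, 4.8, 5.9]
log_post = lambda mu: (-sum((y - mu) ** 2 for y in data) / (2 * 2.0 ** 2)
                       - mu ** 2 / (2 * 10.0 ** 2))
draws = metropolis(log_post)[1000:]   # discard burn-in
mean = sum(draws) / len(draws)
print(f"posterior mean benefit about {mean:.1f} dB")
```

The spread of `draws` directly gives the uncertainty estimate discussed earlier; modern hardware makes millions of such evaluations cheap, which is why this paradigm only recently became practical.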
Dr. Mark Syms 33:51
I really have to talk to you if I’m going to do any studies in the near future.
Dr. Mark Syms 33:56
Yeah. So, what other measurements in audiology do you think are cool, or interesting, or hard to communicate?
Dr. Jeff Crukley 34:07
Well, I think some of the challenges come from a lot of terms being used interchangeably that probably aren’t strictly interchangeable. We talk about speech discrimination, word recognition, or word understanding, but we don’t really have a lot of clinical consistency in how those terms are used. Some of these things are measurable, but what we’re calling a discrimination task isn’t necessarily discrimination. Discrimination implies selecting the correct choice out of a fixed set of given choices. So word discrimination would be: I say a word, and you tell me which one of these it was. But we actually use that term to describe an open-set word test: we present a word, and someone repeats back either a guess or the word they heard. That’s not really discriminating between things. And it’s typically not understanding, either.
Dr. Mark Syms 35:10
It’s determining, right? Rather than discriminating.
Dr. Jeff Crukley 35:14
Right. We’re not asking if they’ve understood the word or what it means. We’re asking them to identify it, so it’s word identification, or recognition. And I think that’s a bit of a challenge, especially when we’re counseling a patient and we talk about speech understanding, again linking back to expectations. The patient says: well, you said my speech understanding was very good, but I go home and I can’t understand anything. So there’s this mismatch. You can identify words, but stringing them together amidst some noise and making sense of a whole paragraph is a different thing than correctly identifying a single word you heard, even if it’s presented in some noise. And I think-
Dr. Mark Syms 35:59
Or, even more subtle, getting a pun or a joke, which is way more complex than just understanding what somebody’s saying. It’s that hearing-impaired person who is listening to the conversation, and you can see they’re processing it, and then 45 seconds later they kind of chuckle because they’ve finally put the joke together. And that’s not because they don’t have the horsepower; it’s because they’re working so hard to communicate.
Dr. Jeff Crukley 36:26
That listening effort piece is a challenge. I’ve done some work previously trying to find ways to best identify, or at least index, what we might call listening effort. I’ve looked at this through pupillometry, through EEG measures, through reaction times. And all of these things give us different, often not even correlated, measures of what’s happening. So when we try to speak to patients or researchers about listening effort, it puts the onus on us to explain what exactly we’re referring to by that effort.
Dr. Mark Syms 37:07
Yeah, because communication is so complex. Hearing is obviously one of the entry gateways into it, but there’s so much more to it, even just in how ideas are expressed in different languages. I mean, look, philosophers spend all of their time defining terms, using language to communicate complex ideas that are very hard to grasp. So it’s a challenge, and it’s fascinating stuff, it really is. Thank you so much. One of the questions I like to ask is: in terms of your career, who are the people who had a great impact on you along the way?
Dr. Jeff Crukley 37:42
I’ve had a number of fantastic colleagues and mentors. There’s a fantastic hearing scientist named Christoph Michelle, who first introduced me to Bayesian inference and kind of set me down this path. One of my favorite quotes from him: I think I asked, well, when or why would you use a Bayesian model over something else? And his response to me was, well, if you care about what your data have to say and why you collected them, then you would use a Bayesian model. If you don’t, then I guess you could use something else, or not do the study.
Dr. Mark Syms 38:27
You silly person. This is the way to do it.
Dr. Jeff Crukley 38:29
Exactly. That lit a fire under me and set me down this path several years ago. I’ve also seen some fantastic leadership through individuals like Brent Edwards and Simon Carlyle, who were just exemplary leaders of teams that I was a part of. Both of them showed me a path to team leadership that embodies a philosophy of science as well as a philosophy of leadership, and those two gentlemen were fantastic influences on me. And probably my most significant resource for thinking through science questions, and the lens through which we look at the world, is my significant other, my wife, Stella Ng, who is an education researcher at the University of Toronto. She is an absolutely phenomenal qualitative inquiry expert, and the way she approaches her research questions, and the rigor she embodies in asking really difficult questions and coming up with really interesting answers, has informed a lot of what I do, although I’m quantitative. The way I think through questions and seek to answer them is highly influenced by the way she approaches her work.
Dr. Mark Syms 39:57
That’s awesome. I’m sure the dinnertime conversations are lively.
Dr. Jeff Crukley 40:00
They are, well, for us anyway. I think guests might be less thrilled, but we spend a lot of our time talking science in our house.
Dr. Mark Syms 40:09
That’s great. That’s great. Well, everybody, we’ve been having a great conversation with Dr. Jeff Crukley. Jeff, where can people learn more about you?
Dr. Jeff Crukley 40:17
So I have a website, jeffcrukley.com. I have contact information there, as well as some examples of previous projects and some presentations I’ve given. And I’m happy to chat anytime.
Dr. Mark Syms 40:34
Well, thank you so much for coming. This has been great. I really appreciate your time. And really interesting. Thanks so much.
Dr. Jeff Crukley 40:40
Thank you very much, Mark.
Thanks for tuning in to the ListenUp! podcast. We’ll see you again next time and be sure to click subscribe to get updates on future episodes.