Sensors and Stuttering

I came across this article the other day.

Reading this, I thought it would be very interesting to apply this to stuttering. They mentioned that they’re already using it for some physio purposes:

In similar work with the English national rugby team, McLaren engineers took data that the coaches were already gathering from sensors placed on players during practice and designed algorithms to glean new information. Hargrove’s analysts were able, for example, to determine how fatigued a player was—and therefore how susceptible to injury—by how long it took him to get up after being tackled and how much his pace declined over the course of an afternoon.

And as someone who was on the Pagoclone trial, this was really intriguing:

For all the sophistication of the drug discovery process, trying them out on people remains a time-consuming, low-tech process. Volunteers take the drug (or a placebo) and then are monitored by a doctor through visits every few months, so data points are few and far between. Mayhew wondered whether patients could be monitored remotely, like rugby players and pursuit cyclists. If information could be constantly logged and transmitted back to the testers, a drug’s effects—or lack thereof—could be spotted much sooner, saving labor, time, and maybe a lot of money.

What didn’t impress me about being on the Pagoclone trial was exactly what it says above: the visits were few and far between. At each one I was asked to rate my stuttering over the past few days (or weeks), and it never occurred to me to keep a journal about any of this. So my ratings were not only really, really subjective, but probably wildly inconsistent as well.

So how would this work for those of us who stutter? Well, something like a [better looking] Google Glass apparatus that records verbal interactions. And then …? I’m not a speech pathologist, but I imagine it would be interesting to go back through a bunch of the communications that happened in the patient’s life. But of course there’s the “big” part of the “big data.” That is, there’s so much data to sort through — hours and hours of conversations. The user would probably have to edit, tag, or save conversations in the moment (or right after an interaction). Or maybe they could push a button on the glasses, or tap something in an app, that would place a marker in the recording for reference later.
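None of this exists as a product as far as I know, but the marker idea is simple enough to sketch. This is a hypothetical illustration — the class and method names are made up — of logging timestamped markers during a recording session so that only the flagged moments need reviewing later, instead of hours of raw audio:

```python
import time

class RecordingSession:
    """Hypothetical sketch: log timestamped markers during a long
    recording so the user (or an SLP) can jump straight to the
    flagged moments later."""

    def __init__(self):
        self.start = time.monotonic()
        self.markers = []  # list of (seconds_into_recording, note)

    def mark(self, note=""):
        """Called when the user taps the glasses or the app button."""
        offset = time.monotonic() - self.start
        self.markers.append((round(offset, 1), note))

    def flagged_moments(self):
        """The handful of offsets worth reviewing."""
        return self.markers

# Over a whole afternoon of recording, only a few button presses
# end up marked for review.
session = RecordingSession()
session.mark("check-in at airline counter")
session.mark("phone call, introduced myself")
print(session.flagged_moments())
```

The point of the sketch is just that a single tap turns hours of audio into a short list of moments — the editing burden moves from “review everything” to “review what I flagged.”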

And then? Well, it’s good and evil, right? It’s good because the SLP can see whether the patient is making progress. The patient can also set challenges for themselves and gradually build fluency and confidence. They can practice techniques out in the wild and confirm they’re using the same technique in different situations — or a few different techniques in the same situation.

But there’s the evil, too … you can go through your own data and say, well, I always stutter when I’m checking into a flight in person. So I’m only ever going to book flights on airlines with check-in kiosks. The data could show that you always stutter when at a certain restaurant … or with a certain person.

Would I do something like this? Yeah, it’d be fun for a few weeks, I think. It’d be interesting to look at the data. We probably think we stutter a lot more than we do. We probably do a lot better than we think — especially on things like the phone or in conversations (after we’ve introduced ourselves).
