This is an unfinished-thought post. I’m trying to work an idea through in my head and am putting it here in the meantime for comments and general interest. I’m genuinely yet to decide my final position on it; when I do, I’ll be able to give it proper grounding and probably turn it into a paper. Until then, this might be of interest.
My little brother Richard uses a combination of phrase-based AAC (“[I would like new words in my voice]”) and individual utterance-based AAC (“[screwdriver], [sonic] [dad]”). Obviously the phrase-based stuff is very useful when a phrase exactly matches the semantics he intends, and other meanings can be built up from individual words.
What I’ve become interested in is measuring exactly how useful this is, because quite often Richard intends something that’s *near* the meaning of one of his phrases, so he’ll use the phrase as a conversational gambit and then redirect you at the appropriate point (rather like telling someone to drive you to Edinburgh from Manchester and yelling ‘STOP’ at the right point, simply because you’re unsure how to pronounce ‘Carlisle’).
As you can imagine, this causes a certain level of confusion. To give a simple example:
R: (“[I would like new words in my voice]”)
Me: okay, what new words would you like? (Not a particularly stupid question: we can talk around a subject and then I can make suggestions.)
R: (“[I would like new words in my voice]”)
Me: yes, but what sort of ones? I’m not sure it would help just to put in random ones…
…and we go around that loop a couple of times and manage to achieve very little (in fact, we regularly went around this loop for several years).
Actually, it turns out that Richard’s intention is much closer to this:
R: (“[I would like new words in my voice]”)
Me: okay, *unlocks the device and starts working through menus*
R: *pokes other menu*
Me: Ah – you just wanted to have a poke at other things the device could do?
R: *nods*
(We note, of course, that Richard probably thinks that life would be much, much easier if everyone else just did what they were told without acting like they knew everything.)
This turns up in a few other places. Regularly we find that the long phrase that has been used has to be followed by quite a few more keypresses to work out exactly what the intention is. And when you stop to think about it, the total number of keypresses used is probably pretty much the same as it would have been if the sentence had just been built up from individual words. That leads me to thinking that phrase-based AAC is probably only useful when the exact meaning of the phrase is exactly intended (at least for activity-based use rather than conversation-based use; Richard is not a man who considers sitting chatting to be a productive use of his time).
My hypothesis is that the amortised number of keypresses required to relay a particular set of semantics is independent of the average number of words generated by each keypress.
(If this turns out to be a rule I’d like it to be Reddington’s Law. I’d like a law; a law would be cool.)
That is: if it takes you 35 keypresses on one AAC device to explain to someone that a fox (brown) jumped over a dog, that’s roughly the number of keypresses it’s going to take on any other device. There might be some phrases that get you much of the way there, but they require quite a lot of both correction and interaction to make the subtle changes.
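To make that a little more concrete, here’s a toy sketch in Python of the kind of comparison I have in mind. Everything in it (the vocabulary, the stored phrases, and the crude ‘one extra keypress per word that needs correcting’ cost) is an invented assumption for illustration only, not a model of any real device or of Richard’s usage.

```python
# Toy illustration of the amortised-keypress hypothesis.
# All vocabularies, phrases, and messages are invented for the example;
# none of this is based on real AAC usage data.

# Intended messages, each expressed as a sequence of word tokens.
messages = [
    ["i", "want", "new", "words"],                   # exact match for a stored phrase
    ["i", "want", "to", "explore", "the", "menus"],  # near miss: needs correction
    ["fix", "the", "mega", "drive"],
]

# Word-based device: one keypress per word token.
def word_keypresses(message):
    return len(message)

# Phrase-based device: a stored phrase costs one keypress, but any gap
# between the stored phrase and the intended meaning has to be repaired
# afterwards. Here we charge one extra keypress per word that differs,
# as a crude stand-in for the 'correction and interaction' overhead.
phrases = [
    ("i", "want", "new", "words"),
    ("fix", "the", "mega", "drive"),
]

def phrase_keypresses(message):
    best = len(message)  # fall back to building it word by word
    for phrase in phrases:
        corrections = len(set(phrase) ^ set(message))  # words to remove or add
        best = min(best, 1 + corrections)
    return best

for m in messages:
    print(" ".join(m), "->",
          "word-based:", word_keypresses(m),
          "phrase-based:", phrase_keypresses(m))
```

On those made-up numbers, the stored phrase is a big win when it matches the intended meaning exactly, and roughly a wash when it only nearly matches, which is exactly the distinction I’m trying to pin down.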
I’m definitely NOT saying that there is NO place for phrase-based AAC, very, very far from it. I’m suggesting that its use is social and convenient: it’s much more about the interaction (which is a good thing) than it is about relaying information. But I think it’s important to distinguish the two, and I’m beginning to think that statements like ‘phrase-based AAC helps get information over faster’ are an insult to the rich and nuanced information that AAC users might like to communicate.
EDIT: almost to prove a contextual point, when we get the phrase (“[screwdriver], [sonic] [dad]”) it’s broadly ‘fix the Mega Drive’ rather than a Doctor Who reference…
EDIT AGAIN: some thoughts from Google+ added here because I think they are very relevant…
CoughDrop (@CoughDropAAC), 18 August 2014 (https://twitter.com/CoughDropAAC/status/501451069063323649):
“Re: http://t.co/t5NUHHnwLO within a day of having an #AAC device my daughter learned to use "I'm mad at you" as a generic attention-getter.”