Entropy in AAC

This is an unfinished thought post.  I’m trying to work an idea through in my head and am putting it here in the meantime for comments and general interest – I’m genuinely yet to decide my final position on it.  When I do, I’ll be able to give it proper grounding and probably make it into a paper.  Until then, this might be of interest.


My little brother Richard uses a combination of phrase-based AAC (“[I would like new words in my voice]”) and individual-utterance-based AAC (“[screwdriver], [sonic] [dad]”).  Obviously the phrase-based stuff is very useful when it exactly matches the semantics he intends, and other meanings can be built up from individual words.

What I’ve become interested in is measuring exactly how useful this is.  Quite often Richard intends something that’s *near* the meaning of one of his phrases, so he’ll use the phrase as a conversational gambit and then redirect you at the appropriate point (rather like telling someone to drive you to Edinburgh from Manchester and yelling ‘STOP’ at the right point, simply because you’re unsure how you would pronounce ‘Carlisle’).

As you can imagine, this causes a certain level of confusion.  To give a simple example: 

R: (“[I would like new words in my voice]”)
Me: okay, what new words would you like? (Not a particularly stupid question, we can talk around a subject and then I can make suggestions). 
R: (“[I would like new words in my voice]”)
Me: yes, but what sort of ones? I’m not sure it would help just to put in random ones…

…and we go around that loop a couple of times and manage to achieve very little (and in fact, we regularly went around this loop for several years).

Actually, it turns out, that Richard’s intention is much closer to this: 

R: (“[I would like new words in my voice]”)
Me: okay, *unlocks the device and starts working through menus*
R: *pokes other menu*
Me: Ah – you just wanted to have a poke at other things the device could do? 
R: *nods* 

(We note, of course, that Richard probably thinks that life would be much much easier if everyone else just did what they were told without acting like they knew everything).  

This turns up in a few other places.  Regularly we find that the long phrase that has been used has to be followed by quite a few more keypresses to work out exactly what the intention is.  And when you stop to think about it, the total number of keypresses used is probably pretty much the same as it would have been if the sentence had just been built up from individual words.  This leads me to think that phrase-based AAC is probably only useful when the exact meaning of the phrase is exactly intended (at least for activity-based use rather than conversation-based – Richard is not a man who considers sitting chatting to be a productive use of his time).

My hypothesis is that the amortised number of keypresses required to relay a particular set of semantics is independent of the average number of words generated by each keypress.

(If this turns out to be a rule, I’d like it to be Reddington’s Law.  I’d like a law; a law would be cool.)

That is – if it takes you 35 keypresses to explain to someone that a fox (brown) jumped over a dog on one AAC device, that’s the number of keypresses it’s going to take on any other device.  There might be some phrases that get you much of the way, but they then require quite a lot of both correction and interaction to make the subtle changes.
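One way of sketching why this might hold is information-theoretic (hence the title): each keypress selects one item from a screen of, say, k items, so it can convey at most log2(k) bits, whether that item is a single word or a whole phrase.  The minimum number of presses needed to pin down a message is then the message’s information content divided by the bits per press – the phrase’s *length* doesn’t enter into it.  A toy sketch (the function name and the numbers are mine, purely for illustration, not measurements of any real device):

```python
import math

def min_keypresses(message_bits: float, items_per_screen: int) -> float:
    """Lower bound on keypresses needed to convey a message.

    Each press picks one of `items_per_screen` items, so it carries
    at most log2(items_per_screen) bits -- regardless of whether the
    item replays one word or a twenty-word phrase.
    """
    bits_per_press = math.log2(items_per_screen)
    return message_bits / bits_per_press

# Hypothetical message worth ~30 bits of information, entered on
# devices with different grid sizes:
for grid in (4, 16, 64):
    print(f"{grid:>2} items/screen -> at least "
          f"{min_keypresses(30, grid):.1f} presses")
```

On this view a stored phrase only beats word-by-word entry when it is a *high-probability* message hit exactly – a near-miss phrase followed by corrective presses lands you back at roughly the same total, which is what the hypothesis predicts.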

I’m definitely NOT saying that there is NO place for phrase-based AAC – very, very far from it – I’m suggesting that its use is social and convenient.  It’s much more about the interaction (which is a good thing) than it is about relaying information.  But I think it’s important to distinguish the two.  And I’m beginning to think that statements like ‘phrase-based AAC helps get information over faster’ are an insult to the rich and nuanced information that AAC users might like to communicate.

EDIT: almost to prove a contextual point – when we have the phrase (“[screwdriver], [sonic] [dad]”) – it’s broadly ‘fix the Mega Drive’ rather than a Doctor Who reference…

EDIT AGAIN: some thoughts from Google Plus added here because I think they are very relevant…

https://twitter.com/CoughDropAAC/status/501451069063323649

4 thoughts on “Entropy in AAC”

  1. Interesting proposition. The efficiency would probably vary with the partner as well. A familiar partner might be able to connect the dots much more quickly than someone completely unfamiliar.
    Your comment also reminded me of work by Jan Bedrosian. Here is an example source:
    In her work she looked more at the effect on attitudes and perceived competence than on efficiency. I find your ideas a nice complement to what she is working on. How would the attitudes change with successful resolution, especially if it were ultimately more efficient?

  2. Thanks John – great to hear from you 🙂 I think you are right – context trumps all, I should probably state ‘all other things being equal’ or some other hedge 🙂 I’ll see if I can get hold of a pdf (my place doesn’t subscribe I think) – the paper looks excellent from the abstract 🙂

  3. I have started trying out using far more single words than phrases for the people I work with, most of whom are pre-literate. They come into contact with so many different people every day and many wouldn’t think to consider whether a phrase might just be the ‘closest fit’ to what the person wants to say. By contrast if the person uses one or two key words, the listener has to take into account the context and ask some more questions to make sure they understand correctly. We might not be producing full sentences but there is a much larger number of potential messages!

    • Oooh – there’s an aspect to that that I hadn’t considered – that the listener has more of an ‘understood’ role to direct… hmm. I might want to turn that over in my head for a little while – thank you!
