Johnny Mnemonic walks again: designing your memory properly

Apologies for going somewhat off-topic. I do occasional posts about memory palaces: not the flashy type beloved of fiction, but doing it properly. These are not posts about speed-memorising digits for memory competitions (although they may help with that); they are posts about holding a massive amount of information in your head and making sure it is useful. They are aimed at people who’ve already had an introduction to the field, who’ve found memory techniques useful and want to take them a little further.

Memory techniques may have been used for hundreds of years, but only by the very few and in comparative isolation. By comparison, the last fifty years of algorithm design have employed millions of programmers and mathematicians, developing the theory, testing the results and proving the pragmatics with incredible intensity.

The purpose of this post is to show the correspondence between designing data structures for algorithms and designing memory structures for humans. It’s my ambition over the next few months to produce a series of articles looking at each individual correspondence in detail. In the meantime, I highly recommend that anyone interested in mnemonics pick up a book on algorithms and data structures.

Making this ‘information is information’ correspondence has moved my memory practice on in leaps and bounds – there is so much in terms of techniques and approaches that can be transferred practically straight across, and that’s just scratching the surface of the total value of the resource.

Let us consider three fundamental memory structures (I’m using the terminology from x because it’s the most accessible, not because it’s the best):

  1. chain
  2. peg
  3. Roman room (you probably recognise this as the method of loci; I’m using this terminology to make it easy to draw a particular distinction later on)

I’ve added in the Wikipedia description to make sure everyone is talking about the same thing.

Now let’s look at the set of operations that you might want to perform (sketched as a code interface just after the list):

  1. work through all elements in a set (forwards/backwards)
  2. look up, say, the 7th element
  3. add an element at the start of the list
  4. delete an element in the middle
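
In programming terms, that operation set is just an interface. Here is a minimal sketch in Python (the protocol and method names are my own, not standard terminology):

```python
from typing import Iterator, Protocol, TypeVar

T = TypeVar("T")

class InfoStructure(Protocol[T]):
    """The operations above, written as a generic interface."""

    def iterate(self, reverse: bool = False) -> Iterator[T]:
        """1. Work through all elements, forwards or backwards."""
        ...

    def lookup(self, position: int) -> T:
        """2. Look up, say, the 7th element."""
        ...

    def add_at_start(self, item: T) -> None:
        """3. Add an element at the start."""
        ...

    def delete_at(self, position: int) -> None:
        """4. Delete an element in the middle."""
        ...
```

Any structure, mental or digital, can implement all of these; the interesting question is how cheaply.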

Each of the memory structures performs differently with these operations. For example, a chain structure isn’t very good at returning the 17th element in the list, but on the other hand adding elements to the start of either a peg list or a Roman room is pretty hard.


| Structure | Iterate through all elements | Look up 7th element | Add element at start | Add element at end | Add element in middle |
| --- | --- | --- | --- | --- | --- |
| Chain | easy | hard | easy | easy | medium |
| Peg | easy | easy | hard | easy | hard |
| Roman room | easy | medium | hard | medium | hard |

In fact, we have a table showing hard/easy for each structure/operation pairing. When you commit anything to memory you are, consciously or not, thinking about the operations you will use on the data and selecting a sensible structure. This is why you may remember stations on a railway line using a chain, but sports championship results using an index of years.

Now let’s look at three genuine algorithmic data structures:

  1. linked list
  2. array
  3. record

(For the uninitiated: a linked list is a set of objects where each one links to the next one in the list, exactly analogous to the chain technique. An array is a structure where integers are mapped to objects, exactly analogous to mapping numbers to objects using the peg method. And a record is a structured collection of labelled pointers to objects; we’ll discuss the very interesting links between that and the Roman room method in a future post.)
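
To make the analogy concrete, here is a minimal Python sketch of all three. It is purely illustrative; the names and example items are my own:

```python
class Node:
    """One link in a chain: an item plus a pointer to the next item."""
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def build_chain(items):
    """Build a linked list (chain) from a sequence of items."""
    head = None
    for item in reversed(items):
        head = Node(item, head)
    return head

# Linked list / chain: each item only knows about the next one.
head = build_chain(["duck", "swan", "tricycle"])

# Array / peg: integers map directly to objects, like numbered pegs.
pegs = ["duck", "swan", "tricycle"]        # pegs[0], pegs[1], pegs[2]

# Record / Roman room: labelled slots point to objects, like the
# fixed locations in a familiar room.
room = {"doorway": "duck", "fireplace": "swan", "window": "tricycle"}
```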

It turns out that, within programming, data structures are required to support the same operations as their memory counterparts. So let’s look at the hard/easy pairings:

| Structure | Iterate through all elements | Look up 7th element | Add element at start | Add element at end | Add element in middle |
| --- | --- | --- | --- | --- | --- |
| Linked list | easy | hard | easy | easy | medium |
| Array | easy | easy | hard | easy | hard |
| Record/hashmap | easy | medium | hard | medium | hard |
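
You can see the same pattern by poking at these structures in code. A rough sketch, using Python’s list as the array (indicative, not a benchmark):

```python
N = 100_000

# Array: looking things up is cheap, inserting at the front is not.
array = list(range(N))
seventh = array[6]     # look up the 7th element: one index step, O(1)
array.insert(0, -1)    # add at the start: shifts all N elements, O(n)

# A hand-rolled linked list behaves the other way round.
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

head = None
for value in reversed(range(N)):
    head = Node(value, head)

head = Node(-1, head)  # add at the start: one new link, O(1)

node = head            # look up the 7th element: walk the chain, O(n)
for _ in range(6):
    node = node.next
seventh = node.value
```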

That looks pretty familiar, right? It turns out that information is information regardless of medium, so there is a very genuine correspondence between the structures. The correspondence goes quite deep as well: sorting the information in a linked list requires much less extra space than sorting an array of the same information (I’m thinking of arrays of big dynamically sized objects in something like Java here; C programmers and their arrays of unsigned ints can just let this pass), in the same way that chaining memory items requires fewer connections than putting the same information into a peg structure.
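
To see the space point concretely: a linked list can be merge-sorted by rewiring its existing links, with no second buffer of n elements, whereas the textbook array merge sort copies elements into auxiliary storage. A sketch (Node is redefined here so the snippet stands alone):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def merge_sort(head):
    """Sort a linked list by relinking nodes; no auxiliary array needed."""
    if head is None or head.next is None:
        return head
    # Split the list in two with slow/fast pointers.
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = merge_sort(head), merge_sort(mid)
    # Merge by repointing .next fields; the values are never copied.
    dummy = tail = Node(None)
    while left is not None and right is not None:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left if left is not None else right
    return dummy.next
```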

I’m going to hammer this point home with a passage from Wikipedia about the disadvantages of the chain method:

“There are three limitations to the [chain] system. The first is that there is no numerical order imposed when memorizing, hence the practitioner cannot immediately determine the numerical position of an item; this can be solved by bundling numerical markers at set points in the chain or using the peg system instead. The second is that if any of the items is forgotten, the entire list may be in jeopardy. The third is the potential for confusing repeated segments of the list”.

I don’t think many programmers would challenge these as being any different from the disadvantages of a linked list 🙂
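
For the sceptical, the three limitations translate into code almost word for word. A toy example (the station names are purely illustrative):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

stations = None
for name in reversed(["Euston", "Camden", "Kentish Town", "Archway"]):
    stations = Node(name, stations)

# Limitation 1: no numerical order. Finding an item's position means
# walking the chain and counting as you go.
def position_of(head, target):
    index, node = 0, head
    while node is not None:
        if node.value == target:
            return index
        index, node = index + 1, node.next
    return -1

# Limitation 2: forget one link and everything after it is in jeopardy.
stations.next.next = None   # drop the link out of "Camden"
# "Kentish Town" and "Archway" are now unreachable from the head.

# Limitation 3: a repeated segment risks a cycle, i.e. going round in
# circles instead of reaching the end of the list.
```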
