Flash cards are excellent tools for aiding study of all sorts, whether the goal is to learn the vocabulary of a new language, remember "important" trivia for a meaningless test, or anything in between on the scale of utility. For maximum effectiveness, however, simply drilling a card over and over again is a highly inefficient approach. Fortunately, much research on the subject has been released publicly, including the SuperMemo series of algorithms, which are designed to increase effectiveness via a technique known as spaced repetition. Rooted in neurological research, these algorithms model how best to form long-lasting memories in the brain.
DSM-9 is a flashcard tool to aid with memorization, designed to be as easy to use as possible. It is primarily intended for use with physical flash cards, though it is quite excellent for use with digital decks, as well. DSM-9 implements the SuperMemo-2 algorithm. While newer variants exist, SM-2 is one of the most well-known, and its variants are used in Mnemosyne, Anki, Oboeta, and various other tools.
The SM-2 algorithm is actually extremely simple - simple enough that you could do the math with pen and paper, though that would be extremely tedious. Every card is assigned an easiness factor, which measures how well you remember it. Every time you test yourself on a card, you give a grade from zero to five indicating how well you remembered the card. The change in the easiness factor is determined via a simple polynomial in the grade, and the time at which to next test the card - in days - is determined by the new easiness factor and the amount of time that has passed since the card was last tested. If the grade given is below three, however, the easiness factor is not updated - instead, the card is considered forgotten (or insufficiently memorized), and immediately marked as due.
The change in the easiness factor after a review is -0.02 times the square of the grade, plus 0.28 times the grade, minus 0.8. Thus, if the grade given is 5 (perfect recall), the change is -0.5 + 1.4 - 0.8 = 0.1. Since there are only six possible values for the grade, the possible changes are trivial to list out: for grades 3 through 5, they are -0.14, 0, and 0.1 respectively. A grade of zero works out to -0.8, so a single instance of totally forgetting counts the same as eight instances of perfect recall. The easiness factor can never go below 1.3.
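To make the update concrete, here is a minimal sketch in Python of one review step. The polynomial and the 1.3 floor are taken from the text above; the spacing rule (next interval = previous interval times the new factor) is the standard SM-2 convention and is an assumption here - DSM-9's internal API and exact rule may differ.

```python
def update(easiness, interval, grade):
    """One review step in the style of SM-2 (illustrative sketch).

    easiness: current easiness factor
    interval: days since the card was last tested
    grade:    integer from 0 to 5
    Returns (new easiness factor, days until the next test).
    """
    if grade < 3:
        # Forgotten: the factor is left alone and the card
        # is immediately due again.
        return easiness, 0
    # The polynomial change from the text: -0.02g^2 + 0.28g - 0.8.
    easiness += -0.02 * grade**2 + 0.28 * grade - 0.8
    # The factor can never go below 1.3.
    easiness = max(easiness, 1.3)
    # Assumed standard SM-2 spacing: scale the last gap by the factor.
    return round(easiness, 2), round(interval * easiness)

# Grade 5 raises the factor by 0.1; grade 4 leaves it unchanged.
print(update(2.5, 10, 5))   # (2.6, 26)
print(update(2.5, 10, 4))   # (2.5, 25)
```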
While tools such as Mnemosyne and Anki are designed to store cards on the computer and test you while you use the device, DSM-9 takes a different approach. DSM-9 expects you to keep a physical deck of cards - or a file on an eReader, or a notebook, or a plain file on the computer; you should use whatever approach works for you. For simplicity’s sake, we use the term "deck." This provides a much greater degree of flexibility. DSM-9 provides a number of tools to make life easier, without making decisions for you.
To accommodate this, two files are needed per deck. The first file, a .deck file, provides a name for every card in the deck, and optionally includes their contents. This file is managed by you. When you add a new card to your deck, simply mark a unique name on the physical card, and add that name as a line in the .deck file. A simple hierarchical naming scheme for a chemistry deck, for instance, might contain entries like this:
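(The following entries are invented for illustration, written in the name@@@front@@@back field format detailed below.)

```
chem/elements/symbols/iron@@@Chemical symbol for iron?@@@Fe
chem/elements/symbols/tin@@@Chemical symbol for tin?@@@Sn
chem/acids/formulas/sulfuric@@@Formula for sulfuric acid?@@@H2SO4
```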
A single deck may contain up to 65,534 cards. Cards are formatted with between one and three fields. The first field, which is mandatory, is the unique name. The second and third are both optional, and are the front and back of the card respectively. The fields are separated with three at symbols (@@@), a sequence which may never appear within a field. The front and back may be stored in any plain text format - unadorned text, Markdown, HTML, or anything else.
Note that, despite the front and back being optional, the delimiter is currently required.
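A deck line can thus be handled with a simple split. The function below is an illustrative sketch of the format, not DSM-9's actual parser; the name and error handling are invented.

```python
def parse_card(line):
    """Split one deck line into (name, front, back).

    The name is mandatory; front and back are optional and
    come back as None when absent.
    """
    fields = line.rstrip("\n").split("@@@")
    if not 1 <= len(fields) <= 3:
        raise ValueError("expected 1 to 3 @@@-separated fields")
    name = fields[0]
    if not name:
        raise ValueError("card name is mandatory")
    front = fields[1] if len(fields) > 1 else None
    back = fields[2] if len(fields) > 2 else None
    return name, front, back

print(parse_card("chem/elements/symbols/iron@@@Symbol for iron?@@@Fe"))
# → ('chem/elements/symbols/iron', 'Symbol for iron?', 'Fe')
```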
In addition to the deck file, there is a log file, which is managed entirely by DSM-9. All that you need to know, as the user, is that the log file must be kept in the same folder as the deck file, and must have the same name (differing only in file extension). Details are provided here for completeness; this section is not important to everyday usage of DSM-9. Every line in a log file contains three fields, separated by the same three-at-symbol delimiter: a card name, the timestamp at which the card was tested, and the grade given at the time of testing. On startup, every line is processed: each line is mapped to a card via the name, and the card's due date is updated. When a new value is logged, it is written to the file, and the internal values are updated.
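The replay step can be sketched as follows. This is a simplification for illustration: it only keeps the most recent entry per card, the Unix-second timestamps are an assumption, and DSM-9's actual on-disk details and scheduling update may differ.

```python
def replay_log(lines):
    """Replay log lines in order, keeping the latest
    (timestamp, grade) seen for each card name."""
    latest = {}
    for line in lines:
        name, stamp, grade = line.rstrip("\n").split("@@@")
        # Later lines overwrite earlier ones for the same card.
        latest[name] = (int(stamp), int(grade))
    return latest

log = [
    "chem/elements/symbols/iron@@@1700000000@@@4",
    "chem/elements/symbols/iron@@@1700086400@@@5",
]
print(replay_log(log))
# → {'chem/elements/symbols/iron': (1700086400, 5)}
```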
DSM-9 is split into a few pieces: the library, and a few clients. The library handles the deck and log files, and the core logic (updating card info when a new log entry is produced, scheduling due cards, and so on). The due client prints the list of due cards for a given deck. The log client prompts for grades for cards in the order in which they are due. The intended usage is to maintain a deck of physical cards, schedule them using the due client, test them, update the logs using the log client, and repeat ad infinitum.
The current version of the library is extremely well-optimized, and can load a large deck and a large log file in under 50 milliseconds even on a particularly slow laptop. The library was redesigned from scratch for v2.0, with effort put into efficient usage of RAM, cache, CPU, and disk. Where before, a small buffer was used for the deck and the logs, v2.0 performs a single linear load each for the deck file and the log file, reading their entire contents into memory. Since decks cap out at 64K cards, even a deck of the highest possible size with large cards easily fits in under 50M. The largest deck I actually use, a deck containing the most common Japanese kanji, is 11M. Log files are roughly 30 bytes per entry, so if every card in a 10K deck were tested every day for a year, the result would be a 365 * 10,000 * 30 = ~110M file. Obviously, that far outstrips regular usage. However, this can become a problem for decks which are very large and practiced frequently for a very long time (decades, not years). This won't be a problem any time soon, but an improved method for handling logs is planned for v3.0.
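The log-growth arithmetic above is easy to check directly (the 30-bytes-per-entry figure is the rough estimate from the text):

```python
ENTRY_BYTES = 30          # rough size of one log line, per the text
cards, days = 10_000, 365
total = cards * days * ENTRY_BYTES
print(total)              # 109500000, i.e. ~110M
```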
In addition to the disk optimization, RAM usage has also been optimized. Rather than making copies of the data for every card (the name, and optionally the front and back), only their position within the deck file is tracked, which uses far fewer resources. In addition, the memory layout of the card structures has been redesigned and optimized such that over 15x more cards fit into the cache now. The various hot loops are now vastly more cache-efficient. Searching for a card by its name, for instance, no longer needs to load in any information about a card other than its name. One major example is looking up cards by due dates: now, all of the info needed for the lookup for 16 cards fits into a single cache line. Before, a single card’s information would require nearly a full cache line.
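As an illustration of the kind of layout this implies - the exact fields here are assumptions, not DSM-9's real structures - if the due-date lookup needs only a 16-bit card index (sufficient, since decks cap at 65,534 cards) and a 16-bit due-day value, then each record is 4 bytes and 16 of them pack into one 64-byte cache line:

```python
import struct

# Hypothetical 4-byte record: 16-bit card index + 16-bit due day.
RECORD = struct.Struct("<HH")
CACHE_LINE = 64

print(RECORD.size)                # 4 bytes per record
print(CACHE_LINE // RECORD.size)  # 16 records per cache line

# Pack and unpack one record round-trip.
blob = RECORD.pack(12345, 730)
print(RECORD.unpack(blob))        # (12345, 730)
```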
All told, the redesign brought massive, concrete gains. The initial implementation of the v1.0 loader took over eight seconds to load the Japanese kanji deck, which motivated a nontrivial amount of optimization effort, bringing loading times down to roughly a second. The v2.0 loader's loading time is under 150ms; on a different - but still slow - machine, it can now load in under 35 milliseconds. Raw timings for decks of various sizes, along with throughput calculations, measured on the PineBook Pro, are as follows:
deck size   mean time   max time   min time   min rate   mean rate
64k         349us       558us      305us      112M/s     179M/s
128k        645us       981us      507us      127M/s     193M/s
256k        1245us      1813us     1052us     137M/s     200M/s
512k        2220us      3373us     1895us     148M/s     225M/s
1M          4015us      5699us     3163us     175M/s     249M/s
2M          6754us      10.1ms     4857us     198M/s     296M/s
4M          10.5ms      20.7ms     7978us     193M/s     380M/s
8M          18.0ms      26.6ms     15.7ms     300M/s     444M/s
16M         33.4ms      40.9ms     31.7ms     391M/s     479M/s
32M         80.5ms      81.3ms     76.5ms     393M/s     397M/s
64M         156ms       157ms      148.5ms    407M/s     410M/s
Of note: while average throughput generally goes up as the size of the deck increases, and worst-case throughput and deck size are directly correlated, the loading times for small decks are extremely good. The kanji deck takes 35ms and is 11M for ~7K cards, but it is not a physical deck: it was converted from an AnkiWeb source using a hacky-but-functional script, and its cards are all in HTML, contain large Unicode characters, and carry a lot of information. DSM-9's target use case, however, is physical decks, often ones which already exist. Such decks will not contain fronts or backs the way the kanji deck does, so deck size is determined entirely by the number of cards and the average name length. A deck of 5K cards, with a typical name length of 50 characters, would fit in under 256K - and would thus load in roughly one millisecond. It is not unreasonable to expect that, when used as intended, DSM-9 will fully load in under a millisecond.
Loading time is determined almost entirely, to my knowledge, by cache size and RAM speed. The load time is dominated by splitting the deck into lines so that card names are known, which is required for log entries to be mapped correctly. This requires processing the entire deck - but DSM-9 does that processing in a purely linear fashion, which allows hardware prefetching to eliminate a great many cache misses. As such, by not including the fronts and backs, the amount of data that must be pulled into cache and processed shrinks dramatically. The kanji deck is 11M, but if only the names were stored, it would be roughly 105K. The numbers given above show that the worst-case loading time measured for such a small size would be under 980 microseconds.
Given DSM-9’s target niche, and the critical constraint that all data be stored in plain text, this seems quite excellent.