SRS with complex material

I noticed this bit in the QS Conf notes:

[quote]Q: How do you use SRS to learn ideas that are built upon other ideas (ie: not just facts, but systems; physics)?
A: Identify triggers where you would need to know something or perform some reasoning, and then SRS the simplest possible examples of those triggers with the mental response you would want.[/quote]
http://qswiki.com/index.php/Cognition

I (and my college roommate) have been working on a similar question for a couple years. We implemented a couple web interfaces for creating SRS cards that tie to web media and even ran a pilot group with a Real Analysis course at my college (not really any interesting data from that since we are really still just playing around). Now I’m going on to grad school, and I may continue to pursue this question.

I’m curious whether anyone here is interested in this topic or can point to information beyond the quote. If there is interest I can talk about some of what we’re doing with the current iteration of the project (called Learnstream).

[Nick: Maksym told me about you when we were discussing similar things. Also the grad school is CMU, so we’ll probably meet at some point. I promise I’m not a stalker. :P]

I really like this idea. It would be fun to see if cards could be auto-generated from even loosely structured knowledge systems (say, biological pathways). But the triggers (that’s Fogg’s usage, right?) outside of the SRS always seem to be situations where you wish you remembered something about a pathway!

Hey Ryan, great topic! I’m checking out Learnstream now. I’ll email you some feedback on it if you’re still actively improving it? Maksym gave us Skritter guys a ton of feedback when we were just college roommates with our web SRS project, and that was quite useful for us. In any case, I’m interested to hear more about how it’s working and what you’re planning to do with it. We’ll definitely meet up in Pittsburgh in the fall.

So on that QS Wiki entry, someone posted the Q, and I posted the A. The answer is largely theoretical: I haven’t tried explicitly doing this for very many things yet. But if the method of SRS is to train our brains to remember things by strengthening the path from stimulus to response, then the goal when designing SRS questions should be to pair useful responses with useful stimuli.

Most facts we traditionally SRS are like this already, because the real-world process is something like, “Oh, it’s a pyramid. What’s the formula for pyramid volume? Right, V = 1/3 * area of base * height.” Here the stimulus is basically asking oneself, “What’s the fact?”, and the response is to come up with the fact. Same brain pathways every time, easy to learn.
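(A throwaway sanity check of that fact, in Python; the function name is just for illustration:)

```python
# Pyramid volume: V = 1/3 * (area of base) * height.

def pyramid_volume(base_area, height):
    return base_area * height / 3

print(pyramid_volume(6, 5))  # 10.0
```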

An easy behavioral example which I realize I’ve already done is to remember to salt meat early on before searing it. Natural tendency: remember to salt the meat later, when it’s already cooking, or not at all. Fact I want to remember: salt the meat several minutes before cooking it. Trigger: NOT explicitly thinking, “When should I salt this?”. So here’s a bad formulation of this:

Question: When to salt meat when searing?
Answer: Several minutes before cooking it.

A great formulation which has worked effortlessly for me:

Question: What to do as soon as you put the pan on heat before searing meat?
Answer: Salt the meat.

Okay, so that’s an easy example. For procedural things I’ve SRSd like cooking and massage, it’s generally easy to do this–to figure out in which situation you’re actually going to want the knowledge, and form the SRS question around a trigger that will come up in that situation. If you can’t find a likely trigger, and the trigger isn’t going to just be you naturally thinking to recall a fact, then perhaps it’s a sign that it’s not useful knowledge!

Greg, can you give an example of how you’re thinking cards would be auto-generated from structured knowledge systems? I’m curious as to this idea. Generating cards seems to be one of the main reasons people don’t use SRS, so doing some of it automatically might be a big step. I’ve thought of some contexts in which you could do this, like if you’re a browser plugin and someone looks up the same fact multiple times, or if you’re teaching languages, or if you’ve got many people studying the same thing (a book, a course, a site) and can figure out where the facts are.
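As a toy sketch of what the very simplest kind of auto-generation could look like, here’s a few lines of Python that turn a term → definition glossary into forward and reverse cards. Everything here (function name, card format, example data) is hypothetical and isn’t how Learnstream or any real tool works:

```python
# Hypothetical sketch: auto-generating simple Q/A cards from a
# term -> definition glossary.

def make_cards(glossary):
    """Turn each term/definition pair into a forward and a reverse card."""
    cards = []
    for term, definition in glossary.items():
        cards.append({"q": f"What is {term}?", "a": definition})
        cards.append({"q": f"Which term means: {definition}?", "a": term})
    return cards

glossary = {"stimulus": "an event that triggers a response"}
for card in make_cards(glossary):
    print(card["q"], "->", card["a"])
```

Real sources (a book, a course site, a browser history) would obviously need much more than this, but the glossary case is the low-hanging fruit.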

You can’t completely automate card generation. Card generation is hard: every card has to have a clear answer, and you don’t want cards that have multiple right answers. This is especially true for loosely structured knowledge.

When one faces a complex topic, it’s good to have many cards about it. With your biological pathway, you can start with a few verbal questions about the pathway. Yes/no questions are perfectly fine.
Then you move to graphics: get a diagram of the pathway, open GIMP (or Photoshop), hide different parts of the diagram, and create cards where you have to remember the information you hid.
Redundancy is good.

Extra cards that display information multiple times don’t hurt. If you can answer them easy/easy/easy/easy/easy, you won’t see them many times and they won’t take much of your time.
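To see why easy cards are so cheap: in an SM-2-style scheduler (the family Anki and SuperMemo use), each successful review roughly multiplies the interval by the card’s ease factor. A deliberately simplified sketch (the 2.5 starting ease and 1-day first interval mirror SM-2 defaults, but this model ignores lapses and ease adjustments):

```python
# Simplified SM-2-style growth: each successful review multiplies the
# interval by the ease factor (2.5 is the SM-2 starting ease).
# Lapses and ease adjustments are deliberately ignored.

def easy_intervals(ease=2.5, first_interval=1, reviews=5):
    """Interval in days before each successive review of an always-easy card."""
    intervals, interval = [], first_interval
    for _ in range(reviews):
        intervals.append(round(interval))
        interval *= ease
    return intervals

print(easy_intervals())  # [1, 2, 6, 16, 39]
```

Five reviews already push the card out past a month, so redundant easy cards add only seconds of total review time.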

I personally don’t like the idea of online card systems for general knowledge. I still want to remember the names of the people I meet today in five years.
Having the cards in Anki on my own computer feels safer than trusting that an online solution will still be around in five years.

Unfortunately, copyright is a huge problem when it comes to sharing card decks that use images from secondary sources :frowning:

The Learnstream site linked above is actually our first iteration. I’d like to return to (some aspects of) it at some point because it was more suited to the power user, but we tried to throw in everything, and the resulting code and interface are pretty unwieldy.

The new version is at http://learnstream.heroku.com. We managed to make the interface much cleaner this time, but due to the rush of our project time, some of the content and concepts still need to be thought out. This time, we’re providing all “cards” up front for physics and calculus courses. Here’s briefly how we laid things out:

  • We have a database of the basic facts and procedures called components. These are written in the form of a question. Examples: What are the possible forces that could be acting on an object? What is the direction and magnitude of a gravitational force acting on an object?

  • We also have a database of quizzes that are conceptual or numerical problems and are composed of the components needed in order to get to the correct answer (yes, this assumption can be a problem; more on that later). Answering the quiz is approximately equivalent to doing the SRS for each of those components. There are more details I could go into if anyone is interested, but some of that may soon change.

  • Finally, we have lessons. Most of these are either video lectures or text-based tutorials. However, some, like http://learnstream.heroku.com/courses/1/lessons/239080850 (you’ll have to register), show the lecturer working through a single problem and use a lot of embedded quizzes. I like this style more than the usual problems because you can have each component singled out, and it makes more sense to introduce the behavioral and metacognitive stuff. (Sadly, a lot of the Khan Academy videos that do the problem examples also do some bad behaviors or skip good ones.)

I agree that the “behavioral triggering” approach seems to be the most promising with regards to creating traditional cards. So one alternative is just directly testing those components through SRS. However, I think doing problems must have some extra value and they’re a little more engaging. Then, as long as we’re using problems, it’s nice to account for the recall that is used to solve them. I do have a few other ideas for changing it up just to avoid that problem of assuming too much when we carry the spacing effect to all of the components. (There’s also a slight user experience problem of not understanding why problems “keep coming back.”)

I read a bit of research claiming that there’s no benefit to teaching general problem solving strategies. Parts of it resonated with me, but I didn’t have a clear explanation for what would be useful or not. This seems like it could be satisfying though. Here’s one of the relevant papers: http://www.cogtech.usc.edu/publications/clark_etal_2010_math.pdf Short and sweet. And by sweet, I mean it pisses off a lot of people.

The card generation stuff is more what we were interested in for the first Learnstream. My most recent conclusion has been that we should just build tools that work with specific needs rather than trying to make a generalist system. Christian’s ideas are definitely good. Also, there’s a site that does simple generation from documents, such as terms, or using Wikipedia: http://studyegg.com I’m hoping he eventually gets into more advanced features, but he seems to have been busy improving the interface.

Can forums like this implement some sort of glossary??

SRS is a new term to me and obviously Googling it did not get me very far…

[quote]SRS is a new term to me and obviously Googling it did not get me very far…[/quote]http://en.wikipedia.org/wiki/Spaced_repetition_system

Ryan, I’m trying out the new version and I like it, in general, although I would suggest many changes. Making really good cards so that students don’t have to is going to be one good way to go. Can teachers do this, or do they need you to do it?

There are web interfaces for any users that we assign as teachers to make the changes. We’re working through Harvey Mudd, but we haven’t had any faculty come forward with much interest yet. For us on the project, we do most of the work in Google docs and have a slightly complicated importing procedure.

I’d be very interested to hear your feedback! There are many directions we could take, but I’m hoping that I can get enough fixed up this summer that it’s at least a nice resource for students to have. We underallocated time for content production – almost everything there now is just open licensed stuff and there’s plenty more we still haven’t processed, so we’ve barely thought about writing new questions.

[quote]We underallocated time for content production – almost everything there now is just open licensed stuff and there’s plenty more we still haven’t processed, so we’ve barely thought about writing new questions.[/quote]How about allowing user submitted cards?
You could both allow other people to rate the card and do statistics to see whether knowledge of the cards helps general learning.

Provided you have enough users, that will over time give you well-worded cards that cover the basics the best way.

[quote]If you can’t find a likely trigger, and the trigger isn’t going to just be you naturally thinking to recall a fact, then perhaps it’s a sign that it’s not useful knowledge![/quote]There are cards that are basic for other knowledge. You seldom need to actively recall the basics via a trigger but learning basics well is still very important.

[quote=“Christian_Kleineidam, post:10, topic:181”]
How about allowing user submitted cards?
You could both allow other people to rate the card and do statistics to see whether knowledge of the cards helps general learning.

Provided you have enough users that will give you over time well worded cards that cover the basics the best way.[/quote]

We played with that a little bit in the first iteration (and it’s still available: http://beta.learnstream.org/). We weren’t careful enough to collect any meaningful data. With a good set of base cards, we could maybe see how much people improve based on what cards they make. But in general, we found that participation in terms of creating cards was quite low, even with the incentive of bonus points in the class. People did earn the points, but for the most part just by studying the existing cards.

I’d say those can still be put in a “trigger/response” format:

Trigger: When I see [basic term], …
Response: …I need to know what the heck [basic term] is

Or, for any Cloze deletion card (fill-in-the-blank): “When I see the context of the non-deleted information, produce the deleted information”
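That cloze pattern is mechanical enough to sketch in a few lines of Python (a toy generator, not any real tool’s API):

```python
# Toy cloze generator: for each key term, delete it from the sentence and
# make the remaining context the question (trigger); the term is the answer.

def cloze_cards(sentence, terms):
    return [(sentence.replace(term, "[...]"), term) for term in terms]

for q, a in cloze_cards("Force equals mass times acceleration",
                        ["Force", "mass", "acceleration"]):
    print(q, "->", a)
```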

If that seems like overgeneralizing, then one example that still doesn’t work is Nick’s first one about cooking meat. Another is something like “Internalize the idea of force”. Not only is it a vague action, it doesn’t have any information about when it should be triggered. You could maybe fix it by saying “Whenever you see this card, …” or “When doing a problem about force, …” and then more precisely defining how to go about internalizing forces. But hopefully at some point you’d ask whether that is actually useful and maybe focus on teaching the basic knowledge about forces instead.

[quote]With a good set of base cards, we could maybe see how much people improve based on what cards they make.[/quote]How do you know that your base set is good if you don’t have any comparison?

[quote]You could maybe fix it by saying “Whenever you see this card, …” or “When doing a problem about force, …” and then more precisely defining how to go about internalizing forces. [/quote]I think there’s a good chance that those questions might become too complex.

I want to memorize F = m * a
What cards do I create?
F = m * […]
F:=Force; m:=mass

F = […] * a
F:=Force; a:=acceleration

m * a = […]
m:=mass a:=acceleration

F = […]
F:=Force; m:=mass a:=acceleration

m = […]/a
m:=mass a:=acceleration

The answer to all cards is:
F = m * a
F:=Force; m:=mass a:=acceleration

None of the cards resembles an “action”. They purposely don’t say “solve this equation”, as that would take reading time that’s essentially wasted.
Every single card should be as simple as possible.

Usability lesson: It’s more fun to answer 4 cards that each take 2.5 seconds to answer than to answer one card that takes 10 seconds to answer.
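The F = m * a card family above is regular enough that you could generate it mechanically. A toy sketch (the legend format just mimics the cards above; nothing here is a real tool’s API):

```python
# Toy generator for the F = m * a card family: one card per hidden symbol,
# each carrying the legend for the remaining symbols. The answer to every
# card is the full formula, as in the post above.

LEGEND = {"F": "Force", "m": "mass", "a": "acceleration"}
FORMULA = "F = m * a"

def formula_cards():
    cards = []
    for symbol in LEGEND:
        question = FORMULA.replace(symbol, "[...]", 1)
        legend = "; ".join(f"{s}:={name}" for s, name in LEGEND.items()
                           if s != symbol)
        cards.append((question + "  (" + legend + ")", FORMULA))
    return cards

for q, a in formula_cards():
    print(q, "->", a)
```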

To take another look at Nick’s example of memorizing the pyramid formula, I would create four cards:

Pyramid volume:
V = […] * A * h
V:=Volume; A:=Area; h:=height

Pyramid volume:
V = 1/3 * […] * h
V:=Volume; h:=height

Pyramid volume:
V = 1/3 * A * […]
V:=Volume; A:=Area

and a fourth card with an image of a pyramid that graphically shows A and h while asking
V = […]

The answers of the first three cards could also show the image.

[quote]But in general, we found that participation in terms of creating cards was quite low, even with the incentive of bonus points in the class. [/quote]This might be just a problem of presenting things the wrong way.

Let’s say someone gets a card wrong 16 times. In Anki, the card would automatically be suspended for being a difficult card. The user is supposed to review those cards and make them clearer.

You could do something similar. How about a popup:
“Look this card seems really difficult for you, how about creating new cards to cover the information in this card?
Here’s a simple dialog to enter cards.”
With such a mechanism at least some of your users will create new cards. You can give those cards to other people as well.

This process should give you a steady flow of new cards. Even better, it gives you a steady flow of cards that explain the concepts that your existing cards don’t cover well.
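That popup mechanism is simple to sketch. Anki’s actual leech handling uses a default threshold of 8 lapses; the function below is a hypothetical illustration of the suggested prompt, not Anki’s implementation:

```python
# Hypothetical version of the "rework difficult cards" popup. The threshold
# follows Anki's default leech setting (8 lapses); the message is invented.

LEECH_THRESHOLD = 8

def leech_prompt(lapses, threshold=LEECH_THRESHOLD):
    """Return a suggestion once a card has lapsed too often, else None."""
    if lapses >= threshold:
        return ("This card seems really difficult for you. How about "
                "creating new cards to cover the information in it?")
    return None

print(leech_prompt(16))
```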

Christian, do you really create that many cards for simple geometric formulas, even with pictures? I know the Supermemo SRS principles say to make cards as simple as possible, but surely there’s a tradeoff between time required to create the extra cards and the slightly increased difficulty of the whole formula.

I love the idea of automatically suggesting reworking of cards that appear difficult. When Anki suspends my difficult cards, I always just ignore them as the default course of action. Not ideal.

[quote]Christian, do you really create that many cards for simple geometric formulas, even with pictures?[/quote]If I’m just creating cards for my own usage, then I might not invest the time to create a picture for the pyramid formula.
If I’m however interested in creating the best possible deck of cards to teach geometry I would use pictures.

[quote]surely there’s a tradeoff between time required to create the extra cards and the slightly increased difficulty of the whole formula.[/quote]If I’m doing physics problems, it matters whether I need 1 or 3 seconds to recall F = m * a.
Being redundant in card creation helps you retrieve the memory quickly.

One example where I really felt the benefit of splitting cards into multiples is the term dates of the presidents of my country. At the beginning, I had the start and end date of each term on the same card. Everything went much better after I split the information onto multiple cards.

In the future, I will test multiple ways of learning full birthdays in a more controlled way. We’re on a Quantified Self forum, and therefore don’t have to accept the minimum information principle without challenge.