Hey thanks again for being interested!
The original method in ActivityRecommender for users to specify the score of a participation was to type in an absolute score from 0 to 1, but it was difficult for the user to evaluate how close to perfect (a score of 1) a participation was. For example, if I was doing homework and the homework was mandatory, it was tempting to record that the participation deserved a perfect score of 1 because I didn’t really have much choice in having done it.
Later I changed the way that score entry works: users now specify how happy they were (per unit time) during their most recent participation compared to the previous one. That made it much easier for the user to say things like “this participation was 1% better than the previous thing I did that day” (perhaps that was eating dinner), without worrying about hypotheticals like how much they liked a participation compared to what might have happened instead, or compared to something great that happened once, a very long time ago.
It would be easy enough to reintroduce the ability for users to directly specify the score of a participation if there’s a need for it, though. The relative ratings get converted into absolute scores that get saved into a data file. Would you elaborate?
There are a bunch of parts to the analysis that gets used for suggestions and feedback, but I’ll start with a high level description.
There are a few different values that ActivityRecommender models. The value that it aims to maximize is the net present value of your happiness (exactly like the net present value of a stock in the stock market: https://en.wikipedia.org/wiki/Net_present_value), although I’m using a half-life of about two years, which works out to a yearly rate of return of about 41%.
Note that a user’s net present happiness (like the fundamental value of a stock in the stock market) changes over time as new events happen to the user. Also note that our understanding of the user’s net present happiness at any given moment in the past (like our understanding of a stock’s value) also changes over time as we discover new information in the present. As more time passes and more data accumulates, we gain an increasingly accurate understanding of what the user’s net present happiness was at any given point in the past, particularly because happiness far in the future is weighted exponentially less than happiness in the near future.
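To make the arithmetic concrete, here’s a minimal sketch (in Python, with names I made up; this isn’t ActivityRecommender’s actual code) of how a two-year half-life maps to roughly a 41% yearly rate of return, and how a net present happiness could be computed as an exponentially weighted average:

```python
# Sketch only: HALF_LIFE_YEARS and net_present_happiness are hypothetical
# names, not ActivityRecommender's real API.

HALF_LIFE_YEARS = 2.0

# Per-year decay factor: a unit of happiness loses half its weight
# every HALF_LIFE_YEARS.
yearly_decay = 0.5 ** (1.0 / HALF_LIFE_YEARS)   # about 0.707

# Equivalent "rate of return": how much a unit of happiness one year
# from now must grow by to match a unit of happiness today.
yearly_rate = 1.0 / yearly_decay - 1.0          # about 0.414, i.e. ~41%

def net_present_happiness(samples, half_life_years=HALF_LIFE_YEARS):
    """Exponentially weighted average of (years_from_now, happiness)
    samples, where happiness is on a 0..1 scale."""
    weighted = 0.0
    total_weight = 0.0
    for t, h in samples:
        w = 0.5 ** (t / half_life_years)
        weighted += w * h
        total_weight += w
    return weighted / total_weight if total_weight > 0 else 0.0
```

For example, being perfectly happy now but unhappy two years from now would average out to about 2/3, because the sample two years out only carries half the weight of the sample now.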
So, ActivityRecommender uses its best (and ever-updating) estimate of the graph of the user’s net present happiness over time and attempts to identify patterns between information about the user and what the user’s net present happiness is.
It uses several approaches to estimate the user’s net present happiness, each of which computes a mean, standard deviation and weight, which later get combined into a final estimate. Some approaches are more data-driven and have higher weight when ActivityRecommender has more data about the user, and some are more like very good guesses and have higher weight when ActivityRecommender has less data.
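One plausible way to combine several (mean, standard deviation, weight) estimates is a weighted mixture, where the combined variance includes both each estimate’s own spread and how far its mean sits from the combined mean. This is my own illustrative sketch, not necessarily the exact formula ActivityRecommender uses:

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    mean: float
    stddev: float
    weight: float

def combine(estimates):
    """Combine estimates as a weighted mixture (hypothetical formula)."""
    total_w = sum(e.weight for e in estimates)
    # Weighted mean: each approach's weight is its relative trust.
    mean = sum(e.weight * e.mean for e in estimates) / total_w
    # Mixture variance: within-estimate spread plus between-estimate spread.
    var = sum(e.weight * (e.stddev ** 2 + (e.mean - mean) ** 2)
              for e in estimates) / total_w
    return Estimate(mean, var ** 0.5, total_w)
```

So a low-weight guess and a high-weight data-driven estimate blend toward the data-driven one, and disagreement between them widens the combined standard deviation.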
For a user that has entered a lot of data, one of the bigger factors in ActivityRecommender’s estimate (of what the user’s net present happiness will be if it makes a certain suggestion) will simply be based on asking its numerical interpolator what the user’s net present happiness has been in the past after receiving a similar suggestion under similar circumstances.
For a user that has entered less data, one of the bigger factors in ActivityRecommender’s estimate (for the user’s net present happiness after receiving a suggestion to do a given activity) will be how much the user reports enjoying the given activity.
For a user that has entered very little data, one of the bigger factors in ActivityRecommender’s estimate (for the user’s net present happiness after receiving a suggestion to do a given activity) will be how much the user enjoys doing similar activities.
For example, suppose the user has declared an activity called Sports, plus an activity named Soccer that inherits from Sports. If the user often assigns a high rating to Sports and to other activities under it, then before ActivityRecommender has much data about Soccer, it will hypothesize that the user will often assign a high rating to Soccer too.
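The Sports/Soccer fallback could be sketched like this (again, hypothetical names and a made-up default; the real inheritance logic is surely more nuanced):

```python
# Sketch: with little data on a child activity, fall back to its
# ancestors' rating history as a prior.

class Activity:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.ratings = []  # past scores on a 0..1 scale

def estimated_rating(activity, min_samples=5):
    """Walk up the inheritance chain until enough ratings are found."""
    node = activity
    while node is not None:
        if len(node.ratings) >= min_samples:
            return sum(node.ratings) / len(node.ratings)
        node = node.parent
    return 0.5  # neutral guess when no data exists anywhere (assumed default)

sports = Activity("Sports")
soccer = Activity("Soccer", parent=sports)
sports.ratings = [0.8, 0.9, 0.85, 0.9, 0.8]
# Soccer has no ratings yet, so its estimate comes from Sports: 0.85.
```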
When ActivityRecommender computes the value estimated by one of these approaches, it mostly does so by computing some numerical features and feeding them into a numerical interpolator. I think the factors that ActivityRecommender uses are:
1. The time of day.
2. How many seconds it has been since the activity was last considered.
3. Whether the activity was done or skipped the last time it was considered.
4. The residual of a least-squares fit: compute a graph of the user’s cumulative time spent participating in the activity over time, fit a least-squares regression line, and take the difference between the most recent (top-right) point of the graph and the value predicted by the line.
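That last feature (the regression residual) can be sketched as follows; the function name and exact formulation are mine, but the idea is that a positive residual means the user has recently been participating more than their long-term trend predicts:

```python
def recent_participation_residual(times, durations):
    """Hypothetical feature sketch: fit a least-squares line to cumulative
    participation time vs. calendar time, and return the residual at the
    most recent point (positive = participating above the long-term trend)."""
    # Build the cumulative-participation graph.
    cumulative = []
    total = 0.0
    for d in durations:
        total += d
        cumulative.append(total)
    # Ordinary least-squares fit of cumulative vs. times.
    n = len(times)
    mean_x = sum(times) / n
    mean_y = sum(cumulative) / n
    sxx = sum((x - mean_x) ** 2 for x in times)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(times, cumulative))
    slope = sxy / sxx if sxx else 0.0
    intercept = mean_y - slope * mean_x
    # Residual at the latest (top-right) point of the graph.
    predicted_last = slope * times[-1] + intercept
    return cumulative[-1] - predicted_last
```

A perfectly steady habit yields a residual near zero; a recent burst of extra participation yields a positive one.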
ActivityRecommender also takes into account the probability that a user will take a suggestion, and considers time spent thinking to be time when the user’s happiness is 0.
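A simple way to express that idea, as a sketch with invented names and an assumed formula (not ActivityRecommender’s actual computation), is to blend the two outcomes by acceptance probability and count thinking time as time at happiness 0:

```python
def expected_value_of_suggestion(p_accept, value_if_taken, value_otherwise,
                                 thinking_seconds, horizon_seconds):
    """Hypothetical sketch: expected happiness over a horizon, where time
    spent deciding contributes zero happiness."""
    active_seconds = horizon_seconds - thinking_seconds
    # Blend the two outcomes by the probability the user takes the suggestion.
    blended = p_accept * value_if_taken + (1 - p_accept) * value_otherwise
    # Thinking time contributes 0 happiness to the average.
    return (blended * active_seconds + 0.0 * thinking_seconds) / horizon_seconds
```

So even a great suggestion loses some expected value if the user is likely to spend a long time deliberating over it.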
There are also factors for efficiency that I’ll leave out for the moment until anyone mentions interest in that too.
Once ActivityRecommender has an estimate of the net present happiness for each possible suggestion, it also considers the possibility that it will learn new information as a result of making these suggestions, and will essentially slightly favor activities having less data, because they are more likely to result in significant positive discoveries that can increase future benefit. Whichever activity is expected to maximize net future happiness, accounting for possible learning, is then suggested.
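Favoring activities with less data is similar in spirit to an upper-confidence-bound bandit heuristic. Here’s a rough sketch of that idea (the bonus formula and the 0.05 exploration weight are my own illustrative choices, not ActivityRecommender’s actual numbers):

```python
import math

def choose_suggestion(candidates):
    """candidates: list of (name, expected_happiness, num_observations).
    Adds a small bonus that shrinks as observations accumulate, so
    less-explored activities get a slight edge (UCB-style heuristic)."""
    total_obs = sum(n for _, _, n in candidates)
    def score(item):
        name, value, n = item
        bonus = math.sqrt(2 * math.log(total_obs + 1) / (n + 1))
        return value + 0.05 * bonus  # small exploration weight (assumed)
    return max(candidates, key=score)[0]
```

Given two activities with equal expected happiness, the one observed less often wins, because a new discovery about it is worth more for future suggestions.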
ActivityRecommender does have some support for Problems and Solutions, where the user can declare that Problem X might be solvable by Solution Y, and can record instances of having tried Solution Y, whether it solved Problem X, and how long it took. Users can then browse those instances later and can also request a suggestion of an activity that may be able to solve Problem X. The Problems and Solutions aspect of ActivityRecommender is still pretty new, though, so that part might still be slightly different from what a user might expect or be looking for.
Would you like to talk more about what you’re looking for in terms of ways to objectively score pain?