Feedback on our free happiness+efficiency app?

Hi everybody!

I’ve been enjoying developing and using an app for measuring+optimizing happiness+efficiency over the last ~9 years, but I’ve been having difficulty getting users to try it out and to give feedback.

Would anybody here be interested in any of:

  • Trying it out and offering feedback?
  • Suggesting how to get more feedback?

The application has a bunch of properties that I think are super interesting:

Record what you do

  • Super-convenient autocomplete: it only requires pressing a few buttons to record a new participation

Get suggestions

  • To maximize net present happiness (definition here) based on what you’ve been doing lately

Analyze your data

  • Browse reminisce-worthy past participations and any comments you entered for them

Measure efficiency

  • Via experiments
  • Mathematically unbiased estimates of the user’s efficiency at a given point in time (not even biased by user-provided difficulty estimates)

Brainstorm ideas

  • Lazily organize the ideas in the future, based on how much you like them

A bunch of other analyses

  • Soon to be released: support to identify the activity having the most total negative impact on your happiness in the previous X days

Easy to get

Does anybody here have any interest in trying it out or generally offering advice about the process of getting feedback?

Thanks!

Is it better than mysymptoms?

Hey thanks for the response!

I haven’t yet tried mysymptoms, but I’ll describe what I think some differences are. mysymptoms seems to focus more on health tracking, whereas ActivityRecommender is more optimized for general happiness and efficiency.

For example, in ActivityRecommender, happiness is input as a relative score, comparing the happiness of one participation to that of the previous one. Additionally, ActivityRecommender can offer suggestions about which activity would likely be best for the user to do right now based on their data. Also, whenever the user records a participation, ActivityRecommender will offer feedback (such as “Awesome!” or “Oops”, or one of more than 128 other possible responses) based on its expectations about that participation.

ActivityRecommender can also run experiments to measure efficiency, including providing the necessary randomness and ensuring that enough experiments overlap at any given moment. Its concept of efficiency is also combined into one value (based on what the user says is important to them) that changes over time, rather than only ever being spread over multiple metrics.

ActivityRecommender also includes support for entering one-time todos (which you can be reminded to do later), along with support for brainstorming of new todos in a lazy, asynchronous manner.

Relative scores require remembering all of the previous scores. I wish mysymptoms had some support for objective scores, especially of pain.

What kind of analysis and model do you base your suggestions on?

Hey thanks again for being interested!

The original method in ActivityRecommender for users to specify the score of a participation was to type in an absolute score from 0 to 1, but it was difficult for the user to evaluate how close to perfect (a score of 1) a participation was. For example, if I was doing homework and the homework was mandatory, then it was tempting to record that the participation deserved a perfect score of 1 because I didn’t really have much choice about having done it.

Later I changed the way that score entry works so that users specify how happy they were (per unit time) during their most recent participation compared to the previous one. That made it much easier for the user to say things like “this participation was 1% better than the previous thing I did that day” (perhaps that was eating dinner) without having to worry about hypotheticals like how much they liked a participation compared to what might have happened instead, or compared to something great that happened once, a very long time ago.

It would be easy enough to reintroduce the ability for users to directly specify the score of a participation if there’s a need for it, though. The relative ratings get converted into absolute scores that get saved into a data file. Would you elaborate?
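To make that conversion concrete, here is a minimal sketch in Python (the clamping into [0, 1] and the multiplicative interpretation of a rating are my assumptions; the app’s actual conversion may differ):

```python
# Hypothetical sketch: convert a chain of relative ratings into absolute
# scores in [0, 1]. A rating of 1.01 means "1% better than the previous
# participation". The clamping behavior is assumed, not taken from the app.
def to_absolute_scores(first_score, relative_ratings):
    scores = [first_score]
    for rating in relative_ratings:
        # Each new absolute score is the previous one scaled by the rating,
        # clamped into the valid range [0, 1].
        scores.append(min(1.0, max(0.0, scores[-1] * rating)))
    return scores

# "1% better than dinner", then "10% worse than that":
print(to_absolute_scores(0.5, [1.01, 0.9]))  # approximately [0.5, 0.505, 0.4545]
```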

There are a bunch of parts to the analysis that gets used for suggestions and feedback, but I’ll start with a high level description.

There are a few different values that ActivityRecommender models. The value that it aims to maximize is the net present value of your happiness, exactly like the net present value of a stock in the stock market (https://en.wikipedia.org/wiki/Net_present_value), although I’m using a half-life of about two years, which corresponds to a yearly rate of return of about 41%.
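As a quick sketch of that discounting (my notation, not the app’s internals), a half-life of two years means an event’s weight halves every two years, which is where the ~41% figure comes from:

```python
# Net present happiness as an exponentially discounted sum of future
# happiness, with a half-life of 2 years (matching the post above).
HALF_LIFE_YEARS = 2.0

def discount(years_in_future):
    # The weight halves every HALF_LIFE_YEARS.
    return 0.5 ** (years_in_future / HALF_LIFE_YEARS)

def net_present_happiness(events):
    """events: list of (years_in_future, happiness) pairs."""
    return sum(h * discount(t) for t, h in events)

# One year of discounting is a factor of 2**(-1/2) ~= 0.707, so undoing it
# is a "rate of return" of about 41%:
print(1 / discount(1.0) - 1)  # approximately 0.414
```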

Note that a user’s net present happiness (like the fundamental value of a stock) changes over time as new events happen to the user. Also note that our understanding of the user’s net present happiness at any given moment in the past is itself changing over time, as we discover new information in the present. As more time passes and more data exists, we gain an increasingly accurate understanding of what the user’s net present happiness was at any given point in the past, particularly because happiness far in the future is weighted exponentially less than happiness in the near future.

So, ActivityRecommender uses its best (and ever-updating) estimate of the graph of the user’s net present happiness over time and attempts to identify patterns between information about the user and what the user’s net present happiness is.

It uses several approaches to estimate the user’s net present happiness, each of which computes a mean, standard deviation and weight, which later get combined into a final estimate. Some approaches are more data-driven and have higher weight when ActivityRecommender has more data about the user, and some are more like very good guesses and have higher weight when ActivityRecommender has less data.
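A minimal sketch of one way such (mean, standard deviation, weight) triples could be merged; I’m assuming a precision-weighted average here, which may not be ActivityRecommender’s actual rule:

```python
# Combine several estimates, each a (mean, stddev, weight) triple, into one
# value. Estimates that are more certain (small stddev) and more applicable
# (large weight) contribute more. This combination rule is an assumption.
def combine(estimates):
    total_weight = 0.0
    weighted_sum = 0.0
    for mean, stddev, weight in estimates:
        w = weight / (stddev ** 2 + 1e-9)  # small epsilon avoids dividing by 0
        weighted_sum += w * mean
        total_weight += w
    return weighted_sum / total_weight

# A confident data-driven estimate (0.8 +/- 0.05) dominates a vague
# guess (0.4 +/- 0.5):
print(combine([(0.8, 0.05, 1.0), (0.4, 0.5, 1.0)]))  # close to 0.8
```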

For a user that has entered a lot of data, one of the bigger factors in ActivityRecommender’s estimate (of what the user’s net present happiness will be if it makes a certain suggestion) will simply be based on asking its numerical interpolator what the user’s net present happiness has been in the past when having received a similar suggestion under similar circumstances.

For a user that has entered less data, one of the bigger factors in ActivityRecommender’s estimate (for the user’s net present happiness after receiving a suggestion to do a given activity) will be how much the user reports enjoying the given activity.

For a user that has entered very little data, one of the bigger factors in ActivityRecommender’s estimate (for the user’s net present happiness after receiving a suggestion to do a given activity) will be how much the user enjoys doing similar activities.

For example, if the user has declared an activity called Sports, and also an activity named Soccer that inherits from Sports, then if the user often assigns a high rating to Sports and other activities under it, then before ActivityRecommender has much data, it will hypothesize that the user will often assign a high rating to Soccer too.
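A toy sketch of that inheritance idea (the default prior of 0.5 and the prior weight of 2 are arbitrary illustration values, not taken from the app):

```python
# With little data about an activity, fall back on its ancestors' ratings
# (e.g. Soccer inherits from Sports). The prior constants are assumed.
class Activity:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.ratings = []  # scores in [0, 1]

    def estimated_rating(self):
        # Blend this activity's own ratings with the parent's estimate;
        # the more ratings it has, the less the inherited prior matters.
        prior = self.parent.estimated_rating() if self.parent else 0.5
        prior_weight = 2  # assumed strength of the inherited prior
        return (sum(self.ratings) + prior * prior_weight) / \
               (len(self.ratings) + prior_weight)

sports = Activity("Sports")
sports.ratings = [0.9, 0.8, 0.85]
soccer = Activity("Soccer", parent=sports)
# With no Soccer data yet, Soccer's estimate is pulled toward Sports':
print(soccer.estimated_rating())
```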

When ActivityRecommender computes the value estimated by one of these approaches, it mostly does so by computing some numerical features and feeding them into a numerical interpolator. I think the factors that ActivityRecommender uses are:

  1. The time of day
  2. How many seconds have passed since the activity was last considered
  3. Whether the activity was done or skipped the last time it was considered
  4. Compute a graph of the user’s cumulative time spent participating in the activity, along with its least-squares regression line; the factor is the difference (the residual) between the top-right point of the graph and the value predicted by the line
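That fourth factor can be sketched like this (my reconstruction of the described computation, in plain Python):

```python
# Feature 4: fit a least-squares line to the graph of cumulative time spent
# on an activity, then take the residual at the latest (top-right) point.
# A positive residual means recent participation is above the long-run trend.
def cumulative_time_residual(timestamps, durations):
    xs = list(timestamps)
    ys = []  # cumulative time spent, one point per participation
    total = 0.0
    for duration in durations:
        total += duration
        ys.append(total)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return ys[-1] - (slope * xs[-1] + intercept)

# Three short sessions, then one long one: the last point sits above the line.
print(cumulative_time_residual([0, 1, 2, 3], [1, 1, 1, 4]))  # approximately 0.9
```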

ActivityRecommender also takes into account the probability that a user will take a suggestion, and considers time spent thinking to be time when the user’s happiness is 0.

There are also factors for efficiency that I’ll leave out for the moment until anyone mentions interest in that too.

Once ActivityRecommender has an estimate of the net present happiness for each possible suggestion, it also considers the possibility that it will learn new information as a result of making these suggestions, and will slightly favor activities having less data, because they are more likely to result in significant positive discoveries that can increase future benefit. Whichever activity is expected to maximize net present happiness, accounting for possible learning, is then suggested.
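That “slightly favor activities having less data” idea resembles an upper-confidence-bound (UCB) exploration bonus; the formula below is my assumption for illustration, not ActivityRecommender’s actual one:

```python
import math

# Add an exploration bonus that shrinks as an activity accumulates data,
# so uncertain activities are slightly favored. UCB-style formula (assumed).
def score_with_exploration(estimated_happiness, num_observations,
                           total_observations, bonus_scale=0.1):
    bonus = bonus_scale * math.sqrt(
        math.log(total_observations + 1) / (num_observations + 1))
    return estimated_happiness + bonus

# With equal happiness estimates, the less-tried activity scores higher:
well_known = score_with_exploration(0.6, num_observations=100, total_observations=120)
barely_tried = score_with_exploration(0.6, num_observations=2, total_observations=120)
print(well_known < barely_tried)  # True
```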

ActivityRecommender does have some support for Problems and Solutions, where the user can declare that Problem X might be able to be solved by Solution Y, and can record instances of having tried Solution Y, whether it solved Problem X, and how long it took. Users can then browse those instances later and can also request a suggestion of an activity that may be able to solve Problem X. The Problems and Solutions aspect of ActivityRecommender is still pretty new, though, so that part might be slightly different from what a user expects or is looking for.

Would you like to talk more about what you’re looking for in terms of ways to objectively score pain?