I lifelogged to overcome IBS/CFS and became a cyborg


***** Introduction

I’ve always pushed myself ruthlessly to achieve the mission stated in the Lord’s Prayer: “Thy Kingdom come, Thy Will be done, on Earth as it is in Heaven.” One such action was a calculated risk to take Accutane in college (around 2004) to improve my charisma by eliminating my moderate acne. That was a mistake. I gradually lost the ability to eat most foods over the following years, and developed severe fatigue, sometimes to the point of being unable to stand without dizziness and tunnel vision.

I was already interested in personal productivity, but the brain fog and intermittent energy forced me to find something that worked very well. Thus (after many failures) Textmind was born.

***** Lifelog in plain text first

The best medium for lifelogging is plain text. Text is cheap; binaries are expensive. The total cost of ownership of text is much cheaper than that of binary files such as audio and video.

Quantitative information is useless without a qualitative context. Math is true but not real; there are no numbers in nature.

Text is qualitative; data is quantitative. Therefore lifelogging precedes self-quantification. Without Qualified Self, Quantified Self is an overloaded affordance. The qualitative can embrace the quantitative, but not the reverse. Many metrics can be extracted from a qualitative lifelog.

***** Textmind | purpose | scale

Emacs Treefactor makes a Textmind possible. Textmind is a combination lifelog and GTD system. My personal Textmind holds 1/3 of a gigabyte of plain text. I add anywhere from 500 to 20k words per day to it; 2.5k is a typical number. It is processed to reflect the structure of my mind. It solves complex problems through a simple algorithm. I think with my fingers to spare my brain.

***** Textmind | publications | link

I’ll teach others to use Textmind. So far I’ve written the Treefactor manual, and started recording a silent Textmind demo focused on roguelikes. I plan to make a Textmind documentation website next. My published work is summarized here:

***** Textmind | demo | videos | silent first | Alien

A silent video demo isn’t appealing, but it’s a first step. I need a large public Textmind to show how to manage one at realistic scale. Rather than audio commentating now, it’s better that I type the commentary so that it’s available in the git repo. Later I’ll produce teaching videos with audio, using the demo git repo and silent footage to illustrate each point.

A few years ago, with a much earlier version of Textmind, I tried to do all the above steps simultaneously. I used the movie Alien as the subject, and took Ripley’s PoV. I demonstrated Textmind’s ability to improve Ripley’s decisions while simultaneously audio commentating on how to use Textmind. As a teaching tool, it was a failure. However, I leave the video up as a tribute to the movie Alien.


Your posts sound insane.

But in a good way…

For comparison: see these posts about Mark Carranza’s memory experiment for another insane project that has inspired a lot of productive thought and discussion:


https://www.flickriver.com/photos/67339053@N00/5777118545/

When he was asked why he would do such an insane project, Mark replied that he found it interesting that in the United States it is considered non-insane to watch hours of TV every day, but insane to spend a couple of hours writing down thoughts and ideas you want to remember. That made everybody think!


I am an outlier on multiple dimensions; however, Textmind is a practical project. While the Alien demo failed as a teaching tool, it suffices to prove that. There were multiple junctures in the plot of Alien at which Textmind-aided analysis would’ve improved outcomes for the crew of the ship Nostromo. The time spent performing the analysis is a pittance compared to the benefit.

I’m familiar with the failure modes of the solo productivity guru/crank. If the tool’s practical benefits don’t justify the administrative overhead required, then it’s broken. GTD is a paper-based organization system that works, but it doesn’t scale to digital, because it chokes on the volume. Textmind is adapted to the digital medium, in the same way GTD is adapted to paper. If I did not have Textmind, then I would use paper GTD as my core system, with digital treated as a supplementary and unreliable medium.

The social memex is an interesting tool that takes tags about as far as they can go. I feel it’s the wrong foundation for the “house of mind”, however. My foundation is prose, the most expressive medium, the natural human language.

Heading titles, search and hierarchical refiling obviate the need for tags in Textmind, except for the GTD tags. I do see a limited use for non-GTD tags, which I may implement in the future. The key is to manage one’s tag collection, to prevent it from becoming unwieldy. One should maintain a generated list of tags with their expanded meanings, and only tag high-value low-dynamism info. Checklists would be a good example.

I am an outlier myself in at least this regard: I have an unusually high interest in ideas that people have developed to a high degree to serve their own needs. It’s rare that these ideas are highly relevant to anybody other than the person who is developing them, but when they are, they can provide a lot of important and original value that isn’t available for anybody else. I’ll watch your demo.


Thank you, but yikes! I made the Alien demo in 2015, and I definitely wasn’t thinking of putting it in front of Gary Wolf at the time. I’m working on a better demo now, but only building up raw footage and working on getting Treefactor into MELPA at the moment. I haven’t even created the Treefactor documentation site yet.

Vimeo has a download button, so you can speed up watching it and skip around without buffering.

Correction: I recorded the Alien demo in 2017-01. Anyway, it was a relatively long time ago, given that I haven’t been an Emacs user for many years, much less a developer.

I’m also curious about how you’re crunching a qualitative text-based log to engage in something quantitative (if I’ve read this correctly).

I’m highly unlikely to use Emacs, and I think that’s probably broadly true. But maybe there are lessons here (and pseudocode?) that could apply to people logging in other ways. Is it possible for you to provide examples of how you use the process in your own life?

I have in the past, but don’t currently. It generally isn’t necessary. If I need to answer a specific question, I can simply review the relevant time period, note the relevant information, view it in condensed form, make a hypothesis, design the experiment, then execute. It’s easier to go back once one knows what one is looking for than to try to predict what one will someday need.

Qualitative information includes a lot of rich context that is stripped from quantitative data, which can be very helpful in hinting at answers.

Before I’d start stripping out quantitative data from my logs, I’d want to do pairwise nested qualitative reviews. E.g. compare Monday and Tuesday. Then compare Wednesday and Thursday. Then compare Monday-Tuesday to Wednesday-Thursday. Etc.
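The nested pairwise review above follows a simple schedule: compare adjacent units, merge each pair, then compare the merged pairs, and so on. A minimal Python sketch of generating that comparison schedule (function and variable names are my own, for illustration):

```python
def review_schedule(units):
    """Return the nested pairwise comparison schedule for a list of
    log units (e.g. day labels): each level compares adjacent pairs,
    then the merged pairs are compared at the next level up."""
    schedule = []
    level = list(units)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            schedule.append((a, b))
            nxt.append(a + "+" + b)
        if len(level) % 2:  # an odd leftover unit carries up uncompared
            nxt.append(level[-1])
        level = nxt
    return schedule

days = ["Mon", "Tue", "Wed", "Thu"]
print(review_schedule(days))
# [('Mon', 'Tue'), ('Wed', 'Thu'), ('Mon+Tue', 'Wed+Thu')]
```

The same schedule scales to weeks or months by changing the input labels.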

I’ve done that before, and it gives one a pretty good idea of what’s going on. At that point quantitative info becomes valuable to insert some objectivity, to ensure one isn’t fooling oneself or missing subtle trends.

One good way to collect quantitative data is to treat the qualitative review as a data-cleansing opportunity, to extract some reliable numbers from the day’s record, for whatever you currently care about.

Another way is to search through the raw log text for standard keywords, if one is consistent.
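As a rough sketch of the keyword approach, here is how one might count standard keywords in a raw plain-text log. The keyword list is hypothetical; substitute whatever terms you log consistently:

```python
import re
from collections import Counter

# Hypothetical standard keywords; use whatever terms you log consistently.
KEYWORDS = ["headache", "fatigue", "brainfog"]

def count_keywords(log_text: str) -> Counter:
    """Count whole-word occurrences of each keyword, case-insensitively."""
    counts = Counter()
    lowered = log_text.lower()
    for kw in KEYWORDS:
        counts[kw] = len(re.findall(r"\b" + re.escape(kw) + r"\b", lowered))
    return counts

sample = "Woke with a headache. Fatigue all morning. Headache gone by noon."
print(count_keywords(sample))
```

Running the same count per day or per week turns a consistent logging habit into a crude but honest time series.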

Metadata such as keystrokes recorded by selfspy can also be relevant, although fluctuations will occur due to circumstances such as travel.

It’s possible to construct metrics on time usage from the timestamped logs, although cleansing that data is best done during a qualitative review.
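A minimal sketch of one such time-usage metric, assuming org-mode-style timestamps of the form `[YYYY-MM-DD Day HH:MM]` and a single day's log (cross-midnight gaps would need date handling):

```python
import re
from datetime import datetime

# Matches org-mode-style timestamps like [2019-12-06 Fri 15:56]
TIMESTAMP_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2}) \w{3} (\d{2}:\d{2})\]")

def block_durations(log_text: str) -> list:
    """Return minutes elapsed between consecutive timestamps in a log."""
    stamps = [datetime.strptime(f"{d} {t}", "%Y-%m-%d %H:%M")
              for d, t in TIMESTAMP_RE.findall(log_text)]
    return [int((b - a).total_seconds() // 60)
            for a, b in zip(stamps, stamps[1:])]

log = """[2019-12-06 Fri 09:00] start email
[2019-12-06 Fri 09:45] writing
[2019-12-06 Fri 11:15] break"""
print(block_durations(log))  # [45, 90]
```

As noted above, the output is only as good as the data, so a qualitative review pass to clean up stray or missing timestamps comes first.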

It’s helpful to have e.g. a consistent pain scale one uses to rate experiences, with a searchable keyword.

Some quantitative data is naturally isolated during my “proc sprinted” processing cycle. All financial transactions are packaged into headings and filed under “by-Time”, since time is money.

If one really wanted to capture streaming quantitative data, I’d recommend the following:
Rate every time block on the desired metrics. Ratings should have defined meanings to reduce subjective drift. Put them in a special format after the timestamp, like so:
[2019-12-06 Fri 15:56]{(energy, 3)(mood, 3)}
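A line in that format could be parsed like so; this is a sketch, and the regex assumes single-word metric names and integer ratings:

```python
import re

# Matches rating pairs like (energy, 3) inside the {...} suffix
RATING_RE = re.compile(r"\((\w+), (\d+)\)")

def parse_ratings(line: str) -> dict:
    """Extract {metric: rating} pairs from a '{(metric, n)...}' suffix."""
    return {name: int(val) for name, val in RATING_RE.findall(line)}

line = "[2019-12-06 Fri 15:56]{(energy, 3)(mood, 3)}"
print(parse_ratings(line))  # {'energy': 3, 'mood': 3}
```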

I’m not sure whether doing this is worthwhile or too distracting. My focus is entirely on releasing Textmind so that others can enjoy the cognitive benefits. After that’s done, I plan to set up Dbmind, which will be quantified self for the purpose of further improving my health. Db stands for “database”. I feel that’s the right medium for storing and manipulating quantitative data.

Because Dbmind is backed by Textmind, Dbmind can specialize on a supplementary role without worrying overmuch about intractable issues such as data gaps and hygiene.

I published barebones documentation for Textmind. It’s the first time the system has been adequately described.

https://cyberthal-docs.nfshost.com/textmind/