Self-tracking and AI ethics

Hello! I’m a Science and Technology Studies student at the Technical University of Munich researching AI ethics and the human digital twin (HDT). As I learned about the QS movement, I became intrigued by how AI ethics intersects with self-tracking practices, which is why I’m here! From what I understand, self-tracking practices and HDTs have several characteristics in common (healthcare/wellbeing goals, tracking bodily and cognitive data, concerns about privacy, etc.), so I think your contributions to the discussion would be invaluable.
I’d like to know whether your approach to self-tracking has changed over time with the advent of new tools and technologies (such as the advancement of generative AI). What are your main concerns (if any) related to this topic? What do you generally think about the ethical issues associated with AI or self-tracking practices? Is there something you are particularly worried about, or wish for, in the future of AI and self-tracking? What could this focus on big data mean for healthcare and society? How do you feel about privacy concerns?

These are just some questions I’m interested in, but any input or idea related to the topic is highly welcome! I’m also curious to get to know you better, and I’ve been reading various topics around the forum (as for me, I would actually like to start some self-tracking :smiley: ). I understand you might have already discussed these questions in other threads; if that’s the case and you want to link them here, that would be great!

Many thanks to anyone who decides to answer :slight_smile:

Hi fellow German (resident)! @Loirad
I can tell you how it all started for me. I decided to get stronger. Barbell training convinced me because I could train relatively everyday movements like the deadlift, squat, or overhead press, and I could measure my strength gains by the number of kilos I could lift. That makes lifts comparable over time.
It is the same with the manual preparation of coffee. Without a scale and a number for the grind size, it is very difficult to repeat a good cup or improve on a bad one.
So, my general point is: objective measures are important if you want to know where you are in comparison to a goal.
Then I got into longevity (I am 58); I would like to outlive my wife, who is 3 years my junior. The use of medicine then does not end with curing a disease; it becomes important to help me not get a disease in the first place. The question regarding measurements changes from “am I normal?” to “am I optimal?”. For example, I don’t want normal weight, normal blood pressure, normal fitness; I want optimal weight, low blood pressure, maximum fitness.
And then monitoring my state across a host of different metrics becomes interesting, and here technology can help. Fitness bands and watches monitor heart rate and derive a lot of parameters from it. I think there is a lot of human intelligence involved: which parameters you want to monitor depends on the scientific findings, and heart rate is certainly one of the important ones.
Continuous glucose monitors (CGMs) are another technology that tracks an important parameter. I just learned that insulin resistance is associated with increased cancer risk, and from my daily glucose profile I can see whether I am insulin resistant or not.
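To make the idea of reading things off a daily glucose profile concrete, here is a toy sketch of the kind of summary one might compute. The readings, the 140 mg/dL threshold, and the metrics are made-up illustrations for this post, not medical guidance:

```python
# Toy sketch: summarize a day of CGM readings (mg/dL) by mean glucose
# and the fraction of time spent above a high threshold.
# All numbers here are fabricated for illustration, not medical advice.

def glucose_summary(readings, high=140):
    above = sum(1 for g in readings if g > high)
    return {
        "mean": sum(readings) / len(readings),
        "pct_above": 100 * above / len(readings),
    }

day = [92, 95, 110, 150, 165, 130, 105, 98]  # fabricated readings
print(glucose_summary(day))  # e.g. {'mean': 118.125, 'pct_above': 25.0}
```

A real CGM app does far more (time-in-range bands, post-meal response, etc.), but even a summary this simple already turns a raw trace into something you can compare day to day.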
What does all that have to do with AI? I imagine that AI might help optimize algorithms to reliably predict health outcomes from sports-watch and similar data. It might automatically find a worsening pattern in my data. So it could help me stay in optimal health.
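As a toy sketch of what “finding a worsening pattern” could mean in the simplest case: flag days where a metric such as resting heart rate drifts well above its recent baseline. The data, the 28-day window, and the threshold below are illustrative assumptions, not any real product’s algorithm:

```python
# Toy anomaly flagging: mark days whose resting heart rate is far above
# the rolling baseline of the preceding `window` days.
# Window size, threshold, and data are made up for illustration.
from statistics import mean, stdev

def flag_worsening(resting_hr, window=28, threshold=2.0):
    """Return indices of days whose value exceeds the mean of the
    preceding `window` days by more than `threshold` standard deviations."""
    flags = []
    for i in range(window, len(resting_hr)):
        baseline = resting_hr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (resting_hr[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Simulated data: stable baseline around 55 bpm, then a sustained rise.
hr = [55, 54, 56, 55, 53, 55, 54] * 4 + [63, 65, 64]
print(flag_worsening(hr))  # the three elevated days are flagged
```

The value an AI system could add beyond a crude rule like this is learning which combinations of parameters actually predict health outcomes, rather than relying on a single hand-picked threshold.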
If I let a for-profit company handle my data, obvious concerns arise: Can I trust their signals? Does the firm want to sell a health course or something of that sort, and does this distort their signals? Or do they actually produce the best calculation possible? I am not particularly worried about abuse, but it is definitely an issue.
These are my thoughts for now.

I’m doing some experiments with @tblomseth and @jakobeglarsen to accelerate some of the petty and irritating process work involved in reasoning about our own observational records using widely accessible AI tools; for instance, cleaning data and generating code for visualizing data. The AI element is not very profound from a CS perspective; we are just using available tools. But the process acceleration is very promising.
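For readers wondering what that “petty process work” looks like, here is a minimal sketch of a typical cleaning step before visualization. The field names and plausibility bounds are hypothetical examples, not the actual pipeline from these experiments:

```python
# Minimal sketch of cleaning a self-tracking log before plotting:
# drop duplicate timestamps, missing values, and implausible readings.
# Field names and bounds are hypothetical, chosen for illustration.

def clean_log(rows, lo=30, hi=220):
    """Keep (timestamp, heart_rate) rows that are unique, present,
    and within a plausible physiological range."""
    seen = set()
    cleaned = []
    for ts, hr in rows:
        if ts in seen or hr is None or not (lo <= hr <= hi):
            continue  # duplicate, missing, or out-of-range reading
        seen.add(ts)
        cleaned.append((ts, hr))
    return cleaned

raw = [("08:00", 62), ("08:00", 62), ("08:05", None), ("08:10", 250), ("08:15", 64)]
print(clean_log(raw))  # only the two valid, unique rows remain
```

Tedious by hand across dozens of export formats, trivial to have an AI tool draft; that is the acceleration being described.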