What Next For Quantified Self?

Many of you have asked when the next conference is going to be. I can say with confidence that there will be a next conference, but we don’t have a date yet. One of the reasons we don’t have a date is that we’ve been thinking carefully about the last 11 years of work supporting the QS community, how to make sure it can continue, and what kind of work is most needed next. Our mission, as always, is to help people learn about themselves with their own data. Our method, as always, is to help people share their first person reports of personal discovery: “What did you do? How did you do it? What did you learn?”

Our conferences don’t make money. Why not charge more? We’re not unaware that many conferences involving technology charge $1000 or more, and have corporate sponsors paying for exposure and marketing. But this obvious solution is inadequate in our case, because we’re not a business oriented conference, and many of the people we learn the most from are students, artists, independent technologists, designers, and scholars who pay for registration out of their own pocket. We want to continue to do it this way, because that’s where we think the most learning is.

Typically, we’ve subsidized the conferences through other program work making tools and organizing research. This work has been supported by grants and sponsorships. We’re grateful to our sponsors, especially to the Robert Wood Johnson Foundation, who provided us with crucial early support. But the Quantified Self community has never been a great fit with the institutions that are focused on traditional clinical and public health discovery. We’re focused on supporting individuals learning about themselves, even if what they learn only applies to themselves. For us, the value of the community is realized not only in the specific lesson that one person takes from their own data, but also in the aid and support all of us get from one another. We’re interested in the ecosystem we can see forming around the Quantified Self community. How do we support that?

Right after our last conference, we decided that this was the time to act. We’ve founded our own non-profit organization, called Article 27, with the mission of supporting personal discovery through everyday science. And we’ve marked out a big goal: 10 million discoveries in 20 years.

Wait, what?

How do we propose to do that?

At the link you’ll see a draft of our fundraising pitch for Article 27. I’m editing it every day, based on feedback from others in the Quantified Self community. Will you add your comments? This is an open process, and I’m glad to respond here as well. We are just starting this phase of the work, but we aren’t starting from scratch. Tell us what we have right, what we have wrong, and/or what we’ve simply missed.

And, sooner or later, we promise you a conference date.

9 Likes

The argument is :100: in my not-so-humble opinion!

But … it feels very wordy or clunky when I try to read it out loud. Admittedly, I have my own style when I’m speaking – how does it feel when you read it out loud? I wonder about editing the script to be more concise, or more rhythmic.

Yes, totally agree. It will get more concise. Happy to take recommendations for cuts.

Exciting!

Could be worth being explicit about the different meanings of “discover” in this context:

  1. to learn something that is already well known (as I do with almost every QS project),
  2. figure out which solution works for me (often), and
  3. actually discover something new (maybe once in a lifetime).

What is holding people back from doing the above? I like to think it’s the tools, but maybe not.

I’d love to see an ecosystem that doesn’t rely solely on volunteers and grants in the long run (>90% of contributions to the “peer production” poster child Linux now come from companies) :money_mouth_face:

3 Likes

Thanks Eric - I think that’s a worthy goal: what would it take to make the practice of learning about ourselves using our own empirical observations common enough and accessible enough that companies would want to build things on top of the “open stack for everyday science” and have an interest in supporting the open stack?

So… I wanted to try writing my own version of the script (and I started doing this), but it’s too hard – sorry. Given the feedback, I wanted to try reordering slides and changing the text just slightly to see how it flowed for me… but I can’t even figure out how to download the images of the slides (i.e. to replace text).

Some things I think a commercial stakeholder would want/need (additions welcome?)…

  • trusted mission and governance (stability and not a COI with the company’s goals)
  • minimal restrictions (e.g. not requiring “no patents”, or “open source”)
  • stability & market size (trust it won’t disappear, network effects)
  • low friction for customers (don’t lose customers due to ecosystem requirements)
  • low friction for developers (easy to create a product with the ecosystem)

(It’s not always possible to get all of these.)

@ejain in terms of contributions & sustainability, I think each of these is its own beast. Wikipedia gets a lot of money, and it’s very different from Linux Foundation; a lot more reliant on “community” dynamics in contribution and production – and they get a ton of revenue from their donation campaigns. (I’m not saying that’s the answer, either… rather, I’m saying: if they can be that different, maybe there’s no one true answer.)

Governance is not quite the same as sustainability/contributions, but should probably be related to it. Ostrom’s rules for commons include: “ensure that those affected by the rules can participate in modifying the rules”.

1 Like

Disclaimer: I’m new here, and I often feel that I’m missing a lot of the “back-story” of QS. But I do love the idea of QS, so please take this all with a grain of salt… and I’m not a fundraiser, but I work with a lot of non-profits who do a lot of fundraising. And I’ve done work for the Association of Fundraising Professionals. I also helped develop an earlier version of sofii.org (a showcase of fundraising innovation and inspiration), so I’m hoping I’ve drunk enough of the Kool-Aid.

I’m not sure if this is a fundraising pitch for conference funding or for general funding of the non-profit? Gary starts the post talking about the conference, but the linked document goes into a lot of other details/info. But either way, here are some thoughts…

If the goal is to get people/institutions to support the conference/non-profit, I suggest you quickly spell out the Problem, how they can help, and what they’ll get back in return for their investment. And you need to do it concisely if possible to show that you respect the donor’s time. You all can refine this much better than I can but here’s how I see it:

Problem: We know that if more people start doing everyday science, their lives will improve and they’ll contribute what they learned to an ever-growing community with exponential results. We’ve built tools and resources and fostered a community of awesome self-investigators who’ve done and shared incredible things. But now we need to scale so we can put a dent in the world of citizen science.

How you can help: (This should be tailored to the target donor but basically, you want their money, or access to their network, or their physical resources like a conference hall or office space, or their expertise and time, if you’re pitching to a marketing agency for pro-bono publicity, etc.)

What you’ll receive in return: Again, I don’t know enough to tailor this but I think the biggest asset that QS has is its members - a motley group of free-thinkers and smart, inventive and influential citizens who aren’t satisfied with how things are and want to make their world a better place while helping to raise the bar for many others who are wrestling with the same questions and lack of answers. There are a lot of brands/institutions/philanthropists who would love access to that group of people, or to be associated with them. There are other additional benefits but I hope you get the idea.

As for doing it concisely, I like the idea of creating an “elevator pitch”, even if it’s just as an exercise to help bring clarity to what you’re actually asking… you have 30 seconds to convince a donor that they should give you a meeting… just a thought.

I hope this helps spark some ideas,
Dean

1 Like

Hi Gary,
Very interesting! Reading your pitch through the lens of the platform business model you are proposing, I observe the following:

  • There are two sides to this platform. On one side are the everyday “scientists” in search of discoveries. On the other, clinical researchers looking for alternative datasets to anonymized randomized trials.

  • We know platforms sustain and grow themselves through network effects. Here you are suggesting the network effect is based on “each person making it easier for the next”. In terms of network effects, I take this as: the more discoveries are made, the more people join the platform in search of their own discoveries; and with more discoveries, the more clinical researchers come looking for datasets – and round and round it grows.

  • “10 million discoveries in 20 years” - I asked myself how many everyday “scientists” need to be active on the platform to achieve this goal. If by “discovery” you mean, as @ejain suggests in his point #3, to “actually discover something new (maybe once in a lifetime)”, then you’ll need 10 million users over 20 years! I agree with Eric that discovery covers any one of those three points. In my case of casual experimentation, it is about five discoveries in three years. Of course you’ll have a range of discoveries per user, but it’s safe to say we’re on the order of 300K - 1M+ active everyday scientists over the 20 years to meet the goal.

  • You are suggesting the platform’s value proposition is based on innovative tools/education/community to enable entirely new experiences in the everyday scientist’s journey to self-discovery. In this regard, I would highlight examples of when this journey goes awry, and how the platform will enable users to overcome the challenges and reach the final phase of discovery. You may also want to highlight innovative experiences for the clinical researcher. For example, the clinical researcher who doesn’t find the empirical discoveries she is looking for, but can use the platform’s levers to incentivize new discoveries in this area.

  • In terms of the platform’s economics, I think you should mention the marginal costs in acquiring the platform’s participants i.e. everyday scientists as well as clinical researchers, and how this tends towards zero-marginal costs - which would be the platform’s sustainability element along with the network effects.
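Sergio’s back-of-the-envelope estimate above can be sanity-checked with a quick sketch. The per-user discovery rates below are illustrative assumptions, not measured data (his own anecdote of five discoveries in three years works out to roughly 1.7 per year):

```python
# Rough check: how many sustained active "everyday scientists" are
# needed to reach 10 million discoveries in 20 years, under a range
# of assumed per-user discovery rates? (Rates are illustrative.)
TARGET_DISCOVERIES = 10_000_000
YEARS = 20

for rate in (0.5, 1.0, 1.7):  # assumed discoveries per user per year
    users = TARGET_DISCOVERIES / (rate * YEARS)
    print(f"{rate} discoveries/user/year -> ~{users:,.0f} active users")
```

At Sergio’s own rate the estimate lands near the 300K end of his range; slower rates push it toward a million, which matches his “300K - 1M+” bracket.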

Sergio

1 Like

Very good and interesting feedback. Thank you @madprime, @dreeds, and @Sergio.

One thing that this exercise is showing very clearly is that the current slides point toward a technology platform/product - and that’s not what we meant to pitch. That’s an issue with the slides, not the commenters, and it’s very useful for us to have this exposed and to adjust course.

We’re definitely NOT proposing that creating a technology platform is the work we need to do. We are working much more directly to grow the number of personal discoveries using everyday science.

Reading the pitch now, I think we’re conveying that we want to build a GitHub for Quantified Self. But GitHub already exists; the lack of a GitHub is not what’s in our way. Our work is at the level of:

  • supporting real people who face barriers in their projects of everyday science, by growing our community and sharing their work
  • educating and “training the trainers” in health and allied professions so they can support everyday science
  • creating tools that address some of the most common and pressing points of failure, such as lack of instrumentation and data access
  • building organizational capacity

We’ll need funders to get this, or they may expect a technology platform and be surprised when they don’t get it.

I think this work is peer production (it certainly isn’t an expert-based system), but that doesn’t come through properly yet in the slides.

@dreeds A program description would help clear up the confusion, but it moves in the opposite direction from the elevator pitch - into details rather than away from them. Probably work in both directions is needed.

@Sergio: This paragraph in your comments really helps clarify:

“10 million discoveries in 20 years” - I asked myself how many everyday “scientists” need to be active on the platform to achieve this goal. If by “discovery” you mean, as @ejain suggests in his point #3, to “actually discover something new (maybe once in a lifetime)”, then you’ll need 10 million users over 20 years! I agree with Eric that discovery covers any one of those three points. In my case of casual experimentation, it is about five discoveries in three years. Of course you’ll have a range of discoveries per user, but it’s safe to say we’re on the order of 300K - 1M+ active everyday scientists over the 20 years to meet the goal.

I think you and @ejain are very much right - the goal is to grow the # of people who make their own discoveries, based on our knowledge and experience of how challenging this is today. We can currently help about 100 people make discoveries every year. Some multiple of this is also occurring “naturally” (that is, people doing everyday science without our help, but we’d like to document their work and make it easier to share). Can we get to 200 next year, and 400 the year after that? This seems realistic, and gives us time to evolve the governance and institutional capacity we need.

(Perhaps one of our challenges is that this approach - building and supporting a community and helping scale the tools and methods so that all can benefit - isn’t as zeitgeisty as a technology/platform effort; but if we were doing what everybody else is doing, then why do it?)

1 Like

I’m afraid I don’t have time to write my comments succinctly enough to fit in this forum thread, but let me throw out a few thoughts.

You’ve identified a core problem: how most health-related research today is driven top-down, by a priesthood of “experts”, in contrast to the dynamism of other industries, where technology is moving us to a software-driven, bottom-up world that is participatory and personalized.

The Article 27 solution, if I’m interpreting you correctly, is to develop a set of easy-to-use templates that can kickstart a motivated community that can scale to millions of individual experiments. To me, this sounds like https://www.instructables.com/, part of the bigger “Maker” movement. And maybe the Maker movement is an apt analogy in several senses: what started as a non-profit “movement” to encourage more individual involvement in hardware ended up stalling (Make Magazine stopped publishing in June) when the bigger electronics industry co-opted much of the reason people were doing their own projects in the first place. The Maker movement isn’t dead, any more than open source is, but most of the serious work now happens in for-profit companies (Instructables is now part of Autodesk).

The Instructables/Article27 idea reminds me of what I call the “Write More Cookbooks” model: we think healthcare today is like a big Mess Hall, where everybody eats the same dull food prepared by chefs for the masses, and we think that cookbooks will kickstart an infrastructure of restaurants, cafes, grocery stores, and Williams-Sonoma. But the analogy doesn’t hold up as well as we’d think; healthcare is already full of individual, personalized innovation (the healthcare equivalent of grocery stores already exists for those motivated to try) — but it requires effort and discipline, just like home cooking requires more effort than stopping at the mess hall.

So how can Article 27 help? One thought is to focus less on experiments and more on hypotheses: it’s hard to do an actual experiment, but most people have smart hypotheses — they just need to get written down (“registered”). (I wrote some ideas about this a couple of years ago.)

Another idea is to focus on standards and conventions: somebody should work to make data comparable across experiments. (Imagine trying to do science if there were no accurate time-keeping, for example.) There are a few efforts in the Mess Hall / healthcare world to promote interoperability and standards (e.g. FHIR, CARIN, etc.) — I wish somebody would help drive that from the personal/everyday science perspective.

You could also work on something like a certification process, not so much for the people doing the experiments, but to kickstart an industry of people who want to help others do experiments. Think registered dietitian. Giving an air of professionalism — something to put on a resume — helps inspire some people to focus on this more than if they were just hobbyists.

Maybe I’ll try to write up the rest of my thoughts when I get more time, but meanwhile kudos for making this effort: the world definitely needs what you’re proposing — and you with the QS community are in the best position to do this.

5 Likes

It’s nice seeing many of the ideas that I have around QS written down and articulated so well.

The pitch explains very well the method for scaling the sharing of discoveries made from everyday science. But more emphasis should be placed on the how of making discoveries from one’s observations and on the value of increasing self-knowledge. There’s a glancing reference to this in that methods will be shared, so it’s assumed that if you adopt these methods, you, too, will learn from your observations.

But people collecting data through their tools and not knowing what to do with it is a central problem. It’s crucial to establish that we have an answer to that, and that what we are scaling is solid and impactful on the individual level.

The QS community is a proof-of-concept that people can make these discoveries. But it’s a mistake to make it seem that we have it all figured out. There are people in the community who stopped all tracking activity because they weren’t learning from their data (or perceived that they weren’t learning).

What are the methods and principles of everyday science? How does it differ (and need to differ) from clinical research? The message might be that everyday science isn’t randomized controlled experimentation applied to the individual. The methods that are most useful in generating self-knowledge may be antithetical to the best practices of clinical research.

People using empirical observations to understand themselves and their environment is not new, but it has never been valued and, as such, a vocabulary and articulation of principles have not been developed. Part of the pitch may be that we will develop the process of how to help people learn from their observations. We’re going to identify, codify, and organize these methods so it’s easier for people to learn from one another’s experiences and apply them to their own lives.

I don’t think that ten million discoveries can happen if we don’t establish this new vocabulary and way to share not only the story, but the methods of these discoveries in a way that they can be abstracted from their particular project and applied to another.

For me, as an individual, it needs to be clear what the methods for discovery are and how I can use them. These methods need to have names so that they can be easily discussed and applied. A weakness of the Show&Tell talks is that the method is often not laid out clearly enough so that it can be applied by another person.

Everyday science is a concept that needs to exist because there is something of value that the traditional gatekeepers either missed or dismissed. We will take a leading role in recognizing and communicating that ordinary people can learn valuable things by applying empirical observation to their lives. It may not be the kind of value that appeals to a research journal, but it’s incredibly valuable to the individual and the people around them, and it demands to be understood and cultivated.

To make this idea come home, the pitch may need a concrete example of what everyday science looks like, one that demonstrates the value that increasing one’s self-knowledge can have in bettering one’s experience of life (it could be drawn from the community). With that example firmly established, you give the person being pitched something solid to imagine multiplied by 10 million.

2 Likes

Gary, great job putting this together. Great feedback so far, and most of mine has already been covered by others.

Your “train the trainers” line really resonated - I feel this could be played up much more. I realize this would require a ton of work, but what if there were a QS-branded paid certification/course that taught the basics of experimental design, tools, data collection, analysis, case studies, etc.? Then these “certified” folks (organizational thought leaders, doctors/clinicians, etc.) can bring these concepts to their clients. While the average person isn’t curious enough to put in the work themselves, they will gladly collect the data and pay someone else for interpretation (sleep, diet, etc.). This would greatly expand education/reach.

In terms of the pitch deck, you may want to clearly lay out how much you are raising/will need to support these efforts over the coming years, where those funds are coming from (corporate partners, public donations, etc.), and how those funds will be used.

:100:

2 Likes

These are excellent and useful comments. @sprague you are asking: what is most useful to share? I think the answer to this question isn’t entirely obvious in advance, but at the same time we’re not starting from scratch. Doing the conferences has kept us very close to people who are actively doing projects, and the barriers they face are at multiple levels: instrumentation, analysis, design of the project, domain knowledge. Amazingly, within the QS community there is (sometimes) knowledge at all these levels to help get people onward toward their discovery, but a lot of serendipity is required.

For an example, see the project I’m currently trying to make progress on. It seems logical that I ought to be able to measure my tremor using a simple method, and in fact a great suggestion in the forum pointed me to a free app that did exactly what I needed; then I hit a barrier around analyzing the data. More suggestions followed, and they are very plausible, but they require familiarity with Matlab and/or Python. I think with more time these suggestions will evolve into an approach that I can manage, and I predict I’m going to learn something from my project - but I have, let’s say, “above average” access to community expertise.

One of my goals in developing the pitch in such close consultation with people who have a history and a stake in QS is to remain true to what we already know people require, rather than to jump into technology solutions. That’s our greatest asset: our experience doing this. Figuring out what can be “templated” in some way for sharing is part of our collective job. To follow Richard’s line of thinking: recipes, standards, and certification are certainly parts of the kit we could deploy, but which parts are most crucial, and in precisely what sequence to work on them, is part of what we’re figuring out.
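As a concrete illustration of the analysis barrier described above: extracting a dominant tremor frequency from exported accelerometer samples is only a few lines of NumPy, but you have to know which lines. This is a hypothetical sketch on synthetic data standing in for an app export (the 50 Hz sampling rate and 6 Hz tremor are made-up values for the example, not from the actual project):

```python
import numpy as np

# Synthetic stand-in for exported accelerometer data: a 6 Hz "tremor"
# plus noise, sampled at 50 Hz for 10 seconds. A real analysis would
# load the app's CSV export here instead.
rng = np.random.default_rng(0)
fs = 50                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)

# FFT-based amplitude spectrum over the positive frequencies only.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

dominant = freqs[int(np.argmax(spectrum))]
print(f"Dominant frequency: {dominant:.1f} Hz")  # ~6 Hz for this synthetic signal
```

This is exactly the kind of small, shareable “template” the post is pointing at: the barrier isn’t the math, it’s knowing that a few library calls exist and how to wire them together.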

@Steven_Jonas says:

This is so well stated, and seems of central importance to conveying the essence of our program. If our pitch is understood as delivering clinical knowledge (primarily) or supporting individuals to become “mini-clinical researchers”, then we’ve gotten onto the wrong track.

Just a very brief comment from me: Thinking about how to ensure that potential funders realise early on that we’re not suggesting a technology platform… Traditionally a place for creating, supporting and disseminating knowledge can be called an institute/university/academy. Maybe there’s a suitable way to phrase that?

1 Like

I love QuantifiedBob’s suggestion of a certification/course on basic research concepts. Gary - this also came up during our CSA webinar. Bob T - do you think there is a demand for certification and a train the trainer model? DReed’s format for a concise pitch is very useful. I have trouble with the 10 million discoveries in 20 years - how would that be measured? A few use cases would be very important to include in this pitch - real QS stories with images to make the story real and personal. Thanks!

1 Like

Among the threads of feedback here are specific suggestions about the program work Article 27 could and should do to support people making discoveries. I want to capture a couple of points in a summary, with some comment. It’s still early in the discussion, and these aren’t “the answers” but there’s so much of value in these suggestions that they deserve an efficient recap.

10 Million Discoveries: How Will We Know?

Camille asks how we can measure our success, and this is a good point: it’s not just that we want 10 million discoveries, we want 10 million discoveries shared. In the academic and medical health discovery system, this occurs through publication. We also have a form of publication: the Show&Tell talk. The Show&Tell talk has some key virtues: it is a first person account that focuses directly on what’s been learned by the individual, answering the three questions: What did you do? How did you do it? What did you learn? If the discoveries we facilitate are shared in a format that conforms to this template, we can count them.

However, the specific Show&Tell format we currently use has some features that prevent it from scaling: the talks are given at live events, documentation is sparse except at the international conferences, and the people who do the projects don’t have an easy way to update them publicly (after they share them) or to get help along the way (before they share them). We need a “unit of production” that scales more easily than the live show&tell talk given at a meetup.

As we work on developing this, we have an advantage: the non-scalable, handcrafted version is already working. And we can definitely grow it. We can go from 100 to 200, and in fact quite a bit further, until we are absolutely maxed out on show&tell talks. Along the way, we can experiment with other forms. This is definitely a 2-3 year process that we should approach with sparse assumptions. It would be fatal to just think “YouTube for Everyday Science” or something like that and charge ahead, with money flying out of our pockets in all directions. The opposite approach is actually more exciting and promising: go from 100 to 200, and then double again, and learn, learn, learn.

QS Institute/Train the Trainers/Certification

@Sara mentions the form of an institute. This is such a different approach than a typical startup strategy that it deserves underlining. We actually have some experience with this model: the QS Institute at Hanze Technical University in Groningen, founded by Martijn de Groot. Martijn, whom many of you know, managed to fund and develop a very successful group at QSI that launched an undergraduate major in “Quantified Self and Global Health” and also a summer continuing education program in Quantified Self for health care allied professionals. (These were mainly nurses.) I visited the program several times and met students, who were working on their own self-tracking projects as a way to more deeply understand how to help patients. They selected instrumentation, formed their own questions, analyzed the data, and did a “show&tell” poster.

Martijn is now the director of the ReShape health innovation center at Radboud University Medical Center, where he has, among other responsibilities, a specific charge to develop approaches to teaching Quantified Self to medical professionals. This collaboration gives us a chance to develop curricula that could be shared, sold, or licensed broadly within health care, including the kinds of certifications that @sprague and @QuantifiedBob point to as a powerful component of influencing professional activity in health.

Please keep your comments coming. They are very important and will condition our fundraising and program development, which we hope will feed directly back into the community to spur the kind of work that’s needed.

4 Likes

<3 to everyone that’s weighed in!

@sprague I think you see so many of the same issues, and this potential for citizen science (and not the crowdsourced, exploitative version). I hope we keep hearing your thoughts. Like @Agaricus, I’ve found attempting my own personal project to be very instructive on where I stumble. But the dream is to say: “we don’t know exactly how, but the internet makes it possible for people to create and share and expand ideas – it decentralizes knowledge production – and we think it can do that in this area”

I would expect “10 Million Discoveries” to be a long tail: the vast majority of things will just matter to the individual, not be big findings. But (1) some become more important (re-used components, larger groups, etc.). I think rather than try to predict those, we would want to help millions to exist – and see what grows naturally, i.e. to grow innovation, grow the whole distribution. (2) The long tail has its own value: each person has done something worthwhile for themselves, and there is value in simply making this easier. (Per @Agaricus, the goal is also “sharing”; this is key for the serendipity of #2 becoming #1.)

@Steven_Jonas fully agree with the goal of being impactful on the individual level! That is the seed from which anything else grows, and has value in its own right. I think you may also have made the point: We don’t know what’s needed to expand personal research / everyday science. We have some ideas. I think the internet has demonstrated a transformative ability to enable collaboration & decentralized production – that’s the potential big win. But the platforms or tools are strategies, in service to the community and mission. I think what’s done should be flexible; it should expect to explore, learn, and iterate.

Along those lines, I have caution about our analogies – references to other platforms, communities, and models. Analogies are good because they make an idea familiar, and that’s vital for everything (community, funding, etc). Analogies are bad because they may go too far, they encourage imitative isomorphism – which sometimes works, but often doesn’t. I suppose the goal is: be inspired without being imitative.

Along those lines, I’ve been going back and forth on @Sara’s idea regarding an academy/institute/university. I think it really resonates with some cultural values…

I had started thinking along these lines too – is there an idea like an academy, institute, university? It touches on ideas of learning and research.

I’m cautious about leaning too much on it (does it convey hierarchical learning? We don’t want to reproduce the current model; does it miscommunicate that?).

But it might say some important things about the social norms we associate with institutions & research.

Merton described four “norms of science” – communism, universalism, disinterestedness, and organized skepticism. (Later thinkers have added “originality”.) I feel like there’s a sense that some of these norms exist – maybe in a translated or extended sense – in the aims of everyday science / self research.

Universalism: participation is valued from everyone, not from a particular group – it’s taken outside a traditional institution and democratized. Organized skepticism: I think we might hope for a community that would voice skepticism in the pursuit of empiricism? (i.e. not sympathetic to pseudoscience.) Communism: I think we’d like to see sharing of knowledge/learning, approaches, resources, and solutions so that others can participate and do self-research.

To me, these are positive things I take from the idea of an “institute”… a community collectively engaged in knowledge production, where each researcher has their own project. I’m curious how others feel, if they see resonance with the ethos/norms of “traditional” science.

New draft, substantially revised and heavily influenced by the comments here, is online and open for comment:

I’ll resist the temptation to explain all the changes. Go at it, I’m listening hard.

6 Likes

@tblomseth has been advocating that we set a bolder vision for the two decades ahead. How about 100 million discoveries? On an operational level, if we average slightly more than doubling each year, starting with 100 in the first year, we can get on track toward this goal. I think the early years of this doubling are easy to plan (using our current methods). A single conference delivers more than 80 talks and presentations, and in the past we’ve done 2 per year - and that doesn’t include any of the show&tell talks from local meetups - all produced on a shoestring budget. So I don’t think the early phase of the hockey stick is at all difficult to understand in tactical detail.

HOWEVER, later years require that we master the use of (existing) participatory technologies and educational channels. We don’t have to reinvent participatory culture, though our approach will necessarily have some twists. And our nonprofit structure means that we are not trying to capture all of the value we create and support; we simply want this value to exist. So that’s all to the good.

However, it is still uncomfortable (for me at least) to use the exponential arguments, which have in the past been used to support utterly absurd propositions from utterly unprepared startups. I haven’t quite figured out whether this discomfort is wise or unwise, so I’m just dropping this here for feedback.
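For what it’s worth, the compounding itself is easy to verify: starting at 100 discoveries in year one and doubling annually, the cumulative total passes 100 million right around year 20, which is why “slightly more than doubling” suffices. A quick sketch of the arithmetic:

```python
# Cumulative discoveries if annual output starts at 100 and exactly
# doubles every year for 20 years: the total is 100 * (2**20 - 1).
annual = 100
total = 0
for year in range(1, 21):
    total += annual
    annual *= 2
print(f"Cumulative after year 20: {total:,}")  # 104,857,500
```

The same arithmetic also illustrates the fragility behind the discomfort with exponential pitches: trimming the growth rate from 2.0x to 1.8x drops the 20-year total to roughly 16 million.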

Some added 2nd (or 3rd) thoughts - I’ve been reading the Wikipedia article on modeling Wikipedia growth and feeling more and more like putting a numerical target on “discoveries” by 2040 is perilously close to bullshit. I think it’s important to be clear about our ambition, which is “everyday science for everybody.” This is utopian, perhaps, in the way that the Universal Declaration of Human Rights is itself utopian, but it is not bullshit. However, saying that we will produce (or even “catalyze”) 100 million discoveries by 2040 - that’s a different matter. I think there is ambiguity in the concept of discovery that, while giving us wiggle room, actually gives us too much wiggle room. The seemingly “hard” number is really just a stand-in for “really, really a lot.” One of @tblomseth’s alternate suggestions was to eliminate the numerical claim altogether, and that’s where I’m leaning.