Breakout: An Imaging Mind

The total amount of visual data is growing and we’re trying to find the context within it. Every attempt to uncover this context seems to generate even more imagery and thus more data. How can we combine surveillance and sousveillance to improve our personal and collective well-being and safety?

The Imaging Mind session will be hosted by me, Floris. I work on a project called Imaging Mind that explores the road to intelligent imaging.

Short intro

[i]Imaging is becoming ubiquitous and pervasive. It has also been augmented. This artificial way of seeing is becoming our ‘third eye’. Just as our own eyes view and build an image and its context through our minds, so too does this ‘third eye’ create extra context and build an augmented view through an external mind powered by an intelligent grid of sensors and data.

An Imaging Mind.

And this is what we are chasing at Imaging Mind. All the roads, all the routes, all the shortcuts (and the marshes, bogs and sandpits) that lead to finding this imaging mind. To understand the imaging mind is to understand the future. And to get there we need to do a lot of exploring.
We engage with the public to further amend the narrative and are honoured to host a break-out session at QSE Conference 2014![/i]

In this break-out session we would like to explore the topics of #surveillance and #sousveillance (or as Steve Mann called it: #coveillance) with you to further amend the narrative. The session will be interactive and we are very much looking forward to your input and thoughts on the subject.

How can we make surveillance/sousveillance contextually useful rather than intrusive?

I would like to thank everyone who attended my break-out session. Below you will find a recap of some of my guidance during the session. I will post my notes later. I encourage those who attended to do the same so we can continue the discussion.

For the people who didn’t attend: it would be great if you joined the discussion by answering one or more of the questions below so we can get a better understanding of the theme.

Introduction

Everyone introduces themselves with their latest selfie.

What we see happening

Since the advent of digital cameras, the cost of producing a single image has been dwindling. As digitization marched on, cameras were integrated into devices such as smartphones, allowing images to be shared instantly. These days the cost of creating a single image (acquisition, distribution and storage) is almost down to zero; its true value lies in the sharing amongst communities. For humans, images are the most natural channel for transferring concepts like experiences, knowledge and affection.

The next phase

In the next phase of the evolution of imaging – which we are already experiencing – people will carry many imaging devices on them, and in due time also inside of them. Next to that, a multitude of devices will continuously generate video and images. All of this visual information will be made available in real time in the cloud (e.g. “the internet of images”). Moreover, each of these visuals will carry contextual data – think timestamp, GPS co-ordinates, 3D-perspective, health data – in conjunction with the image itself.
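
To make that concrete, here is a minimal sketch in Python of what such a contextualized image record might look like. The schema and field names are hypothetical illustrations, not an existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    # Hypothetical record: an image plus the contextual data described
    # above (timestamp, GPS co-ordinates, perspective, health data).
    image_uri: str                    # where the pixels live in the cloud
    timestamp: float                  # Unix epoch seconds
    gps: tuple                        # (latitude, longitude)
    heading_deg: float = 0.0          # 3D perspective: camera direction
    health: dict = field(default_factory=dict)  # e.g. {"heart_rate": 72}

# Example record for a single shared image.
rec = ImageRecord(
    image_uri="https://example.org/img/001.jpg",
    timestamp=1_400_000_000.0,
    gps=(52.37, 4.90),
    heading_deg=180.0,
    health={"heart_rate": 72},
)
```

The point of the sketch is simply that the context travels with the image, so a downstream system can reason about where, when and under which conditions it was taken.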

Therefore, it can be said that the value in this next phase of imaging will not lie in the sharing (which will still occur on a large scale) but in the interpretation and intelligence that can be extracted from these magnificent datasets, using big data technologies and crowdsourcing.

In other words: it will be the system that determines the value of the image or video, not the individual or the single camera.

Thus, imaging is undergoing a system change from social and connected to networked and intelligent (analyze and predict).

Application
New applications and services will be developed that combine individual images into specific data sets, enabling uses we cannot yet foresee. To a certain extent we can travel back in time to reveal what couldn’t be seen (by looking at the data with more, or even new, forms of eyes – e.g. crowdsourcing, AI or a combination of both).
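
The “travel back in time” idea can be sketched as a query over contextualized image records: given a time window and a location, retrieve every image taken there and then. This is a toy illustration with made-up records and a deliberately rough distance formula, not any existing system’s API:

```python
import math

def within_radius(gps, center, radius_km):
    # Rough great-circle distance via the equirectangular approximation;
    # fine for small radii and purely illustrative.
    lat1, lon1 = map(math.radians, gps)
    lat2, lon2 = map(math.radians, center)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371 * math.hypot(x, y) <= radius_km

def rewind(records, t_start, t_end, center, radius_km):
    """Return all images shot within a time window near a location."""
    return [r for r in records
            if t_start <= r["timestamp"] <= t_end
            and within_radius(r["gps"], center, radius_km)]

# Toy data set: three images, two of them near Amsterdam in the window.
records = [
    {"uri": "a.jpg", "timestamp": 100, "gps": (52.37, 4.90)},
    {"uri": "b.jpg", "timestamp": 150, "gps": (52.38, 4.89)},
    {"uri": "c.jpg", "timestamp": 150, "gps": (48.85, 2.35)},  # Paris
]
hits = rewind(records, 90, 200, center=(52.37, 4.90), radius_km=5)
# → the records for a.jpg and b.jpg
```

In a real “imaging grid” the same query would run over a shared cloud index rather than an in-memory list, but the principle – filtering by the contextual data attached to each image – is the same.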

Questions

  1. What information can be derived from your pictures now (i.e. from the selfies we started off with)? If combined and analyzed, what knowledge could be discovered about our group?
  2. How can we combine multiple imaging data sets without hurting the privacy of any of the subjects?
  3. Can surveillance imaging data be used in quantified self applications?
  4. Can surveillance/monitoring be triggered by quantified self devices (with or without the user knowing)?
  5. What are the applications and possible barriers of an emerging imaging grid built on connected data sets?
  6. How would such a grid work, and what are the gives and takes from the user’s perspective?

Surveillance is an automated version or extension of security personnel. By definition this is regulated and therefore limited in its application.

  1. Does an ‘open source’ network of cameras allow for other, less limiting applications?

  2. What future social norms will this new ‘Image World’ require?

**Challenging thought**

What if ‘trackies’ connected to this emerging grid and voluntarily contributed their ambient data (in visual form, e.g. video/photo)? This imaging data could come from wearable cameras but also from fixed cameras (e.g. Dropcam).

  1. How could service providers be allowed to use this data (if at all…)?
  2. What are the limitations and requirements with regard to interpreting and showing this information to third parties?
  3. What contracts would that require?

Looking forward to your input on this subject. A great book about imaging in our lives: On Photography by Susan Sontag. Great food for thought.

Has there been movement on this in the last decade?

I actually want to make it so people can track their imaging data more frequently and feel more self-empowered about their own data.

It’s interesting you bring this up now. By coincidence I just met up with a QS friend who has been involved with many startups, and he mentioned something that I didn’t know: he had tried to buy the Narrative Clip assets out of bankruptcy some years ago, but it didn’t work out. He said during our conversation that there still isn’t anything generally available that tries to do exactly what the Narrative Clip did, that is, continuously record and store lifelogging images. Maybe this is right, but I have a feeling it’s not. For instance, maybe somebody is doing this with the iON Snapcam? Here’s an academic paper that mentions it as a tool for food tracking: Automated wearable cameras for improving recall of diet and time use in Uganda: a cross-sectional feasibility study | Nutrition Journal | Full Text