sensibilités algorithmiques
2018-08-02, sixth post

An assembly of algorithms and people

workshop “Algoglitch and other methods for a calculated public”

Algoglitch is coming to an end, for now. We are busy writing and publishing about the project, and thinking about ways to carry on our research :) Before leaving for summer vacations, here is a long-overdue report of the workshop that took place at the Gaité Lyrique on April 21st.

Our main question - how do we want to be calculated? - hypothesizes the existence of a we, a collective entity beyond individual platform users, possessing sentience and agency with regard to algorithmic calculation. It was time to test this hypothesis and actually perform that we, as suggested in the text that presented the workshop:

Our daily activities are assisted by algorithms that each of us approaches differently, according to their lifestyle, their familiarity with technologies, their use of digital platforms, the hopes or fears they express towards innovation... The so-called power of computation to transform public space receives disproportionate attention compared to the common and shared experience of being a calculated public.

The algoglitch - the emergence of computation at the surface of reality - is a made-up word. It seeks to capture the manifestation of this ambiguous sensitivity: between the bug and the bias, aesthetics and function, the technical and the political, the individual and the collective experience. It is the entry point of this workshop, which will gather designers, engineers, activists and sensitive users. Our proposal is to start from the algoglitch as a visual method to interpret daily algorithmic encounters, and to test its relevance at the intersection of several fields of interest.

The workshop itself will also be a prototype for a research & experimentation space on algorithmic sensitivity.

Algoglitch is a project by Sciences Po's médialab along with the French Digital Council.

There were 26 participants, most of whom had been invited by us. The others had registered through the Gaité’s website - though we had to limit the number of registrations due to the size of the room. At the outset we re-explained the purpose of the Algoglitch project to everyone, telling stories and anecdotes as ways to notice some visual aspects of algorithmic companionship: the carpet the Roomba thought was a cliff; the beautiful Straightened Trees project by Daniel Temkin; the history of ad blockers that alter the landscape of our browsing experience, etc.

Algoglitch and other methods for a calculated public, the title of the workshop, stems from our gradual understanding of the glitch less as a type of object than as a type of research method - inspired by and close to concepts such as inventive methods and live methods - that aims to focus attention on the experience of being calculated, and to notice some of its aspects. This is particularly difficult because this experience is precisely designed to go unnoticed, to be seamless.



Musée des glitches

The first activity of the workshop consisted in organizing an exhibition of glitches (Musée des Glitches): a speculative proposition to capture the fleeting moments when algorithms show some autonomy and speak for themselves, instead of being faithful and invisible intermediaries perfectly mirroring the users' needs and desires. From our collection of glitches, we had selected 10 items which we submitted to a collective curation process: the participants were asked to choose one item and one particular mode of interpretation of that item among 5 possibilities: object label (cartel in French), context of appearance, causes, consequences, and possible value (exploitation possible). For each of these options, participants were encouraged to either use their previous knowledge and experience, search for more information on the internet, or use their imagination and speculate. Some of these items had historical significance, like the one entitled the Latanya Sweeney Glitch: the Harvard professor googled her own name as part of her method to investigate discrimination in ad targeting.


The selected glitchy items, with their different interpretations, were displayed in the workshop space, turning it into a DIY exhibition which we then visited together.


While some glitches were chosen only once (like the Latanya Sweeney Glitch), others were more heavily engaged with, like Google killed Father Christmas! (one of the titles given by participants): it featured all 5 modes of interpretation, including 2 labels and 3 consequences. The latter narratives emphasized an array of consequences: from one where the algorithm becomes an ally of children against their parents’ dubious knowledge claims, to one where the child becomes the algorithm’s worst enemy (with a drawing based on a still from the film Terminator featuring the character Sarah Connor, the mother of the human resistance leader John Connor).


Other graphical proposals included feedback loops, as in the case of Elsa's Ultimate Spiderman Lover of Crush Cloning, which featured a series of 3 algorithms talking to each other and “writing” the scripts for films then performed by humans.


Labels were used to reframe the items: as a piece of religious scripture stored in a data center; as a work of art attributed to the artist Richard Prince, well known for appropriating Instagram selfies; etc.

In the case of the photograph of the rollerblading Foodora delivery man taking the metro, the context of appearance was used to produce a fictional narrative and invent the character to whom the picture would have been shown: a user connected to the Foodora app, asked to recognize Foodora delivery men in pictures, who realizes he is being used to report deviant behaviours to the company.


The various narrative styles used by the participants (fiction, observation report, journalism, programming language, personal remarks) made it possible to account for the various ways we relate to a culture of algorithmic companionship. For example, the narrative envisaging the consequences of the Plenty is never enough glitch states matter-of-factly that the author of the complaint finally conforms to the behaviour expected by the Amazon recommendation algorithm and starts collecting toilet seats. Another example, Gospel, Psalm 12, is a personal reflection on the possibility for the Lyft driver to rate his days according to his own criteria.

This activity enabled us to interpret everyday troubles experienced by the public, as manifested in the glitches, beyond the hope that we might, in the future, be calculated in a perfectly efficient and fair manner: the misalignment between the sensory world we live in and the calculated world of the machines is the very condition of a calculated public we might identify with.

Self-help group

The second activity was the organization of a “self-help group”, whose purpose was to collectively explore the strategies participants employ to regain control, ranging from simple daily actions to thought-through research methods. The participants paired up and were asked to interview one another about their own methods of algorithmic resistance, and to report them in a collective table with three columns: “Name / Method / Effect”.


Methods targeting many different types of algorithms were mentioned, from music streaming platforms to search engines, social media, and food delivery services. While some participants were quick to explain the strategies they used in their daily life, others were less aware of the methods they implemented. The interview process was particularly useful to the latter, especially when the interviewer took their role to heart and guided them with specific questions about their use of common platforms. As the participants were interviewed, they were prompted to put into words methods that they may not have fully formulated before.

In true “self-help group” fashion, the workshop made it possible to go beyond sharing techniques. It also offered a space for participants to discuss their representations, fears and hopes about our algorithmic world. Some even exchanged reading suggestions on the topic. Could information itself be counted as a method of algorithmic resistance?


To explore all the methods that were discussed, see here for a transcription of the table:

An analysis of the results made it possible to identify a number of strategies.

Some methods aimed at gaining a better understanding of the inner workings of particular algorithms. For instance, the author of the method “My life as a delivery rider” (Ma vie de livreur) became a rider for food delivery companies, in order to experience firsthand how the algorithm was designed and what its effects were.

Others focused on ways to influence, or get away from, the platforms they encountered in their daily life. This could take three main forms. The first was not using these algorithms at all and looking for alternatives, such as free software and/or services respectful of users’ privacy.

The second was using them while obfuscating the data that could be collected, with an underlying activist purpose. Examples of such “clouding the issue” included “In the shoes of…” (Dans la peau de…), which entailed sharing one profile between several people, and “Counterfeiting data” (Faux-donnayage), in which the author purposely acted in an unnatural manner to blur their data trails, and thus feel less easily targeted.

The third was “making an alliance with the platforms”, i.e. using them in a way that maximized what they had to offer by “feeding” the algorithms with correct data. This was the case of the “Spotify algorithmic patina” (Patine algorithmique): the author of the method would clean up their listening history after parties, so that their suggestions remained free from popular party songs.

Interestingly, these three forms reveal different types of relationships with the act of being calculated. While some participants used methods to avoid being calculated and prevent platforms from knowing too much about them (either by misleading algorithmic platforms or by not using them), others tried to be calculated as well as possible in order to get the most out of the algorithms.


Protocols of attention

After critique and self-observation, the third activity of the workshop was propositional. Called "Protocols of attention”, it was meant to engage the participants in processes similar to the one followed in Algoglitch, i.e. to take Algoglitch as an example of paying attention to, and activating, unwanted (unsolicited) parts of the experience of being calculated. Originally the idea was to gather participants in small groups of at most 5 people, organically formed based on the methods elicited in the second activity, and then have them prototype interdisciplinary research propositions. However, the second activity took much more time than planned, so we had to change the way of forming the groups: we merged pairs from the previous activity, in quite an authoritarian manner.

The participants then tried to better understand the objectives of the third activity. One of them asked whether the objective was to make parts of the experience visible, or to propose alternative ways of being calculated. The answer was that there is a continuum between ways of paying attention and ways of acting, which each project should (could?) address, thus questioning the misalignment between the calculated world and the sensory world.


Five groups were formed, which worked together for about one hour before presenting their results to everyone. The first group got interested in Parcoursup, the French online platform for higher education admissions. The platform and its algorithm had been totally revamped that year after criticisms of the previous version. However, the new version itself became one of the focuses of that year's student movement contesting laws passed by the government, and at the time of the workshop the Parcoursup algorithm regularly made headlines in the news. The group mobilized two images to describe the process and create a tension between them: 1. the algorithm as a witch who foresees students' careers; 2. a possible collective of activists called Parcournymous who would develop alternative algorithms for universities to select students, also offering to smuggle students in. In addition, the group suggested workshops with students to learn about their personal digital data (social networks, geolocation, fitness data) and to consider its use as an alternative source of educational data for selection. Overall, the group did not discuss the fairness of the system or the legitimacy of using personal data. Rather, they reflected on the ambiguity of choosing certain selection criteria over others, and on the standardisation of the selection process as making it difficult for universities to actually differentiate between students. That issue was minor at the time, but it became dominant in the weeks that followed the workshop.


The second group looked at music recommendations as a site for active design by the user rather than passive recording of past activity. One of their starting points was that one can totally mess up one's recommendations through certain unusual uses, such as a party, or listening to music while drunk. Reverse-engineering the effects of the user's listening practices on their recommended music would make it possible to design protocols that obtain particular effects, like resetting a botched list of recommendations. Here the proposal is to make use of an unwanted part of the experience - the inconsistency of listening practices (several users at a party, or a drunk user) - and to take it into account when designing recommendation profiles.

In the same spirit, the other groups tried to pay attention to other situations: dynamic pricing in travel fares (by making visible the prices paid by other users at the same time), the elaboration of the algorithms themselves (by creating a game where humans embody the different interests being negotiated within the algorithm), and online reputation.
