For more background, read part 1 and part 2 of this series.
In my last article, I raised a lot of questions we were confronted with when setting up the Backfeed application for the OuiShare Fest team. Now I’d like to describe the two concrete experiments we ran and my conclusions from them.
Slack experiment 1: evaluating program questions
As a first experiment, we asked OuiShare Connectors to submit and evaluate ideas for key questions they think this year’s OuiShare Fest program should address. To do this, we created a new Slack channel in which we activated the Backfeed protocol, enabling people to submit their questions and evaluate others’ ideas with the Backfeed plugin. We limited this to a defined period of one week to keep engagement high. The aim was to collectively identify which questions are most relevant.
Fewer contributions, not more
Though this first experiment was quite short, it brought some interesting insights: contrary to what I had suspected, the fact that one would receive reputation for one’s actions did not lead team members to make more contributions, but fewer. They still actively participated in discussions and pitched their ideas in the regular chat, but when it came to actually submitting them, they were hesitant. When I asked them why, I often heard that they wanted their contribution to be “perfect” before submitting it, and that they were afraid of being evaluated by the group in a traceable way.
As Manuela Brito, a team member, states:
Knowing that the team’s evaluations would be clearly visible and affect my reputation was a barrier for me to contributing.
See a detailed analysis of the results of this experiment.
Slack experiment 2: holding up the mirror
After this first experiment of evaluating questions for the program, we ran a second one, using the tool the way it was originally designed: people do not submit and evaluate ideas (as in the first experiment), but work that has already been completed. To do this, we connected our project management tool to the program curation Slack channel, so that tasks completed on Trello would automatically trigger the creation of a contribution on Slack, which could then be evaluated. The idea was to embed Backfeed seamlessly into our existing workflow (when a task is done, it is marked complete in Trello), creating as little additional workload for team members as possible, since workload had been expressed as a major concern about using it.
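To make the workflow concrete, here is a minimal sketch of what such a Trello-to-Slack bridge might look like. This is an illustration, not Backfeed’s actual integration: the “Done” list name, the channel name, and the message format are all assumptions. It takes the payload a Trello webhook sends when a card is updated and, if the card was moved to the completed-tasks list, turns it into a Slack message that could be treated as a submitted contribution.

```python
# Hypothetical sketch of the Trello -> Slack bridge described above.
# Assumptions: the completed-tasks list is named "Done", and the target
# Slack channel is "#program-curation". Neither is confirmed by the source.

DONE_LIST_NAME = "Done"


def contribution_from_trello_event(event: dict):
    """Return a Slack message payload for a card moved to Done, else None."""
    action = event.get("action", {})
    # Trello webhooks fire for many action types; we only care about
    # card updates (which include moves between lists).
    if action.get("type") != "updateCard":
        return None
    data = action.get("data", {})
    # When a card changes lists, Trello includes listBefore/listAfter.
    if data.get("listAfter", {}).get("name") != DONE_LIST_NAME:
        return None
    card_name = data.get("card", {}).get("name", "")
    member = action.get("memberCreator", {}).get("fullName", "someone")
    return {
        "channel": "#program-curation",  # assumed channel name
        "text": f'Contribution by {member}: "{card_name}"',
    }
```

A webhook receiver would call this function on each incoming Trello event and post the returned payload to Slack (for example via Slack’s `chat.postMessage` API), so that marking a task complete in Trello is the only action a team member ever takes.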
I was worried about spending more time evaluating than doing things. - David Weingartner, team member
Before I go further into the topic of seamlessness, I’d like to highlight another barrier around evaluation that emerged at this point. In another discussion, the team expressed that not only were people afraid to submit contributions, they also felt uncomfortable when asked to evaluate others’ contributions with a simple number, because they lacked information and consensus within the group on the parameters of evaluation. What scale are we talking about? What are the criteria? What if I change my mind after evaluating?
“I felt we needed to agree on several points with the team before starting to use it,” said team member Ana Manzando. However, after some reflection on her own feelings in that moment, she added: “In the end I realized that I was also using this as an excuse not to use the tool, because I did not want to face the fact that I was afraid of being evaluated.”
It requires a lot of maturity, self-reflection and willingness to confront one’s shortcomings to work in such a system on a day-to-day basis.
And this is where the real challenge lies. We have been distributing funds among our team through collective evaluation since 2013, and it has often led to friction, because the process quickly exposes imbalances in the group.
I am torn: on the one hand, I don’t see how we can get around having everyone evaluate each other’s work if we don’t want to remain in a system where this is the job of those at the top of a pyramid. On the other hand, having seen what a roadblock this was for my team in even just testing a tool, building a system that fully depends on everyone evaluating, yet gives no guidance on the criteria, seems problematic.
From friction to seamlessness
Let me get back to seamlessness. Since the team had expressed fear of spending all their time adding contributions to the system, I designed the second experiment to make this step “invisible”. In contrast to before, contributions now flowed in steadily and automatically (they were, however, not evaluated). While people had been afraid to contribute before, they were now doing so by default, simply by continuing their work as usual. Does this go against their free will, since there is no active consent when submitting a contribution? Or is seamlessness actually critical for a tool like this to be used? How much friction is good?
Of course, we consent every day to terms of service of websites we don’t read, and share data without being fully aware of it. As Blockchain France explains, “in the future, many blockchain applications will likely run seamlessly without our knowledge in the technology we use.” But we still have a choice in how we initially set up such systems.
Decentralization is not neutral
For now, this is my conclusion: as far as I can see, there are many reasons we did not get very far in using this tool operationally, linked to technical immaturity and lack of time, but especially to human challenges.
The team’s hesitation to use the tool clearly shows: decentralization is not neutral. It can even be political. By using one tool and not another, you are opting in to a certain view of value measurement and exchange, one which, in this case, we are perhaps not sure we want.
The insights from these experiments into how complex it is to design a governance system that works for groups like ours may have informed Backfeed’s decision to pause the development of this application and, for the time being, focus on a more precise implementation of their protocol.