By Nishant Shah
If we think of trust as a physical concept, it becomes immediately apparent that it has intensive and extensive qualities. On the one hand, the feeling of trust – its perception, its experience, its affect, and its emotion – is subjective, customized, and personal. It is subject to both temporal decay and contextual collapse. The sheer ineffability of trust, and the surety that it might be broken when least expected, by unpredictable actions and sources, make it an intensive and an intense experience. It requires continued negotiation, interrogation, and suspicion. Among friends, and in other similar interpersonal relationships, the covenant of trust comes with the promise of betrayal. There is no doubt that the things/people/ideas we trust will eventually, even if willy-nilly, lapse and let us down, resulting in a constant surplus and deficit of trust.
It is precisely because of this contingent and ephemeral quality of trust as an intensive property that so much attention has been paid to its extensive measures. Trust needs to be designed, enumerated, quantified, measured, delineated, fixed, and produced as a static rubric in order for complex human social organization to function. Precisely because the intensive experience of trust relies on personalized affects, the need to externalize it and trace its manifestations has been central to how we arrange and operationalize our affairs. Various values, such as transparency, responsiveness, accountability, simplicity, and openness, have been presented as extensive measurements of trust systems.
The entire history of representative democracy, and of the broadcast-based hub-and-spoke model of establishing a direct relationship between the individual and the state, and among individuals within different collectives, has been prefigured by the ambition of building trust. Trust, it seems, required a face-to-face, personal, and direct cybernetic feedback loop for its fruition. Many have claimed that it is this ambition for trust-building beyond individual experience and emotion that has resulted in the gigantic structure of bureaucracy and management that is the backend of modern functioning.
Systems of granting access to rights and resources through verification, authentication, authorization, and legitimization have depended on protocols of forms, scrutiny, performance, and judgement that are built into our everyday life.
From traffic lights that govern how we move to doctors’ offices that can mandate our confinement in specified spaces; from regulations that define the shape of our homes to laws that define the parameters of our relationships, we essentially live with systems that we trust to protect us from perceived harm and, more importantly, from each other. The emergence of trust systems, it could be argued, relies on the fact that we cannot, intensively, trust each other, and hence we need to trust systems of extensive measures that govern and order us.
The digital turn was a dramatic disruption in this process because digital technologies added new layers of intermediaries into our systems of trust. Hidden behind the transparently opaque interfaces of our GUI (Graphical User Interface) devices are a multitude of operators who remain both invisible and unmapped in our digital transactions. The design and engineering of trust, which have so critically focused on making the actors in a trust system transparent, have increasingly been replaced by opaque intermediaries who perform the protocols of trust without necessarily granting access to their intentions and operations.
The older systems of trust were presented as centralized, objective, coherent, and consolidated, responsible for our memories, archives, histories, and identities. The digital turn has transformed these systems into human forms: distributed, subjective, contradictory, and circulated. The extensive systems, which were supposed to be static measures, have, through neural learning networks, algorithmic structures, data-informed manipulations, and protocol-based circulations, become as intensive, and hence as unreliable, in the theatre of building trust.
The emergence of the digital, which necessitated a re-engineering of these older systems, exposed for us the human, unstable, and manipulative potentials of these extensive systems. Or, in other words, digital conditions have converted extensive trust measures into intensive negotiations. This is essentially a condition of creepiness, where we are being touched by and shaped through systems that we can no longer see or know, but accept in good faith.
What happens, then, to our understanding of trust when it gets replaced by faith as the new trope of engagement? In a true manifestation of the potentials of Alan Turing’s experiments, what we have now is a collapse of the principles through which certainty and truth claims could be mounted. How do we trust the systems that we interact with? What are the new forms of trust that we need to investigate? How do we understand the performance, spectacle, and choreography of trust, when truth is merely an opinion coded in a protocol?
These are some of the questions that the Digital Earth fellow Valia Fetisov explores in his new work. Beginning with the most public global case, China’s Social Credit System, Fetisov reminds us, and shows through material examples, that the design of trust and the condition of data-driven surveillance is a global norm, not an exception particular to China, where these things are merely more overt and explicit than in many other parts of the world. Fetisov examines the ways in which these systems of extreme monitoring and regulation are naturalized – through habits, through incentives, through penalisations, and through distributed surveillance mechanisms where, instead of the system, your neighbor becomes the thing to be most suspicious about.
For those of us who have lived through the social media data-sharing debacles, this is not a surprise – social media giants like Facebook continue to warn us about predatory strangers, third-party intruders, and hackers who might steal our data, thus creating a condition of potential risk from anybody we interact with in the online space. However, they hide the fact that the thing we need to be most worried about is Facebook itself. Facebook presents itself as an extensive measure and warns us of the intensive relationships, but as Fetisov’s work reminds us, Facebook is an intensive relationship and needs as much scrutiny for its betrayal as would any other person.
Fetisov is deeply interested in what happens to new systems of trust and in how they propel innovation in the tech industry. However, he is even more invested in thinking about how these new innovations and systems create a new social order. Instead of merely painting a Brave New World, he believes that engaging people in material practices that make them understand and question the implicit and intensive trust measures they deploy in their interactions with trust systems helps start a conversation about whom we trust, and how, and why.
More importantly, it immediately connects to the questions of whom we care for, and how, and why, and of what new mechanisms and protocols will need to be established for a future of safe and inclusive worlds where trust becomes a social good rather than a political manipulation. His physical representation of this conundrum is a trust machine that provokes the subject into understanding the delicious creepiness of these trust systems, and into recognizing that care and trust need us to own our actions and habits as critical spaces for rebuilding trust.
About the author
Dr. Nishant Shah is the co-founder of the Centre for Internet & Society, Bangalore, and a Professor of Culture & Aesthetics of Digital Media at Leuphana University, Lueneburg. His work sits at the intersections of technology, affect, identity, and social and political movements, and configures the ways in which we learn to become human in the midst of technologies.