History has always had chroniclers who tell tales of where and who we were when the world was changing. Histories are important because they eventually define who we are, and where we come from. When history repeats itself, it doesn’t bring the past to life but shapes the future of our present. The anxiety of history has often been that there is scarce information, limited data, and biased records which, eventually, reflect the view of the winner rather than the woes of the vanquished.
In a world of constant communication, incessant information, and continuous computing, this changes. When everybody is wearing an information-recording device; when drones and smart devices capture live streams of what we do; when algorithms training on expansive datasets learn to depict, sort, and classify reality into patterns of easy recognition, we know things are different. We now live in an age of information overload: our protests get streamed, songs of resistance get recorded, our presence gets marked on point-and-shoot cameras, and our data trails get etched out by smart devices.
We are witnessing historic times right now as young students continue to defend their democratic future and generations of citizens stand against authoritarian tactics. How do we make sense of this information? How do we tell stories in the face of this overwhelming abundance? How do we know that the stories are credible and truthful? These are not just concerns of misinformation and truth claims but of how we will be remembered, and of how we will store all this data to make human sense of it.
There are many digital aesthetics deployed to write the history of the present. We take screenshots, as we did when we saw the WhatsApp transcripts of the groups orchestrating anonymous physical attacks in JNU. We fact-check information, privileging real-life testimonies over doctored reports, as is often done when talking about police brutality against defenders of justice. We create credibility indices over sources, like the technology companies that already flag some messages as forwards and some websites as fraudulent. We perform digital forensics to look at the layers of manipulation in digital objects, as when exposing the “fake movie cancellations” of people objecting to Deepika Padukone using her visibility to lend solidarity to the youth. We amplify voices by making stories trend on Twitter, producing counternarratives, and making sure that the real stories are heard amidst the cacophony of shrill rhetoric.
These are all actions that make sure that the arc of history remains bent in the right direction, and that paid troll farms and hired authoritarian cells do not capture the narrative. These actions need massive participation and amplification because the digital web favours traffic over content, clicks over truth. Unfortunately, these actions are still only human, and the new testimony holders of the history of the present are not. We have to realise, in our tactics, coordination, and practices of mobilisation and organisation, that we are no longer participating in a human-scale negotiation.
Every platform we use is mined by controlling powers to identify people and put them under surveillance. Every picture we take of a protest crowd is fodder for facial detection algorithms. Every hyperlinked ridicule or critique of the atrocities only ends up amplifying the message of hate instead of the message of hope. Celebrities can influence opinions, but the real influencers are bots that filter, manipulate, and send mass messages to stored databases of people, with a speed that exceeds human comprehension. Technological verification and fact-checking are always going to be a catch-up game because we have already conceded to the harms encoded in our digital networks instead of reengineering them radically.
As these protests continue to swell, we can be sure that the gaze of those threatened by them is going to intensify and come back with more punitive measures. So, those of us chronicling these historic times have to be more cautious about what we capture, what we share, whom we subject to identification, what we link to, what devices and platforms we use, and whose stories we tell. There is no easy manual for digital protection, but it is a good moment to realise that when the negotiations are with non-human digital devices, applications, algorithms, and networks of control, we owe it to each other to invest in human care.
Nishant Shah is a professor of new media and the co-founder of The Centre for Internet & Society, Bengaluru
This article appeared in the print edition with the headline ‘Authorised Versions’