Videos of Black people being murdered at the hands of police don’t just interrupt our timelines – they interrupt our peace of mind, erode our wellbeing and remind us of the risk of just being Black, all while forcing us to witness the desensitisation towards, and lack of basic humanity afforded to, our bodies. With many of us spending an unprecedented amount of time online during lockdown, our relationship with, and needs from, online social spaces have changed.
Given the billions of dollars invested in technology and innovation, there must be a better way to hold police and other perpetrators accountable and wake up those who are asleep to racial injustice, without Black communities having to take yet another emotional hit. Research has shown that almost a quarter of people who viewed content depicting violent events developed symptoms of post-traumatic stress disorder (something companies are only now realising also affects their content moderators, and for which they are beginning to take responsibility). This is why we at Glitch, a charity working towards ending online abuse, are calling on all social media companies to give all users greater control over their platforms.
“When it comes to something as viscerally distressing as a video of a person being murdered, perhaps there need to be more ground rules and decorum around their dissemination,” Kemi Alemoru wrote in a recent gal-dem article, pointing out that graphic content isn’t shared in this way when it comes to other kinds of injustices. Some of the major social media platforms add “sensitive material” labels to certain trending topics and events – so why is it different for the racist assaults, abuse and killing of Black people?
Many will say that if you don’t want to see such content, then you can: (a) just mute and filter keywords or (b) just leave the platform. However, (a) only goes so far, while (b) is essentially victim-blaming. At Glitch, we champion digital self-care and encourage users to utilise the muting and filtering options to have greater agency when navigating social platforms. However, muting and filtering cannot catch every video or image that could cause distress.
At times, I’ve had to mute #BlackLivesMatter and other keywords to reduce the amount of traumatic content on my timeline, meaning that I’ve had to opt out of other important conversations happening around that hashtag. However, traumatic content is still shared without the hashtag, which is why tech giants must urgently give us, their users, greater autonomy to manage our self-care.
If Black lives really do matter to social media companies, as some of them claim, then they need to do better. They must fulfil their duty to protect their users’ welfare by blurring graphic content and warning users before they see it. Just as there are efforts to keep users safe from beheadings, terrorist attacks and animal cruelty, we must have a choice about whether or not we want to engage with images of the brutalisation of Black people.
If you agree, then please sign and share our petition calling for social media companies to step up and make their platforms safer for Black people, because Black Lives Matter.