Break the bias to challenge gender norms on social media

Expert comment

Written by Stephanie Diepeveen

Image credit: Steven Dickie for ALIGN, February 2022

In 2021 anti-gender equality activists conducted a targeted campaign against Hanna Paranta – a well-known Somali women’s rights activist who uses her Facebook page to support survivors of domestic violence and rape in the Somali diaspora.

Facebook restricted her from posting after her content had been repeatedly mis-flagged as inappropriate. Without an active Facebook account and the blue badge that marked her status as a verified public figure, her digital work to raise awareness of gender discrimination was shut down.

Hanna Paranta’s experience is not isolated. There are many other varied and concerning examples of how social media platforms can silence specific users – sometimes on the back of other users’ coordinated action, sometimes with very little evidence or explanation of which community standards have been breached.

This specific attack is an example of how social media can be a vehicle for promoting anti-equality and anti-rights agendas, with some individuals paid to post negative content. Hanna’s story of being silenced also points to a more troubling question about whether social media can end up amplifying existing gender biases and oppressions. Her account was later reinstated, but Facebook did not restore that vital blue verification badge.

How social media can change gender norms

If you’ve ever wondered about the dominance of misogynistic content, experiences of shadowbanning, or abuse towards women online, it now seems undeniable that certain online spaces are fertile ground for harmful political ideologies and activity. Their democratising potential has been thrown into question by an overwhelming onslaught of hate speech, trolling, dis- and misinformation, and harmful content – some of which has even been found to feed into genocidal or femicidal violence in the real world.

This International Women’s Day, ODI and ALIGN are shining a light on new research that sets out the evidence on whether social media is changing gender norms and, if so, how. The research starts from the premise that these platforms function as they do because they are built on hidden layers of material infrastructure.

Hidden in plain sight: how the infrastructure of social media shapes gender norms unpicks how technological design, profit models and organisational hierarchies all give way to patriarchal norms, and can end up promoting sexist, heteronormative and racist stereotypes. It also considers what users and regulators can do about it.

A second companion report, Hashtags, memes and selfies: can social media and online activism shift gender norms?, asks another urgent question. As a virtual public space, social media has been harnessed by activists to call out patriarchal norms, share rights-based gender content and build movements for change. Yet there is much uncertainty about whether ‘slacktivism’ can really make a difference. ALIGN’s research seeks to answer this by spotlighting the great potential for feminist digital activism to catalyse transformative shifts in both sexist thinking and behaviour.

“It is vital to recognise that social media can be an empowering space for feminist activism.”

Patriarchal bias by design

What does patriarchal bias by design mean on social media? Our experiences on social media reflect a combination of technological and human decisions: from the data labellers who categorise content, to company leadership steering strategy and priorities, to the design teams that programme the very algorithms curating our news feeds.

These decisions, which maintain and alter social media platforms’ infrastructure, have given way to particular patterns in how content is targeted, amplified and made (in)visible – constraining what different people can engage with online. Specifically, these dynamics have tended to reinforce existing gender norms, often leaving little room for non-heteronormative sexualities or for people’s experiences of gender beyond the binary. This is evidenced by the constant presence of hateful content directed towards women and other non-conforming genders on these platforms.

Unpacking the human and technological biases that are built into the infrastructure of social media platforms is crucial for understanding how to stem the flow of hateful content towards women. As feminists continue to use online spaces to ignite change, it is even more important to ask why misogynistic or sexist content has been reported and removed less often than other forms of hate speech. To develop a more just and equal online world, it is also urgent to understand how users and advertisers, relying on automated algorithmic and binary data systems, end up replicating discriminatory practices online and reifying patriarchal gender norms.

“Users have been found to disproportionately see content on social media platforms that reflects prevailing patriarchal gender norms.”

How to #BreaktheBias on social media platforms

Regulation of social media platforms can target different aspects of this infrastructure: the technology, for example by regulating algorithmic processing of content (e.g. on what basis targeting can be done), or the company, for example through antitrust legislation. The intersecting dynamics of gender, race, geography and sexuality also point to the importance of intersectional and interdisciplinary work to create a more equitable and inclusive foundation for online interactions.

Different actors in dialogue with one another each have a role to play:

1. Tech companies

There is a real need for companies to be more transparent about what content is removed and why – both for users and for the wider tech ecosystem. Given the scale and diversity of content, solid human rights-based principles can help underpin difficult content moderation decisions.

A lack of openness about how these technologies are programmed to promote content online, and the dominance of company profit incentives, contributes to mistrust and has unequal effects. Content moderation decisions often lack the nuance needed to prevent the silencing of marginalised groups, as has happened to LGBTQI+ communities in the past. Some suggest algorithm reform could help.

There are also gaps in the capacity to adequately monitor hateful content in non-European languages and non-western geographies.

“Because of existing power relations, along racial, gendered and other historical lines of oppression, machine learning algorithms reflect and amplify existing inequalities.”

2. Public sector bodies

Regulation is part of a dynamic infrastructure that is constantly evolving, and social media platforms are not merely sites where gender is performed or presented. They are themselves implicitly shifting gender norms online through the classification, analysis and promotion of content.

More comprehensive regulation could be achieved by course-correcting hidden bias in design. Thus far, governments have struggled to keep pace with the growth of hateful content, which points to a bigger problem: regulating social media content after it is published does little to address bias in design.

Regulators and policy-makers must therefore take more forward-looking approaches, such as developing a framework for making difficult assessments across diverse contexts and changing conditions, and grounding content moderation in human rights principles so that marginalised groups are not unintentionally silenced.

3. Activists and individual users

Social media activism is creating change. Activists and social media users are successfully calling out the bias and demanding more protections for women and marginalised genders online. Lobbying efforts have also been key in putting the spotlight on opaque content moderation decisions.

Activists and civil society bodies might work together to use their experiences to help tech companies and governments reframe debates over regulation, or to offer recommendations for the creation of more inclusive and open online spaces, like the Equality Labs report on violence and hate speech on Facebook India.

Individual users can be active players in promoting a more gender-equitable online sphere. They can contribute to gender equality in subtle ways, by refusing to draw on stereotypical images of gender to sell products or choosing not to share unequal and hateful content. In more explicit ways, users can flag accounts that share unreliable (or doctored) audio-visual disinformation intended to attack or harm women or gender activists.

Influencers can work with social media platforms to make visible people of non-conforming genders and to celebrate LGBTQI+ identities.

For feminist social activist content, you can follow: @ginamartinuk (Twitter) & @ginamartin (Instagram); @aya_Chebbi (Twitter & Instagram)