
RECLAIMING THE DIGITAL LOOKING GLASS

Creating safer online spaces for women.





Between having meetings over Zoom, finding love via dating apps and dreaming up vacations through Instagram, we’re seeing more of our ‘real lives’ unfold online. Often that means viewing life’s best moments through rose-tinted filters; sometimes it means seeing the worst of our world reflected in hyper-real, yet recognisable, digital colour.


Research by the charity Refuge has shown that more than one in three UK women (equivalent to 11 million women) have experienced online abuse or harassment on social media or another online platform, rising to a staggering 62% among young women.

As more and more women experience some kind of abuse or harassment online, and in the wake of app dates, apartment rentals and ride-share experiences going horribly wrong, there has never been a greater need for lawmakers, businesses and communities to come together and tackle the complex matter of building safer online spaces.


Survivors and allies have also started speaking up on how these early pushes for digital safety (or the lack of them) have impacted people’s lives. Their stories reveal questions and double binds that must be discussed as we build online safety into our digital world. Questions like – how do we raise the bar on identifying offensive content, without censoring educational and sex positive content? How do we stop predators from making fake accounts while leaving vulnerable people free to access the internet discreetly? And what responsibilities do each of us hold in creating safer online spaces?


Exploring answers to these questions takes us from ‘grey areas’ to a glimpse of solutions that are as colourfully complex as the issue at hand. They paint a picture of what safe online spaces look like when they’re truly inclusive and detailed enough to respond to the experiences of women and survivors of trauma.


DIGITAL DANGERS & DIGITAL DEFENCES

Governments and digital platforms have been launching policies, campaigns and partnerships against predatory online behaviours like cyberstalking, digital grooming of children and adults, cyberflashing, revenge porn and sextortion. While it’s too soon to measure their effectiveness, it’s the perfect time to start talking about their visible strengths, their gaps and where there’s room for improvement.


This year (2022) the UK government introduced the Online Safety Bill, which is meant to obligate social media and online search platforms (like Facebook and Google) to protect users from illegal content; ‘legal but harmful’ content related to self-harm, harassment and eating disorders; and, where necessary, ‘child unfriendly’ content. When the Bill becomes fully operational around 2024, it will criminalise cyberflashing too. Whilst it’s the first of its kind on a global scale and a step forward in online safety, there are noticeable gaps when it comes to the specific safeguarding of women and girls.


In March, the domestic violence response charity Refuge released a statement on the opportunity UK lawmakers have missed by not specifically mentioning violence against women and girls (VAWG) in the Bill.


“Our Unsocial Spaces report published last year showed the scale of abuse being committed against women and girls online and how, for too long, online spaces have been unregulated and unsafe for women and girls, with online abuse continuing to go unchecked. This has a huge impact on the everyday lives of women and girls. As it stands Refuge has little confidence that this Bill will offer the necessary protections for women and girls, despite our clear calls, with over 5,000 of Refuge supporters calling for VAWG to be specifically referenced and a dedicated VAWG code of practice to be introduced, this is still missing from the face of the Bill.”


In 2010, the US government announced June as national Internet Safety Awareness Month to unite federal and state governments, industry and nonprofit organisations in promoting safe online practices. Since then, US lawmakers have also proposed the Kids Online Safety Act in 2022. But legal protections for vulnerable adults vary from state to state. Revenge porn is illegal in 48 states alongside DC and Guam, but fewer than 16 states have online anti-stalking laws, only about a dozen have laws against sextortion, and as of 2020 Texas was the sole state to criminalise cyberflashing. So again, we are faced with grey areas of concern.


Digital brands are also under legal and social pressures to take a more active role in making their platforms safe for all users.


Of 14,071 teens and young women surveyed across 20 countries, 65% reported experiencing the most internet abuse on Facebook, Instagram and WhatsApp.

LGBTQ+ and ethnic minority women and girls said they’d been targeted for their identities. And a similar Amnesty International poll identified these same issues on Twitter too.


Lyft and Uber have been criticised and faced multiple lawsuits for failing to protect both riders and drivers. Though they’ve started to introduce preventive protections for riders, they’re still being called out for falling short on post-assault response and on preventive protections for vulnerable drivers.


There do seem to be progressive shifts happening, as brands are drawing lessons from this criticism. Over the past few years, Meta, the parent company of Instagram, Facebook and WhatsApp, has launched a Women’s Safety Hub and partnered with experts to promote women's safety. TikTok has also led campaigns for safer, more positive uses of online spaces. In their #BetterMeBetterInternet campaign for Safer Internet Day 2019, they used in-app quizzes and challenges to get their users informed. And they’re building partnerships to extend these campaigns year-round, so that “Every Day is Safer Internet Day at TikTok”.


Since 2021, Bumble has also been raising awareness about cyberflashing and has teamed up with UN Women UK, the UN, Refuge, End Violence Against Women, Glitch and other organisations to criminalise the behaviour. They’ve been a major driving force behind the inclusion of language that criminalises cyberflashing in the UK’s pending Online Safety Bill.


WHAT ARE SOME OF THE ISSUES WOMEN ARE FACING ONLINE?


There are many opinions on what a safer internet could and should look like. Mental health, LGBT+ and ISVA digital content creator Annie Wade Smith states:

“whatever issues women have in the real world they still have online. In some ways it’s worse online because people can have fake accounts, speak behind screens, find access to information about you and approach you at any time of day…even in your own home.

Research has also shown that online abuse such as digital sexual exploitation can carry the same amount of trauma as an in-person, face-to-face experience. With developments in digital technology such as spy cameras, and sexual violence occurring in the metaverse, it’s quite a scary time for women and queers to be online.”


Another issue is that laws and digital platform policies tend to mark specific words and nude images as offensive. Some say this is too lax to cover all the types of content people use to express themselves online, while others point out that hard lines and over-specification block advocates and everyday digital citizens from sharing healthy information about sexual health, women’s bodies or health issues. When those laws or policies are violated, it’s often unclear exactly how involved digital platforms should be in bringing perpetrators to justice. And when we take a step back to look at prevention, at educating people before these harms take place, it’s not always clear who the ‘key educators’ and the ‘target students’ are.


How do we raise the bar on identifying offensive content without censoring educational content or shadow banning sex-positive content?

There are a lot of words you can’t use on certain platforms, but abusers may get around this by using an emoji to still cause harm or offence, which then turns it into a semantic debate. Monica Karpinski, founder of The Femedic, a health media platform for women and people with vaginas, points out that “when people do report these types of incidents, there’s also no clear chain of responsibility. So there’s often no outcome. It’s difficult, and society is still catching up with how quickly digital comms are developing.


"For a lot of people, there’s an understanding that nothing will happen if they file a report. So that might cause them to retreat from that space as it might cause them to feel censored.”

Yet as we expand the scope of digital content covered in monitoring policies, we need to be careful not to hyper-monitor or hypersexualise women who are simply trying to express themselves. Charlotte Williams, founder of SevenSix Agency, a London-based influencer marketing and talent management agency dedicated to diversity and inclusion, notes that


“there’s an issue of women and minority groups in general being able to post, promote themselves and exist online without having the support of platforms. It’s getting better but there’s no filter around what’s actually appropriate and inappropriate.


Everything around women’s bodies is considered to be inappropriate. So women can’t express themselves, can’t educate, or do whatever it is they want to do on and offline.”

Monica describes experiencing similar censorship when it comes to survivor, LGBTQ+ and social justice pages raising awareness about online abuse. “I haven’t posted on The Femedic Instagram for months because lots of accounts in the women’s health space were having their visibility reduced…


I know there are very clear rules about sexual content. But if something is educational or about relationships, a lot of people in my space, instead of writing the anatomical words vulva, vagina or penis, they’ll write p3nis so it won’t get picked up.


So it’s a whole discussion in my space and a lot of people wind up abandoning their accounts.” Annie confirms:


“what’s difficult about these platforms is that when queer people do report the abuse or harm, often the community guidelines (which are meant to be protecting us or keeping us safe) actually end up harming those who are a victim of abuse and we often find that trolls get away with it and our accounts get shadow banned or even worse… taken down”.

This issue has been echoed by activists whose posts were taken down as they tried to raise awareness for Missing and Murdered Indigenous Women and Girls and Black Lives Matter. We need algorithms and monitoring standards that understand the diversity of contexts behind these online discussions, and which understand emojis, GIFs and wider digital content as a core part of our daily language.


Perhaps digital platforms can start by hiring people from these communities into their leadership, tech and policy teams, which would also help to reshape the rules around how much personal information people should have to give in order to make social media accounts in the first place.


Who exactly is responsible for accountability, education and taking action?


Nicole Noble, a growth marketing specialist who has worked on building communities in the postnatal and women's health spaces (at brands like Joya and Jennis), explains one way to reduce the potential impact of online trolls: including expert moderators within online communities. “If you’re talking about something so sensitive that has potentially ruined a big chunk of your life and if somebody can say something which will trigger you, either intentionally or not, that can have a significant impact on your healing journey.” Nicole notes that in the online communities she has been building, adding an expert moderator as a safety checkpoint has helped in following up on a more one-to-one basis when certain red flags do appear.


Whilst the idea of having an online safety moderator is refreshing and a great concept for niche communities, how can larger platforms with wider audiences follow suit? When it comes to hate speech campaigns, some are calling on social media platforms to require user ID verification. Annie cautions, though, that “this could cause outing and prevent trans and gender non conforming people from accessing online spaces that they rely on if their ID doesn’t match up with who they live online as. So if online platforms go down this route, it needs to be inclusive and accessible rather than a one size fits all”.


The general consensus is that social media users should be held accountable for their harmful online behaviours. When it comes to investigating perpetrators and holding them to account, governments need laws that make it possible for social media companies to assist in investigations. These same laws should also give law enforcement clearer paths to investigation and conviction.


If we look at prevention, schools, sexual violence organisations and educational platforms need to include online safety in their information on sex, relationships and consent. Charlotte emphasises the dire need to teach people how to protect themselves online. She regularly advises content creators at her agency to “never shoot content that lets people know the layout of your home. Never shoot content on your street. Never showcase your door number. Make sure if you’re registering a company, don’t register at your home. Register to a PO box. It doesn’t seem like a big deal, but it is. And when you go to an event, don’t post live that you’re there unless you know there’s safety.”


WHAT WOULD MAKE WOMEN FEEL SAFER ONLINE?




It appears the solution doesn’t sit in any one silo. As Monica explains,


“It’s both a combination of changing the culture of how people behave online and the accountability for people who are being abusive on those platforms”.

The importance of education is also reinforced by Charlotte, “If more people were educated in what being online actually means that would make me feel safer. That education of ‘one action can change your life’ needs to start really early.”


Community involvement is also key. “Appropriate safeguarding online would make online spaces a lot safer. I think there’s ways of doing it where you can involve communities who have already experienced harm to say what might work for them,” mentions Annie.


“For me it’s not a functionality thing” adds Nicole, “it’s the way people relate and speak to each other on a day to day basis both offline and online. It’s not an easy solution but I think talking about it more, be it on socials, or within the actual apps, or even from a content marketing perspective…


"Spreading the message of ‘see the other person as a real person’ is important."

Ideally, as governments, platforms, brands and communities take more action in building better digital spaces, we’ll move farther away from this upside-down reality where the burden of using the internet safely, positively and freely falls on present and future survivors. Instead, we should build towards a future where people know to avoid being abusive, rather than to ‘avoid being abused’.


This article was co-written and researched by Sydney Amoakoh.
