Hashtags and Harm:
NAVIGATING SAFETY, SPEECH, AND LGBTQIA RIGHTS ONLINE
- Atilla Tiriyaki
- Average Reading Time: 14 minutes
In an era where digital platforms influence public debate and opinion, social media has become both a lifeline and a battleground, especially for LGBTQIA+ and other minority groups. As online spaces amplify voices and support movements, they also expose users to harassment, misinformation, targeted attacks, and increasing online and real-life risks. As discussions about free speech and accountability grow more complex, we face a crucial question: how can we protect the right to express oneself without compromising safety and the fundamental rights of others?
DISCLAIMER
All published articles are intended for an audience aged 18 years and over and have been written by members of the general public. Many will likely not be journalists nor be affiliated with any professional bodies associated with members of the media. The articles will likely be based on the authors' own opinions, views, and experiences.
Today, for many people, social media and internet access are vital parts of modern life. On average, adults spend at least a quarter of their waking hours online, with platforms and services enabling quick, easy connections across the world within minutes. They also allow users to share opinions, images, news, and more with diverse global audiences, something that was previously difficult or impossible.
The increasing popularity and widespread adoption of online services have brought about unique and specific challenges. With many providers offering disposable email addresses and verified accounts still far from universal, people often create two separate identities: one for face-to-face interactions and one online. In 2004, psychologist John Suler described this phenomenon as the online disinhibition effect, in which individuals loosen or abandon social restrictions, inhibitions, and norms when online.
People often believe they have the freedom to share their unfiltered, unedited thoughts and opinions, many of which are shaped by their fears, hang-ups, and shortcomings and reinforced by any online attention they receive. Their aim is to genuinely express how they feel, effectively saying previously unspoken thoughts aloud. Many of these individuals feel they enjoy a certain level of anonymity, combined with little concern for the impact their words might have and an obliviousness to, or lack of accountability for, any damage they might cause.
Though this may make it all seem negative, there are many benefits to going online and connecting; still, there is no doubt about the impact some of the negatives are having on individuals and wider society.
Online spaces have been evolving and changing over the years, as have the psychological effects of words and their influence on us. Regardless of a person’s level of notoriety, fame, or number of online followers, we are all human and possess deep-rooted desires and needs. As humans, though to varying extents, nearly all of us crave a sense of belonging, community, and validation from others. Whether directly expressed or unsolicited, praise, compliments, and recognition of who we are or what we share with the world undoubtedly have a positive impact on how we see ourselves and on our overall mental health. Conversely, negative, threatening, and abusive language can have the opposite effect.
Although those making negative comments may not fully realise the impact their words could have, they often rationalise their actions by saying "it is just online" or "I was merely venting and didn't mean everything I said". In reality, when you post such comments, you are simply hoping that the people reading them have the mental fortitude and resilience to dismiss them and not let the words affect them.
What people say and share online often reveals more about them than the words alone suggest. Someone feeling lonely or unloved might leave a nasty comment on a young influencer's post sharing a happy moment, or someone unhappy with how they look or feel might post a disgusting rant about a female celebrity's appearance.
Today, negative comments and abusive language seem to have become normalised and are often seen as a necessary evil for anyone wishing to put themselves out there online. Many individuals and celebrities with large followings feel compelled to rationalise online comments and reviews, even hiring people to filter, report, and dismiss negative feedback: employing others who are not personally affected by what is being said in order to protect their own mental health.
Before 2025, many social media platforms had introduced content moderation and third-party fact-checking to fight misinformation and hate speech. Many of those safeguards have since been scaled back or replaced with AI and other methods, justified by a shift towards community notes, the promotion of so-called free expression, and the aim of reducing censorship.
Content consumption varies by age group, with younger users dominating short-form video platforms such as TikTok and Snapchat, while people aged 25 and over tend to use platforms like Facebook, Instagram, YouTube, and LinkedIn. The rise of free and unfiltered speech, reduced content moderation, and the widespread availability of ways to connect and share, especially when many prefer to remain anonymous, have all left communities to self-regulate their own spaces. This, in turn, has fuelled misinformation, a decline in trust, and scepticism about what people see and hear online.
Governments are gradually enacting laws to restrict the age at which people can access social media, with restrictions applied to minors and other vulnerable groups as more adult users join community-managed platforms like Reddit and YouTube. Some of the most popular platforms now have very different reputations than they did five or ten years ago. Bullying has also changed considerably and has become far more malicious: name-calling and mockery have moved from the schoolyard to a 24/7 attack. Practices like doxing, where individuals’ personal details such as names and addresses are shared publicly to intimidate them, are increasing. Mockery, serious threats, and harassment appear to have risen, with the perception that there is little or no accountability. Nor is it solely directed at individuals; businesses are becoming targets as well, facing negative review campaigns, the spread of false information, and even ransom demands for a reprieve.
Many social media platforms use categorisation to group and amplify similar news and content through hashtags: typically a word or short phrase placed after the hash (pound) symbol. Hashtags can help raise awareness and visibility for advocacy causes and important news, but they can also be used to harass and spread hate. They illustrate the dual nature of social media: a great way to connect and share, yet also, especially when unregulated, capable of causing significant harm.
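For readers curious about the mechanics, the short Python sketch below shows one simple way a post’s hashtags might be picked out of its text. It is purely illustrative: the pattern and the example post are assumptions made for this article, not how any particular platform actually processes tags.

```python
import re

# Illustrative only: a hashtag treated as the hash (pound) symbol followed by a word or phrase.
# Real platforms apply their own, more involved rules (Unicode letters, length limits, and so on).
HASHTAG_PATTERN = re.compile(r"#(\w+)")


def extract_hashtags(post_text: str) -> list[str]:
    """Return the hashtags found in a post, lower-cased so #Pride and #pride are grouped together."""
    return [tag.lower() for tag in HASHTAG_PATTERN.findall(post_text)]


if __name__ == "__main__":
    post = "Marching this weekend! #Pride #LoveIsLove #community"
    print(extract_hashtags(post))  # ['pride', 'loveislove', 'community']
```

Grouping posts under the same lower-cased tag is what lets a platform amplify a cause, and, equally, what lets bad actors flood that same tag with abuse.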
Especially when combined with anonymity, many believe free speech grants them the right to say or do whatever they want without facing consequences, and that it protects them even when they intentionally cause direct harm to others, incite violence, or violate others’ rights. They fail to realise that it cannot and should not be used to justify hate speech, defamation, or obscene content. Simply put, you cannot say whatever you like without consequences; free speech is not universal protection, and social media companies, as privately owned businesses, have no legal duty or obligation to host your so-called free, hateful, and unfiltered speech.
Content moderation is often framed as a challenge for many social media platforms because it involves human effort and is time-consuming given the sheer volume of content. What is often overlooked, however, is the revenue generated from advertising on these services, which generally far exceeds the cost of maintaining safety. Many of these providers are not as powerless to act as they often claim: there are many examples of platforms blocking, removing, and censoring controversial speech when politically motivated, showing that action is possible, yet a decision is often made to do little or nothing. While higher perceived user numbers, especially when there are limited safeguards or consequences for users, may attract more advertisers, this can also lead to brand ads appearing alongside harmful content and images. Some social media companies have implemented measures, but much more could still be achieved.
The next key factor is understanding who you are speaking to, where they are from, and the motivation behind their words. Bots are automated software that perform tasks such as sharing, posting, commenting, and following users without human input. Often designed to create fake trends, flood hashtags, and spread misinformation and fake news, they are linked to profiles and accounts that appear genuine.
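You cannot always tell a bot from a person, but a few rough warning signs can help. The Python sketch below is a hypothetical illustration of that kind of checklist: the fields and thresholds are assumptions made up for this article, not signals any platform actually publishes, and on their own they prove nothing.

```python
from dataclasses import dataclass


@dataclass
class AccountProfile:
    """Hypothetical details an ordinary user might note about an account; no platform data or API is assumed."""
    account_age_days: int
    posts_per_day: float
    followers: int
    following: int
    has_default_avatar: bool


def looks_bot_like(profile: AccountProfile) -> bool:
    """Count rough warning signs: very new, posting at inhuman rates, faceless,
    and following far more accounts than follow it back. Three or more signs
    is treated here as 'suspicious', never as proof."""
    signals = 0
    if profile.account_age_days < 30:
        signals += 1
    if profile.posts_per_day > 100:
        signals += 1
    if profile.has_default_avatar:
        signals += 1
    if profile.following > 10 * max(profile.followers, 1):
        signals += 1
    return signals >= 3


# Example: a week-old account posting 300 times a day with no photo would be flagged.
suspect = AccountProfile(account_age_days=7, posts_per_day=300, followers=12,
                         following=4000, has_default_avatar=True)
print(looks_bot_like(suspect))  # True
```

The point is not the exact numbers but the habit of pausing to ask whether the account shouting at you, or agreeing with you, behaves like a person at all.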
In November 2025, the social media platform X (formerly Twitter) introduced a feature called “About this Account” that displays the country or region in which a user and their profile are based. In effect, the feature revealed that many politically focused accounts, linked to movements such as America First, pro-Trump, and pro-Democrat causes, were posting popular content from countries such as India, Nigeria, Kenya, and parts of Eastern Europe. Why? Two possible reasons: high engagement generates revenue across various platforms, and such accounts can be used to undermine governments and societies worldwide.
Why does all this matter, and how can you protect yourself? It matters because malicious actors and bots may be shaping your perspective, so you need to think critically and investigate further before accepting online claims as facts. If you base your identity or self-worth on responses from online communities, people might be targeting you not because they genuinely know or care about you, but because they see a financial opportunity and advantage. Always remember that you could be forming your views and opinions on misinformation and fake news from individuals who haven’t experienced the issue firsthand or are not even based in your country or region.
To protect yourself online, always report and flag fake and bot accounts. Report any hate speech and threats not only to social media platforms but also to the authorities in your country. Limit what you share online, especially with people you do not know, and avoid revealing personal information such as your full name, date of birth, and locations. This isn’t about staying anonymous; it is about sharing personal details only with people you trust, when you understand how they will use them.
Using nicknames, regional references instead of city names in location details, and profile images that subtly hide your identity not only provides protection but also allows you to control when and what you share. Making your profile private means only those who know you or those you invite can see the full, unedited version of yourself.
It’s not about hiding or deceiving; it’s about protecting and sharing your information only with trusted, genuine people, not with malicious and bad-faith actors.
The key point to remember, and the action to take away, is to learn more about the platforms you use. A few online searches are enough to reveal how these platforms collect your data and content. Small adjustments to permissions can often restrict or prevent your information from being used without your knowledge or understanding of its purpose. It’s not that these companies act in secrecy; rather, people often do not fully understand how these platforms work or how to control and limit the use of their data.
Transparency and moderation are both essential. While free speech and freedom of expression matter, protecting them should not come at the cost of allowing unsolicited or irrelevant comments in spaces not meant for them. If someone holds antisemitic views, they should not be able to post hate speech under a hashtag like #JewishPride. This is not about censorship, but about keeping views out of spaces that never welcomed or invited them. Hate speech is always wrong, and it becomes even more damaging when it appears in spaces where such views have no place. Healthy debate, honest engagement free from malicious intent, and inquisitive dialogue are always encouraged; hate and the desire to cause harm are not.
The availability and adoption of artificial intelligence (AI) present a unique opportunity, not only to moderate user behaviour but also to analyse patterns of conduct. For example, if someone who frequently views homophobic material were to one day uncharacteristically visit a supportive LGBTQIA+ group, the AI could flag any comments they make for moderator review, ensuring they are not shared publicly and limiting their ability to downvote or comment-bomb individuals within that group. This does not restrict free speech; it aims to prevent unsolicited speech in spaces it is not intended for.
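To make the idea concrete, here is a minimal Python sketch of that flow, written purely as an illustration of the approach described above; the behaviour signals, thresholds, and moderation actions are assumptions made for this article, not a description of any real platform’s system.

```python
from dataclasses import dataclass


@dataclass
class UserHistory:
    """Hypothetical behavioural signals; a real system would derive these very differently."""
    hostile_content_views: int   # e.g. repeated engagement with homophobic material
    prior_posts_in_group: int    # how often this user normally participates in the group


@dataclass
class ModerationDecision:
    hold_for_review: bool   # send to a human moderator instead of publishing immediately
    allow_votes: bool       # whether the user may downvote or pile onto others in this space
    reason: str = ""


def screen_comment(history: UserHistory, group_is_supportive_space: bool) -> ModerationDecision:
    """Sketch of the flow described above: an uncharacteristic visit by someone with a
    history of viewing hostile content is held for human review rather than blocked outright."""
    uncharacteristic_visit = group_is_supportive_space and history.prior_posts_in_group == 0
    if uncharacteristic_visit and history.hostile_content_views >= 10:  # threshold is an assumption
        return ModerationDecision(
            hold_for_review=True,
            allow_votes=False,
            reason="uncharacteristic visit by an account with a history of hostile engagement",
        )
    return ModerationDecision(hold_for_review=False, allow_votes=True)
```

Nothing here is silently deleted; the comment simply waits for a human moderator, which keeps the emphasis on protecting the space rather than punishing the speaker.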
All users should be familiar with and regularly use the blocking, reporting, and request-for-content-moderation features, not as tools for moral outrage against others, but for their own benefit and in their own spaces. Visiting a far-right group simply to downvote and report is no different from a far-right person doing the same in your space; an eye-for-an-eye approach does not justify inciting and escalating hate speech. Vigilante behaviour, doxing, threats, and hateful practices should be reported regardless of the group or space, not only because they are wrong but also because they are crimes in many countries.
The main point is that these tools should be used when the content directly affects you, not simply to express moral outrage at others. If you are threatened or doxed, use them while also reporting the incident to the authorities where a crime has been committed. Ignoring the problem or pretending it didn’t happen won’t resolve or stop it, and could harm you in the long run.
The simple truth is that social media companies, traditional media, and governments worldwide have social, and in many countries legal, obligations to ensure online safety and protection for all users. Freedom of speech and expression should be safeguarded, but hate speech and threatening behaviour should carry real-life consequences. Maliciously inciting sedition and deliberately spreading misinformation, especially in countries other than your own, should not only prompt local governments and courts to act, but also lead to the individuals behind these actions being demonetised, blocked, and prosecuted.
When you compare in-person behaviours and their consequences in the real world, why can’t the same be applied online? If a person were to walk down a street shouting profanities directly at people, the police would likely be notified and involved, arresting the individual or getting them the help they clearly need. If a company disregards the safety rules and laws of a country, it will probably first receive a warning, then be fined, and eventually be barred from trading. If a foreign country acts aggressively or permits illegal activities, sanctions would be imposed on it, ultimately harming its international trade and overall economy. With all this in mind, why are the same behaviours and actions online met with limited or no repercussions for those behind them, even when crimes are being committed?
For a long time, governments and international law have seemed uncertain and often hesitant to regulate and establish meaningful rules and restrictions in online spaces. Part of the reason is that many platform companies possess vast wealth, giving them the means, opportunity, and capacity to lobby governments and politicians; part of it is a lack of understanding of how the technology functions. Focusing on behaviour, rather than the underlying methods, can help governments worldwide create clear, transparent legislation covering both online and in-person activities.
International influence, and companies that hold significant political power in the countries where they operate, are also factors; however, two key questions are often overlooked: how much money does the company make in my country, and would it accept this approach if the roles were reversed? If the answers are ‘a lot’ and ‘no’, then something needs to be done.
Social media platforms and their users are not inherently harmful, nor are they the root of many societal problems and behaviours we observe; rather, we are witnessing a shift in how we structure and present ourselves to the world. These platforms have connected people globally, made both good and bad information more accessible and widespread, and, unfortunately, have also exposed societal issues. They reveal that not everyone can self-regulate effectively, and that the mental divide between online and real life needs to be bridged, with people recognising and accepting that there is only one true version of themselves. It is essential to accept that some spaces should be safe and protected, rather than being exploited as opportunities for individuals to promote unsolicited, unwanted, and irrelevant views and opinions.
Connect, go online, engage, and enjoy yourself. Call out bad behaviour, hold people accountable for their actions, and remember that social media platforms are not the problem; they are part of the solution in creating better online communities where you can connect with like-minded people without fear.
Stay safe, and until next time.