The vast majority of community members are willing to participate according to the platform rules, even if they might not agree with every one of them. Sometimes people break rules or disagree, but their behavior can be quickly corrected, and they can learn from their mistakes. If users continue to break the rules, they may be given longer-term or even permanent bans from the community or platform. Most users will accept their ban, but a small fraction will not.
A 2018 study by Stanford University estimated that 1% of subreddit communities on Reddit initiate 74% of all conflict on the platform. The individuals behind this conflict rank extremely low in the agreeableness personality trait and have no interest in getting along with others. Only a trained clinical psychologist can diagnose a patient with a disorder, but a term commonly used prior to diagnosis is HCP (high-conflict person). Being high-conflict is not a diagnosis but a description of specific conflict behavior, marked by four primary characteristics: a preoccupation with blaming others, all-or-nothing thinking, unmanaged emotions, and extreme behavior.
If you fail to use tact in your moderation technique and communication approaches, you may find that you or your community become the target of a high-conflict person. They may spam your community, and you may delete their posts and ban their accounts, but more accounts can always be created. Discord uses IP bans to prevent users from creating new accounts, but VPNs can be used to circumvent these bans. If truly motivated, a harasser can create armies of bot accounts for mass-spamming, dox members of your community, or DDoS ISPs or platforms to create fear in your community. If a high-conflict person gains access to money, they can even pay somebody else to do the work for them.
Most moderators choose to simply wait out the harassment. Advanced harassment like this may go on for several days or even weeks, then stop abruptly as the individual turns their attention to something new in their life. In some cases, however, the harassment can go on for months, continuing to escalate in new ways that may put the lives of your team or community members in danger.
What can you do to protect your community from high-conflict people? What motivates a person to behave like this? This article will help to explain the motivations behind this persistent, destructive behavior and provide actionable steps to reduce or resolve the harassment.
A “nemesis” is an enemy or rival that pursues you relentlessly in the search for vengeance. A nemesis typically holds some degree of fascination for a protagonist, and vice versa. They’re an antagonist who’s bent on revenge, who doesn’t go away, and who seems to haunt the mind of the protagonist. They’ve moved past being an enemy to become something much more personal.
You might assume that a high-conflict person harassing your community is your nemesis, but this would be incorrect. You're not going out of your way to obstruct their behavior; your primary focus is to engage and moderate your community. If the harassment stopped, you would move on and forget about them. You resist their behavior only as long as it falls within your realm of influence.
In their mind, you have become their nemesis, and you must be punished for your insolence.
To them, you are the Architect of an oppressive Matrix, the President Snow of an authoritarian Hunger Games, the tyrannical Norsefire government in V for Vendetta. You or your community represent the opposite of what they believe. In one way or another, either by your direct actions or through your association with your community, you have wronged them and deserve to suffer for it. It's clear to them that you will never learn or understand what they see. You not only participate in creating the corrupt and unjust system that oppresses them and that they fight against, but as a moderator, you are the very linchpin that maintains it.
You may believe this sounds outlandish, and you would be correct. Most people don’t believe that the world is out to get them, and that they’ll be hunted down and persecuted for what they believe. These individuals have an overactive threat detection system that makes them believe that you or your community are actively plotting their downfall. They take your opposing stance as a direct challenge to their competence, authority and autonomy. They harass you and your community because they believe that you’re out to get them, or want to replace them and their way of life. The truth is, all you really want them to do is follow the rules and maintain a civil conversation.
Now that you have a better understanding of how somebody like this thinks, we'll discuss the strategies you can employ to solve this problem. The goal is NOT to get them to seek help or change their mind; we aren't attempting to fix people. Instead, our goal is to prevent or stop specific negative behaviors that keep happening, so that you can protect your community and focus your energy elsewhere.
The key to getting an individual like this to change their behavior is "tactical empathy". Tactical empathy is the use of emotional intelligence and empathy to influence another person's behavior and establish a deal or relationship. It is not agreeing with them; it is simply grasping and acknowledging their emotions and position. This recognition allows us to respond to our counterpart's position in a proactive and deliberate manner.
The premise behind tactical empathy is that no meaningful dialogue takes place when we are not trusted or we are perceived as a threat. In order to get someone to stop harassing your community, you need to shift yourself from being the villain of their story to being just another random person in their lives. You must work to shatter the persona that they have projected onto you and show that you are not the enemy working to destroy them. You’re just a mod trying to keep your community safe.
Demonstrating that you understand and respect them as an individual will disarm them and allow them to focus their energy elsewhere. It will not change their opinion, but at least their behavior will change.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
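To make these options concrete, below is a minimal sketch of the remove-unverified-role approach using the discord.py library. The "Unverified" role name and the "!verify" command are illustrative assumptions, not fixed conventions; any bot with auto-role and self-role features can achieve the same result.

```python
# Hedged sketch: auto-assign an "Unverified" role on join, and let users
# remove it with a command in a channel. Names and prefix are placeholders.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True  # needed to see member joins and manage roles

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.event
async def on_member_join(member: discord.Member):
    # Gate every new member behind the Unverified role.
    role = discord.utils.get(member.guild.roles, name="Unverified")
    if role is not None:
        await member.add_roles(role, reason="New member, pending verification")

@bot.command()
async def verify(ctx: commands.Context):
    # "Command in channel" method: the user removes their own gate role.
    role = discord.utils.get(ctx.guild.roles, name="Unverified")
    if role is not None and role in ctx.author.roles:
        await ctx.author.remove_roles(role, reason="Completed verification")

bot.run("YOUR_BOT_TOKEN")
```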
When somebody continues to harass or disrupt your community, they’re essentially holding your community hostage. If someone truly is holding your community “hostage”, they’re often doing so because they’re looking to open a dialogue for negotiation. Frequently, people take hostages because they need somebody to listen. They aren’t getting the attention that they believe they deserve, and attempt to cause as much disruption as possible in order to make their case.
You are a community moderator negotiating the peace of your community, not their lives, but these tactics can still apply.
Situation defusal can generally be broken into three primary processes, each designed to collect information and use it to dissuade the high-conflict person from believing that you're an enemy or threat. These processes are called the Accusations Audit, Mirroring to Understand, and Getting to "That's Right".
An accusations audit is where you focus not just on the things that they believe, but on the things that they believe you did wrong. An accusations audit is not based on logic; it's based on the unfiltered emotions of the other person.
It's important that you go through their early comments and messages to understand what prompted this behavior in the first place. This might have been banning them for breaking a rule (which is what you're supposed to do; this isn't to say you acted unreasonably) or not punishing another community member they got into an argument with. They might believe, "I feel like you didn't give me a chance to explain myself" or "I feel like you're discriminating against me".
Your understanding of their beliefs will inevitably be flawed and incomplete, but you must do your best to piece it together into a coherent argument on their behalf. If possible, learn more about the other communities they're a part of. Identify whether they're harassing any other communities, and the reasons for doing so. Are there any commonalities of note?
Once you believe you've figured out why they're upset with you or your community, mirror their language to verify it. At this point, opening a dialogue might be incredibly difficult if they're regularly using throwaway accounts. Chances are, though, they have a primary account they continue to use in other communities, which can help greatly with starting your dialogue. At this stage, you're still working to collect information about what they believe, directly from the source. Examples of prompts you can use to verify their position include: "It seems like you believe that I'm being unfair because I didn't give you a chance to explain yourself." or "If I understand correctly, you believe I've been discriminating against you instead of taking your opinion seriously, is that right?"
Chances are, the responses you receive will be filled with aggression, profanity, and insults. You must ignore all of this and continue working to understand their position and the events that resulted in them targeting your community. Negotiations like this are difficult in voice-to-voice communication, and nearly impossible via instant or private messaging. They will be incredibly resistant at first, perhaps thinking that you're attempting to trick them into admitting their guilt or ignorance.
When you get them talking to you, mirror that language to get them to elaborate further on their beliefs. An example of dialogue might go something like the following:
Spammer: “It’s bullshit that mods ban strawberry jam lovers because the blueberry jam lovers are afraid of being exposed for who they really are.”
Mod: “Afraid of being exposed?”
Spammer: “Yeah, the blueberry jam lovers are secretly running the world and plotting against anyone who doesn’t believe in the same jam flavor preferences as they do.”
Realistically, blueberry jam lovers are not actually running the world or plotting anything nefarious, but in the mind of the spammer this is undeniably true. And while this example was intentionally mild, you can infer more severe types of conversations that would follow a similar format.
Regardless, as you dig further into what they believe, you’ll notice that the rabbit hole will go very deep and be filled with logical fallacies and obviously disprovable biases that make no sense. Remember that the truth or reality behind what they believe is completely irrelevant, and attempts to correct them will undermine your goals. Your job is to help them explain their beliefs to you to the best of their ability, and for you to understand their position to the best of your ability. Once you believe you’ve collected enough information, you can move to the final step, getting to “That’s Right.”
Once you believe you've completely understood their position and what they believe, repeat their entire position back to them. Demonstrate your understanding by summarizing it concisely and accurately, regardless of how much you disagree with it. Don't focus on their behavior or the actions that resulted in them getting banned; focus exclusively on the ideology that drove their behavior. Do this until you're able to get them to say "Yes, that's right" at least three times, asking if there's anything you missed in your summary. If you did miss anything, repeat the entire position again with the extra information included. When reiterating their points, be very careful about restating things that are not true. Do your best to remove personal bias from the statements and focus them back on "absolute truths."
Their actions are about trying to make a point, but what you're doing is getting them to make their point without taking action, because you have heard what they are trying to say. If you put enough effort into doing this correctly (and it doesn't need to be perfect), they will know that you finally understand where they're coming from, that they've been heard by you, and that their opinion has been validated. By demonstrating you understand their position, you go from being part of the problem to being a real person. They might not like you, but they will at least (if begrudgingly) respect you.
When you successfully reach this stage of the discussion, it's essential that you be careful with your choice of words. There's a good chance that the spammer will leave your community alone now that they know their opinion has been recognized. At the very least, you should see an immediate reduction in the number of times they attempt to cause harm.
If they do continue to harass you or your community, it’s possible that you failed to address the primary reason that they’re upset. Open dialogue with them again and follow the steps above from the beginning, or check to see that you haven’t fallen into a common pitfall or mistake.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further, you should take a look at the full list of limitations provided in Discord's developer documentation.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
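As a concrete illustration, here is a hedged sketch of an embed written as the JSON-style dictionary that visualizer tools expect, then converted for sending with the discord.py library. Every field value below is an invented example.

```python
# Hedged sketch: an embed as a JSON-style dict, then sent via discord.py.
# All values below are examples, not required formats.
import discord

embed_json = {
    "title": "Community Update",
    "description": "Markdown like **bold** and *italics* works here.",
    "color": 0x5865F2,  # colors are integers, not CSS strings
    "fields": [
        {"name": "Field 1", "value": "Inline field", "inline": True},
        {"name": "Field 2", "value": "Another inline field", "inline": True},
    ],
    "footer": {"text": "Posted by the mod team"},
}

embed = discord.Embed.from_dict(embed_json)
# await channel.send(embed=embed)  # inside an async event handler or command
```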
Below are some of the most common mistakes people make during these negotiations:
When using tactical empathy, remember that the purpose of the exercise is to bring their beliefs to the conscious mind and demonstrate understanding. If you attempt to tell them what they should believe, you may instead get a "you're right" and fail to see any change. The difference is subtle but important. Make sure that the other side actually feels heard, and that you've fully understood their position.
As a reminder: do not attempt to correct or modify their opinion. Remember the purpose of this process. It is not to change their position or opinion; it is only to mirror their opinion so that they stop identifying you and your community as a threat.
The methodology outlined in this article is designed for real-life conversations, especially over the phone. It's unlikely that you'll be able to get the spammer on an audio call, so it's essential to be patient with the process and careful with your wording. Formal grammar, like strict punctuation, can make a sentence feel more serious or threatening. Use casual phrasing and make an occasional spelling mistake to show you're human. If you're uncertain about tone, read the sentence out loud while sounding as angry as you can, and adjust accordingly.
The process outlined here can be easily undermined by others who aren’t involved in the process. If you’re working to negotiate with a spammer but another moderator is threatening them in a different conversation, you won’t see any changes in their behavior. Communicate with your team on the strategy you plan to use, and remember to ask for emotional support or step away if it becomes too taxing.
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
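For instance, a bot can reuse a logging webhook to post under a custom name and avatar, which is one common way the two tools are combined. Below is a hedged sketch using discord.py; the webhook URL and avatar URL are placeholders.

```python
# Hedged sketch: a bot posting a log line through a webhook so the message
# carries a custom name and avatar. URL and avatar_url are placeholders.
import aiohttp
import discord

async def send_mod_log(text: str) -> None:
    async with aiohttp.ClientSession() as session:
        webhook = discord.Webhook.from_url(
            "https://discord.com/api/webhooks/...",  # your logging webhook
            session=session,
        )
        await webhook.send(
            text,
            username="Mod Log",                         # overrides webhook name
            avatar_url="https://example.com/icon.png",  # overrides webhook avatar
        )
```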
*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-Spam
Anti-spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in; the common forms are listed in the table above. The most common forms of spam are also very typical of raids, those being Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following (Mentions, Links, Invites, Emoji, or Newline Text) are spammed repeatedly in one message or across several messages, they will trigger most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
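The X-in-Y logic itself is simple enough to sketch in a few lines of plain Python. The thresholds below (10 links in 5 seconds) are examples for illustration, not recommendations.

```python
# Hedged sketch of "X in Y" detection: flag a user who exceeds X links
# within a Y-second sliding window, however the links arrive.
import time
from collections import defaultdict, deque

X_LINKS, Y_SECONDS = 10, 5
recent: dict[int, deque] = defaultdict(deque)  # user_id -> link timestamps

def record_links(user_id: int, links_in_message: int) -> bool:
    """Return True if this user just crossed the X-in-Y threshold."""
    now = time.monotonic()
    window = recent[user_id]
    window.extend([now] * links_in_message)  # 10 links in 1 message count 10x
    while window and now - window[0] > Y_SECONDS:
        window.popleft()                     # discard timestamps outside Y
    return len(window) > X_LINKS
```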
Sometimes, spam may happen too quickly for a bot to catch up. There are rate limits in place to stop bots from harming servers, and these can prevent the deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending further messages, whether via a mute, kick, or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
***Setting a catch-all filter with Carl will prevent link-specific spam detection
A text filter is integral to a well-moderated server. It's strongly recommended that you use a bot that can filter text based on a blacklist. A banned-words filter can catch links and invites, provided http:// and https:// are added to the word blacklist (to catch all links) or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A banned-words filter is integral to running a public server, especially a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for adhering to the additional guidelines attached to those programs. Before configuring a filter, it's a good idea to work out what is and isn't okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned-words filters with an explicit blacklist often won't account for context, so it's important that a robust filter also contains whitelisting options. For example, if you add the slur 'nig' to your filter and someone mentions the country 'Nigeria', they could get in trouble for using an otherwise acceptable word.
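The blacklist-plus-whitelist idea can be sketched in a few lines. The word lists below are placeholders, and a production filter would be considerably more robust (normalizing Unicode, catching deliberate misspellings, and so on).

```python
# Hedged sketch of a banned-words filter with whitelisting, so innocent
# words containing a banned substring (the "Nigeria" case) are not flagged.
import re

BLACKLIST = {"badword", "slur"}        # placeholder banned substrings
WHITELIST = {"nigeria", "scunthorpe"}  # innocent words to exempt

def violates_filter(content: str) -> bool:
    for token in re.findall(r"[a-z']+", content.lower()):
        if token in WHITELIST:
            continue  # exempted word, skip substring checks
        if any(banned in token for banned in BLACKLIST):
            return True
    return False
```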
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language. In exceptions like this, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without an intricate setup. However, it's still recommended that you configure a custom filter to ensure that specific slurs, words, etc. that break the rules of your server aren't being said.
Invite filtering is equally important in large or public servers, where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining another server, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server white/blacklisting, letting you control which servers are okay to share invites to and which aren't. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
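A minimal sketch of invite detection with a whitelist follows. Real bots typically resolve each invite through the API so they can whitelist by destination server rather than by invite code; this simplified version whitelists codes directly, and the codes themselves are placeholders.

```python
# Hedged sketch: detect Discord invites in a message and report any that
# aren't on the partner whitelist. Invite codes below are placeholders.
import re

INVITE_RE = re.compile(r"(?:discord\.gg|discord\.com/invite)/([\w-]+)", re.I)
PARTNER_CODES = {"partnerserver1", "partnerserver2"}

def unsanctioned_invites(content: str) -> list[str]:
    return [code for code in INVITE_RE.findall(content)
            if code not in PARTNER_CODES]
```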
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
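Those triggers are straightforward to express in code. Below is a hedged discord.py sketch, where the seven-day age threshold is purely illustrative.

```python
# Hedged sketch of raid-user heuristics: a young account with no custom
# profile picture. The 7-day threshold is an example, not a standard.
import datetime
import discord

MAX_ACCOUNT_AGE = datetime.timedelta(days=7)

def looks_like_raid_account(member: discord.Member) -> bool:
    account_age = discord.utils.utcnow() - member.created_at
    no_avatar = member.avatar is None  # still on the default avatar
    return account_age < MAX_ACCOUNT_AGE and no_avatar
```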
Raid prevention stops a raid from happening, triggered either by raid detection or raid-user detection. These countermeasures stop participants of a raid from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message-removal commands used to clean up channels affected by spam as part of a raid, often aliased to 'Purge' or 'Prune'.

It should be noted that Discord features built-in raid and user-bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave is extremely small (such as between 0 and 5 seconds). However, you shouldn't rely solely on these systems if you run a large or public server.
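If you log joins and leaves yourself, that inference can be automated. A hedged discord.py sketch, with the 5-second threshold taken from the rule of thumb above:

```python
# Hedged sketch: flag accounts that leave within seconds of joining,
# which usually indicates Discord's own anti-bot systems removed them.
import datetime
import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="!", intents=discord.Intents.all())

@bot.event
async def on_member_remove(member: discord.Member):
    if member.joined_at is None:
        return  # join time unavailable
    lifetime = discord.utils.utcnow() - member.joined_at
    if lifetime < datetime.timedelta(seconds=5):
        print(f"{member} left {lifetime.total_seconds():.1f}s after joining; "
              "likely removed by Discord's bot detection")
```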
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavoury usernames can simply be nicknamed in order to hide their actual username.
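The nickname approach is also easy to automate. A hedged sketch, with a placeholder pattern list and replacement nickname:

```python
# Hedged sketch: rename members whose username matches a banned pattern.
# The pattern and the replacement nickname are placeholders.
import re
import discord
from discord.ext import commands

NAME_BLACKLIST = re.compile(r"badword|slur", re.IGNORECASE)
bot = commands.Bot(command_prefix="!", intents=discord.Intents.all())

@bot.event
async def on_member_join(member: discord.Member):
    if NAME_BLACKLIST.search(member.name):
        await member.edit(nick="Moderated Nickname", reason="Username filter")
```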
There will be some of you who believe that after getting this far, you may be on the path to rehabilitating a person like this. The mistake is believing that you are further along than you really are, or that you’re qualified to help someone struggling to control their emotions. The truth is, getting to “that’s right” is only 1% of the process.
Even if you were a clinical psychologist, you wouldn't be getting paid for this work. Attempting to provide support via text chat will have diminishing returns, and attempting to show somebody like this the "error of their ways" may result in all of the work you have done being reversed.
Instead, you must focus on the people who want and need your help: the people in your community. Empower the people who are truly deserving of your time and energy. At the end of the day, you're a human and a moderator. Your primary focus in this realm is to make sure your community is safe and stays safe. If you've managed to get the persistent spammer to stop, then you've accomplished what you aimed to do.
One additional component not included in the table is the effect of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join the conversation in your server, but in exchange they help protect your community from trolls, spam bots, those unable to read your server's language, and other low-intent users. This can make administration and moderation of your server much easier. You'll also see that the percentage of people who visit more than 3 channels increases as they explore the server and follow verification instructions, and the percentage of users who talk may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
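A custom solution doesn't need to be elaborate. Below is a hedged sketch of the minimal bookkeeping involved, using in-memory counters that a real bot would persist to a database.

```python
# Hedged sketch: measure verification pass rate by counting joins and
# successful verifications. A real bot would persist these counters.
stats = {"joined": 0, "verified": 0}

def on_join() -> None:
    stats["joined"] += 1    # call from your member-join handler

def on_verified() -> None:
    stats["verified"] += 1  # call when verification succeeds

def pass_rate() -> float:
    return stats["verified"] / stats["joined"] if stats["joined"] else 0.0
```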