Moderator: the title given to the people responsible for managing your chat and community. Answering the call to protect your community and its members at all costs, they are integral to any successful Discord server. But it’s important to remember that moderators have to be safe online, just like the users they fight to protect. The first step is to secure your account by using a strong password and setting up backup login methods, both of which you can learn more about in this article on securing your Discord account.
In this article, we’ll explain how moderators can do their job safely and securely, cover how to handle links, scams, and possible doxxing attempts, and introduce some general best practices to keep you and your account safe.
Spam has historically plagued every online platform: it is a simple way to troll, and it is easy to change and adapt to suit the spammer’s needs. Discord has begun to implement progressive changes to how it detects spam on the platform, updating, tweaking, and fine-tuning its anti-spam systems daily to catch more and more spammers and spam content.
Firstly, we’ve implemented the Malicious Link Blocker, a system that warns users before they visit a known malicious site, similar to the warnings Chrome shows for certain sites. It is meant to minimize exposure to spam content, but it doesn’t catch everything. Keep in mind that just because a link does not trigger the Malicious Link Blocker doesn’t mean the link is safe! Always be careful when clicking suspicious-looking links from unknown users.
Discord also introduced another anti-spam feature that auto-detects and hides content from likely spammers in servers, reducing outright spam. These messages are automatically hidden by the Server Channel Spam Redaction system.
When you take on the title of community moderator, you become a front-facing member of your server. As a result, you have to stay up to date on the newest methods of moderating safely and securely, not only to keep you and your team safe but also to help educate your community. This includes knowing how to spot and handle malicious links, files, scams, and phishing attempts. It also helps to know how to deal with threats to your community members and doxxing concerns.
Now we’ll explore how to safeguard against these types of risks.
As a moderator, you might come across malicious content shared in your server in the form of links and files. Malicious links and files come in all shapes and sizes. Some try to get ahold of your account credentials, such as your login information or account token, while others try to get you to download malware that can harm your computer.
If you do not recognize the domain, try doing a Google search to find out more about the link before clicking on it. Some links imitate real websites to trick users into thinking they are safe to click when, in fact, they are malicious. Be sure to double-check the spelling of common domains so that you aren’t tricked into thinking a link goes to YouTube when it actually goes to “YouTbue”, for example. A more subtle way you might encounter malicious links is through embedded messages from bots or webhooks. Unlike normal users, bots and webhooks can hyperlink text in a message. Be sure to double-check where a link leads before clicking on it.
For example, you might encounter an embedded message that displays https://discord.com/moderation but is hyperlinked to another site. Such a link doesn’t actually go to the Discord Moderator Academy usually found at that domain; it leads to another page entirely. This is one way attackers can mask their malicious URLs.
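To illustrate the double-checking described above, here is a minimal Python sketch that flags domains sitting suspiciously close to a trusted domain. The trusted list and the distance threshold are placeholder assumptions for the example, not a Discord feature.

```python
# Sketch: flag domains that look like a trusted domain but are slightly
# misspelled (e.g. "youtbue.com" vs "youtube.com"). The trusted list and
# the threshold of 2 edits are illustrative assumptions.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"youtube.com", "discord.com", "steamcommunity.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(url: str) -> bool:
    """True if the domain is close to, but not exactly, a trusted domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(domain, t) <= 2 for t in TRUSTED_DOMAINS)
```

A check like this can be a quick first pass, but it is no substitute for reading the URL yourself.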
Another thing to keep an eye out for when looking for malicious links is the usage of URL shorteners that might hide a malicious domain name. For example, using a URL Shortener website, such as Bitly, will let you shorten a link like https://discord.com/moderation/4405266071063-100:-An-Intro-to-the-DMA into https://bit.ly/3kGAZKz.
Although URL shorteners are a convenient way to make links more compact and easier to read, they also hide the final destination of the URL, which could be a malicious website. When dealing with these types of shortened URLs, you should first prioritize determining where it leads. You can use a URL expander such as URLScan or Redirect-Checker to do this. Once you have a better idea of what is on the other side of the URL, you can decide whether it is safe or not to visit the site and remove and/or report the message if need be.
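Building on the advice above, here is a small Python sketch that flags links using well-known shortener domains so you know to expand them before clicking. The shortener list is a small illustrative sample, not an exhaustive one.

```python
# Sketch: spot links that use a known URL shortener so you can expand them
# with a sandboxed tool like URLScan before clicking. The list below is a
# small illustrative sample of common shorteners.
from urllib.parse import urlparse

KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "is.gd"}

def is_shortened(url: str) -> bool:
    """True if the link's domain belongs to a known URL shortener."""
    return urlparse(url).netloc.lower() in KNOWN_SHORTENERS

# To see where a shortened link actually leads, you could follow its
# redirects (urllib does this by default) and read the final URL:
#
#   import urllib.request
#   final_url = urllib.request.urlopen(url).geturl()
#
# In practice, a sandboxed service such as URLScan is safer than fetching
# the link from your own machine.
```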
As a rule of thumb, it is never a good idea to follow links sent by strangers! If you still are unsure about the destination of a link, you can use sites like VirusTotal to check for any potential malware or illicit content.
You should always exercise caution when downloading files from anyone on Discord, whether it’s a stranger or someone you think you know. One of the most dangerous file types is the “.exe” file. When run, these files execute code on your computer, which can leak information to the sender or have other serious consequences.
In some cases, downloading a malicious file won’t affect your computer until the file or program is run or opened. This is important to keep in mind: because “nothing bad happened” on download, you can get a false sense of security that the file is safe, right up until you run whatever you downloaded!
If you do decide to trust the download, take the extra precaution of running it through VirusTotal or a similar website to search for potential dangers. It’s also a good idea to scan these files with your anti-malware software. To grab the file’s link for one of these websites without clicking anything illicit, right-click the message on Discord and choose “Copy Link” from the dropdown.
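As a practical aside, VirusTotal also lets you search by a file's hash, so you can check a download without ever opening it. Here is a minimal Python sketch that computes a file's SHA-256 locally; the path you pass in is, of course, your own downloaded file.

```python
# Sketch: compute a file's SHA-256 so it can be looked up on VirusTotal
# without opening the file. Hashing in chunks keeps memory use low even
# for large downloads.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Searching the resulting hash on VirusTotal will show prior scan results if anyone has submitted the same file before.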
If you encounter misspelled or otherwise sketchy-looking links, it might be a good idea to add them to a text filter or to your moderation bots’ banlist. If you are sure that a link sent in your server is malicious or dangerous, be sure to remove that user from your server so they cannot privately spread these links to other users, and report it to Discord using the online form.
Scammers use many different techniques to trick you into giving them your personal information. They may try to steal your Discord login credentials, user token, or private information through carefully crafted scam attempts, thus giving them access to your account for problematic purposes.
Phishing is when a scammer convinces you to do something that gives them access to your device, accounts, or personal information. By impersonating people or organizations you trust, they can more easily infect you with malware or steal your personal information. An example is a scammer claiming to be a Discord staff member, or claiming to be from official Discord programs such as Partners or HypeSquad Events. More ambitious scammers may even claim to be from local law enforcement.
It is important to know that Discord staff will only ever communicate through accounts with a staff badge or through System DMs, and we will never ask you for your password. A Discord System DM will look exactly like the photo above in your direct message inbox. Check out the Discord System Messages blog post for more information about how Discord sends direct messages.
These social engineering tactics "bait" you with a trusted-looking icon or name to obtain your personal information. These schemes may persuade you to open an attachment, click on a link, complete a form, or respond with personal information, making it easier for them to steal your account.
Scams are constantly evolving and changing, but they do tend to follow similar patterns. Remember, Discord will never ask you for your password, even through official means of support, nor will we give away free Discord Nitro through bots. Some common scams on the platform that are combatted every day are as follows:
Prize Scams. If it’s too good to be true, it probably is. Scammers might try to get your information through empty promises of fake prizes. A common prize scam is a random bot sending you a message that you’ve won a month of Discord Nitro. If the bot is not directly connected to a server giveaway you took part in, the giveaway is likely fake and the links it sent are malicious. Discord would never use a bot to send this information to you directly, and even verified bots can be hacked to share malicious links.
Steam Scams. Has someone ever sent you a message apologizing for “accidentally reporting you” on Steam? This is yet another way scammers try to infiltrate your accounts. They refer you to someone who can supposedly fix the issue, along with a link that looks like Steam’s website but is, in truth, a phishing link. If you look closely, you can spot typos in the domain name such as “steamcomnmunity,” “sleamcommunity,” and many others.
Most companies usually handle support issues on their websites, so be on the lookout for anyone claiming to want to help you through Discord representing a company or a service. Regarding the above example, Steam will always use its platform to resolve relevant issues and never reach out through Discord to settle problems with your account.
Game Scams. Be aware of random users who message you asking if you want to test their new game. This is another attempt to compromise your account and unlock your private information through phishing. Requests from strangers or friends to try their game usually mean that their account has been compromised, and they are now attempting to gain access to yours. If you have other means of contacting this user off-platform, it is good to alert them to the fact that their account has been compromised to see if they can regain control of it or contact Discord Support about the issue.
Discord Recruitment Scams. Another type of scam is where external individuals or companies pretend to represent Discord and offer fictitious job opportunities. The scammer will try to impersonate a Discord employee either on Discord itself or via external sites. This is a serious security concern, so there is a whole page about this scam that you can read here: Discord Recruitment Scams. You can only apply to Discord jobs through the official careers website, and all communication from Discord regarding hiring will come from discord.com or discordapp.com email addresses. Discord will not use the platform to recruit you.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command-in-channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
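To illustrate the two approaches, here is a minimal Python sketch of the verification actions as pure functions. In a real bot (for example, one built with discord.py) this logic would live in an on-join event handler and a verify command; the role names here are placeholders.

```python
# Sketch of the two verification styles described above, as pure functions.
# In a real bot, roles_on_join would run when a member joins, and
# roles_after_verification when they complete verification.
UNVERIFIED = "Unverified"
VERIFIED = "Verified"

def roles_on_join(roles: list[str]) -> list[str]:
    """'Remove unverified role' method: tag every new member on join."""
    return [*roles, UNVERIFIED]

def roles_after_verification(roles: list[str], method: str) -> list[str]:
    """Apply the chosen verification action once the member verifies."""
    if method == "remove_role":
        return [r for r in roles if r != UNVERIFIED]
    if method == "add_role":
        return [*roles, VERIFIED]
    raise ValueError(f"unknown verification method: {method}")
```

Whether the trigger is a command in a channel, a command in a DM, or a reaction, the underlying role change is the same as sketched here.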
With the vast array of search tools and information readily available online, almost anyone can be a doxxing victim. If you have ever posted in an online forum, signed an online petition, or purchased a property, your information is publicly available. Through public records, databases, and other repositories, large amounts of data are readily available to anyone who searches for it.
Cybercriminals and trolls can be incredibly inventive in how they doxx you. They might start with a single clue and follow it until your online persona is progressively unraveled and your identity is revealed. You must be hyper-aware of what personal information you share online and be cautious when divulging information about yourself.
If private information about you is revealed online through Google searches and you happen to live in the EU or Argentina, you have the right to be forgotten. Similar rights are given to people in the United States, although not to the same extent. We generally encourage you to check resources such as HaveIBeenPwned to see whether or not your data has been a part of any big leaks.
If you want content about you to be removed from Google, refer to this Google Troubleshooter. Sharing these resources or posting them within your Discord server can prove to be a valuable asset to your members, forestalling possible doxxing attempts or threats. Another great resource is the COACH tool, which helps you lock down your identity by breaking the basics of online security into bite-sized, interactive, easy-to-follow guides.
If you are concerned you are at a high risk of being doxxed, you can consider setting up Google Alerts to monitor possible doxxing attempts. If sensitive or private information has been leaked online, you can submit requests to have that content removed by using the following guides: Removing Content From Google or Remove Your Personal Information From Google.
Keeping your Discord login credentials and account token safe and secure is vitally important for your own account safety when moderating an online community. Even with proactive measures such as two-factor authentication (2FA) in place, scammers can still get access to your account via your account token, so evading common phishing attempts and utilizing the vast amount of resources available to spot scams becomes increasingly important for moderators. Discord released an article about keeping your account safe and sound with 2FA, which is an excellent resource to read or refer to.
Ensuring that server permissions are set up correctly is also essential to combat illicit links and other variations of phishing and scamming attempts. It is important to double-check your permissions when making new categories or channels inside your server, as moderators discuss sensitive and private information inside locked moderation channels. If you need a refresher on how permissions work, check out the DMA permissions article here.
Bots are potent tools that moderators and community builders use daily to help moderate and spark community interest via events and games. However, bot accounts differ from regular user accounts: they can perform actions much faster than a regular user and are allowed to obtain message and user data from your server quickly.
Knowing what permissions bots and bot roles are given is essential to developing a safe community, helping ensure the safety of all its members and its moderators. A malicious bot can wreak havoc within servers very quickly by mass-deleting categories, exposing private channels, sending scam messages, and abusing webhooks. We heavily recommend researching any bot before adding it to your server.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further, you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
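As a rough illustration, here is a Python sketch of an embed expressed as the JSON-shaped dictionary a bot or webhook sends, along with a check against some of the limits mentioned above. The numeric limits reflect Discord's documented values at the time of writing; double-check them against the official API documentation before relying on them.

```python
# Sketch: an embed as the JSON-shaped dict a bot or webhook sends, plus a
# validation pass against commonly documented API limits. The limit values
# below should be verified against Discord's official API docs.
embed = {
    "title": "Server Rules",
    "description": "Be kind. **Markdown works here.**",
    "color": 0x5865F2,  # RGB color as an integer
    "fields": [
        {"name": "Rule 1", "value": "No spam.", "inline": False},
    ],
    "footer": {"text": "Posted by the mod team"},
}

LIMITS = {"title": 256, "description": 4096, "field_name": 256,
          "field_value": 1024, "footer": 2048, "max_fields": 25, "total": 6000}

def validate_embed(e: dict) -> bool:
    """Return True if the embed stays within the limits above."""
    texts = [e.get("title", ""), e.get("description", ""),
             e.get("footer", {}).get("text", "")]
    if len(e.get("fields", [])) > LIMITS["max_fields"]:
        return False
    for f in e.get("fields", []):
        if len(f["name"]) > LIMITS["field_name"] or len(f["value"]) > LIMITS["field_value"]:
            return False
        texts += [f["name"], f["value"]]
    return (len(e.get("title", "")) <= LIMITS["title"]
            and len(e.get("description", "")) <= LIMITS["description"]
            and len(e.get("footer", {}).get("text", "")) <= LIMITS["footer"]
            and sum(map(len, texts)) <= LIMITS["total"])
```

Running a check like this before sending saves you from the API rejecting an over-length embed.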
When reporting content to Discord, you might hesitate and think to yourself: is this worth reporting? Please know that all reports made in good faith are worth making. Moderating on Discord is an integral part of the platform, and user safety is Discord’s number one priority. That includes moderators, who help keep your community safe.
There are a lot of resources to draw from to ensure you moderate safely and securely. Practice good cybersecurity by using antivirus and malware detection programs and strong passwords. Differentiate between your “real” self and your online persona to minimize doxxing opportunities. Check suspicious links and websites through online tools to make sure they aren’t malicious. If you or one of your community members is doxxed online, there are proactive and reactive measures that can be taken to ensure your account security: figure out what sort of content was leaked, report it to Discord’s Trust & Safety teams, and submit relevant removal requests through tools such as Google’s removal forms.
We hope these tips help you in your moderator journey! If you’d like more information regarding safety and Discord, check out their Safety Center. Stay safe out there!
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters; these will catch all instances of the trigger, regardless of whether they’re spammed or occur as a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-Spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messaging. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also very typical of raids, namely Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following (Mentions, Links, Invites, Emoji, or Newline Text) are spammed repeatedly in one message or across several messages, they will trip most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the spam. Namely, Emoji and Links may warrant separate punishments. Spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
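The X in Y behavior described above can be sketched in a few lines of Python. The thresholds (10 items in 5 seconds) and the per-user tracking are illustrative assumptions, not a specific bot's implementation.

```python
# Sketch of an "X in Y" anti-spam rule: a user who sends more than
# `max_count` tracked items (e.g. links) within `window` seconds trips the
# filter. Timestamps are passed in explicitly so the logic is easy to test.
from collections import defaultdict, deque

class XInYFilter:
    def __init__(self, max_count: int = 10, window: float = 5.0):
        self.max_count = max_count
        self.window = window
        self.events = defaultdict(deque)  # user_id -> recent timestamps

    def record(self, user_id: int, timestamp: float, count: int = 1) -> bool:
        """Record `count` items (one message may carry several links).
        Returns True once the user exceeds X items within Y seconds."""
        q = self.events[user_id]
        q.extend([timestamp] * count)
        # Drop anything older than the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_count
```

Because each message can count for several items, the same filter covers both "10 links in one message" and "1 link in each of 10 messages," which is why one X in Y rule can act as both a Fast Messages and a Repeated Text filter.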
Sometimes, spam may happen too quickly for a bot to keep up. There are rate limits in place to stop bots from harming servers, and these can prevent deletion of individual messages if those messages are being sent too quickly. This often happens in raids. As such, Fast Messages filters should prevent offenders from sending further messages; this can be done via a mute, kick, or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
***Setting a catch-all filter with Carl will prevent link-specific spam detection
A text filter is integral to a well moderated server. It’s strongly, strongly recommended you use a bot that can filter text based on a blacklist. A Banned words filter can catch links and invites provided http:// and https:// are added to the word blacklist (for all links) or specific full site URLs to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A Banned Words filter is integral to running a public server, especially a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for adhering to the additional guidelines attached to those programs. Before configuring a filter, it’s a good idea to work out what is and isn’t okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, no matter the context. Because they rely on an explicit blacklist, banned-word filters often won’t account for context, which is why a robust filter should also contain whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language; in this case, channel-based immunity is integral to allowing those conversations.
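Putting the last few paragraphs together, here is a minimal Python sketch of a blacklist filter with the two escape hatches just discussed: a whitelist for innocent words that happen to contain a banned substring, and per-channel immunity. All of the word lists and channel names are placeholders.

```python
# Sketch of a blacklist-based text filter. The blacklist catches links
# (http://, https://, discord.gg) as described above, plus a placeholder
# banned word; the whitelist and immune channels are illustrative.
BLACKLIST = {"http://", "https://", "discord.gg", "badword"}
WHITELIST = {"badwordia"}          # e.g. a country name containing "badword"
IMMUNE_CHANNELS = {"mod-chat", "serious-discussion"}

def should_delete(message: str, channel: str) -> bool:
    """True if the message trips the blacklist and isn't exempt."""
    if channel in IMMUNE_CHANNELS:
        return False
    for word in message.lower().split():
        if word in WHITELIST:
            continue
        if any(banned in word for banned in BLACKLIST):
            return True
    return False
```

Real bots offer far richer matching (wildcards, regex, per-role immunity), but the blacklist/whitelist/immunity layering is the same.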
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is still recommended you configure a custom filter to ensure specific slurs, words, etc. that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers where users will attempt to raid, scam, or otherwise assault your server with invite links intended to manipulate your user base into joining, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots also allow per-server whitelisting/blacklisting, letting you control which servers’ invites are okay to share and which aren’t. A good example of invite filtering usage would be a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
Raid prevention stops a raid from happening, either by Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
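The two detection styles above can be sketched as follows. The thresholds and the "new account without an avatar" heuristic are illustrative assumptions, not the behavior of any particular bot.

```python
# Sketch: an "X joins in Y seconds" raid detector plus a per-user
# heuristic for raid-user detection. Thresholds are illustrative.
from collections import deque

class RaidDetector:
    def __init__(self, max_joins: int = 10, window: float = 30.0):
        self.max_joins = max_joins
        self.window = window
        self.joins = deque()  # timestamps of recent joins

    def on_join(self, timestamp: float) -> bool:
        """Raid detection: True once joins exceed X within Y seconds."""
        self.joins.append(timestamp)
        while self.joins and timestamp - self.joins[0] > self.window:
            self.joins.popleft()
        return len(self.joins) > self.max_joins

def is_suspicious_user(account_age_days: float, has_avatar: bool) -> bool:
    """Raid-user detection: brand-new accounts without an avatar."""
    return account_age_days < 7 and not has_avatar
```

A bot would chain a positive detection to its raid prevention response, such as kicking, banning, or muting the accounts that triggered it.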
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands used to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’.

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave is extremely small (such as 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed to hide their actual username.
One additional component not included in the table is the effect of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join the conversation in your server, but in exchange they help protect your community from trolls, spam bots, those unable to read your server’s language, and other low-intent users. This can make administration and moderation of your server much easier. You’ll also see the percentage of people that visit more than 3 channels increase as they explore the server and follow verification instructions, and the percentage that talk may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.