A parasocial relationship is a one-sided relationship in which a spectator develops a personal attachment, through various influences, to a performer who is unaware of the spectator's existence. It is strengthened by continuous positive exposure to its source, which happens mainly on social platforms.
In this section we’ll take a look at how parasocial relationships develop and how to gauge, from a moderation standpoint, the severity of the parasocial relationships you encounter.
The establishment of a parasocial relationship can be illustrated as follows:
User A, in this example a popular content creator, uploads regular content on a big platform. User B, a member of User A’s audience, takes an interest in their content. User B reacts to User A’s content and observes them. While User A may know that people are enjoying their content, they are unlikely to be aware of every viewer’s existence. This total awareness becomes more unlikely the bigger the audience gets.
User B, on the other hand, is regularly exposed to User A’s content and takes a liking to them. The interest is usually defined by User A’s online persona: content, visual appeal, likeability, and even their voice can all be influencing aspects. User B perceives User A as very relatable through common interests or behaviors and starts to develop a feeling of loyalty, or even responsibility, during that phase of one-sided bonding. This behavior can be attributed to User B seeing themselves reflected in User A, as well as psychological factors like loneliness, empathy, or even low self-esteem. As a consequence, User B can easily be influenced by User A.
At this point User B might feel like they understand User A in a way nobody else does and may even begin to view them on a personal level as some sort of friend or close relative. They see this individual every day, hear their voice on a regular basis, and believe that they are connecting to them on a deep level. They develop an emotional attachment, and the stronger the parasocial relationship gets, the more attention User B pays to User A’s behavior and mannerisms. While User A most likely doesn’t know User B personally, User B will seek out interaction with and recognition from User A. That behavior is typically represented through donations on stream, where User A either reads out their personal message and name or publishes a “thank you” message on certain websites.
Additionally, User B tries to follow and engage with their idol on as many platforms as possible aside from their main source of content creation. These social media platforms are usually Instagram or Twitter, but can also include User A’s Discord server.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove Unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
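For illustration, here is a minimal sketch of the command in channel approach using the discord.py library. The role name “Unverified”, the !verify command name, and the token placeholder are assumptions you would adapt to your own server; a dashboard-configurable bot can achieve the same result without custom code.

```python
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True          # needed to manage member roles
intents.message_content = True  # needed to read the command text

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command(name="verify")
async def verify(ctx: commands.Context):
    """Remove the (hypothetical) Unverified role from whoever runs !verify."""
    role = discord.utils.get(ctx.guild.roles, name="Unverified")
    if role and role in ctx.author.roles:
        await ctx.author.remove_roles(role, reason="Completed verification")
        await ctx.message.add_reaction("✅")

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```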
While that type of relationship is natural and sometimes even desired, it is important to define the level of a parasocial relationship and differentiate between its intensities for the safety of the community, the staff members, and the performer. In their article in the psychology-focused academic journal ‘The Psychologist’, researchers Giles and Maltby defined three levels of severity of parasocial relationships based on the Celebrity Attitude Scale.
“Fans are attracted to a favourite celebrity because of their perceived ability to entertain and to become a source of social interaction and gossip. Items include ‘My friends and I like to discuss what my favourite celebrity has done’ and ‘Learning the life story of my favourite celebrity is a lot of fun’.”
The least harmful level is the general public and social presence. The targeted celebrity is subjected to gossip and mostly provides a source of entertainment. Their presence is mostly found in talks with friends, talk shows, on magazine covers, and similar public-facing media. Discord users on this level usually interact with the community in a relaxed, harmless way.
The next level is parasocial interaction. This level is characterized by the spectator developing an emotional attachment to the performer, resulting in intense feelings. The spectator wants to get to know the performer, then desires to be part of their life and comes to consider the performer part of their own life. A result of that can be addictive or even obsessive behavior, which can be noticed in Discord servers, too.
“The intense-personal aspect of celebrity worship reflects intensive and compulsive feelings about the celebrity, akin to the obsessional tendencies of fans often referred to in the literature. Items include ‘My favourite celebrity is practically perfect in every way’ and ‘I consider my favourite celebrity to be my soulmate’.”
Spectators at this level usually ping the performer or message them privately in an attempt to be recognized. While that behavior is natural, anything that endangers safe interaction between the spectator, the community, or the performer needs to be supervised carefully. Unrestrained abusive behavior, such as unwanted intimate or borderline NSFW questions or comments, needs to be addressed and corrected accordingly.
The final level is the most intense and also the most dangerous. It involves severe, harmful obsessions that can extend all the way to stalking and real-world consequences. Parasocial relationships of this degree will rarely be found on Discord, but they have to be reported immediately if present.
“This dimension is typified by uncontrollable behaviours and fantasies about their celebrities. Items include ‘I would gladly die in order to save the life of my favourite celebrity’ and ‘If I walked through the door of my favourite celebrity’s house she or he would be happy to see me’.”
The vast majority of users won’t go past seeing the performer as a source of entertainment, but moderators should be aware of the potential consequences of anything beyond that, as they can be harmful to the spectator, the performer, and the safe environment you are working to maintain for all.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
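As a rough illustration, a minimal embed payload in JSON form (written here as a Python dict) might look like the following; the title, fields, and colour value are placeholder content, not taken from any particular bot:

```python
# Minimal embed payload; all values are placeholders.
embed = {
    "title": "Server Rules",
    "description": "Please read **all** of the rules below.",  # Markdown is rendered here
    "color": 5793266,                                           # decimal colour value
    "fields": [
        {"name": "Rule 1", "value": "Be respectful.", "inline": False},
        {"name": "Rule 2", "value": "No spam or self-promotion.", "inline": False},
    ],
    "footer": {"text": "Maintained by the moderation team"},
}
```

Pasting a structure like this into an embed visualizer is an easy way to check lengths and layout against the limitations above before sending it.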
Parasocial relationships on Discord can pertain to anyone who is perceived as being popular or influential, making them “celebrities” of Discord. Some examples of parasocial relationships on Discord can be found between a user and a moderator, a user and a content creator you are moderating for, or even a member of your moderation team and the content creator you are working for.
But what does all that mean for you, the mod? While Discord moderators are not nearly as popular and influential as big content creators or celebrities, they are still observed by Discord users. While being a moderator puts you into a position of power and responsibility over the wellbeing of the server, some users perceive it as you climbing the social ladder in the Discord server. In their eyes, becoming a moderator changes your overall social status within your Discord community.
Being hoisted higher in the server’s hierarchy results in members quickly recognizing you and potentially treating you differently due to your influence, even becoming “fans” of you as a person. Some users will soak up any information they can get about you, especially if they realize that you have common interests. This may lead to the development of a parasocial relationship between users and you. Users you have never interacted with before might see you as a person they would get along with and seek out your attention, leading to a one-sided relationship on their part.
Having such an audience can be overwhelming at first. People will start to look up to you, and younger users especially can be easily influenced by online personas. They might adopt your behavior or even copy your mannerisms. Knowing that, you should always be self-aware of your actions and etiquette in public to promote a healthy, sustainable relationship with the users. Receiving special attention from users can quickly go to your head and spiral into an arrogant or entitled attitude. There is nothing wrong with being proud of your position and accomplishments, but being overtly arrogant will influence members’ behavior towards you.
If one user decides that a moderator is not being responsible, that mindset can spread through the community in negative ways. They might belittle you in front of new members and give them the feeling that you won’t be there to help them, or they might not inform you of ongoing problems on the server during a temporary absence of moderators in the chat. A healthy user-moderator relationship is important to prevent or stop ongoing raids, as well as to make moderators aware of users misbehaving in chat.
Additionally, it’s important to be mindful that your perceived fame does not start to negatively influence your judgment. For example, you may find yourself giving special attention to those who seem to appreciate you while treating users that are indifferent towards your position as a moderator more harshly. It also causes the dynamics within the staff team to change as fellow moderators might start to perceive you differently if you begin to allow bias to seep into moderation. They may start to second guess your decisions, feel the need to check up on your moderator actions, or even lose trust in your capabilities.
If you ever notice that you are experiencing this effect, or notice one of your fellow moderators experiencing it and letting it consume them, be supportive and work to correct the negative changes. When confronting another moderator about it, make sure to do it through constructive criticism that doesn’t come across as a personal attack.
That said, the effects of parasocial relationships don’t always have to be negative in nature. If users manage to build such a connection to a moderation team, the general server atmosphere can grow positively. Users learn what moderators do and don’t enjoy, which will lead them to behave in a way that appeals to staff and usually abides by the server rules. They will also be able to predict a moderator’s reaction to certain behavior or messages new people might post. As a result, they will attempt to correct mildly misbehaving users themselves, without getting staff involved immediately, in hopes of receiving positive feedback from staff. Naturally, moderators won’t be able to know of every single person that tries to appeal to them through those actions, but once they are aware that such things happen in certain text channels, it gives them the opportunity to focus on other channels and provide their assistance there.
As mentioned before: in the case of content creators who frequently upload videos, streams, and other forms of media for their followers, the chance of a parasocial phenomenon developing can be even greater. It will only intensify when users join the creator’s Discord server, often under the false assumption that there will be a higher chance of their messages being read and noticed. Your responsibility as a moderator is to neither weaken that bond nor encourage it while providing security for users, staff, and the content creator. Let the users interact in a controlled environment while maintaining the privacy of the content creator.
Some users might even feel like the content creator owes them some sort of recognition after long-term support, whether through engagement or donations. Such a demand can be intensified when they’re shown as “higher” in the hierarchy through dedicated Discord roles, such as Patreon/Donator or simple activity roles. When multiple people build a parasocial relationship with the same content creator and experience that phenomenon, they may see other active users or even moderators as “rivals.” They see the content creator as a close friend and feel threatened that others, especially those who financially support the content creator, perceive them the same way or think they are even closer to them. During such moments, it is recommended to keep the peace between users and let them know that the content creator appreciates every fan they have. While those providing financial support are appreciated, every viewer plays a part in making the creator as big as they are and in getting them to where they are today.
It may not only be users who feel closer to the creator by joining the server. Many beginner moderators may also find themselves feeling as though they are above the rest of the community because their idol has entrusted them with power on their server. Being closer to the creator than most users can easily fog your judgement; it is essential to prioritize being friendly and respectful to the users over these personal convictions. When adding moderators to your moderation team, it is important to keep an eye out for this kind of behavior in order to combat it, and to hold your teammates accountable should you see it begin to show in one of them. Making sure your entire team is on the same page regarding your duties and standing in the community is essential to maintaining a healthy moderation environment. As a moderator for a content creator, this individual you may admire deeply has put their trust in you to keep their community safe. Falling into the trap of developing an unhealthy parasocial relationship with them directly interferes with your ability to do that, failing not only the community but also the creator.
Despite the potential dangers of parasocial relationships, the fact that they develop at all may indicate that you are doing a good job as a moderator. While positive attention and appreciation are key factors in a healthy community, not everyone may like that sort of attention, and it is completely acceptable to tell your fellow moderators or even the users themselves about it. At some point you might feel like you have reached your limit and need a break from moderation and from managing the parasocial relationships aimed at you and those around you. Moderator burnout is very real, and you should not hesitate to take a break when you need it.
Users will view you, as a moderator, as a leader that helps guide your designated Discord server in the right direction. As such, you will be a target for rude comments by users that have personal issues with the server while simultaneously getting showered with affection by other users who are thankful for what you do for the server. Never be afraid to ask for help and rely on the moderation team if things go too far for your personal boundaries or comfort level, even if you are an experienced moderator. Establishing a healthy relationship with the community is important, but being able to trust your fellow staff members is even more so. Nobody expects you to build an intimate relationship with every member, but knowing you can count on them and their support is essential for the team to function correctly.
Even though this comparison is important for a better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work best together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
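As a rough illustration of that combination, any bot or script can post a log line through a webhook with a custom name and avatar by sending a JSON payload to the webhook URL; the URL, avatar link, and message below are placeholders:

```python
import requests

# Placeholder values; replace with your own webhook URL and avatar image.
WEBHOOK_URL = "https://discord.com/api/webhooks/000000000000000000/XXXXXXXX"

payload = {
    "username": "Mod Log",                               # custom name for this message
    "avatar_url": "https://example.com/mod-log-icon.png",
    "content": "A user was warned for spamming in #general.",
}

response = requests.post(WEBHOOK_URL, json=payload)
response.raise_for_status()  # a successful webhook execution returns 204 No Content
```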
*Unconfigurable filters; these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide range of things on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also very typical of raids, those being Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following (Mentions, Links, Invites, Emoji, or Newline Text) are spammed repeatedly in one message or across several messages, they will trigger most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments; spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
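As a rough sketch of that X in Y logic, the snippet below tracks link timestamps per user in a sliding window; the thresholds mirror the 10-links-in-5-seconds example above and are placeholders, not defaults of any particular bot:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 5   # Y: the time window (placeholder value)
MAX_LINKS = 10       # X: links allowed inside the window (placeholder value)

link_history = defaultdict(deque)  # user_id -> timestamps of links sent

def register_links(user_id: int, link_count: int, now: float | None = None) -> bool:
    """Record links sent by a user and return True if they exceed X links in Y seconds."""
    now = now or time.time()
    history = link_history[user_id]
    history.extend([now] * link_count)
    # Drop timestamps that have fallen out of the window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    return len(history) > MAX_LINKS
```

The same window logic covers both cases described above: 10 links in one message and 1 link in each of 10 rapid messages both push the counter past the threshold.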
Sometimes, spam may happen too quickly for a bot to catch up. There are rate limits in place to stop bots from harming servers that can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending messages; this can be done via a mute, kick or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
***Setting a catch-all filter with Carl will prevent link-specific spam detection
A text filter is integral to a well moderated server. It’s strongly, strongly recommended you use a bot that can filter text based on a blacklist. A Banned words filter can catch links and invites provided http:// and https:// are added to the word blacklist (for all links) or specific full site URLs to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A Banned Words filter is integral to running a public server, especially if it’s a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for adhering to the additional guidelines attached to those statuses. Before configuring a filter, it’s a good idea to work out what is and isn’t okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned word filters with an explicit blacklist often won’t account for context. For this reason, it’s also important that a robust filter contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
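A highly simplified sketch of that blacklist-plus-whitelist idea, using placeholder terms rather than real slurs:

```python
import re

# Placeholder terms for illustration; populate these with your server's actual lists.
BLACKLIST = {"bannedterm"}
WHITELIST = {"bannedterminology"}  # acceptable words that happen to contain a banned term

def find_violations(message: str) -> list[str]:
    """Return blacklisted hits in a message, skipping whitelisted words."""
    hits = []
    for word in re.findall(r"[\w']+", message.lower()):
        if word in WHITELIST:
            continue
        if any(term in word for term in BLACKLIST):
            hits.append(word)
    return hits

# find_violations("bannedterm is filtered, but bannedterminology is allowed.")
# -> ["bannedterm"]
```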
Filter immunity may also be important for your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language; for this exception, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, and preset filters (like the Google filter provided by YAGPDB) come in very handy for protecting your user base without intricate setup. However, it is recommended that you also configure a custom filter to ensure that specific slurs, words, etc. that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers, where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining other servers, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server white/blacklisting, letting you control which servers are okay to share invites to and which aren’t. A good example of invite filtering usage would be a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
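A minimal sketch of that per-server invite whitelisting; the invite codes below are hypothetical partner servers:

```python
import re

# Invite codes for partner servers that are allowed to be shared (hypothetical values).
INVITE_WHITELIST = {"partnerserver", "friendlycommunity"}

INVITE_PATTERN = re.compile(r"(?:discord\.gg|discord\.com/invite)/([\w-]+)", re.IGNORECASE)

def has_unwhitelisted_invite(message: str) -> bool:
    """Return True if the message contains an invite to a server not on the whitelist."""
    codes = INVITE_PATTERN.findall(message)
    return any(code not in INVITE_WHITELIST for code in codes)
```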
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable; triggers raid prevention based on user joins and damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
Raid prevention stops a raid from happening, triggered by either Raid detection or Raid-user detection. These countermeasures stop participants of a raid from harming your server by preventing raiding users from accessing it in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
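To make the X in Y join detection and the follow-up prevention step concrete, here is a small sketch; the thresholds and the placeholder action are assumptions to tune for your own server:

```python
from collections import deque
import time

JOIN_WINDOW = 60      # Y: look-back window in seconds (placeholder value)
JOIN_THRESHOLD = 10   # X: joins within the window that count as a raid (placeholder value)

recent_joins = deque()  # timestamps of recent member joins

def register_join(now: float | None = None) -> bool:
    """Record a member join and return True if the join rate looks like a raid."""
    now = now or time.time()
    recent_joins.append(now)
    while recent_joins and now - recent_joins[0] > JOIN_WINDOW:
        recent_joins.popleft()
    return len(recent_joins) >= JOIN_THRESHOLD

def on_member_join(member_id: int) -> None:
    if register_join():
        # Raid prevention step: kick, ban, or mute the triggering users here,
        # or lock down public channels (damage prevention).
        print(f"Possible raid in progress; reviewing member {member_id}")
```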
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’.

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave times is extremely small (such as between 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed to hide their actual username.
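A small sketch of pattern-based name filtering; the patterns below are illustrative placeholders and do not model any particular bot's %name tagging or regex configuration:

```python
import re

# Hypothetical patterns to flag in usernames and nicknames.
NAME_BLACKLIST_PATTERNS = [
    re.compile(r"discord\.gg", re.IGNORECASE),   # invite links hidden in names
    re.compile(r"free\s*nitro", re.IGNORECASE),  # common scam phrasing
]

def username_violates_filter(name: str) -> bool:
    """Return True if a username or nickname matches any blacklisted pattern."""
    return any(pattern.search(name) for pattern in NAME_BLACKLIST_PATTERNS)
```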
One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
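One such custom solution is simply counting outcomes wherever your verification flow completes. A bare-bones sketch follows; how attempts are detected, what counts as a fail, and where the counters are stored all depend on your own setup:

```python
# Bare-bones pass/fail tracking; in practice these counters would live in a
# database and be updated by your verification bot or gate.
verification_attempts = 0
verification_passes = 0

def record_attempt(passed: bool) -> None:
    """Call once per joiner who starts verification, noting whether they passed."""
    global verification_attempts, verification_passes
    verification_attempts += 1
    if passed:
        verification_passes += 1

def pass_rate() -> float:
    """Proportion of people who successfully completed verification."""
    if verification_attempts == 0:
        return 0.0
    return verification_passes / verification_attempts
```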