Instagram is a social media platform aimed mainly at young adults, which means there needs to be an awareness of how bullying can happen through social media. In a statement, Instagram said, ‘Bullying is a complex issue, and we know that young people face a disproportionate amount of online bullying but are reluctant to report or block peers who bully them.’
Earlier this year, Instagram and its parent company, Facebook, published the fifth edition of their Community Standards Enforcement Report, which provides data on how well they enforce their policies. This report covers October 2019 through March 2020 and shows that Facebook itself catches between 90% and over 99% of community standards violations. When it comes to bullying, however, Facebook caught only 14% of the 2.6 million instances of harassment reported.
Adam Mosseri, the head of Instagram, said at a developer’s conference that Instagram hopes to ‘lead the fight against cyberbullying.’ Instagram does have a variety of features that attempt to reduce bullying. In 2017, Instagram announced that it had built a text classification engine, called ‘DeepText’, that could be adapted to eliminate hateful comments. The idea was that any comment violating Instagram’s Community Guidelines would be automatically removed for everyone except the person who wrote it. This system is not perfect, as the occasional hateful comment will probably slip through. There is also the chance that harmless comments will disappear, as the filter does not take any context into account.
In October last year, Instagram added the ‘Restrict’ feature, which quietly removes a person’s ability to comment on your posts or send you a message. People aren’t notified when you restrict them, which Instagram believed would help young people who didn’t want to report, block or alert their bullies.
Instagram said ‘Restrict is designed to empower you to quietly protect your account while still keeping an eye on a bully.’ More recently, in October of this year, Instagram added further features to deal with online bullying. First, the platform will automatically hide comments that look like they might constitute bullying, even if they don’t overtly break the rules. Next, when a comment may be considered offensive, the person writing it is notified. This intervention is meant to give people a moment to reflect and undo their comment, preventing the recipient from ever receiving a notification about it. Finally, Instagram will send a new warning message to users whose comments are repeatedly flagged as toxic, in the hope that people might change their behaviour after seeing a warning.
In 2019, Scott Freeman, the CEO of the nonprofit cyberbullying advocacy group the Cybersmile Foundation, told ABC News that ‘a way of helping moderate comments on your own posts is a welcome addition.’ He added that ‘it's not helping the primary problem of people sharing content with the intention of hurting others.’
I feel that in most bullying situations, social media serves to amplify things that happen in person. Even perfect social media policies and tools won’t stop bullying entirely, since cyberbullying is not the only type of bullying, after all. I was surprised by the number of features Instagram had already implemented to stop hateful language and horrible comments. Some of the features, such as Restrict, seem to be designed with input from young people, making them more tailored and relevant to the bullying that might happen to them. It’s essential that the future of Instagram’s anti-bullying features is created in collaboration with young people, because Instagram is a space for young people. If harmful content isn’t tackled in collaboration with them, then it’s not really getting to the root of the problem. I think Instagram is heading along the right path: they just need to keep going forward.