A recent war of words – played out on the Web between Gawker and Reddit – was only the latest round in the argument over the right approach to screening comments on the Internet. In this case, the folks at Gawker helped to out one of the most notorious trolls on Reddit, a popular hangout for anonymous users who like to push the envelope of what counts as appropriate content. The discussion surrounding the episode raised important questions about privacy, conduct rules and the quality and scope of free expression. I have to admit I’m glad Gawker “outed” the troll in question – I found his work toxic – but I wish Reddit had taken more proactive steps to purge its site of the most egregious abuses.

This online polemic brought to light an unfortunate truth about the Web: the sad state of commentary on many sites and platforms. Several years ago, when new social platforms greatly expanded and simplified online commentary, I was optimistic that communities (both broad ones and those specific to particular sites and authors) would generate a fairly useful and candid exchange of ideas. There would always be outliers and pesky critics who seem to spend all their waking hours online, of course, but on balance the community would self-regulate and provide a range of reasonable ideas and arguments.

Unfortunately, based on what I’m seeing online lately, I have to admit that is often not the case. Many comment sections – even on websites and platforms where you would expect good self-regulation and informed users – are a wasteland of trolls, spammers and perverts. Some of the worst offenders are political hacks who don’t even bother with original content, re-posting the same canned message over and over with little logic. If there are rules of conduct and filters for inappropriate language, they are not immediately apparent. I suspect many of these sites are rarely, if ever, moderated or edited. I realize that some topics invite strong opinions – notably on news and political sites – but the noise has spread well beyond the expected sites and platforms. Take a look at this recent example on CNN, where a seemingly innocuous (and positive) news post about Drake getting his high-school diploma sparked a stream of nasty, racist abuse.

Most communication professionals would agree the ideal is to foster robust dialogue on the Web – and to allow questions, comments and suggestions that help extend and enrich the discussion (or related products and services). But that choice is no longer automatic given the bottom-feeder trash in many comment sections. The key question for many has become: is it even worth trying to manage comment sections at all? More pointedly, how do you encourage and filter comments without sliding into either censorship or chaos? This is a critical issue not just for individuals and organizations on the open Web, but also for companies striving to engage their employees through internal platforms behind the firewall.

My take is that allowing anonymous comments – particularly inside a secure, corporate platform – opens the door to the worst abuses. Even without formal identification or registration requirements, the quality of dialogue would improve greatly with more diligent moderation. Set common-sense rules and enforce them. Where abuses do occur – whether measured against a site’s conduct guidelines or broader legal restrictions – site managers should take responsibility and remove and/or punish the offenders, rather than taking a hands-off approach behind a blanket defense of freedom of speech. Whatever the response, something has to change, or I fear many comment sections will be left to a vocal, vitriolic minority that erodes the credibility and relevance of the conversation, and of the sponsoring sites and organizations.
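For readers who want to picture what “common-sense rules,” enforced automatically, might look like in practice, here is a minimal, purely illustrative sketch of a first-pass comment screen. Everything in it is hypothetical – the Comment class, the screen_comment function and the BANNED_TERMS list are assumptions for illustration, not any real platform’s API – and in practice such filters would only triage comments for human moderators, not replace them.

```python
# Purely illustrative sketch of automated first-pass comment screening.
# All names here are hypothetical; no real platform's API is implied.

from dataclasses import dataclass

BANNED_TERMS = {"slur1", "slur2"}   # placeholder for a real blocklist
MAX_REPEAT_POSTS = 2                # how many near-identical re-posts to allow

@dataclass
class Comment:
    author: str
    verified: bool                  # has the author registered a real identity?
    text: str

seen_texts: dict[str, int] = {}     # normalized text -> times posted so far

def screen_comment(comment: Comment) -> str:
    """Return 'publish', 'hold' (human review), or 'reject' for a comment."""
    # Rule 1: anonymous, unverified accounts go to a moderation queue.
    if not comment.verified:
        return "hold"

    # Rule 2: reject comments containing banned terms outright.
    if set(comment.text.lower().split()) & BANNED_TERMS:
        return "reject"

    # Rule 3: throttle canned messages re-posted over and over.
    normalized = " ".join(comment.text.lower().split())
    seen_texts[normalized] = seen_texts.get(normalized, 0) + 1
    if seen_texts[normalized] > MAX_REPEAT_POSTS:
        return "reject"

    return "publish"

if __name__ == "__main__":
    print(screen_comment(Comment("anon123", False, "First!")))           # hold
    print(screen_comment(Comment("jane_d", True, "A thoughtful reply"))) # publish
```

Even a toy filter like this makes the trade-off concrete: each rule is a policy choice, and tightening any of them moves the site toward censorship while loosening them invites chaos.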
