Is Google+ on the way to becoming a contentless network? Is it moving closer to networks like Facebook or Twitter, where nearly everything is allowed?
But owners and moderators of communities have to strengthen their efforts.
And admittedly, they already do so in some well-moderated communities.
I’m not talking about communities of every kind but about those related to brands whose companies are strong competitors.
Some weeks ago I was promoted to moderator of the largest community dedicated to Apple. And believe me, that’s a hard job, because many members just express their feelings by posting photos or single sentences without adding value for other members. The situation is the same in other communities related to Google’s Android operating system, which runs on devices from many vendors, such as Samsung.
According to Google’s intentions, G+ is meant to be a content network, not comparable with, say, Twitter or Facebook. Many community members still do not accept this concept and litter the network with useless content: photos without any introduction, just one sentence or a few words, usually filed under the first available category, which is most often ‘Discussion’.
Once the post is online, it takes just a few minutes before a discussion starts that is no real discussion at all. People talk about everything except the posted content.
Please read this great comment written by Dave Trautmann on Google+ on 2013-09-13.
Comment Litter happens as much in real life as it does online.
I can’t tell you how many times I have been to a large public meeting to discuss important (and suitably real) issues only to have someone get up and question the integrity of the people presenting the information. I cannot tell you how frustrating it is to have someone get up and complain about something which happened 30 years ago and they just can’t let it go.
I can’t tell you how I feel for public officials who are required to attend these meetings only to be harangued by a hostile crowd with some other agenda in mind. I have even been to a couple of perfectly normal public consultations only to have them hijacked by people bringing their own issue (entirely off topic) into the meeting and disrupt anyone who wants to bring things back to what the meeting was originally all about.
I’ve been reading comments since Usenet and I am not surprised by the childish compulsions of some people to only champion their own “brand” loyalty. The demographic of these posts is quite specific. I have had to build up a strong tolerance for off-topic, cranky, in-your-face, sophomoric remarks in order to be able to find those other well-considered, clearly written, referenced, reliable, and insightful remarks which appear about as frequently as the Aurora Borealis (Northern Lights).
I cannot begin to list all of the valuable things I have learned from reading some people’s gems after sifting through a beach full of rocks. A lot of what I have synthesized in my own understanding of the world has been shaped by exchanges I have enjoyed online, in blog posts, comments, and in e-mail with previous colleagues. I sometimes discover things myself and try to bring them into the public sphere.
But I find it is as true in real life, as it is online, not a lot of people are interested in new ideas. Not many have a tolerance for questioning their own belief systems, myths, and personal scripts. Sometimes events force whole populations to reexamine their values (like a war) but in most cases people prefer the comfort of their own views and seek out others who seem to have the same views (whether or not it can be verified those views are the same).
So what can moderators do if posts or comments are not well considered, or the comment spiral gets out of control or strays into unrelated topics?
Well, they can notify the member and delete the post. Taking action against members regularly provokes obsessive comments; arguments are not brought forward. Thankfully, Google lets moderators ban those members, report insults, and mute or even block them.
But that’s not a workable solution, and it’s not durable.
It’s not Google’s responsibility to enforce quality, prevent useless posts, and control behavior. Everyone on the internet is entitled to publish their opinion. The reality is that this right is misused in many cases.
And it’s the task of active moderators to curb this behavior.
Support for moderators, please …
Unfortunately, Google’s Google+ app for iOS supports moderators only to a certain extent, not enough for them to do their work effectively. We should not forget that moderating is done in one’s leisure time, and no one can expect every action to be perfectly fair.
The flaws of the Google+ app …
- 1 Moderators cannot look into a log showing which members have already been notified for misbehavior.
With this information, repeated violations could be monitored, providing solid grounds for banning a member. It’s a question of fairness.
- 2 Some additional options for notifying members are missing.
At present, moderators always have to write a comment or insert a text template.
- 3 There are no stats showing from which communities a member has already been banned.
- 4 ‘Report to Google’ currently offers just ‘Spam’ as an option.
The following option should be added:
- 5 A moderator should be able to see whether his colleagues are online and active.
This could be done with some kind of ‘Moderator Log-In’.
In Internet slang, a troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog), either accidentally or with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.
This behavior is regularly seen when Apple fans comment in Android communities, or vice versa. It’s childish, unworthy of adults, points to a closed-minded attitude, and ignores how irrelevant these kinds of opinions are to human life.
Identifying post litterers …
When somebody posted off topic or made some other misstep, I saw these kinds of reactions:
- … German Wehrmacht
(This is quite interesting, because these people first look up my profile, see that I’m German, and then post an abusive comment without knowing who I am.)
- I can post here what I want you …
- or simply no reaction
The latter is a strong indicator of a post litterer. He comes, shares, and quits without engaging in any ensuing discussion: a clear case for the ban hammer.
Even moderators are human beings, and so they are fallible. If a member starts a fruitful discussion about his removed post and puts forward understandable arguments, the moderator can simply reply ‘Sorry, please share again’ or lift the ban.
All members should not forget that the work of moderators, especially in large communities (which are always attractive to litterers), is mostly an uncoordinated leisure job done on the fly. Most often, articles and comments are quickly skimmed to form a more or less fair impression of the content.
Google’s spam detection …
The Google+ community ‘Community Moderators’ is a high-quality content community where many problems are discussed with just one goal: the quality of G+ and what moderators can do.
Rupert Wood (on Community Moderators)
Community Spammers – why doesn’t google take action?
Firstly, I must admit that Google has gotten better at spotting spam and putting it into moderation in communities, but even when multiple communities confirm that it’s spam, why does the culprit go ‘unpunished’ and remain free to join and spam further communities?
Surely a user whose activity is similar to that shown in the link below should incur some restrictions on their ability to post to and/or join communities. How long can such a spammer survive until Google takes action? Indefinitely, it appears.
How many Google communities can a spammer join and post to before action is taken? There seems to be no upper limit. Surely new users should be limited to a maximum number of communities they can join!
How many reports from community moderators does it take to alert Google to a ‘spammer’? Looking at this example, there are dozens of instances of posts being removed from communities; even if only 1 in 10 removals were accompanied by a ‘report’ action, this account should have been investigated by now, if not suspended pending investigation.
So we need an update to spam detection to keep quality at a high level.
Concept map …
For a visual summary, see this concept map.
Profiles cannot be validated, either by Google or by moderators. Neither profile photos nor notes like ‘Attended University of …’ or ‘Works at …’ tell us with whom we are communicating. There are many bad guys out to compromise people or simply sabotage well-organized communication. Life experience and a look at the profile usually tell us quickly who is behind such an attack.
In the case of newbies (young or old), we all know that they make mistakes, because nobody can expect them all to have read articles like
before engaging in a social network.
So, what can moderators do to keep things fair?
Well, members can be notified with a link to the above-mentioned article.
In case of repeated misbehavior, it’s up to banned members to share their thoughts privately with the moderator and discuss the issue. The moderator can then possibly lift the ban if arguments and understanding are put forward.
Content networks like Google+ need active moderators who are prepared to pass on their experience and to act with clear notifications when necessary. Google+ offers many opportunities to improve knowledge, personality, and social interaction. Moderators carry responsibility, and in some ways their job is quite similar to that of teachers.
Related links …
I appreciate your visit to iNotes4You.