c933103 wrote:I don't think it's realistic to expect moderators of such a large platform (no matter whether Twitter or a similar site) to spend sufficient energy discerning what is constructive and what is not, unless they hire people in the hundreds of thousands, which again isn't realistic. Hundreds of millions of people constantly use such services. So moderation on a platform like Twitter can only focus on whether a comment broke a rule or not.
For more context-sensitive moderation: even at the scale of a.net, moderators are already stretched thin, despite daily active users numbering, I'd guess, only in the thousands while moderators number in the dozens. Scaled up to Twitter's 200 million daily active users, that ratio implies Twitter would need half a million moderators or so.
This is the cost of enforcing rules of debate when moderating a platform. No platform can financially support it.
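To make the scaling argument concrete, here is a back-of-the-envelope version of it. The a.net figures are assumed placeholders for "thousands of users" and "a dozen moderators" from the post, not measured numbers:

```python
# Rough moderator-to-user ratio check. The a.net inputs are guesses
# standing in for "thousands of users" and "dozens of moderators".
anet_daily_users = 5_000      # assumed midpoint of "in the thousands"
anet_moderators = 12          # assumed "a dozen"
users_per_mod = anet_daily_users / anet_moderators   # ~417 users per mod

twitter_daily_users = 200_000_000
mods_needed = twitter_daily_users / users_per_mod
print(round(mods_needed))     # on the order of half a million
```

With these assumed inputs the ratio works out to roughly 480,000 moderators, which is where the "half a million or so" figure comes from.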
Not to mention, social media isn't a place for people to debate.
You may very well be right that it's impractical to impose the rules of debate on social media. Still, it would be interesting to try. Here is how I've envisioned it:
1. AI engines that evaluate and assign points to every post based on adherence to rules of debate, courtesy, verified factual references, etc., with each user accumulating a score over time that is visible to other users and factored into results ranking.
2. The AI engines published as open source, with moderated changes based on community input to improve them over time. Part of that would be the ability to submit posts on a trial basis, either to self-evaluate or to confirm an earlier score that seems suspect. The engines would also show the component breakdown of a score, so users could learn how to improve their posts' rankings.
3. Users being able to flag content as untruthful, unverified, in violation of rules, etc., and to protest scores they believe were unfairly applied. These inputs, after human review, would be incorporated into the engine training.
I believe that over time, such a system would converge on an active community that supported free speech within civil bounds, based upon accepted rules of debate. It would effectively teach those rules to users while allowing them a voice in determining the rules. It also would not require censorship or banning; a user would simply be de-ranked to the point of oblivion, but could recover by improving their behavior.
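The scoring-and-ranking mechanism above can be sketched in a few lines. This is only a toy illustration of the idea, not a real engine: the `evaluate` heuristic, the class names, and the ranking formula are all my hypothetical stand-ins for what would actually be a trained model and community-governed rules.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    score: float = 0.0   # accumulated score, visible to other users

@dataclass
class Post:
    author: User
    text: str
    points: float = 0.0  # per-post score assigned by the engine

def evaluate(post: Post) -> float:
    """Toy stand-in for the AI engine: reward a citation marker,
    penalize shouting. The real engine would score adherence to
    rules of debate, courtesy, verified references, etc."""
    pts = 0.0
    if "[ref]" in post.text:
        pts += 1.0       # proxy for "verified factual references"
    if post.text.isupper():
        pts -= 1.0       # proxy for discourtesy
    return pts

def submit(post: Post) -> None:
    post.points = evaluate(post)
    post.author.score += post.points  # user accumulates a score over time

def rank(posts: list[Post]) -> list[Post]:
    # De-ranking instead of banning: low scorers sink toward the
    # bottom of results but are never removed, and can climb back.
    return sorted(posts, key=lambda p: p.points + p.author.score,
                  reverse=True)
```

For example, a sourced post would outrank an all-caps rant under this sketch, and a habitually uncivil user's accumulated score would drag all of their posts down until their behavior improves.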
This system would essentially mirror the rules of society. Behave well, and you have an audience. Behave badly, and you don't, although people who wish to follow your bad behavior could still search and find you.
Lastly, as far as social media not being a proper forum for the rules of debate, I disagree in the strongest possible terms. Even Elon says it should be a public square, and those kinds of gatherings still necessarily have rules.