Society/Culture: How do we make social media a force for good?

I think at this point we can all say the recent rise of populism and authoritarianism is clearly linked to the rise of social media.

Rather than being a force that pushes for democratic revolutions, rationalism and knowledge sharing, it's had the opposite effect. It's created a channel for mass misinformation which is undermining confidence in democratic institutions and providing an opening for dictators to gain control once again.

I was mocked endlessly when I said four or five years ago that the USA was at real risk of losing its democracy. No one is laughing now, with the majority of Republicans running for the House denying Biden's election win. Your institutions are only as good as the people who use and run them.

So how do we turn things around? How do we change social media so it can instead be a force for good? I have a few ideas, but first, what are yours? Can we use our collective wisdom to help make things better?
 
It would be very easy to do.

Large populations can be effortlessly predicted and controlled. You might not be able to predict what any one person will do in any given moment, but depending on parameters in an environment, a certain percentage will behave a certain way, and a certain percentage another way, with consistency.

For example, a certain percentage of schoolkids take a shortcut across the school football oval every day even though they know it damages the grass. Even the trajectory of that path is predictable depending on various factors. (How often do we see a bald line in the grass at a park because virtually everyone takes the same path?) If you then put a sign on that school oval telling the kids to stay off the grass, that percentage will change by a certain amount.

In the same way, social media apps like Facebook often make adjustments to encourage certain user behaviour. They have complete control over the operation of the app and also get instant user feedback, which should allow them to fine-tune anything in their user population. Facebook, for instance, now has a feed exclusively for videos, which you unwittingly enter whenever you watch a video in your feed, often leading to a binge on a certain subject. Clearly this leads to users watching more videos by a certain percentage, and thus more ad revenue. And perhaps if they made other adjustments, like removing the mid-video ad, they would find that users watch even more videos. The point is that they are in complete control.
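
To make the "fine-tuning" concrete, here is a rough sketch of the kind of comparison a platform can run from its own logs: measure what percentage of video views lead to another video, with and without a given change (say, the mid-video ad). The event data and rates below are entirely made up; only the percentage-based comparison is the point.

```python
import random

# Hypothetical event log: for each video view, did the user go on to watch another?
# In reality this would come from the platform's analytics, not random numbers.
random.seed(0)
control = [random.random() < 0.40 for _ in range(10_000)]    # variant A: with mid-video ad (assumed rate)
treatment = [random.random() < 0.46 for _ in range(10_000)]  # variant B: ad removed (assumed rate)

def continue_rate(events):
    """Fraction of views that led to another video being watched."""
    return sum(events) / len(events)

rate_a = continue_rate(control)
rate_b = continue_rate(treatment)

print(f"with mid-video ad:    {rate_a:.1%} kept watching")
print(f"without mid-video ad: {rate_b:.1%} kept watching")
print(f"estimated lift:       {rate_b - rate_a:+.1%}")
```

Individual choices stay unpredictable, but at this scale the aggregate rates are stable enough that the platform can dial behaviour up or down by a few percentage points at will.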

So the problem, I think, really is the motivation of the social media developers, who mine toxic behaviour and habits for advertising revenue. Disinformation, arguments and hate all create hits and are therefore a lucrative commodity. Someone has to be willing to forgo that carrot.

The answer could be an open-source social media platform, à la Wikipedia. This would let users design, and therefore opt en masse for, less clickbait, less abuse and less disinformation in their social media community, and allow the users to do the fine-tuning for their own benefit.

In the meantime, Twitter could possibly do it very easily with a simple dislike button. If a tweet dips below a certain like:dislike ratio, it is hidden from feeds. The required ratio could be adjusted according to toxicity levels on the app or in the world.
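
That rule is simple enough to sketch. A toy version follows, where the 0.6 threshold and the example counts are invented purely for illustration; the threshold is the "required ratio" knob described above.

```python
def is_hidden(likes: int, dislikes: int, min_like_share: float = 0.6) -> bool:
    """Hide a tweet when likes make up less than `min_like_share` of all reactions.

    `min_like_share` is the tunable knob: raise it when toxicity on the app
    (or in the world) is high, lower it when things calm down.
    """
    total = likes + dislikes
    if total == 0:
        return False  # no signal yet, leave it visible
    return likes / total < min_like_share


# Made-up examples:
print(is_hidden(likes=900, dislikes=100))        # False: 90% liked, stays in feeds
print(is_hidden(likes=300, dislikes=700))        # True: heavily disliked, hidden
print(is_hidden(300, 700, min_like_share=0.2))   # False: threshold relaxed
```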

So as usual, just like with oil, tobacco, gambling, the culprit is greed. Really there is no excuse for the developers to not fix their social media.
 
I think at this point we can all say the recent rise of populism and authoritarianism is clearly linked to the rise of social media.

It's linked to the rise of those things, in large part due to the influence of grifters and state actors spreading misinformation and mind-warping people.

Of course, social media companies don't give a s**t, because toxic s**t is clickbait and it generates engagement, which in turn generates profits.

See also Zuckerberg's refusal to even acknowledge Facebook being used as a tool for the radicalisation of neo-Nazis and the dissemination of antisemitic propaganda.

We need legislation around how the algorithms work so people 'doing their own research' don't get sucked down rabbit holes and brainwashed, strict rules making providers responsible for the content on their sites, and even stricter rules on metadata and what it's used for.

People are gonna hate this last suggestion, but we also need a national firewall like China and Russia have (the two biggest spreaders of misinformation). They have those firewalls to stop their own citizens from being exposed to the kind of s**t they pull in other countries (namely the USA; see COVID misinformation, election hacking and many, many other examples).

The risk with the latter is that the government uses the firewall to brainwash us with propaganda, which is also what the Russians and Chinese do with theirs.
 

A national firewall is a terrible idea. Can you imagine how it would be used as a political football, with the Libs, ALP and Greens all having very different ideas of what constitutes truth and misinformation?

I don't trust any of our political parties to objectively monitor misinformation on a purely scientific basis.
 
A force for good needs a definition of what 'good' is. Is it the fomenting itself (the Arab Spring, QAnon), or the platform that allows such things? It would be very hard to magnify the good without breaking ideological principles to remove the bad, whatever that is.

One simple step I might consider for the town-square-type arrangement that Musk has alluded to: no hyperlinking or embedding. It would require users to stand on their own intellectual feet, remove some advertising and remove the rabbit holes for all sides. People never meet in the middle even when they can provide scientific resources, so why bother allowing it? It's more likely to sort the wheat from the chaff, and people will be forced to argue the conclusions inside their own heads.
 
So as usual, just like with oil, tobacco, gambling, the culprit is greed. Really there is no excuse for the developers to not fix their social media.
This suggests the disinformation and hate are a symptom of a less nefarious cause (i.e. advertising revenue). While I agree that chasing advertising dollars is part of what is driving the social media outcome, I think there are more nefarious drivers at play. I think the hate and disinformation are also intended by certain political players who don't care about advertising and money. It's not just an unintended outcome.

The dislike-to-like ratio is an interesting idea, but I'm not sure it would work in a highly partisan environment where most people aren't experts. Posts claiming the election was rigged would still get pretty close to a 50:50 like-to-dislike ratio.
 
I was more just thinking about how we regulate disinformation, but you have raised additional concerns: the problem of how social media companies collect and use big data. Clearly there needs to be regulation around this, and having regulators understand the algorithms is part of it.

Are national firewalls the only approach, or is a more global, uniformly applied approach also an option? If the former, what does this firewall look like? What regulation is applied to prevent disinformation? And is the firewall only about preventing disinformation, or is it also about controlling/constraining certain types of information, as in China and Russia? Paedophilia being an obvious one, but what else?
 
I think there should be tight regulation to ensure accuracy for anyone who has over 1000 followers on social media. If anyone with over 1000 followers posts things that they know to be lies, or pretends they have evidence (or fakes evidence) to support a view, then they should face consequences under the legal system, just as large media corporations do. Having a voice that can reach many people is a special privilege that social media has enabled, and with that should come responsibility.

At the moment it's a free-for-all where people with thousands of followers can post whatever they want without worrying about whether it's a lie or whether they have evidence, and this needs to stop.
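
As a sketch only: the 1000-follower cutoff comes from the suggestion above, while the function name and the idea of a post being "flagged as knowingly false" are assumptions added for illustration.

```python
FOLLOWER_THRESHOLD = 1000  # the cutoff proposed above

def needs_accuracy_review(followers: int, flagged_as_knowingly_false: bool) -> bool:
    """Return True if a post should be referred for legal/regulatory consequences.

    Small accounts are left alone; accounts above the threshold are treated
    like media corporations and held to an accuracy standard.
    """
    return followers > FOLLOWER_THRESHOLD and flagged_as_knowingly_false


# Made-up examples:
print(needs_accuracy_review(followers=250, flagged_as_knowingly_false=True))     # False: small account
print(needs_accuracy_review(followers=50_000, flagged_as_knowingly_false=True))  # True: large reach, known lie
print(needs_accuracy_review(followers=50_000, flagged_as_knowingly_false=False)) # False: no established falsehood
```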
 
Banning bots from posting opinions would be another obvious one.

Social media apps should not be able to sell data without clear, informed consent from users.

We need more egalitarian, rules-based social networks: networks with very clear, simple rules based solely around specific behaviour infractions and appeals to logical, evidence-based statements. Most social networks have extremely vague and inconsistent rules that are impossible to follow simultaneously and are at the whim of moderators, who end up being content controllers who push groups to hold certain views. This prevents open discussion and leads to partisanship. Social networks can't be defined by views; they should be defined by topics, not views. If they are defined by views, then knowledge sharing is closed off and debating becomes an infraction in itself.
 
Remove all news/activist content from the platform. Make it so a single user can’t broadcast to millions of followers. Facebook needs to go back to the days when you would log on and see posts from your friends in your feed.

Whatever is done needs to account for blowback by radicalised users. The measures need to affect lefties and righties in equal part. They’ll lose the activist content that’s keeping them radical, but will enjoy some short lived schadenfreude seeing their rivals shut down.
 
I think there should be tight regulation to ensure accuracy for anyone who has over 1000 followers on social media. If anyone with over 1000 followers posts things that they know to be lies, or pretends they have evidence (or fakes evidence) to support a view, then they should face consequences under the legal system, just as large media corporations do. Having a voice that can reach many people is a special privilege that social media has enabled, and with that should come responsibility.

Not a bad idea; the problem is… which jurisdiction? Every country needs to be on the same page. Otherwise… VPNs.
 
Remove all news/activist content from the platform. Make it so a single user can’t broadcast to millions of followers. Facebook needs to go back to the days when you would log on and see posts from your friends in your feed.
I like this idea. No news, no politics, no bullshit political pages like "aussies against destructive leftism" etc., no groups, and no followers; just private profiles and a feed of your friends' and family's content. Celebs can still get verified to prevent fraud, scams and impersonation, but they cannot be privately messaged by strangers or sent friend requests, and their content is only visible to their friends. Cap friends at 1000; who the * needs more than 1000 people they "know" on their profile?
 

I don’t. I like that I can see streams and videos from around the world.
This was useful during the BLM protests a few years back, or the Iran protests a few weeks back.
 
Okay fair enough, but here’s a question: did you like the comments sections on any of that content? If yes, why? If not, maybe the online toxicity can be drastically reduced by removing the comments section on political and news pages instead of removing the pages completely.
 
Whatever is done needs to account for blowback by radicalised users. The measures need to affect lefties and righties in equal part. They’ll lose the activist content that’s keeping them radical, but will enjoy some short lived schadenfreude seeing their rivals shut down.
Define "radicalised users".
 
I think there should be tight regulation to ensure accuracy for anyone who has over 1000 followers on social media. If anyone with over 1000 followers posts things that they know to be lies, or pretends they have evidence (or fakes evidence) to support a view, then they should face consequences under the legal system, just as large media corporations do. Having a voice that can reach many people is a special privilege that social media has enabled, and with that should come responsibility.
They don't, except in some very limited cases of defamation (and effective recourse to defamation law is even more limited in the major market, i.e. the US).
They never have, except in those very limited cases.
 
Why are you guys here if you don't appreciate anonymous free social media usage?
 
