Rage Bait and Other Problems with Social Media Algorithms

The algorithms that drive social media are problematic, and they are a direct product of the business model of the tech corporations that run the platforms. That business model is the same as other corporate media’s: grab as much of the user’s attention as possible and keep them on the platform as long as possible, generating revenue through the ads displayed to them. To do so, the platforms both gather data on users and manipulate content based on that data. These algorithms can be, and have been, skewed to favor certain content over others. This has been done both by the corporations themselves and by outside entities who have figured out enough about how an algorithm works to manipulate it from the outside. Outside manipulation was more common in the past; these days it’s more likely to be done in house. X/Twitter under Elon Musk is by far the most egregious example, but it is also done in more subtle ways on Facebook.

For the last few years, Facebook has run a system where users pay to have their pages boosted, and those who pay are often not the best of actors. What is referred to as “rage bait” is very common these days: content intended to provoke users into an angry reaction, to “trigger” them into commenting and getting drawn into an angry dialog that keeps them on the platform. As the election drew near this past year, the amount of rage bait on my feed reached absurd levels, all of it paid for by mysterious unknown entities. It contained both extreme right- and extreme left-leaning propaganda in a similar format. This fits the classical theory of disinformation: the goal is not to persuade but to sow division and confusion, creating a state where nobody knows what’s real anymore.
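The dynamic described above can be sketched in a few lines of code. This is a toy illustration, not any platform’s actual ranking code, and the weights are made-up assumptions; the only point is that once a feed is sorted by predicted engagement, and angry reactions and comments predict engagement best, rage bait wins by construction.

```python
# Toy sketch of an engagement-ranked feed. NOT any real platform's
# algorithm; the weights are hypothetical, chosen only to illustrate
# the incentive: reactions that generate replies score highest.
def engagement_score(post):
    return (1.0 * post["likes"]
            + 3.0 * post["angry_reactions"]   # anger provokes replies
            + 5.0 * post["comments"]          # replies mean time on site
            + 10.0 * post["paid_boost"])      # pay-to-play boosting

def rank_feed(posts):
    # Highest predicted engagement first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "cat_photo", "likes": 120, "angry_reactions": 1,
     "comments": 4, "paid_boost": 0},
    {"id": "rage_bait", "likes": 10, "angry_reactions": 40,
     "comments": 35, "paid_boost": 1},
]
feed = rank_feed(posts)
print([p["id"] for p in feed])  # the rage bait outranks the popular post
```

With these assumed weights the well-liked post scores 143 while the rage bait scores 315, so the feed leads with the content that makes people angriest.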
So Meta/Facebook generates revenue twice over: by accepting payments from these bad actors to disseminate their propaganda, and by using that propaganda as rage bait to engage non-paying users, capturing more of their attention and more ad revenue from it. The recent announcement that they are ending independent fact checking on the platform is just another step in a direction they were already headed. Disinformation is often done for profit, and Meta is simply being smart in a business sense by collecting some of that profit. That they can now curry political favor by going further in that direction is just a nice opportunity to be grabbed.

I should point out that I block out big chunks of Facebook with uBlock Origin and limit myself to an hour or so a day on social media. In spite of that, it is still annoying and gets worse daily. Facebook has always been disinformation central. It got absolutely unbearable in 2016 and 2017, then somewhat better when, for a while, the company decided that promoting baseless conspiracy theories and extremist propaganda wasn’t in the public interest and put that interest over its business interest. That didn’t last. It changed when they instituted their pay-to-play plan and Facebook started collecting fees to boost page posts from paying users. Floods of paid propaganda followed: first climate change denial posts, then, at the end of the COVID mandates, paid posts about the trucker “freedom convoy.” In both cases, I took the rage bait and comment-bombed the posts. I wasn’t the only one; that’s what usually happens if the page owners don’t actively delete the negative comments, and it keeps users engaged and the ad revenue flowing. I was smart enough to write just one comment and paste it into the comments of all the posts. That got me in trouble with the algorithm for spamming. Imagine that, getting a notice on your account for spamming spam. My sin was that I wasn’t paying them to spam. The rage bait slowed for a while, but the 2024 election brought it back with a vengeance. I fell into my old pattern of pasting comment bombs, but they didn’t seem to get the same reaction as before; the algorithm had caught on to my ways and was shadow banning me. In any case, I realized I was being baited and wasting my time, so I started blocking the rage bait pages. That also turned out to be a waste of time: blocking is itself a reaction, so I just kept getting fed more pages to block. The best solution might be to not use Facebook at all, but there is enough contact with my local community on the platform that I don’t want to do that yet.
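For anyone curious about the uBlock Origin approach, cosmetic filters along these lines do the trick. Treat the selectors below as a pattern rather than a working list: Facebook reshuffles its markup regularly, so the exact attribute values are illustrative and will go stale.

```
! uBlock Origin cosmetic filters (illustrative; selectors go stale
! whenever Facebook changes its markup -- verify with the picker tool)
! Hide the main feed entirely:
www.facebook.com##div[role="feed"]
! Hide the right-hand sidebar:
www.facebook.com##div[role="complementary"]
```

In practice, uBlock’s element-picker mode is the reliable way to build these: point it at the feed unit you want gone and it generates the current selector for you.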
I do still engage with one type of rage bait: the numerous pages that have been popping up recently that explain contrail science and trigger furious reactions from believers in the chemtrails conspiracy theory. Since debunking chemtrails is one of my causes and I’ve written some solid material on the subject, I paste links to my articles in the comments sections of both the contrail pages that debunk the conspiracy theory and the chemtrail cult pages that proselytize it.

The algorithm dishes out plenty of positive and neutral content on my feed as well. It has categorized a lot of my interests, but, as my previous post, Facebook Groups Vs. Internet Forums, states, there’s a negative side to this too: it means the Meta corporation has been collecting intelligence on me, even with my minimal use of the platform, and knows all about my hobbies and interests even though I never consented to give them that information. All they care about is getting your attention, and whether that means using the dossier they have on you to trigger you into a rage-filled dialog or engaging you with a post about your favorite singer, writer, actor, or gadget, it’s all the same to them. The problem is they shouldn’t be collecting that information, nor pushing rage bait content on you, without your explicit consent in the first place. If you’re in the mood for an online argument, you should seek out the content you want to argue about on your own initiative, not have some remote computer code push that argument in your face whether you want it or not. That, in a nutshell, is the basic problem with social media algorithms and the business model that fuels them.

In an ideal world, social media would be the online version of going into a cafe or pub and having a public discussion. Nothing would boost or suppress what was talked about, and a topic would fly or die on its own. There would be some limits, of course.
If you get into a fight in a public establishment and start shouting and calling other people names, you get booted out. That’s what social media moderation should do as well: it’s not about what’s talked about, it’s about behavior. Corporate social media is headed in the opposite direction, flooded with fake accounts, manipulated discussion, and trashy propaganda. With the recent announcements from Facebook regarding AI bot accounts and reduced moderation, it’s only going to get worse.

