Or will AI lead to even more charges of bias and unsafe social media platforms?
There are plenty of stories out there claiming that Facebook, LinkedIn, and Twitter are biased or promoting certain social agendas over others. I’m not here to debate whether that’s accurate, legal, or morally right for them to do if they are. In my opinion, a private individual can choose what they allow in their living room; likewise, a privately owned company (or a publicly-traded commercial corporation, at the behest of its Board of Directors) can do whatever the Constitution and the law allow it to do. We, the people and consumers of their products, can put our own values into action by refusing to use or buy what they’re selling. Or, we can highlight the things we find unconscionable or annoying, and push back for change.
I think that the “big three” — Facebook, Twitter, and LinkedIn — have grown too big for their britches, too fast. All in the name of profit. They have created monsters that have proliferated too far and fast to keep shoving them under the bed and into the closet.
It’s not that they’re not trying. It’s that they should have listened to wise counsel from those who did “social media” back in the 1980s and 1990s, and built real human moderation into their plans and platforms from the start. AI can only do so much. If you have ever used an Interactive Voice Response (IVR) system on the phone, or an infuriatingly bad chatbot as a substitute for human support, you know that AI is pretty bad at understanding us when we want, sometimes desperately, to be understood. When you post a long review on Amazon, or take the time to thoughtfully fill out a free-form text response on a survey, you imagine a human being taking the time to read it. But odds are good that they never will. Before a human can be tasked with reading the mind-numbingly large volume of reviews or survey responses that come in, they must be sifted, sorted, and categorized by programs that are only as good as the programmers and analysts who “train” them.
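To make the point concrete, here is a minimal, hypothetical sketch of that kind of triage — a crude keyword filter deciding which responses a human ever sees. This is not any platform’s actual pipeline (real systems use trained models, and the term list here is invented for illustration), but the failure mode is the same: the filter is only as good as whoever wrote the rules or curated the training data.

```python
# Hypothetical triage sketch: split free-form responses into those flagged
# for human review and those auto-filed, based on an assumed keyword list.
# Real moderation pipelines use trained classifiers, not hand-picked words.

ESCALATE_TERMS = {"refund", "broken", "unsafe", "lawsuit"}  # invented rule list

def triage(responses):
    """Return (flagged_for_human_review, auto_filed) lists of responses."""
    flagged, auto_filed = [], []
    for text in responses:
        # Naive tokenization: lowercase and strip trailing punctuation.
        words = {w.strip(".,!?").lower() for w in text.split()}
        (flagged if words & ESCALATE_TERMS else auto_filed).append(text)
    return flagged, auto_filed

reviews = [
    "The product arrived broken and I want a refund.",
    "Love it! Five stars.",
    "Honestly, this thing feels unsafe around kids.",
]
flagged, auto_filed = triage(reviews)
print(len(flagged), len(auto_filed))  # 2 flagged, 1 auto-filed
```

Notice how easily this breaks: “No refund needed, nothing broken, works great!” gets flagged, while sarcasm or a misspelled complaint sails through to the auto-file pile — which is exactly why algorithms are no match for people who don’t particularly want to be understood.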
Algorithms are still no match for humans who don’t particularly want to be understood. I think that the data science team ought to include native language linguists, rhetoricians, and behavioral psychologists.
I’m going to share with you two things that happened to me, this week. There are many more egregious examples, but these clearly highlight failures in AI — I don’t imagine for a second that any humans were involved.
I first published this on my own blog, at https://jahangiri.us.
Let me make it very clear from the start: I appreciate it when friends fact-check what I post. I don’t appreciate long, drawn-out, circular, unproductive “political debates” with people who think politics is a sport, but I do appreciate friends who keep me honest on those rare occasions when I post utter rubbish on Facebook. It happens to us all, sometimes.
Assuming we’re breathing.
A few days ago, I posted a link — a direct link to a primary source of information — Donald Trump, Jr’s own tweet that so comically led people to mock him for calling Governor Abbott a Democrat. Governor Abbott, as any Texan knows, is not a Democrat. He’s not much of a governor, either, but that’s not the issue right now. Here’s the original tweet, in Donald Trump, Jr’s own words (I’m including the image for those who don’t have Twitter, and if you click on it — assuming he hasn’t deleted it — it will take you straight to the original source.)
I live in Texas. I’d love to see Ted Cruz resign; he’s a disgrace. Wanting to “cancel” Ted Cruz shouldn’t even be a partisan matter at this point, and he should be thoroughly investigated for his role in the January 6 insurrection at the US Capitol. Three major Texas newspapers have called for Ted Cruz to resign.
Texans died during the winter storm and massive power grid failures in February. Texas Republicans and Democrats were fairly united on one thing: They were not impressed when Cruz fled the disaster area to the warmer climate of Cancun, Mexico. They weren’t impressed when he tried to blame his poor judgment on his 10- and 12-year-old daughters, claiming he was “trying to be a good dad” by taking them to Cancun during a school break. They were even less impressed when his wife’s texts, complaining of the bitter cold and inviting friends to join them at the Ritz-Carlton in Cancun, at just $309 a night, were “leaked” by a friend. Apparently, no one wanted to be seen traveling with them during the pandemic.
It seems that Ted Cruz didn’t learn anything about damage control from Paul Ryan’s photo op in a soup kitchen, or from Trump’s tossing paper towels at Puerto Ricans after Hurricane Maria, because he staged his own photo op.
Anyway… here’s where the “fact checkers” come in: They marked my post “partly false.”
It’s important to understand that the only thing I wrote was a sarcastic, “Who knew Gov. Abbott was a Democrat?” as an intro, with a direct link to Donald Trump, Jr’s own tweet. I certainly did not claim that Governor Abbott was a Democrat. And I only linked to Donald Trump, Jr’s own words. I’ll cop to sarcasm and snark, here, but not to lying:
So, how is this “partly false”? Here’s what they have to say, when you click “See Why”:
Had I written a whole news article claiming that Donald Trump, Jr was an idiot who truly believed that Governor Abbott was a Democrat, as opposed to being an angry weasel who cannot write a proper sentence and shouldn’t be allowed to wield an apostrophe, then I could see their point. I certainly wouldn’t argue that Donald Trump, Jr. was an eloquent rhetorician who brilliantly expressed what he was trying to say in his tweet. I also wouldn’t argue, as some have uncharitably done, that he looked high when he posted it. He clearly needs the likes of writer Alexis Tereszcuk to decipher for us mere mortals what he was trying, inadequately, to say.
But apparently, it’s not my annoyance over his laughable ineptitude and misuse of the English language that bothered the “independent fact checkers” Facebook uses. It is mind-bogglingly weird that they seem to be fact-checking Donald Trump, Jr’s own tweet, in his own words, to say that he didn’t mean to say what he clearly said and still hasn’t bothered to delete.
I’m assuming he’s allowed to delete his own tweets? He is a private citizen, right — not a government official using Twitter to conduct official business, as his father and so many other politicians seem to think is appropriate? He could say, like Britney Spears, “Oops, I did it again!” and write whatever the hell it was he actually meant to write. That is, if he didn’t mean to write what he did write, rather than…
It’s like one of those Escher drawings.
I sent in an appeal, but only as a matter of principle. I told Lead Stories that I truly don’t care if they ever remove the “False Information” overlay — it’s funnier this way, and it highlights the inadequacies of social media’s attempts at automated content moderation. I would applaud their efforts, but there is still so much truly dangerous misinformation being spread online about COVID19, so much racist, misogynistic, hate-filled garbage, so many fake and fraudulent accounts, so many bait-and-switch advertisers — so many other, more important things that Facebook deliberately ignores despite repeated reports — that I wrote, “It calls the credibility and worthiness of your own efforts into question, when there is so much more false information on Facebook, so much dangerously misleading bunk, that I submit reports on, only to be told, ‘This does not violate our Community Standards.’”
By far my most popular post ever on LinkedIn:
2,363 views and climbing! Did I strike a nerve? Before this, my “most popular post” was a meme showing Jimmy Carter building houses, and that only had a little over 300 views.
The content I reported to LinkedIn was a graphical “poster,” an image composed of text, claiming that all lockdowns in the USA would end on March 1, that mask wearing and vaccines would be “optional,” and that high-risk individuals could simply choose to stay home, at the mercy of everyone else. It was not expressed as an “opinion” of what ought to be.
I know that Facebook has automated ways of text-mining images — with accuracy that’s almost frightening in its implications. I would assume that LinkedIn has access to the same tools:
Rosetta: Understanding text in images and videos with machine learning — Facebook Engineering
I know about this because I once posted a word cloud with hundreds of words — including one completely made-up word — and it took Facebook Search only about 29 seconds to process it and return results. I was torn between amazement and dismay. I can think of some good reasons to use this technology, and I can think of ways it can be abused and used on the unsuspecting.
I was curious just how good the technology was, so I played cat and mouse with it, one day, just for fun. I was able to thwart it by using image layers and adjusting opacity. But I imagine the research will catch up to that, eventually.
It may have done, already. Remember when facial recognition was new and shiny and we could all play with it in Picasa, or Paintshop Pro, or Facebook? It’s still there, and it’s getting better all the time. But I think that the average user’s access to it has been somewhat limited as its capabilities prove too good. While this may be wise, and helps to prevent some forms of online vigilantism, that jinni has escaped the lamp. It’s there, if you know where to look.
So why wasn’t LinkedIn able to recognize that the image I reported was in clear violation of their rules?
You would think that these social media platforms — with populations rivaling those of the largest countries, and CEOs richer than 90% of the population — would be able to do a better job of both AI and human moderation than they do. Instead, they rely on an overworked and traumatized human workforce and some ham-fisted algorithms. It’s not that they don’t try; it’s that they didn’t take their responsibility seriously enough from the start, or simply weren’t willing to invest the money to do a better job of it.
In 2015, Stefania Milan wrote, in “When Algorithms Shape Collective Action: Social Media and the Dynamics of Cloud Protesting”:
I found that the infrastructure dramatically configures people’s options and ends up steering collective action in problematic ways. In fact, “there is a difference in what the cloud wants and what Facebook can give” (Leistert, 2013b). By enabling only some forms of engagement and positive affectivity, social media “facilitat[e] a web of positive sentiments in which users are constantly prompted to like, enjoy, recommend, and buy as opposed to discuss and critique” (Gerlitz & Helmond, 2013, p. 15).
She went on to say that today’s “‘communicative capitalism’ produces a political discourse that may be ‘free’ but is also devoid of political potency (Dean, Anderson, & Lovink, 2006).”
Is it any wonder that they’re scrambling to keep up now? On the one hand, a riled up membership is an active and engaged one, with more users and more eyeballs for the advertisers. On the other hand, an audience interested in LOLCats and recipe exchanges and sharing their parenting challenges is far easier to moderate. A “no sex, politics, or religion” rule might be overly restrictive, but where does one draw the line?
The pandemic has added new challenges. Lies and mud-slinging have become an expected part of the rancorous political landscape. But now, social media platforms are inundated with dangerously misleading pseudo-science, from vaccine hoaxes to quack cures for COVID19. Forget the Tide Pods challenge — people have died thinking their fish tank cleaner tablets would cure a deadly virus. To the people who think that’s funny —
When it comes to “free” social media platforms, we are the product. We willingly — without a full understanding of the implications — traded our personal information: at minimum, our names, locations, and browsing habits — in exchange for a platform that let us keep up with family, friends, and neighbors. Our privacy has already been sold to the lowest bidders, if not given away or hacked.
I think we have a right to expect better and more equitable, even-handed enforcement of the terms we all agreed to when we joined these platforms. While some would argue in favor of absolute freedom of speech, freedom of speech has never been absolute. The whole reason it is enshrined in the US Constitution is to protect us from being thrown in a deep, dark hole or drawn and quartered for criticizing our government. Really, that’s it.
Freedom of speech comes with the right to say some pretty unpalatable and reprehensible things, but only our government has to put up with that nonsense. Our family, friends, neighbors, and companies we work for or buy from do not. My living room is not your public forum. Walmart’s produce department is not your public forum. One might argue that, by virtue of size and access, not to mention the way they’ve sleazed their way into our lives to become “essential” and “necessary” adjuncts to how we live and shop and communicate and pay bills, Facebook, Twitter, and LinkedIn are “public forums.” But that still doesn’t mean that anyone other than the government has to put up with divisive and mean-spirited rhetoric.
And freedom of speech, even then, has public safety limitations: You can’t yell “Fire!” in a crowded theater unless there’s a fire. There are laws against telling lies in marketing and advertising claims — speaking of laws that ought to be better enforced. Likewise, no one should be allowed to deliberately spread false information about health and healthcare, or elections. And this bears repeating, loudly and often:
You can’t incite a mob to violent insurrection and claim “freedom of speech.”
I would argue that social media platforms have a duty to prevent harm, not to focus solely on figuring out what makes us open our wallets and stay engaged 24/7. And they are doing a poor job of it. I don’t think that’s malicious intent; I simply think they got too big for their britches, too fast, and pretty soon — as any parent could have warned — they’ll be too big for anyone to tell them “No.”