Text messaging has become a way for many organizations to notify their members of issues of importance. Automated "application to person" messaging campaigns can be highly effective. They have been widely used to get out the vote, to notify people of COVID-19 vaccination opportunities, and to advocate for particular issues. Automated texting can be especially effective for organizations that lack large media budgets.
Automated text messaging sounds great for businesses or organizations trying to communicate with their customers or members. But it also poses risks. What if wireless networks get overloaded with spam, or if messages contain malware? One answer is a service called The Campaign Registry, which enables businesses and organizations to register their messaging campaigns and have them vetted. The vetting process establishes a trust score: the higher the score, the higher the message throughput to customers.
What if we had a similar trust scoring system for our person-to-person and social media communications? We've learned how various messaging systems and social media platforms were used to incite and coordinate the January 6 insurrection. We've seen the ongoing negative impacts of mis- and disinformation spread through social media. We've also seen the deplatforming, or banning from social media, of individuals for spreading misinformation and inciting violence, most prominently former President Trump. So, what if our social media and text messaging platforms used a trust scoring system similar to The Campaign Registry's?
Given the sheer volume of messages and social media posts, the vetting process would have to be automated. A computer algorithm could create something like the Campaign Registry’s trust score. Perhaps it could also generate a truth score to assess the veracity of the message or post. If the score is too low, the message wouldn’t be allowed on the texting service or on social media. Perhaps the trust feature could allow some reputational tracking over time, to establish the reliability of posts from a particular person. Again, the higher the score, the higher the throughput of the messages.
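To make the idea concrete, here is a minimal sketch of how such a score-gated system might work. Everything in it is hypothetical: the threshold, the reputation formula, and the throughput cap are invented for illustration, not taken from The Campaign Registry or any real platform.

```python
from dataclasses import dataclass

# Hypothetical values -- not drawn from any real registry's rules.
TRUST_THRESHOLD = 0.3      # below this combined score, the message is blocked
REPUTATION_WEIGHT = 0.2    # weight of the newest score in the running average


@dataclass
class Sender:
    """Tracks a sender's reputation over time, as the article suggests."""
    reputation: float = 0.5  # start new senders at a neutral midpoint

    def record_score(self, message_score: float) -> None:
        # Exponential moving average: recent behavior counts most, but a
        # long history of reliable posts isn't erased by a single message.
        self.reputation = ((1 - REPUTATION_WEIGHT) * self.reputation
                           + REPUTATION_WEIGHT * message_score)


def allowed_throughput(sender: Sender, message_score: float) -> int:
    """Return allowed messages per minute; 0 means the message is blocked."""
    combined = (sender.reputation + message_score) / 2
    if combined < TRUST_THRESHOLD:
        return 0  # score too low: not allowed on the service
    # The higher the score, the higher the throughput, capped at 60/minute.
    return int(combined * 60)
```

The sketch makes the article's questions tangible: the threshold and the averaging formula are exactly the places where a developer's choices, and any biases behind them, would silently decide whose messages get through.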
But who would develop the algorithms for vetting messages? Would the vetting process be confidential, or would it be transparent and open to public review? If your messages received a low score, or were even banned, would there be an appeal process? Would that appeal process be transparent, and would it also be automated?
The truth scores and the compliance practices could, one hopes, cut down on hateful messaging as well as disinformation campaigns. But the scores would be subject to any biases, intentional and unintentional, that the developers build into the algorithms. We already know that artificial intelligence programs often discriminate against women and people of color, despite the intentions of the developers. But what if there were also intentional bias? What if one of the companies developing the algorithms were aligned with a particular political viewpoint?
Democracy was always intended to be a grassroots endeavor in which each person has a voice. We've seen what happens when those voices lend themselves to disinformation. But what happens if some of those voices are stifled by automated screening processes?
Just imagine the many ways that screening systems used by social media or messaging services could be manipulated to the benefit of one particular partisan point of view. Just imagine the barriers to speaking out that could result from these systems. Just imagine how the integrity of such systems could be assured without transparency. Finally, just imagine how these systems could be used to undermine the very notion of fair elections by those who are prone to raising conspiracy theories when election results don't go their way.
* * *
“Truth is the property of no individual but is the treasure of all men.” – Ralph Waldo Emerson