To be clear, Twitter’s design is what primarily determines its culture and user experience: which tweets you’re most likely to see (for example, those from people you follow) and which ones you’re less likely to see (for example, those from people who follow you).
Unfortunately, even a carefully curated feed can contain a lot of unwanted content: semi-interesting dunks on other tweets and provocative but possibly false tidbits, mixed in with genuinely interesting and timely nuggets. The recommendation algorithm that decides the order in which these items appear has a profound effect on whether the user is delighted, confused, or bored.
But “opening up” that algorithm could bring people a lot more unwanted content.
It probably wouldn’t, as some have suggested, make it easier for other platforms to compete with Twitter. The recommendation algorithm is likely a variation on a standard model that data scientists learn in school, not genuine intellectual property. And the main obstacle to competition is Twitter’s network, not the way it is moderated. It’s almost impossible to get enough people hooked on a new platform to create a great ongoing conversation (just ask Donald Trump).
In any case, the vast majority of Twitter users will not be able to make use of open source code, because the code is often difficult to read even for those who wrote it. Trust me.
However, there is a third group that might be eager to see Twitter’s open source code: people hoping to use it to game the system and make their tweets go viral. They, too, may be disappointed, because using the code will require real-time metrics that they won’t have. They will not be able to see the constantly evolving data that the code is based on.
Still, the open-source code could reveal the kinds of things that cause a specific tweet to be promoted. It may turn out, for example, that the number of followers you have matters, or that the number of retweets in the last four minutes matters, or that what counts is the product of those two things. But even armed with that information, it would take a lot of work, and possibly a lot of money, to game the system and make a tweet go viral. Even so, once the code is opened up and explored, expect its weaknesses to be exploited for exactly that kind of gaming.
Given that Musk has so far only floated the idea of open-sourcing the code, it’s unclear how much of Twitter’s algorithms would be made public. Would the release include things that might be considered code but could also be considered “data,” such as the list of words or phrases that automatically trigger censorship or account suspension? Such a list would be especially interesting to people who want it shortened or eliminated altogether. I suspect this is exactly the conversation Musk wants to foster in the name of free speech. If so, it will be a fight.
If the platform opens up to more use of offensive language, that probably won’t improve the atmosphere for the average user. It will primarily appeal to those seeking attention and outrage: Twitter users like Musk himself.
This column does not necessarily reflect the opinion of the editorial board or of Bloomberg LP and its owners.
Cathy O’Neil is an opinion columnist for Bloomberg. She is a mathematician who has worked as a teacher, hedge fund analyst, and data scientist. She founded ORCAA, an algorithmic audit firm, and is the author of “Weapons of Math Destruction.”