This was quite a fitting time to discuss the reality of politics moving increasingly into the digital sphere, for better or worse. Fears of Russian interference in the 2016 election reached a fever pitch this past week, with Congress grilling tech executives about their role in allowing misinformation and bot spam to spread on their platforms. As the citizens of a nation conduct their lives online, it’s only natural that politics evolves to meet them where they are, ruthlessly leveraging social media and digitized information to mobilize voters toward an intended outcome.
The internet has created a new set of marketing industries, like SEO (search engine optimization), narrowly tailored social media marketing, and the phenomenon of “influencers,” whose professional and personal lives blend seamlessly. Barack Obama was perhaps the first presidential candidate to take advantage of the social media revolution, targeting millennials in particular on platforms like Facebook and Twitter in his 2008 and 2012 campaigns. Back then, social media was seen as a force for good in politics, keeping to a mission of transparency and increased engagement with constituents. Little did we foresee that by bringing political campaigning online, we’d be opening our democratic infrastructure to attack on an unprecedented scale. Before the internet, attacks generally had to pass through traditional media gatekeepers, or involve physically tampering with voting booths. Now, all it takes is an internet connection for anyone, including non-citizens, to spread political information. And thanks to the viral nature of online echo chambers, that information can have drastic effects on the outcome of an election.
Even domestically, campaigns are adapting to a digital reality in which information is the most important commodity. By analyzing the online behavior of potential voters, campaigns can build deeply personalized models and target voters in ways they’re likely to respond to positively. Subtle additions to the news feeds citizens rely on for information can alter behavior, which both private industry and political campaigns use to their advantage. Since targeted advertising relies heavily on psychological tricks, it raises important ethical questions about how these advertisements should be labeled. Should an Instagram influencer be allowed to show off a company’s product without disclosing the funding they receive in exchange? These questions become even more critical in political campaigning. Without appropriate labeling, money is essentially able to buy votes: whoever controls the money can purchase the most advertising, and platforms are all too happy to take it.
Along with targeted advertising, the second disconcerting aspect of the 2016 campaign was the spread of fake news. Some of it may have been tied to large-scale political disruption campaigns, but much has been attributed to entrepreneurial individuals driving clicks to their websites for ad revenue. The question is whether platforms like Facebook and Twitter are willing to do anything about it. After all, more users and more engagement are traditionally good things they can pitch to advertisers, right? Sure, they could take steps to combat misinformation, but corporations don’t act out of the goodness of their hearts. With increasing Congressional scrutiny, however, companies would rather take voluntary steps to quell the tide than risk burdensome regulations being imposed on them. It’s under this calculus that Mark Zuckerberg announced during Facebook’s earnings that the company would take greater steps to combat fake news and bot accounts, even at the risk of sacrificing profits. If Facebook does take a greater role in such efforts, we run another risk: a corporation deciding who and what is real, perhaps an even more dystopian scenario as we live out our lives on these platforms. For now, I think the best approach would be to algorithmically flag questionable content and present a warning to users, harnessing artificial intelligence to inform while avoiding the pitfalls of outright censorship.
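To make the flag-and-warn idea concrete, here is a minimal sketch of what such a pipeline might look like. Everything in it is hypothetical: the watchlist-based `toy_score` stands in for a trained classifier that a real platform would use, and the `threshold` value is arbitrary. The point of the design is that content scoring above the threshold gets a warning attached rather than being removed.

```python
# Toy sketch of a flag-and-warn content pipeline (illustrative only;
# a real system would use a trained model, not a word watchlist).

def flag_questionable(posts, score_fn, threshold=0.8):
    """Attach a warning flag to posts whose score exceeds the
    threshold, instead of deleting them outright."""
    results = []
    for post in posts:
        score = score_fn(post)
        results.append({
            "text": post,
            "score": score,
            "warning": score >= threshold,  # warn the reader, don't censor
        })
    return results

# Hypothetical stand-in scorer: fraction of words from a small watchlist.
WATCHLIST = {"shocking", "hoax", "exposed"}

def toy_score(post):
    words = post.lower().split()
    hits = sum(w.strip(".,!?") in WATCHLIST for w in words)
    return hits / max(len(words), 1)

flagged = flag_questionable(
    ["Shocking hoax exposed!", "City council meets on Tuesday."],
    toy_score,
    threshold=0.5,
)
# flagged[0] carries a warning; flagged[1] does not.
```

The separation between the scoring function and the flagging policy is deliberate: the model can be swapped or retrained without changing the user-facing warning behavior.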