The Longest Now

Forging Social Proof: the Networked Turing Test Rules the First AI War
Friday September 25th 2020, 2:51 pm
Filed under: citation needed, fly-by-wire, Uncategorized

A few years ago I wrote about how our civilization was forfeiting the zeroth AI war — allowing individual attention hacks, deployed at scale, to diminish and displace our natural innovation and productivity in every society. We gained efficiency in every area of life, then let our new wealth and spare time be absorbed by newly efficient addictive spirals.

Exploit culture

This war for attention shapes what sort of society we can hope to live in. Channeling so much wealth to attention-hackers, and to the networks of crude AI tools and gambling analogs that support them, has strengthened an entire industry of exploiters, allowing a subculture of engineers and dealmakers to flourish. That industry touches on fraud, propaganda, manipulation of elections and regulation, and more — all of which influence which social equilibria are stable.

The first real AI war

Now we are facing the first real artificial-intelligence war — dominated by entities that present as avatars of independent, intelligent people but are in fact artificial, scripted, and automated.

What is new here? Earlier low-tech versions of this required no machine learning or programming: they used the veil of pseudonymity to fake authorship, votes, and small-scale consensus. In response, we developed layers of law and regulation around those earlier attacks — fraud, impersonation, and scams are illegal. AI can smoothly scale such attacks to millions of comments on public bills, and to forging microtargeted social proof in millions of smaller group interactions online. Yet these scaled attacks are often still legal, or only lightly penalized and rarely enforced.
