Originally published on Medium
Facebook’s political ad announcement today is by no means perfect, but it is a good step forward, and it certainly avoids the dangerous consequences we publicly warned against when Twitter and Google rashly changed their policies late last year. The announced changes increase transparency around disclosure and user experience, but represent a continuation of the status quo on targeting — and that’s a good thing.
Further restrictions on targeting options, like those Google implemented, throw the baby out with the bathwater, hurting new candidates: those who lack reality-star-level social media followings or bottomless checkbooks. The dollars would still flow, but to other, darker places with limited transparency, as numerous outlets and experts have noted.
Google’s new limitations on political advertisements won’t stop digital strategists from buying on their properties, but it will likely drive more dollars toward platforms with less ad transparency to the general public.
Misinformation and factual accuracy are the real issue. Microtargeting may make it harder to spot lies, but it’s a red herring: the discussion and attention paid to it far outstrip its actual effect. As Facebook’s Rob Leathern noted today:
“our data actually indicates over 85% of spend by US presidential candidates on Facebook is for ad campaigns targeted to audiences estimated to be greater than 250,000.”
Misinformation appears in both advertising and free organic posts, published by malign foreign actors but also by the average Jane. Facebook has taken steps to limit or label ‘fake news’ and to ban deepfakes, but it can and should do more to fact-check political content. It’s certainly a thorny technical and policy problem, but an incredibly necessary one to address. TFC’s cofounder, Jessica Alter, shared some ideas in TechCrunch in November, including adding a “nutrition label” to ads that discloses their funding, targeting, and the like, and warning users when ads have not been fact-checked.
Idiosyncratic and inconsistent policy decisions across the major digital communication platforms (Facebook, Google, Twitter, TikTok, and off-platform ad exchanges) make for, at best, a confusing landscape for all participants, leaving voters struggling to assess and consume political content online. Thoughtful, holistic, and universal legislation is needed to define the obligations of platforms to limit misinformation and to give our civic discourse a consistent, easily navigated roadmap for how political information will be communicated in the central medium of our age. In other words, this is the job of the government, not the platforms, and so far the government has not lived up to that responsibility.