A 21st Century Tech Challenge: Net Persuasion

Short URL: https://is.gd/NetPersuasion

Recent documentaries such as PBS’s “Hacking Your Mind” and Netflix’s “The Social Dilemma” have highlighted the challenges that social media and similar technologies present. They suggest that abuses of our rapidly advancing technology may lead (and in some cases have led) to polarization, violence, and the devolution of democracy and civil discourse. Civil unrest, autocratic takeovers of democratic societies, ecological disaster, and economic decline are among the projected impacts.

Does this level of existential threat really exist? What forces are driving us in that direction? And what paths forward might reduce or eliminate these negative outcomes?

The Business Model: Advertising

Two of the “big five” tech corporations (Apple, Microsoft, Amazon, Alphabet and Facebook) -- Alphabet and Facebook -- are primarily advertising funded; the others are developing this and similar revenue streams. The advertising revenue model builds on selling access to users, with the actual customers being the entities that pay for that access. A few aspects of advertising are important to note, since these have evolved into extremely sophisticated tools with the emergence of the Internet:

· Advertising must reach an audience -- historically referred to as “impressions” or “eyeballs”.

· Advertising must influence the actions of some portion of the audience, ideally in a measurable way.

Big tech’s domination of the advertising market, as evidenced in revenue, is proof that these channels have become very effective. This is more impressive when you consider that the “price” per impression is measured in cents, and the “delivery” component of a specific campaign might run only in the hundreds of dollars. A quick look at Facebook’s Ad Library shows many ad placements (same images, same text) targeting a few thousand individuals with total costs under $100 -- while the entities initiating the ads have placed tens of thousands of different ads. The key is to deliver a well-focused message to specific individuals and, ideally, to monitor the results to determine the next step needed to trigger the desired action.

A decade ago, the idea of customizing ads for very small target groups, delivering them to just those groups, and monitoring the results in real time was inconceivable. Today it is the norm. With big data it is possible to accumulate millions of data points on every online individual. Alphabet has 30 gigabytes of data about me. Compare this to the Library of Congress: 40 million books at about 300 pages and 3,000 characters per page comes to roughly 36 thousand gigabytes. But Google (an Alphabet subsidiary) has 4 billion users, so that is more than 3 million times the Library of Congress book collection worth of data. Facebook has similar data repositories on its 3.14 billion users. Note that an individual does not need to be a user of these websites to be tracked and “served” by their advertising engines. Non-users who are on the Internet are tracked via embedded icons such as Facebook’s “Like” button, and by more subtle methods as well. It is reasonable to assume that all 4.6 billion Internet users worldwide have databases of tracking data about them, though duplications occur, and resolution to the level of name, address, and photo may not exist for each.
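A quick back-of-envelope check of that arithmetic, sketched in Python (the 30 GB per person and 4 billion user figures are the ones cited above; one byte per character is assumed):

    # Back-of-envelope check of the data volumes cited above.
    GB = 10**9                              # bytes per gigabyte (decimal)

    per_user = 30 * GB                      # ~30 GB of data per person
    users = 4 * 10**9                       # ~4 billion Google users

    # Library of Congress: 40 million books x 300 pages x 3,000 characters.
    loc_bytes = 40_000_000 * 300 * 3000     # ~3.6e13 bytes
    print(f"Library of Congress: {loc_bytes / GB:,.0f} GB")   # ~36,000 GB

    ratio = (per_user * users) / loc_bytes
    print(f"Tracking data vs. the LoC: {ratio:,.0f}x")        # ~3.3 million

The exact multiplier matters less than its scale: millions of Libraries of Congress.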

To accomplish the inconceivable, it is necessary to take that mass of data and transform it into a profile useful for targeting and influencing each of those 4+ billion people. This is where Artificial Intelligences (AIs) come into play. Detailed analyses of most individuals have been developed that identify classic demographic data (age, residence, gender, race, income level, etc.) but can also infer significantly more, including political orientation and personality type [e.g. Psychological targeting as an effective approach to digital mass persuasion]. These data provide the basis for targeting individuals with messages that are tuned for them specifically and delivered at a time and place that maximize the impact [e.g. NY Times series on "One Nation, Tracked", Dec 2019, special section 26 Jan 2020]. An advertiser might place highly relevant car ads before an individual searching for repair facilities, encourage a vote for a candidate as an individual reads relevant news articles, or perhaps discourage voting at all if that is the advertiser’s objective. AIs can then track the subsequent actions and determine the success of a given approach, evolving a better model of each individual. The “Social Dilemma” documentary personifies the AIs, showing the real-time approaches available to engage and influence an individual. That dramatization, with three experts manipulating a single person, implicitly shows how this technology has made the previously impossible routine: the equivalent of multiple persons assigned to every individual, 24/7, monitoring and nudging that individual into continuous engagement.
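To make the loop concrete, here is a minimal, hypothetical sketch of profile-based message selection with feedback. It is not any platform’s actual system; the profile fields, trait weights, and message variants are all invented for illustration:

    # Hypothetical sketch of the targeting loop: infer traits, pick the
    # message variant predicted to work best, observe, update the model.
    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        inferred_politics: float            # -1.0 (left) .. +1.0 (right), inferred
        neuroticism: float                  # 0..1, inferred personality trait
        history: dict = field(default_factory=dict)  # variant -> observed response

    # Each variant is "tuned" to an inferred trait (invented weights).
    VARIANTS = {
        "fear_framing":    lambda p: p.neuroticism,             # anxious users
        "ingroup_framing": lambda p: abs(p.inferred_politics),  # strong partisans
        "neutral_framing": lambda p: 0.3,                       # baseline
    }

    def pick_variant(p: Profile) -> str:
        # Blend the trait-based prior with any observed responses.
        def score(v: str) -> float:
            prior = VARIANTS[v](p)
            return 0.5 * prior + 0.5 * p.history.get(v, prior)
        return max(VARIANTS, key=score)

    def record(p: Profile, variant: str, clicked: bool) -> None:
        # The feedback that "evolves a better model of each individual".
        old = p.history.get(variant, 0.5)
        p.history[variant] = 0.9 * old + 0.1 * float(clicked)

    user = Profile(inferred_politics=0.8, neuroticism=0.7)
    chosen = pick_variant(user)     # "ingroup_framing" for this partisan profile
    record(user, chosen, clicked=True)

Every click feeds back into the per-person model, which is exactly the monitoring-and-nudging cycle the documentary dramatizes.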

To summarize the mandate for advertising-funded channels: the company must track as many persons as possible, ideally engaging and even dominating each person’s attention. With a clear profile of the individual, it then sells that attention to advertisers with little or no concern about the objectives or impact of the advertising. There is competition in this market: Big Tech players compete to expand their user base, then to monopolize the attention of each user, and finally to provide the highest-paying advertiser access to each targeted individual. These companies must try to acquire any prospective competitor, not just to capture the ad income it might provide, but also to capture the increased user base it attracts.

Bottom of the Brainstem

One of the key voices sounding the alarm in this arena is Tristan Harris, a former Google Design Ethicist and co-founder of the Center for Humane Technology. In one of his early TED presentations he observes that the keys to capturing an individual’s attention are fear and outrage. These triggers get the attention of our pre-cognitive brain, and a cascade of events follows. We tend to “sound the alarm”, sharing the content with our online communities and triggering a similar response on their part. The jolt of chemicals makes us feel alive and useful to our tribe of contacts. The AI then queues up other, likely more disturbing, content to reinforce and continue the engagement, combined with paid content that leverages our response to trigger the desired action. This natural human process leads to what Harris calls “the race to the bottom of the brainstem”: a downward spiral of engagement that builds on the strongest emotional responses possible, taking full advantage of the comprehensive view of each individual’s personality and “hot buttons”.
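The incentive is easy to demonstrate. Below is an illustrative toy ranker that optimizes only for predicted engagement; the items, arousal scores, and weights are invented, but the objective function is the point:

    # Illustrative "race to the bottom of the brainstem": a feed ranker
    # whose only objective is predicted engagement. All numbers invented.
    items = [
        {"title": "Local charity exceeds goal",    "arousal": 0.20, "relevance": 0.9},
        {"title": "THEY are coming for your town", "arousal": 0.95, "relevance": 0.6},
        {"title": "Quiet policy update",           "arousal": 0.10, "relevance": 0.7},
    ]

    def predicted_engagement(item, outrage_weight=0.8):
        # Emotional arousal (fear/outrage) dominates the score for users
        # the model has flagged as reactive -- calm, relevant content loses.
        return outrage_weight * item["arousal"] + (1 - outrage_weight) * item["relevance"]

    feed = sorted(items, key=predicted_engagement, reverse=True)
    for item in feed:
        print(f'{predicted_engagement(item):.2f}  {item["title"]}')
    # The outrage item ranks first -- not because anyone chose "evil",
    # but because the objective function rewards it.

Nothing in that sketch is malicious; the spiral emerges from the metric being optimized.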

A second aspect of engagement is maximizing each individual’s time within the given environment. In addition to the fear and outrage appeals to our survival and security motivators[i], many platforms take advantage of higher-level motivators. Our desire to belong to groups, real and virtual, along with the reputation and recognition these provide, triggers our reward systems and reinforces our recurring use of that environment. There is a term for this when it becomes irresistible: “addiction”[ii]. Concern about “tech addiction” dates back to television and video games -- environments targeting “the masses”. Today’s personalized, real-time connection takes this to a whole new level. Broadcast media has always sought to maximize the number of viewers, with highly popular content the key to that success. Narrowcasting seeks out like-minded communities for given content or ads. There is not yet a term for casting to a single individual, but that is where we are, and what these channels provide. A big-tech delivery channel can combine a truly personalized mix of content tailored to match your specific interests, hot buttons, keywords, personality type, demographics, recent web searches, last YouTube viewing, and today’s credit card purchases. The good news is there is no one else in the world just like you. The bad news is that these corporations know exactly what that means and want you as part of their product offering. (Remember, in advertising, you are the product, not the customer.)

It is critical to realize this is not a conscious human process, on either side of the screen. “Hacking Your Mind” points to the significant research confirming that we make most decisions on “auto-pilot”, again using the pre-cognitive brain. On the other side of the screen is a far more formidable player: the AI. Don’t get caught up in visions of Terminator robots, or even generally “intelligent” machines. Just consider the black-box capability of learning to beat every human at chess or Go, to beat humans at Jeopardy, and in general to excel at each specific task where massive data and “deep learning” can yield unforeseeable effectiveness. Billions of dollars of corporate revenue are tied to the success of the “team” of AIs that profile you, track you, engage you, put you in the sniper sights of the highest bidder for your attention, monitor the impact, and line up the next shot. This is not “evil”, it is just “efficiency”, and it adheres to the corporate profit objective. It is also worth pointing out that this may not be in the interest of the users (i.e. the products), but it is in the interest of the customers (i.e. the advertisers).


[i] https://www.simplypsychology.org/maslow.html as of 5 Oct. 2020

[ii] https://health.clevelandclinic.org/is-it-possible-to-become-addicted-to-social-media/ as of 7 Oct. 2020

E Unum Pluribus

(Out of One, Many)

What could possibly go wrong?

As with most technology, every aspect of net persuasion has the potential for good or evil. These tools can be used to nudge individuals toward lives that are longer, healthier, and better, per Thaler and Sunstein’s book “Nudge”. One can envision beneficent governments, non-profits, religious institutions, or others applying the emerging tools and methods toward these ends. But it doesn’t pay: so far, the return on investment for transforming the world for the better has not provided a revenue stream or results sufficient to justify it. That does not mean that governments and other institutions are not fully engaging with the net persuasion opportunity. They are.

The 2018 book “LikeWar: The Weaponization of Social Media”[i] describes how ISIS applied these methods to recruit soldiers and disperse its opponents as it slaughtered its way into Iraq. Active social media campaigns have become a mainstay “communications” (aka propaganda) approach for military operations over the last decade. Myanmar’s military used social media[ii] both to justify genocidal actions and to incite the primarily Buddhist majority to violence against their neighbors. Procter & Gamble wants to sell you soap. Russia wants to discredit and destroy democracies[iii]. The channels, AIs, and personalization available via the tech giants are the same for all paid-content suppliers. The web-savvy ones combine paid content with an arsenal of additional opportunities: creating thousands of fake personas, fictitious group web sites, and social media pages, then polluting the net persuasion environment with these. Even Procter & Gamble does not have the dedicated resources to research, test, and deploy messages to match nation-state players. Some of these players are targeting specific countries, elections, candidates, protest movements, conspiracy theories, disinformation campaigns, health advice, economic conditions, social divides … in short, any opportunity to promote their objectives. “Hacking Your Mind” describes a different dynamic in China[iv], where these tools and techniques are used to drive for social coherence[v]. There are no doubt pros and cons to the comprehensive application of these methods in China, but at least it is the resident government seeking to implement its objectives, not a third party seeking to disrupt the society.

But wait, there’s more: the truth doesn’t count. Unfortunately for humanity, the factors that engage us are neither rational nor tied to veracity. Fake news travels faster than factual content[vi]. It is more likely to be endorsed and shared within the increasingly isolated communities of users. For any one topic there are many fake alternatives for each factual one, and a given target group will respond better to one fake than another. For serious disinformation campaigns, a combination of well-crafted web sites, high-sounding titles, press releases and published articles, or even fake scientific journals provides excellent accessories for a well-dressed campaign. AI plays its part as well, creating “deep fakes”[vii] -- combinations of audio, video, and imagery that can fool most of the people most of the time. Repetition helps establish our confidence that something is true. “Hacking Your Mind” also demonstrates the impact of “social proof” (everyone believes it) and how quickly children come to identify with a group, projecting positive characteristics onto their own group and negative ones onto the “other” group[viii]. Subtle prepping[ix] of a target group can alter their perspective on a wide range of issues, and the effect can be reversed only to a limited degree. Indeed, misinformation may be almost impossible to correct once it is embedded in the psyche of a social group[x].
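A toy branching-process model illustrates the compounding behind that finding[vi]. The share probabilities here are invented; only the dynamic -- a small per-exposure edge becoming a much larger cascade -- is the point:

    # Toy model: false news needs only a modest edge in per-exposure
    # sharing to build a far larger cascade. All parameters invented.
    def cascade_size(share_prob, reach_per_share=10, hops=6):
        """Expected audience after `hops` rounds of resharing, assuming each
        exposed person shares with probability `share_prob` to
        `reach_per_share` new people."""
        audience, frontier = 1.0, 1.0
        for _ in range(hops):
            frontier *= share_prob * reach_per_share
            audience += frontier
        return audience

    true_story = cascade_size(share_prob=0.10)   # plausible, unexciting
    false_story = cascade_size(share_prob=0.15)  # novel, outrage-inducing

    print(f"true:  {true_story:,.0f} exposed")   # ~7
    print(f"false: {false_story:,.0f} exposed")  # ~31, over 4x in six hops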

The symbiosis of big-tech AI personalization and advanced behavioral-science[xi] methods of persuasion creates a perfect storm. The tech players have strong economic incentives to reach every person on Earth, and some are actually deploying devices, networks, and even satellites to accomplish this. They have the incentive and capability to segment the 7 billion of us into a variety of buckets, formed on the fly to match the targets of a given advertising campaign. At times an advertiser might want to address all persons searching for mid-price-range trucks; at other times, despondent folks looking for guns; or perhaps disaffected voters registered in political party “x” in region “y”, to prod them to the polls or discourage that action. The paid-content suppliers can then apply their research, propaganda expertise, and focus groups to select the message most likely to succeed with each group -- potentially down to the individual level -- and follow up in real time with secondary messages that further advance their cause. AIs will be applied by all of these players toward their objectives. What AIs have demonstrated is that they can learn and become more effective, increasing the potential for success and the reduction of individual agency.
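Conceptually, a “bucket formed on the fly” is just an arbitrary predicate evaluated against every profile in the database. A minimal sketch, with hypothetical field names and the example campaigns drawn from the paragraph above:

    # Illustrative on-the-fly audience segmentation. The profile fields
    # and campaign definitions are hypothetical.
    users = [
        {"id": 1, "region": "y", "party": "x",
         "recent_searches": ["mid-price trucks"], "voted_last": False},
        {"id": 2, "region": "y", "party": "x",
         "recent_searches": ["garden tools"], "voted_last": True},
        {"id": 3, "region": "z", "party": "w",
         "recent_searches": ["mid-price trucks"], "voted_last": True},
    ]

    # Campaigns supply a predicate; the platform returns the matching bucket.
    campaigns = {
        "truck_ad": lambda u: any("truck" in s for s in u["recent_searches"]),
        "voter_suppression": lambda u: (u["party"] == "x" and
                                        u["region"] == "y" and
                                        not u["voted_last"]),
    }

    def build_audience(name):
        return [u["id"] for u in users if campaigns[name](u)]

    print(build_audience("truck_ad"))            # [1, 3]
    print(build_audience("voter_suppression"))   # [1]
    # The same machinery serves the soap-seller and the state actor alike.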

The result is an accentuation of polarization in many countries. Different communities get different messages from different interests, messages that reinforce their tribal identities. This produces the “filter bubbles” we find, and the associated breakdown in civil discourse: different groups are working from different facts. Polarization combines with, and can exacerbate, social divides along economic, racial, religious, ethnic, and other lines -- whether perceived or real. Professional Internet war-fighters are preparing the ground with sites, groups, bots, sock-puppets, trolls, fellow travelers, “news” outlets, and more, to take advantage of any spark or at-risk community that may surface. In various countries this has already led to demonstrations, violence, and at least diminished trust in institutions.


[i] Singer, P.W. & Brooking, E.T.; LikeWar: The Weaponization of Social Media, 2018

[ii] https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html as of 5 Oct. 2020

[iii] https://intelligence.house.gov/social-media-content/ and https://www.intelligence.senate.gov/press/senate-intel-committee-releases-bipartisan-report-russia%E2%80%99s-use-social-media and https://www.rand.org/content/dam/rand/pubs/research_reports/RR2200/RR2237/RAND_RR2237.pdf as of 5 Oct. 2020

[iv] https://www.pbs.org/show/hacking-your-mind/extras/season/2020/ as of 5 Oct. 2020

[v] https://en.wikipedia.org/wiki/Social_Credit_System as of 5 Oct. 2020

[vi] https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308 as of 5 Oct. 2020

[vii] https://www.partnershiponai.org/a-report-on-the-deepfake-detection-challenge/ and the classic example: https://www.buzzfeed.com/craigsilverman/obama-jordan-peele-deepfake-video-debunk-buzzfeed as of 5 Oct. 2020

[viii] https://www.pbs.org/video/why-cant-we-all-get-along-fyreky/ as of 5 Oct. 2020

[ix] https://www.pbs.org/video/what-did-scientists-rise-trump-study-reveal-vjzrft/ as of 5 Oct. 2020

[x] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7090020/ as of 5 Oct. 2020

[xi] https://www.pbs.org/video/new-discoveries-being-made-behavioral-sciences-nmn5gj/ as of 5 Oct. 2020

Paths into or out of the woods

There are massive economic and political engines driving toward societies shaped by a dystopian net persuasion. The leading corporations in the persuasion game have no incentive to interfere with this trajectory. Without intervention, things will get worse. Which of the threatened disruptions -- economic, social, political, environmental, or other -- might emerge first, or at all, remains open. The prognosis does not look good.

Fortunately, many entities are engaging to counter aspects of this challenge. In the fall of 2020 we are seeing increasing investment in addressing the “fake news” challenge from universities, non-profits like AARP and Consumer Reports, and even the US government, with NSF a primary sponsor of the “Hacking Your Mind” documentary. Clearly, widespread education is critical -- the public and policy makers need to be exposed to, if not well informed about, the potential impacts involved. Where universities and non-profits offer paths that help individuals protect themselves, these need to be vetted, made visible, and widely propagated.

The role of government is a key point of discussion. Corporate economic interests conflict with individual interests, and a traditional role of government is to protect individuals in this situation. Key policy objectives must be driven by the need for individuals to be informed and able to pursue their own self-interest. Clear and accurate identification of content suppliers may be part of this. Individuals need to be able to identify and control the information collected and inferred about them, since this is the core asset of the advertising economy. Existing and potentially new regulatory involvement is likely to be needed, and it must include persons expert in the related technologies and science. Tools for validation of content and sources should be funded and deployed, along with education for the public. Existing policies such as Section 230 of the Communications Decency Act, net neutrality, tax incentives/disincentives, and consumer protection need to be reviewed in this context. We must have the knowledge and tools to be as informed and aware as we wish, and be able to retain our privacy, agency, and ultimately our free will.

What's a person to do? SIFTing your response!

The sources above come with a number of recommendations and points of discussion. Here is a summary of key ones:

  • Consumer Reports points to the SIFT process developed by Mike Caulfield, Washington State U. I have included links to many of Mike's expansions on the concepts below.

    • S - Stop --

      • If you have a strong emotional response to online content, start your thinking responses. This corresponds with the recommendation from "Hacking Your Mind" and "Nudge" to choose to move from your auto-pilot/reflexive response mode to your slow/reflective response mode. The suggested steps are:

    • I - Investigate the source --

      • Hold your cursor over links to see where they really lead ("hover"), and check the domain name: .gov is the US government, .ru is the Russian Federation, .edu is an educational institution -- and some are open to spoofing. Try this with the URLs above for examples. (It took a while to find a .ru site with the search "visiting Russia", even though many of the listed sites are clearly, and appropriately, sponsored by Russian entities that have acquired other domain names.) A small programmatic version of this check appears in the sketch after this list.

      • Check Wikipedia -- search the URL + "Wikipedia" to find the relevant Wikipedia page. (This works for sites like IEEE.org, but not for JimIsaak.com ... woe is me.)

      • Even applying the "stink test" (Judge Judy) is useful: if it is too good to be true, it probably isn't true; ditto if it triggers fear or outrage -- the key "click bait" tactics content providers use to get your attention and trigger your reptilian brain (auto-pilot).

    • F - Find better coverage

      • Check sources you do trust, and maybe something credible on the other side of the political divide. (Remember: polarization is both an effect of the current environment and the objective of some of the content providers.)

      • It's fun to be "the one" who first lets your friends and contacts know of breaking news. This is one of the dopamine triggers that encourages your reptilian brain to act without thinking. Note that "dopamine" starts with "dope"; don't be one -- go back to "Stop".

    • T - Trace Claims, Quotes, and Media to the Original Context

  • AARP has a set of resources, including recommended sites for fact checking.
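As promised under "Investigate the source", here is a minimal sketch of a programmatic hover: pull the hostname out of a link before trusting it. It uses only Python's standard urllib.parse; the annotation table is illustrative, and look-alike or spoofed domains still require human judgment:

    # Minimal "investigate the source" helper: where does a link really lead?
    from urllib.parse import urlparse

    def inspect_link(url: str) -> str:
        host = urlparse(url).hostname or ""
        tld = host.rsplit(".", 1)[-1] if "." in host else "?"
        notes = {"gov": "US government", "ru": "Russian Federation",
                 "edu": "educational institution (US)"}
        return f"{host} (.{tld}: {notes.get(tld, 'unverified -- investigate further')})"

    print(inspect_link("https://www.pbs.org/show/hacking-your-mind/"))
    print(inspect_link("https://www.intelligence.senate.gov/press/..."))
    print(inspect_link("http://secure-login.example.ru/bank"))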

Going deeper, check the RAND report on Tools that Fight Disinformation Online, which curates a large list of tools serving various purposes online.