This is the apparent price of “smart speakers”, “smartphones” and devices in our cars that monitor our activities, contacts and preferences, and make these personalized profiles available to any interested buyer[i].
John Brockman has put together a series of engaging essays in his compilation “Possible Minds: 25 Ways of Looking at AI”[ii]. The inspiration for this is Norbert Wiener’s “The Human Use of Human Beings”[iii] (1950/54). The diverse set of contributors provides a snapshot of their thinking on the risks, opportunities, benefits or banality of emerging AI technology.
A few address the potential governmental abuse of widespread surveillance outlined in Orwell’s 1984[iv], or variations on that theme. In most cases they cast an intelligent machine in the role of “Big Brother”. Steven Pinker’s essay, “Tech Prophecy and the Underappreciated Causal Power of Ideas”, asserts that “technologically advanced societies have long had the means to install Internet-connected cameras in every bar and bedroom. Yet that has not happened…” Professor Pinker limits his view to governmental surveillance in the United States. He seems to ignore the pervasive cameras, microphones and other tracking systems in every smartphone, automobile and many home entertainment devices. (“Yes, Siri and Alexa, we are talking about you, which is why your ears are burning.”) The massive data collection associated with these devices, along with every browser search, email, Facebook post or Tweet, is collected, collated and analyzed by AI technology -- but it sits in the hands of a few corporate entities, not the U.S. Government (note: your government may differ), excepting of course the U.S. NSA, which is chartered to collect a narrower set of information related to international interactions. The zettabytes of data[v] accumulated by these corporate interests are available to U.S. government agencies by warrant, and the application of this information is available for purchase by foreign governments, as well as any private enterprise, in the form of “ad placements”.
So we have a far more comprehensive monitoring system than Orwell anticipated -- privatized, and with the added value of sophisticated profiles that go beyond the disclosed information to include personality types[vi], voting preferences, sexual orientation, and purchasing and travel patterns[vii]. While the selection criteria might not allow these specific terms, it is likely that requests to present paid content to groups such as the following would be possible:
· Disgruntled conservative voters in swing states,
· Disassociated persons of color in swing states who might vote,
(these previous two being specific targets of Cambridge Analytica[viii] in 2016)
· Teenage girls about to run away from home,
· Alienated sociopaths who have recently posted violent rhetoric and purchased assault weapons.
The intentions of the ad purchaser might be benign, evil, or simply a pragmatic effort to persuade the targeted individuals towards some action. The interventions of a foreign government[ix] with ill intent might have very different impacts from the interactions of social workers. Basically, the entities selling access to select communities like these have little concern for, or obligation to consider, the intentions or impact of the focused messages they deliver.
A disturbing battle has formed in Massachusetts this fall (2020) over “Ballot Question 1: Right to Repair”[x] legislation. The objective is to force auto manufacturers to use open data formats so independent repair shops can use their telematic system data. With autos having dozens of sensors and computers producing “little red lights” that indicate a problem -- in some cases a problem that prevents a car from passing inspection -- the need for this data to resolve issues is clear. Interestingly, the pushback against the proposal is based on access to the personal data -- names, phone numbers, addresses, driving patterns, etc. -- that these systems also collect. The auto manufacturers argue that because they collect and may use the buyer’s personal data, with limited control if any by the buyer, the buyer should not allow third parties to access that data. Apparently it has not crossed the minds of the public, or of the advocates for “right to repair”, that the assumed “right to invade privacy, collect and share personal data” claimed for automobiles is itself a point of concern. It is unclear whether it will be possible to purchase a car in the future that does not monitor all of these things and, via wireless connections, share them with government, insurance companies, advertisers -- and inadvertently with stalkers and ne’er-do-wells who wish to take control of your car and disable your brakes at high speed[xi]. No doubt the government, with appropriate warrants for some values of the term “government”, could use this to apprehend persons who fall into their categories of “criminals”.
This is 2020, with AIs already involved in the collection, analysis and effective application of data. Most of the authors in Possible Minds get caught in the “AGI Trap”. This is not to discount the possibility, or even the proximity, of AGI, where an AI has as-good-as-human capability to learn and to apply that knowledge to objectives that may be established by the AI and not by associated humans. However, as the above examples indicate, AI poses risks well before we reach that tipping point. Consider an AI that is as effective at persuading humans as a dedicated AI is at playing chess or Go. Persuasion is the objective of advertising, which is the driving force for many of the world’s largest corporations. An observation attributed to Abraham Lincoln is at best three hypotheses: “You can fool some of the people all of the time, and all of the people some of the time, but not all of the people all of the time.” Pinker asserts that “ideas matter … a healthy society … allows information sensed and contributed by its members to feed back and affect how society is governed.” He notes that historically speech/meme suppression has been the tool of despots, while free speech through effective channels has been a saving grace, and that “…technology’s biggest threat to political discourse comes from amplifying too many dubious voices rather than suppressing enlightened ones.” In effect, he asserts that the un-fooled folks can stem the tide of AI-personalized, monitored and executed persuasion messages. He applies this to AGIs, and presumably even the AI of 2024 will also fail to overcome the viral effectiveness of “white hat memes”. The new Lincoln-ian hypothesis needs to be: “can we fool a sufficient number of the people a sufficient amount of the time?” Certainly our most profitable corporations, along with some number of government entities, are developing the AIs for this task.
Two contributors, Danny Hillis and Sandy Pentland, suggest that human-AI hybrid systems are a possible direction for the future, and exist in some form already. Institutions already have many of the characteristics of AIs, with humans acting as “neurons” and creating results that may not reflect the intentions, directives or best interests of management, stockholders, citizens, customers or other communities. Hillis identifies paths where hybrid AI-nation-states (CyGovs?) or AI-corporations (CyBiz?) attain or compete for global domination (noting that this is already the case, with future machine intelligence to be added to the mix). Pentland identifies a serious flaw in our current and future control systems: the disruption created by propaganda and fake news. His team has identified a process that selects for “culture” (communities) in evolution, addressing one of the open questions in evolutionary biology. When individuals “social sample” (observe common behaviors in their community) and apply a “reflective attitude” (“is this right for me?”), the result is superior decision making that benefits both the individual and the community. This parallels the challenge of training AIs: assuring that they are “feeding” on sufficiently diverse, high-quality data -- leading to concerns about misleading information and decisions, intentional or otherwise.
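The intuition behind social sampling with a reflective attitude can be made concrete with a toy simulation. Everything below -- the signal accuracy, the sample size, and the "defer only when the crowd is decisive" rule -- is an illustrative assumption of mine, not a detail from Pentland's research; it merely shows why observing a community and then reflecting can beat acting on a private signal alone.

```python
import random

def simulate(n_agents=1000, sample_size=9, p_signal=0.6, seed=42):
    """Toy model: each agent must pick the 'right' option (encoded as 1).

    - Private signal: correct with probability p_signal.
    - Social sample: observe sample_size peers who acted on private
      signals alone.
    - Reflective attitude: adopt the crowd's choice only when it is
      decisively one-sided; otherwise keep the private signal.
    Returns (individual_accuracy, social_reflective_accuracy).
    """
    rng = random.Random(seed)

    def signal():
        return 1 if rng.random() < p_signal else 0

    individual_correct = 0
    social_correct = 0
    for _ in range(n_agents):
        own = signal()
        individual_correct += own  # 1 is the correct option

        peers = [signal() for _ in range(sample_size)]
        votes_for_1 = sum(peers)
        # Reflective step ('is this right for me?'): only defer to the
        # community when at least two-thirds of the sample agree.
        if votes_for_1 >= 2 * sample_size // 3:
            choice = 1
        elif votes_for_1 <= sample_size // 3:
            choice = 0
        else:
            choice = own
        social_correct += choice

    return individual_correct / n_agents, social_correct / n_agents
```

With these assumed parameters, agents using their private signal alone are right about 60% of the time, while the sample-then-reflect agents do noticeably better -- a small illustration of why a community "feeding" on diverse, independent observations can outperform isolated individuals, and why that advantage collapses if the sampled behaviors are themselves corrupted by propaganda.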
Possible Minds contributor Stuart Russell provides the analogy of a bus driver, accelerating towards a cliff, comforting the passengers: “…trust me, we will run out of gas before we get there.” Again, he is responding to those who assert AGI is not possible. But his analogy applies to any scenario where an AI might have a catastrophic impact on humanity, or even just on liberal western culture. There are more cliffs ahead of the bus, and steering targeted communities over some of them is an explicit intent of some well-funded players in this game.
George Church’s essay compares the emergence of intelligence in machines with that in humans. He notes that we have a tendency to discount the intelligence of other species, and even that of other groups of humans, and asks if we are ready to grant “human” rights to all sentient beings. Being an expert in genetic engineering, he also points to the parallel enhancement of Homo sapiens, along with hybrids at the individual level, expanding the various economic-social divides that already exist in society. He quotes William Gibson: “The future is already here, it’s just not very evenly distributed.”
A regular theme of the contributors is the need for quality software engineering with safety, human values and ethics at the core. Unfortunately, we face regular failures of systems -- in data security, “bot armies”, and physical control systems from cars to airframes -- that indicate corporations have not made these criteria part of their priorities. A demonstrable proof point: the NCEES[xii] dropped U.S. licensing of “Software Engineers”, with all of the standards, body of knowledge and testing in place, for lack of demand. If corporations from finance to manufacturing are willing to accept the impact of “just hack it until it works”, we can’t expect more from companies developing advanced AI systems -- “Move fast and break things”[xiii] is not the recommended approach for systems that many professionals consider existential threats.
The closing interview is an excerpt in which Stephen Wolfram points to the need for “purpose” in both human and computer intelligence, to assure that there are defined goals. He notes that humans develop purpose from their experience and culture, a background machines lack. His dystopian concern is the loss of purpose in humans; he points to a singularity where humans, immortalized in machines, spend their eons playing video games. His vision is to make sure every child is trained to code, and is therefore able to provide direction and purpose for machine intelligence and contribute value in our emerging brave new world. We can only hope that these empowered youth will also embrace the ethics and appreciation of truth that will direct applications toward the benefit of humanity.
[i]Eisenstat, Y; TED, 17 Aug 2020; https://www.ted.com/talks/yael_eisenstat_how_facebook_profits_from_polarization?language=en
[ii] Brockman, J.; 2019. Possible Minds: Twenty-Five Ways of Looking at AI. Penguin Press.
[iii] Wiener, N. (1950). The human use of human beings: cybernetics and society. Houghton Mifflin.
[v] https://www.nodegraph.se/how-much-data-is-on-the-internet/, accessed 13 Sept 2020
[vii] Thompson, S.; Warzel, C.; “Twelve Million Phones, One Dataset, Zero Privacy”; NY Times, 19 Dec 2019; https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html
[ix] NY Times coverage, August 2020 https://www.nytimes.com/2020/08/18/us/politics/senate-intelligence-russian-interference-report.html
[xiii] Facebook’s motto until 2014 https://en.wikipedia.org/wiki/Facebook,_Inc.#History