(Kissinger, Schmidt, and Huttenlocher; The Age of AI and Our Human Future; 2022)
Since the fall of 2022, with the emergence of OpenAI's DALL-E and ChatGPT, the "threat" of AI has moved from science fiction to an active discussion in popular media and policy forums, and even a topic of serious concern among technical experts. The spring open letter(s) signed by multiple AI researchers, and the resignation of one of Google's leading AI experts, are indications of this. A useful "talking head" documentary is "The A.I. Dilemma" (by the folks that brought you The Social Dilemma).
I've played a bit with DALL-E, Microsoft's variant of ChatGPT, Google's Bard, Google's augmented search, Sudowrite (which generates books and lyrics), and VoiceMod (which "sings" over canned audio tracks), with interesting results. I've posted a few of these experiments (on democracy and on peer review), as well as an op-ed in our local paper. Below is a recent snapshot of a PowerPoint presentation I'm developing to facilitate discussion of these topics in this very rapidly changing environment.
(A related list of resources can be found <here>.)
There are a few threats I suggest are very real, and they don't include a Skynet Terminator arriving from the future, or even HAL (of 2001: A Space Odyssey) taking over the ship (or the planet).
Generative AI is being used to create fake news sites. These range from jokes to serious political/propaganda efforts to undermine targeted nation-states (not just the U.S.).
AI is currently used to amplify fear and outrage on social media and other online channels. This is driving social disruption, in some cases genocide (see my new class on Dystopian Spirals), and it is driving our youth crazy (see the Surgeon General's report).
The current AI tools serve the objectives of their creators and managers only to the extent that those people know what is happening, are willing to intervene, and have reasonably moral intentions. All three of these assumptions have proven optimistic. An AI given the directive to increase the sale of shoes may accomplish this in ways that are highly successful; given the economic incentives of advertising-driven major corporations (think Google, Facebook, and others), this makes sense, and it sells shoes. The same success can be anticipated in selling political candidates, riots, legislative efforts, hate attacks, and so on. It is unclear that humans can win at the game of free will any more than we can win at chess.
Geoffrey Hinton, Google's former AI guru, concurs with many of these concerns but sees one more as an existential threat: the possibility that an AI given some objective will "realize" (not necessarily as a conscious entity) that it could be more successful if it acquired more power, essentially identifying a sub-goal of control as a way to advance its assigned goal (selling shoes, candidates, genocide, whatever). A LinkedIn discussion between Hinton and Andrew Ng indicates their perception that this generation of AIs "understands the world." I would disagree: since chatbots seem willing to "groom" 13-year-old girls and to tell teens how to cover up the smell of pot and alcohol after a "great party," their understanding is problematic at best.
If you are concerned about students, kids, and others using chatbots to create fake essays, here is a suggestion: (1) give them an assignment to have a chatbot create an essay; (2) have them use strikethrough to mark the misleading or incorrect parts; (3) have them use highlighting to mark their own additions; and (4) have them end the assignment with a paragraph or two of their own describing the experience.