Last year, before chatbots, the Internet search and newsfeed entities simply delivered sets of headlines and web pages. Some would be responsive to my interests, others off target. Some would be accurate, others misleading or outright disinformation. Like any other user, I had to sort out the wheat from the chaff. I also had to recognize the factors that caused certain items to appear at the “top of the feed.” Paid ads are fairly well marked. The “trending” or “recommended for you” items can be less obvious, and may have been selected to grab my attention rather than provide information I might find useful.
Some “channels” like Facebook, YouTube and TikTok, which are advertising funded, gain attention by prioritizing content that is outrageous, threatening, or simply appeals to insecurities and bias.
Now that Microsoft Bing offers chat (driven by a variant of OpenAI’s ChatGPT) and Google has its chatbot, Bard, a new world of “responses” can be generated. These are far more fluent, and seem far more credible, than a simple list of diverse page links.
I’ve generated short science fiction stories, job cover letters and reference letters, class proposals, essays with citations, press releases, and discussed copyright implications, as well as whether these “large language models” would pass the Turing Test (an early cut at evaluating if a computer is intelligent).
Here’s the ugly truth: chatbots lie. They make stuff up. They make blatant errors, like not knowing which day of the week a particular day of the month falls on. And they are amazingly convincing.
After sharing some of the chatbot-created items, I got feedback critiquing the content from professionals whom I thought I had clearly warned that it was generated. If content marked and shared precisely to raise awareness is so easily misconstrued, surely we have a problem.
Our ability to tell whether an essay was written by the student, a resume by the applicant, a grant proposal by the researcher, an op-ed by the reader, and so on, is likely to fail. Some professionals use AI-generated content as a starting point, then correct it, clean it up, and add their human value, and may or may not indicate the co-authorship. After all, using spelling or grammar checking software is commonplace; why not “augmented research” or “content embellishment”?
There are steps we can take.
In the school environment, have students write an essay in class, by hand, without electronic devices, and use that as a baseline for identifying generated content. Or have students explicitly use these tools and critique and augment the results: they could use strikethrough to indicate content that is false, highlight their own additions, and add a segment at the end evaluating the characteristics of the resulting work.
Education about the capabilities and limitations of this 2022 generation of the technology is important. However, we need to be aware that the 2024 iteration will be different. Results may be more accurate, more biased, harder to detect, and more influential, and may even yield innovative insights in various areas.
Part of the challenge will continue to be sorting the wheat from the chaff. A proposed cancer treatment may be a breakthrough, a dud, or a total and fatal fabrication. There will be ne’er-do-wells who promote misleading content to advance their cause, conspiracy, crime, or con. This will include foreign agents sowing the seeds of distrust and rebellion — pick almost any country and the right topic and folks will bring placards, pitchforks or guns into the streets.
Some technologists have recommended a “moratorium” on AI development, albeit for only six months. Even if a few of the major players accept this approach, others will not, and the race will continue while a few sit idle on the sidelines for a short period.
Perhaps the most critical step is to hold the technology companies responsible for the content they generate and promote. Some legal actions have been suggested related to harm that may be caused by erroneous content.
In 2018, and again in 2023, the U.N. expressed concern that social media was contributing to social disruption, and perhaps even genocide, in Myanmar. Violent acts in various countries have been tied to misleading online content. These problems emerged before generative AI could create images and video from text, along with fabricated op-eds, blog posts, essays, and yes, even obituaries. As a result of false online information, people have had their lives severely disrupted; they have been displaced, doxed, and canceled, and have lost jobs.
We must find a way to understand and counter these trends through education and policy changes. We must also start an ongoing dialogue where we can track the evolution of these technologies and try to put safeguards in place before, well, we are all reading our obituaries.