Generative AI Regulation

The following op-ed was published in the Union Leader in April 2023.

IT STARTED when I was reading my obituary and found not only had I missed my memorial service, but it was held on Saturday, Jan. 22, 2023 — which was a Sunday.

Last year, pre-chatbot, Internet search and newsfeed entities simply delivered sets of headlines and web pages. Some would be responsive to my interests, others off target. Some would be accurate, others misleading or outright disinformation. Like any other user, I had to sort out the wheat from the chaff. I also had to recognize the factors that caused certain items to appear at the “top of the feed.” Paid ads are fairly well marked. The “trending” or “recommended for you” items can be less obvious — and may have been selected to grab my attention rather than to provide information I might find useful.

Some “channels” like Facebook, YouTube, and TikTok, which are advertising-funded, gain attention by prioritizing content that is outrageous, threatening, or simply appealing to insecurities and bias.

Now that Microsoft Bing offers chat (driven by a variant of OpenAI’s ChatGPT) and Google has its chatbot Bard, a new world of “responses” can be generated. These are far more fluent, and seem more credible, than a simple list of diverse page links.

I’ve generated short science fiction stories, job cover letters and reference letters, class proposals, essays with citations, and press releases, and I’ve discussed copyright implications as well as whether these “large language models” would pass the Turing Test (an early cut at evaluating whether a computer is intelligent).

Here’s the ugly truth: Chatbots lie. They make stuff up. They make blatant errors — like not knowing what day of the week is tied to a particular day of the month. And they are amazingly convincing.
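
As an aside, that particular class of error is trivially checkable by conventional software. Here is a minimal Python sketch (my own illustration, not part of the published op-ed) that verifies the day of the week for the obituary’s date using only the standard library:

    from datetime import date

    # The obituary claimed the memorial service was held on
    # "Saturday, Jan. 22, 2023". A calendar lookup shows otherwise.
    service_date = date(2023, 1, 22)
    print(service_date.strftime("%A"))  # prints "Sunday"

A deterministic check like this never guesses; a fluent language model, composing text word by word, often does.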

After sharing some of the chatbot-created items, I got feedback critiquing the content from professionals whom I thought I had clearly warned that it was generated. If marked content, shared to raise awareness, is so easily misconstrued, surely we have a problem.

The ability to sort out whether an essay was generated by the student, a resume by the applicant, a grant proposal by the researcher, an op-ed by the reader, and so on, is likely to fail. Some professionals use AI-generated content as a starting point, then correct it, clean it up, and add their human value — and may or may not indicate the co-authorship. After all, using spelling- or grammar-checking software is commonplace; why not “augmented research” or “content embellishment”?

There are steps we can take.

In the school environment, have students create an essay in class, by hand, without electronic devices, and use that as a baseline for identifying generated content. Or have students explicitly use these tools and critique/augment the results: they could use strike-through to mark content that is false, highlight their own additions, and add a segment at the end evaluating the characteristics of the resulting work.

Education about the capabilities and limitations of this 2022 generation of technology is important. However, we need to be aware that the 2024 iteration will be different. Results may be more accurate, more biased, harder to detect, more influential, and perhaps a source of innovative insights in various areas.

Part of the challenge will continue to be sorting the wheat from the chaff. A proposed cancer treatment may be a breakthrough, a dud, or a total and fatal fabrication. There will be ne’er-do-wells who promote misleading content to advance their cause, conspiracy, crime, or con. This will include foreign agents sowing the seeds of distrust and rebellion — pick almost any country and the right topic and folks will bring placards, pitchforks or guns into the streets.

Some technologists recommend a “moratorium” on AI development, albeit for only six months. Even if a few of the major players accept this approach, others will not, and the race will continue with a few sitting idle on the sidelines for a short period.

Perhaps the most critical step is to hold the technology companies responsible for the content they generate and promote. Some legal actions have already been suggested relating to harm that may be caused by erroneous content.

The U.N., in 2018 and again in 2023, expressed concern that social media was contributing to social disruption and perhaps even genocide in Myanmar. Violent acts in various countries have been tied to misleading online content. These problems emerged before generative AI technology was creating images and video from text, as well as fabricated op-eds, blog posts, essays — and yes, even obituaries. People’s lives have been severely disrupted; people have been displaced, doxed, and canceled, and jobs have been lost, all as a result of false online information.

We must find a way to understand and counter these trends through education and policy changes. We must also start an ongoing dialogue where we can track the evolution of these technologies and try to put safeguards in place before, well, we are all reading our obituaries.

Generative AI is now "passing the Turing Test" (i.e., producing content indistinguishable from human-generated content), and that is a solid reason for concern. There will be a significant increase in generated content and a significant, continuing decline in trust. And trust is one of the most valuable assets in today's world, the key to reversing alienation, divisiveness, and political tension globally.

Two segments of 60 Minutes, 16 April 2023, were dedicated to a discussion with Google/Alphabet leaders on the implications of the rapid evolution of AI technology. 

"I've always thought of AI as the most profound technology humanity is working on. More profound than fire or electricity or anything that we've done in the past," said Sundar Pichai, the CEO of Google and its parent company Alphabet.

...

Pichai told 60 Minutes he is being responsible by not releasing advanced models of Bard, in part, so society can get acclimated to the technology, and the company can develop further safety layers.

...

As Pichai noted in his 60 Minutes interview, consumer AI technology is in its infancy. He believes now is the right time for governments to get involved.

"There has to be regulation. You're going to need laws…there have to be consequences for creating deep fake videos which cause harm to society," Pichai said. "Anybody who has worked with AI for a while…realize[s] this is something so different and so deep that, we would need societal regulations to think about how to adapt."

In the interview, Pichai specifically mentions regulation, laws, and treaties as forms of governmental action that are required, as well as broad involvement beyond the tech companies.

Trust me, this is important.