AI This (recently) Week  

(https://is.gd/AIinfo)

"Artificial Intelligence: how it is currently applied and its near-term potential uses. Who is applying Artificial Intelligence, and how effective is it in its present embodiment?" --- which is a moving target where I am at best a distant observer. With the addition of generative AI (ChatGPT, et al.) I've added a page and presentation with a focus on this aspect (2023).

What is clear is that AI is creeping into every aspect of our lives: Siri, Alexa, "OK Google," self-driving cars, phone voice-response systems, language recognition, face recognition, big-data analytics, influencing purchasing ("welcome back Jim, here are some new items we think you will buy"), elections, online dating, and who knows what else?

There are some folks who think that an AI must be conscious, just as we believe other humans are conscious, so an AI that is as interactive/smart, etc. as a porpoise, octopus, dog, or cat would not qualify. This might be viewed as denial (D-ni-AI-l?). But when such folks get beaten at chess, Go, the stock market, or oncological analysis, they can dismiss their embarrassment with "well, it was just programmed for that, but it can't <whatever> as well as I can." (Hint: Bobby Fischer might not be able to do that as well either.)

This week (sort of; with accelerating technology on all fronts it is unclear what time frame can be considered stable -- more like "once upon a time"), Deep Learning on Big Data seems to be the path forward. Systems using this approach tend to have a black-box algorithm that evaluates massive data sets to extract criteria that are not obvious even to a human expert. There is a call for such systems to also provide an 'explanation' for their work -- which might not be useful or even 'correct'. And there is The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which is working to identify some of the bounds that should be considered.
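The explanation problem can be made concrete with a toy sketch (my illustration, not from any system mentioned above): train a small "black box" on synthetic data where the label really depends only on the first feature, then generate an after-the-fact "explanation" via permutation importance -- shuffle one feature at a time and see how much the model's accuracy drops. All names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n=200, n_features=5):
    """Synthetic data: the label truly depends only on feature 0."""
    X = [[random.uniform(-1, 1) for _ in range(n_features)] for _ in range(n)]
    y = [1 if row[0] > 0 else 0 for row in X]
    return X, y

def train_perceptron(X, y, epochs=20, lr=0.1):
    """A minimal stand-in for an opaque learned model."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for row, label in zip(X, y):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, row)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, row)]
            b += lr * err
    return w, b

def accuracy(w, b, X, y):
    preds = [1 if sum(wi * xi for wi, xi in zip(w, row)) + b > 0 else 0
             for row in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def permutation_importance(w, b, X, y):
    """'Explain' the model: accuracy drop when each feature is shuffled."""
    base = accuracy(w, b, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]       # copy of column j
        random.shuffle(col)               # destroy its relationship to y
        X_perm = [row[:j] + [col[i]] + row[j + 1:]
                  for i, row in enumerate(X)]
        importances.append(base - accuracy(w, b, X_perm, y))
    return importances

X, y = make_data()
w, b = train_perceptron(X, y)
imp = permutation_importance(w, b, X, y)
print("importance per feature:", [round(v, 3) for v in imp])
```

Here the "explanation" happens to point at the right feature, but permutation importance is exactly the kind of post-hoc account the paragraph above warns about: it describes the model's sensitivities, not its reasoning, and on correlated features it can be misleading.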

Perhaps the most critical consideration is recognizing that we humans can be persuaded, and in ways that are not tied to our conscious decision-making or rational processes. Work on this was being done at Stanford's Persuasive Tech Lab, and related counter-trend work is being done at Oxford's Computational Propaganda Project.

A recent observation with respect to the Deep Learning approach suggests:

Most AI learning algorithms, particularly deep learning algorithms, are greedy, brittle, rigid, and opaque. The algorithms are ...

(K. Hole and S. Ahmad, "Biologically Driven Artificial Intelligence" (2019))

Resources: