AI This Week
"Artificial Intelligence: how it is currently applied and its near-term potential uses; who is applying it, and how effective it is in its present embodiment." --- which is a moving target where I am at best a distant observer.
What is clear is that AI is creeping into every aspect of our lives: Siri, Alexa, "OK Google", self-driving cars, phone voice-response systems, language recognition, face recognition, big-data analytics, influencing purchasing ("Welcome back, Jim, here are some new items we think you will buy"), elections, online dating, and who knows what else?
There are some folks who think that an AI must be conscious, just as we think other humans are conscious, so an AI that is as interactive, smart, etc. as a porpoise, octopus, dog, or cat would not qualify. This might be viewed as denial (D-ni-AI-l?). But when they get beaten at chess, Go, the stock market, or oncological analysis, they can dismiss their embarrassment with "Well, it was just programmed for that, but it can't <whatever> as well as I can." (Hint: Bobby Fischer might not be able to do that as well either.)
This week (sort of; with accelerating technology on all fronts it is unclear what time frame can be considered stable), Deep Learning on Big Data seems to be the path forward. Systems using this approach tend to have a black-box algorithm that evaluates massive data sets to extract criteria that are not obvious even to the human expert. There is a call for such systems to also provide an 'explanation' for their work -- which might not be useful or even 'correct'. And there is The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which is working to identify some of the bounds that should be considered.
Perhaps the most critical consideration is recognizing that we humans can be persuaded, in ways that are not tied to our conscious decision making or rational processes. Work on this was being done at Stanford's Persuasive Tech Lab, and related counter-trend work is being done at Oxford's Computational Propaganda Project.
A recent observation with respect to the Deep Learning approach suggests:
Most AI learning algorithms, particularly deep learning algorithms, are greedy, brittle, rigid, and opaque. The algorithms are:
- greedy because they demand big data sets to learn
- brittle because they frequently fail when confronted with scenarios even mildly different from those in the training set
- rigid because they cannot keep adapting after initial training
- opaque because their internal representations make it challenging to interpret their decisions
(K. Hole and S. Ahmad, "Biologically Driven Artificial Intelligence," 2019)
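The "brittle" point above can be illustrated with a toy sketch: a model fit to one data distribution can fail badly when its inputs shift only slightly. The data and the nearest-centroid "model" here are hypothetical stand-ins for a trained deep network, chosen so the example stays self-contained.

```python
# Toy illustration of "brittleness": a classifier trained on one data
# distribution fails when the test distribution shifts mildly.

def train_centroids(samples):
    """Return the mean input for each class label (a 1-D 'model')."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Classify x by the nearest class centroid."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

def accuracy(centroids, samples):
    return sum(predict(centroids, x) == y for x, y in samples) / len(samples)

# Training set: class 0 clustered near 0.0, class 1 near 1.0.
train = [(0.0, 0), (0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
model = train_centroids(train)

# In-distribution test: same ranges as training -> perfect accuracy.
test_same = [(0.05, 0), (0.95, 1)]

# Mild shift: every input moved by +0.5 -> class 0 now falls on the
# wrong side of the learned boundary, and accuracy collapses.
test_shifted = [(x + 0.5, y) for x, y in test_same]

print(accuracy(model, test_same))     # 1.0
print(accuracy(model, test_shifted))  # 0.5
```

The model never "knows" the distribution moved; it just keeps applying the boundary it learned, which is the brittleness the quote describes.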
- Collaborative work by professionals in the related fields
- IEEE's Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS)
- Electronic Frontier Foundation (eff.org)
- measuring progress in AI
- AI Moral Code
- The Future of Life Institute (Includes "Beneficial AI" as well as Climate Change, Bio Tech, and Nuclear Weapons as points of interest)
- MIRI - Machine Intelligence Research Institute
- Association for the Advancement of Artificial Intelligence (AAAI)
- IEEE Collabratec community on the Future of Technology (open to non-members)
- AI Now Institute at New York University -- AI_Now_2017_Report.pdf
- Partnership on AI (IBM, Google, Amazon, Facebook, Microsoft, Apple)
- Moral Machine website at MIT -- soliciting the public to assert their preferences for how autonomous systems should respond (trolley problem, etc.)
- Allen Institute for AI (Paul Allen)
- Center For Human-Compatible Artificial Intelligence (Berkeley) TED talk by Stuart Russell
- Leverhulme Centre for the Future of Intelligence (CFI)
- Cal Poly Ethics + Emerging Sciences Group
- Data & Society Research Institute
- The Future of Humanity Institute (Oxford)
- AAAI and open access Digital Library and AI tracking of AI topics in real time
- Educational TV/Radio shows
- TED talks and other short videos
- Artificial Horizons by Martin Ciupa @ TEDxTanglinTrustSchool Jan 2018
- "The prophets of Doom and the Doom of Profits (being put first)"
- Ethical steering of people's thoughts? TED talk by ex-Google ethicist -- the race for attention -- to the bottom of our brain stem
- TED talk on Machine Learning by Jeremy Howard
- Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (2017) (YouTube video)
- Sam Harris podcast The Future of Intelligence (with Max Tegmark) or
- watch his TED Talk titled Can we build AI without losing control over it?
- Garry Kasparov ... on losing a chess match to Deep Blue and partnering with AI (TED 20xx)
- Nick Bostrom, Superintelligence: Paths, Dangers, Strategies -- TED Talk
- Professional Publications
- Commercial publications
- Blog posts and commentary
- Historical background and technical references
- Demonstrations/Discussions of AI Capabilities
- Government Interests
- Her -- a love affair between a human and an AI -- a delightful look at a benign AI connecting with folks in need of a friend. Also quite unusual in that there is no "bad guy" and no violence ... in short, none of the elements that create the dystopian depictions of the future.
- Transcendence envisions the emergence of a General AI threatening enough that the defense is to shut down all of the computers in the world (ignoring the billions of people who would die as a result) --- when the AI was actually beneficial.
- I, Robot -- classic Asimov story dealing with how his "Three Laws of Robotics" can fail
- (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
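The Three Laws form a strict precedence: each law yields to those before it. A minimal sketch of that ordering, with a hypothetical boolean action model (not from Asimov), also shows one way the Laws can fail in the stories: when every available action harms someone, the robot is left with nothing it may do.

```python
def permitted(action):
    """First Law as a hard veto: an action that harms a human is never
    permitted, regardless of orders or self-preservation."""
    return not action["harms_human"]

def choose(actions):
    """Among permitted actions, prefer obeying orders (Second Law), then
    self-preservation (Third Law). Returns None if nothing is permitted."""
    lawful = [a for a in actions if permitted(a)]
    if not lawful:
        return None
    return max(lawful, key=lambda a: (a["obeys_order"], a["protects_self"]))

# Normal case: the robot obeys an order even at a cost to itself.
options = [
    {"harms_human": False, "obeys_order": True,  "protects_self": False},
    {"harms_human": False, "obeys_order": False, "protects_self": True},
]
print(choose(options)["obeys_order"])  # True

# Failure mode: every available action harms someone, so the Laws leave
# the robot with no permitted action at all.
dilemma = [
    {"harms_human": True, "obeys_order": True,  "protects_self": True},
    {"harms_human": True, "obeys_order": False, "protects_self": False},
]
print(choose(dilemma))  # None
```

Asimov's plots typically hinge on exactly these edge cases: conflicting constraints, ambiguous definitions of "harm," and orders that collide with the First Law.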
- Tools -- Python vs R -- one author's perspective
- Udacity "School of AI" as of April 2018