(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

Re109: TikTok (app), Tik-Tok (novel), and Low-Power Mode (Day 7, AIMA4e Chpt. 7)

retraice.com

An observation of AI in action (TikTok), a decision (Low-Power Mode), and a coincidence (Tik-Tok).

TikTok as addictive spying tool; Tik-Tok, the novel; changes in technology vs. lack of changes in human wants and needs; creeping totalitarianism, illiberty, war, climate change, Artilect War, superintelligence; the gorilla problem; making a living, making a difference; AIMA4e, Retraice, audience; low-power mode.

Air date: Saturday, 7th Jan. 2023, 10:00 PM Eastern/US.

Prediction: default doom

Consider TikTok (the app), built on AI, ultimately controlled by the Chinese Communist Party,^1 on which millions of Americans have been made addicted to pure amusement, and Tik-Tok (the novel), yet another warning about the bleakness of a robot's would-be life, and the robot's power to respond.

It seems the ever-increasing power of technology is not being accompanied by any obvious change in human desires.^2 If so, it's reasonable to be pessimistic and expect that worse forms of previous bad things will happen because stronger technology makes them possible:^3

o Creeping totalitarianism, illiberty: See, for example, Strittmatter (2018); Andersen (2020).
o Normal war: Add, for example, `slaughterbots'^4 to the otherwise familiar current methods of war.
o Climate change: The generalized doom scenario is that we can't adapt quickly enough to the changes we're causing in the environment by use of technologies (changes that go beyond just average temperatures)--see H6 of the hypotheses in Re17, Retraice (2022/03/07).
o Artilect War: A `gigadeath' conflict between two human groups who anticipate AI surpassing human abilities. One group is in favor (cosmists), the other opposed (terrans). de Garis (2005).
o Superintelligence: Bostrom (2014). I.e., super-human AI with its own purposes, causing what Russell & Norvig (2020) call "the gorilla problem: about seven million years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. Today, the gorillas are not too happy about the human branch; they have essentially no control over their future. If this is the result of success in creating superhuman AI--that humans cede control over their future--then perhaps we should stop work on AI, and, as a corollary, give up the benefits it might bring. This is the essence of Turing's warning: it is not obvious that we can control machines that are more intelligent than us."^5 We might add that there are worse fates than death and zoos.

Preferences: competing goals

* making a living;
* making a difference--to us, working to decrease the likelihood of the above `doom' scenarios.^6

Retraice was meant to make a living and a difference. It's doing neither, and only has hope of doing one (difference). Two things are obvious at this point:

1. Continuing with Russell & Norvig (2020) (investing even more time daily) is more likely to make a difference and a living.
2. If Retraice has an audience out there, we have no way of finding it--and it's much smaller than we thought it would be.

It also seems clear that stopping Retraice completely would be wrong, because we like doing it. And it still has a chance of making a difference, given enough time and luck.
Decision: low-power mode

The new Retraice plan:

* Time on AIMA4e: more;
* Time on podcast: less (something like changing from daily `podcast' to short daily `transmission');
* Money on podcast: less (the equivalent of keeping one light bulb on, the bare minimum in costs and expenses).

__

References

Andersen, R. (2020). The panopticon is already here. The Atlantic, Sep. 2020.
https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/