Notes by Retraice

Retraice, Inc.

All PDF notes for Retraice and Margin podcasts: https://www.retraice.com/retraice https://www.retraice.com/margin Privacy, Terms and Contact: https://www.retraice.com/privacy-policy https://www.retraice.com/terms-of-use contact@retraice.com

  1. 24/06/2023

    Re116-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re116: When Does the Bad Thing Happen? (Technological Danger, Part 4)
    retraice.com

    Agreements about reality in technological progress.

    Basic questions; a chain reaction of philosophy; deciding what is and isn't in the world; agreeing with others in order to achieve sharing; other concerns compete with sharing and prevent agreement; the need for agreement increasing.

    Air date: Saturday, 14th Jan. 2023, 10:00 PM Eastern/US.

    The chain reaction of questions

    We were bold enough to predict a decrease in freedom (without defining it);^1 we were bold enough to define technological progress (with defining it).^2 But in predicting and assessing `bad things' (i.e. technological danger), we should be able to talk about when the bad things might or might not happen, did or didn't happen. But can we? When does anything start and stop? How to draw the lines in chronology? How to draw the lines in causality? There is a chain reaction of questions and subjects:

    * Time: When did it start? With the act, or the person, or the species?
    * Space: Where did it start?
    * Matter: What is it?
    * Causality: What caused it?
    * Free will: Do we cause anything, really?

    Ontology and treaties for sharing

    Ontology is the subset of philosophy that deals with `being', `existence', `reality', the categories of such things, etc. I.e., it's about `what is', or `What is there?', or `the stuff' of the world. From AIMA4e (emphasis added):

    "We should say up front that the enterprise of general ontological engineering has so far had only limited success. None of the top AI applications (as listed in Chapter 1) make use of a general ontology--they all use special-purpose knowledge engineering and machine learning. Social/political considerations can make it difficult for competing parties to agree on an ontology. As Tom Gruber (2004) says, `Every ontology is a treaty--a social agreement--among people with some common motive in sharing.' When competing concerns outweigh the motivation for sharing, there can be no common ontology. The smaller the number of stakeholders, the easier it is to create an ontology, and thus it is harder to create a general-purpose ontology than a limited-purpose one, such as the Open Biomedical Ontology."^3

    Prediction: the need for precise ontologies is going to increase.

    Ontology is not a solved problem--neither in philosophy nor artificial intelligence. Yet we can't sit around and wait. The computer control game is on. We have to act and act effectively. And further, our need for precise ontologies--that is, the making of treaties--is going to increase because we're going to be dealing with technologies that have more and more precise ontologies. So, consider (a toy illustration follows the references below):

    * More stakeholders make treaties less likely;
    * The problems that we can solve without AI (and its ontologies and our own ontologies) are decreasing;
    * Precise ontology enables knowledge representation (outside of machine learning), and therefore AI, and therefore the effective building of technologies and taking of actions, and therefore work to be done;
    * Treaties can make winners and losers in the computer control game;
    * Competing concerns can outweigh the motive for sharing, and therefore treaties, and therefore winning.

    __ References

    Retraice (2023/01/11). Re113: Uncertainty, Fear and Consent (Technological Danger, Part 1). retraice.com. https://www.retraice.com/segments/re113 Retrieved 12th Jan. 2023.

    Retraice (2023/01/13). Re115: Technological Progress, Defined (Technological Danger, Part 3). retraice.com. https://www.retraice.com/segments/re115 Retrieved 14th Jan. 2023.

    Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed.
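
    A minimal, hypothetical sketch of Gruber's `ontology as treaty' idea from the notes above (the function, party names and categories are invented for illustration; they are not from AIMA4e or the episode): two parties can only share statements about the categories they define the same way, so every disagreement shrinks the treaty.

```python
# Hypothetical sketch: an ontology as a dict mapping each category to its
# parent categories. The "treaty" is the part two parties define identically.

def shared_ontology(party_a: dict, party_b: dict) -> dict:
    """Return the categories both parties define the same way."""
    return {cat: parents
            for cat, parents in party_a.items()
            if party_b.get(cat) == parents}

biomed_lab = {
    "Protein": ["Molecule"],
    "Molecule": ["PhysicalObject"],
    "Patient": ["Person"],
}
hospital = {
    "Patient": ["Person"],
    "Person": ["Agent"],
    "Protein": ["Nutrient"],  # competing concern: an incompatible definition
}

print(shared_ontology(biomed_lab, hospital))
# {'Patient': ['Person']} -- the only category the two parties can share
```

    Under this toy model, adding a stakeholder can only keep the shared ontology the same size or shrink it, which is one way to read the claim that more stakeholders make treaties less likely.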

  2. 14/01/2023

    Re115-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re115: Technological Progress, Defined (Technological Danger, Part 3)
    retraice.com

    How we would decide, given predictions, whether to risk continued technological advance.

    Danger, decisions, advancing and progress; control over the environment and `we'; complex, inconsistent and conflicting human preferences; `coherent extrapolated volition' (CEV); divergence, winners and losers; the lesser value of humans who disagree; better and worse problems; predicting progress and observing progress; learning from predicting progress.

    Air date: Friday, 13th Jan. 2023, 10:00 PM Eastern/US.

    Progress, `we' and winners

    If the question is about `danger', the answer has to be a decision about whether to proceed (advance). But how to think about progress? Let `advance' mean moving forward, whether or not it's good for humanity. Let `progress' mean moving forward in a way that's good for humanity, by some definition of good.^1 Progress can't be control over the environment, because whose control? (Who is we?) And we can't all control equally or benefit equally or prefer the same thing. This corresponds to the Russell & Norvig (2020) chpt. 27 problems of the complexity and inconsistency of human preferences,^2 and the Bostrom (2014) chpt. 13 problem of "locking in forever the prejudices and preconceptions of the present generation" (p. 256). A possible solution is Yudkowsky (2004)'s `coherent extrapolated volition'.^3 If humanity's collective `volition' doesn't converge, this might entail that there has to be a `winner' group in the game of humans vs. humans. This implies the (arguably obvious) conclusion that we humans value other humans more or less depending on the beliefs and desires they hold.

    Better and worse problems can be empirical

    Choose between A and B:

    o carcinogenic bug spray, or malaria;
    o lead in the water sometimes (Flint, MI), or fetching pails;
    o unhappy day job, or no home utilities (or home).

    Which do you prefer? This is empirical, in that we can ask people. We can't ask people in the past or the future; but we can always ask people in the present to choose between two alternative problems.

    Technological progress

    First, we need a definition of progress in order to make decisions. Second, we need an answer to the common retort that `technology creates more problems than it solves'. `More' doesn't matter; what matters is whether the new problems, together, are `better' than the old problems, together. We need to define two timeframes of `progress' because we're going to use the definition to make decisions: one timeframe to classify a technology before the decision to build it, and one timeframe to classify it after it has been built and has had observable effects. It's the difference between expected progress and observed progress. Actual, observed progress can only be determined retrospectively.

    Predicted progress: A technology seems like progress if: the predicted problems it will create are better to have than the predicted problems it will solve, according to the humans alive at the time of prediction.^4

    Actual progress: A technology is progress if: given an interval of time, the problems it created were better to have than the problems it solved, according to the humans alive during the interval.

    (The time element is crucial: a technology will be, by definition, progress if up to a moment in history it never caused worse problems than it solved; but once it does cause such problems, it ceases to be progress, by definition.)

    (A toy sketch of this decision rule follows the references below.)

    Predicting progress (learning): `Actual progress', if tracked and absorbed, could be used to improve future `predicted progress'.

    _ References

    Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition.
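
    The two definitions above can be read as one decision procedure that differs only in its inputs: predicted problem bundles and the people polled at prediction time, versus observed bundles and the people alive during the interval. A minimal sketch, with an invented poll function and invented problem lists (the episode does not specify an implementation):

```python
# Hypothetical sketch of the 'predicted'/'actual' progress definitions:
# a technology counts as progress if the problems it creates are judged
# better to have than the problems it solves, by the humans alive at the
# relevant time (prediction time, or the observed interval).

from typing import Callable, List

def is_progress(problems_created: List[str],
                problems_solved: List[str],
                better_to_have: Callable[[List[str], List[str]], bool]) -> bool:
    """better_to_have(a, b) is the empirical poll: do people prefer living
    with problem bundle a rather than problem bundle b?"""
    return better_to_have(problems_created, problems_solved)

# Invented example poll: 2 of 3 respondents prefer the new problems.
def majority_poll(created: List[str], solved: List[str]) -> bool:
    votes_prefer_created = 2
    votes_prefer_solved = 1
    return votes_prefer_created > votes_prefer_solved

print(is_progress(["maintaining plumbing", "occasional lead in the water"],
                  ["fetching pails of water daily"],
                  majority_poll))  # True under this made-up poll
```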

  3. 13/01/2023

    Re114-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re114: Visions of Loss (Technological Danger, Part 2)
    retraice.com

    Human loss of freedom by deference to authority, dependency on machines, and delegation of defense.

    Wiener: freedom of thought and opinion, and communication, as vital; Russell: diet, injections and injunctions in the future; Horesh: technological behavior modification in the present; terrorist Kaczynski: if AI succeeds, we'll have machine control or elite control, but no freedom; Bostrom: wearable surveillance devices and power in the hands of a very few as solution.

    Air date: Thursday, 12th Jan. 2023, 10:00 PM Eastern/US. All bold emphasis added.

    Mathematician Wiener

    Is this what's at stake, in the struggle for freedom of thought and communication? Wiener (1954), p. 217:^1

    "I have said before that man's future on earth will not be long unless man rises to the full level of his inborn powers. For us, to be less than a man is to be less than alive. Those who are not fully alive do not live long even in their world of shadows. I have said, moreover, that for man to be alive is for him to participate in a world-wide scheme of communication. It is to have the liberty to test new opinions and to find which of them point somewhere, and which of them simply confuse us. It is to have the variability to fit into the world in more places than one, the variability which may lead us to have soldiers when we need soldiers, but which also leads us to have saints when we need saints. It is precisely this variability and this communicative integrity of man which I find to be violated and crippled by the present tendency to huddle together according to a comprehensive prearranged plan, which is handed to us from above. We must cease to kiss the whip that lashes us...."

    p. 226: "There is something in personal holiness which is akin to an act of choice, and the word heresy is nothing but the Greek word for choice. Thus your Bishop, however much he may respect a dead Saint, can never feel too friendly toward a living one.

    "This brings up a very interesting remark which Professor John von Neumann has made to me. He has said that in modern science the era of the primitive church is passing, and that the era of the Bishop is upon us. Indeed, the heads of great laboratories are very much like Bishops, with their association with the powerful in all walks of life, and the dangers they incur of the carnal sins of pride and of lust for power. On the other hand, the independent scientist who is worth the slightest consideration as a scientist, has a consecration which comes entirely from within himself: a vocation which demands the possibility of supreme self-sacrifice...."

    p. 228: "I have indicated that freedom of opinion at the present time is being crushed between the two rigidities of the Church and the Communist Party. In the United States we are in the process [1950] of developing a new rigidity which combines the methods of both while partaking of the emotional fervor of neither. Our Conservatives of all shades of opinion have somehow got together to make American capitalism and the fifth freedom [economic freedom^2 ] of the businessman supreme throughout all the world...."

    p. 229: "It is this triple attack on our liberties which we must resist, if communication is to have the scope that it properly deserves as the central phenomenon of society, and if the human individual is to reach and to maintain his full stature. It is again the American worship of know-how as opposed to know-what that hampers us."

    Mathematician and philosopher Russell

    Will this happen? Russell (1952), pp. 65-66:^3

    "It is to be expected that advances in physiology and psychology will give governments much more control over individual mentality than they now have even in totalita

  4. 12/01/2023

    Re113-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re113: Uncertainty, Fear and Consent (Technological Danger, Part 1)
    retraice.com

    Beliefs, and the feelings they cause, determine what chances we take; but possibilities don't care about our beliefs.

    A prediction about safety, security and freedom; decisions about two problems of life and the problem of death; uncertainty, history, genes and survival machines; technology to control the environment of technology; beliefs and feelings; taking chances; prerequisites for action; imagining possibilities; beliefs that do or don't lead to consent; policing, governance and motivations.

    Air date: Wednesday, 11th Jan. 2023, 10:00 PM Eastern/US.

    Prediction: freedom is going to decrease

    The freedom-security-safety tradeoff will continue to shift toward safety and security. Over the next 20 years, 2023-2032, you'll continue to be asked, told, and nudged into giving up freedom in exchange for safety (which is about unintentional danger), in addition to security (which is about intentional danger).^1 (Side note: We have no particular leaning, one way or another, about whether this will be a good or bad thing overall. Frame it one way, and we yearn for freedom; frame it another way, and we crave protection from doom.) For more on this, consider:

    o Wiener (1954);
    o Russell (1952);
    o Dyson (1997), Dyson (2020);
    o Butler (1863);
    o Kurzweil (1999);
    o Kaczynski & Skrbina (2010);
    o Bostrom (2011), Bostrom (2019).

    Decisions: two problems of life and the problem of death

    First introduced in Re27 (Retraice (2022/10/23)) and integrated in Re31 (Retraice (2022/10/27)).

    Two problems of life:
    1. To change the world?
    2. To change oneself (that part of the world)?

    Problem of death:
    1. Dead things rarely become alive, whereas alive things regularly become dead.

    What to do?

    Uncertainty

    We just don't know much about the future, but we talk and write within the confines of our memories and instincts. We know the Earth-5k well via written history, and our bodies `know', via genes, the Earth-2bya, about the time that replication and biology started. But the parts of our bodies that know it (genes, mechanisms shared with other animals) are what would reliably survive, not us. Most of our genes can survive in other survival machines, because we share so much DNA with other creatures.^2 But there is hope in controlling the environment to protect ourselves (vital technology), though we also like to enjoy ourselves (other technology). There is also irony in it, to the extent that technology itself is the force from which we may need to be protected.

    Beliefs and feelings

    * a cure, hope;
    * no cure, fear;
    * a spaceship, excitement;
    * home is the same, longing;
    * home is not the same, sadness;
    * she loves me, happiness;
    * she hates me, misery;
    * she picks her nose, disgust.

    Chances

    Even getting out of bed--or not--is somewhat risky: undoubtedly some human somewhere has died by getting out of bed and falling; but people in hospitals have to get out of bed to avoid skin and motor problems. We do or don't get out of bed based on instincts and beliefs.

    Side note: von Mises' three prerequisites for human action:^3
    1. Uneasiness (with the present);
    2. An image (of a desirable future);
    3. The belief (expectation) that action has the power to yield the image.

    (Side note: technology in the form of AI is becoming more necessary to achieve desirable futures, because enough humans have been picking low-hanging fruit for enough time that most of the fruit is now high-hanging, where we can't reach without AI.)

    Possibilities

    * radically good future because of technology (cure for everything);
    * radically bad future because of technology (synthetic plague);
    * radically good future because of humans (doctors invent cure);

  5. 11/01/2023

    Re112-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re112: The Attention Hazard and The Attention (Distraction) Economy
    retraice.com

    Drawing attention to dangerous information can increase risk, but the attention economy tends to draw attention toward amusement.

    Information hazards; formats include data, idea, attention, template, `signaling' and `evocation'; increasing the number of information locations; adversaries, agents, search, heuristics; the dilemma of attention; suppressing secrets; the Streisand effect; the attention economy as elite `solution'; Liu's `wall facers'.

    Air date: Tuesday, 10th Jan. 2023, 10:00 PM Eastern/US.

    Attention hazard of information

    Bostrom (2011): "Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm."^1

    Attention is one format (or `mode') of information transfer:^2 "Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already `known'."^3 This increase is because `attention' is physically increasing the number of locations where the hazard data or idea are instantiated.

    Adversaries and agents

    "Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary's attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons--as distinct from, say, conventional explosives or chemical weapons--constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary."^4

    Consider the parallels in Russell & Norvig (2020):

    * `adversarial search and games' (chpt. 5);
    * `intelligent agents' (chpt. 2);
    * `solving problems by searching' (chpt. 3);
    * drawing attention can facilitate search: heuristics (sections 3.5, 3.6).

    The dilemma: We focus on risk, and also lead adversary-agents to our vulnerabilities. Cf. the `vulnerable world hypothesis'^5 on the policy implications of unrestrained technological innovation given the unknown risk of self-destructing innovators. "Still, one likes to believe that, on balance, investigations into existential risks and most other risk areas will tend to reduce rather than increase the risks of their subject matter."^6

    Secrets and suppression

    "Clumsy attempts to suppress discussion often backfire. An adversary who discovers an attempt to conceal an idea may infer that the idea could be of great value. Secrets have a special allure."^7

    https://en.wikipedia.org/wiki/Streisand_effect: "[T]he way attempts to hide, remove, or censor information can lead to the unintended consequence of increasing awareness of that information."

    The attention (distraction) economy

    Might the attention economy, one day or even already, be a `solution' (an elite solution) to the attention hazard? Would it work against AI? Or buy us time? What about `wall facers'?^8 Cf. Re30, Retraice (2022/10/26), on things being done.
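
    To make the `drawing attention facilitates search' point concrete, here is a toy model (our own construction, not Bostrom's or AIMA4e's; all numbers and names are invented): an adversary sampling avenues at random finds a potent one far faster once public attention has flagged a small subset that contains them.

```python
# Toy model: "attention" acts like a heuristic that shrinks the adversary's
# search space. Numbers and names are illustrative only.

import random

random.seed(0)
AVENUES = [f"avenue_{i}" for i in range(10_000)]
POTENT = set(random.sample(AVENUES, 5))  # the few avenues that actually work

def expected_tries(candidates):
    """Expected random draws (without replacement) to hit a potent avenue:
    (n + 1) / (k + 1) for k potent items among n candidates."""
    k = sum(1 for c in candidates if c in POTENT)
    return (len(candidates) + 1) / (k + 1) if k else float("inf")

# Unfocused adversary searches everything; a public focus on 100 "especially
# potent avenues" (which include the 5 real ones) narrows the search.
flagged = list(POTENT) + random.sample([a for a in AVENUES if a not in POTENT], 95)

print(expected_tries(AVENUES))  # ~1667 draws on average
print(expected_tries(flagged))  # ~17 draws on average
```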
    _ References

    Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy: https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020.

    Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy: https://nickbostrom.com

  6. 10/01/2023

    Re111-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re111: AI and the Gorilla Problem
    retraice.com

    Russell and Norvig say it's natural to worry that AI will destroy us, and that the solution is good design that preserves our control.

    Our unlucky evolutionary siblings, the gorillas; humans the next gorillas; giving up the benefits of AI; the standard model and the human compatible model; design implications of human compatibility; the difficulty of human preferences.

    Air date: Monday, 9th Jan. 2023, 10:00 PM Eastern/US.

    The gorilla problem

    Added to Re109 notes after live: "the gorilla problem: about seven million years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. Today, the gorillas are not too happy about the human branch; they have essentially no control over their future. If this is the result of success in creating superhuman AI--that humans cede control over their future--then perhaps we should stop work on AI, and, as a corollary, give up the benefits it might bring. This is the essence of Turing's warning: it is not obvious that we can control machines that are more intelligent than us."^1

    We might add that there are worse fates than death and zoos.

    Most of the book, they say, reflects the majority of work done in AI to date--within `the standard model', i.e. AI systems are `good' when they do what they're told, which is a problem because `telling' preferences is easy to get wrong. (p. 4)

    Solution: uncertainty in the purpose (the `human compatible' model^2), which has design implications (p. 34):

    * chpt. 16: a machine's incentive to allow shut-off follows from uncertainty about the human objective;
    * chpt. 18: assistance games are the mathematics of humans and machines working together;
    * chpt. 22: inverse reinforcement learning is how machines can learn about human preferences by observation of their choices;
    * chpt. 27: problem 1 of N, our choices depend on preferences that are hard to invert; problem 2 of N, preferences vary by individual and over time.

    (A toy numeric illustration of the chpt. 16 point follows the footnotes below.)

    The human problem

    But how do we ensure that AI engineers don't use the dangerous standard model? And if AI becomes easier and easier to use, as technology tends to do, how do we ensure that no one uses the standard model? How do we ensure that no one does any particular thing? The `human compatible' model indicates that the `artificial flight' version of AI (p. 2), which is what we want, is possible. It does not indicate that it is probable. And even to make it probable would still not make the standard model improbable. Nuclear power plants don't make nuclear weapons' use less probable. This is the more general problem taken up by Bostrom (2011) and Bostrom (2019).

    _ References

    Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy: https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020.

    Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy: https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020.

    Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. ISBN: 978-0525558613. Searches: https://www.amazon.com/s?k=978-0525558613 https://www.google.com/search?q=isbn+978-0525558613 https://lccn.loc.gov/2019029688

    Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498

    Footnotes
    ^1 Russell & Norvig (2020) p. 33.
    ^2 Russell (2019).
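
    A toy numeric sketch of the chpt. 16 point above (our own illustration, with an assumed Gaussian prior over the human's utility; not the book's example): a machine uncertain about the true human objective does better, in expectation, by proposing its action and letting the human switch it off than by acting unilaterally.

```python
# Toy off-switch calculation. U is the (unknown to the robot) human utility of
# the robot's proposed action; the robot only knows a distribution over U,
# while the human knows the realized value and will switch the robot off
# whenever U < 0. Distribution parameters are assumptions for illustration.

import random

random.seed(0)
samples = [random.gauss(0.5, 2.0) for _ in range(100_000)]  # possible values of U

act_unilaterally = sum(samples) / len(samples)                     # E[U]
allow_off_switch = sum(max(u, 0.0) for u in samples) / len(samples)  # E[max(U, 0)]

print(f"act unilaterally:     {act_unilaterally:.2f}")  # ~0.50
print(f"allow the off-switch: {allow_off_switch:.2f}")  # ~1.07, never worse
```

    The gap vanishes only when the machine is already certain that U >= 0, which is the sense in which the incentive to allow shut-off follows from uncertainty about the human objective.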

  7. 09/01/2023

    Re110-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re110: TikTok for Addicting the World's Kids
    retraice.com

    Tristan Harris's analysis of China's TikTok vs. the exported version.

    Tristan Harris on TikTok; spinach TikTok for Chinese kids, opium for everyone else; the Opium Wars and the `Century of Humiliation'; TikTok content and time limits for Chinese kids; Netflix on the attention economy vs. sleep; Russia and China trying to radicalize U.S. veterans via social media; war and civil war.

    Air date: Sunday, 8th Jan. 2023, 10:00 PM Eastern/US.

    This is a follow-up to Re109, Retraice (2023/01/07), where we described TikTok as a tool for Chinese spying. It's worse than that.

    Tristan Harris is co-founder of the Center for Humane Technology, worked as a design ethicist at Google, and studied computer science at Stanford.^1

    Tristan Harris on 60 Minutes, 2022: "It's almost like [Chinese company Bytedance] recognize[s] that technology [is] influencing kids' development, and [so] they make their domestic version a spinach TikTok, while they ship the opium version to the rest of the world."^2

    Cf. Re48, Retraice (2022/11/12), on the Opium Wars and the `century of humiliation [of China]', a Chinese term.

    TikTok in China, if you're under 14 years old:^3

    * science experiments
    * museum exhibits
    * patriotism
    * educational content
    * limited to 40min per day
    * mandatory 5 sec delay now and then
    * opening and closing hours

    Harris on Joe Rogan, 2021

    "It's like Xi saw The Social Dilemma [and so enacted changes to protect only China's kids]." On the attention economy more broadly: "Even Netflix said their biggest competitor is sleep, because they're all competing for attention."^4

    In the same episode, Harris says Russia and China try to radicalize U.S. veterans' groups on social media, to increase the likelihood of such tactically trained people joining or starting civil war. Cf. Re17, Retraice (2022/03/07), on both war with China and U.S. civil war.

    _ References

    Retraice (2022/03/07). Re17: Hypotheses to Eleven. retraice.com. https://www.retraice.com/segments/re17 Retrieved 17th Mar. 2022.

    Retraice (2022/11/12). Re48: From Drugs to Mao to Money. retraice.com. https://www.retraice.com/segments/re48 Retrieved 14th Nov. 2022.

    Retraice (2023/01/07). Re109: TikTok (app), Tik-Tok (novel), and Low-Power Mode (Day 7, AIMA4e Chpt. 7). retraice.com. https://www.retraice.com/segments/re109 Retrieved 8th Jan. 2023.

    Footnotes
    ^1 https://en.wikipedia.org/wiki/Tristan_Harris.
    ^2 TikTok in China versus the United States -- 60 Minutes, Nov. 8, 2022. Available on YouTube: https://www.youtube.com/watch?v=0j0xzuh-6rY
    ^3 Some of these items and the following quotes are from Tristan Harris on Joe Rogan #1736, 2021. Clip available at: What China's Crackdown on Algorithm's Means for the US, Nov. 18, 2021.
    ^4 https://youtu.be/im4O2sW3FiY?t=210

  8. 09/01/2023

    Re109-NOTES.pdf

    (The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)

    Re109: TikTok (app), Tik-Tok (novel), and Low-Power Mode (Day 7, AIMA4e Chpt. 7)
    retraice.com

    An observation of AI in action (TikTok), a decision (Low-Power Mode), and a coincidence (Tik-Tok).

    TikTok as addictive spying tool; Tik-Tok, the novel; changes in technology vs. lack of changes in human wants and needs; creeping totalitarianism, illiberty, war, climate change, Artilect War, superintelligence; the gorilla problem; making a living, making a difference; AIMA4e, Retraice, audience; low-power mode.

    Air date: Saturday, 7th Jan. 2023, 10:00 PM Eastern/US.

    Prediction: default doom

    Consider TikTok (the app), built on AI, ultimately controlled by the Chinese Communist Party,^1 on which millions of Americans have been made addicted to pure amusement, and Tik-Tok (the novel), yet another warning about the bleakness of a robot's would-be life, and the robot's power to respond. It seems the ever-increasing power of technology is not being tracked by any obvious change in human desires.^2 If so, it's reasonable to be pessimistic and expect that worse forms of previous bad things will happen because stronger technology makes them possible:^3

    o Creeping totalitarianism, illiberty: See, for example: Strittmatter (2018); Andersen (2020).
    o Normal war: Add, for example, `slaughterbots'^4 to the otherwise familiar current methods of war.
    o Climate change: The generalized doom scenario is that we can't adapt quickly enough to the changes we're causing, by use of technologies, in the environment (changes that go beyond just average temperatures)--see H6 of the hypotheses in Re17, Retraice (2022/03/07).
    o Artilect War: A `gigadeath' conflict between two human groups who anticipate AI surpassing human abilities. One group is in favor (cosmists), the other opposed (terrans). de Garis (2005).
    o Superintelligence: Bostrom (2014). I.e. super-human AI with its own purposes, causing what Russell & Norvig (2020) call "the gorilla problem: about seven million years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. Today, the gorillas are not too happy about the human branch; they have essentially no control over their future. If this is the result of success in creating superhuman AI--that humans cede control over their future--then perhaps we should stop work on AI, and, as a corollary, give up the benefits it might bring. This is the essence of Turing's warning: it is not obvious that we can control machines that are more intelligent than us."^5

    We might add that there are worse fates than death and zoos.

    Preferences: competing goals

    * making a living;
    * making a difference--to us, working to decrease the likelihood of the above `doom' scenarios.^6

    Retraice was meant to make a living and a difference. It's doing neither, and only has hope of doing one (difference). Two things are obvious at this point:
    1. Continuing with Russell & Norvig (2020) (investing even more time daily) is more likely to make a difference and a living.
    2. If Retraice has an audience out there, we have no way of finding it--and it's much smaller than we thought it would be.

    It also seems clear that completely stopping Retraice is wrong, because we like doing it. And it still has a chance of making a difference, given enough time and luck.

    Decision: low-power mode

    The new Retraice plan:

    * Time on AIMA4e: more;
    * Time on podcast: less (something like changing from daily `podcast' to short daily `transmission');
    * Money on podcast: less (the equivalent of keeping one light bulb on, the bare minimum in costs and expenses).

    __ References

    Andersen, R. (2020). The panopticon is already here. The Atlantic. Sep. 2020. https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
