(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)
Re116: When Does the Bad Thing Happen? (Technological Danger, Part 4)
retraice.com
Agreements about reality in technological progress. Basic questions; a chain reaction of philosophy; deciding what is and isn't in the world; agreeing with others in order to achieve sharing; other concerns competing with sharing and preventing agreement; the need for agreement increasing.
Air date: Saturday, 14th Jan. 2023, 10:00 PM Eastern/US.
The chain reaction of questions
We were bold enough to predict a decrease in freedom (without defining it);^1 we were bold enough to define technological progress (by defining it).^2 But in predicting and assessing `bad things' (i.e. technological danger), we should be able to talk about when the bad things might or might not happen, did or didn't happen. But can we? When does anything start and stop? How to draw the lines in chronology? How to draw the lines in causality? There is a chain reaction of questions and subjects:
* Time: When did it start? With the act, or the person, or the species?
* Space: Where did it start?
* Matter: What is it?
* Causality: What caused it?
* Free will: Do we cause anything, really?
Ontology and treaties for sharing
Ontology is the branch of philosophy that deals with `being', `existence', `reality', the categories of such things, etc. I.e., it's about `what is', or `What is there?', or `the stuff' of the world. From AIMA4e (emphasis added):
"We should say up front that the enterprise of general ontological engineering has so far had only limited success. None of the top AI applications (as listed in Chapter 1) make use of a general ontology--they all use special-purpose knowledge engineering and machine learning. Social/political considerations can make it difficult for competing parties to agree on an ontology. As Tom Gruber (2004) says, `Every ontology is a treaty--a social agreement--among people with some common motive in sharing.' When competing concerns outweigh the motivation for sharing, there can be no common ontology. The smaller the number of stakeholders, the easier it is to create an ontology, and thus it is harder to create a generalpurpose ontology than a limited-purpose one, such as the Open Biomedical Ontology."^3
Prediction: the need for precise ontologies is going to increase.
Ontology is not a solved problem--neither in philosophy nor in artificial intelligence. Yet we can't sit around and wait. The computer control game is on. We have to act, and act effectively. And further, our need for precise ontologies--that is, for the making of treaties--is going to increase, because we're going to be dealing with technologies that have more and more precise ontologies. So, consider:
* More stakeholders make treaties less likely;
* The problems that we can solve without AI (and its ontologies and our own ontologies) are decreasing;
* Precise ontology enables knowledge representation (outside of machine learning), and therefore AI, and therefore the effective building of technologies and taking of actions, and therefore work to be done (see the sketch below);
* Treaties can make winners and losers in the computer control game;
* Competing concerns can outweigh the motive for sharing, and therefore treaties, and therefore winning.
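To make `precise ontology' concrete, below is a minimal sketch, in Python, of a tiny special-purpose ontology represented as subject-relation-object triples, with a simple query over it. The category names, relation names and facts are all invented for illustration--they come from neither AIMA4e nor Retraice. The point is that the `treaty' is an explicit shared vocabulary: facts stated outside it are rejected.

    # A minimal sketch of a special-purpose ontology as triples.
    # All category names, relation names and facts are invented.
    from typing import Set, Tuple

    Triple = Tuple[str, str, str]  # (subject, relation, object)

    # The "treaty": the categories and relations the parties agree on.
    CATEGORIES = {"Event", "Agent", "Artifact"}
    RELATIONS = {"is_a", "caused_by", "started_at"}

    def assert_fact(kb: Set[Triple], fact: Triple) -> None:
        """Add a fact, rejecting anything outside the agreed treaty."""
        subject, relation, obj = fact
        if relation not in RELATIONS:
            raise ValueError(f"relation {relation!r} is not in the treaty")
        if relation == "is_a" and obj not in CATEGORIES:
            raise ValueError(f"category {obj!r} is not in the treaty")
        kb.add(fact)

    def query(kb: Set[Triple], relation: str, obj: str) -> Set[str]:
        """Return every subject standing in the given relation to obj."""
        return {s for (s, r, o) in kb if r == relation and o == obj}

    kb: Set[Triple] = set()
    assert_fact(kb, ("meltdown", "is_a", "Event"))
    assert_fact(kb, ("valve_design", "is_a", "Artifact"))
    assert_fact(kb, ("meltdown", "caused_by", "valve_design"))
    # "When did the bad thing happen?" is only answerable once the
    # treaty fixes what an Event is and what "started_at" means.
    assert_fact(kb, ("meltdown", "started_at", "1979-03-28"))
    print(query(kb, "caused_by", "valve_design"))  # {'meltdown'}

Two parties that haven't agreed on what counts as an `Event', or on what `caused_by' means, can't exchange such facts at all; per Gruber's remark, the agreement has to precede the sharing.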
__
References
Retraice (2023/01/11). Re113: Uncertainty, Fear and Consent (Technological Danger, Part 1). retraice.com. https://www.retraice.com/segments/re113 Retrieved 12th Jan. 2023.
Retraice (2023/01/13). Re115: Technological Progress,