A Chemical Mind

Nicholas Kircher

Stories of our fascination with the Brain: from medical mysteries, great triumphs and cautionary tales, to great discoveries and tragic failures, conspiracy theories, technology, and more; hosted by Nicholas Kircher (Published every Tuesday AU Time) chemicalmind.substack.com

  1. MAY 7

    Saving "Disorder"

    Note: I am not a doctor (I didn’t even finish High School) and none of this should be taken seriously, let alone be considered medical advice, or in any way accurate, rational, logical, grammatically correct, or even comprehensible, and should probably be ignored by all of humanity. Anyone taking this too seriously might want to get checked out for Autism ahem I mean ENT J/P.
    I’m not usually interesting or popular enough to be written about, but on occasion it happens. This time, it was due to a post I made about being accepted into a “secret society of neurodivergents” at my new workplace. (If you can’t see the light-hearted, tongue-in-cheek nature of this post, you should perhaps consider therapy.) I got into an interesting discussion in the comments, challenged on whether ADHD is just a personality profile, and what exactly makes it pathological. I think we had a good discussion about it.
    Turns out, however, the same individual went to the trouble of interpreting this as me being self-diagnosed, and decided to turn me into the poster boy for everything that is wrong with the kids these days and all their social-media-driven self-labelling rainbow-67-skibidi-toilet nonsense. Apparently I’m not who I think I am?
    Take this fellow, Nicholas Kircher, who posted about being added to a ‘secret’ AuDHD group. Now, those not up with the lingo, this Frankenstein word is a fusion of Autism and ADHD, two of the most conflated and error-prone ‘diagnoses’ of the last 15 years.
    The authors take issue with the whole neuro-divergence thing, and seek to un-diagnose me. They paint me as having succumbed to some TikTok fad, joined a bandwagon, been “influenced” into self-diagnosis.
    Nicholas and the rest of the Neurodivergent bandwagon
    Which, let’s be honest, is pretty funny considering my history (and I don’t even use TikTok). The whole thing is horrifically over-cooked, and although probably not meant to be mean-spirited, it definitely smells like self-righteous silliness. This might be a valuable teaching opportunity. Then again, it might not. Time will tell. So I am posting today to send the whole thing even more over-the-top than is necessary, or responsible, or even legal. For one time only. After this, I’m not spending any more time on the subject, as we all have better things to do. Example of a better thing to do: subscribe!
    First, some history about me and my situation, because nobody asked (and perhaps if anyone had asked, the original article could have been significantly improved, and maybe a lot shorter): I was diagnosed with ADHD and Autism Spectrum in the year 1999 - the same year The Matrix came out - when “Social Media” didn’t even exist yet, and MySpace was still a good 4 years away. Back then, my diagnoses were known as ADD-Inattentive and Asperger’s Syndrome.
    Why was I diagnosed? I was nearing the end of primary school. I’d already repeated one grade, and was failing another. I’d had no success in making any in-school friends or proper connections with others, and I fidgeted incessantly at all times. Some developmental goals I was reaching, but many I was hopelessly behind in. I was not absorbing anything being taught, I was constantly getting lost in time and in space, and for nearly every activity in class, I would somehow draw a blank when instructions were given, having no idea what was happening or what we were doing. 
    I almost never remembered to do homework, or quite frankly anything else unless it was squarely within my “special interest” bubble, and when I did remember or was reminded, I could rarely get it started, and even then I didn’t last more than 5 minutes before I’d crash and burn, hard. I didn’t even complete the IQ test I was given by a psychologist, and never got a score. (I like to joke that I got a zero.) I came home every day to my mum - who was suffering in the throes of a chronic illness - sobbing intensely, because I could not understand why I could not do what all the other kids were doing with apparent ease.
    Every unqualified person and their dog seemed to have an opinion on why this was. The greatest hits:
    * he’s lazy,
    * he’s faking (for attention, apparently),
    * he has a bad/lazy mother,
    * he needs to take more responsibility for himself,
    * he’s not being punished enough (bring out the lash),
    * he just needs to apply himself,
    * he just needs to pay more attention (duh),
    * he’s a designer moron.
    Wait, a “designer moron?” Yes, friends, that was the term used for me by some of my extended family after I received my formal diagnosis. They, too, thought all this fancy label stuff was just to hide the simple fact that I was an idiot.
    I grew up with a single mum, and she worked hard to find a way to help me. She was the only person in my life for a long time that gave a s**t, and she refused to give up on me. I saw several different specialists as we searched for some explanation that actually made sense. Most of the opinions we got in those early days were nonsense - like mum being told she needed to take “parenting classes” - while some of them were outright scams. However, two interesting leads came along.
    The first: perhaps I was having “absence seizures”, a kind of epilepsy that causes a blanking of conscious awareness, without the typical shaking/motor movement symptoms commonly associated with epileptic seizures. Thankfully that one can be pretty definitively tested, and it came back negative.
    The second: maybe I had some combination of attention deficit and/or Asperger’s. This one was - and still is - far from having a definitive test, and very few specialists in Australia at that time were well equipped to diagnose either of them. We got a lucky break however, and managed to get in to see perhaps the leading specialist in these two psychiatric conditions in the country. My mum remembers the moment he came to a conclusion about my case: “Mrs Kircher, I see a lot of kids brought in who are suspected of having one of these conditions, and the fact is, most don’t have it. In the case of your son, there is absolutely no doubt in my mind that this diagnosis fits.” Since that day, it has been re-evaluated at least 4 separate times by independent specialists for various reasons, each time being re-confirmed.
    So let’s just get one thing absolutely clear before we do anything else: I am not one of your guinea pigs for hypotheses about self-diagnosis.
    And back to the beginning, when we mix personality types of both individual and cultural, where the US is an amalgamation of dozens of cultures, what is neurotypical? Ironically, for Nicholas and the rest of the Neurodivergent bandwagon… They are.
    Currently, 20% of the US population and a whopping 53% of Gen Z self-identify as neurodivergent. 53% self-identify. Read that again: self-identify. One person’s self-identity is not the same as my psychiatric diagnosis. I’m not self-diagnosed, I’m not Gen Z, and I’m not American. 
Could have spent 5 seconds to find all that out, but instead of “doing the hard things,” why not just make assumptions and roll with them? And that is also a very typical human behavior because doing hard things is… hard. I bet. What’s more, I’ve lived my whole life with people casting all kinds of aspersions about whether I’m “really” this or that or the other thing, or whether I’m just bereft of moral fibre and/or character, based on some 5-second observation they’ve made of me. This is not the first time - not by a long shot - and won’t be the last. As the New Zealand pathologist Dr Temple-Camp said: “That’s the thing about opinions and arseholes, isn’t it, gentlemen; everyone’s got one.” So I asked Nicholas how he defined typical, but in answering my question, he skirted with vague statements about holding a standard job, maintaining social connections, and navigating daily life without significant distress. Which I’d like to find a single human who does this with frictionless ease. I didn’t realise I was expected to provide some sort of DSM-version of the definition of a word that arose originally as a joke. The term “neurotypical” was actually coined in the late 1990s by the autistic community, specifically by the autism rights group Autism Network International (ANI). It was created as a satirical counterpart to the idea that people with Autism were “neuro-defective.” They wanted a neutral word to describe non-autistic people (whom they playfully referred to as having “Neurotypical Syndrome”) to shift the framing away from autism being a “disease” and neurotypicals being the “healthy default.” The author’s actual question was this: What, specifically, is Neurotypical because neither Level 1 Autism, nor ADHD are divergent, given the quantities they exist and they can be easily explained by Personality proclivities, not pathologies. The reason I ask is people keep trying to tell me I’m both Autistic (I’m not, I’m analytical, disagreeable, and not nuerotic) and ADHD (I’m not, I’m Intuitive, fast thinking, and middling contientiousness.) What I’m saying I’m an ENT J/P… which is one of sixteen personality categories, totally neurotypical… Here’s the actual part of my much longer response to their overall comment that partially dealt with the author’s specific question, framed within the context of how neurological conditions and personality constructs relate and differ: So if you’re able to hold a standard job, maintain social connections, and navigate daily life without significant distress or the need for specific accommodations, you may still exhibit traits common to people with Autism or ADHD without being impacted by the symptoms which make it pathological. A diagnosis is only necessary when the underlying neurological factors create barriers to typical functioning. I wasn’t even trying to explicitly define anything, merely explain how one m

    22 min
  2. JAN 23

    The Erotic Button: A Case-Study

    Addiction Resources: https://chemicalmind.substack.com/p/addiction-support-resources
    Ko-Fi Link: https://ko-fi.com/dopamine
    Note: This is a true story, taken from one of the many fascinating case reports of medical literature. This is a fairly famous example in the annals of addiction neuroscience, and it reveals in stark colours the counter-intuitive nature of addiction.
    She couldn’t stop herself. She had to push the button. She kept it on all day, dialling the power knob between 75% and 100% in rapid bursts. All she could do was blast the electrodes buried deep in her brain, triggering an experience she called “pleasant discomfort,” a kind of “erotic sensation” as though her g******s were sending signals to her brain at kilowatts of intensity.
    By all objective measures, though, she was disintegrating completely. She would exhibit the physical symptoms of stroke. She would become extremely thirsty. Her verbal IQ would drop by a whopping 25 points. She even developed an ulcer on her index finger, the one she used to tune the power dial rapidly. This was not a particularly pleasant experience. Indeed, it was painful. And yet, she stopped going outside. She stopped talking to other people. She even stopped bathing, and eventually, eating. She could not pull herself away from the button. There were times when she’d beg for her family to take it away, and they would; only for them to give in when she went through inevitable withdrawal symptoms, and demanded its return. It was an addiction like any other; except this was not chemical. It was electrical.
    In 1954, James Olds and Peter Milner released a remarkable study that would go on to shape our understanding of addiction. They implanted electrodes at various locations into the brains of rats, and gave them a series of levers they could press to stimulate the different electrodes. The behaviour of the rats demonstrated the existence of specific locations in the brain which, when stimulated, could produce profound addiction: the rats would go to stimulate them again and again and again, until collapsing from exhaustion. Such experiments would have been completely unethical if done in humans, and would never be approved by a review board. So, how had this woman, in the 1980s, ended up in the position of one of Olds and Milner’s rats?
    Years earlier, the 48-year-old New Yorker had suffered a herniated disc right at the base of her spine, between L5 and S1; an excruciating experience, leaving her with severe sciatica. Pain would surge through her legs like a bolt of lightning. Her lower back was in constant agony. The pain and suffering were constant and intractable. Eventually it became too much; she obsessed about finding a way to halt the pain. Although of limited effect so far, opiates - specifically, methadone - were the only thing keeping her functional at a basic level. That wasn’t ideal. She had a history of alcohol abuse, and knew the dangers of addiction all too well. So she had turned her body into a veritable pin-cushion, subjecting herself to any and all ideas, in search of an alternative.
    She had seen so many specialists. They tried pharmacological treatments of all kinds, including antidepressants, atypical analgesics, and more. Every drug wore off quickly. Massages and exercises had no effect. Acupuncture was useless. Cognitive Behavioural Therapies made no difference. The TENS units - transcutaneous electrical nerve stimulators, which deliver current through the skin - achieved nothing. 
    They tried surgically removing the back plates of 4 of her vertebrae to relieve pressure on her spinal cord. They denervated the area at the base of her spine, and even severed specific nerve fibres in her spinal cord they believed were transmitting pain signals to the brain. All of it failed. All this medical science, all these doctors and specialists, all this money, and they were at a loss. Here they were - in the 1980s - and they had no idea what was wrong or how to fix it. One well-meaning specialist had an idea: if we can’t seem to solve it from the nerve side, perhaps we can find another way through the brain itself.
    Electricity for pain management is, remarkably, far from a new idea. There really is nothing new under the sun. The first written description we have found of using electricity to manage pain was by the ancient Greeks, where both Plato and Aristotle described the use of the “Torpedo Fish” - a kind of electric ray, after which the submarine weapon is named - as an aid in curing ailments; but it was the Romans who wrote specifically about its use in treating headaches and gout.
    Some 2000 years later, in the 1950s, we began to seriously experiment with delivering electricity directly to the brain. The implanting of electrodes directly into neural structures could bypass faulty central nervous system wiring, not to mention all the chemical filtering which made pharmacokinetics such a challenge. Early experiments with this method as a treatment for chronic pain seemed to yield some positive results. Deep Brain Stimulation (DBS) was, indeed, originally meant to treat pain. Later, people like José Delgado would begin experimenting with the technique to treat movement disorders, epilepsy, and paralysis.
    DBS has a storied and controversial history. José Delgado, among others, would become a target of crazy conspiracy theorists like Peter Breggin, who for their own political purposes ran a campaign of lies designed to paint Delgado in particular as an evil mind-control villain, eventually chasing him out of the United States. Despite this, by the 1970s, DBS for pain was quickly gaining traction, with companies like Medtronic setting up neurological divisions for manufacturing devices to control electrode stimulation. Case reports started to come out showing positive results for pain with electrodes implanted into the thalamus.
    So, although still quite a new treatment, it wasn’t as hare-brained an idea as it might seem to us today, though it still sounds utterly counter-intuitive: stimulating a brain region that is signalling pain in order to suppress that pain. However, it was based on a historical precedent, or perhaps, neurological dogma: that stimulation mimics ablation (destruction). The thinking was that electrical stimulation could therefore be used as a reversible alternative to destroying those neurons entirely. If it worked, we might never need to surgically damage or destroy anything in the brain, whether in cases of severe intractable epilepsy or chronic pain. It could eliminate the use of lobotomies, which were still popular in the United States at the time.
    So in ~1979, specialists handling the case of this woman’s intractable pain decided to give it a try. They made 2 attempts. 
    First, they implanted an electrode in the right Posterior Medial Thalamus, the region of the brain mostly concerned with your emotional response to a sensory input, e.g. being startled by sudden sharp pain. When they turned on the electric current, she felt a warm flush spread across the left side of her body, and the pain dissipated. For 6 months, this seemed to work remarkably well. However, like everything else, it was only temporary, and soon the device gave her no relief from pain at all. They left the electrode in place, dormant, and continued to pursue other treatment options.
    Four years later, they made another attempt. The target this time was the left Ventral Posterolateral Nucleus, the region which helps to determine what a sensation is (touch, pressure, temperature, pain) and where it’s coming from; it then relays the information to other regions of the brain relevant to that specific sensation. Again, it seemed to have a reasonable impact on the pain. She initially reported a tingling “paraesthesia” down her left side, but that was it. For a few months, the pain became manageable again.
    Then, there was something else. She had begun to experiment with the settings on the stimulator. She found that by manipulating the controls in a certain way, she could induce these peculiar “erotic sensations,” and reported this to the clinicians shortly following the procedure. Unfortunately, this did not ring any alarm bells.
    When the pain inevitably returned a few months later, her need for stimulation didn’t end. Indeed, it was only beginning. She found herself continuing to need more and more stimulation. She would keep the device on, set to 75% power, and every few minutes rapidly turn the dial between 75% and 100% power. It brought her to the edge of climax, but never over the precipice. She was always just out of reach of satisfaction.
    During these intense stimulations, her body would writhe in severe discomfort. The left side of her face would droop as though she were in the throes of a massive stroke. She would experience paroxysmal atrial tachycardia, and the most extreme thirst leading to psychogenic polydipsia (compulsive water drinking to the point of toxicity). There’s simply no way this could have been pleasure she was experiencing, and even if it was, it would have paled in comparison to the devastation and pain being inflicted upon her body and brain.
    Somehow, she managed to continue like this for 2 years. Her addiction was so total, she spent most of that time in complete inactivity, save for occasions when her obsession with stimulation drove her to tamper with the device in an effort to further increase its amplitude. IQ tests performed shortly before and shortly after implantation of the electrode, and again by the authors of the case study 3 years later, reveal the utter devastation wrought on her brain. She scored 99 on full-scale IQ both before and after implantation - a reasonable score - but by the time of the case study, she had lost a whopping 11 points. Her memory qu

    18 min
  3. JAN 9

    Sacrificing yourself to save your life

    “The common conception that the brain is primarily for thinking, or other cognitive processes, is potentially misleading... neuroscience may benefit from a theoretical structure that centers on basic questions of how the brain coordinates and efficiently regulates the body.”
    What if our entire understanding of the purpose of the brain is wrong? By default, we have come to believe the main task of a brain is to think, to wield intelligence, to learn and remember and feel. Is this not what our own brains do the most? It turns out it’s not nearly so simple, and by focussing on just our own perceptions - our conscious mind - we have completely ignored perhaps its most fundamental role: regulation of bodily systems to keep us alive.
    I’m talking about the system of allostasis, or predictive regulation. Unlike homoeostasis, where the body reacts to problems when they arise, allostasis is all about predicting the body’s needs ahead of time and adequately preparing for them, regulating internal systems to manage energy and resource-use. It’s tasked with efficient logistics planning, ensuring the supply is available at times when demand is predicted to rise. This makes for a very obvious evolutionary purpose to the brain: it likely evolved as a means of managing the complex biochemistry of large multi-cellular organisms, and consciousness arose from the eventual complexity of its allostatic functions.
    Although not a new idea, allostasis seems to be going through a bit of a revival. According to an article recently published in Neuron Volume 113 (Issue 24), Jordan Theriault et al. argue that thought and consciousness might be a case of exaptation, a kind of happy accident of evolution which turned out to be useful in its own right. If we adopt this "allostasis-first" lens, the very things we traditionally call "the mind" - our emotions, awareness, even sensations - appear to be low-resolution readouts of our metabolic state. Your mood, for instance, may function as a low-dimensional "allostatic barometer," a summary of how efficiently your brain is managing your body’s internal energy budget.
    Even "stress" loses its purely psychological weight. In this biological framework, stress is simply the brain predictively issuing commands to deliver glucose and oxygen to your tissues in anticipation of a metabolic outlay. It is a value-neutral preparation for action should action be necessary. This is not to suggest that these barometers and predictions are always correct. Like any sensor, they can be fed faulty data or be improperly calibrated. Like some safety systems installed on aircraft, a bad sensor can kick off a distinctly inappropriate response (see: the Boeing 737 Max).
    Still, this regulatory priority is etched into the very architecture of the human cortex. The brain is organised along a structural gradient that stretches from a "limbic core" to our primary senses. At this limbic core, signals are abstract and low-dimensional compressed summaries of the body’s collective needs. As these signals flow outward toward the motor and sensory systems, they "decompress" into the specific particulars required to move a muscle or adjust a heart rate. Simultaneously, the firehose of raw sensory data coming from the world is compressed as it travels inward. 
    It is stripped of its noise and categorised into meanings, with the most salience being placed on its allostatic value, i.e. “what does this sight or sound mean for my survival?” Ultimately, this shift in perspective dissolves the artificial wall we have built between the mental and the physical: we are not merely a mind inhabiting a body. The mind and body are a unified system.
    What happens if we look at cognitive decline using this allostatic lens? Take Alzheimer’s, one of the main examples cited in the paper: the perspective shifts from a system breaking down, to a series of desperate, yet calculated, sacrifices. Higher-order cognition uses up a fair bit of energy, but is non-essential for staying alive. It also produces a lot of waste products, due to the rather inefficient anaerobic metabolism of glucose into ATP (up to 15 times less efficient than the aerobic alternative). These waste products need to be removed regularly, mostly via the bloodstream, otherwise they can build up and cause havoc. However, as we age, our vascular health naturally declines and becomes less efficient at this waste-removal task. So when a system is faced with a situation where the waste produced by this high-energy, higher-order cognition begins to outpace the brain’s ability to remove that waste, the authors suggest we might be forced into an “allostatic trade-off”, with the allostasis mechanism automatically rationing the amount of glucose supplied to the brain to reduce the overall demand on the vascular system.
    If brain waste clearance is compromised, then it may be allostatically beneficial for the brain to downregulate glucose metabolism by restricting the transport of glucose into neurons and across the blood-brain barrier. Consistent with this, in older adults, glucose uptake and glucose transporter density (GLUT1 and GLUT3) decline following amyloid accumulation but before the appearance of cognitive decline.
    That trade-off might allow the body to physically live longer than it otherwise would, but at what cost? Alzheimer’s results in the gradual destruction of the inner self; cognitive ability, memories, beliefs, even volition, eventually slip away. However, the allostasis model does suggest that approaching Alzheimer’s as an energy-management syndrome centred around glucose and the function of the vascular system in general might lead to better treatments. Perhaps, if we work on improving vascular system health, as well as finding ways to clear up debris as we presently do, this combined, systems-driven approach might give us a fighting chance.
    Everything psychological that a brain accomplishes—sensing, perceiving, thinking, feeling, deciding, acting—can be considered a means to the end of its core ongoing task: coordinating and regulating internal bodily systems, as an organism navigates a constantly changing but only partly predictable world.
    The mind is a prediction machine. That’s its purpose: to predict what happens next, what’s coming down the pipeline, where and when we need energy levels to be at their highest readiness and when we will need to rest. The mind is all about forward-projection, and from birth it is continuously training against the incoming data to identify sequences and patterns in time. As a fellow Substacker mentioned recently, it’s possible that consciousness arises only when the subconscious mind is inadequate to the task of managing one particular system in a particular context. Take breathing. 
    You breathe autonomically, and are entirely unconscious of it most of the time. However, now that I’ve mentioned it and brought it to your attention, it will have entered your conscious awareness. You’re now aware of the action of your diaphragm, as it works to expand and contract the lungs. You can now choose to alter its action, slow it down, speed it up, or hold your breath for a period of time. This also happens at times of physical exertion, or when your head is underwater, or when your conscious mind perceives the air around you to be unsafe to breathe: your conscious mind takes over to analyse the situation and decide when and how to take your next breath. Soon, likely in the next few minutes, it will return to subconscious autonomic action, and your conscious mind will focus on other things.
    If allostasis is the brain’s primary job, then it must prioritise the body’s internal state over external data. This leads to a phenomenon called sensory gating. Emerging evidence suggests that our external senses - vision, hearing, and touch - are synchronised to our internal rhythms. We’re not perceiving the world at a constant, steady rate; the brain “samples” the environment in time with the cardiac cycle. In fact, so much about our perception is aligned with such cycles. For instance, during systole (when the heart contracts and pumps blood), we are statistically slower and less accurate at detecting visual or auditory stimuli. The brain actually suppresses external input during these moments of high internal pressure, effectively “blinking” our sensory awareness. We saccade - rapidly move our eyes - more frequently during systole, but we fixate and actually process the world during diastole, when the heart is at rest. Breathing acts in a similar way, functioning as a global “oscillatory pacemaker”. It synchronises neural signalling across the brain, impacting everything from memory consolidation in the hippocampus to how we process emotion and make decisions.
    Self-regulation isn’t a silo, either. Humans are social animals; we have evolved to “outsource” some of our allostatic regulation to others. This is known as Social Allostasis, and resembles the concept of co-regulation. When we are in close, trusting relationships, our companions help regulate our heart rates, breathing, and even our core temperature, effectively reducing the metabolic “tax” on our own systems. The sense of safety these relationships provide allows us to operate at a reduced level of vigilance. This explains why loneliness is so physically toxic; without a social network to help distribute the load, our brains can remain stuck in a highly costly state of vigilance, which eventually wears down the system. In the context of Alzheimer’s, it’s one way to explain why people with strong social support show a slower rate of cognitive decline; the social environment can help regulate a struggling internal energy budget.
    In short: we see and hear with our hearts, think with our lungs, and heal with our friends. Thank you for joining me on today’s short dive into some new resear
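    The contrast drawn above between reactive homeostasis and predictive allostasis can be made concrete with a small toy model. The sketch below is purely illustrative and entirely my own construction (the controller functions, gains, and demand trace are invented for demonstration and are not taken from the Neuron paper): a homeostatic controller only corrects a supply shortfall after it appears, while an allostatic controller uses a forecast of upcoming demand to provision supply in advance.

    ```python
    # Toy illustration of reactive (homeostatic) vs predictive (allostatic) regulation.
    # All numbers and names are made up for demonstration; this is not a physiological model.

    def homeostatic_controller(supply: float, demand: float) -> float:
        """React only after a deficit appears: correct the current error."""
        error = demand - supply
        return supply + 0.5 * error  # sluggish correction, always lagging behind demand

    def allostatic_controller(supply: float, predicted_demand: float) -> float:
        """Provision ahead of time for the demand the system *expects* next."""
        return supply + 0.8 * (predicted_demand - supply)

    # A predictable, repeating pattern of energy demand (e.g. waking, exercise, rest).
    demand_trace = [1.0, 1.0, 3.0, 3.0, 1.0, 1.0, 4.0, 1.0]

    reactive, predictive = 1.0, 1.0
    for t, demand in enumerate(demand_trace):
        next_demand = demand_trace[(t + 1) % len(demand_trace)]  # the "learned" forecast
        reactive = homeostatic_controller(reactive, demand)
        predictive = allostatic_controller(predictive, next_demand)
        print(f"t={t} demand={demand:.1f} reactive_supply={reactive:.2f} "
              f"predictive_supply={predictive:.2f}")
    ```

    Run it and the reactive controller lags every surge in demand, while the predictive controller has already shifted supply toward where demand is about to be - which is the intuition behind treating the brain as a predictive regulator rather than a reactive thermostat.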

    12 min
  4. 12/24/2025

    Can A Pill Change Your Morality?

    Coffee Fund: https://ko-fi.com/dopamine
    Note from Author: Yes, the title is slightly click-bait; but only slightly. Welcome to another deep dive.
    Morality (from Latin moralitas ‘manner, character, proper behavior’) is the categorization of intentions, decisions and actions into those that are proper, or right, and those that are improper, or wrong.
    Let me ask you a simple question: would you ever commit murder? We need to dig deeper. Let’s clarify a bit: would you ever commit murder if you knew there would be no negative consequences? What if the proposed victim had brutally abused and murdered your children, and would not face justice in any other way? If you answered either “Yes” or “No” to any or all of these, you’d actually be wrong. The only possible answer to these is “maybe”. There is no way to know what you will or will not do in any future situation. However, if I were to ask: “is it ever morally acceptable to commit murder?” the answer is quite simple: No. Morally, it is never OK to commit murder, despite some backward parts of the world remaining committed to capital punishment (which is, indeed, a type of murder).
    The distance between what we know as being morally right or wrong and our commitment to that moral judgement in our actions can be vast and incredibly dynamic. One easy example is this: is it ever morally OK to lie? Technically, the answer here is more of a “maybe,” but even so, you negatively judge those that you believe to have lied for personal gain, while simultaneously fudging the information on your résumé. Don’chya?
    The presence of a moral sense is consistent with a focus of human evolution on mechanisms of individual behavior that maximize survival in social groups. Evolution has promoted social cooperation through emotions against harming others, a need for fairness and the enforcement of moral rules.
    - Mario F. Mendez, The Neurobiology of Moral Behavior: Review and Neuropsychiatric Implications (2009)
    Our moral values feel universal and immutable. We struggle to talk about them or think about them in any other way. We disdain others who don’t share our moral values, and often dehumanise them entirely. The Imperial Japanese in WWII believed dying for the emperor was the single highest moral good a human being could achieve on earth. Compare the Yankee boys in the Pacific who, although ready and willing to fight and avenge the attack on Pearl Harbor, would have preferred to be home and warm and safe and at peace. They could not comprehend each other at all, and saw each other as less than fully human.
    When starting and waging war it is not right that matters, but victory. Close your hearts to pity. Act brutally. Eighty million people must obtain what is their right. Their existence must be made secure. The stronger man is right. The greatest harshness.
    A troglodyte with a toothbrush moustache.
    Interestingly, the brutality Hitler spoke of to his Generals shortly before the war was never intended for the soldiers of the nations at war with him (except for the Russians, of course). No, that brutality he spoke of was meant only for a specific subset of people: those considered “racially impure” or “defective”, such as Jews, Gypsies, Slavs, the disabled, and many more men, women and children. The Germans would go on to murder millions of them with industrial efficiency. Considering the fairly frequent self-justifications made by officials, they knew it was wrong. Hitler knew they would know, and demanded they operate with black hearts regardless. 
    Despite having some of the most potent chemical weapons ever made (even to this day) and having mass stockpiles of them ready to be used, Hitler refused to deploy them on the battlefield, not out of fear of retaliation, but as a moral judgement. Imagine that.
    The Japanese saw it as quite acceptable to decapitate captive enemy soldiers with a samurai sword, and did so frequently. They saw this as building up their own “Sei-shin”, or “Fighting Spirit”. The Allies saw this as barbaric; they preferred to do their slaughter from a great distance and with superior technology, such as the fire-bombing attacks on Japanese cities which burned alive around half a million men, women and children. Indeed, it was perhaps this idea that the Japanese had deliberately crossed some kind of universal red line into immorality in their waging of war which may have helped to loosen the Allies’ own inhibitions on morality, allowing them to adopt one of the most barbaric war-fighting tactics: terror-bombing.
    More recently, there have been debates about how pharmacology may one day provide “moral enhancement”, in which certain drugs could reinforce a specific set of moral ideals and behaviours. The problem is: whose moral ideals and behaviours, exactly? It’s hard for many people to imagine that simply changing the balance of certain chemicals in the brain and body could change how we make moral decisions, but in fact, they already do. Research in 2014 found that we already have such drugs, and they’re already impacting our morality so much more than we could imagine; beta-blockers, for example, were found to significantly reduce racial bias in tests with only a single dose. So, if certain drugs can strengthen our moral inhibitions, there must be others which can weaken them, right?
    Weakening the Will
    In the late-2000s, a teacher from the UK was caught downloading sexually explicit images of minors. During the investigation and trial, it was found that the medications he was on - to treat his worsening Parkinson’s Disease - had been a direct cause of this paraphilia, which had not been present before treatment. The medications in question - primarily levodopa (a dopamine precursor), but also the many others which contribute to an overall dopamine-agonist effect - have long been linked to the development of various psycho-social/sexual disorders, but this was a fairly landmark case which essentially found that a psychoactive medication was the causative agent of a person’s offending. This leads one to wonder: how exactly does this occur, and in what circumstances?
    As far as I am aware (IANAL), individuals that go on drug-fuelled binges and commit crimes are considered responsible for their actions under the law. The reason is fairly straightforward: despite the mass of information put out about the impact on decision making by various drugs - alcohol, cocaine, methamphetamine, PCP, and more - choosing to take them anyway means you assume full moral and legal responsibility for your actions while under their influence. However, in a case like this, the substance was medically necessary (quite literally life-saving), properly prescribed, and the potential influence on decision-making was not widely known.
    All medications which have an effect can influence us in one way or another; even placebos which have no biological or chemical effect can change our perceptions and decision-making. 
Paracetamol reducing pain can result in a better mood, with decision making patterns that would be different to those made while in pain. Beta-blockers like Propanolol inhibit instant amygdala-driven fear responses, significantly reducing things like racial bias. Then, there’s dopamine. I’ve long been fascinated with the Nucleus Accumbens (NAcc). It’s a tiny little blob of brain cells (two of them actually, on both sides of the brain). It’s essentially the seat of motivation and goal-directed valuation. My friend Kent C Berridge developed the Incentive Salience theory of motivation, in which the NAcc plays its part by calculating how much value something has. This calculation is done using dopaminergic neural circuits running along the mesolimbic pathway. When we think of an action that might have some value - regardless of morality, consequences, effort, or any other cost - a signal is sent out along that pathway. The stronger the signal, the greater the potential value. Inhibition then acts to counter that signal. It is essentially subtractive, performing the other part of the Volition Equation, which is to subtract cost - in effort, risk, social standing - from the original value, thus weakening the signal. In order for an action to be taken, this signal must run the gauntlet of our inhibitions, and remain strong enough to clear a minimum threshold. Anything below that threshold is dropped. Morality can play a part on both sides of the Volition Equation; either adding or subtracting, so on the one hand, doing the “morally right thing” can boost the motivating signal. On the other, it can slam the brakes on behaviours and actions which might be detrimental to those social measures. Morality evolved to help us maintain conformity with a group, and as a core part of our own sense of self. So, while moral behaviour is an obvious benefit when in a group context, we still often exhibit moral behaviour when alone. Sometimes, this need for conformity can work against us, and lead to horrible and dramatic consequences. Drugs of Brute Force A teenager is hanging out with his friends one day, when one of them - a leader in the group - brings out a crack pipe. The teenager has never seen one before, but he’s able to make an inference on what it’s likely to be. He’s been told, by his parents, teachers, the government, and others, never to try it, because one hit is all it takes to cement a permanent addiction and lead him into a life of ruin, destitution, and early death. His friends, however, are all jovial about it. “Nah that’s all b******t, I can stop whenever I want to. I just like having fun, it feels f*****g awesome. C’mon don’t be a p***y.” The attitude of his in-group - the group with whom he identifies - significantly reduces the inhibitory effect of anxiety about the potential dangers, and his need to prove himself a bona-fide member of his in-group - a core part of
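    To make the “Volition Equation” described above a little more concrete, here is a minimal toy sketch of that arithmetic as I read it. Everything in it - the field names, weights, and threshold - is an invented illustration, not anything from Berridge’s incentive-salience work: an incentive signal, minus inhibitory costs, must clear a threshold before an action is taken, and a moral weighting can sit on either side of the subtraction.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CandidateAction:
        incentive_value: float   # strength of the mesolimbic "wanting" signal
        effort_cost: float       # inhibitory costs subtracted from that signal
        risk_cost: float
        social_cost: float
        moral_weight: float      # positive if "the right thing" boosts the drive,
                                 # negative if morality slams on the brakes

    ACTION_THRESHOLD = 1.0  # made-up minimum net signal needed before acting

    def will_act(a: CandidateAction) -> bool:
        """Toy 'Volition Equation': value plus moral boost, minus all costs and moral brakes."""
        net = (a.incentive_value + max(a.moral_weight, 0.0)
               - a.effort_cost - a.risk_cost - a.social_cost
               - max(-a.moral_weight, 0.0))
        return net > ACTION_THRESHOLD

    # The peer-pressure scenario from the text: the in-group's approval shrinks the
    # perceived risk and social costs, tipping the balance past the threshold.
    alone = CandidateAction(incentive_value=2.0, effort_cost=0.2,
                            risk_cost=1.5, social_cost=1.0, moral_weight=-0.5)
    with_in_group = CandidateAction(incentive_value=2.0, effort_cost=0.2,
                                    risk_cost=0.4, social_cost=-0.5, moral_weight=-0.5)

    print(will_act(alone))          # False: inhibition wins
    print(will_act(with_in_group))  # True: weakened brakes let the signal clear the bar
    ```

    The point of the sketch is only the shape of the computation: nothing about the candidate action changes between the two cases except the inhibitory terms, yet the behavioural outcome flips.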

    24 min
  5. 11/29/2025

    Gods in a Machine

    If you’re interested in further discussion on neuro-integration from all aspects and angles, I’ve set up a discord server here. Come join the fun! WARNING: The following contains MANY spoilers for the show Pantheon (2022) from AMC. You probably haven’t seen it yet, but reading this article will make you want to see it, so I highly suggest you watch it first if you care about spoilers. If not, let us proceed. On the 30th of November 2022, the technological landscape tilted on its axis. OpenAI released ChatGPT, and suddenly Generative AI was the only thing anyone could talk about. We became obsessed with Large Language Models, transformers, and the eerie ability of a machine to predict the next token in a sentence. Just two months earlier, in September 2022, a show called Pantheon aired on AMC. While the rest of the world was about to lose its mind over a chat bot, Pantheon was quietly laying out a roadmap for something far more profound: Computer-Simulated Human Consciousness, or “Uploaded Intelligence” (UI). It contained some of the most realistic depictions of mind uploads, and the potential effects on society, that I have ever seen. It grapples with the hardest philosophical questions head-on, such as the true meaning of consciousness, personality cults, the consequences of immortality and digital death, self-copying, and the universe-as-simulation theory. I don’t think many people saw it when it came out. (I didn’t) Yet there are so many fantastic, mind-bending, life-changing things about AMC’s Pantheon not just in its philosophy, but also the way it depicts cyber-security, software engineering and big data concepts; not to mention the cut-throat world of big tech both within and beyond Silicon Valley. It’s not perfect, but it’s easily the most accurate I’ve ever seen from a show of this kind. Watching it today makes it feel somewhat prophetic, but likely was the result of extremely good footwork by the producers and their team, getting the most up-to-date picture of the inner world of big tech at the time, between 2021 and 2022. Facebook became Meta at the end of 2021 as a result of Zuckerberg’s Metaverse strategy, and you see a lot of those ideas were depicted in Pantheon. We also see clear depictions of things Generative AI are doing today: chat interfaces with digital intelligence, for instance. The digital intelligences in Pantheon weren’t artificial neural networks, but fully scanned and emulated human brains. They call these “Uploaded Intelligence”, or UI (which is confusing in tech, since UI stands for User Interface, but we’ll roll with it.) In the show, a UI is created by laser-scanning a biological brain, layer by layer, down to the stem. The brain is destroyed in the process - vaporised - but stored apparently in its entirety in digital form. This is a human mind, stripped of its biological substratum which is replaced by silicon. The entire connectome is then reconstructed digitally, presumably with simulated sensory receptors; the rest of the central nervous system is not included in the actual scan. Somewhere in between all that, there is some magic fairy dust that allows the full emulation to happen, but since that is still one of the great millennium-type problems of our time, I don’t expect them to have that bit of detail. At one point in the show, a UI produces an 80-page patent in seconds. Although GenAI today would likely screw that up with hallucinations, you can see the resemblance. It feels eerily prescient. 
    However, we’re not here to talk about GenAI, and neither does the show: the real focus is on the Uploaded Intelligence and the ability to fully simulate a human being computationally. This requires a few major assumptions:
    * The brain - that is, the cortex and brain stem - constitutes the entirety of who we are as individuated conscious beings
    * An individual’s behaviours, emotions, and cognition can be simulated in their entirety from a scan of the cortical network using classical and quantum computing platforms
    Before we can even begin to tackle these assumptions, we need to understand what it is to compute, to be intelligent, and to be conscious.
    I Am, Therefore I Compute
    When we perform a mental task, sometimes we are conscious of the effort expended to perform it. Other times, we are unconscious of that effort, and an input gets processed and turned into an output which is re-integrated with our conscious awareness sometime later. To us, these results arrive as flashes of inspiration or insight; that is the moment of reintegration. In truth, the brain was likely working on that problem for a period of time without you consciously being aware of it. These unconscious computations are presently done in our biological circuitry. Technically, the physical medium of computation doesn’t matter, and could be electronic. However, there is an inherent problem in viewing the brain as equivalent to electronic circuits.
    Classical computing and electronics are based on gates. Gates allow you to perform simple, but exact, operations on incoming electrical signals. For example, an OR gate takes 2 inputs, and so long as at least one of those inputs is receiving a signal, the OR gate outputs a signal. A simple way to model this would be:
    0 OR 0 = 0
    1 OR 0 = 1
    0 OR 1 = 1
    1 OR 1 = 1
    The number 1 denotes an electrical signal, while 0 is no signal. Meanwhile, an AND gate takes 2 inputs, and only outputs a signal if both inputs have a signal:
    0 AND 0 = 0
    1 AND 0 = 0
    0 AND 1 = 0
    1 AND 1 = 1
    Gates like these feel intuitive to us. They follow a simple logic. They’re also exact and about as deterministic as it gets. They have no hidden influences outside the two expected inputs which can affect the result; so 0 AND 1 should never result in 1, just as 2 + 2 should never equal 5.
    Brains, and biology in general, are nothing like that. The thing about biological computation is that it is fuzzy. For most of us, doing novel arithmetic in our head is a combination of heuristics learned from repetition in similar tasks, which we use to get a sense of approximating a value; then more heuristics refine it down until we have a value in mind that we feel confident enough about. Let’s do a quick experiment. Solve the following 2 problems:
    * What is half of 10,000,000?
    * What is half of 8,626,400?
    Which one required more time/mental energy, the bigger number or the smaller one? If we followed a purely computational way of thinking about things, our expectation should be that the bigger number would be more computationally expensive to calculate than the smaller one. However, for the human brain, it’s not dependent on the size of the numbers we’re working with, but on their composition. In the first problem, although the number was larger, its composition was vastly simpler, made almost entirely of zeroes. To solve it, we could set aside the six trailing zeroes, and then the problem becomes “What is half of 10?” The second, smaller number had many more non-zero digits, meaning we could not reduce it in the same way. 
    Instead, our natural inclination is to solve for each non-zero digit separately: “What is half of 8? What is half of 6? What is half of 2?” and so on. 1 problem instantly turns into 5.
    “Therefore, our brains are not computers, therefore, our brains cannot be simulated by computers.” Woah, hold up there cowboy, not so fast. Plenty of things which are not computers are simulated on computers literally all the time in every field ever; we just haven’t simulated literally everything in the universe that exists. This argument, that the brain is not a computer and therefore could never be modelled by one, always drives me a little insane: just because it doesn’t follow gate-based logic does not mean it is not performing computation, and does not mean that it cannot be simulated computationally. It is, and it can.
    The problem here is one of dimensionality. To simulate the human brain, you need way more than merely its connectome. We know this from simulations of C. elegans, the nematode worm every member of which has exactly the same number of neurons: 302, no more, no less. We have been working to simulate its entire set of known behaviours for decades computationally, and we have made progress; but consider how incredibly simple C. elegans is, and yet we still haven’t figured it out. How complicated can a near-microscopic worm possibly be? Some have reached the conclusion that the complexity is not in simulating the worm, but rather, simulating chemistry itself.
    Chemistry is, in my opinion, the most sophisticated and most powerful phenomenon in the physical universe. Chemistry is like the operating system, the essential firmware, upon which the software of human minds can run. Even firmware requires something firmer: hardware. That’s quantum mechanics. That’s effectively the architecture upon which the firmware must execute. Indeed, if the answer is that we need to simulate chemistry itself, then think of our predicament this way: imagine you were trying to simulate a classical gate-based Turing machine on some highly exotic computer, and you only managed to implement AND and OR gates; then you proceed to try running DOOM on it (as is tradition). You wouldn’t get very far, would you? Technically, we can build a complete computing machine using the “NOT” operator in combination with either “AND” or “OR” gates, but we don’t even know the NOT operator exists, let alone what a universal Turing machine even is. That’s more or less where we are in the grand scheme of things when it comes to simulating all of chemistry.
    I Am, Therefore I Intellect
    Another very common assumption made by just about everyone is that the ability to think implies intelligence. It’s also extremely common to conflate intelligence with emotion, with motivation, with the survival instinct. This is part of the problem which has often taken debates on simulated
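    Returning for a moment to the gate logic above: here is a minimal Python rendering of the same truth tables, plus the universality point the post makes, that NOT combined with AND is enough to reconstruct OR (and, by extension, the rest). Treat it as an illustrative sketch only; the function names are mine.

    ```python
    # The exact, deterministic gates described in the post: 1 = signal, 0 = no signal.
    def NOT(a: int) -> int:
        return 1 - a

    def AND(a: int, b: int) -> int:
        return a & b

    def OR(a: int, b: int) -> int:
        return a | b

    # Universality in miniature: OR rebuilt from NOT and AND via De Morgan's law,
    # OR(a, b) == NOT(AND(NOT(a), NOT(b))).
    def OR_from_not_and(a: int, b: int) -> int:
        return NOT(AND(NOT(a), NOT(b)))

    for a in (0, 1):
        for b in (0, 1):
            assert OR(a, b) == OR_from_not_and(a, b)
            print(f"{a} AND {b} = {AND(a, b)}   {a} OR {b} = {OR(a, b)}")
    ```

    The fuzzy, heuristic halving strategy described in the post is exactly the kind of computation that does not reduce to a handful of such gates in any obvious, line-by-line way, which is the author's point about dimensionality.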

    23 min
  6. 10/27/2025

    The Illness That Didn't Exist

    “So far, there is no convincing evidence for Abdominal Migraines. Migraines just don’t work like that.” These were the words spoken to me by a Head of Neurology; a very accomplished man at the top of his career, running one of the most prestigious Neurology centres in the country. He was smiling paternally at me from across the desk, his arms leaning against it and his hands interlocked in front of him, his suit and tie practically glistening with the importance and prestige that oozed from every fibre of his being. I, the distinctly unimportant, uneducated, scruffy-haired kid in torn jeans and t-shirt that I was, glistened mostly with nervous sweat as I shifted uneasily in my seat, adjusting my direction of lean from left to right. I never sit straight up in any chair ever; I’m always tilted somehow. Just more comfortable that way. “Ok, well, what do you think it could be?” I asked. I knew he would have no good answer. Nothing I hadn’t already tried. I was right. “Sounds like an allergy, you should see a dietician.” It’s hard to tell someone whose expertise rightfully deserves respect that they are wrong, especially when you don’t have the benefit of all those many years of medical school, real-world experience and countless citations, awards, and grants to your name. I can’t help but cringe to the depths of my soul when I read stories of folks who are into homoeopathy proudly proclaiming victory over medical science, because “it worked for me!” and so I have developed a habit of deferring to the experts, even when I doubt them. However, not all experts are made equal, and not all illness is necessarily found in a diagnostic manual either. The reason I was even here, in this room, speaking to this highly accomplished medical professional, had nothing to do with abdominal migraines, or migraines at all for that matter. Several months earlier, I’d had a tonic-clonic (grand mal) seizure. They were trying to find out if I had epilepsy (thankfully it was my first and so far only seizure, and no epilepsy was found). While I was there, I decided to ask them about something my doctor had mentioned to me. These were experts in the field of brain-things, surely they’d know something about migraines. Right? The fact is, I had gastrointestinal problems literally my entire life, and they go through periods of variation, kind of like “phases”. Y’know, like one day you discover them dressed up all in black, smelling of cigarette smoke and listening to My Chemical Romance. “It’s just a phase.” Sometimes, a phase would manifest as episodes of excruciating abdominal pain. Sometimes, it would be bloating and general discomfort. Most of the time, it included nausea. Nausea was the worst, particularly because I had a fear of vomiting (something called “emetophobia”). It would often send me into a spiralling panic attack, forcing me to pace the floor, back and forth sometimes for hours, controlling my breathing and repeating little “safety” behaviours to myself. Though certainly unwelcome, it was never a huge concern. That is, until I began working full time. There were days when I would be scared to get on the train in the morning when nauseous. A couple of times, I had to get off at a stop part-way to work and call someone to pick me up and take me home again. Fear and tunnel-focus can make us do weird stuff. Since I was young, it had been drilled in to me that if I ever felt like I was in trouble or having a crisis at a train station or some other public place, that I should just ask for help. 
I don’t know if you have ever tried asking for help when your stomach is turning itself in knots and you are in a full-blown panic, possibly on the verge of ejecting its contents: I did, at a central train station, and I got nothing but confused, bemused and mildly annoyed apathy. In all fairness, what are they supposed to do? Then again, what was I supposed to do?? My mind returned to the Neurologist’s office, and his suggestion of a dietician. “I already tried that.” A single eyebrow on the Neurologist’s face migrated north, followed eventually by the other eyebrow, before the whole face gave in to that expression people use when they no longer want to bother. That, or he didn’t believe me. I mean, I was so skinny and pale and young, I bet he thought I just needed some protein, grit, and a tan. “Well, if it’s not dietary, then it’s almost certainly psychological. There is, medically speaking, nothing wrong with you.” I realised this conversation wasn’t getting anywhere, and ultimately it didn’t really matter. I had suffered this for so long, I had come to assume it was going to be my life now. I was just being naive, thinking that there might be hope for me. That maybe there might be a legitimate explanation for my infuriatingly inexplicable malady. My relationship with food had never been what one might call “normal”. I’d always been skinny, mostly because I was incredibly active and athletic, but also because I just wasn’t interested in food as a thing. I ate because I was hungry, and only to satiate that hunger. For the past few years leading up to this conversation, my stomach issues had escalated significantly; that nausea which used to stop by just to visit before leaving again, turned into something called cyclic vomiting, with cold sweats and excruciating abdominal pain. It would turn up seemingly out of nowhere, though it was more likely during some anxiety-inducing event; it wasn’t consistent in that regard. I went to hospital one time, thinking I was dying. I will never forget the looks of complete disdain from the hospital staff on that particular day. I was in total crisis, and I was made to feel like a fool for seeking medical attention. Thankfully that was the only time I had such an experience with a hospital (though perhaps that’s due to my avoiding them.) Much of my medical history was psychiatric: ADHD, ASD, Panic Disorder and general Anxiety. Autism Spectrum Disorder has a lot of overlap with gastric discomfort and upsets, so for a while we thought maybe it was just more of that manifesting. My father had died of Crohns disease when I was 20, so we were also on the look-out for any signs that I might also develop the disease, but no sign of that was present. Doctors had looked everywhere, poked and prodded, taken nearly my body weight in blood to be run through every test, had scopes of my gastrointestinal tract both up and down. They looked for cancer, diabetes, thyroid issues, drugs, various types of flu, they even wondered whether I still had dormant malaria from the times I had contracted P. Falciparum during my time in the Solomon Islands. They looked for Hepatitis, Gastroenteritis, Meningitis, Strep, they even checked me for ticks. I was checked for Toxoplasmosis, considering my lifelong history with cats. They looked for gastric ulcers, appendicitis, liver function, kidneys. They did an MRI on me. Actually, they did 3 of them. Nothing. Not. One. Thing. Was. Found. And yet, I was still losing weight, despite already being the skinniest guy I knew. 
It horrified me. I was struggling to keep food down. I was gaunt, pale, a bag of bones. One day in the street, I chanced to bump into someone I hadn’t seen for a very long time, and they were shocked when they recognised me. “Are you ok?” I remember them asking. “Are you... like, sick?” I looked like a cancer patient. I didn’t know how to explain. I couldn’t really say “Ok, sit down, this is gonna take a while”. So I shrugged. “It’s fine.”

It was the evening before my neurology appointment that I had seen my GP. I had turned up to the clinic covered in sweat, trembling like mad, gaunt, and emaciated. It felt like an attack of the flu, but there was no virus. I was nearing a point of no return. If I couldn’t escape this cycle, I was seriously considering ending it all.

After listening to my entire medical history, I remember him sitting back in his chair, hands folded across his chest, staring at the wall. The look on his face was utterly perplexed. It was a look I hadn’t actually seen before in any medical professional. It was strangely reassuring. Perplexity meant consideration. It meant he was seriously thinking about my situation. He was running it through in his mind, peering into his experience and education, searching for a glimmer of light somewhere in the darkness. He was taking me seriously.

“Hmm...” He broke the silence after a minute or two. “It’s a long shot, but from what you’ve told me, there’s only one thing you might not have tried yet.” My ears perked up. “Have you heard of abdominal migraines?” I hadn’t. “This is unlikely to work, but I think we should try you on a simple beta blocker. The good thing is that it has almost no side-effects, and we can stop it any time without harm. But there is a tiny chance that it might help, so we can just see. How does that sound?” I jumped at it. A tiny chance that I hadn’t yet taken was a chance worth taking. I was booked in to see him again the next week.

When I came back to see him, I nearly cried. It was the first time in years that I had gone a whole week without nausea. No nausea whatsoever. No vomiting. I could keep food down. I hadn’t had a crisis on a train while commuting. No crises while at the office. I could even eat yoghurt and I felt absolutely bloody fine. “You really don’t know what you’ve done for me”, I told him. My voice was breaking. I didn’t want to come across as melodramatic, but it was difficult to avoid: this man had saved my life. Somehow, a simple idea, and a simple remedy, had cured me of an illness that had been utterly intractable, an illness which had eluded so many more highly-paid and highly-respected experts, many of whom preferred…

    14 min
  7. 10/20/2025

    Is pro-natalism a political stance?

    Note: I’m neither an economist nor a demographer, I’m just some guy with an opinion about everything, and by default, you should assume that everything I’ve written here is wrong. Most of you know me by now, I think. I’m a socialist, with radical leftist pacifist leanings. I have all the leftist badges, although I don’t wear them on my sleeve. I won’t bore you with them here. I also frikkin love babies. Noisy? Yes. Smelly? Yes. Utterly incoherent?? Yes. Cute as hell? So much yes.

Despite this, something has always made me feel uncomfortable about identifying myself with “pro-natalism”. The reason is fairly simple: it tends to be conflated with the pro-life, anti-contraception crowd, not to mention the “your body, my choice” types who like to troll Xitter. Quite frankly, contraception is one of the greatest public health boons in human history, and has saved an incalculable number of lives and livelihoods. Yet somehow, there have always been those among the super-religious of many kinds (and commonly right-wing, though not exclusively) who have a thing for banning contraceptives; today you’ll typically find them wrapping it in the guise of “trying to stop a population crash”.

So I started to ask myself: is it possible to be pro-natalist and pro-contraceptive/pro-choice? Is it possible to have a pro-natalist policy which doesn’t infringe on the rights of women? I think the answer is “Yes”. But then I realised I hadn’t even asked myself the very first and most obvious question:

Question #1: What Is The Problem To Be Solved?

Already, I’m struggling. I’m not entirely sure there is a real problem (yet). For many decades, particularly following the baby boom, there was widespread worry and panic about the possibility of over-population. For most of that time, there was good reason to be worried: it did look like we were in for unchecked exponential growth. In fact, we were in exponential growth for a while following World War 2. See the following charts, from Our World In Data:

Y’know the term “baby boomer”? Yeah, that’s the sharp mountain bit. That was a thing that happened, and it was indeed exponential. In close-up, it looks like the population boom only started to emerge around 1925 at the earliest. There was, without doubt, an absolute explosion in population growth from 1945; it peaked in 1963 and has been falling ever since. However, if we zoom waaaaaay the f**k out: population growth has had a foot on the accelerator since the 1700s, and once we discovered how to treat disease - and, more importantly, avoid much of it - population hopped on a rocket sled and rode that sucker until the fuel ran out.

Right now, and since about 1974, we’re in linear growth, with roughly 1 billion people added to the pile every 12-14 years. Based on projections by the UN, world population is expected to peak in 2068 at 10.43 billion, and then dip slightly. It seems to me that we might now be extrapolating that little dip the same way that people extrapolated the exponential growth phase - by simply assuming the immediate trendline will continue indefinitely - thus beginning a new panic about the coming “end of humanity.” To me, what seems more likely is a kind of population plateau, rather than a crash (let alone one so apocalyptic). As Hans Rosling once illustrated in a lecture on the subject (he did it with physical blocks, so I made a digital version):

Even without a crash, we are presented with several economic problems.
The first is that any country which is losing population is going to have a hell of a time. The East Germans built the Berlin Wall not to keep out spies or prevent sabotage: they simply could not afford the loss of population to the west, and their primary financial backer - the Soviet Union - would not pay the bill indefinitely. It’s often said that economics is not a zero-sum thing, but at a population plateau, it would certainly resemble one in important ways. There would only be a certain number of possible consumers world-wide, and population lost to migration is far less likely to be offset in any meaningful way by birth rates. That means there will most definitely be some losers.

So, is there a problem? Well, maybe. It’s all very speculative at this point. If we remain at a plateau, as I expect we probably will, things will be interesting for a little while, but very manageable. It only becomes a real problem in the event of an actual crash, and in the interests of hedging one’s bets, it’s probably a good idea to at least start working on the problem now, so that we’re not caught out if such a time ever comes. So really, you could say pro-natalism is a bit of a “break glass” stance for me. If the future of the human population is ever at stake due to a crashing birth rate, then consider me a pro-natalist. Until then, babies are cute, and sex is awesome.

As I see it, there are two main directions pro-natalist policy can take:

* Subtractive policy: subtract the right to control how many children you have by making it difficult to have sex for fun or terminate a pregnancy (whether simply accidental, or due to a sex crime)
* Additive policy: add support and assistance to people who want to have kids (or more kids) to ease the burden on the family

Some people even suggest - as in one of the many really dumb-ass viral Xitter posts from a few months ago - that we should literally stop educating women, which is yet another wonderful example of subtractive policy. All these policies cost money, but only one has a hope in hell of actually working and not leading to a mass proletarian revolt. Can you guess which one it is? Hint: it’s the one that isn’t a buzz-kill. No, I’m not getting into an abortion debate in the comments. Don’t do it. Yeah but what if? Ok ok, let’s talk about Japan. I know you really wanted to. Let’s do it.

Japan is a fascinating case study for all sorts of things, but population dynamics has been a big concern for them since the “lost decade” of the 1990s. From October 2021 to October 2022, Japan’s population shrank by more than half a million people. According to the Financial Times, they lose 100 people every single hour. You might be surprised to know, however, that Japan isn’t actually the most relevant example for our purposes here. By far, that honour belongs to South Korea.

South Korea has, for many years, had the worst birth rate in the entire world, and it gets worse year after year. In fact, it’s so bad right now that, within 50 years, their working-age population will have fallen by half. HALF. That’s mind-boggling. In fact, I dare say that’s catastrophic. This is all despite the vast amounts of money they have been throwing at the problem for the last 20 years, and despite politicians having declared it a legitimate national emergency. Honestly, for once, I don’t think that’s hyperbolic in any way, shape or form. At the same time, South Korean women are saying that the government simply is not listening to them. Whaaaaat?! How can this be?
Surely, incentives would be the preferred way of going about finding a solution to the problem? Clearly, spending to incentivise having children has not worked, neither for South Korea nor for Japan, which has been taking many of the same approaches to a problem that has haunted them since the 90s.

Actually, no, that’s not really true. We can’t just say that it “has not worked”, we need to be more specific: it has not worked in the unique cultural, social and political circumstances of these two Asian economies. Much of Asia is a world that places incredible pressure on young people to have an all-consuming career, and where a malignant corporate culture and rampant sexism mean that maternity leave is out of the question if you have any hope of remaining in work. South Korea has the highest rates of women’s education in the world, but much of this push for education is in service of their economy’s insane demands on workers. First you slave away at that degree, with your parents demanding that you be top of the class; then you slave away at a job run by (mostly) men who demand all your time and energy be committed to work, and the moment you’re burned out, you’re discarded like empty packaging.

This is when the great dragon that is Asia’s unique circumstances rears its head and stares us in the face: for the average South Korean woman, the near-impossibility of having both a work-life and a home-life, let alone a balance between the two, is the determining factor which drives them away from the prospect of raising a family. When actually asked, women will repeat this point consistently, though it seems their politicians prioritise the appearance of working the problem more than the substance.

[One South-Korean woman] also shares the same fear of every woman I spoke to - that if she were to take time off to have a child, she might not be able to return to work. "There is an implicit pressure from companies that when we have children, we must leave our jobs," she says. She has watched it happen to her sister and her two favourite news presenters. One 28-year-old woman, who worked in HR, said she'd seen people who were forced to leave their jobs or who were passed over for promotions after taking maternity leave, which had been enough to convince her never to have a baby.

Jean Mackenzie, BBC

Japan and South Korea are both in dire straits here. The only hope they have left is to radically overturn their entire social and corporate culture, one that is perhaps rooted in centuries of national development, and replace it with something that grants more individual liberty from their relentless “culture to succeed”, and at the same time to punish businesses that directly or indirectly violate a person’s right to return to work following maternity leave. Or, to phrase it in…

    16 min
  8. 10/11/2025

    Emotions of Mass Destruction

    “All the cruel and brutal things, even genocide, start with the humiliation of one individual.”
Kofi Annan

Red. That was all I could see. Red. The colour. The smell. The sound. It saturated my very being. Every sinew, every muscle, every nerve. I was filled with a rage, a hatred, so large, so unimaginably monstrous, that I felt myself capable of anything in response. What was the cause of all this boiling emotion? A series of comments, made by an anonymous person, on Reddit, directed at me, another anonymous person. A laughing bombardment of insults and jeering which was invulnerable to my attempted de-escalations, by someone who followed me around, into threads on other communities; who pursued me. Taunted me. Humiliated me. Called me “pathetic.” All this by someone I have never known, never met, and never will. Still, it felt personal. Deeply personal. Although it occurred many years ago, I can remember every detail.

I’ve always practised a kind of “strategic patience”. I assume everyone is just having a bad day, that they’re truly good people underneath, and that I just need to show them that I pose no threat to them, and they’ll calm down. Then we can talk like grown-ups. Very few people have actually broken my will to be patient when I’m actively putting it into practice. Those few that managed to do it live in my head rent-free. The reason? They made me feel utterly humiliated in front of others. If you want to destroy me, find a way to humiliate me in a public way. Works every time. Use at your own risk, can backfire severely.

This feeling isn’t just an online phenomenon; it’s a force that has directly shaped history, and continues to do so. In her book “Making Enemies: Humiliation and International Conflict”, scholar Evelin Lindner gave this force a name: “The Nuclear Bomb of Emotion”. Since its release in 2006, the book has brought a whole new clarity to international relations and violence: underneath the surface, the radioactivity of feelings of humiliation poisons everything, and can last for generations. The humiliating act itself may be entirely forgotten by all but the individual who felt aggrieved. It might even be seen as humiliation only by the aggrieved.

World War 2 and the rise of Hitler were made possible by the very deliberate humiliation of Germany by the Entente powers after the end of the Great War. Hitler himself made use of these feelings, embedded in the German psyche, and his rhetoric of vengeance and reclamation of dignity and might was nearly irresistible. This is why most international correspondents who covered his rallies and speeches could not understand the reaction of the masses. William Shirer described seeing the “distorted faces” and “extended arms” of the audience in attendance at one of Hitler’s speeches, all engaged in a kind of primal scream as they saluted “der Führer”. The general excitement and enthusiasm shown by even those Germans who he believed were the least likely to fall for völkisch propaganda seemed to defy explanation. Shirer, being an American, did not share the feeling of humiliation. He wasn’t primed to receive Hitler’s message the way most Germans were. He was not equipped to understand the phenomenon he was witnessing. The message Hitler was sending - the one of reclaimed dignity - was utterly non-partisan. Left or right, socialist or nationalist, democrat or fascist, all were vulnerable. People were ready to do anything to overcome that sense of humiliation; from there, killing comes easily.
Not the sanitised, abstracted kind of killing practised by the modern-day drone operator, separated as they are by thousands of kilometres, centring fuzzy blobs between virtual cross-hairs. The genocidal kind, the up-close-and-personal kind, the kind that defines the word “bloodlust”.

During the Cuban Missile Crisis, even some of the fiercest critics of Fidel Castro within Cuba declared themselves ready and willing to enlist in the brigades, to defend with their lives the sovereignty and “Dignidad” - dignity, pride - of Cuba from the Yankee aggressor, should they ever attempt to invade the island. America had made Cubans feel humiliated for a very long time, and the Bay of Pigs was yet more salt poured into an open wound. So, when Khrushchev offered to send Castro his own weapons of mass destruction, it should have been obvious - had anyone in the Kremlin, or anywhere else, been paying attention - that the Cubans would seize on the opportunity to square up, chest against chest, with their belligerent superpower neighbour. Small bands of Cuban exiles notwithstanding, the determination to re-assert their sovereign existence was universal among Cubans, powered by those feelings of humiliation, and through this, they were unified. This is what made the crisis so dangerous. Castro and the Cubans were deadly serious. Not even Khrushchev understood that, and by the time he figured this out, he was standing “eyeball to eyeball” with Kennedy.

The genocide in Rwanda was fomented by a sense of humiliation, too. Hutus had long felt like second-class citizens, and saw the Tutsi as elite oppressors. Despite a Hutu “Power” dictatorship having been in control of the country for more than 30 years by the time of the genocide, those feelings were easily hijacked, and people could be turned instantly from friends and neighbours into brutal killers.

Japan in World War 2? They had felt humiliated by the United States over sanctions following Japan’s invasion of China. They believed this action to be an overtly racist double-standard: they were “merely” following the same play-book the Western powers had been following for centuries, and now the Western powers were punishing Japan for it. They felt aggrieved and deliberately excluded from the World Power club, despite having proven themselves just as militarily capable as any European country after they smacked the Russians around at Port Arthur.

School shooters, a phenomenon seen primarily in the United States, are often considered to be victims of bullying or acts of humiliation who have snapped. There are a few notable exceptions, though: Columbine, for example, was a classic case of charismatic psychopathy from the mind of Eric Harris, who swept Dylan Klebold up in a cycle of rage and hatred in which the two amplified each other. These events can vary wildly in scale, yet they are all powered by similar psychological mechanisms.

To understand why it’s so potent, we have to distinguish humiliation from simple embarrassment, because humiliation is so much more than that. Humiliation is external and relational. It can even feel existential. Its core ingredients are powerlessness, public exposure, and a sense of injustice. It’s an attack on the social self which, more even than the physical self, is essential not only to our own survival as part of the herd, but also to our own sense of identity. It introduces us to a whole new dimension of vulnerability, one we could never have imagined, one which we have not had time to make peace with.
“…one of the defining characteristics of humiliation as a process is that the victim is forced into passivity, acted upon, and made helpless.”
Evelin Lindner: The Anatomy of Humiliation

The perpetrator, the victim, the witness: this is called the “humiliation triangle”. Indeed, this is a defining characteristic of humiliation: it requires a minimum of three actors.

The psychological and the physical are deeply intertwined when it comes to perceptions of pain. Many of the same regions of the brain light up when we experience physical or emotional pain. Yet these mechanisms are incredibly complex, and there is no single shared pathway which begins with the experience of emotional pain and ends in extreme violence. Moreover, “perpetrator” is not always a strictly accurate description of one of the actors in the triangle; nor is “victim”.

“…a perpetrator may want to commit humiliation but not succeed, some people may wish to be humiliated rather than wish to avoid it, a ‘do-gooder’ may cause humiliation while trying to do good, and a third party may identify ‘victims’ who do not see themselves as such, or fail to see victims in those cases where they do exist.”

As with all human social interactions, there is a highly complex interplay of individual intent, perceived intent, intended perception, the act itself, the interpretation of free will behind the act, the view of third parties, the reputations of those involved, their social caste, the dynamics of culture, politics, power and sexuality, and much else besides. What matters in the end is that someone has perceived a deliberate, malicious act on the part of another actor, identifying that actor as a perpetrator and themselves as a victim, and believes the act to have reduced their own social standing in the eyes of the witnessing parties. Regardless of the accuracy of this perception, it is the perception itself which creates the emotion and embeds it deeply into the psyche of the aggrieved.

Thankfully, it is not always thus. Lindner found that, on rare occasion, leaders have emerged who deny themselves the kind of vengeance and retribution which might otherwise seem their right. Nelson Mandela did not unleash genocide on the white elite in South Africa. After 27 years of humiliation in prison, he emerged as a wise peacemaker, not as a humiliation entrepreneur like Hitler. Although today it is a bit of a cliché to cite Mandela as inspiration, his example is no doubt particularly powerful, not least for the catastrophe he had the power to unleash had he wanted to. He could have had all of white South Africa eradicated in an instant, along with anyone considered “collaborators”. After the treatment he experienced at their hands, we might have expected as much. Instead, he welcomed…

    18 min
