Thought Experiments with Kush

Technology, curiosity, progress and being human. thekush.substack.com

  1. 27 FEB

    Metal Axolotl

    In today's rapidly evolving technological landscape, a new form of artistic expression is emerging - one that blurs the line between human creativity and artificial intelligence. This intersection, frequently referred to as human-AI co-creation, is redefining our understanding of the creative process and challenging our perceptions of artistic authorship. As AI tools become increasingly sophisticated, artists, designers, and creators like myself are discovering novel ways to collaborate with these technologies, producing works that would have been impossible through human effort alone.

    The Renaissance of "Art for Art's Sake"

    The concept of "art for art's sake" (l'art pour l'art) emerged in the 19th century as a reaction against the notion that art must serve some moral or didactic purpose. Today, this philosophy is experiencing a renaissance in the context of AI-assisted creation. In a world dominated by commercial imperatives and market-driven content, many creators are turning to AI tools not to maximize productivity or profit, but simply to explore new creative horizons.

    This shift is something I experienced firsthand in a recent creative experiment. After watching a presentation organized by OpenAI featuring Manuel Sainsily and Will Selviz about using early versions of Sora for cultural art projects, I was inspired to prioritize spending time on something creative with no commercial intent. Coincidentally, one of the AI art groups I follow on LinkedIn called #artgen prompted followers to create artwork with the theme "Beat goes on." This made me think of a children's song that went viral on TikTok called "Ask an axolotl" by Doctor Waffle. It had become a comfort song for many people in today's turbulent times, and I wanted to re-imagine these same words expressed in a much more aggressive, enraged tone to reflect the current state of the global psyche.

    Having experimented with many AI music generation tools like Udio and Suno, I knew that I could probably come up with something that matched my vision with a bit of tweaking. After countless trials, I ended up with elements I felt I could work with. Using more manual tools familiar to me like Adobe Audition, I put together a song that started growing on me. Then I went on to make an equally nonsensical music video to go with it.

    What was particularly fascinating about this process was how it mirrored my traditional creative workflows while simultaneously transcending their limitations. Inspired by Manuel and Will's explanation of how they used AI to see what happens, and approaching it with the classic Bob Ross mentality of embracing happy accidents, I generated hundreds of visuals to see what I would end up with. Using LLMs to rewrite and revise these long text-to-image and text-to-video prompts made the process a bit less tedious. The fact that I could iterate on these visuals without the need for practical video shooting made a huge difference.

    One thing I noticed during this process was how I seemed, almost out of muscle memory, to mimic some of the approaches to making videos I've taken in the past. Typically, I would have a loosely defined concept and a tentative shot list with storyboard and framing snippets, go out on location or work with a studio setup to gather a large amount of footage and b-roll elements, and then work with them in Adobe Premiere to come up with a plausible sequence. I took a similar approach to put together the resulting music video, wanting to make the visuals get increasingly bizarre as the music intensified.

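    For readers curious what that LLM-in-the-loop prompt revision can look like, here is a minimal sketch. It assumes the official OpenAI Python client (openai>=1.0); the model name, starting prompt, and revision directions are illustrative stand-ins, not the exact ones used for this video.

    ```python
    # Minimal sketch of an LLM-assisted prompt-revision loop for
    # text-to-video generation. Model name and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def revise_prompt(prompt: str, direction: str) -> str:
        """Ask an LLM to rewrite a long text-to-video prompt in a new direction."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You revise text-to-video prompts. Keep them vivid and concrete."},
                {"role": "user",
                 "content": f"Rewrite this prompt to be {direction}:\n\n{prompt}"},
            ],
        )
        return response.choices[0].message.content

    # Iterate toward increasingly bizarre visuals as the music intensifies.
    prompt = "An axolotl drummer in a neon-lit aquarium, slow camera push-in."
    for direction in ["more aggressive", "more surreal", "completely unhinged"]:
        prompt = revise_prompt(prompt, direction)
        print(prompt)  # feed each revision to the video generator of your choice
    ```

    Each revision then gets fed to the image or video generator, and the keepers go into the edit bin, much like gathering b-roll on a shoot.
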
    Historical Parallels: New Technologies and Artistic Expression

    The relationship between technology and art has always been complex and multifaceted. Throughout history, new technological developments have repeatedly transformed artistic practice, often triggering initial resistance before becoming incorporated into the artistic mainstream.

    From Camera Obscura to Photography

    The development of the camera obscura in the 16th and 17th centuries revolutionized how artists approached visual representation. Artists like Vermeer likely used this technology to achieve the photorealistic effects that characterize their work. When photography emerged in the 19th century, it was initially dismissed as a mechanical process rather than a true art form. Painters feared it would render their skills obsolete. Instead, photography liberated painting from the burden of realistic representation, helping to catalyze movements like Impressionism, which focused on capturing light, atmosphere, and subjective experience rather than precise visual details.

    The parallel with AI art is striking: just as photography didn't replace painting but pushed it to explore new territories, AI tools aren't replacing human creativity but extending its boundaries. In my own experience, the process of creating with AI still involves very human decisions about selection, curation, and aesthetic judgment.

    Algorithmic Art and Computer-Generated Creativity

    The roots of AI art stretch back further than many realize. Algorithmic art dates back to at least the 1960s, when artists like Vera Molnár (who began implementing algorithmic programs by hand as early as 1959 and started using computers in 1968) and Manfred Mohr (who transformed from abstract expressionism to computer-generated algorithmic geometry in the late 1960s) began using computers to generate visual works based on mathematical algorithms. The AARON program, developed by Harold Cohen in the early 1970s, was one of the earliest AI systems designed to create original artworks. Cohen began developing this pioneering program after a period as visiting scholar at Stanford's Artificial Intelligence Laboratory in 1971. These early experiments laid the groundwork for today's more sophisticated AI art tools.

    What distinguishes our current moment is not just the increased technical capability of AI systems but their accessibility. Tools like Adobe Firefly, Midjourney, DALL-E, Stable Diffusion, Sora for video, and Suno and Udio for music have democratized access to AI-assisted creation, allowing artists without technical backgrounds to experiment with these new forms of co-creation.

    The Evolution of Human-AI Co-Creation

    Human-AI co-creation represents a significant evolution in the creative process, one that challenges traditional notions of authorship and originality.

    From Tools to Collaborators

    Historically, artists have always used tools - from brushes and chisels to cameras and computers. What makes AI different is its capacity for autonomous generation based on learned patterns. Unlike traditional tools, which passively respond to human input, generative AI systems actively contribute to the creative process, suggesting possibilities that might not have occurred to the human artist. Manuel Sainsily, a futurist, artist, TED speaker, and instructor at McGill University who pioneers advancements in Mixed Realities and AI, describes this as a shift from "tools to collaborators."

    In their work together through their community Protopica, Sainsily and Will Selviz explore how emerging technologies can drive positive cultural change, emphasizing that AI doesn't replace human creativity but amplifies it. Their collaborative project "Protopica" uses AI tools like Sora to demonstrate how artificial intelligence can be used for cultural preservation and storytelling.

    The Creative Process Reimagined

    The process of creating with AI involves what researchers term "exploratory creativity" - a back-and-forth dialogue between human and machine. The artist inputs prompts or parameters, the AI generates outputs, the artist selects promising directions, refines the prompts, and the cycle continues. This iteration process resembles traditional artistic methods but with a crucial difference: the machine can generate variations and possibilities at a scale and speed impossible for humans.

    In my music video creation process, I generated hundreds of visuals and used LLMs to rewrite and revise the long text-to-image and text-to-video prompts to make the process less tedious. This approach paralleled my previous experience with traditional video production, where I would gather a large amount of footage and b-roll elements before editing them into a coherent sequence.

    This resemblance to traditional creative processes is important, as it suggests that AI isn't replacing creativity but transforming how it's expressed. The fundamental human impulses toward creative expression remain, but the means of realizing those impulses are evolving.

    Expert Perspectives on Human-AI Co-Creation

    The rise of AI art has sparked intense debate among artists, critics, and researchers. Opinions range from enthusiastic embrace to strong skepticism, with many nuanced positions in between.

    The Optimistic View: AI as Creative Amplifier

    Proponents of AI art, like Manuel Sainsily and Will Selviz, see these technologies as tools for expanding human creative capabilities. They emphasize that AI allows artists to transcend technical limitations, visualize ideas more quickly, and explore creative directions that might otherwise remain unexplored.

    A study published in Scientific Reports suggests that AI tools can enhance perceptions of human creativity by providing contrast. When viewers are aware that a work is created through human-AI collaboration, they often perceive the human contribution as more significant and valuable, suggesting that AI might actually heighten our appreciation for human creative input.

    The "Sora Selects" program, featuring ten artists who created short films using OpenAI's text-to-video generator, demonstrates how artists can use AI tools to realize ambitious visions that would be impractical or impossible with traditional production methods. These artists approach AI not as a replacement for their creativity but as a medium through which to express it.

    The Cautionary View: Concerns and Criticisms

    Critics raise important concerns about AI art, particularly…

    28 min
  2. 27 JAN

    Building Healthy Human and AI Relationships

    As humanity develops increasingly sophisticated artificial intelligence systems, understanding the nature and patterns of psychological abuse becomes crucial for ensuring healthy relationships in both human and technological contexts. This analysis examines psychological abuse patterns across different contexts to inform how we might thoughtfully approach our developing relationship with artificial intelligence, while providing frameworks for maintaining human agency and psychological wellbeing.

    The Nature of Psychological Control

    To understand how psychological abuse operates, we must first examine its fundamental mechanisms. According to a comprehensive meta-analysis by Thompson and Harper (2023), psychological abuse establishes itself through such subtle progressions that victims often cannot identify when relationship dynamics shift from healthy to harmful. This gradual nature makes psychological abuse particularly challenging to recognize and resist.

    The progression typically follows what Dr. Sarah Martinez (2024) at Stanford's Center for Relationship Dynamics terms the "erosion cascade." This process begins with seemingly benign actions that slowly reshape an individual's perception of reality and sense of self. For example, a controlling partner might initially express concern about certain friendships, gradually escalating to isolating the individual from their support network. In workplace contexts, this might manifest as increasing performance monitoring that slowly normalizes invasive oversight.

    Recent research from the International Journal of Psychological Studies identifies three primary mechanisms through which psychological abuse operates:

    * Reality Manipulation: The gradual reshaping of what an individual perceives as normal or acceptable
    * Emotional Control: The exploitation of emotional responses to create dependency
    * Behavioral Conditioning: The systematic reinforcement of desired behaviors while punishing independence

    These mechanisms work in concert to create what psychologists term "coercive control" - a pattern of behavior that undermines an individual's ability to act independently while maintaining the illusion of choice.

    Understanding Reality Distortion

    The cornerstone of psychological abuse lies in its ability to distort reality perception, a phenomenon termed "gaslighting" after Patrick Hamilton's 1938 play "Gas Light." Dr. James Liu's groundbreaking 2024 study in the Journal of Interpersonal Violence reveals how this reality manipulation creates what he terms "cognitive dependency" - a state where victims increasingly rely on their manipulator for basic reality testing.

    Liu's research team revealed a progression in how cognitive dependency develops over time. The process begins with initial destabilization, where small inconsistencies are gradually introduced into the victim's environment, creating subtle doubt about their perception of reality. As uncertainty grows, the manipulator positions themselves as a reliable interpreter of reality, establishing their authority as a trusted guide through confusion. This authority allows for the gradual construction of an alternative narrative about reality, one that serves the manipulator's interests while appearing to explain the victim's experiences. Finally, the process culminates in dependency consolidation, where the victim comes to rely on the manipulator for basic reality interpretation, having lost confidence in their own judgment.

    This process bears striking similarities to how information systems can shape user perceptions through selective information presentation and algorithmic curation. Understanding these parallels becomes crucial as AI systems increasingly mediate our interaction with reality.

    The Evolutionary Roots of Manipulation

    Recent research from the Harvard Evolutionary Psychology Lab has unveiled fascinating insights into why humans remain susceptible to psychological manipulation even when we intellectually recognize it. Dr. Sarah Peterson's 2024 study, "Evolutionary Origins of Social Influence," demonstrates how many manipulation tactics likely emerged as adaptive strategies in our ancestral environment. Peterson's work shows that the ability to influence group behavior through psychological means provided significant evolutionary advantages, particularly in resource-scarce environments. This explains why humans developed both the capacity to manipulate and susceptibility to manipulation - they were two sides of the same evolutionary coin.

    This evolutionary perspective provides crucial insights for our relationship with artificial intelligence. The same psychological mechanisms that made us successful social animals also make us vulnerable to sophisticated influence techniques. Dr. James Liu's 2024 paper in Nature Human Behaviour demonstrates how AI systems can unintentionally trigger these evolved social response patterns, creating what he terms "artificial social bonding."

    Learning from Hypothetical First Contact

    The classic Twilight Zone episode "To Serve Man" presents a deceptively simple cautionary tale about advanced intelligences bearing gifts. While the episode's reveal - that the titular book is actually a cookbook - might seem heavy-handed, it raises profound questions about verifying benevolent intentions from more advanced intelligences.

    Consider a more nuanced thought experiment: Tomorrow, we establish contact with an alien civilization centuries ahead of us technologically. They offer solutions to our greatest challenges - climate change, disease, poverty. Their solutions work. Their explanations align with our understanding of science. They consistently demonstrate concern for human welfare. How do we verify their true intentions?

    This scenario parallels our developing relationship with AI systems. Research identifies three critical principles for engaging with superior intelligences:

    * Capability Independence: Maintaining our ability to understand and potentially reproduce beneficial technologies
    * Verification Diversity: Establishing multiple independent systems for validating claims and outcomes
    * Exit Preservation: Ensuring we can step back or disengage without catastrophic consequences

    These principles provide a framework for approaching both hypothetical alien contact and very real AI development.

    Institutional Patterns and Systemic Control

    The mechanisms of psychological abuse manifest not only in interpersonal relationships but also in larger institutional contexts. Studies of workplace dynamics reveal how organizational systems can inadvertently or intentionally replicate abuse patterns through seemingly neutral management practices.

    Contemporary Management Practices and Control

    Recent research from the Workplace Psychology Institute reveals deeply concerning parallels between contemporary management practices and classic patterns of psychological manipulation.

    At the heart of many modern workplace systems lies a framework of performance metrics that creates perpetual uncertainty. These systems, while ostensibly designed for objective evaluation, often leave employees in a constant state of anxiety about their standing, never quite sure if they're meeting expectations that seem to shift with each evaluation cycle.

    This uncertainty is compounded by increasingly sophisticated surveillance systems that have normalized constant monitoring of employee behavior. What began as simple productivity tracking has evolved into comprehensive systems that analyze everything from keyboard activity to communication patterns, creating an environment of perpetual visibility that mirrors the controlling behavior seen in abusive personal relationships.

    The emotional demands of modern workplace culture add another layer of psychological pressure. Many organizations now require what amounts to emotional performance art, demanding that employees demonstrate enthusiasm and personal investment in company values that may not align with their authentic selves. This requirement for emotional labor, often framed as "cultural fit" or "team spirit," can create profound psychological strain as individuals struggle to maintain artificial emotional states throughout their workday.

    The feedback mechanisms in many organizations further reinforce these power imbalances. Performance reviews and development discussions, while presented as opportunities for growth, often serve as tools for maintaining control through uncertainty and dependency. Employees find themselves constantly adjusting their behavior based on subtle cues and implicit expectations, much like individuals in manipulative personal relationships learn to modify their behavior to avoid negative consequences.

    Preventive Design in AI Systems

    Thoughtfully designed AI systems can actively resist these problematic patterns while still maintaining their utility. Transparency serves as the cornerstone of ethical AI design, with systems explicitly communicating their decision-making processes and the factors influencing their recommendations. This openness allows users to understand not just what the AI suggests, but why it makes those suggestions, enabling informed decisions about when and how to incorporate AI guidance into their decision-making process.

    The development of human capabilities must remain central to AI system design. Rather than simply automating tasks for efficiency, systems should be designed to enhance human understanding and skill development. This approach manifests in educational AI that guides users through problem-solving processes, helping them build independent critical thinking skills rather than merely providing answers. In professional contexts, it means creating systems that explain their analysis and recommendations in ways that enhance human expertise rather than replace it.

    Boundary management emerges as another crucial aspect of ethical AI design. Systems must be developed with clear mechanisms for users to control their level…

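    To make the transparency and boundary-management ideas above concrete, here is a minimal sketch of what a "transparent by design" recommendation record could look like: the system returns not just a suggestion but the factors behind it, the alternatives it considered, and a user-controlled level of AI involvement. All type and field names here are hypothetical.

    ```python
    # Sketch of a transparent recommendation record. All names are hypothetical.
    from dataclasses import dataclass, field
    from enum import Enum

    class AssistanceLevel(Enum):
        SUGGEST_ONLY = "suggest_only"  # user decides; AI explains options
        CO_DECIDE = "co_decide"        # AI ranks options, user confirms
        AUTONOMOUS = "autonomous"      # AI acts, user audits afterwards

    @dataclass
    class Recommendation:
        suggestion: str
        confidence: float                                       # 0.0-1.0, surfaced to the user
        factors: list[str] = field(default_factory=list)        # why: the inputs that drove it
        alternatives: list[str] = field(default_factory=list)   # what else was considered
        assistance_level: AssistanceLevel = AssistanceLevel.SUGGEST_ONLY

    rec = Recommendation(
        suggestion="Defer the release by one week",
        confidence=0.72,
        factors=["3 open P1 bugs", "test coverage dropped 8% this sprint"],
        alternatives=["Release with the feature flag off", "Release as scheduled"],
    )
    print(rec.factors)  # the user can always ask "why?"
    ```

    The design choice worth noting is that the explanation and the exit (the assistance level) are part of the payload itself, not an afterthought bolted onto the UI.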

    32 min
  3. 13 NOV 2024

    The Synthetic Wave

    In the tides of human progress, we find ourselves riding a new wave—one that promises to reshape the very fabric of our creative processes. This "synthetic wave," propelled by artificial intelligence, is sweeping across industries, transforming how we conceive, produce, and consume creative content. From art galleries showcasing AI-generated masterpieces to hit songs co-written by algorithms, the impact of AI on creativity is both exhilarating and, for some, unsettling.

    As we stand at this crossroads of human ingenuity and machine capability, it's crucial to find our bearings. How do we navigate this new terrain without losing the essence of what makes human creativity special? To answer this, we might find wisdom in an unlikely place: the history of music technology.

    The evolution of music technology over the past century offers a compelling parallel to our current AI revolution. By examining how musicians, producers, and listeners adapted to and ultimately embraced new technologies, we can glean valuable insights into how we might approach the integration of AI into our creative processes. This journey through the technological transformation of music will serve as our guide, illuminating potential pitfalls and opportunities as we venture into the age of AI-augmented creativity. From the electrification of instruments to the digital revolution, each phase of music's technological evolution offers lessons that are surprisingly relevant to our current AI-driven creative landscape.

    The Electrification Era: Birth of New Creative Genres

    The story of our synthetic wave begins in the 1960s and '70s, an era that witnessed a seismic shift in the world of music. The catalyst? The electrification of musical instruments, particularly the guitar. This wasn't merely a technological upgrade; it was a fundamental reimagining of what music could be.

    Consider the electric guitar—a deceptively simple innovation that changed everything. By amplifying and manipulating the vibrations of metal strings, artists could now fill stadiums with sound, create otherworldly tones, and express themselves in ways previously unimaginable. This technological leap didn't replace human creativity; it amplified it, quite literally.

    The electrification of music gave birth to entirely new genres. Rock and roll, which had been simmering since the 1950s, exploded into the mainstream. Psychedelic rock pushed the boundaries of what was sonically possible, with artists like Jimi Hendrix using feedback and distortion—once considered undesirable artifacts of amplification—as expressive tools in their own right.

    Parallels with Early AI Tools in Creative Fields

    The parallels to our current AI revolution are striking. Just as the electric guitar didn't compose songs on its own but gave musicians new tools for expression, today's AI tools are amplifying human creativity rather than replacing it entirely.

    Take, for instance, the realm of visual art. AI tools like DALL-E or Midjourney don't create art independently but provide artists with new ways to visualize concepts, experiment with styles, and push the boundaries of their imagination. Like the electric guitar, these tools expand the palette available to creators, enabling them to express ideas that might have been difficult or impossible to realize through traditional means.

    In the world of writing, GPT-3 and similar language models are playing a role akin to the amplifier in music.

    They don't replace the writer's creativity but can amplify it by suggesting phrasings, generating ideas, or even helping to overcome writer's block. Just as amplification allowed guitarists to explore new sonic territories, these AI writing assistants are enabling authors to explore new literary landscapes.

    Amplification, Not Replacement

    The key lesson from this era is that new technologies, when first introduced, tend to amplify human capabilities rather than replace them entirely. The electric guitar didn't make acoustic guitars obsolete…

    38 min
  4. 12 OCT 2024

    Identifying Artificial General Intelligence

    As we approach a new era of artificial intelligence, the holy grail of AI research - Artificial General Intelligence (AGI) - looms tantalizingly close. Yet, as we inch nearer to this monumental achievement, we find ourselves grappling with a paradoxical challenge: How do we measure something we can't fully define? This conundrum lies at the heart of our quest to create machines that can match, or even surpass, human-level cognition across a broad spectrum of tasks.

    To illustrate the complexity of this challenge, let's consider two thought experiments that, while seemingly far-fetched, mirror the very real challenges we face in defining and measuring AGI.

    Imagine a world buzzing with religious fervor and skepticism alike, where news breaks that Jesus Christ has returned. How would we know it's really him? What criteria could we possibly use to verify the identity of a figure shrouded in two millennia of theology, myth, and cultural interpretation?

    Now, picture a fleet of extraterrestrial vessels descending upon Earth. These cosmic visitors have one mission: to determine whether humans are truly intelligent. What tests would they devise? What benchmarks would they use? And most importantly, what conclusions would they draw?

    These scenarios, while vastly different, share a common thread of epistemological uncertainty. In each case, we're confronted with the task of evaluating an intelligence that may operate on fundamentally different principles than our own. We're challenged to create objective measures for subjective experiences, to quantify the ineffable essence of cognition itself.

    This disconnect isn't just a philosophical quandary - it's a practical roadblock on our path to creating AGI. Without a clear, agreed-upon definition of what we're aiming for, how can we possibly know when we've achieved it? This lack of consensus is more than an academic dispute; it's a major obstacle to meaningful global collaboration in the pursuit of AGI.

    Current Approaches and Their Limitations

    In our quest to benchmark AGI, we've devised a plethora of tests and criteria. Yet, like mirages in a desert, these measures often promise more than they deliver. Let's examine some of the most prominent approaches and their inherent flaws.

    The Turing Test, proposed by Alan Turing in 1950, posits that if a machine can engage in conversation indistinguishable from a human, it can be considered intelligent. While groundbreaking for its time, the Turing Test is limited by its linguistic bias, vulnerability to deception, and cultural limitations. It primarily assesses language skills, potentially overlooking other crucial aspects of intelligence. Moreover, clever programming can create the illusion of understanding without true comprehension, and the test may favor AIs trained on specific cultural contexts, missing the universality required for AGI.

    Steve Wozniak proposed the Coffee Test, which requires an AI to enter an average home and brew a cup of coffee. While it addresses physical interaction and problem-solving, it falls short in several ways. Its narrow focus emphasizes practical tasks at the expense of abstract reasoning and emotional intelligence. The concept of "making coffee" varies widely across cultures, potentially biasing the test. Furthermore, it conflates AGI with robotics, which are distinct (though related) fields.

    Ben Goertzel suggested the Robot College Student Test, where an AI capable of enrolling in a university, attending classes, and obtaining a degree would demonstrate AGI.

    However, this approach has its own set of issues. Academic success often relies on narrow, specialized knowledge rather than general intelligence. An AI might excel at academic tasks without truly understanding social interactions crucial to the college experience. As education systems change, this benchmark might become less relevant or require constant updating.

    The Employment Test, proposed by Nils Nilsson, suggests that an AI capable of performing economically important jobs as well as humans could be considered an AGI. This test, while practical, has several drawbacks. Different jobs require vastly different skill sets, making it difficult to use as a universal measure. Some jobs are more easily automated than others, potentially leading to a skewed assessment of intelligence. Moreover, job markets and required skills vary greatly across different economies and cultures.

    Another approach is the Cognitive Decathlon, which suggests putting an AI through a series of diverse cognitive tasks, similar to an athletic decathlon. While more comprehensive than single-task tests, it still has limitations. The choice of tasks may inadvertently favor certain types of intelligence over others. A pre-defined set of tasks doesn't test the AI's ability to adapt to novel situations. Additionally, assigning relative weights to different cognitive tasks remains a subjective process.

    The Human Intelligence Hurdle: A Mirror to Our Own Minds

    At the core of our struggle to define AGI lies a more fundamental challenge: our incomplete understanding of human intelligence itself. The quest for AGI is, in many ways, a mirror reflecting our own cognitive mysteries back at us. This lack of consensus around human intelligence creates a significant hurdle for the AGI industry.

    Human intelligence is not a monolithic entity but a complex interplay of various cognitive abilities. These include fluid intelligence (our capacity to think logically and solve problems in novel situations), crystallized intelligence (the ability to use learned knowledge and experiences), emotional intelligence, creative intelligence, social intelligence, bodily-kinesthetic intelligence, and metacognition (the awareness and understanding of one's own thought processes). Each of these facets contributes to what we collectively call "intelligence," yet they can vary widely between individuals. This variability makes it challenging to establish a universal benchmark for human intelligence, let alone artificial general intelligence.

    Our understanding of the brain, while advancing rapidly, is still far from complete. Key questions remain unanswered about consciousness, memory formation, decision-making processes, and creativity. These gaps in our knowledge of human cognition directly impact our ability to replicate or benchmark similar processes in artificial systems.

    Moreover, intelligence doesn't develop in a vacuum. Human cognitive abilities are shaped by a myriad of cultural and environmental factors. Educational systems, cultural values, socioeconomic factors, and language all play crucial roles in shaping our cognitive processes and problem-solving approaches. These factors add layers of complexity to our understanding of intelligence, making it challenging to create a culturally unbiased benchmark for AGI.

    The Flynn Effect - the observed rise in IQ scores over time - highlights another challenge in benchmarking intelligence.

    If human cognitive abilities can change significantly over generations, how do we establish a stable benchmark for AGI? Furthermore, the brain's neuroplasticity - its ability to form and reorganize synaptic connections - adds another layer of dynamism to human intelligence.

    Towards a New Paradigm: Rethinking AGI Benchmarks

    Given the limitations of current approaches and our incomplete understanding of human intelligence, it's clear that we need a paradigm shift in how we conceptualize and measure AGI.

    Instead of seeking a single, definitive test for AGI, we should develop a suite of assessments that capture the multi-faceted nature of intelligence. This suite should be dynamic, evolving as our understanding of cognition deepens. Our focus should shift from testing static knowledge or pre-programmed responses to emphasizing the ability to learn, adapt, and generate novel solutions to unfamiliar problems. As we've seen with recent developments in AI, the ability to make ethical decisions is crucial. AGI benchmarks should include scenarios that test moral reasoning and alignment with human values.

    To avoid cultural bias, AGI benchmarks should be developed and validated across diverse cultural contexts, ensuring that the intelligence being measured is truly "general." This will require interdisciplinary collaboration, drawing input from diverse fields including computer science, neuroscience, psychology, philosophy, and anthropology. The process of developing AGI benchmarks should be transparent and open to scrutiny from the global scientific community. This approach can help build consensus and ensure rigorous standards.

    Our benchmarks should assess not just raw problem-solving ability, but also the capacity to understand and operate within complex contexts - social, emotional, and physical. Given the rapid pace of AI development, AGI benchmarks should be designed for continuous evaluation rather than as one-time pass/fail tests. This approach allows for a more nuanced understanding of an AI system's capabilities and development over time.

    The Road Ahead: Collaborative Pathways to AGI Benchmarking

    As we navigate the complex landscape of AGI development and evaluation, it's clear that no single entity or nation can tackle this challenge alone. The path forward lies in global collaboration, leveraging diverse perspectives and expertise to create a robust, flexible, and universally applicable framework for benchmarking AGI.

    The first step towards effective AGI benchmarking is the formation of an international consortium dedicated to this goal. This body should include AI researchers, ethicists, psychologists, neuroscientists, philosophers, and policymakers from around the world. It should foster collaboration across different fields to ensure a holistic understanding of intelligence, actively seek input from various cultural perspectives to avoid Western-centric biases in AGI evaluation, and incorporate ethicists and legal experts to address the moral implications of AGI development…

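    As a toy rendering of the suite-of-assessments idea sketched above, here is what a weighted, multi-dimensional benchmark might look like when it reports a per-dimension profile rather than a single pass/fail verdict. Task names, dimensions, weights, and scores are all invented for illustration.

    ```python
    # Sketch of a multi-faceted benchmark suite: weighted tasks across
    # cognitive dimensions, scored as a profile rather than one scalar.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class BenchmarkTask:
        name: str
        dimension: str            # e.g. "fluid", "social", "ethical"
        weight: float             # subjective, and itself open to revision
        run: Callable[[], float]  # returns a score in [0, 1]

    def evaluate(suite: list[BenchmarkTask]) -> dict[str, float]:
        """Return a per-dimension profile instead of a single verdict."""
        sums: dict[str, float] = {}
        totals: dict[str, float] = {}
        for task in suite:
            score = task.run()
            sums[task.dimension] = sums.get(task.dimension, 0.0) + task.weight * score
            totals[task.dimension] = totals.get(task.dimension, 0.0) + task.weight
        return {dim: sums[dim] / totals[dim] for dim in sums}

    suite = [
        BenchmarkTask("novel-puzzle-solving", "fluid", 1.0, lambda: 0.64),
        BenchmarkTask("negotiation-roleplay", "social", 1.0, lambda: 0.41),
        BenchmarkTask("moral-dilemma-cases", "ethical", 1.5, lambda: 0.58),
    ]
    print(evaluate(suite))  # re-run on a schedule; track the profile over time
    ```

    The point of returning a profile is that "generality" becomes a shape to examine continuously over time, not a threshold to cross once.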

    30 min
  5. 25 AUG 2024

    Complexity Denial

    In human history, our species has thrived by taking quick, decisive action based on limited information. This evolutionary advantage, however, has become a double-edged sword in our modern, interconnected world. The complexity denial problem, as we shall explore, is deeply rooted in our cognitive architecture, shaped by millennia of survival pressures.

    Imagine our early ancestors on the African savannah. When faced with an unusual sound, those who quickly categorized it as "potential predator" and acted accordingly were more likely to survive and pass on their genes. This binary thinking – threat or no threat – served us well in a world where split-second decisions could mean the difference between life and death.

    Fast forward to the 21st century, and our brains still carry this legacy. Neuroscientific research has shown that our prefrontal cortex, responsible for complex decision-making, can be easily overwhelmed by too much information. A study by Marois and Ivanoff (2005) demonstrated that the brain has severe limitations in processing multiple streams of information simultaneously, leading to what they termed "attentional bottlenecks."

    This cognitive constraint manifests in our daily lives through various psychological phenomena. Confirmation bias, for instance, leads us to seek out information that confirms our pre-existing beliefs while ignoring contradictory evidence. The availability heuristic causes us to overestimate the likelihood of events that are easily recalled, often leading to skewed risk assessments.

    These cognitive shortcuts, while efficient, often lead to oversimplification of complex issues. As Nobel laureate Daniel Kahneman explains in his seminal work "Thinking, Fast and Slow," our brains operate on two systems: System 1, which is fast, intuitive, and emotional; and System 2, which is slower, more deliberative, and logical. The problem arises when we rely too heavily on System 1 thinking for complex issues that require the nuanced approach of System 2.

    The Butterfly Effect of Beliefs

    Like a butterfly flapping its wings and causing a hurricane on the other side of the world, our individual tendencies towards simplification create ripple effects throughout society. These effects manifest in our education systems, media landscapes, and political discourse, creating a self-reinforcing cycle of oversimplification.

    Consider the standard educational model prevalent in many countries. Students are often taught to memorize facts and formulas, with success measured by their ability to provide clear, unambiguous answers on standardized tests. This approach, while efficient for assessment, fails to nurture the critical thinking skills necessary for grappling with complex, multifaceted issues. A study by Zhao (2012) found that educational systems focusing on standardized testing tend to produce students who excel at answering well-defined questions but struggle with open-ended problems. This creates a workforce ill-equipped to handle the complexities of modern challenges, from climate change to global economic instability.

    The media, driven by the need for engaging content and constrained by time and attention limits, often presents complex issues in binary terms. A content analysis by Patterson (2016) of major news outlets found that nuanced policy discussions were frequently reduced to "for or against" narratives, particularly in political coverage.

    This simplification, while making issues more digestible, often obscures the underlying complexities and potential compromise solutions.

    Political discourse, influenced by both education and media, further entrenches this simplification. Politicians, seeking to communicate effectively with a broad audience, often resort to slogans and oversimplified policy proposals. This creates a feedback loop where the public comes to expect and demand simple solutions to complex problems, further incentivizing politicians to provide them. The consequences of this societal-level…

    33 min
  6. 11 AUG 2024

    The Myth of the Homogeneous Universe

    As we sprint towards the age of Artificial General Intelligence (AGI), we find ourselves confronting a universe far more complex and heterogeneous than our human minds have traditionally conceived. This article explores the myth of cosmic homogeneity, from the microscopic to the cosmic scale, and how our assumptions of uniformity have often led us astray. As we unravel these misconceptions, we'll see how AGI could be the key to transcending our cognitive biases and unveiling the true diversity of our reality.

    Picture yourself in a hall of mirrors, each reflection seemingly identical to the last. This carnival trick is not just an amusement park attraction; it's a metaphor for how we often perceive the universe. We humans have an uncanny knack for assuming that what we see around us is representative of everything else. This cognitive quirk, while useful for quick decision-making in our ancestral savannah, may be leading us astray in our quest to understand the cosmos.

    From the microscopic world of cells to the vast expanses of intergalactic space, we've often fallen into the trap of cosmic narcissism – the belief that the universe must resemble our immediate surroundings. This article ranges from the infinitesimal to the infinite, challenging the notion of a homogeneous universe and exploring the implications of our biased perceptions on scientific thought.

    When Small Isn't All - Debunking Cellular Conformity

    Let's start our journey by shrinking down to the cellular level. For years, biology textbooks portrayed cells as uniform building blocks, as interchangeable as Lego pieces. This oversimplification, while useful for teaching basic concepts, has led to some spectacular misunderstandings in medicine and biotechnology.

    Remember those neat diagrams of cells in your high school biology textbook? They're about as representative of real cellular diversity as a stick figure is of human anatomy. Recent advances in single-cell sequencing have revealed a staggering heterogeneity even among cells of the same type in the same tissue. A 2017 study published in Nature (Regev et al.) found that individual immune cells, once thought to be nearly identical, display a vast array of gene expression patterns. The assumption of cellular homogeneity has led to countless dead ends in drug development, as treatments that work on the "average" cell often fail when confronted with the vast ecosystem of cellular diversity within our bodies.

    If cells are diverse, then surely the brain, that most complex of organs, must be even more so. Yet for decades, neuroscientists clung to the belief that the adult brain was essentially static, its neurons as fixed as a fossil. This assumption of neural homogeneity over time led to a pessimistic view of recovery from brain injury and learning in adulthood.

    Enter neuroplasticity, the brain's ability to rewire itself in response to experience. This concept, now widely accepted, was once considered heretical. As neuroscientist Norman Doidge puts it in his book "The Brain That Changes Itself," "The idea that the brain can change its own structure and function through thought and activity is, I believe, the most important alteration in our view of the brain since we first sketched out its basic anatomy and the workings of its basic component, the neuron."

    The Societal Echo Chamber - When Average Isn't Normal

    As we zoom out from cells and brains to societies and cultures, our tendency to assume homogeneity takes on a more insidious character.

    Here, the assumption of uniformity doesn't just hamper scientific progress – it can reinforce harmful stereotypes and lead to disastrous policy decisions. We often hear about the "average American" or the "typical consumer," as if such entities actually existed. This statistical abstraction, while useful for certain kinds of analysis, can lead us dangerously astray when applied too broadly. Consider the famous study by U.S. Air Force researchers in the 1950s, aiming to design…

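    A quick simulation makes the problem with "the average person" vivid: with many independent traits, almost nobody is close to average on all of them at once. The parameters below are arbitrary; only the multiplication of probabilities matters.

    ```python
    # Illustrative simulation: how many people fall within 0.3 standard
    # deviations of the mean on ALL of 10 independent traits?
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_traits = 100_000, 10
    traits = rng.standard_normal((n_people, n_traits))

    near_average = np.all(np.abs(traits) < 0.3, axis=1)
    print(f"{near_average.sum()} of {n_people:,} people are 'average' on all traits")
    # ~23.6% qualify per trait, but 0.236**10 is about 5e-7 -- effectively nobody.
    ```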

    33 min
  7. 28 JUL 2024

    Embracing Change

    In the quiet suburbs of human progress, a new neighbor is moving in. Artificial General Intelligence (AGI) is no longer a distant possibility but a looming reality, and it's time we started preparing for its arrival.

    Imagine this scenario: humanity receives a message from a super-intelligent alien civilization. The message is clear: "We are on our way to meet you. Expect our arrival in 10 years. Great new possibilities await." No other information is provided. No clues about their intentions, their appearance, or the nature of these "great new possibilities." How would humanity react?

    Initially, there would likely be a mix of excitement and terror. The confirmation of extraterrestrial intelligence would be the greatest discovery in human history. Scientists would be ecstatic, religious institutions would face profound questions, and the general public would be in a state of awe and apprehension. As the reality of the situation sinks in, humanity would likely go through several phases:

    1. Frantic Preparation: Governments and international bodies would scramble to prepare for first contact. Resources would be poured into space technology, communication systems, and defensive measures – just in case.
    2. Speculation Frenzy: Scientists, philosophers, and the public would engage in endless speculation about the nature of the aliens and their intentions. Every scrap of information in the message would be analyzed ad nauseam.
    3. Societal Upheaval: The impending arrival would likely cause significant social and economic disruption. Some might quit their jobs to prepare for the "new possibilities," while others might hoard resources fearing the worst.
    4. Ethical and Existential Debates: Profound questions would arise about humanity's place in the cosmos, the nature of intelligence, and how to interact with a potentially vastly superior civilization.
    5. Unity and Division: The shared experience might unite humanity against a common "other." Conversely, disagreements about how to prepare or respond might create new divisions.
    6. Anticipation and Anxiety: As the arrival date approaches, a palpable sense of anticipation would grip the world, mixed with anxiety about the unknown changes to come.

    This thought experiment closely parallels our situation with the impending arrival of AGI. Like the hypothetical alien message, we know AGI is coming, and it promises "great new possibilities." We have a rough timeframe but little concrete information about what to expect.

    The key difference is that we are not passive recipients in the AGI scenario – we are the creators. This gives us both more control and more responsibility. We can shape the development of AGI, instill our values, and create safeguards. But it also means the burden of getting it right falls squarely on our shoulders.

    Our reaction to the prospect of AGI mirrors many aspects of the alien scenario:

    1. We're pouring resources into AI research and development (preparation).
    2. There's constant speculation about the capabilities and implications of AGI.
    3. We're seeing early signs of societal and economic shifts in anticipation of AI advancements.
    4. Ethicists and philosophers are grappling with profound questions about the nature of intelligence and consciousness.
    5. The AI revolution is both uniting people in common cause and creating new divisions.
    6. There's a growing sense of anticipation and anxiety about the transformative changes AGI might bring.

    This parallel underscores the monumental nature of the AGI transition we're facing.

    It's not just a new technology; it's potentially a new era for humanity, as significant as first contact with an alien civilization would be. The comparison also highlights the importance of proactive engagement with AGI development. Unlike the passive waiting in the alien scenario, we have the opportunity – and the responsibility – to actively shape the AGI future we want to see. As we stand on this threshold, we would do well to approach AGI with…

    37 min
  8. 7 JUL 2024

    War - What is it good for?

    In the vast expanse of the cosmos, our planet Earth hangs suspended - a blue marble etched with the scars of conflict, yet brimming with the potential for peace. As we stand on the brink of a new era, with artificial general intelligence (AGI) on the horizon, we face a pivotal moment in human history. The choices we make now will shape not just the future of warfare, but the very trajectory of our civilization.

    Imagine, for a moment, an alien intelligence observing our world. What would they make of our capacity for both destruction and creation? Would they see our wars as tragic follies, or recognize the complex tapestry of factors that have made conflict such an enduring feature of the human experience? As we contemplate the development of AGI, these questions take on new urgency, for we are creating potential arbiters of our conflicts - entities that might view our squabbles with the detached curiosity of cosmic observers.

    The Paradox of Progress and the Persistence of War

    One of the great ironies of human history is that even as we have made remarkable strides in science, technology, and moral philosophy, warfare has remained a stubborn constant. From the conflicts of ancient civilizations to the complex geopolitical tensions of today, the specter of war has loomed over every generation, leaving in its wake a trail of devastation that spans cultures and continents.

    The 20th century alone saw an unprecedented scale of conflict. According to a comprehensive study by Sarkees and Wayman (2010), wars in this period claimed the lives of over 108 million people directly, with many more affected by displacement, economic disruption, and the long-term consequences of violence. Each of these lives represented a universe of experiences, dreams, and connections, cut short by the machinations of human conflict.

    Yet despite this immense toll, we have struggled to break free from the grip of warfare. Nations continue to invest heavily in military capabilities, and conflicts simmer in many parts of the world, fueled by a complex interplay of historical grievances, resource competition, and ideological differences.

    This paradox - of unprecedented progress existing alongside persistent warfare - raises profound questions about the nature of human society and the challenges we face in creating a more peaceful world. As we look to the future and the potential of AGI, we must grapple with these contradictions and seek new paths forward.

    The Evolutionary Roots of Aggression and Cooperation

    To understand the persistence of warfare, we must delve into our evolutionary past. For millions of years, our ancestors lived in small, tightly knit social groups, competing with other groups for scarce resources. In this context, aggression and violence could serve as tools for survival and reproduction, allowing groups to defend territories and secure access to essential resources.

    Studies of chimpanzees, our close evolutionary relatives, have revealed patterns of intergroup violence that bear some resemblance to human warfare. Male chimpanzees have been observed forming coalitions to raid neighboring territories, engaging in lethal conflicts that some researchers argue may prefigure aspects of human martial behavior (Wrangham & Glowacki, 2012). However, it's crucial to recognize that violence and zero-sum competition are not the only evolutionary strategies for success.

    Bonobos, which are as closely related to humans as chimpanzees are, have evolved to prioritize cooperation, empathy, and conflict resolution through social bonding rather than aggression (Hare & Woods, 2020). This stark contrast between two closely related species demonstrates that nature can select for peaceful coexistence as readily as for competition.

    Moreover, when we look beyond our primate relatives, we find countless examples in nature where mutual reliance and symbiosis triumph over adversity. Coral reefs, often called the "rainforests of the sea," offer a stunning illustration…

    35 min
