ibl.ai

ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI.

  1. 4D AGO

    Baruch College: Not all AI is Created Equal – A Meta-Analysis Revealing Drivers of AI Resistance Across Markets, Methods, and Time

    Summary of https://www.sciencedirect.com/science/article/pii/S0167811625000114

    Presents a meta-analysis of two decades of studies examining consumer resistance to artificial intelligence (AI). The authors synthesize findings from hundreds of studies with over 76,000 participants, revealing that AI aversion is context-dependent and varies based on the AI's label, application domain, and perceived characteristics. Interestingly, the study finds that negative consumer responses have decreased over time, particularly for cognitive evaluations of AI. Furthermore, the meta-analysis indicates that research design choices influence observed AI resistance, with studies using more ecologically valid methods showing less aversion.

    Consumers exhibit an overall small but statistically significant aversion to AI (average Cohen’s d = -0.21). This means that, on average, people tend to respond more negatively to outputs or decisions labeled as coming from AI compared to those labeled as coming from humans.

    Consumer aversion to AI is strongly context-dependent, varying significantly by the AI label and the application domain. Embodied forms of AI, such as robots, elicit the most negative responses (d = -0.83) compared to AI assistants or mere algorithms. Furthermore, domains involving higher stakes and risks, like transportation and public safety, trigger more negative responses than domains focused on productivity and performance, such as business and management.

    Consumer responses to AI are not static and have evolved over time, generally becoming less negative, particularly for cognitive evaluations (e.g., performance or competence judgements). While initial excitement around generative AI in 2021 led to a near null-effect in cognitive evaluations, affective and behavioral responses still remain significantly negative overall.

    The characteristics ascribed to AI significantly influence consumer responses. Negative responses are stronger when AI is described as having high autonomy (d = -0.28), inferior performance (d = -0.53), lacking human-like cues (anthropomorphism) (d = -0.23), and not recognizing the user's uniqueness (d = -0.24). Conversely, limiting AI autonomy, highlighting superior performance, incorporating anthropomorphic cues, and emphasizing uniqueness recognition can alleviate AI aversion.

    The methodology used to study AI aversion impacts the findings. Studies with greater ecological validity, such as field studies, those using incentive-compatible designs, perceptually rich stimuli, clear explanations of AI, and behavioral (rather than self-report) measures, document significantly smaller aversion towards AI. This suggests that some documented resistance in purely hypothetical lab settings might be an overestimation of real-world aversion.
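
    For readers unfamiliar with the metric, the Cohen's d values quoted above are standardized mean differences. A minimal rendering of the standard two-sample formula follows; the paper's exact computation from reported study statistics may differ, so treat this as orientation rather than the authors' procedure:

    \[
    d = \frac{\bar{x}_{\text{AI}} - \bar{x}_{\text{Human}}}{s_{\text{pooled}}},
    \qquad
    s_{\text{pooled}} = \sqrt{\frac{(n_{\text{AI}} - 1)\, s_{\text{AI}}^2 + (n_{\text{Human}} - 1)\, s_{\text{Human}}^2}{n_{\text{AI}} + n_{\text{Human}} - 2}}
    \]

    On this scale, d = -0.21 means responses in the AI-labeled condition average about a fifth of a pooled standard deviation less favorable than in the human-labeled condition, a small effect by conventional benchmarks, whereas d = -0.83 for embodied robots is a large one.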

    14 min
  2. 4D AGO

    CSET: Putting Explainable AI to the Test – A Critical Look at Evaluation Approaches

    Summary of https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/

    This Center for Security and Emerging Technology issue brief examines how researchers evaluate explainability and interpretability in AI-enabled recommendation systems. The authors' literature review reveals inconsistencies in defining these terms and a primary focus on assessing system correctness (building systems right) over system effectiveness (building the right systems for users). They identified five common evaluation approaches used by researchers, noting a strong preference for case studies and comparative evaluations. Ultimately, the brief suggests that without clearer standards and expertise in evaluating AI safety, policies promoting explainable AI may fall short of their intended impact.

    Researchers do not clearly differentiate between explainability and interpretability when describing these concepts in the context of AI-enabled recommendation systems. The descriptions of these principles in research papers often use a combination of similar themes. This lack of consistent definition can lead to confusion and inconsistent application of these principles.

    The study identified five common evaluation approaches used by researchers for explainability claims: case studies, comparative evaluations, parameter tuning, surveys, and operational evaluations. These approaches can assess either system correctness (whether the system is built according to specifications) or system effectiveness (whether the system works as intended in the real world).

    Research papers show a strong preference for evaluations of system correctness over evaluations of system effectiveness. Case studies, comparative evaluations, and parameter tuning, which are primarily focused on testing system correctness, were the most common approaches. In contrast, surveys and operational evaluations, which aim to test system effectiveness, were less prevalent.

    Researchers adopt various descriptive approaches for explainability, which can be categorized into descriptions that rely on other principles (like transparency), focus on technical implementation, state the purpose as providing a rationale for recommendations, or articulate the intended outcomes of explainable systems.

    The findings suggest that policies for implementing or evaluating explainable AI may not be effective without clear standards and expert guidance. Policymakers are advised to invest in standards for AI safety evaluations and develop a workforce capable of assessing the efficacy of these evaluations in different contexts to ensure reported evaluations provide meaningful information.

    20 min
  3. 4D AGO

    Harvard Business School: The Value of Open Source Software

    Summary of https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf

    Investigates the economic value of open source software (OSS) by estimating both the supply-side (creation cost) and the significantly larger demand-side (usage value). Utilizing unique global data on OSS usage by firms, the authors calculate the cost to recreate widely used OSS and the replacement value for firms if OSS did not exist. Their findings reveal a substantial multi-trillion dollar demand-side value, far exceeding the billions needed for recreation, highlighting OSS's critical, often unmeasured, role in the modern economy. The study also examines the concentration of value creation among a small percentage of developers and the distribution of OSS value across different programming languages and industries.

    This study estimates that the demand-side value of widely-used open source software (OSS) is significantly larger than its supply-side value. The researchers estimate the supply-side value (the cost to recreate the most widely used OSS once) to be $4.15 billion, while the demand-side value (the replacement value for each firm that uses the software and would need to build it internally if OSS did not exist) is estimated to be much larger at $8.8 trillion. This highlights the substantial economic benefit derived from the reuse of OSS by numerous firms; the relationship between the two estimates is sketched after this summary.

    The research reveals substantial heterogeneity in the value of OSS across different programming languages. For example, in terms of demand-side value, Go is estimated to be more than four times the value of the next language, JavaScript, while Python has a considerably lower value among the top languages analyzed. This indicates that the economic impact of OSS is not evenly distributed across the programming language landscape.

    The study finds a high concentration in the creation of OSS value, with only a small fraction of developers contributing the vast majority of the value. Specifically, it's estimated that 96% of the demand-side value is created by only 5% of OSS developers. These top contributors also tend to contribute to a substantial number of repositories, suggesting their impact is broad across the OSS ecosystem.

    Measuring the value of OSS is inherently difficult due to its non-pecuniary (free) nature and the lack of centralized usage tracking. This study addresses this challenge by leveraging unique global data from two complementary sources: the Census II of Free and Open Source Software – Application Libraries and the BuiltWith dataset, which together capture OSS usage by millions of global firms. By focusing on widely-used OSS, the study aims to provide a more precise understanding of its value compared to studies that estimate the replacement cost of all existing OSS.

    The estimated demand-side value of OSS suggests that if it did not exist, firms would need to spend approximately 3.5 times more on software than they currently do. This underscores the massive cost savings and productivity enhancement that the existence of OSS provides to the economy. The study argues that recognizing this value is crucial for the future health of the digital economy and for informing policymakers about the importance of supporting the OSS ecosystem.
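
    To make the gap between the two estimates concrete, here is a schematic rendering of how the summary describes them; the notation (a per-package recreation cost C_p) is introduced for illustration and is not the paper's own:

    \[
    V_{\text{supply}} \;=\; \sum_{p \,\in\, \text{OSS packages}} C_p,
    \qquad
    V_{\text{demand}} \;=\; \sum_{f \,\in\, \text{firms}} \;\; \sum_{p \,\in\, \text{OSS used by firm } f} C_p
    \]

    Writing each package once yields the supply-side figure of roughly $4.15 billion, while counting an internal rebuild for every firm that uses each package multiplies that cost across millions of firms, which is how the demand-side figure reaches roughly $8.8 trillion.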

    22 min
  4. 4D AGO

    Hoover Institution: The Artificially Intelligent Boardroom

    Summary of https://www.hoover.org/sites/default/files/research/docs/cgri-closer-look-110-ai.pdf

    Examines the potential impact of artificial intelligence on corporate boardrooms and governance. It argues that while AI's influence on areas like decision-making is acknowledged, its capacity to reshape the operations and practices of the board itself warrants greater attention. The authors explore how AI could alter board functions, information processing, interactions with management, and the role of advisors, while also considering the challenges of maintaining board-management boundaries and managing information access. Ultimately, the piece discusses how AI could transform various governance obligations and presents both the benefits and risks associated with its adoption in the boardroom.

    AI has the potential to significantly transform corporate governance by reshaping how boards function, process information, interact with management and advisors, and fulfill specific governance obligations. Boards are already aware of AI's potential, ranking its increased use across the organization as a top priority.

    AI can reduce the information asymmetry between the board and management by increasing the volume, type, and quality of information available to directors. This allows boards to be more proactive and less reliant on management-provided information, potentially leading to better oversight. AI tools can enable directors to search and synthesize public and private information more easily.

    The adoption of AI will significantly increase the expectations and responsibilities of board members. Directors will be expected to spend more time preparing for meetings by reviewing and analyzing a greater quantity of information. They will also be expected to ask higher-quality questions and provide deeper insights, leveraging AI tools for analysis and benchmarking.

    AI can enhance various governance functions, including strategy, compensation, human capital management, audit, legal matters, and board evaluations. For example, AI can facilitate richer scenario planning, provide real-time compensation benchmarking, identify skills gaps in human capital, detect potential fraud, monitor legal developments, and analyze board effectiveness. This may also lead to a supplementation or replacement of work currently done by paid advisors.

    The integration of AI into the boardroom also presents several risks and challenges, including maintaining the separation of board and management responsibilities, managing information access, ensuring data security, addressing the potential for errors and biases in AI models, and avoiding "analysis paralysis". Boards will need to develop new protocols and skills to effectively utilize AI while mitigating these risks.

    15 min
  5. MAR 16

    Harvard Business School: Why Most Resist AI Companions

    Summary of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445

    This working paper by De Freitas et al. investigates why people resist forming relationships with AI companions, despite their potential to alleviate loneliness. The authors reveal that while individuals acknowledge AI's superior availability and non-judgmental nature compared to humans, they do not consider AI relationships to be "true" due to a perceived lack of essential qualities like mutual caring and emotional understanding. Through several studies, the research demonstrates that this resistance stems from a belief that AI cannot truly understand or feel emotions, leading to the perception of one-sided relationships. Even direct interaction with AI companions only marginally increases acceptance by improving perceptions of superficial features, failing to alter deeply held beliefs about AI's inability to fulfill core relational values. Ultimately, the paper highlights significant psychological barriers hindering the widespread adoption of AI companions for social connection.

    People exhibit resistance to adopting AI companions despite acknowledging their superior capabilities in certain relationship-relevant aspects like availability and being non-judgmental. This resistance stems from the belief that AI companions are incapable of realizing the essential values of relationships, such as mutual caring and emotional understanding.

    This resistance is rooted in a dual character concept of relationships, where people differentiate between superficial features and essential values. Even if AI companions possess the superficial features (e.g., constant availability), they are perceived as lacking the essential values (e.g., mutual caring), leading to the judgment that relationships with them are not "true" relationships.

    The belief that AI companions cannot realize essential relationship values is linked to perceptions of AI's deficiencies in mental capabilities, specifically the ability to understand and feel emotions, which are seen as crucial for mutual caring and thus for a relationship to be considered mutual and "true". Physical intimacy was not found to be a significant mediator in this belief.

    Interacting with an AI companion can increase willingness to engage with it for friendship and romance, primarily by improving perceptions of its advertised, more superficial capabilities (like being non-judgmental and available). However, such interaction does not significantly alter the fundamental belief that AI is incapable of realizing the essential values of relationships. The mere belief that one is interacting with a human (even when it's an AI) enhances the effectiveness of the interaction in increasing acceptance.

    The strong, persistent belief about AI's inability to fulfill the essential values of relationships represents a significant psychological barrier to the widespread adoption of AI companions for reducing loneliness. This suggests that the potential loneliness-reducing benefits of AI companions may be difficult to achieve in practice unless these fundamental beliefs can be addressed. The resistance observed in the relationship domain, where values are considered essential, might be stronger than in task-based domains where performance is the primary concern.

    11 min
  6. MAR 16

    Center for AI Policy: US Open-Source AI Governance – Balancing Ideological and Geopolitical Considerations with China Competition

    Summary of https://cdn.prod.website-files.com/65af2088cac9fb1fb621091f/67aaca031ed677c879434284_Final_US%20Open-Source%20AI%20Governance.pdf

    This document from the Center for AI Policy and Yale Digital Ethics Center examines the contentious debate surrounding the governance of open-source artificial intelligence in the United States. It highlights the tension between the ideological values promoting open access and geopolitical considerations, particularly competition with China. The authors analyze various policy proposals for open-source AI, creating a rubric that combines ideological factors like transparency and innovation with geopolitical risks such as misuse and global power dynamics. Ultimately, the paper suggests targeted policy interventions over broad restrictions to balance the benefits of open-source AI with national security concerns, emphasizing ongoing monitoring of technological advancements and geopolitical landscapes.

    The debate surrounding open-source AI regulation involves a tension between ideological values (innovation, transparency, power distribution) and geopolitical considerations, particularly US-China competition (Chinese misuse, backdoor risks, global power dynamics). Policymakers are grappling with how to reconcile these two perspectives, especially in light of advancements in Chinese open-source AI.

    Heavy-handed regulation like blanket export controls on all open-source AI models is likely sub-optimal and counterproductive. Such controls would significantly disrupt the development of specific-use applications, have limited efficacy against Chinese misuse, and could undermine US global power by discouraging international use of American technology.

    More targeted interventions are suggested as preferable to broad restrictions. The paper analyzes policies such as industry-led risk assessments for model release and government funding for an open-source repository of security audits. These approaches aim to balance the benefits of open-source AI with the need to address specific security risks more effectively and with less disruption to innovation.

    The nature of open-source AI, being globally accessible information, makes it inherently difficult to decouple the US and Chinese ecosystems. Attempts to do so through export controls may have unintended consequences and could be circumvented due to the ease of information transfer.

    Further research and monitoring are crucial to inform future policy decisions. Key areas for ongoing attention include tracking the performance gap between open and closed models, understanding the origins of algorithmic innovations, developing objective benchmarks for comparing models from different countries, and advancing technical safety mitigations for open models.

    24 min
  7. MAR 16

    National Security: Superintelligence Strategy

    Summary of https://arxiv.org/pdf/2503.05628

    This expert strategy document from Dan Hendrycks, Eric Schmidt, and Alexandr Wang addresses the national security implications of rapidly advancing AI, particularly the anticipated emergence of superintelligence. The authors propose a three-pronged framework drawing parallels with Cold War strategies: deterrence through the concept of Mutual Assured AI Malfunction (MAIM), nonproliferation to restrict access for rogue actors, and competitiveness to bolster national strength. The text examines threats from rival states, terrorists, and uncontrolled AI, arguing for proactive measures like cyber espionage and sabotage for deterrence, export controls and information security for nonproliferation, and domestic AI chip manufacturing and legal frameworks for competitiveness. Ultimately, the document advocates for a risk-conscious, multipolar strategy to navigate the transformative and potentially perilous landscape of advanced artificial intelligence.

    Rapid advances in AI, especially the anticipation of superintelligence, present significant national security challenges akin to those posed by nuclear weapons. The dual-use nature of AI means it can be leveraged for both economic and military dominance by states, while also enabling rogue actors to develop bioweapons and launch cyberattacks. The potential for loss of control over advanced AI systems further amplifies these risks.

    The concept of Mutual Assured AI Malfunction (MAIM) is introduced as a likely default deterrence regime. This is similar to nuclear Mutual Assured Destruction (MAD), where any aggressive pursuit of unilateral AI dominance by a state would likely be met with preventive sabotage by its rivals, ranging from cyberattacks to potential kinetic strikes on AI infrastructure.

    A critical component of a superintelligence strategy is nonproliferation. Drawing from precedents in restricting weapons of mass destruction, this involves three key levers: compute security to track and control the distribution of high-end AI chips, information security to protect sensitive AI research and model weights from falling into the wrong hands, and AI security to implement safeguards that prevent the malicious use and loss of control of AI systems.

    Beyond mitigating risks, states must also focus on competitiveness in the age of AI to ensure their national strength. This includes strategically integrating AI into military command and control and securing drone supply chains, guaranteeing access to AI chips through domestic manufacturing and strategic export controls, establishing legal frameworks to govern AI agents, and maintaining political stability in the face of rapid automation and the spread of misinformation.

    Existing strategies for dealing with advanced AI, such as a completely hands-off approach, voluntary moratoria, or a unilateral pursuit of a strategic monopoly, are flawed and insufficient to address the multifaceted risks and opportunities presented by AI. The authors propose a multipolar strategy based on the interconnected pillars of deterrence (MAIM), nonproliferation, and competitiveness, drawing lessons from the Cold War framework adapted to the unique challenges of superintelligence.

    28 min
  8. MAR 13

    Monash University: Gen AI in Higher Ed – A Global Perspective of Institutional Adoption Policies and Guidelines

    Summary of https://www.sciencedirect.com/science/article/pii/S2666920X24001516

    This paper examines how higher education institutions globally are addressing the integration of generative AI by analyzing the adoption policies of 40 universities across six regions through the lens of the Diffusion of Innovations Theory. The study identifies key themes related to compatibility, trialability, and observability of AI, the communication channels being used, and the defined roles and responsibilities for faculty, students, and administrators. Findings reveal a widespread emphasis on academic integrity and enhancing learning, but also highlight gaps in comprehensive policies and equitable access, offering insights for policymakers to develop inclusive AI integration strategies.

    Universities globally are proactively addressing the integration of generative AI (GAI) in higher education, primarily focusing on academic integrity, enhancing teaching and learning, and promoting AI literacy. This is evidenced by the emphasis on these themes in the analysis of policies across 40 universities from six global regions. The study highlights that institutions recognize the transformative potential of GAI while also being concerned about its ethical implications and impact on traditional educational values.

    The study, utilizing the Diffusion of Innovations Theory (DIT), reveals that while universities are exploring GAI's compatibility, trialability, and observability, significant gaps exist in comprehensive policy frameworks, particularly concerning data privacy and equitable access. The research specifically investigated these innovation characteristics in university policies. Although many universities address academic integrity and the potential for enhancing education (compatibility), and are encouraging experimentation (trialability), fewer have robust strategies for evaluating GAI's impact (observability) and clear guidelines for data privacy and equal access.

    Communication about GAI adoption is varied, with digital platforms being the most common channel, but less than half of the studied universities demonstrate a comprehensive approach to disseminating information and fostering dialogue among stakeholders. The analysis identified five main communication channels: digital platforms, interactive learning and engagement channels, direct and personalized communication channels, collaborative and social networks, and advisory, monitoring, and feedback channels. The finding that not all universities actively use a range of these channels suggests a need for more focused efforts in this area.

    Higher education institutions are establishing clear roles and responsibilities for faculty, students, and administrators in the context of GAI adoption. Faculty are largely tasked with integrating GAI into curricula and ensuring ethical use, students are responsible for ethical use and maintaining academic integrity, and administrators are primarily involved in policy development, implementation, and providing support. This highlights a structured approach to managing the integration of GAI within the educational ecosystem.

    Cultural backgrounds may influence the emphasis of GAI adoption policies, with institutions in North America and Europe often prioritizing innovation and critical thinking, while those in Asia emphasize ethical use and compliance, and universities in Africa and Latin America focus on equity and accessibility. This regional variation suggests that while there are common values, the specific challenges and priorities related to GAI adoption can differ based on cultural and socio-economic contexts.

    24 min
