ibl.ai


ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI.

  1. 5 HR AGO

    Open Praxis: The Manifesto for Teaching and Learning in a Time of Generative AI – A Critical Collective Stance to Better Navigate the Future

    Summary of https://openpraxis.org/articles/777/files/6749b446d17e9.pdf This document presents a collaboratively written manifesto offering a critical examination of the integration of Generative AI (GenAI) in higher education. It identifies both the positive and negative aspects of GenAI's influence on teaching and learning, stressing that GenAI is not a neutral tool and risks reinforcing existing biases. The manifesto calls for research-backed decision-making to ensure GenAI enhances human agency and promotes ethical responsibility in education. It also warns of the potential deprofessionalization of the education field if AI tools increasingly automate tasks like grading, tutoring, and content delivery, potentially leading to job displacement and reduced opportunities for educators. The text explores the importance of AI literacy for users and examines the risks of human-AI symbiosis, including the erosion of human judgment, autonomy, and creative agency. The authors hope to encourage debate and offer insight into the future of GenAI in educational contexts. Here are the five main takeaways:

    1. GenAI is not a neutral tool. It reflects worldviews and can reinforce biases, potentially marginalizing diverse voices.
    2. GenAI can both enhance and diminish essential human elements in education. While it offers potential for personalized learning and efficiency, it also risks eroding creativity, critical thinking, and empathy.
    3. Ethical considerations are paramount. Issues such as bias, fairness, transparency, and data security must be addressed to ensure responsible deployment of GenAI.
    4. Educators, administrators, and policymakers need to rethink education. Continuing with 'business as usual' is not an option; a shift is needed to emphasize learning processes and adapt assessment methods.
    5. Robust, evidence-based research is crucial. Decisions about integrating GenAI in education should be guided by a deep understanding of its impacts.

    19 min
  2. 2 DAYS AGO

    Microsoft: The AI Decision Brief – Insights from Microsoft and AI Leaders on Navigating the Generative AI Platform Shift

    Summary of https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/ Microsoft's "The AI Decision Brief" explores the transformative power of generative AI across industries. It offers guidance on navigating the AI platform shift, emphasizing strategies for effective implementation and maximizing opportunities while mitigating risks. The brief outlines stages of AI readiness, key drivers of value, and examples of successful AI adoption. It addresses challenges such as skill shortages, security concerns, and regulatory compliance, providing insights from industry leaders and customer stories. Furthermore, it emphasizes building trustworthy AI through security, privacy, and safety measures, underscoring Microsoft's commitment to supporting customers in their AI transformation journey. The document concludes by highlighting the future potential of AI in sustainability and various sectors, emphasizing the importance of collaboration and continuous learning in the age of AI. Here are five key takeaways:

    1. Generative AI is rapidly transforming industries, presenting opportunities for unprecedented impact and growth for leaders who embrace its potential. Its adoption rate is historically fast, with usage among enterprises jumping from 55% in 2023 to 75% in 2024.
    2. AI is becoming more accessible, and Microsoft is committed to providing broad technology access to empower organizations and individuals worldwide to develop and use AI in ways that serve the public good.
    3. Organizations progress through five stages of AI readiness: exploring, planning, implementing, scaling, and realizing, each with its own strategic priorities. Identifying the correct stage and implementing appropriate strategies is critical for managing generative AI transformation.
    4. Trust is crucial for AI innovation, and organizations should prioritize responsible AI practices and security. Trustworthy AI comprises three pillars: security, privacy, and safety.
    5. AI leaders are seeing greater returns and accelerated innovation, averaging a 370% ROI, with top leaders achieving a 1000% ROI. The highest-performing organizations realize almost four times the value from their AI investments compared to those just getting started.

    35 min
  3. 2 DAYS AGO

    Georgia Institute of Technology: It’s Just Distributed Computing – Rethinking AI Governance

    Summary of https://www.sciencedirect.com/science/article/pii/S030859612500014X This paper argues that the current approach to governing "AI" is misguided. It posits that what we call "AI" is not a singular, novel technology, but rather a diverse set of machine-learning applications that have evolved within a broader digital ecosystem over decades. The author introduces a framework centered on the digital ecosystem, composed of computing devices, networks, data, and software, to analyze AI's governance. Instead of attempting to regulate "AI" generically, the author suggests focusing on specific problems arising from individual machine learning applications. The author critiques several proposed AI governance strategies, including moratoria, compute control, and cloud regulation, revealing that most of these proposals are really about controlling all components of the digital ecosystem, not AI specifically. By shifting the focus to specific applications and their impacts, the paper advocates for more decentralized and effective policy solutions. Here are five important takeaways:

    1. What is referred to as "artificial intelligence" is a diverse set of machine learning applications that rely on a digital ecosystem, not a single technology.
    2. "AI governance" can be practically meaningless because of the numerous, diverse, and embedded applications of machine learning in networked computing.
    3. The digital ecosystem is composed of computing devices, networks, data, and software.
    4. Many policy concerns now attributed to "AI" were anticipated by policy conflicts associated with the rise of the Internet.
    5. Attempts to regulate "AI" as a general capability may require systemic control of digital ecosystem components and can be unrealistic, disproportionate, or dangerously authoritarian.

    17 min
  4. 4 DAYS AGO

    George Mason University: Generative AI in Higher Education – Evidence from an Analysis of Institutional Policies and Guidelines

    Summary of https://arxiv.org/pdf/2402.01659 This paper examines how higher education institutions (HEIs) are responding to the rise of generative AI (GenAI) like ChatGPT. Researchers analyzed policies and guidelines from 116 US universities to understand the advice given to faculty and stakeholders. The study found that most universities encourage GenAI use, particularly for writing-related activities, and offer guidance for classroom integration. However, the authors caution that this widespread endorsement may create burdens for faculty and overlook long-term pedagogical implications and ethical concerns. The research explores the range of institutional approaches, from embracing to discouraging GenAI, and highlights considerations related to privacy, diversity, equity, and STEM fields. Ultimately, the findings suggest that HEIs are grappling with how to navigate the integration of GenAI into education, often with a focus on revising teaching methods and managing potential risks. Here are five important takeaways:

    1. Institutional embrace of GenAI: A significant number of HEIs are embracing GenAI, with 63% encouraging its use. Many universities provide detailed guidance for classroom integration, including sample syllabi (56%) and curriculum activities (50%). This indicates a shift towards accepting and integrating GenAI into the educational landscape.
    2. Focus on writing-related activities: A notable portion of GenAI guidance focuses on writing-related activities, while STEM-related activities, including coding, are mentioned less frequently and often vaguely (50%). This suggests an emphasis on GenAI's role in enhancing writing skills and a potential gap in exploring its applications in other disciplines.
    3. Ethical and privacy considerations: Over half of the institutions address the ethics of GenAI, including diversity, equity, and inclusion (DEI) (52%), as well as privacy concerns (57%). Common privacy advice includes exercising caution when sharing personal or sensitive data with GenAI. Discussions with students about the ethics of using GenAI in the classroom are also encouraged (53%).
    4. Rethinking pedagogy and increased workload: Both encouraging and discouraging GenAI use implies a rethinking of classroom strategies and an increased workload for instructors and students. Institutions are providing guidance on flipping classrooms and rethinking teaching and evaluation strategies.
    5. Concerns about long-term impact and normalization: There are concerns regarding the long-term impact on intellectual growth and pedagogy. Normalizing GenAI use may make its presence indiscernible, posing ethical challenges and potentially discouraging intellectual development. Institutions may also be confusing acknowledging GenAI with experimenting with it in the classroom.

    36 min
  5. 4 DAYS AGO

    Digital Education Council: Global AI Faculty Survey 2025

    Summary of https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey The Digital Education Council's Global AI Faculty Survey 2025 explores faculty perspectives on AI in higher education. The survey, gathering insights from 1,681 faculty members across 28 countries, investigates AI usage, its impact on teaching and learning, and institutional support for AI integration. Key findings reveal that a majority of faculty have used AI in teaching, mainly for creating materials, but many have concerns about student over-reliance and evaluation skills. Furthermore, faculty express a need for clearer guidelines, improved AI literacy resources, and training from their institutions. The report also highlights the need for redesigning student assessments to address AI's impact. The survey data is intended to inform higher education leaders in their AI integration efforts and complements the DEC's Global AI Student Survey. Here are the five most important takeaways:

    1. Faculty have largely adopted AI in teaching, but use it sparingly. 61% of faculty report they have used AI in teaching; however, a significant majority of these faculty members indicate they use it sparingly.
    2. Many faculty express concerns regarding students' AI literacy and potential over-reliance on AI. 83% of faculty are concerned about students' ability to critically evaluate AI output, and 82% worry that students may become too reliant on AI.
    3. Most faculty feel that institutions need to provide more AI guidance. 80% of faculty feel that their institution's AI guidelines are not comprehensive, and a similar percentage report a lack of clarity on how AI can be applied in teaching within their institutions.
    4. A significant number of faculty are calling for changes to student assessment methods. 54% of faculty believe that current student evaluation methods require significant changes, and half believe that current assignments need to be redesigned to be more AI-resistant.
    5. The majority of faculty are positive about using AI in teaching in the future. 86% of faculty see themselves using AI in their teaching practices in the future, and two-thirds agree that incorporating AI into teaching is necessary to prepare students for future job markets.

    12 min
  6. 5 DAYS AGO

    Google: Towards an AI Co-Scientist

    Summary of https://storage.googleapis.com/coscientist_paper/ai_coscientist.pdf This paper introduces an AI co-scientist system designed to assist researchers in accelerating scientific discovery, particularly in biomedicine. The system employs a multi-agent architecture, using large language models to generate novel research hypotheses and experimental protocols based on user-defined research goals. The AI co-scientist leverages web search and other tools to refine its proposals and provides reasoning for its recommendations. It is intended to collaborate with scientists, augmenting their hypothesis generation rather than replacing them. The system's effectiveness is validated through expert evaluations and wet-lab experiments in drug repurposing, target discovery, and antimicrobial resistance. The co-scientist architecture is model agnostic and is likely to benefit from further advancements in frontier and reasoning LLMs. The paper also addresses safety and ethical considerations associated with such an AI system. Here are five key takeaways about the AI co-scientist:

    1. Multi-Agent Architecture: The AI co-scientist utilizes a multi-agent system built on Gemini 2.0, featuring specialized agents (Generation, Reflection, Ranking, Evolution, Proximity, and Meta-review) that work together to generate, debate, and evolve research hypotheses. The Supervisor agent orchestrates these agents, assigning them tasks and managing the flow of information. This architecture facilitates a "generate, debate, evolve" approach, mirroring the scientific method.
    2. Iterative Improvement: The system employs a tournament framework where different research proposals are evaluated and ranked, enabling iterative improvements. The Ranking agent uses an Elo-based tournament to assess and prioritize hypotheses through pairwise comparisons and simulated scientific debates. The Evolution agent refines top-ranked hypotheses by synthesizing ideas, using analogies, and simplifying concepts. The Meta-review agent synthesizes insights from all reviews to optimize the performance of the other agents.
    3. Integration of Tools and Data: The AI co-scientist leverages various tools, including web search, domain-specific databases, and AI models like AlphaFold, to generate and refine hypotheses. It can also index and search private repositories of publications specified by scientists. The system is designed to align with scientist-provided research goals, preferences, and constraints, ensuring that the generated outputs are relevant and plausible.
    4. Validation through Experimentation: The AI co-scientist's capabilities have been validated in three biomedical areas: drug repurposing, novel target discovery, and explaining mechanisms of bacterial evolution and antimicrobial resistance. In drug repurposing, the system proposed candidates for acute myeloid leukemia (AML) that showed tumor inhibition in vitro. For novel target discovery, it suggested new epigenetic targets for liver fibrosis, validated by anti-fibrotic activity in human hepatic organoids. In explaining bacterial evolution, the AI co-scientist independently recapitulated unpublished experimental results regarding a novel gene transfer mechanism.
    5. Expert-in-the-Loop Interaction: Scientists can interact with the AI co-scientist through a natural language interface to specify research goals, incorporate constraints, provide feedback, and suggest new directions. The system can incorporate reviews from expert scientists to guide ranking and system improvements. It can also be directed to follow up on specific research directions and prioritize the synthesis of relevant research.
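    The Elo-based tournament used by the Ranking agent can be illustrated with a short sketch. This is a generic illustration of pairwise Elo ranking, not code from the paper; the K-factor, starting rating, and the `judge` callback (which stands in for the paper's LLM-simulated scientific debates) are assumptions for the example.

```python
import itertools
import random

def elo_update(rating_a, rating_b, a_won, k=32.0):
    """Update two Elo ratings after one pairwise comparison (a simulated debate)."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    # Elo updates are zero-sum: what the winner gains, the loser loses.
    return rating_a + delta, rating_b - delta

def rank_hypotheses(hypotheses, judge, rounds=100, seed=0):
    """Tournament ranking: repeatedly pick a pair, let `judge(a, b)` decide whether
    `a` wins, update both Elo ratings, and return hypotheses sorted strongest-first."""
    rng = random.Random(seed)
    ratings = {h: 1200.0 for h in hypotheses}  # conventional Elo starting rating
    pairs = list(itertools.combinations(hypotheses, 2))
    for _ in range(rounds):
        a, b = rng.choice(pairs)
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], judge(a, b))
    return sorted(ratings, key=ratings.get, reverse=True)
```

    Any callable that returns True when the first hypothesis wins can serve as the judge here; in the paper's system that role is played by the Ranking agent's simulated debates.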

    15 min
  7. 5 DAYS AGO

    OpenAI: Building an AI-Ready Workforce – A Look at College Student ChatGPT Adoption in the US

    Summary of https://cdn.openai.com/global-affairs/openai-edu-ai-ready-workforce.pdf OpenAI's report examines the prevalence of ChatGPT use among college students in the United States and its implications for the future workforce. It highlights that students are actively using AI tools for learning and skill development, even outpacing formal educational integration. The study identifies disparities in AI adoption across different states, which could lead to future economic gaps. The report advocates for increased AI literacy, wider access to AI tools, and the development of clear institutional policies regarding AI use in education. It also emphasizes the importance of aligning educational practices with the growing demand from employers for AI-ready workers. The document uses data from ChatGPT usage and surveys of college students to support its findings and recommendations. Here are five key takeaways from the source:

    1. State-by-state differences in student AI adoption could create gaps in workforce productivity and economic development. Employers are increasingly looking for candidates with AI skills, so states with low rates of AI adoption risk falling behind.
    2. States like Utah and New York are proactively incorporating AI into higher education. For example, Salt Lake Community College is integrating AI experience into industry pipelines, and the University of Utah launched a $100 million AI research initiative. In New York, the State University of New York (SUNY) system will include AI education in its general education requirements starting in 2026.
    3. Many students are teaching themselves and their friends AI skills without waiting for their institutions to provide formal AI education or clear policies about the technology's use. This rapid adoption by students who haven't received formalized instruction in how and when to use the technology creates disparities in AI access and knowledge; the education ecosystem is in an important moment of exploration and learning.
    4. To build an AI-ready workforce, states should focus on driving access to AI tools, demystifying AI through education, and developing clear policies around AI use in education.
    5. AI literacy is essential for students' future success, yet while three in four higher education students want AI training, only one in four universities and colleges provide it. Teaching AI effectively requires practical examples that show students how AI can support their learning rather than replace it. A nationwide AI education strategy, rooted in local communities and supported by American companies, will help equip students and the workforce with AI skills. Academic institutions, professors, and teachers must also lay out clear guidance around AI use across classwork, homework, and assessments.

    26 min
  8. 6 DAYS AGO

    MIT: The AI Agent Index

    Summary of https://arxiv.org/pdf/2502.01635 The AI Agent Index is a newly created public database documenting agentic AI systems. These systems, which plan and execute complex tasks with limited human oversight, are increasingly being deployed in various domains. The index details each system's technical components, applications, and risk management practices based on public data and developer input. An analysis of the data shows ample information on agentic systems' capabilities and applications; however, the authors found limited transparency regarding safety and risk mitigation. The authors aim to provide a structured framework for documenting agentic AI systems and to improve public awareness, shedding light on the geographical spread, academic versus industry development, openness, and risk management of agentic systems. Here are the five most important takeaways:

    1. The AI Agent Index is a public database designed to document key information about deployed agentic AI systems, covering each system's components, application domains, and risk management practices. It aims to fill a gap by providing a structured framework for documenting the technical, safety, and policy-relevant features of agentic AI systems. The index is available at https://aiagentindex.mit.edu/.
    2. Agentic AI systems are being deployed at an increasing rate. Systems that meet the inclusion criteria have had initial deployments dating back to early 2023, with approximately half of the indexed systems deployed in the second half of 2024.
    3. Most indexed systems are developed by companies located in the USA, specializing in software engineering and/or computer use. Out of the 67 agents, 45 were created by developers in the USA, and 74.6% of the agents specialize in either software engineering or computer use. While most agentic systems are developed by companies, a significant fraction come from academia: 18 (26.9%) are academic projects, while 49 (73.1%) are from companies.
    4. Developers are relatively forthcoming about details related to usage and capabilities. The majority of indexed systems have released code and/or documentation: 49.3% release code, and 70.1% release documentation. Systems developed as academic projects are released with a high degree of openness, with 88.8% releasing code.
    5. There is limited publicly available information about safety testing and risk management practices. Only 19.4% of indexed agentic systems disclose a formal safety policy, and fewer than 10% report external safety evaluations. Most of the systems that have undergone formal, publicly reported safety testing come from a small number of large companies.

    19 min
