ibl.ai

ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI.

  1. 8 HR AGO

    Coursera: 2025 Job Skills Report

    Summary of https://assets.ctfassets.net/2pudprfttvy6/5hucYCFs2oKtLHEqGGweZa/cf02ebfc138e4a3f7e54f78d36fc1eef/Job-Skills-Report-2025.pdf

    The Coursera Job Skills Report 2025 analyzes the fastest-growing skills for employees, students, and job seekers, highlighting the impact of generative AI. The report draws on data from more than five million enterprise learners across thousands of institutions. Key findings emphasize the surging demand for AI skills such as GenAI, computer vision, and machine learning, alongside crucial skills in cybersecurity, data ethics, and risk management. These trends reflect the need for individuals and organizations to adapt to technological advances and evolving job-market demands. The report also identifies regional differences in skill priorities and offers recommendations for businesses, educational institutions, governments, and learners to foster workforce readiness. Overall, it underscores the importance of continuous upskilling and reskilling in areas like AI, data, and cybersecurity to thrive in the future of work.

    Key takeaways:

    - GenAI skills are in high demand and growing rapidly across all enterprise learners. Course enrollments in GenAI have surged, with a significant share of learners coming from India, Colombia, and Mexico, underscoring the growing need for AI capabilities in the workplace.
    - Cybersecurity and risk management skills are crucial as cyberattacks become more frequent and sophisticated, driving demand for professionals who can identify, assess, and mitigate risks.
    - Data ethics and data governance are growing priorities, especially among employees and students, with increasing emphasis on responsibly managing and analyzing customer data to ensure "safe and secure" AI use.
    - Students are focusing on sustainability skills such as waste minimization, business continuity planning, and disaster recovery, aligning with the growing demand for green jobs and concerns about the effects of climate change.
    - Upskilling and reskilling initiatives are vital for workforce readiness. Businesses, higher education institutions, and governments must work together to equip individuals with essential skills in AI, cybersecurity, and data literacy, improving employability, productivity, and overall competitiveness in a rapidly evolving job market.

    27 min
  2. 8 HR AGO

    McKinsey: The Critical Role of Strategic Workforce Planning in the Age of AI

    Summary of https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-critical-role-of-strategic-workforce-planning-in-the-age-of-ai

    McKinsey emphasizes the growing importance of strategic workforce planning (SWP) in the age of rapidly evolving technology, particularly generative AI. It highlights how forward-thinking companies are treating talent management with the same importance as financial capital, using SWP to anticipate future needs and proactively manage their workforce. By adopting the practices below, organizations can improve their agility, ensure they have the right people with the right skills, and gain a competitive advantage in a dynamic market. The authors stress that SWP is crucial for navigating technological change and ensuring long-term resilience: it enables data-driven talent decisions, deliberate resource allocation, and a shift away from reactive hiring practices.

    The five best practices for companies preparing for disruptions from technological changes such as generative AI:

    - Prioritize talent investments as much as financial investments. Successful organizations understand that their workforce is a strategic asset, and investing in talent development and retention is essential for long-term health. Employees represent both an organization's largest investment and its deepest source of value.
    - Consider both capacity and capabilities. Organizations can identify the specific skills and competencies required for the critical roles that drive higher performance and create more value.
    - Plan for multiple business scenarios. A scenario-based approach creates flexibility for rapidly changing industry conditions.
    - Take an innovative approach to filling talent gaps. Weigh the time and cost implications of internal versus external hires, considering internal redeployments, reskilling or upskilling existing talent, acquisitions, and outsourcing.
    - Embed SWP into business as usual. Strategic workforce planning should be an ongoing process, not a one-off exercise. Embedded in core business operations, it helps companies anticipate workforce needs, respond to changing demands, and ensure long-term agility and resilience.

    16 min
  3. 5 DAYS AGO

    Open Praxis: The Manifesto for Teaching and Learning in a Time of Generative AI – A Critical Collective Stance to Better Navigate the Future

    Summary of https://openpraxis.org/articles/777/files/6749b446d17e9.pdf

    This collaboratively written manifesto offers a critical examination of the integration of generative AI (GenAI) in higher education. It identifies both the positive and negative aspects of GenAI's influence on teaching and learning, stressing that it is not a neutral tool and risks reinforcing existing biases. The manifesto calls for research-backed decision-making to ensure GenAI enhances human agency and promotes ethical responsibility in education. It also warns of the deprofessionalization of the education field if AI tools increasingly automate tasks such as grading, tutoring, and content delivery, potentially leading to job displacement and reduced opportunities for educators. The text explores the importance of AI literacy for users and examines the risks of human-AI symbiosis, including the erosion of human judgement, autonomy, and creative agency. The authors hope to encourage debate and offer insight into the future of GenAI in educational contexts.

    The five main takeaways:

    - GenAI is not a neutral tool. It reflects worldviews and can reinforce biases, potentially marginalizing diverse voices.
    - GenAI can both enhance and diminish essential human elements in education. While it offers potential for personalized learning and efficiency, it also risks eroding creativity, critical thinking, and empathy.
    - Ethical considerations are paramount. Issues such as bias, fairness, transparency, and data security must be addressed to ensure responsible deployment of GenAI.
    - Educators, administrators, and policymakers need to rethink education. Continuing with "business as usual" is not an option; a shift is needed to emphasize learning processes and adapt assessment methods.
    - Robust, evidence-based research is crucial. Decisions about integrating GenAI in education should be guided by a deep understanding of its impacts.

    19 min
  4. 24 FEB

    Microsoft: The AI Decision Brief – Insights from Microsoft and AI Leaders on Navigating the Generative AI Platform Shift

    Summary of https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/

    Microsoft's "The AI Decision Brief" explores the transformative power of generative AI across industries. It offers guidance on navigating the AI platform shift, emphasizing strategies for effective implementation and maximizing opportunities while mitigating risks. The brief outlines stages of AI readiness, key drivers of value, and examples of successful AI adoption. It addresses challenges such as skill shortages, security concerns, and regulatory compliance, drawing on insights from industry leaders and customer stories. It also emphasizes building trustworthy AI through security, privacy, and safety measures, underscoring Microsoft's commitment to supporting customers on their AI transformation journey. The document concludes by highlighting AI's future potential in sustainability and other sectors, and the importance of collaboration and continuous learning in the age of AI.

    Five key takeaways:

    - Generative AI is rapidly transforming industries, presenting opportunities for unprecedented impact and growth for leaders who embrace its potential. Its adoption rate is historically fast, with usage among enterprises jumping from 55% in 2023 to 75% in 2024.
    - AI is becoming more accessible, and Microsoft is committed to providing broad technology access to empower organizations and individuals worldwide to develop and use AI in ways that serve the public good.
    - Organizations progress through five stages of AI readiness: exploring, planning, implementing, scaling, and realizing, each with its own strategic priorities. Identifying the correct stage and implementing appropriate strategies is critical for managing generative AI transformation.
    - Trust is crucial for AI innovation, and organizations should prioritize responsible AI practices and security. Trustworthy AI comprises three pillars: security, privacy, and safety.
    - AI leaders are seeing greater returns and accelerated innovation, averaging a 370% ROI, with top leaders achieving a 1000% ROI; a worked reading of these figures follows this list. The highest-performing organizations realize almost four times the value from their AI investments compared to those just getting started.
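    To make the ROI figures concrete, here is a minimal sketch assuming the conventional definition of ROI as net gain divided by cost, expressed as a percentage. The brief does not spell out its formula, so treat this as an interpretation, not Microsoft's methodology:

    ```python
    # Conventional ROI: net gain divided by cost, as a percentage.
    # Assumed definition -- the brief does not state its own formula.
    def roi_percent(total_return: float, cost: float) -> float:
        return (total_return - cost) / cost * 100

    # Under this reading, a 370% ROI means roughly $4.70 returned
    # per $1 invested:
    assert round(roi_percent(4.70, 1.00)) == 370
    # and a 1000% ROI means $11 back per $1 invested:
    assert round(roi_percent(11.00, 1.00)) == 1000
    ```

    On this reading, the average AI leader recoups its investment and gains an additional 3.7x on top.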

    35 min
  5. 24 FEB

    Georgia Institute of Technology: It’s Just Distributed Computing – Rethinking AI Governance

    Summary of https://www.sciencedirect.com/science/article/pii/S030859612500014X

    This paper argues that the current approach to governing "AI" is misguided. It posits that what we call "AI" is not a singular, novel technology but a diverse set of machine-learning applications that have evolved within a broader digital ecosystem over decades. The author introduces a framework centered on the digital ecosystem, composed of computing devices, networks, data, and software, to analyze AI governance. Instead of attempting to regulate "AI" generically, the author suggests focusing on specific problems arising from individual machine-learning applications. The paper critiques several proposed AI governance strategies, including moratoria, compute control, and cloud regulation, showing that most of them amount to controlling all components of the digital ecosystem rather than AI specifically. By shifting the focus to specific applications and their impacts, it advocates for more decentralized and effective policy solutions.

    Five important takeaways:

    - What is referred to as "artificial intelligence" is a diverse set of machine-learning applications that rely on a digital ecosystem, not a single technology.
    - "AI governance" can be practically meaningless because of the numerous, diverse, and embedded applications of machine learning in networked computing.
    - The digital ecosystem is composed of computing devices, networks, data, and software.
    - Many policy concerns now attributed to "AI" were anticipated by policy conflicts associated with the rise of the Internet.
    - Attempts to regulate "AI" as a general capability may require systemic control of digital-ecosystem components and can be unrealistic, disproportionate, or dangerously authoritarian.

    17 min
  6. 22 FEB

    George Mason University: Generative AI in Higher Education – Evidence from an Analysis of Institutional Policies and Guidelines

    Summary of https://arxiv.org/pdf/2402.01659

    This paper examines how higher education institutions (HEIs) are responding to the rise of generative AI (GenAI) tools like ChatGPT. Researchers analyzed policies and guidelines from 116 US universities to understand the advice given to faculty and stakeholders. The study found that most universities encourage GenAI use, particularly for writing-related activities, and offer guidance for classroom integration. However, the authors caution that this widespread endorsement may create burdens for faculty and overlook long-term pedagogical implications and ethical concerns. The research explores the range of institutional approaches, from embracing to discouraging GenAI, and highlights considerations related to privacy, diversity, equity, and STEM fields. Ultimately, the findings suggest that HEIs are grappling with how to integrate GenAI into education, often with a focus on revising teaching methods and managing potential risks.

    Five important takeaways:

    - Institutional embrace of GenAI: 63% of HEIs encourage GenAI use, and many provide detailed guidance for classroom integration, including sample syllabi (56%) and curriculum activities (50%). This indicates a shift toward accepting and integrating GenAI into the educational landscape.
    - Focus on writing-related activities: A notable portion of GenAI guidance focuses on writing-related activities, while STEM-related activities, including coding, are mentioned less frequently and often vaguely (50%). This suggests an emphasis on GenAI's role in enhancing writing skills and a potential gap in exploring its applications in other disciplines.
    - Ethical and privacy considerations: Over half of the institutions address the ethics of GenAI, including diversity, equity, and inclusion (52%), as well as privacy concerns (57%). Common privacy advice includes exercising caution when sharing personal or sensitive data with GenAI; discussions with students about the ethics of classroom GenAI use are also encouraged (53%).
    - Rethinking pedagogy and increased workload: Both encouraging and discouraging GenAI use implies rethinking classroom strategies and added workload for instructors and students. Institutions are providing guidance on flipping classrooms and rethinking teaching and evaluation strategies.
    - Concerns about long-term impact and normalization: Normalizing GenAI use may make its presence indiscernible, posing ethical challenges and potentially discouraging intellectual development. Institutions may also be confusing acknowledging GenAI with experimenting with it in the classroom.

    36 min
  7. 21 FEB

    Digital Education Council: Global AI Faculty Survey 2025

    Summary of https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey

    The Digital Education Council's Global AI Faculty Survey 2025 explores faculty perspectives on AI in higher education. The survey, gathering insights from 1,681 faculty members across 28 countries, investigates AI usage, its impact on teaching and learning, and institutional support for AI integration. Key findings reveal that a majority of faculty have used AI in teaching, mainly for creating materials, but many have concerns about student over-reliance and evaluation skills. Faculty also express a need for clearer guidelines, improved AI literacy resources, and training from their institutions, and the report highlights the need to redesign student assessments to address AI's impact. The survey data is intended to inform higher education leaders in their AI integration efforts and complements the DEC's Global AI Student Survey.

    The five most important takeaways:

    - Faculty have largely adopted AI in teaching, but use it sparingly. 61% of faculty report they have used AI in teaching, yet a significant majority of them indicate they use it sparingly.
    - Many faculty are concerned about students' AI literacy and potential over-reliance on AI. 83% are concerned about students' ability to critically evaluate AI output, and 82% worry that students may become too reliant on AI.
    - Most faculty feel institutions need to provide more AI guidance. 80% feel their institution's AI guidelines are not comprehensive, and a similar share report a lack of clarity on how AI can be applied in teaching.
    - A significant number of faculty are calling for changes to student assessment. 54% believe current evaluation methods require significant changes, and half believe current assignments need to be redesigned to be more AI-resistant.
    - The majority of faculty are positive about using AI in teaching in the future. 86% see themselves using AI in their teaching practices going forward, and two-thirds agree that incorporating AI into teaching is necessary to prepare students for future job markets.

    12 min
  8. 20 FEB

    Google: Towards an AI Co-Scientist

    Summary of https://storage.googleapis.com/coscientist_paper/ai_coscientist.pdf

    This paper introduces an AI co-scientist system designed to help researchers accelerate scientific discovery, particularly in biomedicine. The system employs a multi-agent architecture, using large language models to generate novel research hypotheses and experimental protocols from user-defined research goals. The AI co-scientist leverages web search and other tools to refine its proposals and provides reasoning for its recommendations. It is intended to collaborate with scientists, augmenting their hypothesis generation rather than replacing it. The system's effectiveness is validated through expert evaluations and wet-lab experiments in drug repurposing, target discovery, and antimicrobial resistance. The architecture is model agnostic and is likely to benefit from further advances in frontier and reasoning LLMs. The paper also addresses the safety and ethical considerations associated with such a system.

    Five key takeaways:

    - Multi-agent architecture: The AI co-scientist is built on Gemini 2.0, with specialized agents (Generation, Reflection, Ranking, Evolution, Proximity, and Meta-review) that work together to generate, debate, and evolve research hypotheses. A Supervisor agent orchestrates them, assigning tasks and managing the flow of information. This architecture enables a "generate, debate, evolve" approach that mirrors the scientific method.
    - Iterative improvement: The system employs a tournament framework in which competing research proposals are evaluated and ranked. The Ranking agent uses an Elo-based tournament to assess and prioritize hypotheses through pairwise comparisons and simulated scientific debates (see the sketch after this list). The Evolution agent refines top-ranked hypotheses by synthesizing ideas, using analogies, and simplifying concepts, while the Meta-review agent synthesizes insights from all reviews to optimize the other agents' performance.
    - Integration of tools and data: The AI co-scientist leverages web search, domain-specific databases, and AI models such as AlphaFold to generate and refine hypotheses, and can index and search private repositories of publications specified by scientists. It is designed to align with scientist-provided research goals, preferences, and constraints, ensuring that its outputs are relevant and plausible.
    - Validation through experimentation: The system's capabilities have been validated in three biomedical areas. In drug repurposing, it proposed candidates for acute myeloid leukemia (AML) that showed tumor inhibition in vitro. For novel target discovery, it suggested new epigenetic targets for liver fibrosis, validated by anti-fibrotic activity in human hepatic organoids. In explaining mechanisms of bacterial evolution and antimicrobial resistance, it independently recapitulated unpublished experimental results regarding a novel gene-transfer mechanism.
    - Expert-in-the-loop interaction: Scientists interact with the AI co-scientist through a natural language interface to specify research goals, incorporate constraints, provide feedback, and suggest new directions. The system can incorporate reviews from expert scientists to guide ranking and system improvements, and can be directed to follow up on specific research directions and prioritize the synthesis of relevant research.
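    To illustrate the tournament mechanic, here is a minimal sketch of Elo-style pairwise ranking over hypotheses, assuming the standard Elo update rule. The names (`Hypothesis`, `judge`) and constants are illustrative rather than from the paper, and the random judge stands in for the system's LLM-simulated scientific debates:

    ```python
    import itertools
    import random

    K = 32  # Elo sensitivity constant (assumed value)

    class Hypothesis:
        def __init__(self, text: str, rating: float = 1200.0):
            self.text = text
            self.rating = rating

    def expected_score(r_a: float, r_b: float) -> float:
        """Probability that A beats B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def update(a: Hypothesis, b: Hypothesis, a_won: bool) -> None:
        """Apply the standard Elo update after one pairwise comparison."""
        e_a = expected_score(a.rating, b.rating)
        s_a = 1.0 if a_won else 0.0
        a.rating += K * (s_a - e_a)
        b.rating += K * ((1.0 - s_a) - (1.0 - e_a))

    def judge(a: Hypothesis, b: Hypothesis) -> bool:
        """Stand-in for the simulated scientific debate between two
        hypotheses; a coin flip here, an LLM review in the real system."""
        return random.random() < 0.5

    def run_tournament(hypotheses, rounds: int = 3):
        """Run all-pairs comparisons for several rounds, then rank."""
        for _ in range(rounds):
            for a, b in itertools.combinations(hypotheses, 2):
                update(a, b, judge(a, b))
        return sorted(hypotheses, key=lambda h: h.rating, reverse=True)

    if __name__ == "__main__":
        pool = [Hypothesis(f"hypothesis {i}") for i in range(4)]
        for h in run_tournament(pool):
            print(f"{h.rating:7.1f}  {h.text}")
    ```

    The Elo scheme is attractive here because it turns noisy pairwise judgments into a stable global ranking without requiring every hypothesis to be compared against every other one equally often.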

    15 min
