News & Research Roundup 28 March AI Education Podcast

This is the season-ending episode for Series 7, and the fifteenth in a series that started on 1st November last year with the "Regeneration: Human Centred Educational AI" episode. And, unbelievably, it's the 87th episode of the podcast (which started in September 2019).
When we come back with Series 8 after a short break for Easter, we're going to take a deeper dive into two specific use cases for AI in education. The first is Assessment, where AI creates both a threat and an opportunity. The second is AI Tutors, where the focus is on how we can use the technology to improve learning support for students.
This episode looks at one key news announcement - the EU AI Act - and a dozen new research papers on AI in education.
News
EU AI Act
https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
The European Parliament approved the AI Act on 13 March, and it contains plenty that would make good practice guidance. If you're developing AI solutions for education, and there's a chance that one of your customers or users might be in the EU, then you're going to need to follow these laws (just as GDPR is an EU law but effectively applies globally if you're actively offering a service to EU residents).
The Act bans some uses of AI that threaten citizens' rights - such as social scoring and mass biometric identification (things like untargeted facial scanning of CCTV or internet content, emotion recognition in the workplace or schools, and AI built to manipulate human behaviour) - and for everything else it relies on regulation by risk category.

High Risk AI systems have to be assessed before being deployed and throughout their lifecycle.
The High Risk category includes critical infrastructure (like transport and energy), product safety, law enforcement, justice and democratic processes, employment decision making - and Education. So AI used for decision making in education needs full risk assessments, usage logs, transparency and accuracy - and human oversight. Examples of decision making that would be covered include exam scoring, student recruitment screening, and behaviour management.
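To make those obligations concrete, here's a minimal sketch in Python of what usage logging and human oversight might look like for a hypothetical AI exam scorer. Every name and the scoring stub are invented for illustration; the Act defines the obligations, not the implementation.

```python
# Hypothetical sketch: auditable logging + human-in-the-loop for AI exam scoring.
import json
import logging
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

logging.basicConfig(filename="usage.log", level=logging.INFO, format="%(message)s")

def model_score(answer_text: str) -> float:
    """Placeholder for a real scoring model (an assumption for this sketch)."""
    return min(100.0, len(answer_text) * 0.5)

@dataclass
class ScoreDecision:
    student_id: str
    ai_score: float
    model_version: str
    timestamp: str
    human_reviewed: bool = False
    final_score: float | None = None

def score_exam(student_id: str, answer_text: str) -> ScoreDecision:
    decision = ScoreDecision(
        student_id=student_id,
        ai_score=model_score(answer_text),
        model_version="demo-0.1",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    logging.info(json.dumps(asdict(decision)))  # usage log, kept for audit
    return decision

def finalise(decision: ScoreDecision, reviewer_score: float) -> ScoreDecision:
    # Human oversight: the AI score is advisory; a person sets the final mark.
    decision.human_reviewed = True
    decision.final_score = reviewer_score
    logging.info(json.dumps(asdict(decision)))
    return decision

if __name__ == "__main__":
    d = score_exam("s-001", "Photosynthesis converts light energy into chemical energy.")
    finalise(d, reviewer_score=72.0)
```

The design point is that the AI output never becomes a final decision on its own: every score is logged in a reviewable form, and release of a mark depends on a human step.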
General-purpose generative AI - like ChatGPT or copilots - will not be classified as high risk, but it will still have obligations under the Act: clear labelling of AI-generated image, audio and video content; safeguards so it can't generate illegal content; and disclosure of what copyrighted data was used for training.
But although general-purpose AI may not itself be classified as high risk, if you use it to build a high-risk system - like an automated exam marker for end-of-school exams - then that system will be covered under the high-risk category.
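The labelling obligation can also be made concrete. Below is a minimal sketch of attaching a machine-readable "AI generated" label to an image using Pillow's PNG text metadata; the key names are assumptions for illustration, since the Act requires clear labelling but doesn't prescribe a mechanism (real deployments are more likely to use a standard such as C2PA content credentials).

```python
# Hypothetical sketch: tagging an image as AI-generated via PNG text metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, model_name: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # key names are made up for this sketch
    meta.add_text("generator", model_name)
    image.save(path, pnginfo=meta)

def is_labelled_ai(path: str) -> bool:
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image
    save_with_ai_label(img, "output.png", model_name="example-image-model")
    print(is_labelled_ai("output.png"))  # True
```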
All of this is likely to become law by the middle of the year; prohibited AI systems will be banned by the end of 2024, and the rules for other AI systems will start to apply from mid-2025.
Research
Another huge month. I spent the weekend reviewing a list of 350 new papers on Large Language Models, ChatGPT and more, published in the first two weeks of March, to find the ones that are really interesting for the podcast.

Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges arXiv:2401.08664
 
A Study on Large Language Models' Limitations in Multiple-Choice Question Answering arXiv:2401.07955
 
Dissecting Bias of ChatGPT in College Major Recommendations arXiv:2401.11699
 
Evaluating Large Language Models in Analysing Classroom Dialogue arXiv:2402.02380 
 
The Future of AI in Education: 13 Things We Can Do to Minimize the Damage https://osf.io/preprints/edarxiv/372vr
 
Scaling the Authoring of AutoTutors with Large Language Models https://arxiv.org/abs/2402.09216
 
Role-Playing Simulation Games using ChatGPT h
