
159 episodes

Cybertraps Podcast Frederick Lane & Jethro Jones
Technology
5.0 • 22 Ratings
We explore the risks arising from the use and misuse of digital devices and electronic communication tools. We interview experts in the fields of cybersafety, cybersecurity, privacy, parenting, and technology and share the wisdom of these experts with you!
Teaching Cybersecurity using Sphero with Tod Johnston Cybertraps 158
Tod Johnston, a Senior Education Content Manager at Sphero, discusses how his company uses robots to teach cybersecurity concepts to middle school students. Their robotic balls help students visualize abstract cybersecurity topics like man-in-the-middle attacks. Tod explains how they developed lessons in collaboration with cybersecurity experts to give students an initial understanding of cyber threats and how to act responsibly online. Tod hopes to expand these lessons to younger students in the future. The discussion also touches on the challenges of educating both students and adults about cybersecurity given that technology is evolving rapidly and privacy policies are often difficult to understand.
Sphero Blueprint - basics of engineering
Educators need to think about cybersecurity from a student’s perspective, rather than a technology perspective.
We should be inviting students to read the privacy policies of the tools they use so they can make better choices.
Sphero's programmable balls are good for teaching programming and algorithmic skills, but cybersecurity is always difficult to teach.
An example of a man-in-the-middle attack (see the sketch after this list)
Can’t damage other people’s property
Student in Miami-Dade who hacked the school district.
Dr. Pauline Mosley collaborated on Sphero’s curriculum
Hopes for how software and hardware should be designed in the future and what they should look like.
- How GDPR has ruined the web
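The man-in-the-middle example mentioned above can be made concrete with a short, purely illustrative Python sketch (this is not Sphero curriculum code; the names, messages, and channel functions are invented for the example): an attacker who sits between two parties can read their traffic and quietly alter it before passing it on.

```python
# A toy, offline simulation of a man-in-the-middle attack. Illustrative only:
# the parties, channel functions, and messages are made up for this example.

def send(message: str, channel) -> str:
    """Deliver a message through whatever channel connects the two parties."""
    return channel(message)

def honest_channel(message: str) -> str:
    # A trustworthy channel passes the message along unchanged.
    return message

def mitm_channel(message: str) -> str:
    # An attacker sitting "in the middle" can read the traffic...
    print(f"[attacker reads] {message}")
    # ...and silently tamper with it before passing it on.
    return message.replace("3pm", "5pm")

alice_says = "Bob, meet me at 3pm by the gym."
print("Bob receives (honest channel):", send(alice_says, honest_channel))
print("Bob receives (intercepted):   ", send(alice_says, mitm_channel))
```

The robots make the same point physically: the message looks trustworthy to the receiver because nothing reveals who handled it in transit.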
About Tod Johnston
Tod Johnston is a Senior Education Content Manager at Sphero, leveraging over 10 years of experience in classroom settings. With a focus on classroom technology, math education, STEM, and the environment, Tod applies practical teaching expertise to positively impact technology integration in schools. He also has experience as a Learning Experience Designer – designing curriculum, presenting at conferences, and researching educational technology and math education trends. He is dedicated to transforming education through innovative approaches. -
AI Policies in Schools Cybertraps 157
In Cybertraps 157, the APLUS Framework for adopting AI in schools is discussed. The framework emphasizes Accessibility, Privacy and Ethics, a Learner-centered approach, Usability, and Sustainability. The irony of principals wanting AI to assist them while trying to prevent students from doing the same is highlighted. Examples of AI policies, including a plagiarism policy, are mentioned. The importance of viewing AI as an ecosystem rather than just a tool is emphasized. The blog post “73% of what” is also referenced.
APLUS Framework for adopting AI
A - Accessibility
P - Privacy and Ethics
L - Learner-centered
U - Usability
S - Sustainability
The irony of principals asking for AI to do their jobs while simultaneously trying to find ways to prevent students from doing the same.
Example 1: Plagiarism Policy (only available in email form)
Example 2 (much better): Peninsula School District: https://docs.google.com/document/d/1zM7qJbgPc01JG5d63XSLuJnRILaKdj2vvSGFqjWrrkw/edit
AI is an ecosystem, not a tool.
“73% of what” blog post: https://world.hey.com/jason/73-of-what-80e24c13 -
Those Who Don't Read Science Fiction Are Doomed to Repeat It Cybertraps 156
In Cybertraps 156, the podcast discusses the potential dangers of AI-fitted teddy bears. These toys have the ability to read children personalized bedtime stories using private information they have overheard. The episode highlights a news item that warns about the privacy concerns associated with these “scary” gadgets.
Harry Harrison – “I Always Do What Teddy Says” – https://www.deviantart.com/aegiandyad/art/I-Always-Do-What-Teddy-Says-259013944
Jethro’s Copy from the book: https://www.dropbox.com/s/brrf8hkt3zbm2l8/I%20always%20do%20what%20Teddy%20Says.pdf?dl=0
Amazon link: https://amzn.to/44JeusZ
News Item: “Beware of the AI-fitted teddy bears: ‘Scary’ gadgets could read children personalised bedtime stories using private details they have overheard, leading toymaker claims” – https://www.dailymail.co.uk/sciencetech/article-12233551/ChatGPT-style-teddy-bears-read-bedtime-stories-toymaker-claims.html (scroll down for story) -
Bot's Up, Doc? with Jethro Jones Cybertraps 155
In this episode, Jethro and Fred discuss chatbots and artificial intelligence. The episode covers the history of chatbots, including the Turing Test and the development of Large Language Models (LLMs) such as ChatGPT, Bing, and Jasper. The potential uses and issues with chatbots are explored, including incomplete information or misinformation, theft of intellectual property, inappropriate uses, and threats to various types of jobs. The episode also touches on the impact of chatbots on education and the potential for weaponization of disinformation, cybersecurity, and more emotion-targeted advertising.
Beginning of Cybertraps Podcast Episode Index
Writebettr.com - test out AI with your poorly written emails
AILeader.info - learn about AI and how to use it to save time with 3-minute masterclasses.
Today’s Topic: Bot’s Up, Doc?
Keynote delivered at the last minute for the Alaska Society for Technology in Education
Artificial Life and Artificial Intelligence
Why chatbots are NOT “artificial intelligence” – yet
“The Father of Chat”
The Turing Test
Alan Turing OBE FRS [1912–1954] – British mathematician and computer scientist
Leader in development of computer and algorithmic theory
At Bletchley Park, helped design a machine to crack the Enigma code
1950 – Turing devises The Turing Test:
Can a computer produce answers indistinguishable from a human?
The Imitation Game
1954 – Turing commits suicide
Large Language Models (LLMs)
ChatGPT (esp. 4)
Bing
Jasper
embedded AI
Photoshop
Google Workspace
incredibly rapid change
Current ChatGPT Issues
Incomplete Data or Misinformation
Theft of Intellectual Property
Inappropriate Uses
Response to MSU Shooting
Threat to Various Types of Jobs
Mid- to Lower-Level Tech
Media / PR Professionals
Customer Service
Paralegals / Attorneys?
Religious Leaders?
Monetization
A Quick Object Lesson
Censorship Is a Biz-Kill
China Was a Tech Leader in the 2010s
WeChat
AliPay
Beijing (CCP) Got Nervous
Party Officials Took Corporate Seats
Goal Was to Limit Social Influence
Chinese Tech Companies Slashed Investment in Pure Research
ChatGPT and Education
A Flawed Resource for Students
Incomplete Information
Misinformation
Kids Will Use Technology to Cheat
Not the First Time …
Several Schools Have Had Cheating Scandals
NYC Blocked, then Unblocked, Access to ChatGPT
Responses and Solutions
Tools for Identifying Chat-Generated Content (see the sketch below)
Incorporate Chat Critiques into Curricula
The Revenge of the Palmer Method?
Create Assessments that ChatGPT Can’t Answer
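As one way to make the "tools for identifying chat-generated content" item above concrete, here is a minimal sketch of a signal many detectors lean on: text that a language model finds very predictable (low perplexity) is somewhat more likely to be machine-generated. This is not the method of any specific product mentioned in the episode; the model choice (gpt2), the sample text, and any threshold you might apply are assumptions for illustration, and perplexity alone is an unreliable classifier.

```python
# A rough perplexity check: lower perplexity means the model found the text more
# predictable, which is one (weak) hint that it may be machine-generated.
# Model name and sample text are illustrative assumptions, not a real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean token cross-entropy
    return float(torch.exp(out.loss))

essay = "The causes of the American Revolution were complex and varied."
print(f"Perplexity: {perplexity(essay):.1f}")
```

Real tools combine several signals, and false positives are common enough that a flag should start a conversation with the student rather than settle the question.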
The Parade of Horribles
Weaponization of Disinformation
Cybersecurity
Social Engineering
Scams and Spams
Manipulative Suggestions
Integration with Other Technologies
More Emotion-Targeted Advertising
Displaced Emotional Relationships
Personalized Chatbot (“Amanuensis”)
Fasten Your Seat Belt. It’s Going to Be a Bumpy Night. -
Armies of Enablers with Amos Guiora Cybertraps 154
In this episode, Fred Lane interviews Amos Guiora, a law professor at the University of Utah. The bulk of the interview centers on Professor Guiora’s recently published book, "Armies of Enablers: Survivor Stories of Complicity and Betrayal in Sexual Assaults". In his book, Professor Guiora attempts to answer a difficult question:
“What do sexual assault survivors expect of the enabler-bystander? In this powerful book, Amos N. Guiora shares the stories of survivors to expose how individual and institutional enablers allow predators to perpetrate their crimes through silence and other failures to act. He then proposes legal, cultural, and social measures aimed at the enabler from the survivor’s perspective.”
In addition to his work at the University of Utah law school, Professor Guiora has been active in S.E.S.A.M.E., the organization led by Terri Miller that is working to end the so-called “passing of the trash.”
Frederick Lane is an author, attorney, educational consultant, and lecturer based in Brooklyn, NY. He is the co-founder of The Center for Cyberethics and is a nationally-recognized expert in the areas of cybersafety, digital misconduct, personal privacy, and other topics at the intersection of law, technology, and society. Lane has appeared on “The Daily Show with Jon Stewart,” CNN, NBC, ABC, CBS, the BBC, and MSNBC.
He has written ten books, including most recently Cybertraps for Educators 2.0 (2020), Raising Cyberethical Kids (2020), and Cybertraps for Expecting Moms & Dads (2017). He is currently working on his newest book, The Rise of the Digital Mob (Beacon Press 2022). All of his books are available on Amazon.com or through his Web sites, FrederickLane.com and Cybertraps.com.
With Jethro Jones (The Transformative Principal), Lane co-hosts “The Cybertraps Podcast.” He is also the publisher of “The Cybertraps Newsletter” (newsletter.cybertraps.com). -
Ghana Update and the Growing Problem of Deepfakes Cybertraps 153
Update from Ghana
#2023-03-13_1100 Meeting with the Cybercrime Unit of the Ghana Police Service
#2023-03-14_1200 Visit to 5/6 classroom at Primus Hybrid School
#2023-03-19_1400 Pan-Africa webinar for parents
How can parents and carers monitor their children’s online activity without infringing on their privacy?
What are the long-term effects of excessive technology use on children’s mental and physical health?
How can parents and carers stay informed about new technology trends and potential risks?
What should parents and carers do if they suspect their child is being cyberbullied or harassed online?
How can parents and carers effectively communicate with their children about technology use without creating conflict or tension?
How can parents and carers address their own technology use and set a good example for their children?
What is the role of peer pressure and social media in shaping children’s online behavior, and how can parents and carers help children navigate these pressures?
How can technology be used to enhance learning and development for children, and what are some best practices for incorporating technology into education?
How can parents and carers help children build healthy relationships with their devices and encourage offline activities and hobbies?
What is the role of technology companies and platforms in promoting safe and responsible technology use, and how can parents and carers hold them accountable?
The Growing Problem of Deepfakes
News Item: New York students create a deepfake video of middle school principal saying racist things - https://www.washingtonpost.com/nation/2023/03/14/racist-deepfakes-carmel-tiktok/
Details
The target of the malicious attack was George Fischer Middle School
In late January or early February, multiple videos were released on TikTok, with a male voice laid over videos of Principal John Piscitella
The voiceovers contained racist statements and threats of violence
TikTok quickly took the videos down, but not before they were seen by multiple students
Carmel Central School District sent out a letter on February 13, 2023, alerting parents to the videos and saying that three high-schoolers had “used artificial intelligence to impersonate the staff” and made them appear to make “inappropriate comments” in videos
The school did not describe the videos, nor did it specifically mention the racist comments or threats of violence
Simultaneously, local police closed their investigation after determining that no crime had been committed
The District defended its response to angry parents, saying that “they were trying to balance disclosing sensitive information without generating panic”
But parents accused the District of minimizing the videos
The videos raise many issues, the most controversial being:
Racism
Student Privacy
The Use and Abuse of Technology (particularly AI)
Threats of Gun Violence
Disciplinary action was taken against three students, but the District refused to say what action was taken
Analysis
Schools need to be more transparent about the nature of incidents like these
We may need to consider the cost of student privacy
These were relatively crude deepfake videos; the technology exists now to make much more convincing videos
Additional Resources
#2023-03-09 Principal appears to spew racist threats in disturbing video — but it never actually happened - https://www.msn.com/en-us/news/us/principal-appears-to-spew-racist-threats-in-disturbing-video-%E2%80%94-but-it-never-actually-happened/ar-AA18qImu
#2023-03-08 High Schoolers Made a Racist Deepfake of a Principal Threatening Black Students - https://www.vice.com/en/article/7kxzk9/school-principal-deepfake-racist-video
#2023-03-02 TikTok videos threatening Black students have Carmel parents on edge, district promising change - https://www.lohud.com/story/news/education/2023/03/02/racist-tiktok-
Customer Reviews
Fellow parents, this one is a must.
I love this podcast. Valuable information for parents, educators, and those who don't know what they don't know about the internet. Looking forward to more!
Game-changer
As a mindset researcher, I have learned that our success in life is based upon our mental habits. And, technology is playing a huge role on our mental habits. This podcast has helped me be more intentional about using technology to enforce positive mental habits, and has helped me learn how to limit tech's negative effect on my mental habits.
Important topic
Very important topic here. Glad to see that you're considering this from so many different angles. This is valuable. Well done!