9 episodes

If you are looking for new and interesting ways to think about security, this is the podcast for you.
Co-hosts Arianna and Claudine talk to cyber security researchers about the difficult (and very real) problems they are trying to solve. From online hate to hacking voice assistants with nonsense words, we showcase thinkers and doers from the Centre for Doctoral Training in Cyber Security at the University of Oxford.
Here at PTNPod we keep the main message positive and proactive. Come and join us as we swan about with friends in a limited episode run for Spring 2022.

With thanks to our creative team, artist Audrey Tinsman and composer Sean Sirur.

Proving the Negative (PTNPod): Swanning About in Cyber Security
Oxford University

    • Education


    Power to the Ppl

    Data protection and making consent more of a conversation. Listen up, and prosper! This week we're talking about Ari's research into why experts need to build (or architect) systems with better consent options, so users have more of a choice about the data they share.

    Ari's academic work relates to data protection. She argues that the burden of communication (i.e. of informing) falls on experts, who must communicate clearly and concisely, rather than on users, who should not have to decipher dense technical language. Ari has collaborated with computer scientists, lawyers and medical researchers. She also teaches in and out of the classroom, running classes, workshops, cyber-policy competitions, crisis simulations and hackathons. Outside of this, Ari has led cyber security work within UK innovation testbeds, focusing on secure and trustworthy information exchange for next-generation telecommunications. Her ethos is that cyber security is strategic, that it is a business enabler, and that the best cyber security cultures are positive and proactive.

    You can find Ari on Twitter: @schulite
    If you want to read more about Royal Free and Google: https://medconfidential.org/whats-the-story/health-data-ai-and-google-deepmind

    Recent outputs: What societal values will 6G address? (6G-IA working group: https://6g-ia.eu/single_post/?slug=6g-ia-white-paper-what-societal-values-will-6g-address-societal-key-values-and-key-value-indicators-analysed-through-6g-use-cases); Tabitha L. James, Jennifer L. Ziegelmayer, Arianna Schuler Scott and Grace Fox (2021). A Multiple-Motive Heuristic-Systematic Model for Examining How Users Process Android Data and Service Access Notifications, doi: 10.1145/3447934.3447941; Arianna Schuler Scott, Michael Goldsmith & Harriet Teare (2019). Wider Research Applications of Dynamic Consent, doi: 10.1007/978-3-030-16744-8_8

    • 20 min
    Voice Hackers R Us

    We’re learning about speech interfaces and hacking home assistants with nonsense and wordplay. This week we're talking with Mary about speech interface attacks (how hackers can turn your voice assistant against you with utter nonsense), pentesting and space robots!
    Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales; http://creativecommons.org/licenses/by-nc-sa/2.0/uk/

    • 22 min
    We are what we do

    What if, instead of passwords, our gadgets used high fives to log us in? This week we're talking with Klaudia about behavioural biometrics, usable security and how hackers might try to mimic gestures and body language!
    Klaudia is a doctoral student at the Centre for Doctoral Training in Cyber Security at the University of Oxford and the recipient of the Women Techmakers scholarship 2019. Her research focuses on leveraging the heterogeneity of IoT devices to improve the security of smart environments. She graduated from the NordSecMob programme in 2017, obtaining a master's degree in Security and Mobile Computing.
    Recent papers: Krawiecka, K. et al (2022). Biometric Identification System based on Object Interactions in Internet of Things Environments; Krawiecka, K. et al (2021). Plug-and-Play: Framework for Remote Experimentation in Cyber Security. In European Symposium on Usable Security 2021 (pp. 48-58). doi: 10.1145/3481357.3481518
    Check out the Perspektywy Women in Tech Summit 2022: https://womenintechsummit.pl

    • 23 min
    Make or Break

    Join us as we explore how to describe trust, reputation and messiness using maths! This week we're talking with Sean about how she translates the chaos of human relationships and interaction into precise, machine-readable descriptions. We learn why networks are useful for mapping out who (or what) is sharing information and building reputation.

    Sean’s main interest lies in describing social phenomena with mathematical and computational concepts. Her current focus is on trust (interactions with potentially risky parties) and reputation (sharing opinions on how risky a party is). Primarily, she studies how delays in information sharing can be exploited by malicious parties and how to prevent this. Personal pages: https://se-si.github.io; https://www.cs.ox.ac.uk/people/sean.sirur

    Recent papers: Properties of Reputation Lag Attack Strategies (2022, https://dl.acm.org/doi/abs/10.5555/3535850.3535985); Cooperation and distrust in extra-legal networks: a research note on the experimental study of marketplace disruption (2022, https://doi.org/10.1080/17440572.2022.2031152); Simulating the Impact of Personality on Fake News (2021, https://api.semanticscholar.org/CorpusID:244731942); The Reputation Lag Attack (2019, https://link.springer.com/chapter/10.1007/978-3-030-33716-2_4).

    • 20 min
    Rethinking Risk

    Sometimes threats come from inside the system (content warning: intimate partner violence). This week we're talking with Julia about how to model risk when the threat is known and trusted (e.g., coercion, manipulation and surveillance).
    Julia Slupska is a doctoral student at the Centre for Doctoral Training in Cybersecurity and the Oxford Internet Institute. Her research focuses on technologically-mediated abuse like image-based sexual abuse ('revenge porn') and stalking, as well as emotion, care and metaphors in cybersecurity.
    Julia's recent papers: Aiding Intimate Violence Survivors in Lockdown: Lessons about Digital Security in the Covid-19 Pandemic (https://dl.acm.org/doi/abs/10.1145/3491101.3503548); Cybersecurity must learn from and support advocates tackling online gender-based violence (https://unidir.org/commentary/cybersecurity-online-GBV).
    In this episode we refer to: Ashkan Soltani's article on what 'abusability' is (https://www.wired.com/story/abusability-testing-ashkan-soltani); an example of intimidation during a court hearing held over Zoom (https://www.youtube.com/watch?v=xgz3Tx69zXk); the Trust and Abusability Toolkit (https://northumbria.design/projects/trust-and-abusability-toolkit); RightsCon (FREE tickets until June 3rd 2022, rightscon.org/attend); and the USENIX conference on Privacy Engineering Practice and Respect (PEPR).

    • 19 min
    The Kids aren’t OK

    Designing and building apps to protect children and young people from data harms. This week we're talking with Anirudh about how app design may affect children, and how data collected from apps could be putting kids at risk.

    Anirudh is a cyber security researcher at the University of Oxford and he is part of the Human Centred Computing group in the Computer Science department. In his research he focuses on children's privacy and data tracking in the mobile ecosystem.

    We mention the age-appropriate design code; here is some guidance on it from the UK Information Commissioner’s Office (ICO): https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-code

    • 20 min
