210 - Adversarial Misuse of Generative AI

YusufOnSecurity.com

Enjoying the content? Let us know your feedback!

As AI-generated content becomes more advanced, the risk of adversarial misuse, where bad actors manipulate AI for malicious purposes, has skyrocketed. But what does this mean in practical terms? What risks do we face, and how is one of the big players addressing them? Stick around as we break down Google's Adversarial Misuse of Generative AI report, explain the key jargon, and bust a cybersecurity myth at the end of the show.

Before we get into the main topic, let's have a look at one important news update, and that is:

  • Microsoft has expanded its Windows 11 administrator protection tests

- https://cloud.google.com: Adversarial Misuse of Generative AI

- https://deepmind.google: Mapping the misuse of generative AI

- https://learn.microsoft.com: User Account Control overview

- https://learn.microsoft.com: How User Account Control works

Be sure to subscribe!
You can also stream from https://yusufonsecurity.com
There, you will also find a list of all previous episodes.
