28 min

Fighting Financial Fraud When the Bad Guys Are Armed With AI The PaymentsJournal Podcast

    • Business News

As fraud powered by artificial intelligence (AI) becomes increasingly sophisticated and accessible, many legacy lines of defense can no longer effectively protect financial institutions and their customers. Financial institutions need to take a more proactive approach to fraud. By collecting and analyzing real-time data and using AI to identify patterns, FIs can quickly detect suspicious activity and clamp down on fraud.
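The pattern-based detection described above can take many forms in practice. As a minimal illustration (not PSCU's or any specific vendor's method), the sketch below flags a transaction whose amount deviates sharply from an account's recent history using a simple z-score rule; real systems combine many more signals and models:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from an
    account's recent history. Illustrative z-score rule only;
    production fraud models use far richer features."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Typical small card purchases, then a sudden large transfer:
history = [42.10, 18.75, 55.00, 23.40, 61.20, 30.00]
```

With this history, `flag_suspicious(history, 5000.00)` returns `True`, while a routine `flag_suspicious(history, 40.00)` returns `False`. The point is simply that a real-time baseline per account lets even a crude rule surface anomalies the moment they occur.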







Karen Postma, Senior Vice President of Risk Solutions at PSCU/Co-op Solutions, has long been a leader in detecting and deterring financial fraud. In a recent PaymentsJournal podcast, she sat down with Jennifer Pitt, Senior Analyst in Javelin Strategy & Research’s Fraud and Security practice, to discuss the nature of the latest attacks against credit unions and their members as well as the scourge of first-party fraud.  










The Old Rules Don’t Apply







Consumers have learned that if an email doesn't sound quite right or contains suspicious punctuation or misspellings, it may not be legitimate. However, fraudsters are now leveraging generative AI tools like ChatGPT to craft messages that read like normal correspondence rather than obvious phishing attempts.







“We can no longer tell consumers to look for those basic things like spelling errors, grammar errors,” Pitt said. “We need to be better at giving more generic advice to consumers about emails. If you’re not intending to get this email, if you don’t know the sender, don’t answer it. Instead, contact the company directly yourself.”

Another way non-technical individuals use AI is through a tool called WormGPT, which writes code or malware with fraudulent intent on demand.







“I don’t have a technical background, but I could leverage these tools to create malware that I could embed in a phishing email or in other content to put keyloggers on a consumer’s computer or other device,” Postma said. “That’s probably one of the most unnerving components of AI utilization by cybercriminals.”







Fraudsters are also using AI to target employees at large companies. Several recent data breaches that Postma has seen began as phishing campaigns aimed at high-level employees; once those credentials are stolen, an entire company can be compromised.







AI is also being leveraged to defeat identity verification and circumvent know-your-customer (KYC) protocols through voice, photo, and video deepfakes.
