Sec Guy

Sec Guy

Real cybersecurity training for the real world. We take the complex theories of CompTIA Security+ and SecAI+ and translate them into actionable skills. Whether you're fighting off Prompt Injection attacks or just fighting to get your first IT job, the Sec Guy has your back. Join us for deep dives into AI Security, Network Defense, and the future of cyber. Train Hard. Stay Secure.

  1. SecAI+ Domain 3.3: The AI Analyst (Blue Team Tools, MCP & Co-Pilot)

    2 FEB

    SecAI+ Domain 3.3: The AI Analyst (Blue Team Tools, MCP & Co-Pilot)

    We have seen the weapons (Video 10). Now, let’s look at the shields. Welcome to Domain 3: AI-Assisted Security. In this video (Objective 3.3), we switch to the Blue Team. We are breaking down the "AI Co-Pilot" stack, the new hardware you need to know for the exam, and the critical standard that connects AI to your internal data without causing a leak. In this video, we cover: The AI Co-Pilot: IDE vs. CLI Plugins (GitHub Copilot vs. Terminal Assistants). Critical Exam Term: Model Context Protocol (MCP)—The standard for connecting AI to secure internal servers. Analysis Tools: Vulnerability Analysis, Anomaly Detection, Summarization, and Real-Time Translation. Hardware: NVIDIA Jetson Nano Orin (Edge AI) and Vector Databases. Privacy: Using Ollama to run local LLMs and prevent data leaks. Timecodes: 0:00 - Intro: Switching to the Blue Team 0:25 - The AI Co-Pilot (IDE vs. CLI Plugins) 1:00 - CRITICAL TERM: Model Context Protocol (MCP) 1:30 - Analysis Tools: Vuln Scans & Translation 2:15 - Anomaly Detection & Vector Databases 2:38 - Edge AI Hardware: NVIDIA Jetson Nano Orin 3:02 - Threat Hunting with Neo4j Graph Database 3:22 - Privacy Tools: Ollama & Local LLMs 3:45 - What’s Next: Automation & SOAR (Video 12) ​📚 Resources & Support ​🎓 FREE Interactive Learning Tools Don't just watch—practice. Access our new browser-based tools to test your skills live. ​AI-Powered Exam Simulators: https://secguy.org/exam-simulators ​Python for Security Labs: https://secguy.org/python-practice ​Mock Interview Board: https://secguy.org/mock-interview 💬 Join the Squad Connect with other industry veterans and students in our new dedicated study group. ​Official Discord: https://secguy.org/discord-chat ​📚 Download Course Materials Get the SecAI+ Cheat Sheet (including the MCP Architecture Diagram & Jetson specs) and full course slides directly from the academy. ​Access Here: https://secguy.org/courses #SecAI #CompTIA #BlueTeam #CyberDefense #MCP #ModelContextProtocol #Ollama #JetsonNano #CoPilot #Cybersecurity

    4 min
  2. SecAI+ Domain 3.1: The AI Analyst (Blue Team Tools, MCP & Co-Pilot)

    2 FEB

    SecAI+ Domain 3.1: The AI Analyst (Blue Team Tools, MCP & Co-Pilot)

    ​🛡️ Domain 3: AI-Assisted Security (Objective 3.1) ​We’ve analyzed the weapons in Domain 2—now it’s time to deploy the shields. Welcome to the Blue Team. ​In this video, we break down the "AI Co-Pilot" stack and the defensive tools you need to master for the SecAI+ exam. From the hardware powering Edge AI to the critical protocols that secure internal data, this is your crash course in AI-assisted defense. ​🚀 What We Cover in This Video: ​The AI Co-Pilot Stack: Understanding the difference between IDE plugins (GitHub Copilot) and CLI Terminal Assistants. ​CRITICAL Exam Concept: The Model Context Protocol (MCP)—the industry standard for connecting AI models to secure internal servers without risking data leaks. ​Defensive Analysis: leveraging AI for vulnerability scanning, anomaly detection, automated summarization, and real-time translation. ​Hardware & Architecture: A look at NVIDIA Jetson Nano Orin (Edge AI) and how Vector Databases power modern security tools. ​Threat Hunting: visualizing threats with Neo4j Graph Databases. ​Data Privacy: How to use Ollama to run local LLMs, ensuring your sensitive data never leaves the network. ​⏱️ Timecodes ​0:00 - Intro: Switching to the Blue Team ​0:25 - The AI Co-Pilot (IDE vs. CLI Plugins) ​1:00 - CRITICAL TERM: Model Context Protocol (MCP) ​1:30 - Analysis Tools: Vuln Scans & Translation ​2:15 - Anomaly Detection & Vector Databases ​2:38 - Edge AI Hardware: NVIDIA Jetson Nano Orin ​3:02 - Threat Hunting with Neo4j Graph Database ​3:22 - Privacy Tools: Ollama & Local LLMs ​3:45 - What’s Next: Automation & SOAR (Video 12) ​📚 Resources & Support ​🎓 FREE Interactive Learning Tools Don't just watch—practice. Access our new browser-based tools to test your skills live. ​AI-Powered Exam Simulators: https://secguy.org/exam-simulators ​Python for Security Labs: https://secguy.org/python-practice ​Mock Interview Board: https://secguy.org/mock-interview ​💬 Join the Squad Connect with other industry veterans and students in our new dedicated study group. ​Official Discord: https://secguy.org/discord-chat ​📚 Download Course Materials Get the SecAI+ Cheat Sheet (including the MCP Architecture Diagram & Jetson specs) and full course slides directly from the academy. ​Access Here: https://secguy.org/courses ​Next Up: Domain 3.3: The AI Automator (SOAR & Agents) ​#SecAI #CompTIA #BlueTeam #CyberDefense #MCP #ModelContextProtocol #Ollama #JetsonNano #CoPilot #Cybersecurity

    4 min
  3. Domain 3.2: The AI Offensive (Red Team, Deepfakes & Malware)

    2 FEB

    Domain 3.2: The AI Offensive (Red Team, Deepfakes & Malware)

    We have talked about how to hack an AI. Now, let’s talk about when the AI becomes the hacker. Welcome to Domain 3: AI-Assisted Security. In this video (Objective 3.2), we switch to the Red Team. We are breaking down exactly how attackers weaponize LLMs to scale social engineering, clone voices for "Vishing," and generate polymorphic malware that evades traditional antivirus. In this video, we cover: Identity Attacks: Deepfakes, Impersonation, and Social Engineering at Scale. Infrastructure Attacks: Automated Reconnaissance, Attack Vector Discovery, and AI-Enhanced DDoS. Payloads: Polymorphic Code, Obfuscation, and Adversarial Malware Generation. Hardware: Why GPUs are required for Password Cracking (PassGAN). Timecodes: 0:00 - Intro: The AI Offensive (Domain 3) 0:42 - Social Engineering & Personalized Phishing 1:05 - Voice Cloning & Vishing (The 3-Second Rule) 1:38 - Automated Recon & Attack Vector Discovery 2:05 - AI-Enhanced DDoS (Traffic Shaping) 2:28 - Writing Malware & Polymorphic Code (Obfuscation) 3:05 - Hardware: GPUs & Password Cracking (PassGAN) 3:35 - What’s Next: The Blue Team (Video 11) ​📚 Resources & Support ​🎓 FREE Interactive Learning Tools Don't just watch—practice. Access our new browser-based tools to test your skills live. ​AI-Powered Exam Simulators: https://secguy.org/exam-simulators ​Python for Security Labs: https://secguy.org/python-practice ​Mock Interview Board: https://secguy.org/mock-interview ​💬 Join the Squad Connect with other industry veterans and students in our new dedicated study group. ​Official Discord: https://secguy.org/discord-chat ​📚 Download Course Materials Get the SecAI+ Cheat Sheet (including the MCP Architecture Diagram & Jetson specs) and full course slides directly from the academy. ​Access Here: https://secguy.org/courses Next Video: Domain 3.1: The AI Analyst (Blue Team Defense) #SecAI #CompTIA #Cybersecurity #RedTeam #Deepfakes #Malware #EthicalHacking #PassGAN #AIsecurity

    5 min
  4. CompTIA SecAI+ Domain 2.5: Blue Team Defense & AI Guardrails

    2 FEB

    CompTIA SecAI+ Domain 2.5: Blue Team Defense & AI Guardrails

    Port 443 is always open, traffic is encrypted, and the attack looks like valid English. You cannot fix AI security with a traditional firewall. In this episode of the SecAI+ Course, we enter Domain 3: Blue Team Operations. We are building the "AI Shield"—the new defense stack required to protect Large Language Models from injection, sponge attacks, and data leakage. 🔥 Topics Covered: * Input Validation: Prompt Firewalls & Sanitization (NVIDIA NeMo, LangChain) * Rate Limiting: Defending against Sponge Attacks * Output Filtering: Preventing Data Leakage & Insecure Code * C2PA: The new standard for Content Provenance & Authenticity * Modern SIEM/SOAR: Using UEBA to detect anomalies * Federated Learning: Training models without moving private data 🎓 Pass the Exam: ​📚 Resources & Support ​🎓 FREE Interactive Learning Tools Don't just watch—practice. Access our new browser-based tools to test your skills live. ​AI-Powered Exam Simulators: https://secguy.org/exam-simulators ​Python for Security Labs: https://secguy.org/python-practice ​Mock Interview Board: https://secguy.org/mock-interview ​💬 Join the Squad Connect with other industry veterans and students in our new dedicated study group. ​Official Discord: https://secguy.org/discord-chat ​📚 Download Course Materials Get the SecAI+ Cheat Sheet (including the MCP Architecture Diagram & Jetson specs) and full course slides directly from the academy. ​Access Here: https://secguy.org/courses #CompTIA #SecAI #Cybersecurity #BlueTeam #AIsecurity #C2PA #SecGuy

    4 min
  5. CompTIA SecAi+ Domain 2.4: Model Theft, Model DOS, Excessive Agency, Insecure Output Handling

    2 FEB

    CompTIA SecAi+ Domain 2.4: Model Theft, Model DOS, Excessive Agency, Insecure Output Handling

    Master Prompt Injection & Jailbreaking for the CompTIA SecAI+ (Domain 2). In this lesson, we break down the most dangerous (and fun) part of AI Security: Input Attacks. Your firewall stops traffic. It does not stop words. 🛡️🚫 OWASP LLM 1, 3, 5, 10 are covered previous video link here: https://youtu.be/d4zx2amlnvU In Part 2 of our Domain 2 Deep Dive, we cover the "Context Mixing" flaw that makes all LLMs vulnerable. We explain how attackers use Prompt Injection to turn helpful chatbots into "Confused Deputies" that attack their own users. We break down the critical difference between standard Jailbreaking (Roleplaying/DAN) and the mathematical magic of Universal Adversarial Triggers (UATs). But the scariest attack isn't when you talk to the AI—it's when the AI reads a file you didn't check. We demonstrate Indirect Prompt Injection and how a simple resume PDF can hack your recruiting bot. 🎓 What You Will Learn: 🧠 Context Mixing: Why LLMs fundamentally cannot distinguish between "Safe Data" and "Malicious Instructions." 🔓 Jailbreaking Types: The difference between "DAN" (Roleplaying) and "Logical Bypasses" (Translation exploits). 🔢 Universal Adversarial Triggers (UATs): The "magic words" (nonsense strings) that break models mathematically using Gradient Ascent. 🕵️‍♂️ Indirect Prompt Injection: The invisible attack inside PDFs and websites (Zero-Click exploits). 📦 Token Smuggling: Using Payload Splitting to sneak malware concepts past the WAF. 💸 Wallet Exhaustion: How Recursive Loops drain your bank account (Denial of Wallet). ​📚 Resources & Support ​🎓 FREE Interactive Learning Tools Don't just watch—practice. Access our new browser-based tools to test your skills live. ​AI-Powered Exam Simulators: https://secguy.org/exam-simulators ​Python for Security Labs: https://secguy.org/python-practice ​Mock Interview Board: https://secguy.org/mock-interview ​💬 Join the Squad Connect with other industry veterans and students in our new dedicated study group. ​Official Discord: https://secguy.org/discord-chat ​📚 Download Course Materials Get the SecAI+ Cheat Sheet (including the MCP Architecture Diagram & Jetson specs) and full course slides directly from the academy. ​Access Here: https://secguy.org/courses ⏳ Timestamps: 00:00 - The "Context Mixing" Flaw 01:05 - Context Switching & The System Prompt 01:45 - Type 1 & 2: Roleplaying (DAN) & Logical Bypasses 02:40 - Type 3: Universal Adversarial Triggers (UATs) 03:30 - Indirect Prompt Injection (The Resume Hack) 04:55 - Token Smuggling & Payload Splitting 05:35 - Wallet Exhaustion & Recursive Loops 06:25 - Homework: Glitch Tokens #SecAIplus #CompTIA #PromptInjection #Jailbreak #RedTeam #AIsecurity #EthicalHa

    4 min
  6. CompTIA SecAI+ Domain 2.3: Model Inversion, Inference & Poisoning

    2 FEB

    CompTIA SecAI+ Domain 2.3: Model Inversion, Inference & Poisoning

    I don't need to break into your server to steal your AI. I just need to ask it the right questions. In Part 3 of our Domain 2 Deep Dive, we leave the "Prompt Injection" attacks behind and enter the world of Privacy Attacks and Model Theft. We explain how attackers can use Model Inversion to reconstruct private training data (like faces) just by analyzing confidence scores. We break down the difference between Membership Inference (knowing if you were a patient) and Attribute Inference (knowing what disease you have). Finally, we cover Model Extraction (cloning GPT-4 for cheap) and the silent killer known as Data Poisoning—where attackers install "Backdoors" into the model before it's even trained. 🎓 In this video, you will learn: Model Inversion: Reconstructing training data (faces/PII) from vector outputs. Membership Inference vs. Attribute Inference: The subtle difference between exposing a user and exposing their secrets. Model Stealing (Distillation): How "Student" models cheat off "Teacher" models to steal IP. Adversarial Reprogramming: Hijacking a medical AI to mine cryptocurrency. Data Poisoning: Split-View attacks and installing "Backdoors" (The Sticky Note hack). Sponge Examples: Attacks designed to burn energy and overheat hardware. ⏱️ Timestamps: 00:00 Intro: The "Heist" Concept 01:05 Model Inversion (Reconstructing Faces) 01:55 Membership Inference vs. Attribute Inference 02:45 Model Stealing (Distillation Attacks) 03:15 Adversarial Reprogramming (Hijacking Compute) 03:40 Data Poisoning & Split-View Attacks 04:30 How to Stop It (Teaser) 04:45 Support the Channel (Buy Me a Coffee) 05:05 Homework: Sponge Examples ​📚 Resources & Support ​🎓 FREE Interactive Learning Tools Don't just watch—practice. Access our new browser-based tools to test your skills live. ​AI-Powered Exam Simulators: https://secguy.org/exam-simulators ​Python for Security Labs: https://secguy.org/python-practice ​Mock Interview Board: https://secguy.org/mock-interview ​💬 Join the Squad Connect with other industry veterans and students in our new dedicated study group. ​Official Discord: https://secguy.org/discord-chat ​📚 Download Course Materials Get the SecAI+ Cheat Sheet (including the MCP Architecture Diagram & Jetson specs) and full course slides directly from the academy. ​Access Here: https://secguy.org/courses #SecAIplus #CompTIA #ModelInversion #DataPoisoning #AIsecurity #RedTeam #Cybersecurity #EthicalHacking #SecGuy

    6 min

Trailer

About

Real cybersecurity training for the real world. We take the complex theories of CompTIA Security+ and SecAI+ and translate them into actionable skills. Whether you're fighting off Prompt Injection attacks or just fighting to get your first IT job, the Sec Guy has your back. Join us for deep dives into AI Security, Network Defense, and the future of cyber. Train Hard. Stay Secure.