Please support this podcast by checking out our sponsors:
- Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily
- Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron
- Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad

Support The Automated Daily directly:
Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Git commits with AI session notes - A new Git extension, git-memento, stores cleaned AI coding transcripts as Markdown inside git notes, preserving normal commit workflows while improving provenance and review.
- AI productivity: Scheme to WebAssembly - Puppy Scheme is an alpha Scheme-to-WebAssembly compiler built at unusual speed with Claude's help, featuring WASI 2, the Component Model, WASM GC, and big compile-time speedups.
- Auditing AI agents with eBPF - Logira uses eBPF, cgroup v2, JSONL timelines, and SQLite queries to audit what AI agents actually do on Linux—processes, files, and network—plus risky-behavior detections.
- Near-term AI security truce - Matthew Honnibal calls for focusing on practical AI risks like prompt injection, autonomous attack loops, and unsafe agent marketplaces—urging basic security hardening over hype.
- Accountable agents via cryptographic covenants - Nobulex proposes verifiable agent behavior using DIDs, Ed25519 keys, a Cedar-like policy DSL, hash-chained action logs with Merkle proofs, and staking/slashing enforcement.
- Military AI, interpretability, and governance - Two essays argue that lethal or medical AI must be interpretable, and that the Pentagon–Anthropic debate is too narrowly framed around “human in the loop,” missing oversight and accountability.
- When not to share transcripts - Cory Doctorow warns that dumping chatbot transcripts into public threads is rude and unreliable, and that sending unverified AI critiques to authors shifts unpaid verification work onto them.
- https://github.com/mandel-macaque/memento
- https://matthewphillips.info/programming/posts/i-built-a-scheme-compiler-with-ai/
- https://github.com/melonattacker/logira
- https://pluralistic.net/2026/03/02/nonconsensual-slopping/#robowanking
- https://honnibal.dev/blog/clownpocalypse
- https://manidoraisamy.com/ai-interpretable.html
- https://github.com/nobulexdev/nobulex
- https://weaponizedspaces.substack.com/p/the-information-space-around-military

Episode Transcript

Git commits with AI session notes

Let’s start with developer workflow—because today’s most concrete shift is happening right inside Git. A new open-source project called git-memento, from the mandel-macaque/memento repository, is essentially a Git extension for provenance. The idea is simple: if an AI coding session contributed to a commit, you should be able to attach a cleaned, human-readable trace of that session to the commit—without breaking how developers already work.

Here’s the clever part: it stores that transcript as Markdown in git notes, not in the commit message and not in your codebase. That means your usual flow stays intact—you can still commit with -m or open an editor—while the “how we got here” context lives alongside the commit for anyone who wants it.

You initialize per repo with something like “git memento init”, optionally choosing a provider like codex or claude. Configuration lives in your local .git/config under memento.* keys, so it’s repo-scoped and doesn’t demand a new centralized service. Then daily usage looks like “git memento commit -m ‘message’”, or “git memento amend” when you’re rewriting history.

It supports both a legacy single-session format and a versioned multi-session envelope, using explicit HTML comment markers—so you can attach multiple sessions, even from different providers, to one commit. That’s important because real work rarely fits into a single AI interaction.

It also leans into collaboration.
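The storage and sharing both bottom out in a stock git feature, and it is worth seeing the raw plumbing once. Here is a minimal sketch using plain git notes on a throwaway repo; the transcript text is invented, and plain notes land on git's default refs/notes/commits ref, which may differ from the refs git-memento actually writes:

```shell
# Sketch of the plain-git plumbing underneath this style of tool: a Markdown
# transcript attached as a git note, then the note ref pushed to a remote.
# (Throwaway local "remote"; the transcript content is invented.)
set -e
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git repo && cd repo
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "feat: add parser"

# Attach the cleaned session transcript as Markdown,
# outside the commit message and outside the tree
git notes add -m '## AI session
provider: claude
- asked for a recursive-descent parser
- reviewed and simplified error handling'

git push -q origin HEAD
git push -q origin 'refs/notes/*'   # note refs are not pushed by default

git notes show HEAD                 # transcript rides alongside the commit
```

The last two pushes are the part teams routinely get wrong by hand, which is why tooling around refs/notes/* earns its keep.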
Commands like share-notes, push, and notes-sync deal with refs/notes/* properly—pushing and merging notes, configuring remote fetch refspecs, and even creating timestamped backups under refs/notes/memento-backups/ before merges. If you’ve ever had git notes drift across a team, you’ll recognize why that backup step matters.

For teams that rebase and rewrite history a lot, there are features to carry notes forward automatically—notes-rewrite-setup—or to aggregate notes from a rewritten range into a new commit via notes-carry, with a provenance block so reviewers can see what got rolled up.

And there’s quality tooling: “git memento audit” can check coverage, validate metadata markers like provider and session ID, and even output JSON. “git memento doctor” helps debug configuration and whether your remotes are set up to sync notes sanely.

From an engineering standpoint, it ships as a single native executable per platform using .NET SDK 10 and NativeAOT. There’s a curl-based installer that pulls from GitHub releases/latest, plus CI smoke tests across Linux, macOS, and Windows. There’s also a GitHub Marketplace Action: one mode posts commit comments by rendering memento notes, and another gates CI by failing builds when audit coverage checks fail. In other words: not just capture, but enforcement.

The repo is MIT-licensed, with roughly 200 stars at snapshot time, and today—March 2, 2026—v1.1.0 is listed as the first public release of the CLI and Actions.

Stepping back, git-memento is part of a broader theme: if AI is contributing to code, we need better receipts. Not for performative transparency—just enough traceability for code review, incident response, and institutional memory.

AI productivity: Scheme to WebAssembly

Now let’s talk about the upside of AI-assisted building—where the speed is real, but the maturity isn’t.
Matthew Phillips wrote about building “Puppy Scheme,” a Scheme-to-WebAssembly compiler, largely motivated by watching people ship near-production tools at a surprising pace with AI in the loop. His headline claim is time: most of a weekend plus a couple of weekday evenings—work that traditionally could stretch into months or even years.

Claude played a major role, and the most striking example is performance. Phillips describes an overnight request to “grind on performance” that took compilation time from about three and a half minutes down to roughly eleven seconds. That is a jaw-dropping improvement, and it’s exactly the kind of story that makes developers both excited and a little uneasy: what changed, and do we really understand it?

Technically, the project is ambitious for its age. Puppy Scheme reportedly supports about 73% of R5RS and R7RS. It targets modern WebAssembly features: WASI 2, the WebAssembly Component Model, and WASM GC. It includes dead-code elimination for smaller binaries, and it’s self-hosting—meaning it can compile its own source into a puppyc.wasm artifact.

There’s also a wasmtime-based wrapper that turns the generated WASM into native binaries, plus a website demo running the compiler output in Cloudflare Workers. Phillips even hints at a component-model-style UI approach with a counter example written in Scheme.

But he’s clear: it’s alpha quality and buggy, not ready for general users. That honesty matters. We’re entering an era where “built fast” is common; “trusted and maintained” still takes time.

Auditing AI agents with eBPF

Next: if agents are acting on your machine, how do you verify what they actually did? A project called Logira takes a very pragmatic stance: don’t trust the agent’s narrative—instrument the operating system.

Logira is an observe-only Linux CLI plus a root daemon, logirad, that uses eBPF to record runtime activity: process execution, file access, and network behavior. The key design detail is attribution.
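That attribution model is easy to picture with a toy timeline: if every recorded event carries the cgroup of the process that produced it, "what did this run do?" becomes a simple filter. The JSONL-style event lines below are invented for illustration; Logira's real event schema is not shown here.

```shell
# Toy illustration of cgroup-based attribution (event format invented):
# each kernel-observed event records the cgroup of the process that caused
# it, so filtering a run's activity is a lookup by cgroup path.
cat > events.jsonl <<'EOF'
{"event":"exec","cgroup":"/logira/run-41","argv":"make test"}
{"event":"open","cgroup":"/logira/run-42","path":"/home/u/.ssh/id_ed25519"}
{"event":"connect","cgroup":"/logira/run-42","dst":"169.254.169.254:80"}
EOF

# Everything attributable to run 42: an SSH-key read and a call to the
# cloud metadata endpoint, both of which the default ruleset would flag
grep '"/logira/run-42"' events.jsonl
```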
Logira tracks events per run using cgroup v2, so actions can be tied back to a single audited command invocation. The typical workflow is “logira run -- <command>”, and then you review what happened using commands like runs, view, query, and explain.

Under the hood, each run is stored locally in both JSONL—for timeline-style playback—and SQLite for fast searching, plus run metadata. That’s a sensible combo: one format optimized for auditing chronologically, one for asking pointed questions.

Logira also ships with an opinionated detection ruleset aimed at risky behavior during AI or automation runs, and lets you add custom per-run rules via YAML. Defaults cover things security teams actually care about: reads or writes of credential stores like SSH keys, AWS and kube configs, .netrc, and .git-credentials; persistence and system changes like /etc edits, systemd units, cron, and shell startup files; and classic “temp dropper” patterns like executables created under /tmp or /dev/shm.

It flags suspicious command patterns too—curl piped to sh, wget piped to sh, tunneling or reverse-shell tooling, base64 decode-to-shell hints—and destructive operations like rm -rf, git clean -fdx, mkfs, or terraform destroy. Network rules highlight odd egress ports and cloud metadata endpoint access.

Practical constraints: Linux kernel 5.8 or newer, systemd, and cgroup v2. Licensing is Apache-2.0, with the eBPF programs dual-licensed Apache-2.0 or GPL-2.0-only for kernel compatibility.

If you’re deploying agents in real environments, Logira is an important reminder: the fastest way to build trust is often to measure the world around the agent, not the agent itself.

Near-term AI security truce

That brings us neatly to a broader security argument: can we call a truce in the AI safety debate and focus on what’s already breaking?
Matthew Honnibal is arguing exactly that—a “truce” that sets aside battles over superintelligence and focuses on near-term, severe, non-existential risks from today’s deployments. His central fear is not a brilliant adversary model. It’s c