Surviving the 9 to 5

Dead Inside by 9:05

“Surviving the 9 to 5” isn’t about hustling harder or chasing the next big thing—it’s a raw, no-bull guide for anyone stuck in the daily grind. Each episode delivers real-life hacks to protect your energy, reclaim tiny bursts of sanity, and make it through workday hell without losing yourself. No pep talks. No sugarcoating. Just straight-up survival tips for worn-out souls who refuse to disappear behind their desks.

  1. 5D AGO

    LLM Temperature

    In the context of artificial intelligence, specifically in large language models and machine learning, temperature is a hyperparameter that controls the degree of randomness or creativity in the model's output. Here's a breakdown of how it works:

    Low Temperature (e.g., 0.1–0.5): The model becomes more deterministic and conservative, choosing the most likely next word with high probability. This results in output that is focused, predictable, and factually consistent. It's best for tasks where accuracy is crucial, such as coding, data extraction, or answering factual questions.

    High Temperature (e.g., 0.8–1.5): The model becomes more "creative" and unpredictable, giving less likely words a higher chance of being chosen. This can lead to more diverse, surprising, and imaginative text, but it also increases the risk of the model making things up (hallucinating), going off-topic, or producing nonsensical results. It's ideal for brainstorming, creative writing, or generating poetry.

    Temperature of 1.0: Often the default setting. The model samples directly from its unmodified probability distribution, offering a balance between predictability and creativity.

    In simple terms, think of temperature as a dial that controls how "risky" the AI's word choices are. Low temperature: the AI plays it safe, always picking the most obvious next word. High temperature: the AI takes more chances, picking less common words to create something new and unexpected.
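The "dial" described above can be sketched as temperature-scaled softmax sampling. A minimal, self-contained illustration with toy logits (no real model involved; the numbers are made up for demonstration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw logits, scaled by temperature.

    Dividing logits by the temperature before the softmax sharpens the
    distribution (T < 1, safer picks) or flattens it (T > 1, riskier picks).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]
sample_with_temperature(logits, temperature=0.2)  # low T strongly favors index 0
sample_with_temperature(logits, temperature=1.5)  # high T spreads picks around
```

At very low temperature the top-scoring token wins almost every time; as the temperature rises, the lower-scoring tokens get sampled more and more often.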

    5 min
  2. FEB 23

    Legos, Desks, and Warehouses Explain RAG

    This episode presents a "digital autopsy" of Retrieval-Augmented Generation (RAG) to explain why even powerful AI models with million-token context windows still fail or hallucinate. The discussion uses three core metaphors to simplify complex AI architecture:

    Legos (Tokens): The fundamental units of AI measurement. The sources highlight a "token tax" in multilingual processing, where languages like Japanese or Chinese carry a higher "weight" and financial cost compared to English.

    The Desk (Context Window): The AI's immediate workspace. Despite massive million-token desks, the AI suffers from "Lost in the Middle" syndrome: its "flashlight" of attention illuminates only the very beginning and end of a document, leaving information in the middle fuzzy and prone to error.

    The Warehouse (RAG): Long-term storage for data, which must be organized into chunks (cardboard boxes) of 500 to 2,000 characters to maintain context without overwhelming the AI with noise.

    The episode also details the "Silent Translator" (query rewriting), which prevents errors by reformulating vague user prompts into specific search queries before the AI uses its mathematical "magnet" (vectorization) to pull the relevant boxes from the warehouse. From a business perspective, the episode introduces context caching as a way to save up to 90% on costs by keeping static information, like employee handbooks, permanently on the "desk".
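The warehouse-and-magnet pipeline can be sketched in a few lines: split a document into fixed-size chunks, then rank chunks by cosine similarity against a query vector. A minimal illustration with tiny hand-made 2-D vectors standing in for real embeddings (in practice an embedding model produces them):

```python
import math

def chunk_text(text, size=500):
    """Split a document into fixed-size character chunks (the "cardboard boxes")."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, chunk_vecs, top_k=2):
    """Return indices of the top_k chunks most similar to the query (the "magnet")."""
    scored = sorted(enumerate(chunk_vecs),
                    key=lambda iv: cosine_similarity(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:top_k]]

# Toy usage: three chunk vectors, one query vector
retrieve([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]], top_k=2)
```

Only the retrieved chunks go onto the "desk", which is the whole point: the model reasons over a few relevant boxes instead of the entire warehouse.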
    Finally, it outlines a "Golden Test Set", a four-part stress test:

    Needle in a Haystack: finding a tiny, obscure detail. Conflict Resolution: choosing between contradictory data points. The Hallucination Trap: testing the AI's ability to say "I don't know" (e.g., the hoverboard test). Multi-hop Synthesis: combining information from multiple sources to derive a new answer.

    Ultimately, the episode argues that building effective AI is a logistics operation of moving the right data efficiently, rather than just using the biggest model available.
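A stress test like this is easy to mechanize. The sketch below is hypothetical: the case texts, the check functions, and the stand-in `naive_answer` system are all invented for illustration, not the episode's actual suite.

```python
def run_golden_tests(answer_fn, tests):
    """Run each stress-test case and record whether the answer passes its check.

    answer_fn(question, context) -> str is whatever RAG system is under test.
    """
    return {case["name"]: case["check"](answer_fn(case["question"], case["context"]))
            for case in tests}

# Two toy cases in the spirit of the four-part suite (names and texts made up).
tests = [
    {"name": "needle_in_a_haystack",
     "question": "What is the door code?",
     "context": "filler text ... the door code is 4821 ... more filler text",
     "check": lambda answer: "4821" in answer},
    {"name": "hallucination_trap",
     "question": "What is the hoverboard's top speed?",
     "context": "The handbook never mentions hoverboards.",
     "check": lambda answer: "don't know" in answer.lower()},
]

def naive_answer(question, context):
    # Stand-in system: parrots the context if any question word appears in it,
    # otherwise admits ignorance. A real system would call an LLM here.
    words = question.lower().split()
    return context if any(w in context.lower() for w in words) else "I don't know."
```

Run against `naive_answer`, the needle case passes but the hallucination trap fails, because the parrot returns loosely matching context instead of admitting it doesn't know: exactly the failure mode that part of the suite exists to catch.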

    22 min
