Bits of Chris: Augment, Stay Human

Second Brains and Soft Skills for Staff Engineers. Augment, Stay Human.

AI can't replace you. But you need to adapt. The future is not humans following a black-box AI created by closed-source companies. The future is humans at the center of AI, in an open and transparent way, where individuals control and own their data. We need to build Open Augmented Intelligence, not Closed Artificial Intelligence. Open Augmented Intelligence is AI for Real Life. It's built with humans at the center. It is open rather than closed - because together, we go further than we can imagine. Augmented Intelligence is when we leverage AI tools for what LLMs are good at - distillation, retrieval, boilerplate generation - while we focus on amplifying our unique, human strengths - thinking, creativity, empathy. Follow the journey as I build Open Augmented Intelligence. I need your help :) Augment, Stay Human. bitsofchris.com

  1. Impactful Listening & Effective Onboarding | Sophia Sithole, Founder Ofstaff

    11/01/2024

    In this episode, I talk with Sophia Sithole about her journey building Ofstaff, an AI-powered onboarding and performance management solution. We explore the challenges of effective employee onboarding, then get into a deeper discussion about customer development, active listening, and handling vulnerability in business.

    Key Lessons

    Effective Onboarding
    * Alignment and clear expectations between all parties are crucial
    * Communication is fundamental at every stage
    * Both employer and employee have important roles to play
    * The first few weeks are critical for success

    Product Development & Customer Research
    * The "Mom Test" approach: focus on learning about the customer's world rather than pitching your idea
    * Distinction between product-market fit ("painkiller vs. vitamin") and go-to-market fit (how to sell and distribute)
    * Importance of seeking to invalidate assumptions rather than validate them
    * Value of asking for specific examples when customers claim something is useful

    Effective Listening & Research
    * Pay attention to when people pause to think - it often indicates deeper insights
    * Ask for specific examples to validate claims
    * Focus on understanding impact across teams and the organization
    * Practice active listening and genuine empathy

    Handling Vulnerability in Business
    * Embrace vulnerability as a pathway to learning
    * Focus on the "why" behind what you're doing
    * View challenges as learning opportunities
    * Balance passion for ideas with openness to pivot

    Links
    * Ofstaff
    * https://www.linkedin.com/in/sophiasithole/

    Timeline
    [00:01:00] - How alignment and expectations are crucial for successful onboarding
    [00:04:00] - Shared responsibility between employer and employee in onboarding
    [00:06:00] - Introduction to Ofstaff and its focus on sales team onboarding
    [00:09:00] - Deep dive into how AI can distill and personalize onboarding data
    [00:13:00] - AI-powered course recommendations and learning pathways
    [00:16:00] - Bootstrapping journey and product development
    [00:17:00] - The difference between product-market fit and go-to-market fit
    [00:20:00] - Introduction to the "Mom Test" and effective customer research
    [00:25:00] - Importance of empathy and active listening in customer discovery
    [00:30:00] - Why seeking to invalidate ideas can be more valuable than validation
    [00:32:00] - Vulnerability in business and product development
    [00:35:00] - Wrapping up with insights on learning mindset and personal growth

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit bitsofchris.com

    37 min
  2. I just built my first Neural Network: Here's my framework for learning in public

    10/19/2024

    I recently joined a research team building time series Transformer models and have become infatuated with the field of deep learning. As a former trader turned data engineer, I am now trying to understand the AI side of things. This week I hit my first significant milestone: building my first neural network from scratch, using no machine learning libraries. Today, I want to share this milestone and offer you my framework for how I decided to learn deep learning in public. (Here's my GitHub repo and the XOR neural network.)

    The Key: Invest in the basics

    Knowledge compounds over time. When you understand the basics well, you gain the freedom and flexibility to explore more advanced concepts creatively. You have a strong foundation to build upon. Taking the time to stop your task and look up something you don't quite know, especially if it's something foundational you will see again, is an investment in your future self. This is the key concept to unlock the value of lifelong learning. When you see the compounding effect of knowledge, you look for opportunities to know something well, to learn it deeply. Slow down, and focus on the fundamentals.

    Why I love learning in public

    I've chosen to share my notes and code for this learning project on GitHub. This "learning in public" approach beats learning on your own, though it requires a little more time to share what you do. It offers several benefits:
    1. Accountability: Sharing your work creates a forcing function, encouraging you to go the extra mile in understanding and polishing your knowledge.
    2. Continuous improvement: When you know you'll be sharing your learnings regularly, you start to notice learning opportunities in your daily life.
    3. Networking: By putting your work out there, you connect with like-minded individuals, potential mentors, and future colleagues. My previous writing actually played a role in landing me on my current AI research team.
    4. Knowledge retention: Externalizing your notes, whether in a private second brain or a public GitHub repo, helps solidify your understanding and creates a resource that gets more valuable the more you use it.

    My framework for learning in public

    Inspired by Scott Young's book "Ultralearning," here's my framework for difficult learning projects:
    1. Set a big, exciting goal. Start with a project that genuinely excites you. For me, it's building deep neural networks for financial data, leveraging my background in day trading. Your goal should be challenging enough to push you out of your comfort zone but aligned with your interests and expertise.
    2. Break it down into milestones. Divide your big goal into smaller, manageable milestones. My first milestone was implementing a basic neural network from scratch to solve the XOR problem. Having these intermediate goals helps maintain motivation and provides a sense of progress.
    3. Focus on a few high-quality sources. Avoid information overload (and the stress that comes with it). Choose 1-3 reliable resources and stick with them, even when things get difficult. Ignore everything else.
    4. Balance theory with practice. Adopt a "just-in-time" learning approach instead of drowning in prerequisites. Start with what excites you most, and fill in knowledge gaps as you encounter them. This maintains motivation while you still build a solid understanding as you go. When you're not actively coding or building, practice active recall by explaining concepts in your own words. This technique, inspired by the Feynman method, helps identify areas where your understanding is lacking, and it provides a sense of action while you study theory.
    5. Be consistent. Practice daily, even if it's just for 5-30 minutes. I aim for six days a week, taking Sundays off. Promise yourself at least 5 minutes; this will get you past the initial wall of getting started.

    My first neural network: A brief reflection

    Implementing a neural network from scratch to solve the XOR problem was immensely satisfying. While the network itself is simple, the process of building it deepened my understanding of the core concepts behind neural networks. The journey wasn't always linear - I often found myself circling back to revisit concepts I didn't fully grasp at first. But the persistence paid off, and looking back, it's amazing to see how much I've learned in just a few weeks. Again, if you are interested in the actual path I took, follow my deep learning work on GitHub.

    Start your own learning in public project

    If there's something you want to pursue, give this framework for learning in public a try.
    * Start by identifying your exciting project and break it down into milestones.
    * Find 1-3 resources, and focus on these.
    * Commit to 5 minutes of daily practice - balancing learning with doing.
    Remember, knowledge compounds over time. The key is to consistently build on what you have. Thanks for reading and happy learning!

    13 min
  3. Domain Expertise and AI Tools for Data Analysts | Meghan Maloy, Staff Analytics Engineer

    10/11/2024

    Key Lessons
    * Real-world experience and domain expertise can be your edge as a data analyst. Understanding the domain leads to better understanding of the data.
    * AI can't replace data analysts who understand the context of their data and have the communication skills to share results.
    * Using AI tools effectively requires clear, specific prompts and an understanding of the limitations of LLMs.
    * Why the Staff level is hard to define and how to handle it.
    * NYC Open Data is a great way to explore some real-world data.

    Links
    * Upcoming NYC Open Data Classes
    * How I Learned to Understand the World by Hans Rosling
    * How not to be ignorant about the world

    Timeline
    [00:00:00] Introduction to the Bits of Chris show and guest Meghan Maloy, staff analytics engineer at Datadog.
    [00:00:58] Discussion on using New York City open datasets to investigate real-life experiences.
    [00:02:19] Meghan shares an example of investigating traffic light timing changes in her neighborhood using open data.
    [00:05:33] Exploration of 311 data sets and their applications in understanding city complaints.
    [00:08:14] Meghan discusses her presentations at meetups using New York City open data.
    [00:09:34] Conversation about approaches to exploring data sets and asking questions.
    [00:12:54] Discussion on consuming information and book recommendations, including "How I Learned to Understand the World" by Hans Rosling.
    [00:17:21] Insights on the importance of domain expertise for data analysts and understanding data collection methods.
    [00:23:14] Meghan shares her experience transitioning to a staff-level role and finding impactful work.
    [00:27:23] Chris and Meghan discuss the challenges of measuring performance and impact at higher-level roles.
    [00:31:58] Conversation about the impact of AI and LLMs on the future of data analysis roles.
    [00:37:52] Discussion on using AI tools, including ChatGPT, Perplexity, and Claude, for various tasks.
    [00:44:38] Insights on the importance of specificity in prompts when using AI tools and interacting with colleagues.
    [00:50:34] Meghan shares her experience during a three-month sabbatical and the benefits of work-life balance.
    [00:53:53] Information about New York City Open Data training sessions and the Open Data Ambassadors program.

    56 min
  4. Pilot Life, Basics of LLMs, and AI for Beginners | Greg Lettieri, Corporate Aviator

    09/27/2024

    Today I'm joined by my brother Greg Lettieri, a corporate aviator with over 15 years of flight experience. We discuss the role of automation in flying, life as a private jet pilot, the basics of LLMs, and how to handle FOMO around AI (hint - you're not too late, just experiment). Enjoy!

    Key Lessons
    * Consistency is crucial, whether it's maintaining fitness while traveling or pursuing a long-term goal like writing a book.
    * While automation plays a significant role in aviation, human pilots are still essential due to the need for discretionary input and handling unexpected situations.
    * AI, particularly large language models (LLMs), can be a powerful tool when used to augment human capabilities rather than replace them entirely.
    * The most effective use of AI for many people is in tasks like distillation, summarization, and enhancing search capabilities.
    * It's not too late to start learning about and experimenting with AI; we're still in the early stages of understanding its full potential and applications.

    Links
    * https://perplexity.ai

    Timeline
    [00:00:05] Introduction to the episode featuring Greg, a corporate aviator with over 15 years of experience.
    [00:00:37] Greg discusses his experience flying high-profile clients and the nature of private jet life.
    [00:02:24] Explanation of the two-week on, two-week off schedule in corporate aviation.
    [00:03:40] Discussion on the unpredictability of private jet schedules and waiting for clients.
    [00:07:27] Greg shares his strategies for staying healthy and maintaining routine while traveling frequently.
    [00:11:45] Conversation about automation in aviation and why human pilots are still necessary.
    [00:14:37] Greg explains how autopilot works and when manual flying is required.
    [00:17:24] Discussion on the importance of maintaining manual flying skills to prevent skill atrophy.
    [00:18:51] Chris introduces the topic of AI and the risks of over-reliance on technology.
    [00:21:00] Greg shares his limited experience with AI and expresses interest in learning more.
    [00:21:38] Chris explains the basics of how large language models work.
    [00:24:53] Discussion on practical applications of AI, such as summarization and enhanced search capabilities.
    [00:28:48] Conversation about the financial applications of AI and its potential impact on jobs.
    [00:31:29] Chris and Greg explore potential uses of AI in aviation, particularly in expense management and flight planning.
    [00:32:20] Discussion on the fear of missing out (FOMO) surrounding AI and new technologies.
    [00:34:13] Chris reassures Greg that it's not too late to start learning about and experimenting with AI.

    35 min
  5. Start your Second Brain: A Quick Guide for Staff Engineers

    09/14/2024

    Staff Engineers! Are you overwhelmed by the constant need to learn & adapt? AI's making it worse, right? Time to build your Second Brain! 🧠

    Here's a quick start guide:
    * Pick ANY note-taking app (I use Obsidian)
    * Create 3 folders:
      * Inbox: Quick capture - save anything worth keeping.
      * Reference: Curated highlights from your sources - only what resonates.
      * Notes: Think & write in YOUR words from your reference.
    * [Optional] Add 2 more folders:
      * Projects: Track tasks & ideas per role or project area.
      * Journal: Brain dumps & life homework.

    Remember, the goal isn't just to collect info - it's to facilitate LEARNING. 🎓

    AI + Your Second Brain = Augmented Engineer. Once you've built your knowledge repository, use AI to supercharge it through distillation (not generation). Don't let information overwhelm you. Start your Second Brain today and let your knowledge compound over time! 💪

    Key Lessons:
    * Building a "second brain" through structured note-taking can significantly enhance your ability to learn and adapt in a rapidly changing industry.
    * The primary goal of a second brain is to facilitate understanding, not just collect information.
    * A simple strategy for starting a second brain involves using three folders: Inbox (for quick capture), Reference (for curated highlights), and Notes (for personal insights).
    * Linking ideas across different notes can lead to novel insights and help trigger memories of valuable past information.
    * Combining a second brain with AI tools can create a powerful system for distillation and problem-solving.

    Timeline:
    [00:00:43] - The challenge of keeping up with rapidly changing technology and the importance of continuous learning
    [00:01:25] - The purpose of a second brain: facilitating understanding and learning
    [00:02:11] - Simple method to start a second brain: choosing a note-taking app and creating three folders (Inbox, Notes, Reference)
    [00:03:20] - The importance of making capture easy and friction-free
    [00:03:56] - Explanation of the Reference folder for curating highlights from various sources
    [00:04:46] - Discussion of the Notes folder and the importance of writing in your own words
    [00:05:35] - Recap of the simple three-folder strategy for second brains
    [00:05:58] - Introduction to a more advanced method with additional folders for Projects and Journals
    [00:07:44] - Combining various note-taking systems (Zettelkasten, PARA, GTD) into a five-folder structure
    [00:08:54] - The potential of combining a second brain with AI for powerful information processing and problem-solving
    [00:09:33] - Encouragement to get started with a second brain and invest in learning tools
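    The folder setup above takes seconds to scaffold. A minimal sketch, assuming your notes live in a local directory (the `my-vault` path is hypothetical; point it at your own vault):

```python
from pathlib import Path

# Hypothetical vault location; replace with your own notes directory
vault = Path("my-vault")

# Core folders: quick capture, curated highlights, your own words
core = ["Inbox", "Reference", "Notes"]
# Optional extras: per-project tracking and daily brain dumps
optional = ["Projects", "Journal"]

for folder in core + optional:
    (vault / folder).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in vault.iterdir()))
```

    The structure is the easy part; the habit of moving material from Inbox to Reference to Notes, in your own words, is what makes it a second brain rather than a pile of clippings.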

    10 min
  6. Deploying AI Models at Scale | Eugene Weinstein, Engineering Director @ Google

    09/10/2024

    Today I sit down with Eugene Weinstein, a speech recognition researcher and Engineering Director at Google, where he leads an organization that productionizes speech recognition technology across various Google products. We discuss the evolution of speech recognition, the impact of Transformers, and the challenges of deploying models in production. This episode is packed with insights.

    A few things I learned from Eugene:
    * Build the model factory. Be able to pre-process your data, tune a model, and evaluate the model for accuracy and load testing, as automated as possible.
    * Good data is key, but it's hard to get. Eugene shared how even Google struggles with data quality issues and ways to think about handling them.
    * How the Transformer architecture changed everything. Eugene breaks down why it was so impactful.
    * Scaling AI is an art. The trade-offs between speed and accuracy are constant battles and often take experience to get right.
    * The benefits of cross-functional collaboration between engineers, researchers, and domain experts, especially for finding data quality issues.

    My favorite quote: "If adding more data hurts your model performance, it's a red flag. But how do you catch it? There's no substitute for actually looking at your data." - Eugene

    Key Lessons
    * The importance of data quality and preprocessing in AI model development, including the need for manual inspection and automated checks.
    * The challenges and strategies for productionizing AI research, including optimizing for speed vs. accuracy and managing hardware resources efficiently.
    * The value of cross-functional collaboration between data engineers, researchers, and domain experts to improve AI model development and deployment.
    * The evolution of speech recognition technology and how recent advancements like transformer architectures have impacted the field.
    * The process of scaling AI models from research to production, including the importance of robust evaluation and testing frameworks.

    Links
    * https://huggingface.co/
    * https://github.com/run-llama/llama_index
    * https://www.langchain.com/
    * https://ai.google.dev/gemma
    * https://deepmind.google/technologies/gemini/project-astra/

    Connect with Eugene
    * https://www.linkedin.com/in/weinsteineugene/
    * https://research.google/people/eugeneweinstein/

    Timeline
    [00:00:00] Introduction of Eugene, his background at MIT and Google
    [00:01:26] Eugene's early work in speech recognition and computer vision
    [00:02:58] Discussion of Google's scale and the evolution of machine learning techniques
    [00:04:38] The impact of neural networks and deep learning on speech recognition
    [00:07:53] Explanation of transformer architecture and its significance
    [00:09:00] Convergence of different AI modalities and increased accessibility of AI technologies
    [00:14:55] The process of taking AI research to production at Google's scale
    [00:19:03] Importance of data quality and preprocessing in AI model development
    [00:21:54] Discussion on the value of domain expertise and cross-functional collaboration
    [00:25:36] Signals for identifying data quality issues and the need for data checks
    [00:31:17] Challenges in model deployment, including speed vs. accuracy trade-offs
    [00:34:51] Optimizing hardware utilization for AI model inference
    [00:37:56] Decision-making process for model selection and deployment
    [00:39:47] Explanation of the model tuning process and parameter optimization
    [00:42:01] Importance of software engineering discipline in productionizing research code
    [00:43:56] Building an efficient pipeline for testing, training, tuning, and evaluating models
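    The "model factory" idea - an automated preprocess/tune/evaluate loop - can be sketched in a few lines. Everything below (the toy data, the one-parameter threshold "model," and the function names) is hypothetical, chosen only to show the shape of such a pipeline, not how Google's systems work:

```python
# A toy "model factory": preprocess -> tune -> evaluate, fully automated.

def preprocess(raw):
    # Drop records with missing labels, scale the feature to [0, 1]
    clean = [(x, y) for x, y in raw if y is not None]
    hi = max(x for x, _ in clean)
    return [(x / hi, y) for x, y in clean]

def evaluate(threshold, data):
    # Accuracy of a one-parameter classifier: predict 1 if x >= threshold
    correct = sum((x >= threshold) == y for x, y in data)
    return correct / len(data)

def tune(data, candidates):
    # Grid search: keep the candidate with the best accuracy
    return max(candidates, key=lambda t: evaluate(t, data))

raw = [(1, 0), (2, 0), (3, None), (8, 1), (9, 1), (10, 1)]
data = preprocess(raw)
best = tune(data, [i / 10 for i in range(1, 10)])
print(best, evaluate(best, data))
```

    The point is structural: once preprocessing, tuning, and evaluation are callable steps rather than manual rituals, re-running the whole factory on new data or new model candidates is one command.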

    47 min
