30 episodes

We talk about cloud computing: what's new, what to do, what not to do, and how to do it. We focus on the why and how of the cloud, why it's revolutionary, and how to adapt yourself or your organization to be part of this revolution. We cover Google Cloud and AWS almost exclusively in the beginning, and hope to add Azure in our copious free time.

Cloud Out Loud Podcast, with Jon and Logan Gallagher

    • Technology
    • 5.0 • 1 Rating

    Episode 29 - The Current State of AI: LLMs Becoming Commoditized

    Summary

    We start off discussing the recent developments and timeline of generative AI models, particularly ChatGPT, GPT-4, Bard, Llama 2, Gemini, and Claude. We cover the release of these models by companies like OpenAI, Google, Meta, and Anthropic, their performance benchmarks, and the emerging ecosystem of models. The discussion highlights the commoditization of these models, the need to understand their capabilities and limitations, and the potential security risks and challenges associated with their use. We also touch on the concept of 'centaurs' and 'reverse centaurs' in the context of human-AI collaboration and the potential need for human oversight, and mention an upcoming project involving a product based on generative AI.

    Links Mentioned in Today’s Episode:

    Cory Doctorow - Human in the Middle AI

    ChatGPT 

    Gemini 

    Anthropic Claude

    Episode Transcript 

    • 31 min
    Episode 28 - A Review of Where We Are

    Unveiling the Future of Cloud and AI at Google Cloud Next 


    Episode 28: Show Notes


    In this episode, we delve into the most recent big cloud conference, Google Cloud Next. We discuss what we saw, felt, and heard at this year's iconic Google Cloud Next and provide listeners with our biggest takeaways from the event. We discuss the cutting-edge tools in generative AI and machine learning that Google has rolled out and the potential these new Google offerings hold for developing robust enterprise solutions. Gain insights into the products and services that have the most potential, the obvious shift in Google’s approach to providing enterprise solutions, the integration of its new tools into current business operations, and much more! We also offer listeners our expectations of Amazon's upcoming conference and why the future of cloud computing is brighter than ever. To stay on the cutting edge of the cloud and AI revolution, tune in now!


    Key Points From This Episode:


    • Logan shares his thoughts on the conference, particularly on the generative AI content.
    • Hear highlights of the new features and services Google showcased.
    • Discover exciting new generative AI and machine learning tools from Google.
    • Other interesting new features that enhance productivity in the cloud.
    • The potential that Google’s new features have for developing enterprise solutions.
    • Integrating Google’s new tools into a business’s processes and operations.
    • We unpack Google’s noticeable shift in focus and approach to business.
    • What we think are the most exciting new features and services.
    • An incredible data migration story that was shared at the conference.
    • Our expectations for Amazon’s upcoming conference: AWS re:Invent.

    Quotes:


    “It was really clear that [Google’s] butts were kicked into gear by OpenAI and ChatGPT and they are playing a bit of catchup.” — Logan Gallagher [0:02:50]


    “Google is finally embracing the real world [with] a lot of these new tools.” — Jon Gallagher [0:07:49]


    “The hard work of an enterprise, the hard work of running a business is still being [done] with the new tools from Google.” — Jon Gallagher [0:11:34]
    Links Mentioned in Today’s Episode:


    Google Cloud Next 2023
    Duet AI
    Vertex AI
    GKE Enterprise
    AWS re:Invent 2023
    Jon Gallagher on LinkedIn
    Logan Gallagher on LinkedIn

    • 17 min
    Episode 27 - Risks of Generative AI

    Episode 27: Show Notes
    Welcome back to Cloud Out Loud as we continue our discussion on generative AI and machine learning. Today is all about exploring the risks of modern machine learning and how we can properly navigate them as a society. Jon and Logan walk us through the benefits of AI tools for software companies, the dangers of poorly trained generative AI models, why good code may not always remain the standard, and how to assess the cost-effectiveness of the machine learning models at your company. Then, we dive into our concerns about the data of large language models, what generative AI could mean for the future of the internet itself, the perils of hallucinated AI data, stochastic parrots and other security vulnerabilities of generative AI, and so much more! To hear about the importance of transparency in machine learning and to find out what we’ll be talking about next week, press play now.


    Key Points From This Episode:


    • The risks to consider when implementing AI and/or machine learning in your company.
    • Assessing the best AI tools for software companies and the benefits thereof.
    • The importance of accurately separating good code from bad code after the initial prompts.
    • Exploring the dangers of mistraining a generative AI model.
    • How to know when your AI output is valid and how to monitor the system for updates.
    • Balancing costs: how cost-effective is your machine learning model for your business?
    • Why we’re concerned about the data that is going into large language models.
    • How we don’t yet know what machine learning models could mean for the internet’s future.
    • Our fears surrounding hallucinated AI data and the (possible) universal adoption of bad code.
    • Some careers that could experience a boom as a result of widespread AI adoption.
    • Stochastic parrots and the lesser known/discussed security vulnerabilities of generative AI.
    • What we need to focus on to make generative AI and machine learning more secure.
    • Why more transparency is needed around the data that is produced by generative AI tools.
    • Recapping everything we’ve discussed today and what you can look forward to next time.

    Tweetables:
    “Cleaning and curating your data is the least sexy but most important part of getting any value out of any of these [generative AI] tools.” — Logan Gallagher [04:39]
    “We may be increasingly reaching the point where the internet is going to be so full of AI-generated content that our subsequent versions of generative AI models will be a snake eating its own tail.” — Logan Gallagher [21:36]
    “This is something that I worry about much more than Skynet — that we end up with fragile systems or we end up with unknown attack surfaces because of frameworks that are being generated for us without our ability to have an audit trail of how this came to be.” — Jon Gallagher [32:29] 
    Links Mentioned in Today’s Episode:
    ChatGPT 
    GitHub Copilot 
    ‘Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations’
    ‘Undetectable backdoors for machine learning models’
    Jon Gallagher on LinkedIn
    Logan Gallagher on LinkedIn

    • 42 min
    Episode 26 - Generative AI and ChatGPT

    Generative AI and ChatGPT with Logan and Jon


    Episode 26: Show Notes


    Machine learning and AI are fast becoming integrated into our everyday lives. However, despite its rising popularity, there is still a lot of confusion and misunderstanding around the subject. In this episode, we unravel the fundamental principles of machine learning and artificial intelligence. We start by setting the context before diving into the technical and business side of AI. We explain the different terms used, why people are so interested in machine learning, and how it is going to shake up Silicon Valley. We also provide listeners with an overview of the benefits and drawbacks of AI and machine learning and discuss how using AI can go wrong. Learn about neural networks, the transformer algorithm, the cost of implementing AI, and how to effectively leverage these technologies. We examine both positive and negative use cases, debunk common misconceptions, and emphasize the continuous nature of AI implementation. Lastly, we navigate the landscape of cognitive computing, exploring the threats it presents along with the opportunities it brings. Tune in now to ensure you do not get left behind in the AI and machine learning race!


    Key Points From This Episode:


    • Useful definitions and different terms are explained.
    • Find out the difference between AI and machine learning.
    • What algorithms popular AI tools are based on.
    • Hear about exciting new technologies emerging in the space.
    • Learn about the power of the transformer algorithm.
    • The limitation of AI and machine learning: data.
    • How much AI and machine learning can cost companies.
    • Ways companies are leveraging AI to reduce costs.
    • An overview of the good and bad use cases of AI and machine learning.
    • Common misconceptions surrounding AI and machine learning.
    • Why implementing AI and machine learning is a continuous process.
    • Threats and opportunities of cognitive computing.

    Tweetables:


    “Artificial intelligence is a broad field of study. It is an umbrella term under which these technologies fit into.” — Logan Gallagher [0:02:42]


    “When you are interacting with a model that uses [a] transformer, it can generate very human-readable and human-intelligible text and outputs that pass off as very convincing.” — Logan Gallagher [0:05:52]


    “[Deploying new versions of AI] is a continuous process. If you are standing still, you are going to get left behind.” — Logan Gallagher [0:19:43]


    “The business opportunity [of AI] is huge here. Thus, we are not only engaged in the standard hype cycle of technology, but we are looking at a Silicon Valley that is figuring out what business it is going to be in.” — Jon Gallagher [0:22:12]


    Links Mentioned in Today’s Episode:


    ChatGPT
    The Transformer Model Tutorial
    ‘Transformer: A Novel Neural Network Architecture for Language Understanding’
    OpenAI
    Jon Gallagher on LinkedIn
    Logan Gallagher on LinkedIn

    • 24 min
    Episode 25 - The Best Cloud Environments for Machine Learning

    Best Environment for Machine Learning
    Episode 25: Show Notes
    Lately, there’s been a lot of hype about AI. In today’s podcast, we too are going to chat about AI, and specifically the subset of artificial intelligence called machine learning. Instead of talking about the political, social, and moral aspects of this subject, however, we’re going to speak about some of the more mundane aspects of deploying this technology. Tuning in, you’ll hear about some of the recent technologies that have been the subject of hype cycles, what Logan learned about the hype cycle of crypto and how it affected AWS, and how this relates to the current hype cycles of AIML. We then discuss the ML and production experience of the three major cloud platforms (AWS, GCP, and Azure), the ML APIs that these companies have made available, and how you can extend these APIs for your unique requirements. To learn more about how to differentiate between the different cloud providers, the importance of being able to update existing models, the necessity for the automated collection and evaluation of the current model, and so much more, tune in today!
    Key Points From This Episode:
    • Examples of recent technologies that have been the subject of hype cycles.
    • What Logan learned from the hype cycle of crypto and how it affected AWS.
    • Google’s level of maturity in terms of AIML, despite seeming behind in this current hype cycle.
    • The importance of knowing how to integrate AIML.
    • Three major cloud platforms’ ML and production experience.
    • The types of ML APIs that these companies have made available and some examples.
    • AutoML and how you can extend these APIs for your unique requirements.
    • Examples of how you can use this technology in your company, and possible pitfalls.
    • How to differentiate between the different cloud providers, and choosing the right one.
    • What Google’s BigQuery ML is and how it works.
    • How each cloud provider has an AIML suite of tools that enables people to train their models.
    • Why the ability to update existing models is so important.
    • The necessity for the automated collection and evaluation of the current model for ongoing development of improved models.
    • How the software practices that we’ve been learning and implementing over the years still apply.

    Tweetables:
    “Maybe some of these companies like OpenAI will emerge as major players moving forward, but I think we can be sure that one of the big winners is guaranteed to be the cloud platforms.” — Logan Gallagher [0:04:01]
    “That is the real use case that we can identify for ML; the ability to extend the capabilities of the working software we have.” — Jon Gallagher [0:06:14]
    “What’s maybe more important than deploying a model for production is having the ability to update that model.” — Logan Gallagher [0:22:39]
    “With ML and AI, there is a temptation to treat this as something new and different, but I really see all of the important software practices that we’ve been learning and implementing over the years, still applying here.” — Logan Gallagher [0:29:01]
    Links Mentioned in Today’s Episode:
    ChatGPT
    “AWS and Blockchain”
    AWS
    GCP
    Azure 
    petqts.com 
    Snowflake
    Jon Gallagher on LinkedIn
    Logan Gallagher on LinkedIn

    • 29 min
    Leaving the Cloud

    Episode 24: Show Notes
    While our preference is always to use the cloud to address IT problems, today we look at a company that is doing the opposite. 37signals, the company behind the project management software Basecamp and the email system Hey, has decided to bring their workloads off the cloud and back into a data center. DHH, the Co-owner and CTO of 37signals, recently announced that the company had spent $3.2 million on Amazon Web Services (AWS) and felt that the money would have been better spent purchasing their own servers and running them themselves. In this episode, we break down the cost structure of what 37signals spent their money on in 2022, the typical arguments for moving out of the cloud and into a data center, and what key factors you need to consider before doing the same. With so many layoffs occurring in tech companies like Google, Amazon, Microsoft, and Salesforce, and so much instability in the industry, business concerns are driving developments in the tech space more than ever before. Tune in to find out more about the future of the cloud, why moving away from it may be the right decision for 37signals, and why this is not the right move for every company.
    Key Points From This Episode:
    • An introduction to 37signals and the products that they are known for.
    • The announcement by 37signals’ CTO that they would be moving off the cloud.
    • The breakdown and cost structure of what 37signals spent their money on in 2022.
    • The number that jumped out: their S3 spend.
    • Some of the typical arguments for moving out of the cloud and into the data center.
    • The key element that’s needed to move from the cloud to a data center.
    • A big advantage of the cloud that 37signals does not make use of.
    • Why you need to understand your user patterns before switching from the cloud to a data center.
    • How 37signals has taken advantage of being as vendor-neutral as possible.
    • Key factors to consider before moving away from the cloud.
    • The instability of the economy and thoughts on the layoffs we’re seeing in tech companies.
    • Insights into the future of the cloud.
    • How the tech stack could be changed to be more efficient.
    • What you can expect from the podcast this year.

    Tweetables:
    “Some of the advantages that the cloud has that the data center does not have, such as the ability to dramatically auto-scale out and scale back to respond to your traffic needs, are not going to be as appealing to [37signals].” — Logan Gallagher [0:11:23]
    “I think it’s very interesting that they did take advantage of the cloud for that early scaling growth and it probably was beneficial at the time when Hey was growing faster than anticipated.” — Logan Gallagher [0:13:28]
    “Every layoff represents a failure of management, a failure of management either to have the right kind of people or the right kind of growth or to anticipate where the economy was.” — Jon Gallagher [0:18:23]
    “The cloud added more space to the tool chest, a new set of tools. Those are the tools that we pick up first. But every set of tools in IT still has a role.” — Jon Gallagher [0:19:25]
    Links Mentioned in Today’s Episode:
    37signals 
    Basecamp 
    Hey 
    "Our cloud spend in 2022"  
    David Heinemeier Hansson / DHH
    Jon Gallagher on LinkedIn
    Logan Gallagher on LinkedIn

    • 22 min

Customer Reviews

5.0 out of 5
1 Rating

