The Cloud Pod Justin Brodley, Jonathan Baker, Ryan Lucas and Peter Roosakos
The Cloud Pod is your one-stop-shop for all things Public, Hybrid, Multi-cloud, and private cloud. Cloud providers continue to accelerate with new features, capabilities, and changes to their APIs. Let Justin, Jonathan, Ryan and Peter help navigate you through this changing cloud landscape via our weekly podcast.
Who Let the Llamas Out? *Bleat Bleat*
Welcome to episode 257 of the Cloud Pod podcast – where the forecast is always cloudy! This week your hosts Justin, Matthew, Ryan, and Jonathan are in the barnyard bringing you the latest news, which this week is really just Meta’s release of Llama 3. Seriously. That’s every announcement this week. Don’t say we didn’t warn you.
Titles we almost went with this week:
Meta Llama says no Drama
No Meta Prob-llama
Keep Calm and Llama on
Redis did not embrace the Llama MK
The bedrock of good AI is built on Llamas
The CloudPod announces support for Llama3 since everyone else was doing it
Llama 3, better known as Llama Llama Llama
The Cloud Pod now known as the LLMPod
Cloud Pod is considering changing its name to LlamaPod
Unlike Winamp, nothing whips the llama’s ass
A big thanks to this week’s sponsor:
Check out Sonrai Security’s new Cloud Permission Firewall. Just for our listeners, enjoy a 14-day trial at www.sonrai.co/cloudpod
Follow Up
01:27 Valkey is Rapidly Overtaking Redis
Valkey continues to rack up support: initial backers AWS, Ericsson, Google, Oracle, and Verizon have now been joined by Alibaba, Aiven, Heroku, and Percona.
Numerous blog posts have come out touting Valkey adoption.
I’m not sure this whole thing is working out as well as Redis CEO Rowan Trollope had hoped.
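One reason backers can swap over so freely: Valkey forked from the BSD-licensed Redis 7.2 codebase and keeps the same RESP wire protocol, so existing client libraries work against either server. As a minimal sketch (assuming RESP2 framing; the key and value are hypothetical), here is how a command gets encoded on the wire:

```python
def encode_command(*parts: str) -> bytes:
    """Frame a command as a RESP2 array of bulk strings."""
    # Array header first, then each argument as $<len>\r\n<bytes>\r\n.
    out = [f"*{len(parts)}\r\n".encode()]
    for part in parts:
        data = part.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)

# The exact same bytes would SET a key on either Redis or Valkey:
frame = encode_command("SET", "greeting", "hello")
print(frame)  # b'*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n'
```

Protocol compatibility is precisely what makes a fork like this low-friction for cloud providers to offer as a managed service.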
AI Is Going Great – Or How AI Makes All Its Money
03:26 Introducing Meta Llama 3: The most capable openly available LLM to date
Meta has launched Llama 3, the next generation of their state-of-the-art open source large language model.
Llama 3 will be available on AWS, Databricks, GCP, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, Nvidia NIM, and Snowflake, with support from hardware platforms offered by AMD, AWS, Dell, Intel, Nvidia, and Qualcomm.
Includes new trust and safety tools such as Llama Guard 2, Code Shield, and CyberSec Eval 2.
They plan to introduce new capabilities, including longer context windows, additional model sizes and enhanced performance.
The first two Llama 3 models are the 8B and 70B parameter variants, which can support a broad range of use cases.
Meta shared benchmarks of the Llama 3 8B model against Gemma 7B and Mistral 7B, showing improvements across all major benchmarks – including math, where Gemma 7B scored 12.2 vs. 30 for Llama 3 8B.
The 70B model performed highly comparably against Gemini Pro 1.5 and Claude 3 Sonnet, scoring within a few points of them on most benchmarks.
Jonathan recommends using LM Studio to get started playing around with LLMs, which you can find at https://lmstudio.ai/
04:42 Jonathan – “Isn’t it funny how you go from an 8 billion parameter model to a 70 billion parameter model, but nothing in between? Like, you would have thought there would be some middle ground, maybe? But… no.”
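If you want to follow Jonathan’s LM Studio tip, note that LM Studio can serve a downloaded model over an OpenAI-compatible local REST API (by default at http://localhost:1234/v1). A minimal sketch of building a chat request for it – the model id is a hypothetical placeholder for whichever Llama 3 build you download:

```python
import json

# LM Studio (https://lmstudio.ai/) can serve local models over an
# OpenAI-compatible REST API, by default at http://localhost:1234/v1.
# The model id below is a hypothetical placeholder.
payload = {
    "model": "llama-3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Explain llamas in one sentence."}
    ],
    "temperature": 0.7,
}
body = json.dumps(payload).encode()

# Uncomment to actually send the request (requires LM Studio's local
# server to be running with a model loaded):
# from urllib.request import Request, urlopen
# req = Request("http://localhost:1234/v1/chat/completions", data=body,
#               headers={"Content-Type": "application/json"})
# print(urlopen(req).read().decode())
print(len(body), "bytes ready to send")
```

Because the API shape mirrors OpenAI’s chat completions endpoint, most existing OpenAI client code can be pointed at the local server with just a base-URL change.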
Begun, The Custom Silicon Wars Have
Welcome to episode 256 of the Cloud Pod podcast – where the forecast is always cloudy! This week your hosts Justin and Matthew are here to catch you up on all the news you may have missed while Google Next was going on. We’ve got all the latest news on the custom silicon hot war that’s developing, some secret sync, drama between HashiCorp and OpenTofu, and one more Google Next recap – plus much more in today’s episode. Welcome to the Cloud!
Titles we almost went with this week:
I have a Google Next sized hangover
Claude’s Magnificent Opus now on AWS
US-EAST-1 Gets called Reliable; how insulting
The cloud pod flies on a g6
A big thanks to this week’s sponsor:
Check out Sonrai Security’s new Cloud Permission Firewall. Just for our listeners, enjoy a 14-day trial at www.sonrai.co/cloudpod
General News
Today, we get caught up on the other clouds from last week, and other news (besides Google, that is). Buckle up.
04:11 OpenTofu Project Denies HashiCorp’s Allegations of Code Theft
After our news cutoff before Google Next, HashiCorp issued a strongly worded cease-and-desist letter to the OpenTofu project, alleging that the project has “repeatedly taken code Hashi provided under the BSL and used it in a manner that violates those license terms and Hashi’s intellectual properties.”
It notes that in some instances, OpenTofu has incorrectly re-labeled HashiCorp’s code to make it appear as if it was originally made available by Hashi under a different license.
Hashi gave them until April 10th to remove any allegedly copied code from the OpenTofu repo, threatening litigation if the project failed to do so.
OpenTofu struck back – and they came with receipts!
They deny that any BSL-licensed code was incorporated into the OpenTofu repo, saying that any code they copied came from the MPL-licensed version of Terraform.
“The OpenTofu team vehemently disagrees with any suggestions that it misappropriated, mis-sourced or misused Hashi’s BSL code. All such statements have zero basis in facts” — OpenTofu Team
OpenTofu showed that the code they were accused of lifting from the BSL codebase was actually present in the MPL version, and had been copied from that older version into the BSL codebase by a Hashi engineer.
Anticipating that third-party contributors might submit BSL Terraform code, unwittingly or otherwise, OpenTofu instituted a “taint team” to compare Terraform and OpenTofu pull requests.
If a PR is found to be in breach of intellectual property rights, the pull request is closed and the contributor is barred from working on that area of the code in the future.
Matt Asay (from MongoDB), writing for InfoWorld, dropped a hit piece when the C&D was filed, but then…
Guess What’s Google Next? AI, AI, and Some More AI!
Welcome to episode 255 of the Cloud Pod podcast – where the forecast is always cloudy! This week your hosts, Justin, Jonathan, Matthew and Ryan are here to tackle the aftermath of Google Next. Whether you were there or not, sit back, relax, and let the guys dissect each day’s keynote and the major announcements.
Titles we almost went with this week:
How About Some AI?
“The New Way to Cloud” is a Terrible TagLine (and is what happens when you let AI do your copy)
Welcome Google Cloud Next Where There is No Cloud, Just AI
Ok Google, did your phone go off?
For 100 dollars, guess how many AI stories Google Has This Week
From Search to Skynet: Google Cloud Next’s Descent into AI Madness
‘Next’ Up from Google – AI!
Have Some Conference with Your AI
A big thanks to this week’s sponsor:
We’ve got a new sponsor! Sonrai Security
Check out Sonrai Security’s new Cloud Permission Firewall. Just for our listeners, enjoy a 14-day trial at sonrai.co/cloudpod
GCP – Google Next 2024
We’re jumping right into GCP this week, so we can talk about all things Google Next.
01:44 First impressions: Vegas > Moscone, so take that, Vegas.
Both Ryan and Justin agree that Vegas is much better than the Moscone Center in San Francisco for Google Next.
The sessions were well organized, but Ryan is a little tired from walking back and forth between them. Exercise is tiring!
Vegas infrastructure was well utilized, something Amazon didn’t do as well.
Folks staying at area hotels that *weren’t* Mandalay Bay had some issues with trying to get onto / off property at the beginning and end of the day.
Free coffee is still available… *if you can find it*.
Expo hall felt cramped
08:22 Thoughts on the Keynote Address
Note: not enough space in the arena for keynotes; the arena holds approx. 12k, while numbers released by Google say there were 30k in attendance.
Thomas Kurian kicked off the keynote and introduced their new tagline, “The New Way to Cloud.”
Sundar: Months can feel like decades in the cloud… WORD.
$36B revenue run rate
Kurian did a rapid fire announcement of all the things coming – which required Justin to rewatch just to get them all.
A3 Mega Nvidia H100 GPUs
Nvidia GB200 NVL72 (in early 2025)
TPU v5p GA
Hyperdisk ML for Inference
Cloud Storage Fuse Caching GA
Parallel Store Caching
AI Hypercomputer
Dynamic Workload Scheduler
Nvidia GPU support for GDC (Google Distributed Cloud)
GKE Enterprise for GDC
AI Models on GDC
Vector Search on GDC
Vertex AI Solutions with GDC
Secret and Top Secret…
Sonrai Security with Sandy Bird
A bonus episode of The Cloud Pod may be just what the doctor ordered, and this week Justin and Jonathan are here to bring you an interview with Sandy Bird of Sonrai Security. There’s so much going on in the IAM space, and we’re really happy to have an expert in the studio with us this week to talk about some of the security least privilege specifics.
Background
Sonrai (pronounced Son-ree, which means data in Gaelic) was founded in 2017. Sonrai provides Cloud Data Control, and seeks to deliver a complete risk model of all identity and data relationships, which includes activity and movement across cloud accounts, providers, and third party data stores.
Meet Sandy Bird, Co-founder of Sonrai Security
Sandy is the co-founder and CTO of Sonrai, and has a long career in the tech industry. He was the CTO and co-founder of Q1 Labs, which was acquired by IBM in 2011, and helped to drive IBM security growth as CTO for global business security there.
Interview Notes:
One of the big questions we start the interview with: how has IAM evolved – and what effect have those changes had on identity models? Enterprises want things to be least privilege, but historically it was hard to find the logs. In cloud, however, *most* things are logged – and so least privilege became an option.
Sonrai offers the first cloud permissions firewall, which enables one-click least privilege management – important in the current environment, where the platforms operate so differently from each other. With this solution you get better control of your cloud access: limit your permissions and attack surface, and automate least privilege – all without slowing down DevOps.
Is the perfect policy achievable? Sandy breaks it down between human identities and workload identities; they’re definitely separate. For workload identities, he claims, the perfect policy is probably possible. Human identity is hugely sporadic; however, it’s important to at least try to get to that perfect policy, especially when dealing with sensitive information. One of the more interesting data points they found was that less than 10% of identities with sensitive permissions actually used them – and you can use that information to balance actually handing out permissions versus a one-time use case.
Sonrai spent a lot of time looking at new solutions to problems with permissions; part of this includes purpose-built integration, offering a flexible open GraphQL API with prebuilt integrations.
Sonrai also offers continuous monitoring; providing ongoing intelligence on all the permission usage – including excess permissions – and enables the removal of unused permissions without any sort of disruptions. Policy automation automatically writes IAM policies tailored to access needs, and simplifies processes for teams.
On-demand access is another tool, letting normally restricted permissions be requested on demand through a quick and efficient process.
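To make the least-privilege discussion concrete, here is a minimal sketch of an AWS IAM policy scoped down to one action, one resource, and one source network. The bucket name and CIDR are hypothetical placeholders for illustration, not anything Sonrai ships:

```python
import json

# Least privilege in miniature: allow exactly one action (s3:GetObject),
# on one resource, and only from one network range. Condition keys such
# as aws:SourceIp narrow *when* the allow applies, not just what it allows.
# Bucket name and CIDR below are hypothetical placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyFromCorpNetwork",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Hand-writing policies like this per identity is exactly the toil that tooling in this space tries to automate away.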
Quotes from today’s show
Sandy: “The unbelievably powerful model in AWS can do amazing things, especially when you get into some of the advanced conditions – but man, for a human to understand what all this stuff is, is super hard. Then you go to the Azure model, which is very different. It’s an allow-first model. If you have an allow anywhere in the tree, you can do whatever is asked, but there’s this hierarchy to the whole thing, and so when you think you want to remove something you may not even be removing it, because something above may have that permission anyway. It’s a whole different model to learn there.”
Sandy: “Only like 8% of those identities…”
The Cloud Pod Offers Therapy Sessions to AIs With Trust Issues
Welcome to episode 254 of the Cloud Pod podcast – where the forecast is always cloudy! This week we’re talking about trust issues with some security updates over at Azure, forking drama at Redis, and making all of our probably terrible predictions for Google Next. Going to be in Vegas? Find one of us and get a sticker for your favorite cloud podcast! Follow us on Slack and Twitter to get info on finding your favorite host IRL. (Unless Jonathan is your favorite. We won’t be giving directions to his hot tub.)
Titles we almost went with this week:
The Cloud Pod Hosts Fail To Do Their Homework
The Cloud Pod Now Has a Deadline
This Is Why I Love Curl … EC2 Shop Endpoint is Awesome
AI & Elasticsearch… AI – But Not Like That
Preparing for Next Next Week
A big thanks to this week’s sponsor:
We’ve got a new sponsor! Sonrai Security
Check out Sonrai Security’s new Cloud Permission Firewall. Just for our listeners, enjoy a 14-day trial at www.sonrai.co/cloudpod
Follow Up
02:15 AWS, Google, Oracle back Redis fork “Valkey” under the Linux Foundation
To no one’s surprise, placeholderkv is now backed by AWS, Google, and Oracle, and has been rebranded as Valkey under the Linux Foundation.
Interestingly, Ericsson and Snap Inc. also joined Valkey.
03:19 Redis vs. the trillion-dollar cabals
Anytime an open source company changes its license, AWS and other cloud providers are blamed for not contributing enough upstream.
Matt Asay, from Infoworld, weighs in this time.
The fact that placeholderkv/Valkey was forked by several AWS employees who were core contributors to Redis does seem to imply that they’re doing more than nothing.
I should point out that Matt Asay also happens to run Developer relations at MongoDB. Pot, meet kettle.
04:14 Ryan – “It’s funny because I always feel like the cloud contribution to these things is managed services around them, right? It’s not necessarily improvements to the core source code. It’s more management of that source code. Now there are definitely areas where they do make enhancements, but I’m not sure the vast majority makes sense to be included in an open source made for everyone product either.”
General News
07:01 What we know about the xz Utils backdoor that almost infected the world
The open source community was a bit shocked when a Microsoft developer revealed a backdoor had been intentionally planted in xz Utils, an open source data compression utility available on almost all installations of Linux and other Unix-like OSes.
The person – or people – behind this project…
Oracle Autonomous Database is the OG Dad Joke
Welcome to episode 253 of the Cloud Pod podcast – where the forecast is always cloudy! Justin, Ryan, and Jonathan are your hosts this week as we discuss data centers, OCI coming in hot (and potentially underwater?) in Kenya, stateful containers, and Oracle’s new globally distributed database (Oracle Autonomous Database) of many dollars. Sit back and enjoy the show!
Titles we almost went with this week:
The Cloud Pod: Transitioning to SSPL – Sharply Satirical Podcast Laughs!
The Data Centers of Loudoun County
The Forks of Redis were Speedb
AWS, I’d Like to Make a Return, Please
See…Stateful Containers Are a Thing
Azure Whispers Sweet Nothings to You
I’m a Hip OG-DAD
Legacy Vendor plus Legacy Vendor = Profit $$
Wine Vendors >Legacy Vendors
I’m Not a Regular Dad, I’m an OG Dad
A big thanks to this week’s sponsor:
We’re sponsorless this week! Interested in sponsoring us and having access to a specialized and targeted market? We’d love to talk to you. Send us an email or hit us up on our Slack Channel.
Follow Up
02:25 Microsoft Agreed to Pay Inflection $650 Million While Hiring Its Staff
Listener Note: Paywall article
Last week, we talked about Microsoft hiring Inflection co-founder Mustafa Suleyman and their chief scientist, as well as most of the 70-person staff.
Inflection had previously raised $1.5B, so this all seemed strange as part of its shift to an “AI studio” – a company that helps others train AI models.
Now it has been revealed that Microsoft agreed to pay a $620M licensing fee, plus $30M to waive any legal rights related to the mass hiring, and renegotiated a $140M line of credit aimed at helping Inflection finance its operations and pay for the Microsoft services.
03:22 Justin – “…that explains the mystery that we talked about last week for those who were paying attention.”
General News
05:17 Redis switches licenses, acquires Speedb to go beyond its core in-memory database
Redis, one of the popular in-memory data stores, is switching away from its open source three-clause BSD license.
Instead, it is adopting a dual-licensing model: the Redis Source Available License (RSALv2) and the Server Side Public License (SSPLv1).
Under the new license, cloud service providers hosting Redis will need to enter into a commercial agreement with Redis. The first company to do so was Microsoft.
Redis also announced the acquisition of Speedb (speedy-bee) to take it beyond the in memory space.
This isn’t the first time that Redis has changed the licensing model.
In 2018 and 2019, it changed the way it licensed Redis Modules under the Redis Source Available License v1.