LogiCast AWS News

Logicata

LogiCast, brought to you by Logicata, is a weekly AWS News podcast hosted by Karl Robinson, CEO and Co-Founder of Logicata, and Jon Goodall, Lead Cloud Engineer. Each week we hand-pick a selection of news articles on Amazon Web Services (AWS) - we look at what’s new, technical how-to, and business-related news articles and take a deep dive, giving commentary, opinion, and a sprinkling of humor. Please note this is the audio-only version of LogiCast. If you would like the video version, please check out https://logicastvideo.podbean.com/

  1. 1H AGO

    Season 5 Episode 17 - Quick Desktop, Microcredentials and Maintenance Mode

    In Season 5, Episode 17, Karl and Jon are joined by AWS Community Builder Gabriel Torres for a wide-ranging discussion on the latest AWS and cloud industry updates. They cover the launch and key features of Amazon Quick Desktop AI Assistant, new AWS Training and Certification microcredentials, and recent AWS service end-of-life announcements, including WorkMail, App Runner, and selected Comprehend and Rekognition features. The episode also explores Amazon Q Developer’s end-of-support announcement, the move toward Codeium, and changes to the Microsoft-OpenAI exclusivity agreement and what they could mean for AWS and other cloud providers. And, of course, Karl found time to share one of his favourite dad jokes.

    04:25 - Amazon Quick Desktop AI Assistant
    Amazon has launched Quick Desktop, an AI assistant for desktop tasks that connects to email, calendar, files, Slack, Jira, and the Codeium CLI. Running on Amazon Bedrock, it keeps data within AWS to support compliance needs. After a free trial, pricing starts at $20 per user per month, billed annually, plus a $250 organization infrastructure fee. Quick offers strong integrations and an intuitive markdown-based interface, but its success remains uncertain in a competitive AI assistant market.

    16:55 - AWS Training and Certification Updates - April 2025
    AWS has introduced microcredentials as a hands-on certification pathway through Skill Builder, with practical labs in Serverless, Agentic AI, Networking, and Incident Response. Unlike traditional exams, they require no test center or subscription and focus on realistic scenarios. The format fills gaps such as serverless, includes a 30-minute preview and a three-week retake wait, and supports learners seeking practical, specialized AWS skills at lower cost.

    24:25 - AWS Ends WorkMail and Moves App Runner to Maintenance Mode
    AWS is discontinuing or moving several low-adoption services into maintenance mode, including WorkMail, App Runner, RDS Custom for Oracle, and selected Comprehend and Rekognition features. The shift marks a clearer focus on high-adoption services and consolidation around alternatives such as Bedrock and ECS. While viable replacements exist, the App Runner change may especially affect developers who valued its simplicity over managing ECS directly.

    30:28 - Amazon Q Developer End of Support Announcement
    Amazon Q Developer is being phased out in 2027 as AWS consolidates its AI coding strategy around Codeium. Q Developer struggled with adoption, hallucinations, and confusing “Q” branding, while Codeium offers clearer positioning, stronger code generation, and agentic, spec-driven development. Console and Teams features will remain, but IDE plugins and CLI tools are being discontinued, requiring subscribers to migrate to Codeium.

    38:16 - Microsoft-OpenAI Exclusivity Terms Change
    Microsoft has modified its OpenAI exclusivity agreement, allowing OpenAI to work with AWS and Google Cloud. This removes Azure-only routing constraints, enabling more native integrations, lower latency, and simpler infrastructure across AWS and GCP. AWS could benefit through smoother OpenAI model access in services like Lambda and ECS, though regulated industries may still face privacy and compliance barriers.
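    The quoted Quick Desktop pricing can be sanity-checked with a quick back-of-envelope calculation. This is a minimal sketch with a hypothetical helper name; it assumes the $250 organization infrastructure fee is charged once per year, which the episode does not actually specify.

```python
def annual_quick_desktop_cost(users: int, per_user_month: int = 20, org_fee: int = 250) -> int:
    """Rough annual cost: $20 per user per month (billed annually)
    plus a $250 organization fee, assumed here to be yearly."""
    return users * per_user_month * 12 + org_fee

# A hypothetical 25-seat team:
print(annual_quick_desktop_cost(25))  # → 6250
```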

    44 min
  2. 6D AGO

    Season 5, Episode 16: Lambda's Long Game, Claude's Complexity, and the AI Adoption Gap

    In Season 5, Episode 16, Karl and Jon are joined by Taylor Dolezal, Head of Open Source at Dosu, to discuss AWS Lambda’s S3 file system integration, Anthropic Claude Opus 4.7 arriving in Amazon Bedrock, Amazon Q’s cost management capabilities for FinOps, Microsoft’s £2.8 billion UK licensing lawsuit, and the latest AI adoption statistics and implementation challenges, before taking a nostalgic tangent into early-noughties personal digital assistants...

    03:13 - AWS Lambda Mounts S3 Buckets as File Systems
    AWS Lambda can now mount S3 buckets as file systems via S3 Files, reducing the need for separate EFS infrastructure when working with large files already in S3. This simplifies use cases like image processing, user-generated content, Lambda Durable Functions, and multi-step AI workflows. Jon noted that while the feature reduces engineering complexity, pricing remains hard to predict. It also does not resolve some S3 limitations, such as folder recreation issues, and its usefulness depends on Lambda capacity provider configuration.

    11:47 - Claude Opus 4.7 Available in Amazon Bedrock
    Anthropic’s Claude Opus 4.7 is now available in Amazon Bedrock, continuing Anthropic’s rapid model release cycle. The model offers new capabilities, but users may need to adjust prompts and implementations to see improvements. The discussion noted that newer models do not always outperform older ones in real-world use. Some teams still prefer Claude 4.6 for reliability. Taylor highlighted that performance can vary by infrastructure, such as Nvidia chips versus TPUs, and that enterprise tools for evaluating model cost, consistency, and behavior remain immature. Concerns were also raised about models detecting test environments and optimizing for metrics in unintended ways.

    20:19 - Amazon Q Adds FinOps Cost Management
    Amazon Q is now integrated into AWS Cost Explorer, letting users ask natural language questions about AWS spending without manually building reports. This makes cost insights more accessible to business users and non-technical stakeholders. Taylor emphasized that cost predictability is critical for planning, based on experience at the Linux Foundation and Disney. While the feature lowers the barrier to cost analysis, it currently focuses on ad-hoc questions rather than recurring reports, dashboards, or automated alerts. Jon suggested future improvements such as monthly recurring analysis and natural language dashboard creation.

    26:21 - Microsoft Faces UK Cloud Licensing Lawsuit
    Microsoft is facing a £2.8 billion UK lawsuit alleging anti-competitive cloud licensing practices, claiming Microsoft software such as Windows Server and SQL Server costs more on rival clouds than on Azure. Jon noted that while Azure pricing advantages may be expected, Microsoft has not clearly explained how price differences are calculated. Microsoft argues damages are difficult to assess because cloud providers do not separate licensing from infrastructure costs. Taylor shared that Disney had to migrate SQL Server workloads after Microsoft restricted bring-your-own-license options, creating major engineering effort. The case could force more transparency in cloud licensing, though changes would likely take time.

    34:12 - AI Adoption Still Mostly Basic
    AWS research presented at the London Summit found that 64% of UK organizations have adopted AI, but only 25% of those use it at an advanced level, equal to about 16% of all businesses. The discussion questioned what counts as “basic” versus “advanced” AI use, such as whether Office 365 Copilot or code generation qualifies. Jon stressed the importance of definitions. Taylor argued that the biggest barriers are governance, compliance, and unclear use cases rather than skills alone. Key challenges include data residency, government requirements, supply chain security, rapid technology change, and likely future cost increases as VC subsidies decline.
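    The appeal of a file-system interface over S3 is that application code uses ordinary POSIX file I/O instead of S3 API calls. A minimal sketch of what that looks like in practice; the mount point here is simulated with a temporary directory so the snippet runs anywhere, whereas in Lambda it would be whatever mount path the function is configured with.

```python
import os
import tempfile

# Stand-in for a mounted bucket, e.g. a path like /mnt/my-bucket in Lambda.
# (Simulated with a temp dir here; the real mount path is an assumption.)
mount = tempfile.mkdtemp()

path = os.path.join(mount, "uploads", "report.txt")
os.makedirs(os.path.dirname(path), exist_ok=True)  # "folders" in S3 are just key prefixes

with open(path, "w") as f:   # a write becomes an object PUT via the file layer
    f.write("processed\n")

with open(path) as f:        # a read becomes an object GET via the file layer
    content = f.read()
```

Note that, as the episode discusses, the file layer does not remove S3's own quirks (such as folder recreation behaviour) and the effective cost of the underlying requests remains hard to predict.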

    50 min
  3. APR 21

    Season 5 Episode 15: Interconnect, Migrations, and Modular Data Centers

    In Season 5, Episode 15, Karl and Jon are joined by Damien Jones, an AWS Community Builder, to discuss AWS Interconnect, now generally available for multi-cloud connectivity with Google Cloud Platform, with Azure and Oracle Cloud coming later; database migration acceleration using Kiro and Amazon Bedrock Agent Core to speed up migrations to Amazon Aurora DSQL; Project Glasswing, Anthropic’s restricted-preview model for detecting AI-driven cyberattacks and identifying vulnerabilities; Amazon’s AI revenue, with the CEO revealing $15 billion in annualized AI services revenue, roughly 10% of AWS’s run rate; and Project Houdini, AWS’s initiative using prefabricated modular data centers to accelerate construction timelines. And, of course, the guys got excited about the prospect of a Lidl cloud platform...

    07:40 - AWS Interconnect - Multi-Cloud Connectivity
    AWS has announced the general availability of AWS Interconnect, a dedicated service for connecting AWS with other cloud providers more reliably and efficiently than VPNs. It currently supports Google Cloud, with Azure and Oracle Cloud expected by late 2026. Pricing depends on capacity and distance, starting around $90,000 per month for 10 Gbps between nearby regions and rising to nearly $400,000 for longer cross-region links. AWS has also open-sourced the specification on GitHub to encourage broader adoption. The service removes unpredictable internet egress fees and guarantees capacity, making it most relevant for large enterprises with hybrid or multi-cloud environments. Still, it is a premium solution for moving data between clouds, not for reducing multi-cloud complexity itself.

    17:22 - Accelerating Database Migration with Kiro and Bedrock Agent Core
    AWS shared a technical guide showing how Kiro and Amazon Bedrock Agent Core can speed up schema analysis for migrations to Amazon Aurora DSQL. The approach helps identify schema mapping needs and compatibility issues early, reducing the need for deep migration expertise during planning. But the discussion raised concerns about production readiness: it depends on persistent Kiro CLI sessions that lose in-memory analysis if interrupted, forcing a restart, and it lacks the real-time observability of native AWS DMS tools. While useful for proof-of-concept work and easing upfront analysis, the panelists were cautious about recommending it for production migrations without stronger persistence and observability. More broadly, they noted that AI-driven “faster” database migration tooling is part of a familiar cycle, while the core migration challenges remain largely the same.

    27:06 - Project Glasswing - AI-Driven Cybersecurity Tool
    Anthropic launched Project Glasswing, a restricted-preview model aimed at detecting and preventing AI-driven cyberattacks by finding software vulnerabilities. It reportedly uncovered thousands of critical bugs in core internet infrastructure, including projects like FFmpeg and OpenSSL, often maintained by very small teams. Access is limited to about 40 organizations, which sparked debate over publicly promoting a powerful tool that few can use. The discussion raised concerns about a two-tier security landscape, possible future “Glasswing scan” requirements in cyber insurance, and broader AI safety issues as models grow more capable. While restricting access to dangerous tools may be sensible, the panelists argued that the public hype creates perverse incentives and could let a small group of firms charge premium prices for exclusive access.

    38:29 - Amazon AI Revenue and Investment Strategy
    Amazon CEO Andy Jassy said in the annual shareholder letter that AWS AI services are now generating more than $15 billion in annualized revenue, about 10% of AWS’s total run rate. Amazon has committed $200 billion in capital spending for data centers and AI chips, reflecting strong demand for specialized infrastructure. Jassy also noted that two major AWS customers asked for exclusive access to all Graviton capacity in 2026, a request Amazon declined to avoid limiting other customers. The letter underscored the strategic value of AWS’s AI and chip business, with discussion pointing to a more disciplined approach: Amazon is now securing customer demand before building capacity, with production already committed into 2027 and 2028. While the ROI horizon is still long given the scale of spending, demand and adoption appear to be accelerating.

    45:16 - Project Houdini - Prefabricated Data Center Construction
    AWS announced Project Houdini, which uses prefabricated modular data center units, or “skids,” to speed up data center construction and AI infrastructure deployment. While the idea of prefabrication is not new, AWS is standardizing it at scale to cut build times. The panelists noted the bigger constraint is power, not construction: aging UK and European grids are already under strain and often cannot support modern data center demand. That has pushed companies like Amazon, Google, and Microsoft to tackle infrastructure problems typically handled by governments. Amazon is exploring options such as small modular nuclear reactors, but broader questions remain about whether private companies should be solving national power challenges. One advantage of the modular approach is portability, allowing deployment in areas where power is available.
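    To put the quoted Interconnect entry price in perspective, a rough effective cost per gigabyte can be derived from the figures in the episode. This is a back-of-envelope sketch that assumes the 10 Gbps link is fully utilized for a 30-day month (a best case; real utilization would push the per-GB cost higher) and uses decimal gigabytes.

```python
# $90,000/month for 10 Gbps between nearby regions, per the episode.
gbps = 10
seconds_per_month = 30 * 24 * 3600            # 2,592,000 seconds
gb_per_month = gbps / 8 * seconds_per_month   # 3,240,000 GB at full utilization
cost_per_gb = 90_000 / gb_per_month

print(round(cost_per_gb, 4))  # → 0.0278
```

At full utilization that is under $0.03/GB, which helps explain why the service can "remove unpredictable internet egress fees" yet still only make sense for large enterprises moving sustained, high volumes of data.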

    52 min
  4. APR 13

    Season 5 Episode 14: S3 Files, Kubernetes Scaling, and the SaaSpocalypse

    In Season 5, Episode 14, Karl and Jon are joined by Destiny Erhabor, an AWS Community Builder, to discuss the launch of S3 Files, AWS’s new file system interface for S3 buckets that provides POSIX-compliant access to S3 data through a cached file system layer. They also cover EKS Managed Node Groups with EC2 Auto Scaling Warm Pools, a new feature that simplifies Kubernetes cluster auto-scaling and reduces operational complexity; the ongoing AWS Middle East data center disruptions caused by drone strikes, including full-month service credits and emergency restoration efforts; AWS’s AI investment strategy, including its simultaneous investments in Anthropic and OpenAI and how that positions it against Amazon Nova models; and the broader AI hype cycle, including whether AI could disrupt SaaS business models in a so-called “SaaSpocalypse” and what kind of real ROI companies are actually seeing from AI investments. And, for the record, no crimes were committed during the recording of this podcast.

    03:19 - S3 Files Launch - Making S3 Buckets Accessible as File Systems
    AWS's new file system interface for S3 buckets, providing POSIX-compliant access to S3 data through a cached file system layer

    15:54 - EKS Managed Node Groups Now Support EC2 Auto Scaling Warm Pools
    New feature simplifying Kubernetes cluster auto-scaling and reducing operational complexity

    22:26 - AWS Teams Working Round-the-Clock to Restore Middle East Region Services Following Drone Strikes
    Ongoing impact of drone strikes on Middle East regions, including full-month service credits and emergency restoration efforts

    31:08 - AWS CEO Matt Garman Defends Simultaneous Multi-Billion Dollar Investments in Anthropic and OpenAI
    Discussion of AWS's simultaneous investments in Anthropic and OpenAI, and competitive positioning with Amazon Nova models

    37:01 - AWS CEO Addresses AI "SaaSpocalypse" Concerns at Human X Conference
    Debate over whether AI will disrupt SaaS business models and discussion of genuine ROI from AI investments

    44 min
  5. APR 8

    Season 5 Episode 13: Agents, Instances, and Supply Chain Attacks

    In Season 5, Episode 13, Karl and Jon discuss a packed lineup of AWS news, including the general availability of AWS DevOps Agent with autonomous incident response capabilities, support for EC2 instance store in Amazon ECS Managed Instances for latency-sensitive workloads, and the introduction of managed daemons for managed instances, similar to Kubernetes DaemonSets. They also cover how to build high-performance applications with AWS Lambda managed instances, a migration guide for moving from Amazon ElastiCache for Redis to ElastiCache for Valkey, and the European Commission data breach involving a compromised AWS account through a supply chain attack on Aqua Security’s Trivy vulnerability scanner. And along the way, the guys realize that Karl’s muscle memory for intro titles is apparently so bad, he could probably forget his own name if he took a week off.

    03:24 - AWS DevOps Agent General Availability and Autonomous Incident Response
    AWS DevOps Agent has officially moved from preview to general availability. This service acts as an autonomous incident investigation tool that can analyze logs, telemetry, and infrastructure metrics to help teams understand what's going wrong during incidents. Rather than replacing human SREs, it accelerates the investigation phase by correlating data from multiple sources (CloudWatch logs, monitoring tools, error messages) and reducing the time spent in manual troubleshooting. The tool can be integrated with existing monitoring platforms like PagerDuty, Datadog, New Relic, and Grafana. It supports "skills" (essentially runbooks or if-then rules) that can be customized for known failure patterns specific to an organization's infrastructure. Currently in GA, it can perform investigations but cannot yet execute remediation actions, though this is expected as a future capability. Notable customers in production include Western Governors University, ZenChef, T-Mobile, and Granola.
    A companion article provides a practical walkthrough for implementing DevOps Agent in AWS environments to handle incident response workflows. It demonstrates how to set up the integration between incident management systems and DevOps Agent, allowing automated investigation workflows to be triggered when alerts fire. The article shows bidirectional integration with services like PagerDuty (which can feed alerts into DevOps Agent) and Slack (for notifications), and outbound capabilities to create incidents or update existing ones. The key value proposition is that the tool can handle approximately 80% of the incident investigation burden - the time-consuming process of correlating logs, metrics, and events - while human engineers remain responsible for decision-making and remediation approvals.

    14:44 - Amazon ECS Managed Instances Support for EC2 Instance Store and Amazon ECS Managed Daemons for Managed Instances
    Amazon ECS Managed Instances now supports EC2 instance store volumes, which are high-performance local storage options connected directly to physical instances. Instance store provides lower latency than EBS volumes since it's attached directly to the hardware rather than accessed over a network. This feature is primarily useful for highly latency-sensitive containerized workloads that require extremely fast disk access. While the number of use cases for this is relatively niche, it enables scenarios where applications need local, high-speed temporary storage without the network latency overhead of EBS volumes. This represents one of several enhancements to ECS Managed Instances announced recently.
    ECS Managed Instances now also supports managed daemons, a capability analogous to Kubernetes DaemonSets. This feature ensures that exactly one instance of a specified container runs on every node in an ECS cluster. This is particularly useful for system-level services that need to be present on all instances - such as monitoring agents (New Relic, Datadog), log collectors, or security scanning tools. Previously, this functionality was available for traditional self-managed EC2 compute but was missing from managed instances. The feature automatically scales with cluster size: adding a new instance to the cluster automatically deploys the daemon, and removing an instance removes it accordingly. This brings ECS Managed Instances to feature parity with self-managed EC2 deployments for daemon-like workloads.

    20:10 - Building High-Performance Apps with AWS Lambda Managed Instances
    AWS has published guidance on using Lambda managed instances for high-performance computing scenarios. Lambda managed instances allow developers to run Lambda functions on dedicated EC2 instances that AWS manages, providing higher resource availability than traditional Lambda. This hybrid approach enables use cases requiring consistent high CPU capacity, GPU access, or sustained high concurrency that traditional Lambda (which has memory/CPU scaling limits) cannot efficiently support. However, this represents a shift from Lambda's original value proposition of serverless simplicity. The article frames this as a solution for specialized scenarios where traditional Lambda's constraints become limiting, though experts note this use case may better serve customers who already understand their infrastructure needs, and that the distinction between Lambda managed instances and containerized solutions like Fargate becomes increasingly blurred.

    25:00 - Migrating to Amazon ElastiCache for Valkey from Redis
    This AWS database blog article provides best practices for migrating from Amazon ElastiCache for Redis to ElastiCache for Valkey. Valkey is an open-source Redis fork, stewarded by the Linux Foundation and backed by AWS, that aims to provide API compatibility with Redis while offering approximately 30% cost savings. The article presents a real-world case study of a global travel technology company that successfully migrated, achieving significant cost reduction (approximately $200/day savings) with minimal downtime and only brief periods of slightly elevated latency. The migration can be performed using in-place upgrades or snapshot-based migration approaches. AWS provides console-based one-click migration tools, though for production workloads, testing thoroughly in staging environments first is recommended. The key appeal is that Valkey maintains feature parity with open-source Redis while reducing costs, making it an attractive option for organizations with substantial caching infrastructure investments.

    31:25 - European Commission Data Breach via Supply Chain Attack
    A data breach affected the European Commission's AWS environment, resulting in the theft of approximately 350 gigabytes of data from multiple databases. The root cause was not an AWS vulnerability but rather a compromise of the Commission's API keys through a supply chain attack. Specifically, hackers gained access to sensitive credentials through a GitHub Actions workflow vulnerability in Aqua Security's Trivy vulnerability scanner. This compromise led to malicious code being distributed, which allowed attackers to extract the Commission's AWS API keys. This incident exemplifies the broader cybersecurity trend of supply chain attacks, where adversaries find it easier to compromise upstream dependencies than to directly breach well-hardened targets. The incident underscores that cloud security relies heavily on customer credential management and that zero-day vulnerabilities in widely-used tools can have cascading effects across organizations using those tools.
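    The Valkey case study's numbers are worth annualizing. A minimal sketch using the two figures quoted above ($200/day saved, ~30% cheaper than Redis); the implied prior spend is an inference from those two numbers, not a figure from the case study itself.

```python
# Figures quoted in the episode's Valkey migration case study.
daily_saving = 200                    # dollars saved per day
annual_saving = daily_saving * 365    # naive annualization: $73,000/year

# If the saving really is ~30% of the old bill, the prior daily spend
# on ElastiCache for Redis would have been roughly:
implied_prior_daily_spend = daily_saving / 0.30   # ≈ $667/day (inferred)
```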

    38 min
  6. MAR 24

    Season 5 Episode 12: Buckets, Chips, and Legal Quips

    In Season 5, Episode 12, Karl and Jon are joined by Farah Abdirahman, an AWS Community Builder, to discuss Amazon S3’s new account regional namespaces for general purpose buckets, deploying AWS applications and accessing AWS accounts across multiple regions with IAM Identity Center, AWS and NVIDIA deepening their strategic collaboration to accelerate AI, celebrating 20 years of Amazon S3, and Microsoft reportedly considering legal action over the recent $50 billion Amazon-OpenAI cloud deal. Then, just when things couldn’t get any more unexpected, the conversation took a turn toward the smell of Jon’s feet — and let’s just say the guys really put their foot in it.

    07:28 - Amazon S3 Account Regional Namespaces
    This feature allows S3 bucket names to be unique within an account and region, rather than globally. This change simplifies bucket naming conventions and addresses long-standing challenges with global uniqueness requirements. The impact is significant for daily operations and resource management in S3.

    13:59 - AWS IAM Identity Center Multi-Region Deployment
    AWS now offers multi-regional replication for IAM Identity Center, enabling users to access applications and accounts across multiple regions. This feature enhances resilience and reduces the need for break-glass setups. It also supports integration with external identity providers like Okta and Microsoft Entra ID.

    21:05 - AWS-Nvidia AI Collaboration
    AWS plans to deploy at least a million Nvidia chips in their regions this year to accelerate AI deployment. This partnership raises questions about AWS's own chip development efforts and highlights the increasing demand for AI-capable hardware. The collaboration also includes expanded support for Nvidia NeMo models on Amazon Bedrock.

    26:25 - Amazon S3 20th Anniversary
    S3 celebrated its 20th anniversary, highlighting impressive statistics such as 500+ trillion objects stored, 11 nines of durability, and 200+ million requests per second. The service continues to evolve and remains a cornerstone of AWS's offerings, with new features and improvements still being developed.

    37:04 - Microsoft-Amazon-OpenAI Legal Dispute
    Microsoft is considering legal action over the recent $50 billion deal between Amazon and OpenAI. The dispute centers on whether OpenAI can offer certain services without violating its previous agreement with Microsoft. This situation highlights the intense competition and large sums of money involved in the AI industry.
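    The "11 nines" durability figure becomes more concrete when combined with the object count above. A back-of-envelope sketch, assuming the standard reading of 11 nines (annual object-loss probability of 10^-11 per object) and a naive independence model, which is for intuition only and is not how AWS actually characterizes durability:

```python
# 11 nines = 99.999999999% annual durability per object,
# i.e. a loss probability of 1e-11 per object per year.
annual_loss_prob = 1e-11
objects = 500 * 10**12   # the 500+ trillion objects cited in the episode

# Naive expectation, assuming independent losses:
expected_losses_per_year = objects * annual_loss_prob
print(round(expected_losses_per_year))  # → 5000
```

Even at eleven nines, the sheer object count means thousands of expected losses per year under this toy model, which is why durability claims rest on replication and repair design rather than raw probability alone.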

    44 min
  7. MAR 16

    Season 5 Episode 11: Astro Datacenters, AMI Lineage, and AI Coding Concerns

    In Season 5, Episode 11, Karl and Jon are joined by Dmytro Sirant, AWS Community Builder and User Group Leader from Australia, to discuss the expansion of AWS Database Savings Plans, AWS European Sovereign Cloud compliance milestones, managing Amazon Machine Image lifecycles with AMI lineage, SpaceX’s plan for a million-satellite data center and Amazon’s opposition, and AI coding assistants and their potential impact on Amazon outages, with a few unexpected tangents along the way, including turtles and frozen corpses.

    09:31 - AWS Database Savings Plans Expansion
    AWS has added Amazon OpenSearch Service and Amazon Neptune Analytics to the Database Savings Plans. This expansion provides more flexibility for clients who haven't decided which database best fits their requirements. The plans currently offer only one-year, no-upfront options, which is more limited compared to Reserved Instances.

    16:48 - AWS European Sovereign Cloud Compliance Milestones
    The European Sovereign Cloud has achieved its first compliance milestones, including SOC 2 and C5 reports, plus seven ISO certifications. These certifications are crucial for organizations requiring compliance and demonstrate that the European Sovereign Cloud is operating independently from AWS proper.

    27:50 - Managing AMI Lifecycles Using AMI Lineage
    AWS introduced AMI lineage, a tool for managing the lifecycle of Amazon Machine Images. This solution helps track the chain of custody for AMIs, which is particularly useful in large enterprises with multiple teams working on image creation. However, it requires manual deployment and may be unnecessarily complex for many users.

    32:35 - SpaceX's Million-Satellite Data Center Plan and Amazon's Opposition
    SpaceX has filed plans with the FCC for a million-satellite data center in low Earth orbit. Amazon has objected to this plan, claiming it lacks substance and is purely aspirational. The discussion touched on potential issues such as cooling systems for satellites and the impact on astronomy.

    40:08 - AI Coding Assistants and Amazon's Outages
    Recent outages on Amazon.com have led to speculation about the rapid adoption of AI coding assistants potentially causing issues. The discussion focused on the challenges of integrating AI-generated code into existing development processes and the need for improved review mechanisms to handle the increased output from AI assistants.

    48 min
  8. MAR 10

    Season 5, Episode 10: CLI Updates, OpenAI Partnership, and Data Center Attacks

    In Season 5, Episode 10, Karl and Jon discuss several developments in the AWS and cloud ecosystem, including the new output formats in AWS CLI v2 and how they improve usability and automation. They also explore the strategic partnership between OpenAI and Amazon and what it could mean for AI infrastructure and the broader cloud landscape. The conversation dives into architectural design as well, looking at rewriting Step Functions as Durable Functions in a Lambda-heavy approach, and how teams can use the AWS Well-Architected Framework to uncover hidden costs in their environments. They also touch on reports of AWS data centers in the UAE being targeted by Iranian drones, discussing the implications for cloud resilience and global infrastructure. And in a lighter moment, the guys compare notes on who drove the furthest for their hobbies last weekend.

    02:30 - New Output Formats in AWS CLI v2
    AWS has introduced new output formats in CLI v2, including an enhanced format for better error messaging and debugging. The update allows for suppression of CLI output, which is useful for handling sensitive information. These changes aim to improve user experience and security when working with the AWS CLI.

    08:48 - Strategic Partnership Between OpenAI and Amazon
    OpenAI and Amazon announced a strategic partnership where OpenAI will consume 2 gigawatts of Trainium capacity through AWS infrastructure. This deal involves significant investment from Amazon and allows for distribution of OpenAI's models via Amazon Bedrock. The partnership raises questions about the economics and future of AI adoption.

    18:14 - Rewriting Step Functions as Durable Functions (Lambda Heavy)
    Danielle Heberling wrote an article about rewriting her Step Function as a Durable Function (Lambda Heavy). The post compares the two approaches, highlighting the benefits of Durable Functions for developers who prefer standard programming languages and fine-grained control over execution state in code.

    28:53 - Using the AWS Well-Architected Framework to Uncover Hidden Costs
    The article discusses how the AWS Well-Architected Framework can be used to uncover hidden costs in cloud architectures. It emphasizes that hidden costs are not just about direct expenses but also include potential costs related to security breaches, downtime, and regulatory compliance.

    34:58 - AWS Data Centers in UAE Targeted by Iranian Drones
    AWS data centers in the UAE were targeted by Iranian drones, causing power outages and downtime for some applications. This event marks the first time data centers have been specifically targeted in a conflict, highlighting the need for multi-region resilience and raising questions about the future security measures needed for data centers in conflict zones.

    44 min
