M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Azure PostgreSQL Is Costing You THOUSANDS

Opening – The Hidden Azure Tax

Your Azure PostgreSQL bill isn’t high because of traffic. It’s high because of you. Specifically, because you clicked “Next” one too many times in the deployment wizard and trusted the defaults like they were commandments. Most admins treat Flexible Server as a set-it-and-forget-it managed database. It isn’t. It’s a meticulously priced babysitting service that charges by the hour whether your kid’s awake or asleep.

This walkthrough exposes why your so‑called “managed” instance behaves like a full‑time employee you can’t fire. You’ll see how architecture choices—storage tiers, HA replicas, auto‑grow, even Microsoft’s “recommended settings”—inflate costs quietly. And yes, there’s one box you ticked that literally doubles your compute bill. We’ll reach that by the end.

Let’s dissect exactly where the money goes—and why Azure’s guidance is the technical equivalent of ordering dessert first and pretending salad later will balance the bill.

Section 1 – The Illusion of “Managed”

Administrators love the phrase “managed service.” It sounds peaceful. Like Azure is out there patching, tuning, and optimizing while you nap. Spoiler: “managed” doesn’t mean free labor; it means Microsoft operates the lights while you still pay the power bill.

Flexible Server runs each instance on its own virtual machine—isolated, locked, and fully billed. There’s no shared compute fairy floating between tenants trimming waste. You’ve essentially rented a VM that happens to include PostgreSQL pre‑installed. When it sits idle, the meter doesn’t.

Most workloads hover at 10‑30% CPU, but the subscription charges you as if the cores are humming twenty‑four seven. That’s the VM baseline trap. You’re paying for uptime you’ll never use. It’s the cloud’s version of keeping your car engine running during lunch “just in case.”

Then come the burstable SKUs. Everyone loves these—cheap headline price, automatic elasticity, what’s not to adore? Here’s what. Burstable tiers give you a pool of CPU credits. Each minute below your baseline earns credit—each minute above spends it. Run long enough without idling and you drain the bucket, throttling performance to a crawl. Suddenly the “bargain” instance spends its life gasping for compute like a treadmill on low battery. Designed for brief spurts, many admins unknowingly run them as full‑time production nodes. Catastrophic cost per transaction, concealed behind “discount” pricing.
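
If you suspect one of your burstable instances has quietly become a full-time node, the credit balance tells the story, and you can pull it with one CLI call. A minimal sketch, assuming a hypothetical server named pg-burst-demo in a resource group rg-data, and assuming the cpu_credits_remaining metric that burstable SKUs emit:

    # Hypothetical server and resource group names; substitute your own.
    SERVER_ID=$(az postgres flexible-server show \
      --name pg-burst-demo \
      --resource-group rg-data \
      --query id --output tsv)

    # Remaining CPU credits over the last day, hour by hour.
    # A balance pinned near zero means the server is throttling, not bursting.
    az monitor metrics list \
      --resource "$SERVER_ID" \
      --metric cpu_credits_remaining \
      --interval PT1H \
      --offset 1d \
      --output table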

Now, Azure touts the stop/start feature as a cost‑saving hero. It isn’t. When you stop the instance, Azure pauses compute billing—but the storage meter keeps ticking. Your data disks remain mounted, your backups keep accumulating, and those pleasant‑sounding gigabytes bill by the second. So yes, you’re saving on CPU burn, but you’re still paying rent on the furniture.
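
Stop/start is at least scriptable, which is where its real, modest value lives: dev and test servers do not need to run around the clock. A minimal sketch with the Azure CLI and hypothetical names; compute billing pauses while the server is stopped, storage and backups keep billing, and Azure restarts a stopped server on its own after roughly seven days, so put the stop on a schedule rather than trusting it once.

    # Hypothetical names; replace with your own server and resource group.
    az postgres flexible-server stop \
      --name pg-dev-demo \
      --resource-group rg-data

    # Later, for example from a scheduled job at the start of the workday:
    az postgres flexible-server start \
      --name pg-dev-demo \
      --resource-group rg-data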

Here’s the reality check most teams miss: migration to Flexible Server doesn’t eliminate infrastructure management—it simply hides it behind a friendlier interface and a bigger invoice. You must still profile workloads, schedule stop periods, and right‑size compute exactly as if you owned the hardware. Managed means patched; it doesn’t mean optimized.

Consider managed like hotel housekeeping. They’ll make the bed and replace towels, but if you leave the faucet running, that water charge is yours.

The first big takeaway? Treat your PostgreSQL Flexible Server like an on‑prem host. Measure CPU utilization, schedule startup windows, avoid burstable lust, and stop imagining that “managed” equals “efficient.” It doesn’t. It equals someone else maintaining your environment while Azure’s meter hums steadily in the background, cheerfully compiling your next surprise invoice.
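
Measuring is the unglamorous part, but it is one metrics query. A sketch that pulls hourly average CPU for the past week on a hypothetical production server; if the numbers never leave the teens, the vCore count you are paying for is aspirational:

    # Hypothetical server and resource group names.
    SERVER_ID=$(az postgres flexible-server show \
      --name pg-prod-demo \
      --resource-group rg-data \
      --query id --output tsv)

    # Hourly average CPU for the last 7 days.
    az monitor metrics list \
      --resource "$SERVER_ID" \
      --metric cpu_percent \
      --interval PT1H \
      --offset 7d \
      --aggregation Average \
      --output table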

Now that we’ve peeled back the automation myth, it’s time to open your wallet and meet the real silent predator—storage.

Section 2 – Storage: The Silent Bill Multiplier

Let’s talk storage—the part nobody measures until the receipt arrives. In Azure PostgreSQL Flexible Server, disks are the hotel minibar. Quiet, convenient, and fatally overpriced. You don’t notice the charge until you’ve already enjoyed the peanuts.

Compute you can stop. Storage never sleeps. Every megabyte provisioned has a permanent price tag, whether a single query runs or not. Flexible Server keeps your database disks fully allocated even when compute is paused, because “flexible” in Azure vocabulary means persistent. You pay for the capacity you provisioned, not the I/O you actually perform.

Now the main culprit: auto-grow. It sounds delightful—safety through expansion. The problem? It only grows one way. Once a data disk expands, there is no shrink, automatic or otherwise. That innocent emergency expansion during a batch import? Congratulations, your storage tier just stayed inflated forever. Reclaiming the space means migrating to a freshly provisioned server, which is manual work plus downtime. Azure is endlessly generous when giving you more space; it shows remarkable restraint when taking any back.
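
If you would rather grow deliberately than silently, auto-grow is a switch, not a law of nature. A minimal sketch, assuming your installed CLI exposes the storage auto-grow setting on update the same way it does at creation time (hypothetical server name again); pair it with a storage alert so a human decides when to expand:

    # Make growth a deliberate change instead of a silent one.
    # Assumes --storage-auto-grow is available on update in your az version.
    az postgres flexible-server update \
      --name pg-dev-demo \
      --resource-group rg-data \
      --storage-auto-grow Disabled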

And here’s where versioning divides the careless from the economical. There are two classes of Premium SSD—v1 and v2. With v1, price tracks the capacity tier you pick, and IOPS and throughput come bundled with it. With v2, those performance knobs become explicit dials—capacity, IOPS, and throughput each priced separately. Most admins hear “newer equals faster” and upgrade blindly. What they get instead is three independent billing streams for resources they’ll rarely saturate. It’s like paying business‑class for storage only to sit in economy, surrounded by five empty seats you technically “own.”

Performance over‑provisioning is the default crime. Developers size disks assuming worst‑case loads: “We’ll need ten thousand IOPS, just in case.” Reality: half that requirement never materializes, but your bill remains loyal to the inflated promise. Azure’s pricing model secretly rewards panic and punishes data realism. It locks you into paying for theoretical performance instead of observed need.

Then there’s the dev‑to‑prod drift—an unholy tradition of cloning production tiers straight into staging or test. That 1 TB storage meant for mission‑critical workloads? It’s now sitting under a QA database containing five gigabytes of lorem ipsum. Congratulations, you just rented a warehouse to store a shoebox.

Storage bills multiply quietly because costs compound across redundancy, backup retention, and premium tiers. Each gigabyte gets mirrored, snapshotted, and versioned. Users think they’re paying for disks; they’re actually paying for copies of copies of disks they no longer need.

Picture a real‑world mishap: a developer requests a “small test environment” for a migration mock‑run. Someone spins up a Flexible Server clone with default settings—1 TB premium tier, auto‑grow enabled. The test finishes, but no one deletes the instance. Months later, that “temporary” server alone has quietly drained four digits from the budget, storing transaction logs for a database no human queries.

The fix isn’t technology; it’s restraint. Cap auto‑grow. Audit disk size monthly. Track write latency and IOPS utilization, then right‑size to actual throughput—not aspiration. For genuine elasticity, use Premium SSD v2 judiciously and dynamically scale performance tiers via PowerShell or CLI instead of baking in excess capacity you’ll never touch.
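
Here is what “right-size to actual throughput” looks like in practice: look at observed IOPS first, then adjust the tier. A sketch under two assumptions, hypothetical server names and a CLI version that exposes the --performance-tier switch for tuning Premium SSD performance independently of capacity (check az postgres flexible-server update --help before leaning on it):

    # Hypothetical server and resource group names.
    SERVER_ID=$(az postgres flexible-server show \
      --name pg-prod-demo \
      --resource-group rg-data \
      --query id --output tsv)

    # Peak read and write IOPS, hourly, for the last week.
    az monitor metrics list \
      --resource "$SERVER_ID" \
      --metric read_iops write_iops \
      --interval PT1H \
      --offset 7d \
      --aggregation Maximum \
      --output table

    # If observed peaks sit far below the provisioned ceiling, step the
    # performance tier down (assumes --performance-tier is available).
    az postgres flexible-server update \
      --name pg-prod-demo \
      --resource-group rg-data \
      --performance-tier P20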

Storage won’t warn you before it multiplies; it just keeps billing until you notice. The trick is to stop treating each gigabyte as insurance and start viewing it as rented real estate. Trim regularly, remember that the only refund Azure offers is the space you stop provisioning, and never, ever leave a dev database camping on premium terrain.

Fine. Disks are tamed. But now comes the monster that promises protection and delivers invoices—high availability.

Section 3 – High Availability: Paying Twice for Paranoia

High availability sounds noble. It implies uptime, business continuity, and heroic resilience. In Azure PostgreSQL Flexible Server, however, HA often means something far simpler: you’re paying for two of everything so one can sleep while the other waits to feel useful.

By default, enabling HA duplicates your compute and your storage. Every vCore, every gigabyte, perfectly mirrored. Azure calls it “synchronous replication,” which sounds advanced until you realize it’s shorthand for “we bill you twice to guarantee zero‑data‑loss you don’t actually need.” The system keeps a standby replica in lockstep with the primary, writing every transaction twice before acknowledging success. Perfect consistency, yes. But at a perfect price—double.

The sales pitch says this protects against disasters. The truth? Most workloads don’t deserve that level of paranoia. If your staging database goes down for ten minutes, civilization will continue. Analytics pipelines can catch back up. QA environments don’t need a ghost twin standing by in the next zone, doing nothing but mirroring boredom. Yet countless teams switch on HA globally because Microsoft labels it “recommended for production.” A recommendation that conveniently doubles monthly revenue.
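
If staging really can survive ten minutes of downtime, turning the mirror off there is one command, and keeping it where an outage actually costs money is just as short. A minimal sketch with hypothetical names, assuming the --high-availability option on update accepts the same values it does at creation:

    # Drop the standby on a hypothetical staging server; the mirrored
    # compute and storage stop billing.
    az postgres flexible-server update \
      --name pg-staging-demo \
      --resource-group rg-data \
      --high-availability Disabled

    # Keep zone-redundant HA where downtime genuinely hurts.
    az postgres flexible-server update \
      --name pg-prod-demo \
      --resource-group rg-data \
      --high-availability ZoneRedundant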

Here’s the fun part: the standby replica can’t even do anything interesting. You can’t point traffic at it. You can’t run read queries on it. It sits there obediently replicating and waiting for an asteroid strike. Until then, it produces zero business value, yet the meter spins as if it’s calculating π. Calling it “high availability” is generous; “highly available invoice” would be more accurate.

Now, does that mean HA is worthless? N