Azure Landing Zones Anti-Patterns: What Goes Wrong and How to Prevent It - Part 2
Welcome to Part 2 of our Azure Landing Zones Anti-Patterns series.
In Part 1, we explored the design mistakes - the architectural decisions that determine whether your ESLZ stands strong or starts cracking from day one.
Part 2 looks at what happens next.
Because even with robust architecture, the real challenges arise later - costs without ownership, uncontrolled subscription growth, governance implemented too late, naming and tagging slipping into disorder, and monitoring enabled only after something breaks.
These operational anti-patterns don’t show up with loud, dramatic failures.
They accumulate quietly - month after month - until one day you realise the platform isn’t enabling teams anymore. It’s slowing them down.
Your ESLZ might still be technically sound, but operationally?
It’s struggling.
Let’s continue with anti-patterns 6 through 10 - the operational pitfalls that can turn a well-designed landing zone into a long-term source of friction.
Anti-Pattern #6: Subscription Sprawl Without Lifecycle Management
Azure makes it incredibly easy to create new subscriptions - and that’s exactly the problem.
A new environment? Spin up a subscription.
A new project? Subscription.
Someone asks nicely? Sure, another subscription.
Fast forward a year and you’re staring at 200+ subscriptions. Nobody remembers who created half of them. Some haven’t been touched in months but still generate costs. Others host critical production workloads with zero documentation or clear ownership.
What’s Missing
Without a proper lifecycle process, subscriptions become the wild west:
No documented purpose or business context
No clear owner or accountable team
No budgets or spend alerts
No process for cleanup or decommissioning
No scheduled access reviews or policy checks
In short: subscriptions keep getting created, but they never retire.
The Subscription Vending Process
Mature organisations solve this with a subscription vending model - a simple intake and approval workflow.
Before a subscription is created, the requester must specify:
Purpose and business justification
Owner and cost centre
Compliance requirements
Expected lifespan
Environment classification (Dev / Test / Prod)
Once approved, the subscription is automatically:
Tagged with metadata
Assigned clear ownership
Equipped with budgets and alerts
Enforced with the right policies
Added to the regular review cycle
Sounds bureaucratic?
Maybe.
But it’s far easier than explaining to the CFO why a sizeable monthly bill is coming from orphaned subscriptions nobody even remembers creating.
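To make the vending model concrete, here is a minimal sketch of what the intake step could look like as code. The SubscriptionRequest structure and its field names are illustrative assumptions, not an Azure API - in practice the approved record would feed your provisioning automation (subscription vending modules or a pipeline).

```python
from dataclasses import dataclass, field

# Illustrative intake record for a subscription vending request.
# Field names are assumptions for this sketch, not an Azure API.
@dataclass
class SubscriptionRequest:
    purpose: str                  # business justification
    owner: str                    # accountable team or person
    cost_centre: str              # for chargeback / showback
    environment: str              # Dev / Test / Prod
    compliance: list[str] = field(default_factory=list)
    expected_lifespan_months: int = 12

ALLOWED_ENVIRONMENTS = {"Dev", "Test", "Prod"}

def validate(request: SubscriptionRequest) -> list[str]:
    """Return a list of problems; an empty list means the request can go to approval."""
    problems = []
    if not request.purpose.strip():
        problems.append("Missing business justification")
    if not request.owner.strip():
        problems.append("No accountable owner")
    if not request.cost_centre.strip():
        problems.append("No cost centre - nobody will own the bill")
    if request.environment not in ALLOWED_ENVIRONMENTS:
        problems.append(f"Unknown environment '{request.environment}'")
    return problems

if __name__ == "__main__":
    req = SubscriptionRequest(
        purpose="Payments API - production workload",
        owner="platform-payments@contoso.example",
        cost_centre="CC-1042",
        environment="Prod",
        compliance=["PCI-DSS"],
    )
    issues = validate(req)
    print("Ready for approval" if not issues else issues)
```

Once approved, the same record becomes the source of truth for the tags, budgets, and policy assignments applied during provisioning.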
Anti-Pattern #7: Treating Cloud as an IT Cost Centre
This one is surprisingly common. Finance sees Azure spending as a single IT budget line. IT “owns” the bill, manages overruns, explains variances, and absorbs the pressure.
Meanwhile, application teams have little visibility into what they consume, what it costs, or why the bill is rising. With no direct accountability, cost becomes somebody else’s problem.
The Runaway Cost Problem
When teams don’t own their cloud spend, predictable behaviours follow:
Oversized resources: Developers choose bigger SKUs “just to be safe.”
24/7 non-prod: Test and dev environments run around the clock because shutting them down takes effort.
Orphaned resources: Nobody deletes unused VMs, disks, NICs, or databases.
Architecture blind spots: Designs ignore cost impact entirely.
Zero incentive to optimise: Because, after all, “it’s not my budget.”
The result? Azure costs creeping up month after month - often 40-60% year-on-year - without any meaningful increase in business value. By the time finance raises a red flag, thousands of resources need auditing and optimisation, and everyone is scrambling.
The FinOps Model
Mature organisations move away from the “IT pays the bill” mindset and adopt FinOps - shared financial responsibility for cloud.
That means putting the right structures in place:
Cost allocation: Tags and chargeback/showback map costs to teams, projects, or units.
Visibility: Teams see their consumption clearly and regularly.
Budgets & alerts: Early signals catch runaway spend before it becomes a crisis.
Regular reviews: Monthly optimisation sessions with app teams to discuss hotspots.
Architecture with cost in mind: Cost considerations become part of design reviews.
Automation: Schedules for shutting down non-prod, automated rightsizing, cleanup tooling.
FinOps isn’t about penny-pinching.
It’s about transparency, accountability, and empowering teams to make thoughtful decisions. When everyone sees the cost of their choices, optimisation becomes a shared habit - not a last-minute firefight.
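As one example of the automation item above, here is a hedged sketch of a scheduled job that deallocates VMs tagged as non-production outside business hours. It assumes the azure-identity and azure-mgmt-compute packages; the environment tag name and its values are assumptions you would align with your own tagging standard, and you would typically run something like this from an Automation Account or a scheduled pipeline.

```python
# Sketch only: deallocate non-production VMs on a schedule.
# Assumes: pip install azure-identity azure-mgmt-compute
# The "environment" tag name and values below are assumptions - match your own standard.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
NON_PROD_VALUES = {"dev", "test"}

def shutdown_non_prod():
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

    for vm in compute.virtual_machines.list_all():
        tags = vm.tags or {}
        if tags.get("environment", "").lower() not in NON_PROD_VALUES:
            continue
        # The resource group is embedded in the resource ID:
        # /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<name>
        resource_group = vm.id.split("/")[4]
        print(f"Deallocating {vm.name} in {resource_group}")
        # begin_deallocate stops the VM and releases its compute charges
        compute.virtual_machines.begin_deallocate(resource_group, vm.name).wait()

if __name__ == "__main__":
    shutdown_non_prod()
```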
Anti-Pattern #8: Launching Without Baseline Governance
The platform is ready. Networking is configured. Subscriptions are provisioned.
And then someone says the most dangerous words in cloud architecture:
“Let’s start deploying workloads. We can add governance later.”
That “later” is where the real trouble begins.
The Drift Problem
When workloads start landing before governance is in place, your environment begins drifting on day one. The symptoms show up quickly:
Resources pop up in unapproved regions (data residency issues)
Public endpoints appear everywhere (a security incident waiting to happen)
Encryption becomes optional instead of enforced
Mandatory tags go missing (cost allocation turns into detective work)
Diagnostic logging isn’t configured (troubleshooting becomes guesswork)
Thousands of non-compliant resources accumulate quietly
Fast forward six months: 5,000+ non-compliant resources, each needing remediation, testing, and stakeholder coordination.
Fixing the mess becomes a full-scale project - expensive, risky, and intensely time-consuming.
The Baseline Policy Set
The simplest solution?
Put governance in place before the first workload lands.
Your baseline should include:
Allowed regions: Enforce data residency, supportability, and cost considerations
Mandatory tags: Cost centre, environment, owner, application
Encryption standards: At rest, in transit, key management rules
Network controls: Restrictions on public endpoints, required use of private endpoints, NSG baselines
Diagnostic settings: Required logging to Log Analytics for every resource
Security baseline: Adopt the Microsoft Cloud Security Benchmark or equivalent as table stakes
Use Azure Policy initiatives instead of scattered individual policies.
Apply them at the management group level so every subscription inherits the guardrails automatically.
Will developers hit deployment failures early on?
Absolutely - and that’s exactly what you want.
It’s far better to catch issues at deployment time than during a security incident or compliance audit.
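To show what "guardrails before workloads" looks like in practice, here is a sketch of two common baseline rules - allowed regions and a mandatory cost-centre tag - expressed as Azure Policy rule bodies built in Python. The rule structure follows the standard Azure Policy JSON schema, but the region list, names, and tag key are assumptions; in a real ESLZ you would bundle rules like these into an initiative and assign it at the management group level through your IaC pipeline.

```python
# Sketch: two baseline guardrails expressed as Azure Policy rule bodies.
# The region list and the tag key are assumptions - align them with your standards.
import json

ALLOWED_LOCATIONS = ["westeurope", "northeurope"]  # assumption: your approved regions

# Deny any resource deployed outside the approved regions.
allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ALLOWED_LOCATIONS,
        }
    },
    "then": {"effect": "deny"},
}

# Deny any resource created without a costCentre tag.
required_tag_rule = {
    "if": {
        "field": "tags['costCentre']",
        "exists": "false",
    },
    "then": {"effect": "deny"},
}

# In practice these rule bodies live in policy definitions inside an initiative,
# assigned once at the management group so every subscription inherits them.
# Here we simply emit the JSON for review or for an IaC template.
print(json.dumps(
    {"allowedLocations": allowed_locations_rule, "requireCostCentreTag": required_tag_rule},
    indent=2,
))
```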
Anti-Pattern #9: Inconsistent Naming and Tagging
The famous last words of many cloud teams:
“We’ll document naming standards and people will follow them.”
No, they won’t.
The Chaos That Follows
Without enforced naming and tagging, your Azure estate slowly turns into an archaeological dig site:
Names like vm1, vm2, vm-prod, production-vm, appserver, and app-srv-01 coexisting with no discernible pattern
Resources scattered randomly across resource groups
Tags missing everywhere, making cost allocation a monthly guessing game
Backups and DR configs miss critical resources because no one can identify them
Automation fails because naming patterns are inconsistent and unpredictable
Fast forward three years and someone asks:
“What does rg-misc-stuff-2 contain, and can we delete it?”
Silence. Nobody knows. Nobody wants to find out the hard way.
The Solution
There’s only one real fix: enforce standards with Azure Policy, not wishful thinking.
Define a clear naming structure
Example: <resource-type>-<env>-<region>-<app>-<instance>
Enforce it using deny policies
If it doesn’t match the pattern, the deployment fails.
Do the same for tags
Cost centre, environment, owner, application - mandatory.
No tags? No deployment.
Harsh? Maybe.
But not as harsh as spending a few lakhs (or more) on consultants to audit, classify, and rename 10,000 unruly resources because your estate became an unmanageable maze.
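For illustration, here is a small sketch that checks resource names against the pattern above before anything is deployed, for example in a CI step. Azure Policy itself enforces naming with its own match/like conditions rather than regular expressions, so treat this as a local pre-flight check; the abbreviation values are assumptions you would replace with your own standard.

```python
# Sketch: pre-flight check for the <resource-type>-<env>-<region>-<app>-<instance> pattern.
# The allowed abbreviations below are assumptions - substitute your own standard.
import re

NAME_PATTERN = re.compile(
    r"^(?P<type>vm|rg|st|kv|sql)"      # resource-type abbreviation
    r"-(?P<env>dev|test|prod)"         # environment
    r"-(?P<region>weu|neu)"            # region abbreviation
    r"-(?P<app>[a-z0-9]+)"             # application short name
    r"-(?P<instance>\d{2})$"           # two-digit instance number
)

def check_name(name: str) -> bool:
    """Return True if the resource name follows the naming convention."""
    return NAME_PATTERN.match(name) is not None

# Quick demonstration
for candidate in ["vm-prod-weu-payments-01", "production-vm", "rg-misc-stuff-2"]:
    status = "OK  " if check_name(candidate) else "FAIL"
    print(f"{status} {candidate}")
```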
Anti-Pattern #10: Monitoring as an Afterthought
The platform is live. Applications are deployed. Everything seems fine.
And then someone casually says:
“We should probably set up monitoring.”
At that point, it’s already too late.
The Blind Spot Problem
When monitoring isn’t built in from day one, you’re essentially flying blind:
Issues surface only when users complain
There’s no baseline for normal vs. abnormal behaviour
Troubleshooting becomes guesswork and heroics
Capacity planning is impossible
Security incidents slip by unnoticed
By the time problems show up, you’re reacting - not managing.
What Belongs in Every Landing Zone
Monitoring isn’t a bolt-on - it’s core platform architecture. A proper ESLZ includes:
Log Analytics workspace created as part of the platform
Diagnostic settings policies enforcing logs and metrics collection
Action groups for alerts routed to the right teams
Workbooks for standard operational and performance views
Security monitoring via Microsoft Defender for Cloud
These are not optional extras - they’re the observability backbone of your cloud estate.
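As a small example of putting that backbone to work, here is a hedged sketch that queries the central Log Analytics workspace for VMs that have stopped sending heartbeats. It assumes the azure-identity and azure-monitor-query packages, that agents are reporting into the Heartbeat table, and a placeholder workspace ID.

```python
# Sketch: find VMs that have gone quiet in the central Log Analytics workspace.
# Assumes: pip install azure-identity azure-monitor-query
# Assumes agents report into the Heartbeat table; the workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"

# KQL: last heartbeat per machine, flagging anything silent for 15+ minutes.
QUERY = """
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| where LastSeen < ago(15m)
| order by LastSeen asc
"""

def find_silent_vms():
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(
        workspace_id=WORKSPACE_ID,
        query=QUERY,
        timespan=timedelta(days=1),
    )
    for table in response.tables:
        for row in table.rows:
            # Columns follow the KQL output order: Computer, LastSeen
            print(f"No heartbeat since {row[1]} from {row[0]}")

if __name__ == "__main__":
    find_silent_vms()
```

The same pattern extends naturally to alert rules and workbooks built on the workspace, which is why it belongs in the platform from day one.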
Final thoughts
That wraps up our two-part series on Azure Landing Zones Anti-Patterns.
Across these ten missteps - design and operational - we’ve seen a consistent theme: ESLZ failures rarely come from one big mistake. They come from dozens of small decisions made without alignment, ownership, or guardrails.
The good news? Every one of these anti-patterns is preventable with the right intent, the right governance, and the right habits.
A well-designed, well-operated landing zone doesn’t just support workloads - it becomes the backbone of how your organisation builds, governs, and scales in Azure.
Here’s to building ESLZs that stand the test of time.
If you found this useful, tap Subscribe at the bottom of the page to get future updates straight to your inbox.
