Why landing zones are key to setting up your cloud environment

Cloud computing is constantly evolving, and businesses are always looking for new ways to adapt. One way to do this is by using landing zones. Landing zones are an efficient way to set up your cloud environment, and they can help you save time and money. In this blog post, we’ll explain what landing zones are and how they can help you build a solid foundation for your cloud environment.

What are landing zones?

A landing zone is a private instance of the (public) cloud that you create for your organisation. It serves as an anchor for your cloud environment, and allows you to easily manage, monitor, and secure your infrastructure. It’s important to set up a landing zone correctly so that you can take advantage of all the benefits the cloud has to offer.

You can compare it to building the foundation for a house. Regardless of its size, you want a rock-solid foundation to build the rest of your infrastructure on, instead of something loose and constantly changing like sand. Organisations are no different: from small companies to multinational enterprises, they all need a solid cloud environment. These foundations are built through standardisation and control. This doesn’t have to be complex, but it requires the right expertise to set it up correctly.

The importance of identity

Identity and access management (IAM) is one of the most critical aspects of setting up your cloud environment correctly. IAM lets you control who has access to what in your cloud, so you can ensure that only authorised users are able to access your data and resources. The acronym can also be read as a mnemonic: Identify – Authorise – Manage.

The central question here is: who can do what on which resource? Compared to on-premises infrastructures, far more people will have access to your systems and applications, even if you have just a few employees. Many cyberattacks today start with lost or stolen user credentials, so keeping these as secure as possible is key.
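The "who can do what on which resource" model can be made concrete with a small sketch. The roles, members, and resource names below are invented for illustration; real cloud platforms express the same idea with policies attached to resources.

```python
# Illustrative sketch of the core IAM question: who (member) can do
# what (permission) on which resource? All names here are made up.

# A role bundles a set of permissions.
ROLES = {
    "viewer": {"storage.objects.get"},
    "editor": {"storage.objects.get", "storage.objects.create"},
}

# A policy binds members to roles on a specific resource.
policy = {
    "projects/demo/buckets/reports": [
        {"role": "viewer", "members": {"user:ana@example.com"}},
        {"role": "editor", "members": {"group:data-team@example.com"}},
    ],
}

def is_allowed(member: str, permission: str, resource: str) -> bool:
    """Answer the central IAM question for one member, permission, and resource."""
    for binding in policy.get(resource, []):
        if member in binding["members"] and permission in ROLES[binding["role"]]:
            return True
    return False

print(is_allowed("user:ana@example.com", "storage.objects.get",
                 "projects/demo/buckets/reports"))
print(is_allowed("user:ana@example.com", "storage.objects.create",
                 "projects/demo/buckets/reports"))
```

In a real landing zone, these bindings live in the platform's IAM policies rather than in application code, but the question being answered is exactly this one.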

Keeping oversight and control

Once you have determined who’s who in your organisation, it’s time to actually start reaping the benefits of the cloud. When companies are trying to scale up their operations, one of the questions we get asked the most is how they can do this without breaking the bank.

The honest answer is that the cloud doesn’t scale to what you want to do, but to the size of your wallet (and beyond it, if you are not careful). To move from a small team using cloud infrastructure to organisation-wide adoption, you need a comprehensive control system.

Without it, the usage cost of cloud platforms will scale as well, even for smaller companies. This can lead to some unfortunate surprises at the end of the month. It’s important to point out that control in this context is not about building walls to keep people in. Rather, it’s about setting operational boundaries so that everyone can keep working without having to worry. Setting these boundaries requires an aligned and standardised approach.
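One simple operational boundary is a monthly budget with alert thresholds, so spend is flagged long before the end-of-month surprise. Cloud platforms offer budget alerts natively; the figures and function below are invented to sketch the idea.

```python
# Toy sketch of an operational boundary: alert when monthly spend
# crosses set fractions of a budget. The numbers are illustrative.

MONTHLY_BUDGET_EUR = 5000.0
ALERT_THRESHOLDS = (0.5, 0.9, 1.0)  # alert at 50%, 90%, and 100% of budget

def triggered_alerts(spend_so_far: float) -> list:
    """Return the budget thresholds that current spend has crossed."""
    ratio = spend_so_far / MONTHLY_BUDGET_EUR
    return [t for t in ALERT_THRESHOLDS if ratio >= t]

print(triggered_alerts(2600.0))   # halfway through the budget
print(triggered_alerts(5200.0))   # over budget: every threshold fires
```

The point of the boundary is not to block work, but to make everyone aware of where spend stands while they keep working.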

Alignment and standardisation

At clients who are just beginning their journey to the cloud, we often see multiple disconnected teams, each operating in the cloud in its own way, with no shared standards or policies. There are several disadvantages to this way of working:

  •   Slow: without standardisation, configurations and other resources are not reused. Every change by every team essentially starts from scratch. Configurations that could have been repeated in minutes take hours or even days to figure out.
  •   Lack of knowledge sharing: since teams are focused on their own projects, every developer or engineer has to know how every aspect works. Setting up a new Kubernetes cluster, for example, requires at least some expertise to sift through the many available options.
  •   Unreliable: without standards, even the smallest global change to your infrastructure could impact the availability and security of the teams’ individual setups. Of course, downtime can deal a large blow to your reputation with your customers.
  •   Expensive: because of the aspects mentioned above, there will be a lack of oversight and cost control. Employees will spend valuable time on repeating projects (and possibly mistakes) instead of innovating and adding value to the business.

Like all processes, cloud adoption should instead be based on a standardised approach. Standardisation will result in the following benefits:

  •   Faster iterations: a common layer in your infrastructure will let you use public APIs and other tools to save valuable time. Once something has been built, it can be reused in iterations and even automated through Infrastructure as Code (IaC) tools like Ansible or Terraform.
  •   Lower skill requirements: every developer will no longer have to be a prodigy in all things cloud. Instead, they can focus on what they do best and share their knowledge with the rest of the organisation. This means that the infrastructure will be improved on continuously.
  •   Reliability and scalability: because your teams use the same configuration, upgrades and changes will work everywhere. Downtime and other unexpected consequences become far less likely, and new infrastructure elements can be added as needed.
  •   Transparency and cost savings: operational aspects like logs, labels, and budget reporting will also be identical across teams and departments. This will make controlling costs and planning for sustainable growth much easier, leading to significant savings across the board.
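The benefits above hinge on one idea: a shared, parameterised baseline that every team reuses instead of reinventing. In practice this role is played by IaC tools such as Terraform modules or Ansible roles; the sketch below uses invented names to show the shape of it.

```python
# Hypothetical sketch of a standardised, reusable "module": each team
# states only what differs, and organisation-wide defaults (labels,
# logging, upgrade policy) are applied everywhere automatically.

def standard_cluster_config(team: str, environment: str, node_count: int = 3) -> dict:
    """One vetted baseline that every team reuses instead of starting from scratch."""
    return {
        "name": f"{team}-{environment}",
        "node_count": node_count,
        # Organisation-wide defaults, identical across teams:
        "labels": {"team": team, "env": environment, "cost-center": team},
        "logging": "enabled",
        "auto_upgrade": True,
    }

# Teams only declare their differences; everything else stays identical,
# which is what makes upgrades, cost reporting, and audits predictable.
payments = standard_cluster_config("payments", "prod", node_count=5)
analytics = standard_cluster_config("analytics", "dev")

print(payments["name"])
print(analytics["labels"])
```

Because the labels and logging settings come from one place, the transparency and cost-reporting benefits follow automatically: every resource is attributable to a team without any extra effort.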

The importance of transparency in logs and reports is often overlooked, but neglecting it can have severe consequences in the long term. For example, one of our clients had an application with a debug statement still enabled in their operational environment. An enormous amount of debug logs was ingested while the environment scaled automatically, resulting in a bill that was significantly higher than expected.

High velocity in the cloud

To summarise, we can compare the different stages in setting up and standardising a cloud environment to the stages of driving on a mountain road.

Initially, nothing is standardised. Teams and even individual developers just act according to their own interests, with little to no communication between them. This can be compared to driving a treacherous, unpaved road. Accidents are likely to happen, and every one of them can end in a tragedy for those involved.

In a second stage, some improvements have been made to the road thanks to standardisation. The road has been paved, which means that its users can move faster. The risk of accidents decreases and largely depends on how responsible its users are. However, without guard rails, the few accidents that still happen have dire consequences.

In the final stage, standardisation is widespread and boundaries have been established. The road’s users can still travel just as fast, but guard rails and other protective measures mean that the organisation protects them in case things do go wrong.

Wondering about how you can start setting up and configuring a cloud environment? We will go over some actionable tips and tricks in another upcoming blog. Need a bit more of a guiding hand? Feel free to contact us for some additional advice: we’d love to help you out!

Competence Center: GC innovate

Infrastructure, App Modernization

