Google’s infrastructure: the foundation on which to build further
The global Google Network is again one step closer
Google continues to work hard on building a globally redundant network. The latest announcement is a new submarine cable in Asia connecting Singapore and Japan, with branches to the Philippines, Taiwan, and Indonesia.
Google environment scores top in HPC ranking
The growth of data is a huge challenge for all organizations, but just as important is ensuring that access to that data does not become a bottleneck. For high-performance computing (HPC) and ML/AI applications, minimizing the time to insight is essential, so finding a storage solution that combines low latency and high-bandwidth data access at an affordable price is critical. This remains a key focus for Google. Witness the announcement that Google Cloud, in collaboration with its partners NAG and DDN, has demonstrated the best-performing Lustre file system published in the recent IO500 ranking of the fastest HPC storage systems.
How to keep Cloud Functions permanently active
Cloud Functions, the Function as a Service (FaaS) offering from Google Cloud, is a lightweight computing platform for creating standalone, single-purpose functions that respond to events, without requiring an administrator to run a server or runtime environment.
Over the past year, Google has steadily expanded Cloud Functions: new runtimes (Java, .NET, Ruby, PHP), new regions (now 22), an improved user and developer experience, granular security, and cost and scale controls. But one element remained a challenge: the “startup tax”, or cold start. Once your function has scaled down to zero instances, it may take a few seconds for it to initialize and start serving requests again.
However, since August you can activate minimum (“min”) instances for Cloud Functions. By specifying a minimum number of instances of your application to stay online during low demand periods, you can dramatically improve the performance of your serverless applications and workflows.
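Alongside the `--min-instances` flag on `gcloud functions deploy`, the setting can also be applied through the Cloud Functions v1 REST API by patching the function's `minInstances` field. A minimal sketch, assuming placeholder project, region, and function names:

```python
# Sketch: build the request body for projects.locations.functions.patch
# (updateMask=minInstances) in the Cloud Functions v1 REST API.
# "my-project", "europe-west1", and "my-func" below are placeholders.

def min_instances_patch(project, region, function, min_instances):
    """Return the fully qualified function name and the patch body."""
    name = f"projects/{project}/locations/{region}/functions/{function}"
    body = {"name": name, "minInstances": min_instances}
    return name, body

# Keep at least one warm instance of a hypothetical function online:
name, body = min_instances_patch("my-project", "europe-west1", "my-func", 1)
```

Setting `minInstances` to 1 keeps one instance warm through low-traffic periods, trading a small steady cost for the elimination of cold starts on the first request.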
How to run your VM environment cost-effectively?
An important advantage of cloud computing is that you can easily add and remove compute resources at any time, paying only for what you use. However, Google notes that production virtual machines (VMs) often run continuously, even though some are only needed for batch jobs, while others, such as development or test environments, are typically used only during business hours. Running VMs when they have no users or tasks serves no purpose, so managing VMs properly can save you a lot of money.
However, manually managing fleets of VMs is tedious, error-prone, and difficult to enforce in a large organization. And if your VMs were set up during a migration from an on-premises environment, they were modeled on hardware that never had this option. In short, there is a need for good best practices that let your VM environment take full advantage of what the cloud offers.
And you’ll find them in a new Google guide, Cost Optimization through Automated VM Management, which walks you through several ways to manage your Compute Engine VM environment, ranging from simple time-based scheduling to using Recommender analytics to resize underutilized VMs and shut down idle ones. The document also teaches you different approaches to running batch jobs efficiently, from virtual machines that shut themselves down automatically to orchestrating simple or complex tasks with Workflows or Cloud Composer, and reducing the operational overhead of OS maintenance with VM Manager.
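The simplest of these approaches, time-based scheduling, is handled in Compute Engine by attaching an instance-schedule resource policy to your VMs. A sketch of such a policy body, as accepted by the Compute API's resourcePolicies.insert method; the cron expressions and time zone are illustrative:

```python
# Sketch: resource-policy body that starts dev/test VMs on weekday
# mornings and stops them in the evening. Field names follow the
# Compute Engine API; schedule values and time zone are examples.

def business_hours_schedule(name, timezone="Europe/Brussels"):
    return {
        "name": name,
        "description": "Run dev/test VMs during business hours only",
        "instanceSchedulePolicy": {
            "timeZone": timezone,
            # Cron syntax: minute hour day-of-month month day-of-week
            "vmStartSchedule": {"schedule": "0 8 * * MON-FRI"},   # start 08:00
            "vmStopSchedule": {"schedule": "0 18 * * MON-FRI"},   # stop 18:00
        },
    }

policy = business_hours_schedule("office-hours")
```

VMs attached to such a policy start and stop on schedule with no manual intervention, which alone removes roughly two thirds of the running hours for a weekday-only environment.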
The guide also covers the differences between suspending, stopping, and deleting instances. With Suspend and Resume, already available in Preview, you have a cost-effective way to “pause” an instance while preserving its memory and application state, much like a laptop remembers what you were working on when you close the lid. When the instance resumes, your users can pick up where they left off without waiting for the instance to boot or their software to load.
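The trade-offs between those three options can be summarized in a small, purely hypothetical decision helper: suspend preserves memory and application state, stop preserves only the disks, and delete frees everything:

```python
def lifecycle_action(idle_hours, need_memory_state, still_needed=True):
    """Hypothetical helper for choosing a VM lifecycle action.

    - "suspend": preserves RAM and application state (fast resume, small cost)
    - "stop":    preserves disks only; instance boots fresh on restart
    - "delete":  releases all resources; nothing is preserved
    """
    if not still_needed:
        return "delete"          # environment no longer in use at all
    if need_memory_state:
        return "suspend"         # users must resume exactly where they were
    if idle_hours >= 1:
        return "stop"            # idle long enough that a fresh boot is fine
    return "keep running"

# A dev VM idle overnight whose users expect their session back:
lifecycle_action(12, need_memory_state=True)   # → "suspend"
```

The one-hour threshold above is an arbitrary example; the guide's point is that the decision should be automated rather than left to individual owners.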
All this with a strong focus on automated management.
The Transfer Appliance for processing IoT data
The Google Cloud Transfer Appliance has been available in preview in online mode since August. Customers are increasingly collecting data that must be uploaded to the cloud quickly. In online mode, the appliance ingests large amounts of data from a variety of sources (such as cameras, cars, and sensors) at high speed and streams it directly to a Cloud Storage bucket.
For the procedure, look here
How to automate your data transfer?
The Storage Transfer Service for on-premises data API has also been available in preview since August, so you can now use RESTful APIs to automate your on-premises-to-cloud migration workflows. Storage Transfer Service is the software service for transferring data over a network. It provides built-in functionality such as scheduling and execution, bandwidth management, automatic retries of failed transfers, and data integrity checks that simplify the data transfer workflow.
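As a sketch of what such an automated workflow looks like, the body of a transferJobs.create request moving data from an on-premises POSIX file system (served by transfer agents) into a Cloud Storage bucket might be built like this; the project, directory, bucket, and agent-pool names are placeholders, while the field names follow the Storage Transfer Service v1 API:

```python
# Sketch: request body for transferJobs.create in the Storage Transfer
# Service REST API. All concrete names below are illustrative.

def onprem_to_gcs_job(project_id, source_dir, bucket, agent_pool):
    return {
        "projectId": project_id,
        "description": "Nightly on-prem to Cloud Storage transfer",
        "status": "ENABLED",
        "transferSpec": {
            "sourceAgentPoolName": agent_pool,
            "posixDataSource": {"rootDirectory": source_dir},
            "gcsDataSink": {"bucketName": bucket},
        },
        "schedule": {
            # With a start date and no end date, the job repeats daily.
            "scheduleStartDate": {"year": 2021, "month": 9, "day": 1},
        },
    }

job = onprem_to_gcs_job(
    "my-project", "/mnt/exports/archive", "my-landing-bucket",
    "projects/my-project/agentPools/my-pool",
)
```

Posting this body to the API creates a recurring job, after which the service's built-in scheduling, retries, and integrity checks take over from any hand-rolled scripts.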
For the procedure, look here
Use of the marketplace is simplified
With Google Cloud’s own Private Catalog, enterprise administrators can quickly and easily make Marketplace products, and thus curated solutions, available to their own users. In August, a series of improvements was rolled out aimed mainly at a better purchasing experience in the Marketplace. Administrators can now, for example, add Marketplace products to the private catalog with a single click.
See you next month!