Chapter 6

Distributed Cloud

Harness the power of the cloud at scale to unlock a limitless future. Embrace cloud as your foundation for becoming a “digital company.”

Lock or Unlock the future?

Can the technologies presented here put us on the path to a better future? Or do they risk leading us down the wrong road? A look at the perspective of our experts.

On the right path…

Cloud is clearly the key tool for implementing new business models, new ways of working and new lifestyles at scale. Its adoption meets hardly any resistance, and innovation is so rapid that the cloud constantly offers new answers to emerging business challenges. It is also becoming increasingly well integrated with systems that, for various reasons, are intended to remain close to the field, which broadens the possibilities. Finally, it allows the environmental footprint of systems to be reduced considerably.

… or the wrong path?

However, Cloud leads to profound changes in IT practice, management and usage, and it is essential to support these changes. In addition, very solid governance must be put in place to guard against an uncontrolled proliferation of resources and to manage a mosaic of global suppliers and local specialists. While it may be tempting to entrust all of one’s systems to a single operator, this creates operational and strategic dependency as well as security and compliance risks that cannot be offset by the convenience of having a single point of contact.

Distributed Cloud

  • Adopt: 51. Ansible, 52. Anthos, 53. AWS Lambda, 54. Azure Arc, 55. Azure Functions, 56. Google Cloud Functions, 57. HashiCorp Terraform, 58. Helm, 59. Istio, 60. Kind, 61. Kubernetes, 62. Portainer
  • Trial: 63. Cilium, 64. Crossplane, 65. Flexera, 66. HashiCorp Consul, 67. Knative, 68. Kubecost, 69. 70. Pixie, 71. Spinnaker, 72. VMware Cloud Foundation
  • Assess: 73. AWS CDK, 74. AWS Outposts, 75. Dapr, 76. HashiCorp Nomad, 77. HashiCorp Waypoint, 78. Lens, 79. Pulumi, 80. Sovereign Cloud
  • Hold: 81. Microsoft Azure Stack Hub/Azure Stack HCI, 82. Puppet
  • Ansible, Adopt

Ansible, acquired by Red Hat in 2015, is an open source configuration tool for Infrastructure as Code (IaC), competing with Puppet, Chef and Terraform. Ansible is distinguished by its ease of use, richness and versatility, which allow it to be used in a wide variety of contexts and use cases: automation of cloud, security, network and database administration, provisioning and deployment of applications and infrastructure, disaster recovery, etc. By promoting productivity and content reuse while limiting risk, Ansible allows users to take full advantage of automation. In addition, Ansible skills are fairly widespread, which facilitates its adoption. Finally, Ansible brings the guarantees of durability needed to complete the long journeys that automation projects often entail.
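
As an illustration, a minimal playbook of the kind Ansible runs might look like this; the host group, package and desired state are illustrative assumptions, not taken from the text:

```yaml
# Hypothetical playbook: install and start nginx on every host in the
# "web" inventory group. Ansible compares desired state with actual state
# and only acts where they differ.
- name: Provision web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

It would typically be run with `ansible-playbook -i inventory site.yml`, the same playbook serving documentation, provisioning and disaster recovery alike.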

  • Anthos, Adopt

The rise of hybrid and multi-cloud environments is leading to a twofold fragmentation: on one hand, of the information system, complicating the implementation of global and homogeneous policies; on the other, of skills, which are becoming rarer and more expensive. With Anthos, Google is responding to this dual challenge by offering to unify the management of heterogeneous infrastructures around a single platform that capitalises on the technologies, skills and vast ecosystem associated with Kubernetes. Through containerisation, Anthos allows users to modernise the application estate, industrialise development and deployment practices, and rationalise administration in heterogeneous environments. Anthos clusters are currently compatible with a large number of cloud and on-premises platforms, including AWS, Azure, VMware, etc.

  • AWS CDK, Assess

AWS Cloud Development Kit (AWS CDK) is a framework for developing infrastructure as code (IaC) in the AWS world. Starting from standard AWS components (constructs), AWS CDK allows you to define infrastructure resources using a familiar programming language (Python, Java, etc.). Executing these programs generates an AWS CloudFormation template, which deploys the desired resources in AWS. Unlike tools such as Pulumi or Terraform, which are designed to be multi-cloud, AWS CDK is deliberately dedicated to a single environment, for which it is perfectly optimised.

  • AWS Lambda, Adopt

AWS Lambda is the AWS serverless execution platform. Code is executed in response to predetermined events, using resources allocated and optimised by AWS, so the developer never has to worry about them. AWS Lambda supports a wide range of languages (Java, Python, Go, C#…) and also offers turnkey functions. A pioneer of the serverless concept back in 2014, AWS Lambda is a mature technology appreciated by developers, who can focus on business logic. It is particularly interesting for occasional and infrequent tasks because it avoids paying for resources that would otherwise mostly sit idle.
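
To make the model concrete, here is a minimal sketch of a Python Lambda handler; the event shape and field names are illustrative assumptions, not a fixed AWS schema:

```python
import json

def handler(event, context):
    # AWS invokes this function in response to a configured event
    # (API Gateway request, S3 upload, queue message...). Compute is
    # allocated by AWS only for the duration of the call.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

In AWS the function would be registered as, say, `module.handler`; locally it is just a function, which keeps unit testing trivial.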

  • AWS Outposts, Assess

For security, regulatory or performance reasons, some companies cannot, or do not want to, put all their data or applications in the cloud. AWS Outposts allows you to host a “zone” of the AWS cloud in your own data centre and benefit locally from services such as EC2, RDS, EMR, S3 or SageMaker without latency or additional connection costs. A recent and growing offering, Outposts originally takes the form of a dedicated, ready-to-use rack, which makes it very simple to implement and provides remarkable levels of technical compatibility and continuity of user experience. However, not all AWS services are offered, and its pricing model reserves this solution for certain very specific use cases, such as the processing of sensitive or very large data.

  • Azure Arc, Adopt

Whether it’s to spread operational risk, avoid reliance on a single supplier, or take advantage of the strengths of each platform, almost all organisations have now adopted multi-cloud strategies. Recognising this and the complexities it creates, leading cloud providers are offering customers single control planes that provide global visibility, streamline management practices and consolidate skills. With Azure Arc, Microsoft’s multi-cloud environment management solution, companies can control and optimise their entire infrastructure, manage and allocate the various resources (VMs, containers, etc.) in the best possible way, and apply uniform policies everywhere, particularly in terms of governance, compliance and security.

  • Azure Functions, Adopt

Azure Functions enables the development of serverless functions for applications that are intended to run in the Azure environment. The promise of serverless is that developers will not have to worry about technical issues and that resources will only be required – and therefore charged – when needed. This approach allows, for example, the rapid production of additional functionality for websites or mobile applications, but it does not offer all the guarantees needed for large-scale enterprise applications. Furthermore, while strict pay-per-use appears to be the most cost-effective approach, it is important to ensure that it remains so when scaling up. Combined with Azure DevOps (to manage versioning, deployment, etc.), Azure Functions is a very powerful tool, but it should be used with discretion.
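
In the classic programming model, each function is described by a small binding file rather than by plumbing code; the sketch below is a hypothetical `function.json` for an HTTP-triggered function (the auth level and methods are illustrative):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "anonymous",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

The function’s code then simply receives `req` and returns the response; scaling and billing follow actual invocations, which is exactly where the pay-per-use caveat in the text applies.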

  • Azure Stack Hub/Azure Stack HCI, Hold

For reasons of sovereignty, compliance or performance, using the cloud is not always possible or optimal, and local infrastructure is still required. Examples include branches subject to national regulations, production sites where IoT requires minimal latency, or environments with uncertain connections such as ships. In order to benefit from the power of the cloud in these situations, Microsoft offers a new generation of hyperconverged infrastructures. Azure Stack HCI allows you to deploy your applications and part of your services locally while connecting them to additional cloud services (AI, ML, BCP/DRP, backup, etc.). Azure Stack Hub is a set of hardware and software that allows you to recreate the equivalent of an Azure region in your data centre, and thus benefit locally from the main Azure services and the portability of Azure applications.

  • Cilium, Trial

Thanks to the Extended Berkeley Packet Filter (eBPF) technology, which allows it to intervene inside the Linux kernel in a secure way, the Cilium open source tool brings new possibilities around the connectivity of Kubernetes clusters. Cilium goes beyond the usual monitoring tools by allowing close observability of the functioning of clusters and their connections. It also allows network policies to be set at a very fine level of aggregation to improve performance and security, load balancing, on-the-fly encryption, service mesh, etc. Already adopted by Google and AWS, Cilium has been selected by the Cloud Native Computing Foundation (CNCF), where the project is being incubated. Cilium is an extremely promising technology, but it still needs to prove itself in an operational context.
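
To give an idea of the fine-grained policies mentioned above, here is a hedged sketch of a CiliumNetworkPolicy; the labels, port and path are illustrative assumptions:

```yaml
# Hypothetical L7-aware policy: only pods labelled app=frontend may issue
# GET /api calls to pods labelled app=backend on port 8080.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api"
```

Filtering by HTTP method and path goes beyond what classic Kubernetes NetworkPolicies express, and is made possible by eBPF running in the kernel datapath.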

  • Crossplane, Trial

Crossplane offers the ability to control all Kubernetes clusters from a single interface, regardless of their operating platform, thus unifying the way Kubernetes components are managed and assembled. Furthermore, in an Infrastructure as Code logic, Crossplane allows the creation of meta-objects to build, deploy or group sets of clusters. This greatly simplifies the work of infrastructure managers and, above all, allows them to provide developers with higher-level objects, accessible in self-service via APIs, through which they can exploit the full power of Kubernetes without being specialists. Crossplane is currently being incubated by the Cloud Native Computing Foundation (CNCF) and is extremely promising, but it has yet to prove itself at scale.

  • Dapr, Assess

For distributed applications to become widespread, developers must be able to continue to use their preferred language and concentrate on functionality without worrying about technical issues. Dapr (Distributed Apps Runtime) meets this dual challenge by dissociating the business logic from the technical aspects, which are grouped together in a “sidecar”. Linked to the application itself via APIs, the sidecar is a portable execution environment where exchanges between containers (service calls, events, etc.), observability, state management, secret management, etc. are managed. This promising approach was initiated at Microsoft and is being incubated at the Cloud Native Computing Foundation (CNCF).
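
The sidecar pattern can be sketched in a few lines of Python: the application only ever talks to its local Dapr sidecar over HTTP (port 3500 by default), which then handles discovery, retries and encryption. The app id and method names below are illustrative assumptions:

```python
import json
import urllib.request

DAPR_PORT = 3500  # default HTTP port of the local Dapr sidecar

def invoke_url(app_id: str, method: str) -> str:
    # Dapr's service-invocation route: the caller addresses the target
    # service by logical app id, never by host or IP.
    return f"http://localhost:{DAPR_PORT}/v1.0/invoke/{app_id}/method/{method}"

def invoke(app_id: str, method: str, payload: dict) -> bytes:
    # Requires a running sidecar next to the application; shown as a sketch.
    req = urllib.request.Request(
        invoke_url(app_id, method),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Because the business code only depends on this local HTTP contract, the same application can run on any infrastructure where a sidecar is present.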

  • Flexera, Trial

Thanks to the acquisition of RightScale in 2018, Flexera now makes it possible to manage all of an organisation’s IT assets and licences (on premise and cloud) and cloud services (catalogues, contracts, consumption, etc.) via a single SaaS solution called Flexera One. This makes it possible to optimise financial management globally, consolidate and rationalise the consumption of services by the various entities, establish precise re-invoicing, and monitor contractual developments and budgetary commitments in a predictive manner. However, the relevance of the dashboards and optimisation suggestions depends on the quality of the definition and organisation of the information gathered from the cloud providers.

  • Google Cloud Functions, Adopt

GCP Cloud Functions is Google’s serverless execution environment. There are two ways to trigger the execution of a function: either by calling it via a standard http request or by making it conditional on the occurrence of an event. It then runs in an environment specific to the chosen programming language (to date, Node.js, Python, Go, Java, .NET, Ruby and PHP) and only the computing resources actually used are invoiced. GCP manages the infrastructure, scales it according to the load, and provides network, security and monitoring functions. The new version of Cloud Functions (dubbed 2nd Gen), which was introduced in early 2022, provides for more than 90 potential triggering events and reinforces the available infrastructure (CPU, RAM, maximum duration, number of concurrent executions) for each function.
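
An HTTP-triggered function is, in essence, an ordinary Python function receiving a Flask-style request; the sketch below assumes only an `args` attribute on that request, so it can be exercised locally with a simple stub:

```python
def hello_http(request):
    # Cloud Functions passes a Flask request object; only request.args is
    # used here, so the function stays easy to test outside GCP.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

Deployed with an HTTP trigger, the function is billed only for the compute actually consumed by each invocation; with an event trigger, the same shape of function would receive the event payload instead.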

  • HashiCorp Consul, Trial

In distributed, hybrid, multi-cloud environments, exchanges between services can quickly become inextricable. Open source and agnostic, HashiCorp Consul allows you to connect, configure and secure services running on heterogeneous and dynamic infrastructures including, for example, virtual machines (VMs), containers, and/or different orchestrators (Kubernetes, Docker Swarm…). In particular, HashiCorp Consul enables service discovery by maintaining a centralised and dynamic registry where the list of services, their location and their health status are recorded in real time. HashiCorp Consul also controls access to services and ensures that exchanges between services are authorised and properly encrypted. Finally, HashiCorp Consul allows you to automate certain network tasks such as load balancing.
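
Service discovery rests on registrations like the hedged sketch below, an agent service definition in which the name, port and health endpoint are illustrative:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "tags": ["v1"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

The local agent keeps the central registry up to date with the result of the health check, so unhealthy instances automatically drop out of discovery and load balancing.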

  • HashiCorp Nomad, Assess

HashiCorp Nomad is a scheduling and orchestration tool for all types of workloads: Docker containers, microservices, legacy and batch applications, etc. Lightweight, flexible and agnostic, HashiCorp Nomad allows companies to easily deploy and manage all of their applications on a single infrastructure and via a single process. In particular, HashiCorp Nomad allows existing applications to benefit from orchestration without the need for rewriting, and to move towards containerisation without having to invest heavily in Kubernetes.
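
A Nomad job is declared in HCL; the sketch below (image, counts and resource figures are illustrative assumptions) shows how an existing dockerised application can be orchestrated without any Kubernetes investment:

```hcl
# Hypothetical job: two dockerised nginx instances scheduled by Nomad.
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    count = 2

    network {
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.25"
        ports = ["http"]
      }

      resources {
        cpu    = 200 # MHz
        memory = 128 # MB
      }
    }
  }
}
```

Swapping the `driver` (e.g. to `exec` or `java`) is how legacy, non-containerised workloads join the same scheduling process.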

  • HashiCorp Terraform, Adopt

HashiCorp Terraform is an open source Infrastructure as Code (IaC) development tool. Based on its own language (HashiCorp Configuration Language, HCL), it allows the creation, modification and versioning of an infrastructure, with the user describing the final state of the desired infrastructure and Terraform generating and executing the plan to achieve it. To do this, Terraform relies on two key concepts: providers, which establish gateways to the resources to be used, and modules, which are reusable infrastructure components. The repository of providers and modules developed and made available by the community currently includes more than 1,800 providers and over 8,400 modules. In particular, HashiCorp Terraform allows for the flexible, secure and standardised implementation and evolution of hybrid and multi-cloud infrastructures.
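
The declarative approach described above looks like this in practice; a minimal, hypothetical configuration in which the provider version, region and bucket name are illustrative:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

# Only the desired end state is described; `terraform plan` computes the
# steps and `terraform apply` executes them.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"
}
```

Community modules plug into the same file via `module` blocks, which is how the reusable components mentioned above are consumed.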

  • HashiCorp Waypoint, Assess

Deploying applications in the cloud is often a source of complexity and frustration for developers. HashiCorp Waypoint is a tool that allows them to specify, in a very simple and abstract way, their deployment needs for any application on any platform (Kubernetes, EC2, Azure Container Instances, Google Cloud Run, HashiCorp Nomad…). Waypoint makes it possible – with a single command and from a single configuration file – to build, deploy and release the application on the chosen environment. A relative newcomer to the HashiCorp suite, Waypoint could be particularly useful for small development teams that have no infrastructure expertise and do not need the procedures of large organisations, or for larger organisations that wish to normalise their workflows across multiple deployment targets.

  • Helm, Adopt

Manually defining, deploying and managing applications on Kubernetes can quickly become very complex and time-consuming, hence the interest in using a package manager such as Helm. Based on pre-configured, configurable templates gathered in a single descriptive package (a chart), Helm allows one to easily create and manage all the resources attached to the Kubernetes cluster and necessary for its operation (pods, services…). The chart principle greatly facilitates the updating, sharing and versioning of clusters, and thus the management of the application lifecycle. Open source, Helm is supported by the Cloud Native Computing Foundation (CNCF), which classified it as a graduated project in 2020.
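
Inside a chart, resources are templates parameterised by values; a fragment of a hypothetical `templates/deployment.yaml` illustrates the principle (names and value keys are illustrative):

```yaml
# Values such as replicaCount and image come from the chart's values.yaml
# and can be overridden per release, e.g.
#   helm install web ./mychart --set replicaCount=3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Because every environment differs only in its values file, upgrades and rollbacks become `helm upgrade` and `helm rollback` rather than hand-edited manifests.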

  • Istio, Adopt

By bringing flexibility and lightness to applications, containers make it possible to take better advantage of the cloud, but they remain complex technical tools. As an infrastructure layer implemented directly alongside the application, a service mesh makes it possible to control data exchanges between microservices and to configure and manage the various technical services (discovery, performance, access, etc.) via APIs. This simplifies container environments and, above all, the technical aspects can be managed at the application level without strong skills, or even automatically, from templates defined by the architects. Open source and integrated with OpenShift, Istio is the oldest and most mature service mesh, and big players such as Netflix, Spotify and Twitter attest to its robustness. For companies that have generalised microservices, it is already a must-have.
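
Typical of what the mesh makes possible without touching application code is canary routing; in this hedged sketch the service name, subsets and weights are illustrative, and the `v1`/`v2` subsets would be defined in an accompanying DestinationRule:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90 # 90% of traffic stays on the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10 # 10% canary traffic
```

Shifting the weights progressively is then a configuration change applied by the mesh, not a redeployment of the services themselves.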

  • Kind, Adopt

Kind is an open source tool that allows you to create Kubernetes clusters on local Docker nodes very quickly and easily. An alternative to Docker Compose for local environments, Kind is itself developed in Go. Clusters can be destroyed immediately after use and, since they consume no cloud resources, require neither management nor cost. Kind is an extremely convenient and useful productivity solution for developers when building POCs, prototyping, or testing new tools or features, to name a few examples.
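
A cluster layout is described in a small YAML file; the sketch below declares a hypothetical three-node cluster running entirely in local Docker containers:

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

`kind create cluster --config kind-config.yaml` brings the cluster up, and `kind delete cluster` removes every trace of it once the test or POC is done.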

  • Knative, Trial

Serverless architectures have a dual benefit: letting development teams focus on business issues and reducing future execution resources (and their cost) to the bare essentials. But packaging a serverless function in a container is a tricky task… unless a layer of abstraction masks this complexity! This is the principle behind Knative, a framework designed to deploy, run and manage serverless applications on Kubernetes. Knative’s components notably manage the rapid deployment and automatic scaling of containers according to need, and the events that trigger them. Created by Google but open source, Knative is up against proprietary competitors (AWS Lambda, Azure Functions, etc.) in a segment that is still in its early stages but is set to become an important DevOps component.
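
The abstraction takes the form of a single Knative Service resource; from a sketch like the one below (the image is the well-known Knative sample, the env var illustrative), Knative derives the underlying Deployment, revisions, routing and scale-to-zero autoscaling:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "world"
```

When no requests arrive, the containers are scaled down to zero; the first incoming request causes Knative to spin them back up.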

  • Kubecost, Trial

Even if cost reduction is not necessarily the primary objective of cloud-native architectures, it certainly remains an essential parameter to monitor. Founded in 2019 by former Google engineers, Kubecost provides real-time visibility on the costs specific to the operation of Kubernetes clusters (memory, CPU, etc.) and the external resources they call upon (databases, cloud services, etc.). These costs can be managed at the level of each cluster or consolidated by application, project, team or department in order to facilitate rebilling and management. Finally, Kubecost offers alerts and optimisation paths to prevent the multiplication of clusters from resulting in an explosion of budgets.

  • Kubernetes, Adopt

In an extremely competitive technological landscape, some solutions are nevertheless establishing themselves as absolute standards. One such example is the Kubernetes container orchestration platform (abbreviated to K8s), which enables a multitude of containers to be deployed, maintained and scaled in an automated manner, independently of cloud providers. Originally developed by Google and now open source, Kubernetes has established itself as the irreplaceable backbone of cloud-native architectures, earning it the nickname “OS of the cloud”. Adopted and supported by all the big tech players, surrounded by an increasingly rich ecosystem of complementary solutions (security, management, etc.), Kubernetes still remains a complex tool, requiring a certain technical maturity. Devoteam currently has more than 200 certified employees to support its customers.
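
The declarative model at the heart of Kubernetes can be illustrated with a minimal Deployment manifest; the names, image and replica count are illustrative:

```yaml
# Kubernetes continuously reconciles the cluster towards this state:
# three replicas of the container, replaced automatically if they fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, the same manifest works unchanged on any conformant cluster, which is precisely the provider independence described above.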

  • Lens, Trial

Lens offers a visual environment that allows users to manage all their Kubernetes clusters (AWS EKS, Azure AKS, Google GKE, OpenShift, etc.) in a user-friendly and centralised manner. One can deploy and manage one’s clusters directly from the console, while dashboards allow users to monitor their status (capacity, load, performance, etc.) and that of the various resources (pods, deployments, namespaces, network, etc.). In particular, this allows problems to be spotted quickly and the origin of the problem to be understood. An open source tool launched in 2020, Lens is already a huge hit with developers and SysOps engineers, as it greatly simplifies the management of Kubernetes landscapes, the complexity of which can sometimes hinder adoption.

  •, Trial

With increasingly heterogeneous and fragmented infrastructures spread across multiple clouds or sites, it is becoming more and more complex to maintain overall visibility, apply harmonised governance rules, standardise processes and finely control costs. The platform allows you to manage all resources from a single interface: public and private clouds, containers, virtual machines, bare metal, etc. The console offers a complete and detailed view of the infrastructure and its components, and allows you to industrialise management and usage reports. It allows you to create and manage a centralised repository of automation recipes, and to segment assets and actions according to user and team roles. Open source and agnostic, the platform itself remains independent of its suppliers, particularly cloud providers.

  • Pixie, Trial

Pixie is an open source Kubernetes cluster observability tool that uses Extended Berkeley Packet Filter (eBPF) technology, which allows safe instrumentation inside the Linux kernel, to automatically collect highly detailed operational data in real time. This information provides insight into the state of the cluster, the behaviour of the resources and the performance of the code, which in turn enables debugging and optimisation of the code. Pixie was initiated by the monitoring platform New Relic and submitted to the Cloud Native Computing Foundation (CNCF), where the project is in its early stages (sandbox). Like Cilium, which also uses eBPF, Pixie offers very promising possibilities, but still lacks references.

  • Portainer, Adopt

To manage rapidly expanding container environments, it is less and less viable to intervene directly on the clusters: direct interaction with clusters or containers is complex, risky and time-consuming. Portainer is a centralised management platform for multi-cluster environments that allows developers to deploy, manage and maintain containerised applications easily and without having to become experts. Compatible with Kubernetes, Docker Swarm and Azure ACI, Portainer is based on recognised best management practices while allowing the possibility to create one’s own recipes and automate processes. Supported by a large and very active community, Portainer has proven to be a reliable and valuable tool for all organisations moving towards large-scale multi-cloud and containerised environments.

  • Pulumi, Assess

Defining itself as a cloud engineering platform, Pulumi offers to use the usual software development practices and tools to create, deploy and manage cloud infrastructures according to the “infrastructure as code” principles. Unlike Terraform, for example, which has its own language, Pulumi accepts Node.js, Python, Go and .NET for infrastructure coding. In addition to broadening programming possibilities with the use of loops and conditions, this approach allows the IT organisation to capitalise on unified skills, methods and tools (IDE, tests, CI/CD pipeline, etc.). Developments are faster, better controlled, and collaboration between teams is more fluid.

  • Puppet, Hold

Puppet is a configuration management tool that automates the setup and maintenance of infrastructures as the applications they support evolve. Puppet appeared in 2005 and is one of the most mature and popular solutions in this crowded segment. Widely validated in traditional environments, Puppet suffers from the competition of more recent alternatives, born with the distributed architectures of the cloud and the concept of Infrastructure as Code. The choice of such a tool will therefore depend on the maturity of the company and the trajectory it foresees for its infrastructure.

  • Sovereign Cloud, Assess

With the rise of national rules, the supraterritoriality of the cloud poses increasing compliance and security challenges. For companies, sovereignty is achieved through the “trusted cloud”, which adds various guarantees to the services (IaaS, PaaS, CaaS, SaaS), particularly concerning the location of data and protection from extraterritorial legislation. However, depending on its activities, location and strategy, each company has its own perception of sovereignty. For each scope (application, business, geographical area, etc.), it is therefore necessary to establish a risk matrix to select the appropriate services from the cloud providers’ catalogues, as well as the ways of operating and securing them.

  • Spinnaker, Trial

Deploying applications in a multi-cloud environment can quickly get very complex and become the bottleneck of a DevOps approach. Originally developed by Netflix and now open source, Spinnaker allows you to create a single deployment pipeline that is compatible with all major cloud platforms. This makes the deployment process much easier to manage, but also allows it to be separated from continuous integration, enabling teams to work at their own pace. With broad industry support (Google, Microsoft, IBM, etc.) and a very active community, Spinnaker has quickly established itself as the most solid, agile and rich deployment platform. It is primarily designed for companies that already have a good DevOps practice and that want to accelerate their developments in multi-cloud environments.

  • VMware Cloud Foundation, Trial

VMware Cloud Foundation is a software suite that enables you to deploy private clouds, locally driven hybrid clouds, or VMware infrastructures on most public clouds (AWS, Azure, GCP, IBM, OVH…). Based on the technologies, principles and tools common to VMware solutions, VMware Cloud Foundation allows you to capitalise on the knowledge of your teams and service providers, which facilitates and secures the adoption of the cloud prior to a possible application redesign. The most common use cases are the implementation of cloud bursting, the implementation of disaster recovery and business continuity plans (DRP/BCP), and the closure of ageing data centres with a complete, like-for-like migration of the virtualised infrastructure.