
In this podcast, we look at distributed cloud storage with Enrico Signoretti, vice-president of product and partnerships at Cubbit.

We talk about how storage has shifted to hybrid and multicloud modes, and how distributed cloud storage separates the control plane from the data plane so that data can be kept in multiple locations, on-site and in multiple clouds.

Signoretti also talks about how organisations that need to retain control over data – over costs and location, for example – can achieve that with distributed cloud, as well as talking about the workloads to which it is best suited.

Enrico Signoretti: So, I can start with why it is important right now and then delve into what it is and what it does.

It is important because we live in a moment where companies are shifting from traditional models, at the beginning, [to] just cloud, and then we discovered hybrid cloud, so keeping some of your IT stuff on-premise and some in the public cloud.

Then we were talking more and more about multicloud; most large enterprises have multiple clouds and multiple applications running in different environments.


So, from this point of view, a distributed cloud is a model that’s totally different to what we’re used to seeing in the market. The big hyperscalers do everything in single datacentres. So yes, you see the cloud, but everything is running in one datacentre or in a small, closed set of datacentres.

With the model of distributed cloud you separate the control plane from the data plane; something that happened in the past when we were talking about software-defined.

So, the service provider keeps control of this control plane . . . but resources can be used and deployed everywhere. They could be in the same public cloud environment that I mentioned before, or in your datacentre. So, you are building this distributed cloud.

More so, when it comes to storage and we talk about geo-distributed cloud, it means these resources are really distributed geographically: you can have some of your data in France, maybe, and other segments of the data in Italy or Germany, or even more distributed than that.
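To make the control plane/data plane split and the geographic spread of data a little more concrete, here is a rough Python sketch of the idea: a control-plane catalogue records which location holds each chunk of an object, while the chunks themselves could sit in different countries. The regions, chunk size and function names are hypothetical illustrations, not how Cubbit or any specific provider implements this.

```python
# Conceptual sketch only: the control plane keeps metadata about where each
# chunk of an object lives; the data plane (the chunks themselves) can be
# spread across countries. Regions and chunk size are hypothetical.
from itertools import cycle

REGIONS = ["fr-paris", "it-milan", "de-frankfurt"]  # example locations
CHUNK_SIZE = 4  # tiny, for demonstration; real systems use MB-sized chunks

def place_object(key: str, data: bytes) -> list[dict]:
    """Split data into chunks and record which region each chunk is assigned to."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    catalogue = []
    for index, (chunk, region) in enumerate(zip(chunks, cycle(REGIONS))):
        # In a real system the bytes would be shipped to a node in `region`;
        # here we only keep the control-plane metadata.
        catalogue.append({"key": key, "chunk": index, "region": region, "size": len(chunk)})
    return catalogue

if __name__ == "__main__":
    for entry in place_object("reports/q3.pdf", b"example object payload"):
        print(entry)
```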

This is the main concept, and it’s really important for everybody because it removes a lot of obstacles when it is time to work with the multicloud.

Signoretti: The main benefit of distributed cloud is control. You can have control at several levels. When you start thinking about distributed cloud there is no lock-in because you have the possibility to choose where you put your data.

There is data sovereignty as well as – we can call it – data independence. It’s not only data sovereignty that you achieve, but control over all the layers and all aspects of data management.

And this is very important because, even though most of the hyperscalers are very quick to respond to the new regulations popping up here in Europe and also in the US, it’s still a complex world, and for many organisations in Europe handing their data to that kind of organisation is not feasible.

The idea here is that with distributed cloud you get the level of sovereignty you need, but also control over cost and control over the policies applied to data management.
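One way to picture that control over policies is as a declarative policy attached to a dataset. The sketch below is purely illustrative: the field names are hypothetical and do not correspond to any real Cubbit or S3 API, but they show the kind of constraints (allowed countries, redundancy, cost ceiling) an organisation might want to enforce.

```python
# Hypothetical data-management policy; field names are illustrative only,
# not an actual provider API.
from dataclasses import dataclass

@dataclass
class DataPolicy:
    allowed_regions: list[str]            # where the data may physically reside
    min_copies: int = 2                   # redundancy across locations
    max_monthly_cost_eur: float = 500.0   # budget ceiling for this dataset
    encryption_required: bool = True

# EU-only placement for sovereignty, with three copies for resilience.
backup_policy = DataPolicy(allowed_regions=["fr-paris", "it-milan"], min_copies=3)
print(backup_policy)
```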

Maybe if we think about a comparison between the three models – on-premises, public cloud and distributed cloud – you can see that distributed cloud sits in the middle between the other two. On the one hand, you keep control of the entire stack, and on the other hand, you have the flexibility of the public cloud.

So, matching these two, you can have a very efficient infrastructure that is deployed and managed by your organisation while still keeping all the advantages of the public cloud.

Signoretti: You have to think of distributed cloud still as cloud. So, if you have a low-latency, high-performance workload for which you usually need the CPU [central processing unit] very close to the storage, that’s not for distributed cloud.

In that case, it’s way better to choose something that is on-premise or in the same cloud.

From my point of view, all other workloads are fine – from backup, disaster recovery, collaboration and even big data lakes to store huge amounts of data for AI [artificial intelligence] and ML [machine learning].

In most cases you can have good throughput. It’s just the latency that’s not there, but the same goes for the public cloud. This is probably the set of use cases that is best suited to distributed cloud.
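For the backup, disaster recovery and data-lake use cases mentioned above, access would typically go through an object storage API. The snippet below is a minimal sketch that assumes an S3-compatible endpoint; the endpoint URL, bucket name and credentials are placeholders, and whether a given distributed cloud provider exposes an S3 interface is an assumption here, not something stated in the podcast.

```python
# Minimal sketch: uploading a backup archive to an S3-compatible,
# geo-distributed object store. Endpoint, bucket and credentials are
# placeholders; the S3 interface itself is an assumption.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-distributed-cloud.eu",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Throughput-oriented workloads such as backups map well to this model.
s3.upload_file("backup-2024-06-01.tar.gz", "my-backups", "backup-2024-06-01.tar.gz")
```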

Source: www.computerweekly.com/podcast/Podcast-What-is-distributed-cloud-storage-and-its-benefits
