AWS releases the super-slow S3 Glacier Deep Archive storage, new M5ad and R5ad instances, and deep learning containers

If you own a petabyte of data stored in AWS S3 Glacier Deep Archive, it may take up to 48 hours to get it back. AWS announced a slew of new products at the company's Santa Clara Summit yesterday. Among them is the technology giant's new cold storage option, S3 Glacier Deep Archive. There are also new M5ad and R5ad instances, as well as the new AWS Deep Learning Containers.

Slow, like a glacier, get it?

Companies in financial services, healthcare, and other sectors are frequently required to retain data for long periods of time. It is also common for media companies to maintain backup copies of their intellectual property. These datasets are often very large, spanning several petabytes, yet typically only a small portion of the data is accessed, perhaps once or twice a year. For organizations that need large amounts of storage with infrequent access, providers like AWS offer a cold storage option such as Glacier. The storage class is designed for 99.999999999% (eleven nines of) durability, and data can be retrieved within 12 hours or less.
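In practice, a cold storage class like this is usually adopted through a bucket lifecycle rule that automatically moves aging objects into the archive tier. Here is a minimal sketch using boto3; the bucket name, key prefix, and 90-day threshold are illustrative assumptions, not details from the announcement, and actually applying the rule requires AWS credentials:

```python
# Sketch: an S3 lifecycle rule that transitions rarely accessed objects
# into the DEEP_ARCHIVE storage class after a set number of days.
# Bucket name, prefix, and the 90-day threshold are placeholders.

def build_deep_archive_rule(prefix, days):
    """Build one S3 lifecycle rule transitioning objects to DEEP_ARCHIVE."""
    return {
        "ID": f"archive-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "DEEP_ARCHIVE"}],
    }


if __name__ == "__main__":
    import boto3  # assumed available: pip install boto3

    s3 = boto3.client("s3")
    # Apply the rule to the (placeholder) bucket; objects under backups/
    # older than 90 days move to Deep Archive automatically.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-archive-bucket",
        LifecycleConfiguration={"Rules": [build_deep_archive_rule("backups/", 90)]},
    )
```

Once objects have transitioned, they can no longer be read directly; they must first be restored, which is where the multi-hour retrieval times come in.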

S3 Glacier Deep Archive offers a bulk retrieval option that lets customers retrieve petabytes of data within 48 hours. All objects stored in S3 Glacier Deep Archive are replicated across at least three geographically dispersed Availability Zones. "We have customers who have exabytes of storage locked away on tape, who are stuck on tape infrastructure for the rare event of data retrieval. It is difficult to do, and this data is not close to the rest of their data if they want to do analytics and machine learning on it," said Mai-Lan Tomsen Bukovec, VP of Amazon S3.

The company also released M5ad and R5ad instances, both powered by custom AMD EPYC 7000 series processors and built on the AWS Nitro System. These are priced lower than the comparable EC2 M5 and R5 instances. AWS also released its Deep Learning Containers, which offer deep learning training and inference using TensorFlow or Apache MXNet.
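The 48-hour figure corresponds to the Bulk retrieval tier: a restore is requested asynchronously, and the object becomes readable once the job completes. A minimal sketch with boto3 follows; the bucket and key names are placeholders, and running the call for real requires AWS credentials and an object already in the DEEP_ARCHIVE storage class:

```python
# Sketch: restoring an object from S3 Glacier Deep Archive with boto3.
# Deep Archive supports the Standard retrieval tier (data back within
# 12 hours) and the Bulk tier (within 48 hours, lowest cost).

def build_restore_request(days, tier="Bulk"):
    """Build the RestoreRequest payload for s3.restore_object()."""
    if tier not in ("Standard", "Bulk"):
        raise ValueError("Deep Archive supports only Standard and Bulk tiers")
    # 'Days' is how long the restored copy stays available in S3.
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}


if __name__ == "__main__":
    import boto3  # assumed available: pip install boto3

    s3 = boto3.client("s3")
    # Kick off an asynchronous restore; poll head_object() on the same
    # key to see when the temporary copy is ready.
    s3.restore_object(
        Bucket="my-archive-bucket",      # placeholder bucket
        Key="backups/footage.tar",       # placeholder key
        RestoreRequest=build_restore_request(days=7, tier="Bulk"),
    )
```

The restore does not move the object out of Deep Archive; it creates a temporary readable copy for the requested number of days, after which only the archived original remains.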
