In part one of this blog series, we suggested there are scenarios where storage in close proximity to compute systems offers advantages over cloud options.
The landscape certainly is cloudy; that's the takeaway from RightScale's 2018 State of the Cloud Report.
Even so, high-performance storage close to the compute source is not only relevant but also much less complex and costly than it once was, thanks in large part to the emergence of software-defined storage. Just as cloud technologies rely on virtualized platforms, software-defined advancements enable optimized, scalable storage solutions that offer accessibility, control, data ownership, and security.
So what is software-defined storage? In the simplest terms, it is a layer of abstraction that hides the complexity of the underlying compute, storage, and in some cases networking technologies. It's a viable option as organizations continue to express concerns over the security, leak risk, capabilities, and capacity of cloud-based options.
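To make the abstraction idea concrete, here is a minimal Python sketch of a software-defined layer that presents one storage API to applications while routing data across different underlying tiers. All class names and the placement policy are hypothetical illustrations, not any vendor's actual API.

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Common interface: applications see one API regardless of hardware."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...


class FastTier(StorageBackend):
    """Stand-in for a fast local tier (e.g., NVMe close to compute)."""

    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def read(self, key: str) -> bytes:
        return self._store[key]


class CapacityTier(StorageBackend):
    """Stand-in for a slower, high-capacity tier."""

    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def read(self, key: str) -> bytes:
        return self._store[key]


class StoragePool:
    """The software-defined layer: hides tier details behind a policy."""

    def __init__(self, hot: StorageBackend, cold: StorageBackend,
                 hot_limit: int) -> None:
        self._hot, self._cold, self._hot_limit = hot, cold, hot_limit
        self._location: dict[str, StorageBackend] = {}

    def write(self, key: str, data: bytes) -> None:
        # Simple placement policy: small objects land on the fast tier.
        target = self._hot if len(data) <= self._hot_limit else self._cold
        target.write(key, data)
        self._location[key] = target

    def read(self, key: str) -> bytes:
        return self._location[key].read(key)


pool = StoragePool(FastTier(), CapacityTier(), hot_limit=1024)
pool.write("config", b"small payload")       # placed on the fast tier
pool.write("backup", b"x" * 4096)            # placed on the capacity tier
```

The caller never touches `FastTier` or `CapacityTier` directly; swapping hardware or changing the placement policy only changes the pool, which is the essence of defining storage in software.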
The broader trend is "software-defined everything," including software-defined networking and software-defined virtual functions. Hyperconvergence is next: a strategy that brings together infrastructure components such as compute, storage, virtualization, networking, and bandwidth onto a single platform and manages them in software.
To learn more about software-defined storage, contact the team at Dedicated Computing.