
ISJ Exclusive: The endless possibilities of composability

Experts at leading mass data storage solutions provider Seagate discuss the benefits of composability in private cloud architecture.

Putting together a private cloud data centre is like building a sports car from scratch. You need to pick the right engine, parts and equipment to meet the performance demands of the road and driver. Thanks to hardware and software innovations, IT architects can compose their data centres to perform more like a Lamborghini and less like yesterday’s clunker.

On-premises private clouds require constant oversight, management and maintenance. Operational costs, such as replacing drives or overprovisioning, can add up quickly over time. This raises the total cost of ownership (TCO) of data centres and eats into a company’s bottom line.

Those issues are precisely why what Seagate calls the ‘composability megatrend’ is now underway. In the modern, composable data centre, each private cloud server is made up of components that are disaggregated yet interconnected with optimal fabric types and bandwidth.

IT architects must decide whether to use an on-premises private cloud solution or a public/private cloud hosted by a third party. Public clouds share computing services among different customers and are hosted at an external data centre; private on-premises clouds are managed and maintained by a company’s own data centre and do not share services with other organisations.

As the demands of applications increase or decrease, components and servers communicate with one another to shift the workload. IT architects can now fashion data centres with a wider array of hardware and components from various manufacturers. In essence, they’re taking apart – or disaggregating – the traditional data centre infrastructure.

Limitations of the traditional data centre

Traditional IT architecture is reaching its limits due to exponential data growth and the increasing complexity of software applications. CPUs, dynamic random-access memory (DRAM), storage class memory (SCM), GPUs, SSDs and HDDs are among the critical components that make up a data centre. These components are typically housed together in one box or server and are the foundation upon which the data centre is built.

Data centres originally operated under a one-application-per-box paradigm. When applications outgrew the storage and data processing capabilities that single servers could provide, IT architects began grouping multiple servers into clusters.

Data centres could then scale up to meet the needs of software applications as they grew in complexity: if an application required more storage, bandwidth or CPU power, additional servers or nodes could be added to the cluster. This clustered model of pooled resources forms the basis of what we now know as converged and hyperconverged enterprise cloud infrastructure, built on enterprise hypervisor applications such as VMware.

Node clusters served their purpose in the cloud’s infancy but are prone to overprovisioning: IT architects purchase more servers than are needed, and those servers inevitably contain resources of one kind or another that never get used. Although the clustered approach has its benefits, unused resources within servers are inefficient. Even so, architects had to rely on overprovisioning to meet scale, since there was no way for data centres to dynamically scale only the specific resources a software application demanded.

IT architects were also limited in which hardware components they could use to build a server. Hardware for each server or cluster had to be purchased from a single manufacturer for compatibility reasons, and there were no open APIs to help hardware from different manufacturers communicate and coordinate. If architects wanted to swap a CPU for a faster one, they were often out of luck due to incompatibility.

Hardware incompatibilities aren’t the only thing straining traditional data infrastructure. There’s also the issue of the massive amounts of data that need to be collected, stored and analysed. The explosion of big data is not only pushing the storage limits of traditional private cloud clusters, it’s also creating a data processing bottleneck.

Complex AI applications are predicated on the ability to process large amounts of data in a short amount of time. When an AI application is utilising a clustered data centre, bottlenecks in data collection and processing tend to occur. The proprietary era is coming to an end—if it hasn’t already.

How applications impact the private cloud data centre

One of the biggest forces driving the composability megatrend is the demands of software applications. Software such as AI or business analytics carries an increasingly complex set of hardware requirements specific to its needs, creating fierce competition for pooled storage and processing resources.

Application requirements evolve constantly over time and changes can take place quickly. The new version of a business app, for example, might demand twice the storage or processing power of the old one. Composability gives applications access to resource pools outside of their dedicated cluster, unlocking the processing power – or other resource – available within overprovisioned servers.

Additional processing demands also create bottlenecks within the traditional data centre fabric, the layer that connects the various nodes and clusters. Ideally, a composable fabric that meets the needs of modern software applications should create a pool of flexible fabric capacity. This fabric should be instantly configurable, provisioning infrastructure and resources dynamically as the processing needs of an application increase.

Composability and disaggregation are essential to meeting the demands of advanced software applications. By disaggregating components within a server box and giving them a way to communicate with API protocols, data centres can serve complex apps in a cost-efficient manner.

The benefits of private cloud disaggregation

Disaggregating a private cloud data centre means completely doing away with the traditional server box model. Resource components can all be disaggregated and re-composed à la carte within their proper fabrics. These resources can then be utilised based on what a specific application needs.

The storage resource pool that an application draws from might consist of HDDs in ten different server racks across a data centre. If the application needs more storage than it’s currently using, one HDD can simply communicate with another that has free space and transmit data seamlessly. It’s a stark change from JBODs (just a bunch of disks) confined to a single server rack: JBODs evolved into pools that applications can call on at any time, and architects then began turning toward standardised external storage devices that could communicate with one another.

Disaggregation also introduces standardised interface monitoring and allows IT architects to manage an entire composable data centre. Selecting requirements-based hardware is only one part of disaggregating the traditional data centre and shifting toward creating a composable one. Architects still require the correct open API protocols for seamless integration and a single user interface to manage the data centre.

Letting the application define how the data centre is composed, instead of vice versa, results in software-defined networking (SDN) and software-defined storage (SDS). The next evolution of SDN and SDS is the hyper-composed data centre. This could put private cloud architecture on par with some of the hyperscalers like Amazon Web Services (AWS) and Microsoft Azure. Hyperscalers are large data centre providers that can increase processing and storage on a massive scale.

Open API protocols such as Redfish and Swordfish are critical to making all the disaggregated components work in harmony. Seagate also maintains its own legacy REST API for its specific class of data centre products, promoting interoperability between devices.
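To make the idea concrete, below is a minimal sketch of how a management tool might walk a Redfish/Swordfish REST endpoint to list systems and their attached drives. The endpoint address and credentials are placeholders invented for illustration; only the /redfish/v1/Systems collection and its standard links come from the published DMTF schemas, and real deployments may lay out their storage resources differently.

# Minimal sketch: enumerating storage behind a Redfish/Swordfish endpoint.
# The BASE address and credentials are hypothetical placeholders.
import requests

BASE = "https://10.0.0.42"      # hypothetical management controller
AUTH = ("admin", "password")    # placeholder credentials

def get(path):
    """GET a Redfish resource and return its decoded JSON body."""
    r = requests.get(f"{BASE}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# Walk the standard Systems collection and list each system's drives.
for member in get("/redfish/v1/Systems").get("Members", []):
    system = get(member["@odata.id"])
    print(system.get("Id"), system.get("Model"))
    storage = get(system["Storage"]["@odata.id"])
    for entry in storage.get("Members", []):
        controller = get(entry["@odata.id"])
        for drive_ref in controller.get("Drives", []):
            drive = get(drive_ref["@odata.id"])
            print("  drive:", drive.get("Id"), drive.get("CapacityBytes"))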

Efficiency of the modern composable data centre for private cloud

Data centre composability not only connects all the disaggregated components, it also helps improve key performance indicators (KPIs) for the cost efficiency and performance of data centres. When data centres are overprovisioned, servers and resources are paid for but go unused, while other resources, such as hard drives, end up underprovisioned.

Underprovisioned resources result in what are known as orphan pools. These can include CPU, GPU, field programmable gate array (FPGA), dynamic random-access memory (DRAM), SSD, HDD or storage class memory (SCM). These building blocks get dynamically composed into application-specific hardware through software APIs.
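As a toy illustration of that composition step, the sketch below picks free resources from disaggregated pools to assemble a logical server for one application. The pool contents and the compose() helper are invented for this example and do not correspond to any vendor’s actual API; in practice the same request would be expressed through Redfish/Swordfish or a vendor’s composition interface.

# Toy sketch: composing application-specific hardware from orphaned,
# disaggregated resource pools. All names and figures are invented.
from dataclasses import dataclass

@dataclass
class Resource:
    kind: str        # "CPU", "GPU", "HDD", "SSD", "DRAM", ...
    capacity: int    # cores, GB, and so on
    in_use: bool = False

pools = {
    "CPU":  [Resource("CPU", 32), Resource("CPU", 64, in_use=True)],
    "HDD":  [Resource("HDD", 16_000), Resource("HDD", 8_000)],
    "DRAM": [Resource("DRAM", 256)],
}

def compose(requirements):
    """Claim the first free resource in each pool that meets the request."""
    claimed = []
    for kind, needed in requirements.items():
        free = next(
            (r for r in pools[kind] if not r.in_use and r.capacity >= needed),
            None,
        )
        if free is None:
            raise RuntimeError(f"no free {kind} with capacity >= {needed}")
        free.in_use = True
        claimed.append(free)
    return claimed

# An analytics app asking for 16 cores, 12 TB of HDD and 128 GB of DRAM.
print(compose({"CPU": 16, "HDD": 12_000, "DRAM": 128}))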

Before composable, open source data centre architecture arrived, private clouds were typically built with hardware from a single vendor. Such physically connected data centres cost more up front, and organisations are often locked into a vendor-specific architecture that becomes costly over time.

The composability trend began to gain traction as part of public cloud architectures. A similar approach can be taken in terms of private cloud deployments, saving monetary resources by avoiding vendor lock-in and composing data centres with devices from multiple vendors.

Organisations can now take a third-party storage solution, drop it into their data centre and seamlessly integrate it with an open source API. If an IT architect wants an SSD from one manufacturer but their data centre is built from another manufacturer’s components, that SSD can be added without much headache.

A composable architecture also allows for the faster collection and processing of data from multiple sources and, overall, a more efficient use of installed resources. Composable data source architecture typically includes physical sensors, data simulations, user-generated information and telemetry communications.

A single view of all resources

Orchestrating and managing a private cloud data centre is also easier with a composable architecture using a containerised orchestration client. All hardware can be disaggregated, composed and monitored from a single interface, and data centre managers get a clear, real-time picture of which resource pools are being utilised, ensuring there is no overprovisioning. In many cases, management will be conducted with standard software, either proprietary or open source.
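A sketch of what that single view might boil down to is below: a summary of utilisation across the disaggregated pools so that stranded capacity stands out. The figures are invented for illustration; a real dashboard would pull them from Redfish/Swordfish telemetry or the orchestration client’s own API.

# Sketch: one consolidated view of pool utilisation. All numbers are
# invented; real values would come from the data centre's telemetry.
pools = {
    "CPU":  {"total": 512,   "allocated": 430},    # cores
    "HDD":  {"total": 400,   "allocated": 180},    # TB
    "SSD":  {"total": 120,   "allocated": 115},    # TB
    "DRAM": {"total": 8_192, "allocated": 5_000},  # GB
}

for kind, pool in pools.items():
    used = pool["allocated"] / pool["total"]
    flag = "  <-- large unused share, possible overprovisioning" if used < 0.5 else ""
    print(f"{kind:5s} {used:6.1%} utilised{flag}")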

Deployment of new software applications in an open source, containerised environment is more flexible as well, especially due to how flexibly storage resources can be purchased and laid out.

Creating more raw storage capacity is becoming necessary due to the explosive growth of raw data. Organisations are striving to extract more value and actionable insights from that data to unlock new opportunities that enhance the bottom line. What’s crucial is that IT architects are able to design infrastructures that can collect, store and transmit massive amounts of data. The solution is the composable data centre, which connects any CPU and any storage pool with other devices as needed.

The disaggregation and composability trend in data centres is being driven by cost, performance and efficiency. Gone are the days of the CPU being the centre of the known universe, with all other devices being stuffed into the same box along with it. Private cloud architects can now choose the most appropriate devices, hardware and software based on use case and specific needs.

Composability means that processing workloads are distributed in real time, sharing the burden with underutilised devices and eliminating orphan pools. The result is a fully orchestrated private cloud infrastructure that processes workloads faster and costs less to operate.

For more information, visit: www.seagate.com

This article was originally published in the special September show edition of International Security Journal.
