
Hyperconvergence and the Internet of Things: Part 1

November 27, 2017

Posted by: Zenobia Hegde


Hyperconvergence is a relatively new marketing term, says Bob Emmerson. Unfortunately, it can come across as a hyped-up version of regular voice/data convergence, which became an overused term many moons ago. Today it is used to indicate the convergence of computing, storage and networking resources, a significant development that’s enabled by a software-centric architecture and the application of virtualisation technology.

Having got that out of the way, why is it significant? Hyperconverged computing brings key benefits to IoT environments. The biggest is functionality that facilitates the trend towards network edge computing. Another is seamless, two-way interoperability between computing at the edge and private and public clouds, enabled by native cloud integration.

Virtualisation uses software to simulate hardware resources, and it is a key IT technology that is making its way into the OT domain. It allows IoT devices to become virtual machines (VMs) that run on industry-standard x86 servers, which means local computing resources can be consolidated, a key feature that enhances operational efficiency at the edge.
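To make the idea concrete, here is a minimal sketch of how an IoT gateway workload might be started as a VM on an x86 host using the libvirt Python bindings. It is an illustration only: the domain name, disk image path and resource sizes are placeholders, and it does not represent Eurotech's or VMware's actual products.

```python
import libvirt  # pip install libvirt-python; assumes a local libvirt/KVM host

# Placeholder definition: name, image path and sizing are illustrative only.
DOMAIN_XML = """
<domain type='kvm'>
  <name>iot-gateway-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/iot-gateway.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

def start_gateway_vm():
    # Connect to the local hypervisor, register the VM definition and boot it.
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(DOMAIN_XML)
        dom.create()
        print(f"started {dom.name()}")
    finally:
        conn.close()

if __name__ == "__main__":
    start_gateway_vm()
```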

Processing data close to the source enables intelligent gateways to analyse it locally and turn it into real-time, insightful business intelligence, allowing decisions to be made “in the moment”. In addition, it eliminates the need to send everything to a remote server for cloud-level analysis. For large-scale IoT deployments this is critical because of the sheer volume of data being generated.
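As a rough sketch of this edge-analytics pattern (not any vendor's implementation), the Python below keeps a window of local sensor readings, reacts immediately to anomalies and forwards only compact summaries upstream. The endpoint URL, alert threshold and sensor function are invented for illustration.

```python
import random
import statistics
import time

CLOUD_ENDPOINT = "https://example-cloud/ingest"  # hypothetical cloud endpoint
TEMP_ALERT_THRESHOLD = 80.0                      # assumed threshold, degrees C

def read_sensor():
    """Stand-in for a real sensor read; returns one temperature sample."""
    return random.uniform(60.0, 90.0)

def send_to_cloud(payload):
    """Stub for the uplink; a real gateway would POST to its cloud service."""
    print(f"uplink -> {CLOUD_ENDPOINT}: {payload}")

def run_gateway(window_size=60, interval_s=1.0):
    """Analyse readings locally, forward only summaries and anomalies."""
    window = []
    while True:
        sample = read_sensor()
        window.append(sample)

        # Act "in the moment" on anomalies without waiting for the cloud.
        if sample > TEMP_ALERT_THRESHOLD:
            send_to_cloud({"type": "alert", "value": sample})

        # Send a compact summary instead of every raw reading.
        if len(window) >= window_size:
            send_to_cloud({
                "type": "summary",
                "count": len(window),
                "mean": statistics.mean(window),
                "max": max(window),
            })
            window.clear()

        time.sleep(interval_s)

if __name__ == "__main__":
    run_gateway(window_size=10, interval_s=0.1)
```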

Eurotech has signalled plans to market edge server and IoT gateway products that are pre-installed with VMware virtualisation technology. This topic will be covered in a second hyperconvergence blog.

Hyperconverged secondary storage

Many sources predict exponential data growth rates, and there is broad agreement that data volumes double roughly every two years, with machine data increasing at an even faster rate. Cohesity, a Santa Clara, California-based company, indicates that legacy storage can’t keep pace and has addressed the issue by pioneering hyperconverged secondary storage: a Web-scale data platform that consolidates all secondary storage and data services.
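To put the doubling claim into numbers, a quick back-of-the-envelope projection helps; the 100 TB starting point below is purely illustrative.

```python
def projected_capacity(start_tb, years, doubling_period_years=2.0):
    """Capacity after `years` if data doubles every `doubling_period_years`."""
    return start_tb * 2 ** (years / doubling_period_years)

# Illustrative only: 100 TB of secondary data today, projected forward.
for year in (0, 2, 4, 6, 8, 10):
    print(f"year {year:2d}: {projected_capacity(100, year):8.0f} TB")
```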

That statement raises the question: what is the difference between primary and secondary storage? Cohesity’s answer is that primary storage is designed to support production, business-critical applications, providing high-throughput, low-latency transactions. Secondary storage is used for content such as archives, backups, file shares, analytics and archives of data held on public services. A survey conducted by IDC indicated that 70% of the capacity of most data centres is used for secondary storage.

In the secondary storage facility a data platform runs on clusters of hyperconverged nodes (industry-standard x86 servers). At the edge and at remote sites, distributed platforms run in VMs on shared physical servers. Having the platform in both locations enables remote sites to replicate all local data to the central data centre for secure backup and recovery. The result is a data fabric that extends from virtual machines consolidated at the edge through the secondary storage data centre and on to the cloud. This topic will be covered in a third hyperconvergence blog.
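As a toy illustration of the replication idea, assuming nothing about Cohesity’s actual platform, the sketch below copies new or changed files from a remote site’s local storage to a central backup location, so the data centre holds a recoverable copy of the edge data. The paths are placeholders.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash used to detect files that changed since the last sync."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def replicate(local_dir: str, central_dir: str) -> None:
    """Copy new or changed files from a remote site to central storage."""
    src, dst = Path(local_dir), Path(central_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for local_file in src.rglob("*"):
        if not local_file.is_file():
            continue
        target = dst / local_file.relative_to(src)
        # Skip files whose content already matches the central copy.
        if target.exists() and file_digest(target) == file_digest(local_file):
            continue
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(local_file, target)

if __name__ == "__main__":
    # Placeholder paths for a site's local data and the central backup store.
    replicate("/var/edge-data", "/mnt/central-backup/site-01")
```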

Part 2 of this blog follows tomorrow.

The author of this blog is Bob Emmerson, freelance IoT writer and commentator.

Comment on this article below or via Twitter @IoTGN