Smaller enterprises turned off by the high cost and complexity of hyperconvergence should take a close look at the Scale Computing HC3 virtualisation platform, which is designed specifically to match both their budgets and skill sets.
Pros:
- Ludicrously simple management
- No hypervisor licensing requirements
- Single highly-available storage pool
- Optional SSD tiering
- Fast VM creation and live migration
- Multiple snapshots
- Remote replication and disaster recovery

Cons:
- Usable storage capacity halved by SCRIBE redundancy technology
- No automatic balancing of workloads

Price: From £6,500 (ex. VAT) per node
Hyperconvergence promises scalable computing without the cost or complexity of a conventional server-plus-SAN infrastructure. The majority of HCI (Hyperconverged Infrastructure) products, however, are still beyond the reach of most small and medium-sized enterprises, and they continue to require technical expertise to manage — a resource in similarly short supply in the SME arena. That's not the case with Scale Computing's HC3 virtualisation platform, which is aimed squarely at the smaller enterprise and is designed to deliver key hyperconvergence benefits in an affordable and enviably manageable format.
It's all in the box
The hardware that makes up the HC3 package is based on good-quality off-the-shelf kit, with three product families to choose from. All 1U rack-mounted, these start with the SuperMicro-based HC1000 family, beyond which it's Dell all the way — for both the mid-range HC2000 line-up featured here and the HC4000 for more demanding buyers.
Servers are delivered fully assembled in the form of a pre-built cluster comprising a minimum of three nodes — that's three 1U servers in a stack — that can be further scaled by simply adding extra nodes when the need for extra performance or storage arises.
Adding extra nodes is a simple plug-and-go process: different models and specifications can be mixed and matched, with automatic discovery and immediate scaling of the shared compute and storage resources.
Scale Computing told us it had tested with up to 32 nodes in a cluster but recommends eight as a practical maximum, which should be plenty for most SME applications. The HC2100 cluster we looked at was installed in a successful manufacturing business and had four nodes that were more than enough for its particular workloads.
Each node has two network ports bonded together to connect to the LAN and two more to create a communication backplane for cluster management, migration of VMs between nodes, backup, remote replication and so on. On some models these will be a mix of Gigabit and 10GbE ports, while the more highly specified nodes use 10GbE throughout.
Suitable switch hardware is not included as part of the cluster, but can be provided by resellers or existing switches reassigned, as required.
The aim of the cluster is to enable hosted VMs to carry on working even in the event of the complete failure of one of the nodes. Much of this availability is delivered by the HyperCore operating system and distributed storage architecture, but it also relies on highly available hardware with lots of redundancy built in as standard, including the two ports used to create the backplane, which, in turn, are cabled to at least two switches. Hot-swap redundant power supplies are also included in each node and storage is all hot-swappable, but we'll cover that in more detail shortly.
On the computing front, Intel Xeon E5-2600 processors are employed across the board: the HC2100 we looked at used 2.4GHz E5-2620 v3 chips, equipping the cluster with 24 cores or 48 processing threads. At least 64GB of supporting DDR4 memory comes configured per node, but more can be specified and exactly what processor and RAM combination you get will depend on the number of VMs you need to support and the workloads involved. The plan, however, is for this to be worked out for you rather than customers having to size the nodes themselves. In fact, the customer whose equipment we looked at had only a vague idea of the specification beyond the number of cores and amount of RAM available across the cluster as a whole (which is clearly shown on the management console).
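As a sanity check on those numbers: the E5-2620 v3 is a six-core part with Hyper-Threading, so the quoted cluster totals work out as below. This is a back-of-the-envelope sketch assuming one such CPU per node, not a Scale Computing sizing tool.

```python
# Back-of-the-envelope compute tally for the reviewed four-node HC2100 cluster.
# The Xeon E5-2620 v3 is a 6-core/12-thread part; one CPU per node is assumed.
nodes = 4
cores_per_node = 6       # E5-2620 v3 core count
threads_per_core = 2     # Hyper-Threading

total_cores = nodes * cores_per_node
total_threads = total_cores * threads_per_core

print(f"{total_cores} cores / {total_threads} threads")  # 24 cores / 48 threads
```

The same arithmetic, run against whatever CPU and node count you are quoted, gives a quick view of the aggregate compute a cluster will report in the management console.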
Another big bonus is that no external storage is required to go with the HC3 nodes. There's no complicated SAN to cable in, configure and manage, and no storage appliances — just local drives fitted inside the individual nodes. Moreover, rather than each node managing its disks independently, the Scale Computing Reliable Independent Block Engine (SCRIBE) stripes data across all the disks in the cluster's servers, effectively creating a single pool of shared storage that can be quickly and simply assigned to new VMs when they are created. It also mirrors data to disks on other nodes to further enhance availability, and supports thin provisioning so that physical disk space is only consumed when data is written, making for a much more flexible setup.
For the most part the storage pool is hosted on magnetic disks, the review system featuring four 600GB 15K SAS disks in each of its nodes. However, the latest release of the HyperCore OS also adds SSD support, with automated tiering to ensure that the most frequently accessed data is kept on fast flash for maximum performance. Ours didn't have SSDs so we couldn't see this in action, but they can be retrofitted as well as specified when ordering new nodes.
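To put the mirroring overhead in concrete terms, here is a rough capacity calculation for a cluster like the one reviewed (four nodes, four 600GB disks per node). The figures are illustrative only; real-world usable capacity will also depend on metadata and formatting overheads.

```python
# Rough usable-capacity estimate for an HC3-style cluster where SCRIBE
# keeps two copies of every block, halving usable space (illustrative figures).
nodes = 4
disks_per_node = 4
disk_gb = 600            # 600GB 15K SAS disks, as in the review system
replicas = 2             # mirrored data means two copies of everything

raw_gb = nodes * disks_per_node * disk_gb
usable_gb = raw_gb / replicas

print(f"Raw capacity:    {raw_gb} GB")        # 9600 GB across the cluster
print(f"Usable capacity: {usable_gb:.0f} GB") # 4800 GB available to VMs
```

Thin provisioning softens the blow somewhat, since that 4800GB is only consumed as VMs actually write data rather than when virtual disks are created.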
Behind the mask
Enough of the hardware, which is functional but far from spectacular. It's the KVM-based HyperCore operating system that really marks the Scale Computing solution as different. This is delivered pre-installed, booting from disk to provide a complete turnkey solution that can be managed through a simple browser-based console.
Ease-of-use is a much-abused term, but Scale Computing seems to have really pulled it off with a single, very simple, interface to manage everything without a command line in sight. It's not particularly pretty: simple dials at the top show how resources are being consumed across the cluster, with tiles below giving more detailed access to individual nodes and guest VMs. There are very few buttons to press or menus to navigate, but it does the job, and the essential tools are all there and can be mastered in minutes.
To create a new VM, for example, you just give it a name and tell the software how many processing cores to assign, the amount of RAM required and how big to make the thin-provisioned virtual disks it needs. You can also boot a new VM from a selected ISO image and have a working system up and running in a matter of minutes.
By default, new VMs are placed on the first available node with enough cores and memory, but you can move them to another node to better balance workloads, and do this while they are running with no downtime. Snapshots of active VMs can also be taken in seconds with a scheduling facility to automate the process; you can also clone a new VM from a snapshot to quickly recover from malware attacks, deleted files and so on.
Add a second cluster and VMs can be automatically replicated, ready to be booted up should the primary cluster have a problem. Our test site had installed a 3-node HC1100 for this purpose, located in another building from the production cluster, but you can even get away with no extra hardware and replicate VMs to a dedicated disaster recovery cluster managed by Scale Computing itself. This DR service costs around $100/month per VM.
Management of the replication and recovery processes is very straightforward, just like everything to do with the Scale Computing platform. But when things do go wrong, the support team at Scale can connect remotely and sort problems for you. Likewise they can upgrade the operating system remotely and reboot individual nodes without affecting the availability of the cluster or any hosted VMs.
What is it good for?
The company we visited had justified the purchase of the HC3 platform on the back of a rewrite of its ERP system, but was also using it for applications such as database hosting and general file and print services. Other applications include VDI and private cloud, with enough scalability to handle large production workloads and customer-facing applications.
Of course compromises have, inevitably, been made in order to deliver all this in a way that's easy to manage. Customers wanting to fine-tune and micro-manage their infrastructure will need to look elsewhere, as will those looking for automatic balancing of workloads, as this isn't an option. The fact that SCRIBE makes only half of the raw storage capacity available to VMs could also be seen as an issue, but it's the price you pay for the high level of availability delivered by the HC3 virtualisation platform. This was illustrated by the IT manager at the test site, who admitted having accidentally powered down one of the production nodes with nobody noticing.
On the plus side, what little management needs to be done can be achieved with minimal effort from the web console and with no specialist training required. Tools to help import VMs from other platforms are also readily available and, unlike with VMware, there are no licensing costs associated with the KVM-based hypervisor.
Support does need to be factored in, but is competitively priced and complements this well-conceived hyperconverged platform, making for a great all-round SME solution.