In today’s data-intensive environments, with their myriad business-critical applications, it has become all but a mandate for organizations to expand their virtualization capabilities in an effort to drive
increased data center efficiencies. Handily, NexGen Storage offers shared storage solutions that will help propel these efficiencies.
The company, which just emerged from stealth last month, combines PCIe solid state with a variety of other technologies to deliver a storage system that overcomes both the performance issues inherent in shared storage and the near-prohibitive cost of disk sprawl.
The upshot? A reduction in storage operating expenses by up to 90 percent for large virtual environments.
I spoke recently with Chris McCall, NexGen Storage’s vice president of marketing, about how NexGen’s new storage system is allowing customers to breathe a sigh of relief when it comes to extending virtualization to business-critical apps.
Kim Borg: NexGen’s n5 Storage System claims to yield the industry’s highest virtual machine storage density, resulting in up to 90% storage operating expense reduction. How is this achieved? More specifically, you use PCIe solid state, obviously, but can you touch on how your patent-pending Phased Data Reduction and Dynamic Data Placement allow for such a notable opex reduction?
Chris McCall: If you ask most customers how they’ve implemented storage for their virtual environments, it’s almost always with 15K RPM spinning disk drives, regardless of what system they have. They are the fastest disk drives and are typically used to support virtual environments because of the random workload virtual environments create.
The issue is that, as you scale and add more VMs to that virtual environment, it increases the workload and requires more disk drives on the backend to support performance. Companies like HP’s LeftHand and Dell’s EqualLogic did a great job of allowing people to scale the storage backend very simply with scale-out.
Essentially, you end up with lots and lots of disk drives to meet the performance requirement, but because today’s disk drives have so much capacity, you’ll typically end up with a lot of excess capacity. So now you’ve got lots of spinning disks for performance and way more capacity than you need.
Enter solid state, which is supposed to solve that problem, right? You can have a system with solid state disk drives that reduces the performance footprint and that works fantastic. The issue moves to capacity. Here’s an example: If you size by performance from a solid state perspective, you only need a couple of drives, but now you’re way short on capacity. It’s just the converse of the first issue I described. So that leads to sprawl in both cases. Either you’ve got the performance that you need but too much capacity, or with solid state, you’ve got the capacity that you need, but way too much performance.
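To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch in Python. The per-drive IOPS and capacity figures are illustrative assumptions, not NexGen’s numbers; the point is simply that sizing by one dimension strands the other.

```python
# Hypothetical sizing sketch: all figures are illustrative assumptions.
# It shows why sizing shared storage by either performance or capacity
# alone leaves the other dimension stranded.

def drives_needed(required_iops, required_tb, iops_per_drive, tb_per_drive):
    """Return the drive counts when sized by performance vs. by capacity."""
    by_performance = -(-required_iops // iops_per_drive)   # ceiling division
    by_capacity = -(-required_tb // tb_per_drive)
    return by_performance, by_capacity

# A virtual environment needing 20,000 random IOPS and 20 TB of capacity.
workload_iops, workload_tb = 20_000, 20

# Assumed media characteristics (rough, for illustration only).
hdd_15k = dict(iops_per_drive=180, tb_per_drive=1)      # 15K RPM disk drive
ssd     = dict(iops_per_drive=20_000, tb_per_drive=1)   # solid state drive

for name, media in (("15K HDD", hdd_15k), ("SSD", ssd)):
    perf, cap = drives_needed(workload_iops, workload_tb, **media)
    print(f"{name}: {perf} drives to meet IOPS, {cap} drives to meet capacity "
          f"-> buy {max(perf, cap)} and strand the other dimension")
```

With these assumed numbers, the 15K disk build needs over a hundred spindles for performance but only twenty for capacity, while the solid state build needs one drive for performance but twenty for capacity: sprawl in both cases, exactly as described above.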
At NexGen, we’ve combined a couple of different technologies to drive down the cost of storage for virtualized environments. We recognize that if you really want to reduce storage operating expense, you have to deliver a smaller footprint – and drive down both the capacity footprint and the performance footprint simultaneously. You can’t just focus on one and not the other; you have to focus on both at the same time.
NexGen combines a couple of different technologies within our system to do just that. The first one is PCIe-based solid state. NexGen deploys solid state on the PCIe bus, so we get the full performance from those solid state drives. That gives us more performance with less capacity: we can provide the same level of performance as hundreds to thousands of disk drives while consuming zero footprint. Because it’s on the PCIe bus, it’s not taking up a drive slot, which leaves lots more room for disk drives, and disk drives are very good at delivering low-cost capacity.
In contrast, most other solid state solutions in the market take solid state, put it in the disk drive, plug it into a drive slot and then manage it via controller. This type of deployment creates bottlenecks and it doesn’t allow solid state to go as fast as it’s capable of.
That’s the performance footprint angle. We also need to reduce capacity footprint at the same time, so what we’ve done is design a next generation data deduplication technology that we call Phased Data Reduction. Data deduplication has traditionally been used in the backup world; it doesn’t work very well for primary storage because of the performance impacts.
The NexGen system breaks the dedupe process into four different phases so that it doesn’t impact front-end performance at all. NexGen deduplicates data on the least expensive storage media, disk drives, which reduces the overall system dollar per gigabyte as well as the capacity footprint.
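As a rough illustration of deferring dedupe work out of the I/O path, here is a minimal post-process sketch. The phase boundaries, class names, and data structures are assumptions for illustration; they are not a description of NexGen’s patent-pending Phased Data Reduction internals.

```python
import hashlib

class DeferredDedupe:
    """Writes land immediately; duplicate elimination happens in later,
    background phases so foreground I/O never waits on hashing."""

    def __init__(self):
        self.landing = {}     # block_id -> raw bytes, not yet deduplicated
        self.stored = {}      # digest -> the single stored copy
        self.block_map = {}   # block_id -> digest of its stored copy

    def write(self, block_id, data):
        # Front-end phase: acknowledge and land the write; no dedupe work here.
        self.landing[block_id] = data

    def dedupe_pass(self):
        # Background phase: fingerprint landed blocks and collapse duplicates
        # onto one copy kept on the low-cost disk tier.
        for block_id, data in list(self.landing.items()):
            digest = hashlib.sha256(data).hexdigest()
            self.stored.setdefault(digest, data)
            self.block_map[block_id] = digest
            del self.landing[block_id]

d = DeferredDedupe()
d.write("vm1-blk7", b"same contents")
d.write("vm2-blk3", b"same contents")
d.dedupe_pass()
print(len(d.stored))   # 1 physical copy backs both logical blocks
```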
With both a reduced performance footprint with PCIe solid state and a reduced capacity footprint with Phased Data Reduction, we can achieve between a 10-to-1 and a 25-to-1 disk drive consolidation ratio, depending on the workload. As an example, if you have 25 disk drives in your environment, you can eliminate 24 of those with one NexGen disk drive. NexGen delivers significant reductions in storage operating expense because we are able to provide more capacity and more performance in a smaller footprint with those two technologies.
KB: This is the industry’s first mid-range SAN to deliver complete control over shared storage performance so that users can virtualize business-critical apps. Would you please expand a bit on how this complete control is attained?
CM: There are three elements to control that don’t exist in the industry today. The first one is the ability to provision performance, just like you provision capacity. When it comes to managing capacity, customers can carve up specific capacity for each volume and know exactly how much they have and exactly how much has been allocated: the system tracks it so you know when you’re going to run out. There hasn’t been anything like that in the industry from a performance perspective, until now. NexGen provides end users the ability to provision performance just like capacity, so you know exactly how much you’ve provisioned out and how much is left before you start running out of performance resources.
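Here is a minimal sketch of what provisioning performance the way you provision capacity might look like, assuming an IOPS pool with per-volume allocations. The class and method names are hypothetical, not NexGen’s management interface.

```python
class PerformancePool:
    """Track provisioned IOPS per volume against a system-wide pool,
    the same way capacity allocations are tracked."""

    def __init__(self, total_iops):
        self.total_iops = total_iops
        self.allocations = {}   # volume name -> provisioned IOPS

    def provision(self, volume, iops):
        if iops > self.remaining():
            raise ValueError(f"only {self.remaining()} IOPS left in the pool")
        self.allocations[volume] = iops

    def remaining(self):
        return self.total_iops - sum(self.allocations.values())

pool = PerformancePool(total_iops=100_000)
pool.provision("sql-prod", 40_000)
pool.provision("vdi", 25_000)
print(pool.remaining())   # 35000 -> you know exactly how much is left
```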
The second key point is that, once you provision performance, how do you maintain those performance levels over time? That’s where Dynamic Data Placement comes into play. A lot of companies out there are talking about tiering, about moving data between higher performance and lower performance tiers. They do this with after-the-fact batch jobs: at midnight, for example, a job will kick off and move things around.
What NexGen is doing with Dynamic Data Placement is placing data at a much more granular level and we’re doing it in real time, so that as your performance workload changes, we’re automatically moving data around between storage tiers to ensure you’re hitting your quality of service levels at all times within the system. Instead of doing after-the-fact batch process jobs, we’re doing real time data placement in both solid state and disk drives and it’s being driven by quality of service settings used to provision the performance up front. So we’ve simplified and automated that entire process.
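Conceptually, that amounts to continuously re-ranking data by how hot it is and keeping the hottest blocks on solid state, weighted by each volume’s QoS settings. The sketch below is a self-contained illustration with assumed thresholds and granularity, not the actual Dynamic Data Placement logic.

```python
# Minimal sketch of heat-driven tiering re-evaluated continuously rather
# than in a nightly batch job. Capacity and heat values are assumptions.

SSD_CAPACITY_BLOCKS = 4   # tiny numbers so the example is easy to follow

def place(blocks, ssd_capacity=SSD_CAPACITY_BLOCKS):
    """blocks: dict of block_id -> recent I/O count (heat).
    Returns (ssd_set, disk_set): hottest blocks go to solid state,
    the rest stay on low-cost disk capacity."""
    ranked = sorted(blocks, key=blocks.get, reverse=True)
    return set(ranked[:ssd_capacity]), set(ranked[ssd_capacity:])

heat = {"b1": 900, "b2": 12, "b3": 450, "b4": 3, "b5": 720, "b6": 88}
ssd, disk = place(heat)
print("SSD tier:", ssd)    # the four hottest blocks
print("Disk tier:", disk)  # cold blocks stay on disk
```

In a QoS-driven system the ranking would also be weighted by each volume’s provisioned service level, so a block belonging to a volume that is missing its target gets promoted ahead of an equally hot block from a volume that is comfortably meeting it.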
The third key point involves a capability we call Performance Service Levels. It’s easy to talk about storage in steady-state operation, but things happen: disk drives fail and have to be rebuilt, controller software has to be upgraded. NexGen’s Performance Service Levels allow the end user to tell the system which volumes’ performance is most important during those degraded modes of operation.
For instance, let’s say you have to upgrade controller firmware. If you have an active/active system, that means you have to take half of the box offline, do an upgrade on that half and then do the upgrade on the other half. Well, while you’re doing that, you only have access to half of the performance of the system.
So, what we’re doing is saying that during those down periods, our mission-critical data will maintain its performance service levels and there won’t be any impact to its performance. The volumes that you’ve categorized as non-critical are the ones that take the brunt of the impact of that downtime. But that’s okay, because you’ve categorized them as non-critical.
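One way to picture that policy: mission-critical volumes keep their full provisioned performance during the degraded window, and whatever is left is split among the non-critical volumes. The level names and the proportional-throttling rule in this sketch are assumptions for illustration, not NexGen’s actual algorithm.

```python
def degraded_allocation(volumes, available_iops):
    """volumes: list of (name, provisioned_iops, critical).
    Critical volumes keep their full provisioned performance;
    non-critical volumes split whatever is left, pro rata."""
    critical = [v for v in volumes if v[2]]
    noncritical = [v for v in volumes if not v[2]]
    allocation = {name: iops for name, iops, _ in critical}
    leftover = max(available_iops - sum(allocation.values()), 0)
    demand = sum(iops for _, iops, _ in noncritical) or 1
    for name, iops, _ in noncritical:
        allocation[name] = int(leftover * iops / demand)
    return allocation

# Half the system offline for a firmware upgrade: only 50,000 IOPS available.
vols = [("sql-prod", 40_000, True), ("vdi", 25_000, False), ("test", 15_000, False)]
print(degraded_allocation(vols, available_iops=50_000))
# sql-prod keeps 40,000; vdi and test absorb the impact of the downtime
```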
KB: So Dynamic Data Placement basically allows for data migration in real time to meet quality of service, and the service levels prioritize those QoS settings?
CM: That’s exactly right. QoS settings allow us to provision performance just like capacity. Dynamic Data Placement migrates data in real time to ensure that we’re hitting those QoS levels and then service levels allow you to prioritize the different volumes within the system so that you can maintain them during degraded mode operation.
KB: Would you please talk a bit about your patent-pending DNA (Deterministic Tier Architecture), with which you’re taking advantage of the best characteristics of multiple storage types?
CM: We want to provide the customer with the ability to take advantage of the best characteristics of any type of storage media. What we don’t want to do is design a system specifically for solid state because it only solves the performance problem. It’s not very good at delivering inexpensive dollar per gigabyte, so that’s why users want disk drives in the system. That’s the media that’s best at delivering the lowest dollar per gigabyte. And who knows what’s going to happen in the future, right? There could be a new type of storage media that’s less expensive than disk drive or a new type of storage media that’s faster than today’s solid state.
Regardless, we needed to design a system that gives the end user the best of all worlds so we could combine the best of all characteristics of different types of storage media. So when we say “N-tier”, the system is architected to handle any number of any type of storage media in any combination, ultimately delivering the highest value from a storage standpoint. The glue that makes this work, of course, is our Dynamic Data Placement, which moves things around efficiently to hit the required QoS levels. That’s what ties everything together and of course it’s core to the patents that we’ve filed and which are pending.
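One way to read “N-tier” is that each tier is described purely by its cost and performance characteristics, so a new media type is just another entry for the placement engine to reason about. The field names and figures in this sketch are assumptions, not NexGen’s internal model.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    iops: int              # aggregate performance the tier can deliver
    usable_tb: float       # capacity the tier contributes
    dollars_per_tb: float  # what the tier is best at economically

tiers = [
    Tier("pcie-ssd", iops=500_000, usable_tb=1.5, dollars_per_tb=5000),
    Tier("nearline-disk", iops=3_000, usable_tb=48, dollars_per_tb=80),
    # A future media type would simply be another entry here; the placement
    # engine decides what data lives where to hit the QoS targets.
]

cheapest = min(tiers, key=lambda t: t.dollars_per_tb)
fastest = max(tiers, key=lambda t: t.iops)
print(f"capacity tier: {cheapest.name}, performance tier: {fastest.name}")
```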
KB: Who would you classify as your most worthy competitors? Which companies do you have on your radar?
CM: The top competitors on our radar are EMC and NetApp. They have the biggest market share and we expect to run into them much more so than many of the start-ups out there, primarily because of their sheer market size. We’re focused on our differentiation versus EMC and NetApp and how they’ve tried to implement solid state. We think we have a much better, more efficient approach.