SolidFire with CloudStack

Aside from traditional storage solutions, CloudStack has supported managed storage for some time. In this article, we will touch on SolidFire support in CloudStack 4.13 and lay out the exact steps needed to add SolidFire to CloudStack as Primary Storage (for VMware, KVM and XenServer). We will also explain the difference between the “SolidFire” and “SolidFireShared” plugins and discuss their use cases.

There will be a follow-up article covering the different feature sets that different hypervisors have when it comes to using SolidFire as Primary Storage, and we’ll also examine the way things work under the hood.

SolidFire 101

SolidFire has been around for many years, and the fact that it was acquired by NetApp (in early 2016) speaks for itself. SolidFire is an iSCSI-based, all-flash, distributed SAN solution, providing granular QoS on a per-LUN basis. A minimal cluster consists of 4 nodes, and newer generations of SolidFire models are able to provide 100,000 IOPS per single node. That means up to 400,000 IOPS per 4-node SolidFire cluster in just 4U of rack space (all IOPS figures assume a 4K IO size).

Three models of SolidFire node are currently available (all rated at 100,000 IOPS per node); they differ in SSD capacity and the amount of system memory / read cache. For more info on the different node models available, please visit https://www.netapp.com/us/products/storage-systems/all-flash-array/solidfire-scale-out.aspx.

Importantly, SolidFire supports mixing and matching nodes. So, if you are short on space you can add bigger nodes to your cluster, whilst if you are short on IOPS you can expand your cluster with smaller nodes (every node contributes the same IOPS capacity). As a distributed SAN, it has the advantage of scaling very well.

A great aspect of SolidFire is its granular, per-volume QoS. For each volume (LUN) created on the cluster, you can set its minimum, maximum and burst IOPS values / limits. Let’s briefly explain this:

  • Min IOPS: defines a guaranteed IOPS performance in normal conditions and in most failure / expansion scenarios. This means that having a dead SSD / node, or expanding the cluster with additional nodes (with data being redistributed) will not influence a client’s IOPS as the iSCSI client will always be able to reach its Min IOPS for a given volume.
  • Max IOPS: defines the maximum sustained IOPS performance for a volume. This means that if the client is (e.g.) benchmarking, the sustained IOPS numbers will be equal to the volume’s Max IOPS.
  • Burst IOPS: defines the allowed burst IOPS performance for a volume / LUN. This is very useful for VM reboots, DB backups and similar scenarios which require short IO bursts. A volume accrues 1 second of burst credit (up to a maximum of 60 seconds) for every second that the volume runs below its Max IOPS limit.
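
To illustrate the burst mechanics with some hypothetical numbers: take a volume with a Max IOPS of 2,000 and a Burst IOPS of 5,000. If that volume runs below 2,000 IOPS for 60 seconds, it accrues the maximum 60 seconds of burst credit; a subsequent spike (say, a VM reboot) can then be served at up to 5,000 IOPS until the credit is spent, after which the volume is throttled back to its Max IOPS.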

Regarding Max and Burst IOPS limits – they are just limits. It’s not guaranteed that the volume / LUN can achieve those numbers if your cluster is very busy. Those limits will be reached (when required by client / application) if the cluster has enough “unused” IOPS, as in the following example:

  • If your cluster has 400,000 IOPS of capacity but is only using 250,000, that leaves 150,000 to be consumed across the cluster, meaning that a single volume (if so configured) may theoretically achieve up to 150,000 IOPS

For volume QoS limits, it’s advisable to follow the user guide for the version of Element Software (formerly Element OS) that you are running on your nodes. Currently, those limits (Element Software v11.3) are as follows:

  • Min IOPS per volume: cannot exceed 15,000
  • Max IOPS per volume: cannot exceed 200,000

For other limits, please consult the Element Software User Guide.

SolidFire Plugins for CloudStack

There are two plugins: “SolidFire” and “SolidFireShared”.

SolidFire 1:1 plugin

The “SolidFire” plugin (referred to here as “SolidFire 1:1”) provides a 1:1 mapping between a CloudStack volume and a SolidFire volume (LUN), and the QoS you want for that specific CloudStack volume is configured on the SolidFire volume (LUN).

For each CloudStack volume created, it will do the following:

  • For VMware, create a dedicated VMware Datastore for each CloudStack volume.
  • For XenServer, create a dedicated XenServer SR for each CloudStack volume.
  • For KVM, create a “dedicated” iSCSI session for each CloudStack volume on a KVM host, effectively passing the iSCSI LUN (SolidFire volume) through to a VM.

The main benefit of this plugin is that for each CloudStack volume you can set QoS, as defined via Compute / Disk Offerings in the Storage QoS section. The plugin will take the “Min IOPS” and “Max IOPS” settings (Burst IOPS is preconfigured as a multiplier of the Max IOPS) and send those values to the SolidFire cluster’s API, so that they are set on the SolidFire volume / LUN. This way, CloudStack (via the plugin) manages the volumes on the SolidFire cluster – thus the name “Managed Storage”.

The downside of this plugin is that the number of Datastores (VMware) and SRs (XenServer) is limited to a relatively low value (native hypervisor limitations):

  • VMware 6.5 – maximum of 512 datastores per cluster (hard limit)
  • XenServer 6.x-8.0 – soft limit of 256 SRs (users have tested up to 500-600 SRs, but the time to mount new SRs becomes considerably higher with that many SRs as well as the time to reboot a host)
  • No particular limits for KVM

This means that for VMware and XenServer you cannot have more than ~500 volumes per cluster; but since volumes are stored on a datastore / SR, you can create VM snapshots. For KVM it’s not possible to create VM snapshots, since the iSCSI LUN is passed through to the VM, so there are no QCOW2 files in play – and KVM VM snapshots are only possible with QCOW2 files (i.e. not possible with any RAW block storage).

SolidFireShared plugin

The “SolidFireShared” plugin provides a many:1 mapping, i.e. many CloudStack volumes on a single SolidFire volume. This provides an alternative way to organize CloudStack volumes on SolidFire-based Primary Storage and partially solves the scalability issues that exist when using the SolidFire 1:1 plugin (explained in the previous section). This plugin only supports VMware and XenServer.

Adding Primary Storage to CloudStack using the SolidFireShared plugin will result in the following:

  • For VMware, a new datastore being created immediately, formatted with VMFS5 and mounted on all ESXi hosts in the cluster.
  • For XenServer, a new SR being created immediately, using LVM (lvmoiscsi) and attached to all XenServers in a pool / cluster.
  • All volumes will be placed on this shared LUN (datastore/SR).

With this plugin you can have a single datastore / SR for many CloudStack volumes and thus the number of volumes can be greater than the ~500 volumes with the SolidFire 1:1 plugin. However, with this setup, QoS is defined per whole datastore / SR, not per single CloudStack volume.

The SolidFireShared plugin requires that the Primary Storage be added as cluster-wide, i.e. zone-wide Primary Storage is not supported with this plugin (nor would it make much sense due to the native hypervisor limits).

As you can guess, you could do this setup manually (without using the SolidFireShared plugin); however, the plugin automates these steps, making the process less error-prone than doing it by hand.

If doing everything manually, the steps are as follows (the first 4 steps are done via the SolidFire UI or API – a sketch of them follows after this list):

  • Create an Account (linked to your CloudStack installation).
  • Create a list of allowed iSCSI initiators, which means all of your hosts in the specific cluster (get the initiator IQNs from your hypervisor hosts).
  • Create a large enough SolidFire Volume with the desired QoS.
  • Create an Access Group, adding all previously created Initiators and the Volume to it.
  • Add an iSCSI-based Datastore / SR to VMware / XenServer via vCenter / XenCenter.
  • Add new Primary Storage in CloudStack; for VMware use “VMFS” as the protocol and specify the previously created datastore name; for XenServer use “PreSetup” as the protocol and specify the previously created SR.
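
For reference, here is a minimal sketch of the SolidFire-side steps using the Element JSON-RPC API. All names, IDs and IQNs below are hypothetical – the accountID and volume ID used in the later calls must be taken from the actual API responses:

# Step 1: create an account linked to your CloudStack installation
curl -sk -u admin:password -H "Content-Type: application/json" https://10.10.10.10/json-rpc/11.3 -d '{"method":"AddAccount","params":{"username":"cloudstack"},"id":1}'

# Step 3: create a large enough volume with the desired QoS (totalSize is in bytes)
curl -sk -u admin:password -H "Content-Type: application/json" https://10.10.10.10/json-rpc/11.3 -d '{"method":"CreateVolume","params":{"name":"cs-shared","accountID":1,"totalSize":1099511627776,"enable512e":true,"qos":{"minIOPS":15000,"maxIOPS":100000,"burstIOPS":100000}},"id":2}'

# Steps 2 and 4: the allowed initiators can be passed directly when creating the access group
curl -sk -u admin:password -H "Content-Type: application/json" https://10.10.10.10/json-rpc/11.3 -d '{"method":"CreateVolumeAccessGroup","params":{"name":"cs-cluster1","initiators":["iqn.1998-01.com.vmware:esxi1-1a2b3c4d"],"volumes":[1]},"id":3}'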

VMware setup

Before heading out to the CloudStack GUI and adding SolidFire / SolidFireShared-based Primary Storage, make sure that you:

  • Have an iSCSI Software adapter enabled on all ESXi hosts in the cluster.
  • Have done proper network binding of the iSCSI adapter to the correct vSwitch, so that your ESXi hosts have an IP in the same VLAN as the SolidFire SVIP (Storage VIP); see the esxcli sketch below.
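
A minimal sketch of these two prerequisites from the ESXi shell – the vmk and vmhba names are examples and will differ in your environment:

# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# Bind the VMkernel port that lives on the SolidFire storage VLAN to the iSCSI adapter
esxcli iscsi networkportal add --nic=vmk1 --adapter=vmhba64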

Adding SolidFire 1:1-based Primary Storage

If adding zone-wide storage, set the hypervisor=Any parameter (this is required for all hypervisor types).

CloudMonkey command to add zone-wide Primary Storage:


create StoragePool scope=zone zoneid=af61811f-3ca6-4927-ab0d-5bb6d693e3e7 hypervisor=Any name=SF121zonewide provider=SolidFire managed=true capacityBytes=107374182400 capacityIops=10000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;clusterDefaultMinIops=1000;
clusterDefaultMaxIops=2000;clusterDefaultBurstIopsPercentOfMaxIops=2" tags=SF121ZONE

(NOTE: due to a very long URL parameter value, we have broken the URL value across multiple lines for readability – otherwise it should be a single line with no spaces. The same applies to all similar examples in this article.)

For cluster-wide Primary Storage, syntax is slightly different:


create StoragePool scope=cluster zoneid=af61811f-3ca6-4927-ab0d-5bb6d693e3e7 podid=954065ed-a173-4c52-9f6f-062cd9b17ddb clusterid=72750371-a6ce-4d97-b567-1a9aefc416f8 name=SF121clusterwide provider=SolidFire managed=true capacityBytes=107374182400 capacityIops=10000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;clusterDefaultMinIops=1000;
clusterDefaultMaxIops=2000;clusterDefaultBurstIopsPercentOfMaxIops=2;datacenter=Trillian" tags=SF121cluster

Most of the parameters are self-explanatory, but some do need an additional explanation:

  • The “capacityBytes” parameter is the logical / virtual size you want to deliver to CloudStack from the SolidFire cluster. The sum of the volumes, snapshots and templates that reside on this Primary Storage cannot exceed “capacityBytes”. Since SolidFire performs compression and deduplication, as well as leveraging thin provisioning, the actual space used is usually much lower than the sum of these virtual sizes.
  • “capacityIops”, in similar fashion, defines the total IOPS capacity that can be consumed on the SolidFire side – i.e. the sum of the Min IOPS (min_iops, as visible in the cloud.volumes table in the CloudStack DB) across all volumes created in CloudStack cannot exceed this value (see the query sketch after this list).
  • “MVIP” and “SVIP” are the Management VIP and the Storage VIP of the SolidFire cluster, respectively.
  • “clusterDefaultMinIops” and “clusterDefaultMaxIops” are the default values that a CloudStack volume will get if no Min IOPS and Max IOPS values were specified in the Compute / Disk offering.
  • “clusterDefaultBurstIopsPercentOfMaxIops” defines the Burst IOPS as a decimal multiplier of the Max IOPS (e.g. a value of 2 with a Max IOPS of 2,000 results in a Burst IOPS of 4,000).
  • “datacenter” needs to point to the specific VMware datacenter; storage tags are optional.
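
For example, to check how much of that IOPS capacity is already committed on a given Primary Storage, you can sum min_iops in the CloudStack database (a sketch only – the pool ID is hypothetical, and the schema may vary slightly between CloudStack versions):

mysql -u cloud -p cloud -e "SELECT SUM(min_iops) FROM volumes WHERE pool_id = 1 AND removed IS NULL;"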

We have shared the API calls in the above examples, but you can also use the GUI.

When creating Compute / Disk offerings, make sure to select “storage” as the type of QoS. You can also set Min and Max IOPS for the volume here – these values will be taken from the offering and passed to the SolidFire API (via the plugin), so that the desired QoS is set on the SolidFire volume / LUN.
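
As a minimal CloudMonkey sketch of such a Disk offering (the name, size and IOPS values are ours – adjust them to your environment):

create DiskOffering name=SF-100GB displaytext="100GB SolidFire volume with storage QoS" disksize=100 storagetype=shared miniops=1000 maxiops=2000 hypervisorsnapshotreserve=200 tags=SF121ZONE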

NOTE: For both VMware and XenServer, you should set the “Hypervisor Snapshot Reserve” value (expressed as a percentage of the volume size). In the example above, 200% is set for a 100GB volume, so the datastore (SolidFire volume / LUN) will be 300GB. If we didn’t set a value for this setting, the datastore would be created with the same size as the volume (100GB in this example) and taking VM snapshots would be impossible, since there would be no free space on the datastore. Since all SolidFire volumes are thinly provisioned, it makes zero difference to the actual space consumption on the SolidFire cluster whether the datastore is 100GB or 1TB, so make sure to take that into account.

Adding SolidFireShared-based Primary Storage

As previously mentioned, Primary Storage based on the SolidFireShared plugin can only be cluster-wide, so there are no variations regarding the scope parameter when it comes to the API call:


create StoragePool scope=cluster zoneid=af61811f-3ca6-4927-ab0d-5bb6d693e3e7 podid=954065ed-a173-4c52-9f6f-062cd9b17ddb clusterid=72750371-a6ce-4d97-b567-1a9aefc416f8 name=SFSHARED provider=SolidFireShared managed=false capacityBytes=107374182400 capacityIops=15000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;minIops=15000;
maxIops=100000;burstIops=100000;datacenter=Trillian" tags=SFSHARED

Note the slightly different URL syntax than the one used with the SolidFire 1:1 plugin.

Some of the chosen parameters need an explanation:

  • “capacityBytes” is the size of the SolidFire volume / LUN, so make it a big number.
  • “capacityIops” needs to be the same value as the “minIops” (part of the “url” section), and as already mentioned, a single SolidFire volume cannot have more than 15,000 for its Min IOPS.
  • “maxIops” and “burstIops” may not exceed 100,000 IOPS, but you can later set these values up to the volume’s limits (200,000 IOPS currently) in the SolidFire UI.

When choosing the value for “capacityBytes” (which translates to the size of the datastore), make sure to consider any additional size needed for VM / volume snapshots.

Once you have added SolidFireShared-based Primary Storage, you’ll need to create Compute / Disk offerings as usual, but this time without defining QoS at the Storage level in the Compute / Disk offerings (as we are not managing QoS on SolidFire any further, besides setting it initially during the creation of the Primary Storage). It’s also not necessary to define the “Hypervisor Snapshot Reserve” value, since this parameter is only consumed by the SolidFire 1:1 plugin when creating a datastore for each volume. These settings apply to both VMware and XenServer.
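
A minimal CloudMonkey sketch of such an offering (again, the name and size are ours):

create DiskOffering name=SFSHARED-100GB displaytext="100GB on the shared SolidFire datastore" disksize=100 storagetype=shared tags=SFSHARED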

XenServer setup

Before trying to add SolidFire to CloudStack, make sure that you have configured your XenServers’ networks in such a way that they can access the SVIP of the SolidFire cluster. That usually means creating an additional network on the storage VLAN and creating an IP address on that network.
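
As a rough sketch of those steps via the xe CLI (the UUID placeholders, VLAN ID and IP addressing are hypothetical):

# Create a network and a VLAN on top of the physical interface carrying storage traffic
xe network-create name-label=sf-storage
xe vlan-create network-uuid=<network-uuid> pif-uuid=<physical-pif-uuid> vlan=254
# Assign an IP in the SVIP subnet to the resulting PIF, on each host
xe pif-reconfigure-ip uuid=<vlan-pif-uuid> mode=static IP=10.254.10.21 netmask=255.255.255.0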

Once your XenServer hosts can communicate with the SVIP of the SolidFire cluster, you are ready to add a new SolidFire Primary Storage.

Adding SolidFire 1:1-based Primary Storage

As already stated in the VMware setup guide, make sure to set the hypervisor=Any parameter in your API call when creating zone-wide Primary Storage. The syntax is pretty much the same as for VMware.

CloudMonkey command to add zone-wide Primary Storage:


create StoragePool scope=zone zoneid=d2e2da70-204c-42b3-84d1-07917a2383a7 hypervisor=Any name=SF121zonewide provider=SolidFire managed=true capacityBytes=107374182400 capacityIops=10000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;clusterDefaultMinIops=1000;
clusterDefaultMaxIops=2000;clusterDefaultBurstIopsPercentOfMaxIops=2" tags=SF121ZONE

For cluster-wide Primary Storage, the syntax is slightly different – the difference from the VMware setup is the absence of the “datacenter” parameter in the URL:


create StoragePool scope=cluster zoneid=d2e2da70-204c-42b3-84d1-07917a2383a7 podid=711b8d51-8f67-4b89-8e68-7d7a28a013b0 clusterid=b98afe80-9614-48b3-aba1-b79624086bb9 name=SF121clusterwide provider=SolidFire managed=true capacityBytes=107374182400 capacityIops=10000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;clusterDefaultMinIops=1000;
clusterDefaultMaxIops=2000;clusterDefaultBurstIopsPercentOfMaxIops=2" tags=SF121cluster

If some of the parameters used in the API call are unclear, please check the VMware setup guide above, where you’ll find detailed explanations for each parameter.

For the Compute / Disk offering parameters that are needed specifically when using the SolidFire 1:1 plugin, please also see the corresponding section in the VMware setup guide – same “rules” apply here.

Adding SolidFireShared-based Primary Storage

Again, Primary Storage based on the SolidFireShared plugin can only be cluster-wide, so there are no variations when it comes to the scope parameter of the API call – the same syntax as with VMware, except that we skip the “datacenter” parameter in the URL:


create StoragePool scope=cluster zoneid=d2e2da70-204c-42b3-84d1-07917a2383a7 podid=711b8d51-8f67-4b89-8e68-7d7a28a013b0 clusterid=b98afe80-9614-48b3-aba1-b79624086bb9 name=SFSHARED provider=SolidFireShared managed=false capacityBytes=107374182400 capacityIops=15000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;minIops=15000;
maxIops=100000;burstIops=100000" tags=SFSHARED

For an explanation of the important parameters, as well as the notes on Compute / Disk offerings, please see the VMware setup for the SolidFireShared plugin above, which covers those in detail.

KVM setup

KVM, being a pretty much “unmanaged” hypervisor, is a bit different in terms of what you can do with it, and it’s much easier to make low-level changes as required. In that sense, the SolidFire 1:1 plugin works perfectly well and thus there is no need for SolidFireShared plugin support – though you can always do the big-shared-iSCSI-LUN-with-clustered-(God-forbid)-file-system yourself if you really want to.

Adding SolidFire 1:1-based Primary Storage

Before trying to add SolidFire-based Primary Storage, make sure to do the following:

  • Attach the proper storage VLAN with an IP address to all KVM hosts, so that the SolidFire SVIP is reachable.
  • Install an iSCSI initiator on all KVM hosts with yum install iscsi-initiator-utils or apt-get install open-iscsi. This will create the file /etc/iscsi/initiatorname.iscsi, which contains the IQN of the host (this IQN is later recorded in the “url” field of the cloud.host table – this happens with all hypervisors). A quick way to verify things is sketched below.
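
Once the initiator is installed, you can verify the host’s IQN and check connectivity to the SVIP (the discovery below may return no targets until the host has been granted access on the SolidFire side, but it confirms network reachability; CloudStack will manage the actual iSCSI sessions):

# The IQN that CloudStack will record for this host
cat /etc/iscsi/initiatorname.iscsi
# Optional: check that the SolidFire SVIP is reachable on the iSCSI port
iscsiadm -m discovery -t sendtargets -p 10.254.10.10:3260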

You can set up both zone-wide and cluster-wide Primary Storage, as with VMware and XenServer. The “hypervisor” parameter should still be set to “Any” (the plugin will not complain if you set hypervisor=KVM, but it will still set the value to “Any” internally in the database):

CloudMonkey command to create zone-wide Primary Storage:


create StoragePool scope=zone zoneid=06938de4-0a5b-46f9-bbe7-5a264f43d4eb hypervisor=Any name=SF121zonewide provider=SolidFire managed=true capacityBytes=107374182400 capacityIops=10000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;clusterDefaultMinIops=1000;
clusterDefaultMaxIops=2000;clusterDefaultBurstIopsPercentOfMaxIops=2" tags=SF121ZONE

Again, the syntax for cluster-wide Primary Storage is slightly different – but otherwise identical to XenServer syntax:


create StoragePool scope=cluster zoneid=06938de4-0a5b-46f9-bbe7-5a264f43d4eb podid=62717320-3fc4-4c53-9345-c53eba516710 clusterid=79064c12-659a-4886-8c4d-5ee38c842a0f name=SF121clusterwide provider=SolidFire managed=true capacityBytes=107374182400 capacityIops=10000 url="MVIP=10.10.10.10;SVIP=10.254.10.10;clusterAdminUsername=admin;
clusterAdminPassword=password;clusterDefaultMinIops=1000;
clusterDefaultMaxIops=2000;clusterDefaultBurstIopsPercentOfMaxIops=2" tags=SF121cluster

If some of the parameters used in the API call are unclear, please check the VMware setup guide, where you’ll find detailed explanations for each important parameter.

For the Compute / Disk offering parameters, it’s still required to set Min and Max IOPS as the Storage Quality of Service parameters – but there is no need to define “Hypervisor Snapshot Reserve”, since with KVM there is no datastore / SR and VM snapshots are not supported, so there is nothing to reserve space for.

This concludes this part of the SolidFire article series. In the next part, we’ll cover the different feature sets that different hypervisors have when it comes to using SolidFire as Primary Storage, and also examine the way things work under the hood.

About the author

Andrija Panic is a Cloud Architect at ShapeBlue, the Cloud Specialists, and is a committer and PMC member of Apache CloudStack. Andrija spends most of his time designing and implementing IaaS solutions based on Apache CloudStack.
We would like to thank Mike Tutkowski, Senior Software Developer at SolidFire, who implemented the SolidFire plugin in CloudStack, for his review and help with this article.

 
