I have recently purchased a Synology DS713+ unit as I needed a reliable storage solution for my new “all physical” vSphere Home Lab. Synology has a great reputation for quality, features and performance, and it is one of the manufacturers of choice among home labbers. I was particularly intrigued by the fact that the DS713+ comes with full VAAI capabilities at a reasonable price (I was able to find it for less than 400 Euros without disks).

I will spare you all the fancy stuff about the unboxing, the initial configuration and the detailed (and indeed impressive) feature list, and go straight to the point: I will show you how, after some trial and error, I managed to correctly set up a fully VAAI-backed iSCSI DataStore for my ESXi 5.5 U1 hosts. Unfortunately the Synology documentation is a bit lacking on this topic, and it took me a few attempts before I got it right.

The entire configuration on the DS713+ was performed in the Storage Manager application (on the latest DSM 5.0 release). I had already inserted two HDDs and initialized them as part of the initial unit installation routine. Assuming that you did the same, you should be presented with a similar information screen:

110714_1_pic1

 

If all is good, then it is time to set up the Disk Group, which is basically Synology’s term for a RAID set. Click on “Disk Group” on the left, then “Create”, and the Disk Group creation wizard will start.

Ensure that both HDDs are selected and click “Next”:

110714_1_pic2

 

You will receive a warning about data on your HDDs being erased. Think twice, and if this is what you really want, click “OK”:

110714_1_pic3

The next screen is about choosing the RAID level. I did not want to use Synology’s proprietary SHR (Synology Hybrid RAID) technology (which basically allows you to mix and match HDDs with different characteristics), so I went for the classic RAID 1 option. Select “RAID 1” and press “Next”:

110714_1_pic4

 

In the next screen you will be asked whether or not to perform a disk check, to identify and mark bad sectors. This is a time-consuming task which can go on for several hours: if the disks have never been used in your DS713+ unit before, it is definitely worth performing the check to avoid problems in the future; otherwise it is safe to skip it. Choose according to your situation and, again, click “Next”:

110714_1_pic5

 

You will be presented with the wizard recap screen. If all the presented values are what you expect, click “Apply” and let the Synology unit do its job. This could take anywhere from the time it takes to sip a coffee to quite a few hours, depending on the choice made at the “disk check” stage. I set up email notifications, and my unit was kind enough to email me about the completion of the job while I was shopping for groceries!

110714_1_pic6

 

You’ll be able to follow the process in the “Disk Group” area. Note the disk check task (a.k.a. “data scrubbing”) as it progresses; if you opted for the check, you should refrain from performing any other activity on the unit until the disk group has been successfully created.

110714_1_pic7

 

Now comes the interesting part. In my first attempt, I created an iSCSI LUN as a block device on a newly created Disk Group, with no Volumes on top of it. I did not create any volume as I was not interested in any of the file-level services provided by the DS713+; all I needed was to provide some iSCSI LUNs to my ESXi hosts and create DataStores.

This approach seemed to work, as I was able to create an iSCSI LUN and present it to my hosts, but when I validated the DataStore for VAAI support, vCenter reported its “Hardware Acceleration” status as “Unknown” (which means that some, but not all, of the four VAAI primitives were supported).

A deeper check with the following esxcli command:

esxcli storage core device vaai status get

revealed this:

110714_1_pic8

 

which meant that only one of the four VAAI primitives was supported. Something was obviously wrong here, because the Synology product literature clearly states that VAAI is “fully supported” by this model.
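For reference, the output in that broken state looked roughly like the sample below. The device identifier is a placeholder, and which single primitive reported as supported will depend on the configuration, so treat the statuses as purely illustrative; the field names, however, are the ones `esxcli storage core device vaai status get` actually prints:

```text
naa.xxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: unsupported
   Zero Status: unsupported
   Delete Status: unsupported
```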

Then I recalled that some options had shown up greyed out during the creation of the iSCSI LUN:

110714_1_pic9

 

So I deleted the LUN and tried a different approach: the trick, as you will see, was to create a Volume on top of the Disk Group and only then create an iSCSI LUN on top of that Volume.

Let’s continue, then.

Immediately after the successful creation of the Disk Group, go to the “Volume” section and click “Create”. The Volume creation wizard will begin. The only possible choice here is “Custom” since the underlying Disk Group has not been created using SHR, so just click “Next” and move to the following step:

110714_1_pic10

 

Once again, you’ll be forced to make the only available choice, which is “Multiple Volumes on RAID”, since “Single Volume on RAID” is greyed out. This, again, stems from the fact that we created a Disk Group first. Just click “Next”.

110714_1_pic11

 

Unsurprisingly enough, the next step also forces a single choice: you must select “Choose an existing Disk Group” and pick from the drop-down menu which of the existing Disk Groups to create the Volume on. In our case there will be only one (as this is a two-bay unit with both disks assigned to one RAID 1 group).

110714_1_pic12

 

At this stage you can specify the size of your Volume; do not forget that you can create multiple ones, if you wish. I personally went for a single Volume consuming the entire Disk Group capacity, since I will only use it for iSCSI LUNs and will do my space partitioning at the LUN level. Enter your desired size and click “Next”:

110714_1_pic13

 

Finally the wizard summary screen will appear; here you can perform a final check prior to clicking “Apply” to initiate the Volume creation task:

110714_1_pic14

 

You can follow the process or grab a coffee in the meantime, as it can take some time because of the file system optimization task running in the background:

110714_1_pic15

 

When the task is completed, it is finally time to create the iSCSI LUN. You can opt to create the iSCSI target beforehand (so it is already available in the LUN creation wizard), set it up as part of the LUN creation process, or create it independently afterwards.

Click on “iSCSI LUN” and then “Create”: the iSCSI LUN creation wizard will begin. Interestingly enough, you’ll only be able to choose the “iSCSI LUN (Regular Files)” option. As misleading as it might seem, this is how the LUN must be set up to guarantee full VAAI support.

110714_1_pic16

 

After you have clicked “Next”, you will have to define the LUN’s properties: choose a distinctive name for it, select its parent Volume (we only have one), and ensure that both “Thin Provisioning” and “Advanced LUN features” are set to “Yes”. Failing to do so will make most of the VAAI primitives unavailable, as seen earlier.

110714_1_pic17

 

Finally, choose an adequate size and specify whether you’d like:

  • to create a new iSCSI target to map the LUN to;
  • to use an existing target;
  • not to map at all the LUN for the time being.

During my tinkering I noticed some weird behavior when the “Map existing iSCSI targets” option was chosen (the creation process never completed and the target went offline), so I suggest being safe rather than sorry and selecting “None”. We will create a target later on and map the LUN to it afterwards.

110714_1_pic18

 

Click “Next” and you’ll be presented with the wizard summary screen. Check that everything is OK, then press “Apply”.

110714_1_pic19

 

The new LUN will be created almost immediately, and you can check the outcome as in the screenshot below. Note that with this kind of LUN the “Clone” and “Snapshot” features will be available. This DS unit really behaves like a nano-SAN!

110714_1_pic20

 

Let’s create the iSCSI target now, by clicking on the appropriate link on the left, and then on “Create”. In the first step of this wizard, you can leave everything as it is, unless you wish to customize the target name and IQN and/or enable CHAP authentication.  Make the changes according to your needs, then click “Next”:

110714_1_pic21

 

We want to map the target to an existing LUN, so choose appropriately and click “Next”:

110714_1_pic22

 

Time for a final check at the wizard summary screen: by pressing “Apply” you will confirm the creation of the target and its association with the LUN you selected in the previous step.

110714_1_pic23

 

The newly created target will appear in the iSCSI target list:

110714_1_pic24

 

By selecting a target and clicking “Edit” it is possible to perform some tweaking: I’d recommend checking “Allow multiple sessions from one or more iSCSI initiators”, as this is mandatory for setting up shared storage among ESXi hosts:

110714_1_pic25

 

I would also make some changes in the “Masking” tab to make sure that only the right hosts have access to the LUN. In the example below, I removed the “Default privileges” Read/Write settings (which would have allowed access to the LUN by any initiator) and assigned them explicitly to one of my ESXi hosts:
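To fill in the masking list you need each host’s initiator IQN. On an ESXi host it can be read from the command line; the adapter name `vmhba33` below is an assumption, so check yours with the list command first:

```text
# list the iSCSI adapters to find the software initiator's vmhba name
esxcli iscsi adapter list
# show the adapter details; the "Name" field is the host's IQN
esxcli iscsi adapter get --adapter=vmhba33
```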

110714_1_pic26

 

The “Mapping” tab can be used in the future to map new LUNs to the same target, should you want to keep it simple and use one target for more than one LUN (perfectly fine in a lab environment).

Once the target has been created and tweaked, its detailed status should appear similar to the screenshot below:

110714_1_pic27

 

It is now time to present the LUN to the ESXi hosts and verify that VAAI is working as expected. I’ll leave this to you as this is a standard vSphere admin task.
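For completeness, a minimal command-line sketch of that task on an ESXi host would look like the following. The adapter name `vmhba33` and the address `192.168.1.50` are assumptions: substitute your software iSCSI adapter and the IP of your Synology unit.

```text
# enable the software iSCSI initiator (a no-op if already enabled)
esxcli iscsi software set --enabled=true
# point dynamic (SendTargets) discovery at the Synology unit
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
# rescan the adapter so the new LUN shows up and a DataStore can be created on it
esxcli storage core adapter rescan --adapter=vmhba33
```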

As you can see, the LUN is detected by the ESXi host and its VAAI status correctly shows up as “Supported”:

110714_1_pic28

 

To further confirm, the esxcli command now reports that all four VAAI primitives are supported:
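If you prefer a scriptable check over reading the output by eye, the per-primitive statuses can simply be counted. The sketch below parses a captured copy of the `esxcli storage core device vaai status get` output; the device ID and the statuses in the sample are made up for illustration, but the field layout matches what esxcli prints.

```shell
# Count how many VAAI primitives report "supported" for a device, given
# output captured from: esxcli storage core device vaai status get
# (the sample text below is illustrative, not taken from a real device)
vaai_output='naa.600140500000000000000000000000000
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported'

# each primitive prints one "<Primitive> Status: supported" line
supported=$(printf '%s\n' "$vaai_output" | grep -c 'Status: supported')
echo "Supported VAAI primitives: $supported"
```

On a correctly configured Synology LUN the count should be 4; anything less means one of the wizard options (Thin Provisioning or Advanced LUN features) was probably left off.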

110714_1_pic29

 

Setting up the DS unit to fully leverage vSphere’s storage capabilities proved to be a little trickier than expected, mostly because of the lack of information in Synology’s documentation. Nevertheless, experimenting with it until I found the right procedure was a valuable experience. I hope this article will be useful to you, and if you have any comments, please share them with me.

My plan now is to test the DS unit to see how much VAAI affects performance. I will probably write a new article as soon as I have some worthwhile data.

Until then, I am leaving you with a useful VMware KB article about VAAI, well worth bookmarking, which can be found here:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976


7 Comments

  1. Pingback: Newsletter: July 11, 2014 | Notes from MWhite

  2. Great article! Is it a prerequisite that you install the VAAI plug-in from Synology on each of your hosts as well?

    Reply

    • Pietro Piutti

      Absolutely not. It works out of the box if you use iSCSI.
      The plugin is only for NFS DataStores.

      Reply

  3. Great article, however a slight variation from my lab – in short I am running a DS1513+ with the latest FW.

    From Choose a LUN type menu:
    I am able to pick iSCSI LUN (Block-Level) – Single LUN on RAID without any grey out.

    Then following the remaining steps as per your blog, I can get Block-Level storage with VAAI supported.

    Reply

    • Pietro Piutti

      Thank you Charles, I’m sure people owning the same model as yours (or a similar one) will find your additional info very useful.

      Reply

    • Interesting. Did your Hardware Acceleration show “Supported” in your ESXi host?

      Reply

  4. Nice write up, any update on “test[ing] the DS unit to see how much VAAI affects the performance.” Thanks!

    Reply

  5. Andrea Mariotti

    Hi,
    i followed your guide to enable VAAI on a Synology DS1815+ (6x480GB Kingston SSDs, ESXi 6, 4 iSCSI initiators, 4 targets, 4 paths) and it worked.

    Then I made some tests and the difference was huge:
    – with the “direct” block mode I got 44k IOPS/8K/100% read.
    – with the “VAAI” configuration I got near 13K IOPS/8K/100% read.

    I experienced the same differences also in write and mixed tests.

    I suspected that with the “VAAI” configuration the Synology OS introduces a “layer” that creates latency, and performance dropped to almost 1/3.

    So, my suggestion is not to set up the Synology this way but to go “direct” block mode.
    You lose some functionality, but you gain performance.

    Reply
