StarWind iSCSI SAN
StarWind Native SAN for Hyper-V
 

Disk Bridging Bug Report



Postby AKnowles » Mon Feb 17, 2014 4:16 am


I was testing Beta 3 using VMware IO Analyzer and decided to test disk bridging on a Samsung 840 Pro series (512 GB) model. To create the disk bridge I performed the following steps:

1) Open Computer Manager to determine the disk ID.
2) Open a command prompt and run DiskPart with the following commands:
a) Select disk=1
b) Clean
c) Exit
3) In Computer manager I selected the disk and took it offline.
4) Opened the Starwind Management Console and:
a) Add Device (Advanced)
b) Hard Disk Drive
c) Physical Disk
d) Selected the physical disk, selected the asynchronous flag, and clicked the Next button
e) Selected Write-Back with 4096 cache. Clicked Next.
f) Created a new target, specified SSD01 for name, and accepted defaults for target. Enabled asynchronous mode. Clicked Next.
g) Clicked Create.

5) Added target to my ESXi access rule.

6) Opened the VMware vSphere Client, selected the iSCSI storage adapter, and rescanned the adapter.
7) Selected the ATA (disk-bridged) device and right-clicked it to change the path policy to Round Robin (I have 4 NICs/paths).
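The DiskPart work in steps 2-3 can be collapsed into a single script and run with `diskpart /s bridge-prep.txt` from an elevated prompt. This is only a sketch of what I did; the disk number 1 is an assumption from my box, and the read-only check is an extra precaution I'd suggest, not a step from the original procedure:

```shell
rem bridge-prep.txt - DiskPart script for the clean/offline preparation.
rem WARNING: "clean" destroys all partition data on the selected disk.
rem Disk 1 is an assumption - verify the disk number in Computer Management.
select disk=1
clean
rem Clear the read-only attribute in case the SAN policy set it; a
rem write-protected disk fails writes with error 19 (ERROR_WRITE_PROTECT).
attributes disk clear readonly
rem Take the disk offline so StarWind can present it as a LUN.
offline disk
exit
```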

For the first VMware IO Analyzer VM I edited the settings to create the second disk as follows:

1) Click the Add button.
2) Select Hard disk click Next.
3) Select Create new disk click Next.
4) Change disk size to 100 GB, select Thick Provision Eager Zeroed, specify a datastore or datastore cluster, and click Browse.
5) Select the bridged disk. Click Next.
6) Select Independent and Persistent. Click Next.
7) Click Finish.
8) Click OK.
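For anyone scripting this instead of clicking through the GUI, the same disk-add can be sketched in PowerCLI. This is a hypothetical equivalent, not verified against vSphere 5.5; the server, VM, and datastore names are placeholders borrowed from this thread:

```shell
# Hypothetical PowerCLI equivalent of steps 1-8 above (PowerShell session
# with the VMware PowerCLI snap-in loaded; names are placeholders).
Connect-VIServer -Server vcenter.example.local
New-HardDisk -VM (Get-VM 'io-analyzer01') -CapacityGB 100 `
    -StorageFormat EagerZeroedThick -Datastore (Get-Datastore 'SSD01') `
    -Persistence IndependentPersistent
```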

At about 12-13 percent complete, the process fails with the message: "A general system error occurred: Input/output error." After that, the VMware datastore also disappears, even though the device is still listed under the iSCSI adapter. I was able to duplicate this issue with a clean boot and have attached the log file.

BTW: Using an image file on the same SSD works without issue.
Attachments
SSD-Failure.zip
Log file
AKnowles
 
Posts: 27
Joined: Sat Feb 15, 2014 6:18 am

Re: Disk Bridging Bug Report

Postby Bohdan (staff) » Mon Feb 17, 2014 12:25 pm

Thank you!
2/15 20:10:41.935 e3c IMG: *** ImageFile_ReadWriteSectors: WriteFileEx(0x000000020F510000, 65536, 0xaf2fc0000) failed (19).
2/15 20:10:41.935 e3c IMG: *** ImageFile_ReadWriteSectorsCompleted: Error occured (ScsiStatus = 2, DataTransferLength = 0)!

This is
http://msdn.microsoft.com/en-us/library/windows/desktop/ms681382(v=vs.85).aspx
ERROR_WRITE_PROTECT
19 (0x13)
The media is write protected.

Why did you do that?
3) In Computer manager I selected the disk and took it offline.
Bohdan (staff)
Staff
 
Posts: 437
Joined: Wed May 23, 2007 12:58 pm

Re: Disk Bridging Bug Report

Postby AKnowles » Mon Feb 17, 2014 11:31 pm

I did not write-protect it. I merely cleaned the disk to wipe all partitions and then exported the SSD. As for why I took it offline: without the disk being offline, StarWind would not successfully present it as a LUN. StarWind generated an error in the log. I'll post the original log with my initial attempts ...

Re: Disk Bridging Bug Report

Postby AKnowles » Tue Feb 18, 2014 3:58 am

For a bit more clarification ...

Initially I used StarWind to export a previously formatted disk using the Physical Disk (disk bridging) method. That method failed to present the LUN to ESXi 5.5. After I took the disk offline, I was able to successfully use StarWind to export the disk (disk bridging), but ESXi 5.5 was unable to format it. Only after I cleaned the disk - wiping all partition information - was I able to export it using StarWind, successfully present the LUN to ESXi 5.5, and format it as VMFS 5. The LUN did not fail until after I started to copy a VM to it. It failed at 11~13 percent with the general I/O error.

If you would like me to try a different method, please let me know.

Re: Disk Bridging Bug Report

Postby anton (staff) » Tue Feb 18, 2014 11:00 am

What block size do you use on your physical disk? As a workaround you can format the disk to NTFS and layer an IMG container on it. DB is by far not the best mapping method in any case.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3922
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Re: Disk Bridging Bug Report

Postby AKnowles » Wed Feb 19, 2014 4:14 am

The device is an SSD (I think the page/block size is 8K); the NTFS block size was 4K. When I cleaned it, as a raw disk it had no formatted block size. Under ESXi it was formatted as VMFS 5 with a 1 MB block size. I currently have it working as an NTFS disk with an image file.

I only reported it as an error because it is a beta and I thought you'd like to know it failed.

I am curious, though, why you state a disk bridge is not as good a choice as an image file, since I thought a raw disk natively formatted as VMFS might perform better because it does not have to deal with the underlying OS overhead.

Re: Disk Bridging Bug Report

Postby anton (staff) » Wed Feb 19, 2014 2:56 pm

Flash hardware page size is irrelevant, what "sector" size does this SSD report? 4KB or 512E?

It should work with both raw mapping and an image file, so a log from a failed raw-mapping attempt is req'd to see what's going on. Much appreciated!

Even with layering FLAT (image file) on top of a raw disk we don't use the file system, so performance is identical. With LSFS, raw mapping is not recommended; LSFS changes the workload distribution from random to sequential (writes), so performance would be better. LSFS cannot be layered on top of raw disks for now (but we'll add this soon).


Re: Disk Bridging Bug Report

Postby AKnowles » Thu Feb 20, 2014 3:57 am

Hmm, I already posted the original log of the failed mount (second zip file). As for determining the sector size, do you have a specific command to run?
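In case it helps: a few stock Windows commands report the logical (reported) vs. physical sector size. These are generic suggestions, not StarWind-specific, and the disk numbers and drive letter are placeholders:

```shell
:: Logical ("reported") sector size per physical drive:
wmic diskdrive get Caption,BytesPerSector

:: Windows 8 / Server 2012 and later: the Storage module shows both sizes;
:: a 512e drive reports LogicalSectorSize=512, PhysicalSectorSize=4096.
powershell -Command "Get-Disk | Format-Table Number,FriendlyName,LogicalSectorSize,PhysicalSectorSize"

:: Per mounted NTFS volume (drive letter is a placeholder):
fsutil fsinfo ntfsinfo S:
```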

Also, I am only using thick image files. I found the LSFS method caused too much performance loss on the initial writes to be acceptable (based on the IOmeter results). A repeat after the LSFS had grown to size was much better, but in my opinion not worth the performance difference. FWIW, I created the IO Analyzer VMs using eager-zeroed thick on the LSFS backing and expected better performance the first time I ran the tests.

Re: Disk Bridging Bug Report

Postby anton (staff) » Thu Feb 20, 2014 5:00 pm

OK, we'll re-check.

We'll fix LSFS pre-allocation. It's specifically written to boost writes. Random writes are ~10 times faster with LSFS compared to FLAT, and that's what dominates a typical VM workload.

Do you have any numbers to share?


Re: Disk Bridging Bug Report

Postby AKnowles » Fri Feb 21, 2014 3:35 am

I archived the results as part of my cleanup. It was getting hard to determine what was what for comparisons, and I'll be honest, I haven't read the entire VMware manual on how to restore the archives. I've only started adding descriptions so I could make better long-term decisions. But here are a few snippets from my V8 B3 SSD thick disk tests:

Max_IOPS:

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-15_20:48:19 | io-analyzer01 | 0.5k_100%Read_0%Random | 8341.02 | 8341.02 | 0 | 4.07 | 4.07 | 0
2014-02-15_20:48:18 | io-analyzer03 | 0.5k_100%Read_0%Random | 8335.32 | 8335.32 | 0 | 4.07 | 4.07 | 0
2014-02-15_20:48:19 | io-analyzer02 | 0.5k_100%Read_0%Random | 8353.76 | 8353.76 | 0 | 4.08 | 4.08 | 0
2014-02-15_20:48:19 | io-analyzer04 | 0.5k_100%Read_0%Random | 8433.39 | 8433.39 | 0 | 4.12 | 4.12 | 0
Sum | | | 33463.49 | 33463.49 | 0.00 | 16.34 | 16.34 | 0.00
Average | | | 8365.87 | 8365.87 | 0.00 | 4.09 | 4.09 | 0.00

Max-Throughput:

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-15_20:51:22 | io-analyzer01 | 512k_100%Read_0%Random | 203.39 | 203.39 | 0 | 101.69 | 101.69 | 0
2014-02-15_20:51:21 | io-analyzer03 | 512k_100%Read_0%Random | 199.75 | 199.75 | 0 | 99.88 | 99.88 | 0
2014-02-15_20:51:22 | io-analyzer02 | 512k_100%Read_0%Random | 199.73 | 199.73 | 0 | 99.86 | 99.86 | 0
2014-02-15_20:51:21 | io-analyzer04 | 512k_100%Read_0%Random | 199.74 | 199.74 | 0 | 99.87 | 99.87 | 0
Sum | | | 802.61 | 802.61 | 0.00 | 401.30 | 401.30 | 0.00
Average | | | 200.65 | 200.65 | 0.00 | 100.33 | 100.33 | 0.00

Max_Write_IOPS:

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-15_20:54:24 | io-analyzer01 | 0.5k_0%Read_0%Random | 14961.42 | 0 | 14961.42 | 7.31 | 0 | 7.31
2014-02-15_20:54:23 | io-analyzer03 | 0.5k_0%Read_0%Random | 15007.02 | 0 | 15007.02 | 7.33 | 0 | 7.33
2014-02-15_20:54:24 | io-analyzer02 | 0.5k_0%Read_0%Random | 14988.76 | 0 | 14988.76 | 7.32 | 0 | 7.32
2014-02-15_20:54:24 | io-analyzer04 | 0.5k_0%Read_0%Random | 15019.47 | 0 | 15019.47 | 7.33 | 0 | 7.33
Sum | | | 59976.67 | 0.00 | 59976.67 | 29.29 | 0.00 | 29.29
Average | | | 14994.17 | 0.00 | 14994.17 | 7.32 | 0.00 | 7.32

Max_Write_Throughput:

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-15_20:57:26 | io-analyzer01 | 512k_0%Read_0%Random | 156.19 | 0 | 156.19 | 78.09 | 0 | 78.09
2014-02-15_20:57:26 | io-analyzer03 | 512k_0%Read_0%Random | 152.11 | 0 | 152.11 | 76.06 | 0 | 76.06
2014-02-15_20:57:26 | io-analyzer02 | 512k_0%Read_0%Random | 152.1 | 0 | 152.1 | 76.05 | 0 | 76.05
2014-02-15_20:57:26 | io-analyzer04 | 512k_0%Read_0%Random | 152.02 | 0 | 152.02 | 76.01 | 0 | 76.01
Sum | | | 612.42 | 0.00 | 612.42 | 306.21 | 0.00 | 306.21
Average | | | 153.11 | 0.00 | 153.11 | 76.55 | 0.00 | 76.55

This is on a white box with an Intel Core i5-2400S with 32 GB RAM, an Areca 1261 RAID controller with 2 GB cache, 10 Seagate 500 GB 7200 RPM 2.5" disk drives (RAID 5), 2 Seagate 3 TB 7200 RPM drives (RAID 1), 2 Samsung 840 Pro 512 GB SSDs (on individual 6 Gb/s SATA channels on the motherboard), and an Intel 4-port (ET model) NIC. The motherboard is an Intel DQ67SW.

The LUN is a 1 TB thick-provisioned LUN with 4 GB RAM cache.

I'll spend some time this weekend regenerating the LSFS vs. thick-provisioned LUNs and post the results for you. That's easy enough to do since I rebuilt my other NAS (it uses 5 2 TB drives in RAID 5 on an IBM 5014 cross-flashed with the LSI 9260 ROM) and various other 2.5" drives. It is slower, but uses the version 6 software. I wanted a stable platform for my production (so to speak) data while I tested version 8.

Re: Disk Bridging Bug Report

Postby Bohdan (staff) » Fri Feb 21, 2014 12:15 pm

There were modifications in caching and that caused the problem for DiskBridge. We are fixing it. I'll give you the updated build ASAP.

Re: Disk Bridging Bug Report

Postby AKnowles » Tue Feb 25, 2014 7:01 pm

OK, here are the Thick vs. Thin numbers.

1 TB Thick 4096 cache V8 Beta 3

Max_IOPs

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-24_20:59:55 | io-analyzer01 | 0.5k_100%Read_0%Random | 7238.98 | 7238.98 | 0 | 3.53 | 3.53 | 0
2014-02-24_20:59:40 | io-analyzer03 | 0.5k_100%Read_0%Random | 7502.94 | 7502.94 | 0 | 3.66 | 3.66 | 0
2014-02-24_20:59:55 | io-analyzer02 | 0.5k_100%Read_0%Random | 7378.51 | 7378.51 | 0 | 3.6 | 3.6 | 0
2014-02-24_20:59:40 | io-analyzer04 | 0.5k_100%Read_0%Random | 6573.01 | 6573.01 | 0 | 3.21 | 3.21 | 0
Sum | | | 28693.44 | 28693.44 | 0.00 | 14.00 | 14.00 | 0.00
Average | | | 7173.36 | 7173.36 | 0.00 | 3.50 | 3.50 | 0.00

Max_Throughput

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-24_21:02:57 | io-analyzer01 | 512k_100%Read_0%Random | 175.66 | 175.66 | 0 | 87.83 | 87.83 | 0
2014-02-24_21:02:42 | io-analyzer03 | 512k_100%Read_0%Random | 177.55 | 177.55 | 0 | 88.77 | 88.77 | 0
2014-02-24_21:02:58 | io-analyzer02 | 512k_100%Read_0%Random | 176.07 | 176.07 | 0 | 88.04 | 88.04 | 0
2014-02-24_21:02:43 | io-analyzer04 | 512k_100%Read_0%Random | 178.74 | 178.74 | 0 | 89.37 | 89.37 | 0
Sum | | | 708.02 | 708.02 | 0.00 | 354.01 | 354.01 | 0.00
Average | | | 177.01 | 177.01 | 0.00 | 88.50 | 88.50 | 0.00

Max_Write_IOPs

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-24_21:06:00 | io-analyzer01 | 0.5k_0%Read_0%Random | 28958.59 | 0 | 28958.59 | 14.14 | 0 | 14.14
2014-02-24_21:05:46 | io-analyzer03 | 0.5k_0%Read_0%Random | 28430.84 | 0 | 28430.84 | 13.88 | 0 | 13.88
2014-02-24_21:06:01 | io-analyzer02 | 0.5k_0%Read_0%Random | 29047.63 | 0 | 29047.63 | 14.18 | 0 | 14.18
2014-02-24_21:05:46 | io-analyzer04 | 0.5k_0%Read_0%Random | 28453.68 | 0 | 28453.68 | 13.89 | 0 | 13.89
Sum | | | 114890.74 | 0.00 | 114890.74 | 56.09 | 0.00 | 56.09
Average | | | 28722.69 | 0.00 | 28722.69 | 14.02 | 0.00 | 14.02

Max_Write_Throughput

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-24_21:09:03 | io-analyzer01 | 512k_0%Read_0%Random | 137.3 | 0 | 137.3 | 68.65 | 0 | 68.65
2014-02-24_21:08:48 | io-analyzer03 | 512k_0%Read_0%Random | 141.96 | 0 | 141.96 | 70.98 | 0 | 70.98
2014-02-24_21:09:04 | io-analyzer02 | 512k_0%Read_0%Random | 139.64 | 0 | 139.64 | 69.82 | 0 | 69.82
2014-02-24_21:08:49 | io-analyzer04 | 512k_0%Read_0%Random | 141.63 | 0 | 141.63 | 70.82 | 0 | 70.82
Sum | | | 560.53 | 0.00 | 560.53 | 280.27 | 0.00 | 280.27
Average | | | 140.13 | 0.00 | 140.13 | 70.07 | 0.00 | 70.07

And here's the Thin for comparison.

Max_IOPS

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:18:06 | io-analyzer01 | 0.5k_100%Read_0%Random | 14422.91 | 14422.91 | 0 | 7.04 | 7.04 | 0
2014-02-25_08:17:51 | io-analyzer03 | 0.5k_100%Read_0%Random | 14554.35 | 14554.35 | 0 | 7.11 | 7.11 | 0
2014-02-25_08:18:06 | io-analyzer02 | 0.5k_100%Read_0%Random | 14495.26 | 14495.26 | 0 | 7.08 | 7.08 | 0
2014-02-25_08:17:52 | io-analyzer04 | 0.5k_100%Read_0%Random | 14598.09 | 14598.09 | 0 | 7.13 | 7.13 | 0
Sum | | | 58070.61 | 58070.61 | 0.00 | 28.36 | 28.36 | 0.00
Average | | | 14517.65 | 14517.65 | 0.00 | 7.09 | 7.09 | 0.00

Max_Throughput

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:21:09 | io-analyzer01 | 512k_100%Read_0%Random | 107.97 | 107.97 | 0 | 53.99 | 53.99 | 0
2014-02-25_08:20:55 | io-analyzer03 | 512k_100%Read_0%Random | 108.4 | 108.4 | 0 | 54.2 | 54.2 | 0
2014-02-25_08:21:10 | io-analyzer02 | 512k_100%Read_0%Random | 107.71 | 107.71 | 0 | 53.86 | 53.86 | 0
2014-02-25_08:20:55 | io-analyzer04 | 512k_100%Read_0%Random | 107.97 | 107.97 | 0 | 53.99 | 53.99 | 0
Sum | | | 432.05 | 432.05 | 0.00 | 216.04 | 216.04 | 0.00
Average | | | 108.01 | 108.01 | 0.00 | 54.01 | 54.01 | 0.00

Max_Write_IOPS

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:24:10 | io-analyzer01 | 0.5k_0%Read_0%Random | 21106.45 | 0 | 21106.45 | 10.31 | 0 | 10.31
2014-02-25_08:23:55 | io-analyzer03 | 0.5k_0%Read_0%Random | 20793.57 | 0 | 20793.57 | 10.15 | 0 | 10.15
2014-02-25_08:24:11 | io-analyzer02 | 0.5k_0%Read_0%Random | 21086.32 | 0 | 21086.32 | 10.3 | 0 | 10.3
2014-02-25_08:23:56 | io-analyzer04 | 0.5k_0%Read_0%Random | 20767.7 | 0 | 20767.7 | 10.14 | 0 | 10.14
Sum | | | 83754.04 | 0.00 | 83754.04 | 40.90 | 0.00 | 40.90
Average | | | 20938.51 | 0.00 | 20938.51 | 10.23 | 0.00 | 10.23

Max_Write_Throughput

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:27:14 | io-analyzer01 | 512k_0%Read_0%Random | 41.57 | 0 | 41.57 | 20.78 | 0 | 20.78
2014-02-25_08:26:59 | io-analyzer03 | 512k_0%Read_0%Random | 41.47 | 0 | 41.47 | 20.74 | 0 | 20.74
2014-02-25_08:27:14 | io-analyzer02 | 512k_0%Read_0%Random | 40.79 | 0 | 40.79 | 20.4 | 0 | 20.4
2014-02-25_08:27:00 | io-analyzer04 | 512k_0%Read_0%Random | 40.84 | 0 | 40.84 | 20.42 | 0 | 20.42
Sum | | | 164.67 | 0.00 | 164.67 | 82.34 | 0.00 | 82.34
Average | | | 41.17 | 0.00 | 41.17 | 20.59 | 0.00 | 20.59

On the second run (Thin), things get more interesting:

Max_IOPS

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:30:16 | io-analyzer01 | 0.5k_100%Read_0%Random | 2916.22 | 2916.22 | 0 | 1.42 | 1.42 | 0
2014-02-25_08:30:02 | io-analyzer03 | 0.5k_100%Read_0%Random | 2787.53 | 2787.53 | 0 | 1.36 | 1.36 | 0
2014-02-25_08:30:17 | io-analyzer02 | 0.5k_100%Read_0%Random | 2844.11 | 2844.11 | 0 | 1.39 | 1.39 | 0
2014-02-25_08:30:02 | io-analyzer04 | 0.5k_100%Read_0%Random | 2792.19 | 2792.19 | 0 | 1.36 | 1.36 | 0
Sum | | | 11340.05 | 11340.05 | 0.00 | 5.53 | 5.53 | 0.00
Average | | | 2835.01 | 2835.01 | 0.00 | 1.38 | 1.38 | 0.00

Max_Throughput

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:33:20 | io-analyzer01 | 512k_100%Read_0%Random | 50.11 | 50.11 | 0 | 25.05 | 25.05 | 0
2014-02-25_08:33:05 | io-analyzer03 | 512k_100%Read_0%Random | 49.43 | 49.43 | 0 | 24.71 | 24.71 | 0
2014-02-25_08:33:21 | io-analyzer02 | 512k_100%Read_0%Random | 50.27 | 50.27 | 0 | 25.14 | 25.14 | 0
2014-02-25_08:33:06 | io-analyzer04 | 512k_100%Read_0%Random | 49.9 | 49.9 | 0 | 24.95 | 24.95 | 0
Sum | | | 199.71 | 199.71 | 0.00 | 99.85 | 99.85 | 0.00
Average | | | 49.93 | 49.93 | 0.00 | 24.96 | 24.96 | 0.00

Max_Write_IOPS

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:36:23 | io-analyzer01 | 0.5k_0%Read_0%Random | 17251.65 | 0 | 17251.65 | 8.42 | 0 | 8.42
2014-02-25_08:36:09 | io-analyzer03 | 0.5k_0%Read_0%Random | 17427.79 | 0 | 17427.79 | 8.51 | 0 | 8.51
2014-02-25_08:36:24 | io-analyzer02 | 0.5k_0%Read_0%Random | 17282.25 | 0 | 17282.25 | 8.44 | 0 | 8.44
2014-02-25_08:36:10 | io-analyzer04 | 0.5k_0%Read_0%Random | 17436.36 | 0 | 17436.36 | 8.51 | 0 | 8.51
Sum | | | 69398.05 | 0.00 | 69398.05 | 33.88 | 0.00 | 33.88
Average | | | 17349.51 | 0.00 | 17349.51 | 8.47 | 0.00 | 8.47

Max_Write_Throughput

Date/Time | Guest | Workload Spec | IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS
2014-02-25_08:39:26 | io-analyzer01 | 512k_0%Read_0%Random | 51.81 | 0 | 51.81 | 25.91 | 0 | 25.91
2014-02-25_08:39:11 | io-analyzer03 | 512k_0%Read_0%Random | 51.48 | 0 | 51.48 | 25.74 | 0 | 25.74
2014-02-25_08:39:27 | io-analyzer02 | 512k_0%Read_0%Random | 50.22 | 0 | 50.22 | 25.11 | 0 | 25.11
2014-02-25_08:39:12 | io-analyzer04 | 512k_0%Read_0%Random | 50 | 0 | 50 | 25 | 0 | 25
Sum | | | 203.51 | 0.00 | 203.51 | 101.76 | 0.00 | 101.76
Average | | | 50.88 | 0.00 | 50.88 | 25.44 | 0.00 | 25.44

It has been a while, but I believe the first time I did this, the thick results were all better than the thin results (first run), and on the second thin run the results were closer to the thick results. This time, the initial thin run was closer to the thick results (or even better for Max_IOPS), and the second run was quite a bit lower than expected.

However, in going back over these results I did notice that the VMs were not all dedicated to a single host, as they were in my first run a couple of weeks ago. So I'll try to rerun them on a single host with only the IO Analyzer VMs on it.

Re: Disk Bridging Bug Report

Postby anton (staff) » Tue Feb 25, 2014 9:04 pm

Yes, please share some more numbers. One question so far: if you did not precondition your SSD with writes initially, it would start with CRAZY high IOPS and then slow down. Not because of StarWind; that's the way flash drives work :)

Re: Disk Bridging Bug Report

Postby AKnowles » Tue Feb 25, 2014 10:25 pm

Anton,

The numbers above relate to an issue I noticed with Thick vs. Thin provisioning, not anything specifically SSD-related. The first set I posted (earlier in the thread) is on the SSD. I did not initialize the SSD between uses; I just deleted the LUN and files, then recreated them.

The numbers directly above are on the Areca 1261 RAID 5 disk (10 Seagate 7200 RPM 2.5" SATA). I have partitioned the disk into two partitions, so the first part of the disk is used for iSCSI LUNs and the rest for shared files. I will try to recreate the test on a single host to remove the possibility of vMotions, other VMs' access causing contention, etc. a bit later this week, since the numbers are not quite what I expected.

Re: Disk Bridging Bug Report

Postby anton (staff) » Wed Feb 26, 2014 9:36 am

OK, I see! Thank you for clarification :)

