Loopback Performance Issues (Where am I going wrong?)


verticalsoft
Posts: 14
Joined: Mon Oct 08, 2018 3:07 pm

Fri Oct 12, 2018 6:06 pm

Hello Vitaliy,

Any other suggestions?

Kind regards
Dave
verticalsoft
Posts: 14
Joined: Mon Oct 08, 2018 3:07 pm

Sat Oct 13, 2018 8:48 pm

Can any other members test the speed of loopback starwind storage please?

I'm not even talking about making it highly available.

Just test your underlying storage and note your speeds.

Then create a 100 GB StarWind device without any caching and attach it using the 127.0.0.1 loopback address.
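
For anyone willing to try, the two runs I mean look roughly like this (the drive letters, the 20 GB test file and the thread/queue settings are just the ones I happen to use, so adjust to suit your setup):

# 1. Baseline: 64K sequential reads directly against the underlying storage (here D:)
diskspd.exe -t6 -si -b64K -w0 -o64 -d60 -h -L -c20G d:\test.io

# 2. The same run against the StarWind device mounted via 127.0.0.1 loopback (here R:)
diskspd.exe -t6 -si -b64K -w0 -o64 -d60 -h -L -c20G r:\test.io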

I've tested on two Dell R630s with the same poor performance.

The underlying storage is blisteringly quick, but the StarWind device is slow.

I know it's a lot to ask but I'm scratching my head here.

Kind regards
David
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Mon Oct 15, 2018 8:03 am

Hello verticalsoft,
Did you install Loopback Accelerator Driver during the process of StarWind installation?
verticalsoft
Posts: 14
Joined: Mon Oct 08, 2018 3:07 pm

Mon Oct 15, 2018 9:01 am

Oleg(staff) wrote:Hello verticalsoft,
Did you install Loopback Accelerator Driver during the process of StarWind installation?
Yes, confirmed. I did the server setup.

Kind regards
David
Attachment: Setup.png (13.07 KiB)
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Mon Oct 15, 2018 12:43 pm

Could you please log a support case using this form?
verticalsoft
Posts: 14
Joined: Mon Oct 08, 2018 3:07 pm

Mon Oct 15, 2018 7:11 pm

Hello Again,

I've had some phone support today with StarWind and they agreed something's not quite right. I had to cut the phone conversation short but will continue tomorrow.
In the meantime I decided to create a RAM disk with loopback to test whether the performance was the same, and yet again the throughput was only circa 241 MB/s.
I'm assuming this must be an issue with the loopback accelerator performance, as a RAM disk should be blisteringly quick.

Kind regards
Dave


Command Line: diskspd.exe -t6 -si -b64K -w0 -o64 -d60 -h -L -c20G r:\test.io

Input parameters:

timespan: 1
-------------
duration: 60s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'r:\test.io'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing read test
block size: 65536
using interlocked sequential I/O (stride: 65536)
number of outstanding I/O operations: 64
thread stride size: 0
threads per file: 6
using I/O Completion Ports
IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time: 60.00s
thread count: 6
proc count: 24

CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 18.52%| 0.86%| 17.66%| 81.48%
1| 12.81%| 0.96%| 11.85%| 87.19%
2| 15.29%| 0.60%| 14.69%| 84.71%
3| 13.52%| 1.30%| 12.21%| 86.48%
4| 9.61%| 0.55%| 9.06%| 90.39%
5| 24.09%| 1.67%| 22.42%| 75.91%
6| 8.75%| 1.41%| 7.34%| 91.25%
7| 11.28%| 0.78%| 10.49%| 88.70%
8| 5.73%| 0.42%| 5.31%| 94.27%
9| 54.82%| 0.73%| 54.09%| 45.16%
10| 5.65%| 0.81%| 4.84%| 94.35%
11| 23.36%| 0.55%| 22.81%| 76.67%
12| 0.21%| 0.03%| 0.18%| 99.79%
13| 0.83%| 0.42%| 0.42%| 99.17%
14| 0.29%| 0.00%| 0.29%| 99.71%
15| 0.21%| 0.10%| 0.10%| 99.74%
16| 0.68%| 0.16%| 0.52%| 99.32%
17| 0.21%| 0.03%| 0.18%| 99.79%
18| 0.10%| 0.03%| 0.08%| 99.87%
19| 0.16%| 0.03%| 0.13%| 99.82%
20| 0.94%| 0.36%| 0.57%| 99.09%
21| 0.13%| 0.00%| 0.13%| 99.87%
22| 0.65%| 0.08%| 0.57%| 99.30%
23| 0.49%| 0.00%| 0.49%| 99.48%
-------------------------------------------
avg.| 8.68%| 0.49%| 8.19%| 91.31%

Total IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 2239365120 | 34170 | 35.59 | 569.50 | 110.977 | 158.075 | r:\test.io (20GB)
1 | 2652635136 | 40476 | 42.16 | 674.60 | 94.874 | 6.298 | r:\test.io (20GB)
2 | 2453143552 | 37432 | 38.99 | 623.87 | 103.523 | 133.649 | r:\test.io (20GB)
3 | 2638348288 | 40258 | 41.94 | 670.97 | 95.372 | 14.160 | r:\test.io (20GB)
4 | 2622095360 | 40010 | 41.68 | 666.84 | 95.982 | 28.043 | r:\test.io (20GB)
5 | 2558263296 | 39036 | 40.66 | 650.60 | 98.383 | 62.823 | r:\test.io (20GB)
-----------------------------------------------------------------------------------------------------
total: 15163850752 | 231382 | 241.02 | 3856.38 | 99.522 | 86.340

Read IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 2239365120 | 34170 | 35.59 | 569.50 | 110.977 | 158.075 | r:\test.io (20GB)
1 | 2652635136 | 40476 | 42.16 | 674.60 | 94.874 | 6.298 | r:\test.io (20GB)
2 | 2453143552 | 37432 | 38.99 | 623.87 | 103.523 | 133.649 | r:\test.io (20GB)
3 | 2638348288 | 40258 | 41.94 | 670.97 | 95.372 | 14.160 | r:\test.io (20GB)
4 | 2622095360 | 40010 | 41.68 | 666.84 | 95.982 | 28.043 | r:\test.io (20GB)
5 | 2558263296 | 39036 | 40.66 | 650.60 | 98.383 | 62.823 | r:\test.io (20GB)
-----------------------------------------------------------------------------------------------------
total: 15163850752 | 231382 | 241.02 | 3856.38 | 99.522 | 86.340

Write IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | r:\test.io (20GB)
1 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | r:\test.io (20GB)
2 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | r:\test.io (20GB)
3 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | r:\test.io (20GB)
4 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | r:\test.io (20GB)
5 | 0 | 0 | 0.00 | 0.00 | 0.000 | N/A | r:\test.io (20GB)
-----------------------------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00 | 0.000 | N/A


%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 74.025 | N/A | 74.025
25th | 93.528 | N/A | 93.528
50th | 97.216 | N/A | 97.216
75th | 99.177 | N/A | 99.177
90th | 100.880 | N/A | 100.880
95th | 102.217 | N/A | 102.217
99th | 106.144 | N/A | 106.144
3-nines | 1615.593 | N/A | 1615.593
4-nines | 3070.565 | N/A | 3070.565
5-nines | 3087.779 | N/A | 3087.779
6-nines | 3088.561 | N/A | 3088.561
7-nines | 3088.561 | N/A | 3088.561
8-nines | 3088.561 | N/A | 3088.561
9-nines | 3088.561 | N/A | 3088.561
max | 3088.561 | N/A | 3088.561
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Tue Oct 23, 2018 3:06 pm

We are working on this case.
We will give an update to the community.
verticalsoft
Posts: 14
Joined: Mon Oct 08, 2018 3:07 pm

Wed Oct 31, 2018 11:29 am

Hello Members,

After much support from the nice guys at StarWind, we have identified that the issue looks like it was either Windows Defender or ESET server antivirus (not sure which).

After uninstalling both, performance went up to the expected transfer rates.

If anyone else is having performance issues on Windows Server 2016 (it may be the same on 2012 R2), note that you cannot simply disable Windows Defender; I had to uninstall it with PowerShell: Uninstall-WindowsFeature -Name Windows-Defender
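
For reference, roughly what I ran in an elevated PowerShell prompt (the feature check is just my way of confirming the feature name first, and a reboot is needed after removal):

# Confirm the Windows Defender feature is installed (Windows Server 2016)
Get-WindowsFeature -Name Windows-Defender

# Remove it, then reboot the server
Uninstall-WindowsFeature -Name Windows-Defender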

I hope this may help others.

Kind regards
David
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Oct 31, 2018 2:57 pm

David,

We are glad we could help you with the issue you experienced. Thank you for the feedback.