Hi,
1) What's the difference between turning "asynchronous mode" on and off? Are there reasons not to use asynchronous mode?
2) Are reads and/or writes to SPTI devices or image files buffered?
3) Why does mksparse always create heavily fragmented files on the host system? I have always used mksparse -o, but even on a blank disk the whole file ends up heavily fragmented. Do you just open the file for writing, without telling the Win32 API how big the file will be?
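For what it's worth, the usual NTFS remedy for that kind of fragmentation is to declare the final file size before the first write, so the filesystem can allocate the clusters in one piece instead of extending the file write by write. A minimal sketch with plain Win32 calls; the file name and size are placeholders, and whether mksparse does or could do this is an assumption:

Code:
/* Sketch: pre-sizing an image file so NTFS can reserve contiguous space.
 * Whether mksparse works (or could work) this way is an assumption;
 * the file name and size below are placeholders. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER size;
    size.QuadPart = 10LL * 1024 * 1024 * 1024;      /* e.g. a 10 GB image */

    HANDLE h = CreateFileA("disk0.img", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* Declare the final size up front: move the file pointer to the
     * intended end and set EOF there, so the filesystem can allocate
     * the space in one go instead of growing the file piecemeal. */
    if (!SetFilePointerEx(h, size, NULL, FILE_BEGIN) || !SetEndOfFile(h)) {
        fprintf(stderr, "pre-allocation failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    CloseHandle(h);
    return 0;
}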
2) No. There is no buffering for ImageFile devices.

Lou wrote:
Are there plans for supporting this in the near future? It would speed some things up very much.
Lou wrote:
Did some quick IOMeter benchmarks with different settings. In each pair below, the first number is IO/s and the second is MB/s. Maybe I'll put together a more typical workload and run the whole thing again.

Code:
              sync, unbuffered  sync, buffered  async, unbuffered  async, buffered        SMB
Linear Read     201.12 / 20.13  158.71 / 15.94     189.91 / 19.45   153.03 / 14.92  154.20 / 15.46
Linear Write    155.46 / 15.97   85.74 /  8.70     160.19 / 15.77    83.08 /  8.24   66.93 /  6.90
Random Read      58.04 /  5.85   56.12 /  5.72      55.41 /  5.61    52.99 /  5.32  119.09 / 11.71
Random Write     92.10 /  8.96   41.69 /  4.12      90.29 /  8.99    39.12 /  4.19   33.43 /  3.58
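For orientation, the four non-SMB modes above plausibly correspond to CreateFile flag combinations along the following lines. This is only a sketch under that assumption; StarWind's actual open flags are not published here.

Code:
/* Sketch: plausible CreateFile flag combinations for the four non-SMB
 * modes in the table above.  StarWind's real flags are not published,
 * so treat this as an illustration, not its implementation. */
#include <windows.h>

HANDLE open_image(const char *path, BOOL async, BOOL buffered)
{
    DWORD flags = FILE_ATTRIBUTE_NORMAL;

    if (async)
        flags |= FILE_FLAG_OVERLAPPED;      /* several I/Os may be pending   */
    if (!buffered)
        flags |= FILE_FLAG_NO_BUFFERING     /* bypass the system file cache; */
               | FILE_FLAG_WRITE_THROUGH;   /* requires sector-aligned I/O   */

    return CreateFileA(path, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ,
                       NULL, OPEN_EXISTING, flags, NULL);
}

Note that FILE_FLAG_NO_BUFFERING obliges the caller to issue sector-aligned requests, which is one reason the buffered and unbuffered paths can behave so differently.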
Lou,
Thank you for sharing your test results! It seems your test ImageFile is placed on a single drive's partition (is it an IDE drive?). So there is no benefit from using the Async mode. If the ImageFile is stored on a RAID partition, the Async mode should show much better results than the Sync one...
Lou wrote:
No problem, I will do some further benchmarks with more sophisticated access patterns to see where the improvements from buffering show up. The image file is actually placed on a RAID-5 of four IDE drives on a Promise SX4000 RAID controller. I don't know anything about StarWind internals, but if you are handling all requests for a target within a single thread, file I/O is, as far as I know, strictly sequential, and thus there is no benefit. The readings are too close together for a valid conclusion anyway. I also wonder why the results with buffering are so much worse: if I split the test up by request size, requests up to 32K are sped up, while larger requests are slowed down when buffering is used.

RAID-5 is not the best mode if you need good performance; RAID-1+0 would show better throughput and redundancy. Also, I'd like to point out that the total performance depends on the RAID's stripe size and on the sizes of your test workload packets. You are right that we send R/W requests to the device from a single thread. But the file is opened as overlapped, and the Async mode allows multiple pending requests to the file. NTFS should definitely benefit from overlapped I/O to a file placed on a RAID-x partition. At least it does with SCSI disk arrays...
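A minimal sketch of that overlapped pattern with plain Win32 calls; the queue depth, block size, and file name are illustrative only, not StarWind's actual values:

Code:
/* Sketch: one thread can keep a RAID set busy when the file is opened
 * overlapped -- several reads are posted before the first completes. */
#include <windows.h>
#include <stdio.h>

#define PENDING 4               /* illustrative queue depth */
#define BLOCK   (64 * 1024)     /* illustrative request size */

int main(void)
{
    HANDLE h = CreateFileA("disk0.img", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    static char buf[PENDING][BLOCK];
    OVERLAPPED ov[PENDING] = {0};

    /* Post all reads first; none of them blocks the thread. */
    for (int i = 0; i < PENDING; i++) {
        ov[i].Offset = i * BLOCK;
        ov[i].hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);
        if (!ReadFile(h, buf[i], BLOCK, NULL, &ov[i]) &&
            GetLastError() != ERROR_IO_PENDING) {
            fprintf(stderr, "ReadFile %d failed: %lu\n", i, GetLastError());
            return 1;
        }
    }

    /* Collect the completions; the disks could work on all of them in parallel. */
    for (int i = 0; i < PENDING; i++) {
        DWORD done;
        GetOverlappedResult(h, &ov[i], &done, TRUE);  /* TRUE = wait */
        CloseHandle(ov[i].hEvent);
    }

    CloseHandle(h);
    return 0;
}

Even though a single thread posts the requests, the RAID controller sees several outstanding I/Os at once and can service them on different spindles.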
Lou wrote:
Sure, but RAID-5 delivers enough performance for our projected applications; the main objective is storage consolidation. I also agree on that, but I cannot tell our applications to use a particular workload size ;-) We'll check this next week with different hardware; we also want to take readings from other iSCSI target software. Our expectation of the benefits of buffering is in serving clients that do heavily localized, mostly random I/O. Buffering should speed these requests up through write-back caching and read-ahead. At the same time, performance on other request types (large sequential or random I/O) must not drop below the unbuffered case.

Ok, please let us know about the results. Yes, I agree that if an application reads and writes a list of disk blocks, it should help if those blocks are cached. But buffering is only good for rather short requests, much smaller than the RAID's stripe unit size, and especially for sequential I/O. When the request size is near or above the stripe unit size, there is no benefit from buffering for either sequential or random I/O. Do your applications open files with local buffering?
You could use some monitoring tool (for example, SysInternals' FileMon) to see whether your applications use buffering and what disk request sizes occur...

Lou wrote:
As our applications are commercially available software, we have no influence on their buffering behaviour. But it is important that the iSCSI target honours SCSI sync-cache requests, to ensure that critical data really is flushed to disk. My idea of target buffering was that if I have a target with a lot of unused memory, I could use it to cache the initiators' requests. Sure, most applications on the initiator side use local buffering, but expensive SAN arrays do target-side caching too, and on 'cheap' iSCSI target servers buffering should give an advantage when writing to RAID-5 arrays.

Any buffering has its pros and cons. If an iSCSI target tries to buffer data that the initiator demands to be non-buffered, the target must be fault-tolerant enough to ensure the data reaches persistent storage.
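In Win32 terms, honouring a SCSI SYNCHRONIZE CACHE command for a buffered image file presumably comes down to flushing before the command is completed. A sketch, with a hypothetical handler name; FlushFileBuffers() is the real Win32 call:

Code:
/* Sketch: honouring SCSI SYNCHRONIZE CACHE on a Windows-hosted target
 * whose backing store is a buffered image file.  The function name is
 * hypothetical; only FlushFileBuffers() is an actual Win32 API. */
#include <windows.h>

BOOL handle_synchronize_cache(HANDLE image_file)
{
    /* Force everything the system file cache still holds for this file
     * down to the device.  Only after this succeeds may the target
     * report GOOD status for the SYNCHRONIZE CACHE command. */
    return FlushFileBuffers(image_file);
}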
Lou wrote:
I really want to know why there is such a performance decrease with buffering activated. This shouldn't happen; at least it didn't happen with our experimental Linux iSCSI target. I guess Linux and Windows use different caching algorithms in their filesystems; maybe it's some thrashing in Windows' cache algorithm. Maybe I'll get one of our experts to look at it.