iSCSI target setup + cluster size + LAN setup?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

fildenis
Posts: 8
Joined: Sun May 16, 2021 6:41 am

Sun May 16, 2021 9:27 am

Good day!
Sorry, I am using Google Translate.
While studying StarWind I have had several questions; I could not find the answers on the Internet or in the knowledge base.
1. You have two guides on your site:
starwindsoftware.com/resource-library/starwind-virtual-san-for-hyper-v-2-node-hyperconverged-scenario-with-windows-server-2016/
starwindsoftware.com/resource-library/starwind-virtual-san-for-hyper-v-2-node-hyperconverged-scenario-with-hyper-v-server-2016/
Please explain why, for the "Witness" target, in Connecting Targets => step 3 you DO NOT check the "Enable multipath" checkbox, but for the "CSV1" target, in Connecting Targets => step 6, you DO check it?
I have watched video guides on the Internet, and everywhere the checkbox is checked for ALL targets.
Why is the setting for the quorum disk different?
What is the correct setting?

2. Which cluster size is better: 512 or 4096?
----- Hyper-V NTFS -----
----- StarWind *.img -----
----- Hyper-V *.vhdx -----
----- Guest NTFS -----
The NTFS default is 4096.
From the link below it is clear that VHDX also uses 4096:
docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/storage-io-performance
The guest virtual machine also uses NTFS with 4096.
If you choose a 512 cluster size for the *.img when creating a StarWind target, you get the following:
----- Hyper-V NTFS (4096) -----
----- StarWind *.img (512) -----
----- Hyper-V *.vhdx (4096) -----
----- Guest NTFS (4096) -----
It seems to me that it is better to choose 4096 when creating a StarWind target; then you get the following scheme:
----- Hyper-V NTFS (4096) -----
----- StarWind *.img (4096) -----
----- Hyper-V *.vhdx (4096) -----
----- Guest NTFS (4096) -----
The cluster size is then the same, 4096, at every layer.
Which cluster size is the best choice, and in which scenarios does 4096 cause incompatibility?
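For reference, this is roughly how the resulting sizes at each layer can be checked from PowerShell (just a sketch; the drive letter is a placeholder for my lab):

Code:
# Logical/physical sector size reported by each disk, including the StarWind HA device
Get-Disk | Format-Table Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize

# NTFS allocation unit (cluster) size of a mounted volume
Get-Volume -DriveLetter C | Select-Object DriveLetter, FileSystemType, AllocationUnitSize

# Sector size details as seen by the file system
fsutil fsinfo sectorinfo C: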

3. Not everyone has the opportunity to install 8 network interfaces in a server (5 for StarWind with redundancy + 3 for a Hyper-V cluster).
My HP ML370 G6 server has one network card (4 x 1GbE).
Here is my version of the traffic distribution (without redundancy):
Port 1 - Client access to guest virtual machines (production network) + Hyper-V management + StarWind management + Hyper-V heartbeat + Hyper-V sync (fallback)
Port 2 - Hyper-V heartbeat + Hyper-V sync (main) + iSCSI traffic
Port 3 - StarWind sync
Port 4 - StarWind heartbeat
It turns out that only the second port can be loaded by both live migration and iSCSI traffic (1GbE may not be enough).
Is this the correct option?

yaroslav (staff)
Staff
Posts: 2340
Joined: Mon Nov 18, 2019 11:11 am

Mon May 17, 2021 3:28 am

Welcome to StarWind Forum.
1. Multipath is to be enabled for the Witness HA, but it is to be connected only over 127.0.0.1 (see the sketch below).
2. For Windows, a 4096 block size is better, but a 512 B device will also be available to the system. I'd rather use a 4096 size for a target presented to Windows, while only 512 is to be used for targets presented to ESXi.
3. Mixing traffic as you do for Ports 1 and 2 is not recommended. Use individual NICs for iSCSI and Sync. Make sure to have 2x NICs as described here: https://www.starwindsoftware.com/system-requirements. One port can be used for iSCSI/Heartbeat/Live migration alone, while the other ports are to be used for Sync and Management/Heartbeat/Live migration respectively. Also, we do not recommend teaming iSCSI and Sync links.
Finally, we recommend direct connections when it's possible. Please consider plugging Sync and iSCSI NICs directly.
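Just for illustration, the same connections can also be scripted with the built-in Windows iSCSI cmdlets instead of the GUI. A rough sketch only - the IQNs and the 172.16.10.x addresses below are placeholders, not values from the guide:

Code:
# Discovery portals: loopback for the local node, the partner's iSCSI IP for the remote node
New-IscsiTargetPortal -TargetPortalAddress "127.0.0.1"
New-IscsiTargetPortal -TargetPortalAddress "172.16.10.2" -InitiatorPortalAddress "172.16.10.1"

# Witness: connected over 127.0.0.1 only, with multipath enabled
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-witness" `
    -TargetPortalAddress "127.0.0.1" -InitiatorPortalAddress "127.0.0.1" `
    -IsPersistent $true -IsMultipathEnabled $true

# CSV1: connected over loopback AND over the partner iSCSI link (two paths handled by MPIO)
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-csv1" `
    -TargetPortalAddress "127.0.0.1" -InitiatorPortalAddress "127.0.0.1" `
    -IsPersistent $true -IsMultipathEnabled $true
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node2-csv1" `
    -TargetPortalAddress "172.16.10.2" -InitiatorPortalAddress "172.16.10.1" `
    -IsPersistent $true -IsMultipathEnabled $true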

Let me know if you have additional questions.
fildenis
Posts: 8
Joined: Sun May 16, 2021 6:41 am

Fri May 21, 2021 6:23 am

Good day! Thanks for your reply.
I agree with you: it is better to separate the Hyper-V iSCSI traffic.
I spent several days studying the articles, trying to figure out what could be improved.
My initial idea was to install two identical 4-port NICs in each server and configure teaming with Hyper-V:
node1 and node2 (the same teams on each node):
NIC1 port1 + NIC2 port1 => Team1
NIC1 port2 + NIC2 port2 => Team2
NIC1 port3 + NIC2 port3 => Team3
NIC1 port4 + NIC2 port4 => Team4
If not only a single port but either of the two network cards fails completely, the system will remain operational: redundancy is provided by the duplicated physical ports of the two network cards.
The link https://www.starwindsoftware.com/best-p ... practices/, in the paragraph "Teaming and Multipathing best practices", says: "StarWind Virtual SAN does not support any form of NIC teaming for resiliency or throughput aggregation."
But from the link https://www.starwindsoftware.com/blog/w ... tionality/ it became clear to me that teaming by means of Windows is still supported.
(Q1) Does StarWind support redundancy (Windows teaming) across two 4-port 1/2.5/5/10GbE NICs?

Why am I considering the 4-port option? I currently work with HP servers, and previously I worked with 1U SuperMicro servers. Usually a 1U SuperMicro has only 2 onboard network interfaces + 1 PCIe slot, where you can install only one network card (2 or 4 ports). If you install an additional network card, the SuperMicro server ends up with 4 (2 + 2) or at most 6 (2 + 4) network ports.
******

I came up with the following scheme: one 4-port card (the HP server's 4 x 1GbE) plus one additional 10GbE card (for example, a TP-Link TX401). (Why this particular TP-Link model? First, the price; second, a good heatsink; third, the official website has drivers for 32- and 64-bit Windows 7, 8, 8.1, 10 and Windows Server 2008, 2012, 2016, 2019, and for Linux kernel > 3.10.)

A two-node Hyper-V cluster without external storage.
Node1 and Node2 (physical connection + network traffic distribution)

Node1 Node2
NIC1 port1 (1GbE) ==== switch ==== NIC1 port1 (1GbE) - Client access to guest VMs (production network) + Hyper-V management + StarWind management + Hyper-V heartbeat + Hyper-V CSV sync
NIC1 port2 (1GbE) =============== NIC1 port2 (1GbE) - Hyper-V heartbeat + Hyper-V CSV sync + Hyper-V live migration (fallback)
NIC1 port3 (1GbE) =============== NIC1 port3 (1GbE) - StarWind sync + StarWind heartbeat (main)
NIC1 port4 (1GbE) =============== NIC1 port4 (1GbE) - StarWind heartbeat (additional 1)
NIC2 port1 (10GbE) ============== NIC2 port1 (10GbE) - Hyper-V iSCSI traffic + Hyper-V live migration (main) + Hyper-V heartbeat + Hyper-V CSV sync + StarWind sync + StarWind heartbeat (additional 2)

The first NIC1 port on both nodes is connected to the switch, since client access to the guest virtual machines is needed. All other ports are connected directly, node to node.

A clarification from me: in "Hyper-V Failover Cluster Manager", the network settings allow you to prioritize networks for live migration. That is what I mean by "Hyper-V live migration (main + fallback)":
Main - high priority
Fallback - low priority
Typically live migration does not overlap in time with iSCSI traffic: iSCSI traffic is heavy during the initial startup of the guest virtual machines, while live migration can be done on weekends (for example, for host maintenance). That is why I combined these two kinds of traffic on one 10GbE interface.
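As far as I can tell, the same priority order can also be set from PowerShell rather than through Failover Cluster Manager. A rough sketch (the cluster network names "LM-10GbE" and "LM-1GbE-Fallback" are just placeholders for my lab):

Code:
# List the cluster networks with their IDs and roles
Get-ClusterNetwork | Format-Table Name, ID, Role

# Put the 10GbE network first and the 1GbE network second in the live migration order
$order = (Get-ClusterNetwork "LM-10GbE").ID + ";" + (Get-ClusterNetwork "LM-1GbE-Fallback").ID
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationNetworkOrder -Value $order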

(Q2) Is the option I suggested above, with two network cards (4 x 1GbE + 1 x 10GbE), correct?

I think another 1 x 10GbE card could be added to the scheme to completely separate the Hyper-V live migration traffic; then the scheme would be quite good.
******
I also have several questions about how the StarWind cluster network operates.
When configuring a StarWind cluster per https://www.starwindsoftware.com/resour ... rver-2016/, in the "Heartbeat" section, step 5 of the network configuration selects the interfaces for "Sync + Heartbeat" and "Heartbeat".
The additional "Heartbeat" is configured so that a split-brain does not occur if the "Sync + Heartbeat" interface fails, as described at https://www.starwindsoftware.com/blog/w ... -avoid-it/
(Q3) If multiple interfaces were selected in the "Specify the interfaces for Synchronization and Heartbeat Channels" setting, how does StarWind choose which interface synchronization and heartbeat will go over?
(Q4) Do you plan to remove "Heartbeat" in future versions, replacing it with "Sync + Heartbeat"? For example, a Hyper-V cluster does not have a dedicated "Heartbeat".
(Q5) If only "Heartbeat" stops working, will the heartbeat be transmitted via "Sync + Heartbeat"?
(Q6) Is it possible in StarWind to manually (forcibly) assign priorities to the synchronization and heartbeat interfaces, so that if the primary (high-priority) interface fails, traffic switches to the backup (low-priority) interface?
(Q7) Do you plan to add priorities for choosing interfaces for synchronization and heartbeat in future versions of StarWind?

Now let's imagine the following situation:
Guest virtual machines are evenly distributed across the two nodes of the cluster (node1 and node2). There is only one "Sync + Heartbeat" link between the nodes, without redundancy.
(Q8) How will StarWind prioritize the nodes (high/low) if "Sync + Heartbeat" stops working (only "Heartbeat" remains)?
(Q9) What algorithm will StarWind use to decide which of the two nodes becomes unsynchronized?
(Q10) What will be the consequences for the guest virtual machines located on node1 and node2 if, for example, node2 gets low priority and goes into unsynchronized mode?
(Q11) If ALL guest VMs are on node1 only, is node2 guaranteed to get low priority and go into unsynchronized mode?
(Q12) I know it is possible to configure a witness to avoid this situation, but what if the witness was not configured?
******

I also wanted to suggest a couple of corrections:
1. Fix the guide (check the "Enable multipath" checkbox for the "Witness" target) at starwindsoftware.com/resource-library/starwind-virtual-san-for-hyper-v-2-node-hyperconverged-scenario-with-windows-server-2016/
2. I think that in the configuration wizard, when choosing the cluster size (512 or 4096), it would be worth adding a hint (512 for ESXi, 4096 for Hyper-V).

Excuse me for such long messages; I don't want to offend anyone, maybe it is just out-of-date documentation. Working as a system administrator, I have realized that the more accurate and detailed the instructions are, the fewer errors and questions users have.

I have numbered all the questions to make it easier to answer.

yaroslav (staff)
Staff
Posts: 2340
Joined: Mon Nov 18, 2019 11:11 am

Fri May 21, 2021 8:37 am

(Q1) Does StarWind support redundancy (Windows teaming) across two 4-port 1/2.5/5/10GbE NICs?
No. Teaming adds extra complexity and hence stability risks to the system.
NIC1 port3 (1GbE) =============== NIC1 port3 (1GbE) - StarWind sync + StarWind heartbeat (main)
NIC2 port1 (10GbE) ============== NIC2 port1 (10GbE) - Hyper-V iSCSI traffic + Hyper-V live migration (main) + Hyper-V heartbeat + Hyper-V CSV sync + StarWind sync + StarWind heartbeat (additional 2)
No sync over iSCSI, please. Also, it is better to use the Sync link for the cluster heartbeat rather than the iSCSI link. There are no main/standby options for the sync channel in StarWind VSAN; sync works in round-robin mode. The scheme should look as follows:
NIC1 port1 (1GbE) ==== switch ==== NIC1 port1 (1GbE) - Client access to guest VMs (production network) + Hyper-V management + StarWind management + Hyper-V heartbeat + Hyper-V CSV sync
NIC1 port2 (1GbE) =============== NIC1 port2 (1GbE) - StarWind Sync + Cluster heartbeat
NIC1 port3 (1GbE) =============== NIC1 port3 (1GbE) - StarWind HB + iSCSI
NIC1 port4 (1GbE) =============== NIC1 port4 (1GbE) - StarWind HB + iSCSI
NIC2 port1 (10GbE) ============== NIC2 port1 (10GbE) - StarWind HB - this can carry anything else as well, BUT 1GbE networking and the underlying storage may be the bottlenecks.
(Q3) If multiple interfaces were selected in the "Specify the interfaces for Synchronization and Heartbeat Channels" setting, how does StarWind choose which interface synchronization and heartbeat will go over?
The sync channel consists of 3 channels: DATA (synchronization goes here), heartbeat (yes, there is a heartbeat channel over sync), and control (VSAN internal commands). Heartbeat over Sync does not mean that Sync can be used for iSCSI or management, though.
(Q4) Do you plan to remove "Heartbeat" in future versions, replacing it with "Sync + Heartbeat"? For example, a Hyper-V cluster does not have a dedicated "Heartbeat".
Most probably not: VSAN needs dedicated links to sync and transfer data.
(Q5) If only "Heartbeat" stops working, will the heartbeat be transmitted via "Sync + Heartbeat"?
Yup. If that Heartbeat was used for iSCSI as well, the partner will not be accessible over iSCSI.
(Q6) Is it possible in StarWind to manually (forcibly) assign priorities to the synchronization and heartbeat interfaces, so that if the primary (high-priority) interface fails, traffic switches to the backup (low-priority) interface?
No, VSAN uses round-robin for sync.
(Q7) Do you plan to add priorities for choosing interfaces for synchronization and heartbeat in future versions of StarWind?
That's a great idea, but I do not think we will be mixing traffic.
(Q8) How will StarWind prioritize the nodes (high/low) if "Sync + Heartbeat" stops working (only "Heartbeat" remains)?
(Q9) What algorithm will StarWind use to decide which of the two nodes becomes unsynchronized?
That's where priorities come into play. See more on HA priorities at https://forums.starwindsoftware.com/vie ... f=5&t=5421. This is true only for the Heartbeat failover strategy.
(Q10) What will be the consequences for the guest virtual machines located on node1 and node2 if, for example, node2 gets low priority and goes into unsynchronized mode?
The HA device goes out of sync and the CSV moves to the partner node. The VMs should continue running, or get a hiccup and be restarted.
(Q11) If ALL guest VMs are on node1 only, is node2 guaranteed to get low priority and go into unsynchronized mode?
Yes, if node 2 has the 2nd priority.
(Q12) I know it is possible to configure a witness to avoid this situation, but what if the witness was not configured?
The outcome depends on the number of nodes you have. See more about the witness here: https://docs.microsoft.com/en-us/window ... and-quorum.
The witness forms the majority, preventing the "live" node from going crazy in case of the partner's failure.
Excuse me for such long messages; I don't want to offend anyone, maybe it is just out-of-date documentation. Working as a system administrator, I have realized that the more accurate and detailed the instructions are, the fewer errors and questions users have.
All good, I do not mind long reads here :). And those are really good points. I will ask my colleagues to address them.

Thanks for all your time and effort. Let me know if you have more questions.
fildenis
Posts: 8
Joined: Sun May 16, 2021 6:41 am

Tue Jun 29, 2021 2:37 pm

Good day!
I should have been more attentive; my questions were already answered in the guides.
I got a little confused about the witness disk (StarWind node majority vs. a Hyper-V cluster with a witness disk); that question was misguided.
A lot of time has passed since my last post; I have been experimenting and reading the forum.
For simplicity, let's assume that all NICs (whether 1 or 10GbE) are single-port; hereafter SW = StarWind, H-V = Hyper-V.

Scheme 1 (minimal working), 2 NICs required:
node1 node2
NIC1 === switch1 === NIC1 - Client access to guest VMs (production network) + SW (Management + Heartbeat) + H-V (Heartbeat + CSV Sync + Management + Live migration) + iSCSI (traffic)
NIC2 ============== NIC2 - SW (Sync + Heartbeat) ONLY

Scheme 2 (with traffic separated by port, without fault tolerance), 4 NICs required:
node1 node2
NIC1 === switch1 === NIC1 - Client access to guest VMs (production network) + H-V (Management) + SW (Management) + H-V (Heartbeat + CSV Sync)
NIC2 ============== NIC2 - iSCSI (traffic) + H-V (Heartbeat + CSV Sync)
NIC3 ============== NIC3 - SW (Sync + Heartbeat) ONLY
NIC4 ============== NIC4 - SW (Heartbeat) + H-V (Live migration)

Scheme 3 (with traffic separated by port and with fault tolerance), 8 NICs required (a quick PowerShell path check follows the scheme).
I created a virtual test cluster of two Windows Server 2019 machines according to the following scheme:
node1 (s-3.test.loc)                 node2 (s-4.test.loc)
NIC1 (192.168.21.31) === switch1 === NIC1 (192.168.21.32) - Client access to guest VMs (production network) + H-V (Management) + SW (Management) + H-V (Heartbeat + CSV Sync)
NIC2 (192.168.22.31) ============== NIC2 (192.168.22.32) - iSCSI (traffic) 1 + H-V (Heartbeat + CSV Sync)
NIC3 (192.168.23.31) ============== NIC3 (192.168.23.32) - SW (Sync + Heartbeat) 1 ONLY
NIC4 (192.168.24.31) ============== NIC4 (192.168.24.32) - SW (Heartbeat) + H-V (Live migration) 1
NIC5 (192.168.25.31) === switch1 === NIC5 (192.168.25.32) - Client access to guest VMs (production network) + H-V (Management) + SW (Management) + H-V (Heartbeat + CSV Sync)
NIC6 (192.168.26.31) ============== NIC6 (192.168.26.32) - iSCSI (traffic) 2 + H-V (Heartbeat + CSV Sync)
NIC7 (192.168.27.31) ============== NIC7 (192.168.27.32) - SW (Sync + Heartbeat) 2 ONLY
NIC8 (192.168.28.31) ============== NIC8 (192.168.28.32) - SW (Heartbeat) + H-V (Live migration) 2
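To check which paths are actually used against this scheme, I believe something like this can be run on each node (a sketch; nothing here is StarWind-specific):

Code:
# Which initiator IP talks to which target portal, per iSCSI connection
Get-IscsiConnection | Format-Table InitiatorAddress, TargetAddress

# Sessions: target IQN, the initiator portal they use, and whether they are persistent
Get-IscsiSession | Format-Table TargetNodeAddress, InitiatorPortalAddress, IsConnected, IsPersistent

# MPIO view of the disks and their paths/load-balance policy
mpclaim -s -d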

I did everything according to the guide, but in the section "Provisioning StarWind HA Storage to Windows Server Hosts" I ran into difficulties (please look at the attached files in numbered order).
I understand the reason for this error, but why are there no other network interfaces in the selection list?
Attachments: 1_lan_sync.png, 2_csv1+witness.png, 3_iscsi_portal_s-3.png
fildenis
Posts: 8
Joined: Sun May 16, 2021 6:41 am

Tue Jun 29, 2021 2:39 pm

Attachments: 4_iscsi_portal_s-4.png, 5_iscsi__local_target_s-3.png, 6_iscsi__remote_target_s-3.png
fildenis
Posts: 8
Joined: Sun May 16, 2021 6:41 am

Tue Jun 29, 2021 2:45 pm

Attachments: 7.png, 8_MPIO_ON_s-3.png
Where am I wrong?

I have another test cluster of two Hyper-V Server 2019 hosts (I used a standard Windows Server iSCSI target), built according to the following scheme:

node1 (h-1.test.loc)    iSCSI target (s-2.test.loc)    node2 (h-2.test.loc)
192.168.21.21/24--------192.168.21.2/24---------192.168.21.22/24 Client access to guest VMs (production network) + H-V (Heartbeat + CSV Sync) + H-V (Live migration) 2
192.168.22.21/24--------192.168.22.2/24---------192.168.22.22/24 iSCSI 1
192.168.23.21/24--------192.168.23.2/24---------192.168.23.22/24 iSCSI 2 + H-V (Heartbeat + CSV Sync) + H-V (Live migration) 1

In this cluster, when connecting, I can choose network interfaces.
Attachment: 9_another_cluster.png
And one more thing: while studying the forum, I found a lot of recurring questions about setting up networks. I paid more attention to topics about Hyper-V, since I use Hyper-V.
I think it would be useful to create a tool - an optimizer, a network calculator. If the program recommended a traffic distribution based on the user's input data, it would make life easier for many.
yaroslav (staff)
Staff
Posts: 2340
Joined: Mon Nov 18, 2019 11:11 am

Tue Jun 29, 2021 3:58 pm

Hi,
I understand the reason for this error, but why are there no other network interfaces in the selection list?
Were those links grayed out?
Please use an iSCSI IP address, not a DNS name.
In Target portals, we have the IPs that are listed on the Discovery tab. To allow 2 or more iSCSI connections, please set the iSCSIDiscoveryInterfaces value to 1 as described here: https://www.starwindsoftware.com/resour ... rver-2016/. Sometimes I notice a bug in the iSCSI Initiator GUI: you cannot add the second IP. The solution is to remove the only available interface from discovery, connect everything over the "missing" iSCSI link, and then add the "preferred" iSCSI interface back.
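If the GUI keeps refusing the second portal, a possible workaround (just a sketch; the 192.168.26.x addresses are taken from your scheme, the IQN is a placeholder) is to add the discovery portal and the extra path from PowerShell:

Code:
# Add a discovery portal on the second iSCSI link, bound to the local initiator IP
New-IscsiTargetPortal -TargetPortalAddress "192.168.26.32" -InitiatorPortalAddress "192.168.26.31"

# Verify that both portals are now listed
Get-IscsiTargetPortal | Format-Table TargetPortalAddress, InitiatorPortalAddress

# Reconnect the partner CSV target over the new path with multipath enabled
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:s-4-csv1" `
    -TargetPortalAddress "192.168.26.32" -InitiatorPortalAddress "192.168.26.31" `
    -IsPersistent $true -IsMultipathEnabled $true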