Page 2 of 7

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Sun Apr 15, 2012 9:53 pm
by ticktok
Tiago Rost wrote: Update: I have installed the RTM version of SCVMM 2012 and the same problem still exists.
Yes, I have the same problem (RTM + Update Rollup 1), but since I haven't got another SAN to test with, I am not sure whether the problem is with SCVMM 2012 or with StarWind.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Sun Apr 15, 2012 10:08 pm
by ticktok
Tiago Rost wrote: Hello again,

I am facing a strange problem when creating a logical LUN using SCVMM 2012. After adding StarWind to SCVMM 2012 RC, when I create a new logical LUN the System Center console disconnects and then gives error 1700: "The Virtual Machine Manager server on the SCVMM server stopped while this job was running. This may have been caused by a system restart." It only happens when creating a LUN, and the LUN does get created on the StarWind server; it just isn't shown in the SCVMM console. Is it because I am using the RC version of System Center?
In order to see it in SCVMM 2012 you need to refresh the Storage Provider. In a clustered configuration, after allocation, refresh the Hyper-V hosts and the cluster so the LUN shows up in the storage available for allocation as Available Storage or Shared Volumes.
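For reference, the same refresh can be scripted with the VMM 2012 PowerShell module (a rough sketch; the provider name here is a placeholder, and cmdlet availability depends on having the VMM console/module installed):

```powershell
# Re-read the SMI-S storage provider so newly created LUNs appear in SCVMM
Import-Module VirtualMachineManager
Get-SCStorageProvider -Name "starwind.example.local" | Read-SCStorageProvider

# In a clustered configuration, also refresh the Hyper-V hosts so the new
# LUN shows up under Available Storage / Shared Volumes for the cluster
Get-SCVMHost | Read-SCVMHost
```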

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Mon Apr 16, 2012 9:19 am
by Tiago Rost
ticktok wrote:
In order to see it in SCVMM 2012 you need to refresh the Storage Provider. In a clustered configuration, after allocation, refresh the Hyper-V hosts and the cluster so the LUN shows up in the storage available for allocation as Available Storage or Shared Volumes.

Did the allocated-space problem get resolved for you? All my LUNs show 0 GB of available space.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Mon Apr 16, 2012 9:29 pm
by ticktok
Tiago Rost wrote:
ticktok wrote:
In order to see it in SCVMM 2012 you need to refresh the Storage Provider. In a clustered configuration, after allocation, refresh the Hyper-V hosts and the cluster so the LUN shows up in the storage available for allocation as Available Storage or Shared Volumes.

Did the allocated-space problem get resolved for you? All my LUNs show 0 GB of available space.
No, it still shows 0 GB. While experimenting I determined that if the target is set up for clustering, the LUNs show 0 GB, but a target set up without clustering enabled shows the correct total and available space of the parent volume on which the target's device is located.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Tue Apr 17, 2012 7:20 am
by Yuriy (staff)
Tiago Rost, thank you for your report.
This is a known issue. We are working with Microsoft to resolve it.
Tiago Rost wrote: I am facing a strange problem when creating a logical LUN using SCVMM 2012. After adding StarWind to SCVMM 2012 RC, when I create a new logical LUN the System Center console disconnects and then gives error 1700: "The Virtual Machine Manager server on the SCVMM server stopped while this job was running. This may have been caused by a system restart." It only happens when creating a LUN, and the LUN does get created on the StarWind server; it just isn't shown in the SCVMM console. Is it because I am using the RC version of System Center?

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Tue Apr 17, 2012 8:37 am
by Yuriy (staff)
Dear Ticktok and Tiago Rost!
Please, see screenshot here: http://www.starwindsoftware.com/tmplink ... eSpace.PNG
It is normal that you see 0 GB of Available Capacity, because we are a SAN (not a NAS). We can only allocate space for a LUN; we can't tell what part of that space is currently used or free, so we assume all of it is used.
ticktok wrote: No, it still shows 0 GB. While experimenting I determined that if the target is set up for clustering, the LUNs show 0 GB, but a target set up without clustering enabled shows the correct total and available space of the parent volume on which the target's device is located.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Wed Apr 18, 2012 12:26 am
by ticktok
Yuriy (staff) wrote:Dear Ticktok and Tiago Rost!
Please, see screenshot here: http://www.starwindsoftware.com/tmplink ... eSpace.PNG
It is normal that you see 0 GB of Available Capacity, because we are a SAN (not a NAS). We can only allocate space for a LUN; we can't tell what part of that space is currently used or free, so we assume all of it is used.
ticktok wrote: No, it still shows 0 GB. While experimenting I determined that if the target is set up for clustering, the LUNs show 0 GB, but a target set up without clustering enabled shows the correct total and available space of the parent volume on which the target's device is located.
Hi Yuri,
Please see attached image that shows the issue we are seeing.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Wed Apr 18, 2012 7:59 am
by Anatoly (staff)
Dear ticktok,

That is exactly what Yuriy was talking about.
Please review the matter again:
It is normal that you see 0 GB of Available Capacity, because we are a SAN (not a NAS). We can only allocate space for a LUN; we can't tell what part of that space is currently used or free, so we assume all of it is used.
Everything is illustrated on the screenshot below (my colleague provided it earlier actually):
http://www.starwindsoftware.com/tmplink ... eSpace.PNG

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Wed Apr 18, 2012 8:08 am
by anton (staff)
I think we'll change this to report "all space" so as not to confuse customers. So in your case it would be 120 GB everywhere. By the end of this week :)

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Thu Apr 19, 2012 5:53 am
by ticktok
Anatoly (staff) wrote:Dear ticktok,

That is exactly what Yuriy was talking about.
Please review the matter again:
It is normal that you see 0 GB of Available Capacity, because we are a SAN (not a NAS). We can only allocate space for a LUN; we can't tell what part of that space is currently used or free, so we assume all of it is used.
Everything is illustrated on the screenshot below (my colleague provided it earlier actually):
http://www.starwindsoftware.com/tmplink ... eSpace.PNG
Perhaps this picture illustrates what I was talking about better:
Here are two targets, both on the same StarWind server. The top one offers a clustered target; the bottom one, designated with -nc at the end of its target name, does not. The unclustered target shows available space on the StarWind server; the clustered version doesn't.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Thu Apr 19, 2012 3:22 pm
by Yuriy (staff)
Dear Ticktok and Tiago Rost!

Please report the version number of your SCVMM.

Thank you!

--
Best regards,
Yuriy.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Fri Apr 20, 2012 1:18 am
by ticktok
Yuriy (staff) wrote:Dear Ticktok and Tiago Rost!

Please report the version number of your SCVMM.

Thank you!

--
Best regards,
Yuriy.
The SCVMM is the RTM version with Update Rollup 1 applied.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Fri Apr 20, 2012 7:29 am
by Yuriy (staff)
It would be good if you could give the exact version number.
ticktok wrote: The SCVMM is the RTM version with Update Rollup 1 applied.

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Sun Apr 22, 2012 12:49 am
by ticktok
Yuriy (staff) wrote: It would be good if you could give the exact version number.
ticktok wrote: The SCVMM is the RTM version with Update Rollup 1 applied.
Version 3.0.6019.0
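For anyone else needing the exact build number, one way to read it is from the VMM service binary itself (the install path below is the typical default for VMM 2012; adjust for your environment):

```powershell
# Report the VMM server's product version from the service executable
# (3.0.6019.0 corresponds to VMM 2012 RTM + Update Rollup 1)
$vmmBin = "C:\Program Files\Microsoft System Center 2012\Virtual Machine Manager\bin\vmmservice.exe"
(Get-Item $vmmBin).VersionInfo.ProductVersion
```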

Re: StarWind SMI-S and integrations with SCVMM 2012

Posted: Fri Apr 27, 2012 2:45 pm
by Yuriy (staff)
Dear Ticktok and Tiago Rost!

We checked issue that you reported.

Please check the field "Device Path (same as pool path)". You will see the 0 GB issue for your pools if you provided an invalid path for the device pool. Please check the drive letter and folder names.

I'm also glad to inform you that you can download the updated StarWind SMI-S Provider, preview version 0.6, here:
http://www.starwindsoftware.com/tmplink ... review.exe
(this is a temporary link; a permanent link will be posted soon)

--
Best regards,
Yuriy.
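Following up on Yuriy's suggestion, a quick way to sanity-check the pool path before (re)registering the provider is to verify it on the StarWind server itself (the path below is a hypothetical example):

```powershell
# Verify that the "Device Path (same as pool path)" actually exists on the
# StarWind server; a typo in the drive letter or a folder name is one way
# to end up with 0 GB pools in SCVMM
$poolPath = "D:\StarWindStorage"   # hypothetical example path
if (Test-Path -Path $poolPath -PathType Container) {
    # Show used/free space on the drive that hosts the pool
    Get-PSDrive -Name $poolPath.Substring(0, 1) |
        Select-Object Name, Used, Free
} else {
    Write-Warning "Pool path '$poolPath' not found; check the drive letter and folder names."
}
```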