13:02:19 <alexpilotti> #startmeeting hyper-v
13:02:19 <openstack> Meeting started Wed Dec  2 13:02:19 2015 UTC and is due to finish in 60 minutes.  The chair is alexpilotti. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:22 <openstack> The meeting name has been set to 'hyper_v'
13:02:40 <alexpilotti> morning folks!
13:02:40 <claudiub> o/
13:02:41 <atuvenie> \o
13:02:45 <lpetrut> o/
13:02:45 <abalutoiu> o/
13:02:53 <atuvenie> o/
13:03:04 <atuvenie> #openstack-hyperv
13:03:51 <itoader> o/
13:04:13 <alexpilotti> sagar_nikam: hello, anybody else joining from HP?
13:04:23 <sagar_nikam> Hi
13:04:37 <sagar_nikam> i think there is some confusion on the channel name
13:04:40 <sagar_nikam> we can start
13:04:47 <sagar_nikam> i will mail others to join here
13:04:58 <alexpilotti> sagar_nikam: where are they joining?
13:05:14 <alexpilotti> I wrote on #openstack-hyper-v that it's here
13:05:23 <sagar_nikam> i think we never sent the channel name, so they might have gone to openstack-hyper-v
13:05:29 <sagar_nikam> ok
13:06:28 <primeministerp> hey guys
13:07:02 <primeministerp> Channel name should be upstream in the meeting info
13:07:08 <alexpilotti> the patch for the meeting merged and we updated the info in wiki
13:07:14 <primeministerp> *nod*
13:07:57 <alexpilotti> sagar_nikam: I don't see anybody else in the #openstack-hyper-v meeting, should we start?
13:07:59 <primeministerp> alexpilotti: do we have quorum yet?
13:08:16 <sagar_nikam> alexpilotti: yes we can start
13:09:25 <alexpilotti> #topic FC development status
13:10:18 <alexpilotti> lpetrut: as discussed we started with the pass-through scenario, leaving the vsan for later
13:10:35 <alexpilotti> things are progressing well, except one blocker
13:10:37 <sagar_nikam> yes, that is implemented in the same way in libvirt
13:10:46 <alexpilotti> sagar_nikam: exactly
13:11:02 <sagar_nikam> what is the blocker ?
13:11:57 <alexpilotti> we need to map the FC mounted volume on the host to an entry in the MSVM_DiskDrive instance to be assigned to the VM's setting data host resource
13:12:26 <alexpilotti> in other words, we need to get from the FC host config the local disk number
13:12:37 <lpetrut> basically, we get the volumes attached to the host, but I need to find a reliable way to identify them, to know which one to attach
13:13:09 <sagar_nikam> cinder provides wwpn?
13:13:14 <lpetrut> yes
13:13:22 <alexpilotti> in the iSCSI case we obtain that from the storage API and from there we map it to a MSVM_DiskDrive, but there's no direct equivalent for FC
13:13:30 <sagar_nikam> can't we use that... since that is unique and what needs to be attached
13:13:55 <sagar_nikam> ok, this is more to do with WMI or MI
13:14:00 <sagar_nikam> and not nova and cinder
13:14:19 <lpetrut> yes, I can get the FC mapping for a specific wwpn, but that won't include information such as disk number or path
13:14:26 <alexpilotti> sagar_nikam: it's only a Windows API thing (WMI, Win32, etc)
13:14:40 <sagar_nikam> ok
13:14:49 <lpetrut> for the record, this is what the connection info provided by Cinder looks like: http://paste.openstack.org/show/480624/
13:15:58 <sagar_nikam> target_lun=2, is that not the disk id ?
13:16:29 <lpetrut> no
13:16:42 <lpetrut> we need the one seen by the OS
13:16:43 <sagar_nikam> ok
13:17:01 <lpetrut> you can have multiple disks with the same Lun, based on how those are being exported
13:19:01 <sagar_nikam> RDM class ?
13:19:07 <sagar_nikam> WMI RDM class
13:19:07 <alexpilotti> in short, we're going to spend an extra day on this and then escalate to the storage team in MSFT if an API can't be found
13:19:40 <sagar_nikam> this is raw disk, so we may need to use the RDM class
13:19:43 <alexpilotti> the rest of the code is already done, except live migration and resize
13:20:03 <sagar_nikam> however the MSFT server team is the right team to answer
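The open question above — mapping a Cinder FC volume to the disk number the OS assigned — could plausibly be approached by matching disk serial numbers, since the Windows Storage WMI provider's MSFT_Disk class exposes both Number and SerialNumber. The helper below is a hypothetical sketch over plain dicts (the function name and data shapes are assumptions, not the eventual os-win code):

```python
# Hypothetical sketch: identify which local disk corresponds to an
# FC-attached volume by matching serial numbers. `disks` mimics a
# list of MSFT_Disk instances (Number, SerialNumber) queried from the
# root/microsoft/windows/storage WMI namespace on the Hyper-V host.

def find_local_disk_number(disks, volume_serial):
    """Return the OS-assigned disk number matching the volume serial,
    or None if the disk is not (yet) visible on the host."""
    for disk in disks:
        if disk["SerialNumber"].strip().lower() == volume_serial.lower():
            return disk["Number"]
    return None  # caller may rescan the bus and retry


# Example with fabricated data:
disks = [
    {"Number": 0, "SerialNumber": "LOCAL0"},
    {"Number": 2, "SerialNumber": "600508B1AA"},
]
print(find_local_disk_number(disks, "600508b1aa"))  # -> 2
```

A retry loop around the None case would cover the window between the SAN presenting the LUN and the host enumerating it.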
13:20:19 <sagar_nikam> ok, one question on live migration
13:20:44 <sagar_nikam> wll the volume be attached after the live migration is done ?
13:20:57 <alexpilotti> sagar_nikam: we'll do like for iSCSI
13:20:57 <primeministerp> alexpilotti: we should probably just engage the storage team now
13:21:09 <sagar_nikam> there should be a detach from source host and attach on destination host
13:21:44 <alexpilotti> there's a pre_live_migration call on the target's nova driver instance
13:21:58 <alexpilotti> that takes care of ensuring that the storage resources are in place
13:22:08 <sagar_nikam> FC may be slightly different, since the FC volume needs to be presented to the new host
13:22:23 <sagar_nikam> ok, then it is fine
13:22:24 <sagar_nikam> makes sense
13:22:53 <alexpilotti> since we're at it, we also check the local disk number as it can differ from the source
13:23:16 <sagar_nikam> agree
13:23:19 <alexpilotti> e.g. LUN 3 on SAN x can be attached to local disk 1 on source and local disk 3 on target
13:23:43 <alexpilotti> so we ensure that we don't attach somebody else's disk after live migration :-D
13:24:10 <alexpilotti> the same BTW happens on host restart, as disk IDs can be reshuffled depending on attach order
13:24:15 <sagar_nikam> that care needs to be taken for all volumes that are attached to the instance
13:24:32 <sagar_nikam> since an instance can have multiple FC volumes attached
13:24:36 <alexpilotti> yeah, that's not a FC specific thing and it's already in place for iSCSI
13:24:42 <sagar_nikam> ok
13:25:00 <alexpilotti> SMB3 does not need that as the host resource is the target SMB path, which clearly doesn't change
13:25:14 <alexpilotti> only passthrough has this limitation
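The pre_live_migration hook mentioned above is part of Nova's virt driver interface and runs on the target host before the guest moves, which is where the log says storage resources are put in place and local disk numbers re-resolved. The classes below are a hypothetical sketch (VolumeOpsSketch and HyperVDriverSketch are illustrative names, not the real compute-hyperv code):

```python
# Illustrative sketch of the target-side live-migration hook described
# in the discussion: before the memory/state transfer starts, make
# sure every volume is connected on *this* host, since its local disk
# number may differ from the one it had on the source.

class VolumeOpsSketch:
    """Stand-in for the driver's volume-ops helper."""

    def connect_volumes(self, block_device_info):
        # A real implementation would log in to each target and
        # re-resolve the local disk number per volume.
        self.connected = block_device_info


class HyperVDriverSketch:
    def __init__(self, volumeops):
        self._volumeops = volumeops

    def pre_live_migration(self, context, instance, block_device_info,
                           network_info, disk_info, migrate_data=None):
        # Invoked by Nova on the destination host's driver instance.
        self._volumeops.connect_volumes(block_device_info)


ops = VolumeOpsSketch()
driver = HyperVDriverSketch(ops)
driver.pre_live_migration(None, "instance-0001",
                          {"block_device_mapping": []}, [], None)
```

The same re-resolution step would apply after a host restart, for the attach-order reshuffling noted above.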
13:25:22 <sagar_nikam> for the tenant user, how does the volume become available, he has to use disk mgmt to mount it ?
13:25:34 <sagar_nikam> like the way we do in iSCSI
13:26:10 <alexpilotti> it's the same: the guest OS sees a new disk and decides what to do with it
13:26:18 <sagar_nikam> ok
13:26:40 <alexpilotti> there's a setting on Windows, for example, that determines if the disk needs to be attached
13:27:40 <alexpilotti> then the guest needs to format it, etc
13:27:46 <alexpilotti> same on linux
13:27:49 <sagar_nikam> ok
13:27:59 <alexpilotti> you'll see it as /dev/sdb /dev/sdc, etc
13:29:20 <alexpilotti> any other questions on the FC topic?
13:29:30 <sagar_nikam> no
13:29:37 <alexpilotti> cool
13:29:46 <sagar_nikam> actually one more
13:29:46 <alexpilotti> #topic PyMI
13:29:49 <sagar_nikam> missed it
13:29:58 <alexpilotti> d'oh, just changed topic, please go on :)
13:30:22 <sagar_nikam> i hope the cluster driver implemented as part of this BP https://blueprints.launchpad.net/nova/+spec/hyper-v-cluster
13:30:32 <sagar_nikam> will also support FC volumes
13:30:40 <alexpilotti> that's transparent
13:31:02 <sagar_nikam> since MS Cluster can move the VM
13:31:03 <alexpilotti> iSCSI, FC and other volume types implement a set of decoupled interfaces
13:31:13 <sagar_nikam> to another host, out of band from nova
13:31:30 <sagar_nikam> wanted to be sure that will also get implemented
13:32:13 <sagar_nikam> just a note, for us to be sure it works there as well
13:32:20 <alexpilotti> sure
13:32:22 <sagar_nikam> we can now move to next topic
13:32:28 <sagar_nikam> thanks
13:32:43 <alexpilotti> every feature that works on standalone will work on cluster
13:32:51 <alexpilotti> as a general rule
13:32:56 <sagar_nikam> ok cool... that's nice
13:33:03 <alexpilotti> ok, moving to current topic (pyMI)
13:33:13 <alexpilotti> Added event support as well
13:33:46 <alexpilotti> including async API calls
13:33:59 <alexpilotti> this means that the project is feature complete for replacing the WMI module
13:34:07 <alexpilotti> we're now running it at scale
13:34:22 <sagar_nikam> ok good
13:34:22 <alexpilotti> checking for stability issues
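PyMI's actual event API is not shown in the log; as a generic illustration of the async-notification pattern that replaces the old polling-style WMI watchers, here is a minimal, self-contained sketch (EventWatcher is a toy stand-in, not PyMI code):

```python
# Toy illustration of async event delivery: a native layer would
# invoke a callback on its own thread when a WMI indication arrives;
# the consumer blocks on the watcher until the next event is ready.

import queue
import threading


class EventWatcher:
    """Stand-in for an async WMI-style event subscription."""

    def __init__(self):
        self._events = queue.Queue()

    def _callback(self, event):
        # In a real binding this runs on the notification thread.
        self._events.put(event)

    def emit(self, event):
        threading.Thread(target=self._callback, args=(event,)).start()

    def __call__(self, timeout=None):
        # Block until the next event; raises queue.Empty on timeout.
        return self._events.get(timeout=timeout)


watcher = EventWatcher()
watcher.emit({"ElementName": "instance-0001", "EnabledState": 2})
print(watcher(timeout=2))  # the event dict emitted above
```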
13:34:52 <alexpilotti> did you manage to get somebody in your team to do tests on it?
13:35:05 <alexpilotti> sagar_nikam: ^
13:35:10 <sagar_nikam> for using pyMI, requirements.txt needs to be updated, when is it planned ?
13:35:17 <alexpilotti> even on older OpenStack releases (Juno, etc)
13:35:43 <sagar_nikam> not yet, hopefully soon, we have some questions, will mail you
13:36:24 <alexpilotti> sagar_nikam: Nova does not have wmi in the requirements
13:36:42 <sagar_nikam> ok
13:36:49 <sagar_nikam> pywin32 ?
13:36:55 <alexpilotti> it's a Windows-specific one that we never added, as platform-specific support in pip is very recent
13:37:03 <alexpilotti> same for pywin32
13:37:06 <sagar_nikam> ok
13:37:20 <alexpilotti> we'll probably drop pywin32
13:37:25 <sagar_nikam> so we need to document that users need to use pyMI instead of pywin32
13:37:27 <alexpilotti> as we need it only for wmi
13:37:52 <alexpilotti> yep, as soon as a stable release is available!
13:38:14 <sagar_nikam> ok
13:40:47 <alexpilotti> sagar_nikam: they are in global requirements:
13:40:50 <alexpilotti> https://github.com/openstack/requirements/blob/master/global-requirements.txt#L185
13:41:01 <alexpilotti> and #link https://github.com/openstack/requirements/blob/master/global-requirements.txt#L222
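The "platform specific support in pip" mentioned above refers to environment markers in requirements files, which is how the linked global-requirements entries restrict Windows-only packages. A hypothetical excerpt in that style (package names and pins are placeholders, not the actual entries):

```
# Windows-only dependencies, guarded by a PEP 508 environment marker
# so they are skipped on Linux installs:
wmi;sys_platform=='win32'
pywin32;sys_platform=='win32'
```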
13:41:44 <sagar_nikam> ok, makes sense since we use PyMI for cinder, neutron etc
13:42:14 <alexpilotti> we'll update all of them anyway as soon as we go "live" with PyMI
13:42:56 <lpetrut> Cinder for example will not use PyMI at all, using os-win instead
13:43:22 <sagar_nikam> ok
13:46:00 <alexpilotti> anything else on this topic?
13:46:22 <sagar_nikam> no
13:49:19 <alexpilotti> ok, I have no additional topics for now, as this was a short week (Thanksgiving in the US last Thu and Romanian national holidays this Mon and Tue)
13:49:43 <alexpilotti> #topic open discussion
13:50:04 <sagar_nikam> topic from me
13:50:08 <alexpilotti> cool
13:50:34 <sagar_nikam> if possible, can we have higher priority for https://blueprints.launchpad.net/nova/+spec/hyper-v-cluster
13:50:45 <sagar_nikam> lot of users asking support for MSCluster
13:51:04 <sagar_nikam> hopefully we can get it implemented in Mitaka
13:51:28 <alexpilotti> sagar_nikam: it was implemented already in Liberty on compute-hyperv
13:51:47 <alexpilotti> there's no chance IMO to have it merged in Nova
13:51:47 <sagar_nikam> ok, we can try to get in nova
13:51:58 <sagar_nikam> oh ... why ?
13:52:02 <alexpilotti> as in, it's a huge set of patches
13:52:02 <sagar_nikam> the BP is approved
13:52:18 <sagar_nikam> i saw only one patch
13:52:25 <sagar_nikam> but that needed more work
13:52:48 <sagar_nikam> for volume support as well as nova DB updates
13:53:00 <alexpilotti> we can realistically merge it, given Nova's prioritization for drivers, only if it becomes the main topic for the release
13:53:05 <alexpilotti> and we focus only on that
13:53:13 <sagar_nikam> ok
13:54:04 <sagar_nikam> i think that patch also did not support CSV
13:54:13 <sagar_nikam> or at least stats update for CSV
13:54:40 <sagar_nikam> need to confirm, but in a quick review of the code, I did not find it
13:54:46 <alexpilotti> the main issue back then was that on a clustered volume (CSV or S2D), you have a single large volume
13:54:59 <alexpilotti> so all Nova nodes report the same amount of free space
13:56:20 <alexpilotti> last time we talked about this with Nova at the last mid cycle, jpipes volunteered to add support for a unified cross-nodes resource
13:56:41 <sagar_nikam> ok
13:56:52 <sagar_nikam> any patches on that available ?
13:57:01 <alexpilotti> no, we need to ping him
13:57:11 <sagar_nikam> ok
13:57:15 <alexpilotti> afaik there was no progress on this
13:57:23 <sagar_nikam> ok
13:57:31 <alexpilotti> claudiub will try to ping him today
13:57:40 <sagar_nikam> that would be good
13:57:53 <sagar_nikam> lot of customer demand for MSCluster
13:58:08 <sagar_nikam> hopefully this or next release we can get it implemented
13:59:07 <sagar_nikam> it is almost time to end the meeting, if you have any patches up for review let me know
13:59:17 <sagar_nikam> we can review it
13:59:25 <alexpilotti> sagar_nikam: yes, as usual if your customers want it in the next year or so they need to switch to compute-hyperv
13:59:52 <alexpilotti> at the current rate of Nova drivers patch merge
14:00:12 <sagar_nikam> ok
14:00:26 <alexpilotti> guys, time's up!
14:00:31 <alexpilotti> #endmeeting