13:02:15 #startmeeting hyper-v
13:02:16 Meeting started Wed May 25 13:02:15 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:20 The meeting name has been set to 'hyper_v'
13:02:26 hi all
13:02:28 hello. :)
13:02:33 Hi
13:02:37 o/
13:02:39 hello
13:02:42 Hi everyone
13:02:43 sorry I started a bit late, was pinged about nova-rescue. :)
13:02:48 o/
13:02:56 hi
13:03:30 ok, so some status updates
13:03:39 #topic monasca Windows support
13:04:06 still in progress. I've fixed the disk checker, so now disk metrics are good.
13:04:19 there were a couple of fixes to do there. yay.
13:04:47 claudiub: is the monasca BP approved?
13:04:55 currently looking at the process checker, which seems to be working. it might be dependent on the psutil version.
13:04:56 Hi guys
13:05:12 not yet, for some reason.
13:05:26 hi folks
13:05:36 ok, could you discuss this in the last monasca meeting?
13:05:40 sagar_nikam: I spoke with tpl, he seemed fine with most changes.
13:05:58 alexpilotti: Hi... welcome back... to IRC meetings
13:06:11 yep, I've brought this up at the last monasca meeting
13:06:16 claudiub: was it roland?
13:06:19 yep
13:06:30 let me know if i need to follow up
13:06:38 i can try it
13:06:43 sure. I'll ping him today again.
13:06:45 are you collecting vNIC metrics too on hyper-v?
13:07:40 vNIC metrics are being collected by the hyperv checker, which is still in progress. still have to clean up the refactor.
13:07:58 as for host NIC metrics, they are collected by the network checker.
13:08:06 Yeah, it makes sense
13:08:33 claudiub: i think we need the following metrics to start off: disk, network, cpu and memory
13:08:42 for vnic, do you put the port-id as one of the dimensions?
13:08:47 anyways, some reviews would be nice.
13:08:54 #link https://review.openstack.org/#/q/status:open+project:openstack/monasca-agent+branch:master+topic:bp/add-windows-support
13:10:08 and you would have to revisit this when we have OVS in place on Hyper-V
13:10:10 sagar_nikam: you mean the VM metrics?
13:10:38 host metrics first should be fine, we can then move to VM metrics
13:10:47 sagar_nikam: yeah, also on the list will be disk.iops and disk.latency
13:10:51 currently the vmware driver for monasca does that
13:11:11 sonu: you mean the vNIC metrics, right?
13:11:28 then the OVS plugin will be useful - https://review.openstack.org/#/c/306621/
13:11:34 yes, I meant vNIC metrics
13:12:25 claudiub: how about monasca metrics if the host is in an MSCluster and we are using the nova cluster driver
13:12:38 do we give CSV metrics?
13:12:42 instead of disk
13:13:07 sonu: yeah, I should look at that
13:13:46 sagar_nikam: hm, the disk checker takes all the available partitions and collects metrics
13:14:23 so, if the CSV is mounted on the host and has a drive letter, its metrics should be collected.
13:14:31 claudiub: ok.. should be fine..
13:15:23 but there's a chance the metrics can't be collected. for example, for a Floppy Disk Drive and a CD / DVD Disk Drive, disk metrics can't be collected.
13:15:28 even with a drive letter.
13:16:18 anyways. on my to-do list is to also look at the iis checker
13:16:35 there's some wmi stuff there. :)
13:17:00 claudiub: IIS checker? is it for the web server or something else
13:17:04 yep
13:17:44 ok
13:17:51 #topic nova patches
13:18:11 #link https://etherpad.openstack.org/p/newton-nova-priorities-tracking
13:18:17 i saw one nova patch merged... nice
13:18:22 so, last week something happened, apparently. :)
13:18:26 not just one. :)
13:18:49 since thursday until now, we've had 6-7 patches merged.
13:19:07 and we have 2 more patches with a +2.
13:19:37 which is great, it came as a nice surprise. :)
13:20:05 also, apparently, we'll have to add support for PCI passthrough on Hyper-V.
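As an aside on the disk-checker behaviour discussed above (collect metrics for every mounted partition, but skip drives such as floppy or CD/DVD drives whose metrics can't be read), a minimal standard-library sketch of the idea; the function name and metric names here are illustrative, and the real monasca-agent checker uses psutil rather than shutil:

```python
import os
import shutil
import string

def collect_disk_metrics():
    """Collect usage metrics for every readable drive, skipping
    drives (e.g. floppy or CD/DVD) that raise an error when probed."""
    metrics = {}
    if os.name == "nt":
        # Probe every possible drive letter on Windows.
        roots = ["%s:\\" % letter for letter in string.ascii_uppercase]
    else:
        roots = ["/"]
    for root in roots:
        try:
            usage = shutil.disk_usage(root)  # raises OSError if not ready
        except OSError:
            continue  # drive absent or unreadable: skip, don't fail
        metrics[root] = {
            "disk.total_space_mb": usage.total // (1024 * 1024),
            "disk.used_space_mb": usage.used // (1024 * 1024),
        }
    return metrics

print(collect_disk_metrics())
```

The try/except around the probe is the key point raised in the meeting: a drive with a letter but no readable media simply gets skipped instead of breaking the whole collection run.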
13:20:25 cluster driver not in that etherpad?
13:20:31 it is
13:20:57 ok, found it...
13:21:05 and we'll have to investigate SR-IOV as well.
13:21:09 fun times. :)
13:21:15 is PCI-SRIOV a requirement?
13:21:29 sonu: what do you mean?
13:21:46 I mean which release is this targeted for?
13:22:03 we're targeting Newton.
13:22:21 at the very least, it will land in compute-hyperv, but hopefully it will also land in nova.
13:22:28 Have you started work on this already? or yet to start.
13:22:50 we were interested in contributing this feature, if you are fine with it.
13:23:27 sonu: I've started investigating disk passthrough. I already have an env for that. For the rest, I'll need another env. :)
13:24:20 sure, any help is welcome. :)
13:24:36 I was aiming for VM NICs as PCI and SR-IOV passthrough.
13:24:38 but it will have to be done by June 25, if it is to land in nova.
13:25:04 so exactly one month from now
13:25:10 sonu: afaik, NIC passthrough is available only on Windows Hyper-V Server 2016.
13:25:15 domi007: yep.
13:25:27 sonu: for the rest, there's only SR-IOV
13:26:06 so you mean SR-IOV has to be done by June 25
13:26:11 yep
13:26:37 anyways, moving on.
13:26:41 I shall connect with you offline on this topic, thanks.
13:26:41 #topic os-brick status
13:26:52 sonu: cool. :)
13:27:18 so, there are 2 patches that need to land in os-brick:
13:27:21 #link https://review.openstack.org/#/c/312999/
13:27:26 this already has a +2, which is nice.
13:28:00 #link https://review.openstack.org/#/c/272522
13:28:15 this one doesn't have any +2s :(
13:28:59 really hope everything gets in asap.
13:29:10 so we can move on with the nova fibre channel patches.
13:29:30 agree
13:30:04 anybody from hpe that i need to request for review on the os-brick patches?
13:30:26 hemna.
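For context on what PCI passthrough support in nova typically involves, a hedged sketch of the Mitaka/Newton-era libvirt-style configuration; the vendor/product IDs and the alias name are placeholders, and the Hyper-V driver discussed above did not yet implement any of this:

```ini
# nova.conf on the compute node: whitelist a device for passthrough
# (placeholder PCI IDs)
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}

# nova.conf on the controller: define an alias the scheduler can match
pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "nic1"}
```

A flavor then requests the device via an extra spec such as `pci_passthrough:alias=nic1:1`; the Hyper-V work would presumably have to plug into the same whitelist/alias model to land in nova proper.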
:)
13:30:44 ok
13:31:04 # topic OVS
13:31:16 #topic OVS
13:31:40 so, there's a new blog post that you might like:
13:31:42 #link https://cloudbase.it/open-vswitch-2-5-hyper-v-part-1/
13:32:04 should be interesting
13:32:26 :) I've seen this while in progress, good stuff, it works great in our env
13:32:59 as for the OVS vif plug driver in nova, we're waiting for os-vif to become a dependency in nova.
13:33:15 domi007: nice, good to know. :)
13:33:32 the only thing I wasn't able to figure out is how to run VLAN and VXLAN networks simultaneously
13:33:45 do you use the OVS firewall driver now with this?
13:34:00 claudiub: some months ago in an IRC meeting, we discussed microsoft certification for OVS, anything happened on that?
13:34:03 atuvenie: ^
13:34:20 hmm, alexpilotti? ^
13:34:26 or should we continue using the WMI (or rather MI) driver
13:34:47 atuvenie told me she is working on trying to make the secgroups MI driver work on Liberty
13:34:52 on Mitaka I heard it works, thanks to sonu
13:35:15 yeah, I think it's because of the enhanced rpc blueprint. :)
13:35:36 yeah
13:35:37 sonu: thanks for the blueprint. :)
13:35:39 it doesn't work for us on Liberty, getting OVS running needed some cherry-picking as well - mainly because the windows OVS driver wasn't complete and working in Liberty
13:35:42 sorry folks, I'm jumping between meetings
13:35:59 sagar_nikam: we started that process a few days ago
13:36:22 alexpilotti: thanks.. let us know how it goes
13:37:26 sagar_nikam: sure, it will take some time to process, I'll keep you guys updated of course!
13:38:07 domi007: as for the vlan and vxlan networks, I haven't tried it yet.
13:38:28 domi007: simultaneously.
13:38:30 claudiub: any plans for DVR?
13:38:32 claudiub: they're supposed to work, I'll try to adapt settings from the KVM hosts to see if it works
13:38:52 DVR is heavy lifting for Hyper-V
13:39:25 domi007: sure, let us know how it goes. :)
13:39:33 DVR works with namespaces on Linux compute.
We must evolve something equivalent on Windows.
13:40:24 sonu: getting the neutron l3-agent to work properly on Windows is going to be tricky. :)
13:41:06 yes, but we should have this on the roadmap, since sooner or later we will need it
13:42:20 indeed. we'll look at it once we finish all our work-in-progress blueprints.
13:42:52 claudiub: sure
13:42:55 any other topic you guys want to discuss?
13:43:00 yes
13:43:09 one question
13:43:19 sure
13:43:33 From the source, I see that we handle the multiple-physnet scenario in Hyper-V.
13:43:47 #topic open discussion
13:43:47 meaning physnet1:br1, physnet2:br2
13:44:03 so all VMs on physnet1 go over the uplink on br1
13:44:19 and on physnet2 over the uplink on br2
13:44:31 this is a classic use case in Linux KVM..
13:44:58 ok
13:45:06 and using the bridge mapping in the hyperv neutron.conf, we can set the physnet and bridge mapping
13:45:17 yep
13:45:28 and I am hoping that this is a supported scenario to test. correct?
13:45:41 yep
13:45:56 we were finding some issues. So wanted your blessing to continue our debugging :)
13:46:11 what issues exactly?
13:46:23 I assume it's networking-hyperv, right?
13:46:36 VMs on one physnet cannot reach out.
13:46:42 yes, it is networking-hyperv.
13:46:53 We will investigate and root-cause the problem tomorrow.
13:47:13 I have two little questions as well: when adding a second SMB backend to cinder, the following error message can be seen in the log: http://paste.openstack.org/show/4FZfv4AbYIWL9nrmo2zH/ probably originating from this call https://github.com/openstack/cinder/blob/stable/liberty/cinder/cmd/volume.py#L86
13:47:28 the second is: have you heard anything from c64cosmin about freerdp?
13:47:31 sonu: do those VMs get an IP?
13:47:37 no, they don't
13:47:43 c64cosmin: hi. :)
13:47:50 hi guys
13:48:06 hello :)
13:48:22 I'm just waiting to get my PR merged :)
13:48:44 domi007: that's a very weird error
13:48:50 c64cosmin: what is the fix for?
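Returning to the multi-physnet question above, a sketch of what that mapping configuration could look like; the option names are recalled from the networking-hyperv agent and the ML2 plugin of that era and should be verified against your release, and br1/br2 and the VLAN ranges are placeholders:

```ini
# Hyper-V agent config (networking-hyperv): physnet to vSwitch mapping,
# the analogue of OVS bridge_mappings
[AGENT]
physical_network_vswitch_mappings = physnet1:br1,physnet2:br2

# ml2_conf.ini on the controller: declare both physnets
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199,physnet2:200:299
```

With this in place, ports on a physnet1 network should be plugged into the br1 vSwitch and physnet2 ports into br2, which is the behaviour sonu was trying to confirm.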
13:49:03 several fixes
13:49:10 claudiub: I agree, it should work just fine with any number of backends, I guess
13:49:39 c64cosmin: once they are merged, can we have an MSI in the stable branch? our QA can pick it up and test
13:50:05 sagar_nikam: totally
13:50:13 domi007: yeah, I would assume so. what backends did you configure?
13:50:33 claudiub: I have the config file here, got it from a colleague, could it be caused by having the same mount point base? http://paste.openstack.org/show/0Hle4B0gZIK0nBZiDDbY/
13:50:38 c64cosmin: let me know when the MSI is ready, we'll pick it up
13:50:47 sagar_nikam, c64cosmin +1
13:51:42 c64cosmin: if a deb is available, we can pick up and test that as well... i know that is in the plans... we can try both the MSI and the deb
13:52:17 sagar_nikam: in your case, the deb might be useful
13:52:22 domi007: how are your tests going with freerdp behind haproxy?
13:52:27 domi007: it might be possible, actually.
13:52:52 c64cosmin: yes, eagerly waiting for the deb... can test as soon as it is available
13:53:03 sagar_nikam: I decided to wait for the new MSI, currently busy with networking and this SMB issue
13:53:09 domi007: do you have a full log by any chance?
13:53:25 sagar_nikam: as I said last week, you and domi007 will be announced directly when it's available :)
13:53:39 claudiub: I'll try to create one and get it to you, thanks
13:53:40 wondering if lpetrut knows more about this.
13:53:47 domi007: i think you should try the deb and haproxy instead of the MSI and haproxy
13:53:48 c64cosmin: really appreciate it
13:53:58 domi007: sure, I'll let him know.
13:54:01 c64cosmin: thanks
13:54:14 sagar_nikam: it doesn't make a difference, I can try the deb as well, we have Ubuntu machines running as controller nodes
13:54:28 thank you
13:55:20 sonu: still here?
13:55:25 alexpilotti: claudiub: how are the tests going with WIN 2016?
13:55:42 i believe we can't run the MSI on it
13:55:56 sagar_nikam: really? why?
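On the SMB multi-backend question above, the suspicion in the discussion was a shared mount point base. A hedged sketch of a cinder.conf giving each backend its own `smbfs_mount_point_base`; the option names follow the Liberty-era SMBFS volume driver and the paths and backend names are placeholders:

```ini
[DEFAULT]
enabled_backends = smb1,smb2

[smb1]
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
volume_backend_name = smb1
smbfs_shares_config = /etc/cinder/smbfs_shares_1
smbfs_mount_point_base = /var/lib/cinder/smb1

[smb2]
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
volume_backend_name = smb2
smbfs_shares_config = /etc/cinder/smbfs_shares_2
smbfs_mount_point_base = /var/lib/cinder/smb2
```

Whether the shared mount point base was actually the cause was left open in the meeting; domi007 was asked for a full log to confirm.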
13:56:21 i remember reading somewhere on the net that it will not be supported
13:56:24 sagar_nikam: we've had a lot of tempest test runs on win 2016 until now.
13:56:33 for each cycle and release.
13:56:44 claudiub: installed on WIN 2016 using the MSI?
13:56:48 yep
13:56:53 ok.
13:57:20 what exactly is the issue?
13:57:25 is there any error, or?
13:57:35 can you provide the installer logs?
13:57:45 claudiub: not tried it.. just read it
13:58:14 claudiub: can you provide me the location from where i can download the latest TP of WIN 2016
13:58:41 https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
13:58:52 thanks
13:58:57 np. :)
13:59:13 claudiub: maybe this: "Nano Server does not include MSI as an installation technology due to dependencies" https://blogs.technet.microsoft.com/windowsserver/2015/11/16/moving-to-nano-server-the-new-deployment-option-in-windows-server-2016/
13:59:26 yes
13:59:30 i meant nano server
13:59:35 of WIN 2016
13:59:46 sorry, i should have been more clear
14:00:03 have you tried tests on nano?
14:00:04 np, my Googling was fruitful anyway :)
14:00:27 how does it work?
14:00:31 domi007: oh, i see what you mean. for nano, you cannot use an MSI
14:00:53 claudiub: yes, we can't use the MSI
14:01:48 anyways, we have to end the meeting. :)
14:01:56 thank you all
14:02:03 thanks for joining. :)
14:02:06 #endmeeting