13:00:33 #startmeeting hyper-v
13:00:34 Meeting started Wed Dec 14 13:00:33 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:37 The meeting name has been set to 'hyper_v'
13:01:04 anyone here?
13:01:20 Hi
13:01:24 hello :)
13:01:32 I hope you had a good vacation
13:01:59 it was great, it was like a surprisingly long weekend :D
13:02:13 is someone else from your team joining?
13:02:29 no
13:02:32 we can start
13:02:35 cool
13:02:53 #topic Neutron Hyper-V Agent
13:03:24 so there are a couple of things i'm working on for networking-hyperv
13:03:44 one of them is neutron trunk ports
13:03:46 ok
13:04:02 hi all :)
13:04:11 heey, long time no see :)
13:04:18 #link https://wiki.openstack.org/wiki/Neutron/TrunkPort
13:04:29 indeed, sorry about that :) gonna listen in again from time to time
13:04:37 neutron trunk ports have been implemented in newton, btw
13:05:07 and it could have some nfv use cases
13:05:13 so there are still plans to support HyperV's networking instead of using OVS?
13:05:25 domi007: sure, good to have you back. :)
13:05:46 domi007: so, at the moment, we are supporting both.
13:06:09 domi007: and we are currently looking into the new networking model that came with windows server 2016
13:06:25 which would mean native support for vxlan networking, as well as l3 support
13:06:40 and other features
13:06:41 so what is the change in Ocata.. if we have this already implemented in newton
13:06:56 oooo :) much has changed I see
13:07:17 sagar_nikam: neutron trunk ports have been introduced in newton in neutron (linuxbridge and neutron-ovs-agent)
13:07:37 and we didn't have that implemented in networking-hyperv
13:08:46 anyways, neutron trunk ports are not enabled in the neutron server by default.
13:08:53 i had to find out that the hard way. :)
13:09:22 you'll have to add the TrunkPlugin in neutron.conf, something like this:
13:09:24 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.trunk.plugin.TrunkPlugin
13:09:28 just fyi
13:09:40 good to know
13:09:44 i didn't really find any information about this specific thing
13:10:22 the other thing that i'll be looking into and implementing pretty soon would be networking qos
13:10:50 you can add qos policies to neutron networks
13:11:22 and it was first introduced in neutron in liberty. so, i think it's about time to support this. :)
13:11:52 hm, sagar dropped off. will wait for him for a bit
13:12:06 Hi.... I am back
13:12:12 wb :)
13:12:15 got disconnected
13:12:43 we had a cyclone yesterday in my state... network services are badly affected
13:12:54 i might be disconnected again
13:13:00 ooh, sorry to hear that
13:13:19 i am safe... only bad network
13:13:30 just a small recap: the next feature is networking qos. will start working on it pretty soon. it was first introduced in liberty in neutron.
13:13:53 sagar_nikam: that's good, people >> internet. :D
13:13:56 ok
13:14:05 any questions?
13:14:26 about this topic?
13:14:26 no
13:14:42 cool. moving on.
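(for reference, a rough CLI sketch of the networking qos workflow discussed above, using the neutron client syntax of that era; the policy name, limits and port id are made up, and the qos service plugin / ml2 extension driver have to be enabled on the neutron server first)

    # prerequisites on the neutron server (assumed, adjust to your deployment):
    #   neutron.conf:  service_plugins = ..., qos
    #   ml2_conf.ini:  [ml2] extension_drivers = port_security,qos
    neutron qos-policy-create bw-limiter
    neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
    # attach the policy to a port (policies can also be attached to networks)
    neutron port-update <port-uuid> --qos-policy bw-limiter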
13:15:00 #topic nova status
13:15:34 no new patches merged on nova for us since last meeting, unfortunately.
13:15:45 ok
13:16:03 regarding the nova-ovs ci, if you are curious, here are some test results: http://64.119.130.115/debug/nova-ovs/results.html
13:17:10 the test failures are unrelated, they are a bit flaky, because of tempest
13:17:40 checking
13:17:41 they are being re-run currently.
13:18:39 any perf improvement as compared to hyperv switch ?
13:19:23 the hyper-v ci doesn't test performance, it only runs tempest
13:19:50 so, it won't be able to tell any performance difference between neutron-ovs-agent and neutron-hyperv-agent
13:19:53 ok.. got it
13:20:48 sagar_nikam: also, what do you mean by performance: the VM's network throughput, or the VM spawn time?
13:21:44 anyways, I mentioned last time that I was having some weird issue regarding the neutron-ovs-agent
13:22:10 hm what was that?
13:22:37 that it would sometimes "freeze", and wouldn't process ports anymore, but it would still send its alive state to neutron
13:23:18 could you find the root cause ?
13:23:24 yep
13:23:34 ok... what was it
13:23:43 oo I see
13:23:43 managed to find a deterministic rule to reproduce it as well
13:23:56 also, this issue doesn't appear prior to newton
13:24:18 so.. in newton, the default values of a couple of neutron conf options changed
13:25:07 in particular: of_interface changed from "ovs-ofctl" to "native" and ovsdb_interface changed from "vsctl" to "native"
13:25:41 ok
13:26:17 we currently do not support the "native" interface, unfortunately. but we are currently working on it, and it will most probably be included in the next windows ovs release
13:26:54 okay, any timeline on that?
13:27:04 so, until then, bear in mind, if you upgrade to newton or newer, you'll have to have these config options in the neutron-ovs-agent's neutron.conf files:
13:27:14 also is it gonna be a new major version?
13:27:21 [ovs]
13:27:21 ...
13:27:22 of_interface = ovs-ofctl
13:27:22 ovsdb_interface = vsctl
13:27:40 okay, so there is a workaround :)
13:27:44 yep :)
13:27:58 just have these config options and you'll be fine.
13:28:37 ok got it.. as of now we are still in the process of upgrading to mitaka... so not so much of an issue for us for some time
13:28:38 great, thanks
13:29:37 domi007: most probably next minor version release. we are currently on 2.6.. so 2.7
13:30:15 as for timeline, we follow the same release timeline as the ovs timeline
13:30:40 okay, good :) thank you
13:31:10 and just fyi: functionally, there is no difference between the "native" and "vsctl / ovs_ofctl" interfaces. they do the same thing.
13:31:21 sure :)
13:31:57 the difference is in performance. the vsctl / ovs_ofctl interface makes sys exec calls, while the native one connects to a tcp socket opened by ovs
13:32:24 s/performance/execution time
13:32:52 any questions on this subject?
13:33:01 not from me
13:33:04 nothing from me
13:33:09 cool. :)
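(for reference, the workaround above collected in one snippet; these values were the pre-newton defaults, so setting them explicitly is harmless on older releases too; which file the agent actually reads depends on the deployment, the neutron-ovs-agent's neutron.conf is assumed here)

    [ovs]
    # fall back to the shell-based interfaces until the windows ovs port
    # supports the "native" of_interface / ovsdb_interface implementations
    of_interface = ovs-ofctl
    ovsdb_interface = vsctl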
13:33:38 #topic Hyper-V Cluster updates
13:34:10 just a minor update: we've found a few bugs, we've submitted their fixes
13:34:16 will link to them shortly
13:34:45 gerrit link ?
13:34:56 yep. gerrit seems down to me
13:35:32 the last time (which is a few months back) we had tried your patch. it worked quite well
13:35:44 sagar_nikam: that's good to hear. :)
13:35:49 we can try it again once you give the gerrit
13:35:59 and everything is fixed
13:36:03 those bugs are minor, and quite edge case, but it's still good to have the fixes and know about them
13:36:13 ok
13:36:25 and what happened to cinder-volume support ?
13:37:17 ok. first one is a very minor one. it happens if you have any non-nova clustered vms. the hyper-v cluster driver was trying to fail them over as well, and it shouldn't.
13:37:19 #link https://review.openstack.org/#/c/406131/
13:39:50 the second one: the nova-compute service listens to failover events, but if the service is down for any reason, and a failover occurs during that period, it won't be able to process those failovers, meaning that if any instance failed over to the "current" host, the nova-compute service couldn't let nova know about it
13:40:03 and nova would still consider it on the "old" host.
13:40:06 #link https://review.openstack.org/#/c/406069/
13:40:40 will check and get back
13:40:51 the first one was merged on master, newton, and mitaka
13:41:06 the second one, there's a minor comment that i'll have to address.
13:41:37 sagar_nikam: also, what did you mean about the cinder-volume support?
13:42:49 nova instances running on the cluster, with cinder volumes attached
13:42:56 if failover happens
13:43:03 the volumes are not migrated
13:43:06 it always worked with smb volumes.
13:43:24 i think instances are not migrated since they have volumes
13:43:34 i meant iscsi volumes... i should have been clear
13:44:07 as for iscsi, it is in the pipeline at the moment
13:44:54 ok
13:45:05 let me know if you get it working
13:45:23 also last time we used your patch... it was submitted upstream in nova
13:45:55 these 2 new patches you mentioned are in compute-hyperv
13:46:00 sure, but it takes some time to get it right
13:46:08 sagar_nikam: yep, they are in compute-hyperv
13:46:27 will these patches work as is in nova upstream ?
13:46:41 after i apply your previous patch which we have tested
13:46:45 yeah
13:47:52 ok... will let you know how it progresses
13:48:00 cool, ty. :)
13:48:17 in the meanwhile if you get iscsi support working, let me know, i will apply that patch as well and test
13:48:27 sagar_nikam: yep, will do. :)
13:48:35 anything else on this topic?
13:49:10 no
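(for anyone who wants to try the two fixes above before they land in their branch, a rough sketch of pulling them into a local checkout with git-review; the repo URL and branch are assumptions, and cherry-picks onto older branches may need manual conflict resolution)

    git clone https://git.openstack.org/openstack/compute-hyperv
    cd compute-hyperv
    git checkout stable/mitaka    # or whichever branch the deployment tracks
    git review -x 406131          # cherry-pick the non-nova clustered VM fix
    git review -x 406069          # cherry-pick the missed-failover-events fix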
13:49:19 #topic open discussion
13:49:31 any news from you folks? :)
13:50:05 nothing really :)
13:50:07 yes.. i have started raising tickets to get us moving with mitaka with the ops team
13:50:27 first thing... we need to clone os-win in our downstream repo
13:50:31 we're gonna get new hw around february, gonna transition to openstack-ansible and hyperv 2016
13:50:58 i have a question on os-win
13:51:02 awesome :)
13:51:06 we will use stable/mitaka
13:51:06 sagar_nikam: sure, go ahead
13:51:44 if you find bugs in ocata... do you plan to fix them in stable/mitaka as well ?
13:52:16 sagar_nikam: yep, while respecting the general openstack policy
13:52:52 ok
13:53:35 which means that currently, i'm backporting all bug fixes to newton, and to mitaka primarily high impact / low risk bug fixes
13:53:41 it will take some time for us to get going with mitaka as creating downstream repos will take time
13:53:48 will keep you posted
13:54:23 next question
13:54:24 sure
13:54:27 any news on this
13:54:30 https://github.com/FreeRDP/FreeRDP-WebConnect/issues/149
13:56:25 it seems that there's some problem with cpprestsdk
13:57:11 yes.. remember that.. had discussed with c64cosmin
13:57:15 and it can't do https requests to keystone
13:57:33 he tried with a newer cpprestsdk, but the problem persists.
13:57:34 any fixes planned ?
13:57:55 ok
13:59:36 hm, other than that, I don't have any other info on this matter
13:59:42 any other questions?
13:59:42 ok
13:59:51 time almost over
13:59:53 no...
13:59:57 thanks for attending
14:00:04 thanks for joining. :)
14:00:08 see you next week!
14:00:16 #endmeeting