13:00:33 <claudiub> #startmeeting hyper-v
13:00:34 <openstack> Meeting started Wed Dec 14 13:00:33 2016 UTC and is due to finish in 60 minutes.  The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:37 <openstack> The meeting name has been set to 'hyper_v'
13:01:04 <claudiub> anyone here?
13:01:20 <sagar_nikam> Hi
13:01:24 <claudiub> hello :)
13:01:32 <sagar_nikam> I hope you had a good vacation
13:01:59 <claudiub> it was great, it was like a surprisingly long weekend :D
13:02:13 <claudiub> is someone else from your team joining?
13:02:29 <sagar_nikam> no
13:02:32 <sagar_nikam> we can start
13:02:35 <claudiub> cool
13:02:53 <claudiub> #topic Neutron Hyper-V Agent
13:03:24 <claudiub> soo there are a couple of things i'm working on for networking-hyperv
13:03:44 <claudiub> one of them is neutron trunk ports
13:03:46 <sagar_nikam> ok
13:04:02 <domi007> hi all :)
13:04:11 <claudiub> heey, long time no see :)
13:04:18 <claudiub> #link https://wiki.openstack.org/wiki/Neutron/TrunkPort
13:04:29 <domi007> indeed, sorry about that :) gonna listen in again from time to time
13:04:37 <claudiub> neutron trunk ports have been implemented in newton, btw
13:05:07 <claudiub> and it could have some nfv usecases
13:05:13 <domi007> so there are still plans to support HyperV's networking instead of using OVS?
13:05:25 <claudiub> domi007: sure, good to have you back. :)
13:05:46 <claudiub> domi007: so, at the moment, we are supporting both.
13:06:09 <claudiub> domi007: and we are currently looking into the new networking model that came with windows server 2016
13:06:25 <claudiub> which would mean native support for vxlan networking, as well as l3 support
13:06:40 <claudiub> and other features
13:06:41 <sagar_nikam> so what is the change in Ocata, if we have this already implemented in newton?
13:06:56 <domi007> oooo :) much has changed I see
13:07:17 <claudiub> sagar_nikam: neutron trunk ports have been introduced in newton in neutron (linuxbridge and neutron-ovs-agent)
13:07:37 <claudiub> and we didn't have that implemented in networking-hyperv
13:08:46 <claudiub> anyways. neutron trunk ports are not enabled in the neutron server by default.
13:08:53 <claudiub> i had to find that out the hard way. :)
13:09:22 <claudiub> you'll have to add the TrunkPlugin in neutron.conf, something like this:
13:09:24 <claudiub> service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.trunk.plugin.TrunkPlugin
13:09:28 <claudiub> just fyi
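For readability, the same option as a neutron.conf fragment (newer neutron releases also accept short plugin aliases such as `router` and `trunk` — worth double-checking against your release before relying on them):

```ini
[DEFAULT]
# enable trunk ports on the neutron server (available since Newton,
# but not enabled by default)
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.trunk.plugin.TrunkPlugin
```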
13:09:40 <domi007> good to know
13:09:44 <claudiub> i didn't really find any information about this specific thing
13:10:22 <claudiub> the other thing that i'll be looking into and implement pretty soon, would be networking qos
13:10:50 <claudiub> you can add qos policies to neutron networks
13:11:22 <claudiub> and it was first introduced in neutron in liberty. so, i think it's about time to support this. :)
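For reference, enabling the QoS extension involves changes on both the server and the agent side; a typical setup looks roughly like the fragment below (recalled from the neutron QoS guide, not discussed in the meeting — verify the option names against your release):

```ini
# neutron.conf (server): add the qos service plugin
[DEFAULT]
service_plugins = qos

# ml2 plugin config: load the qos extension driver
[ml2]
extension_drivers = port_security,qos

# L2 agent config: enable the qos agent extension
[agent]
extensions = qos
```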
13:11:52 <claudiub> hm, sagar dropped off. will wait for him for a bit
13:12:06 <sagar_nikam> Hi.... I am back
13:12:12 <claudiub> wb :)
13:12:15 <sagar_nikam> got disconnected
13:12:43 <sagar_nikam> we had a cyclone yesterday in my state... network services are badly affected
13:12:54 <sagar_nikam> i might be disconnected again
13:13:00 <claudiub> ooh, sorry to hear that
13:13:19 <sagar_nikam> i am safe... only bad network
13:13:30 <claudiub> just a small recap: the next feature: networking qos. will start working on it pretty soon. it was first introduced in liberty in neutron.
13:13:53 <claudiub> sagar_nikam: that's good, people >> internet. :D
13:13:56 <sagar_nikam> ok
13:14:05 <claudiub> any questions?
13:14:26 <claudiub> about this topic?
13:14:26 <sagar_nikam> no
13:14:42 <claudiub> cool. moving on.
13:15:00 <claudiub> #topic nova status
13:15:34 <claudiub> no new patches merged on nova for us since last meeting, unfortunately.
13:15:45 <sagar_nikam> ok
13:16:03 <claudiub> regarding the nova-ovs ci, if you are curious, here are some test results: http://64.119.130.115/debug/nova-ovs/results.html
13:17:10 <claudiub> the test failures are unrelated, they are a bit flaky, because of tempest
13:17:40 <sagar_nikam> checking
13:17:41 <claudiub> they are being re-run currently.
13:18:39 <sagar_nikam> any perf improvement as compared to the hyper-v switch ?
13:19:23 <claudiub> the hyper-v ci doesn't test performance, it only runs tempest
13:19:50 <claudiub> so, it won't be able to tell any performance difference between neutron-ovs-agent and neutron-hyperv-agent
13:19:53 <sagar_nikam> ok.. got it
13:20:48 <claudiub> sagar_nikam: also, what do you mean by performance: the VM's network throughput, or the VM spawn time?
13:21:44 <claudiub> anyways, I mentioned last time that I was having some weird issue regarding the neutron-ovs-agent
13:22:10 <domi007> hm what was that?
13:22:37 <claudiub> that it would sometimes "freeze", and wouldn't process ports anymore, but it would still send its alive state to neutron
13:23:18 <sagar_nikam> could you find the root cause ?
13:23:24 <claudiub> yep
13:23:34 <sagar_nikam> ok... what was it
13:23:43 <domi007> oo I see
13:23:43 <claudiub> managed to find a deterministic rule to reproduce it as well
13:23:56 <claudiub> also, this issue doesn't appear prior to newton
13:24:18 <claudiub> soo.. in newton, the default values of a couple of neutron conf options changed
13:25:07 <claudiub> in particular: of_interface changed from "ovs-ofctl" to "native" and ovsdb_interface changed from "vsctl" to "native"
13:25:41 <sagar_nikam> ok
13:26:17 <claudiub> we currently do not support the "native" interface, unfortunately. but we are currently working on it, and it will most probably be included in the next windows ovs release
13:26:54 <domi007> okay, any timeline on that?
13:27:04 <claudiub> so, until then, bear in mind, if you upgrade to newton or newer, you'll have to set these config options in the neutron-ovs-agent's neutron.conf files:
13:27:14 <domi007> also is it gonna be a new major version?
13:27:21 <claudiub> [ovs]
13:27:21 <claudiub> ...
13:27:22 <claudiub> of_interface = ovs-ofctl
13:27:22 <claudiub> ovsdb_interface = vsctl
13:27:40 <domi007> okay, so there is a workaround :)
13:27:44 <claudiub> yep :)
13:27:58 <claudiub> just have these config options and you'll be fine.
13:28:37 <sagar_nikam> ok got it.. as of now we are still in the process of upgrading to mitaka... so not so much of an issue for us for some time
13:28:38 <domi007> great, thanks
13:29:37 <claudiub> domi007: most probably next minor version release. we are currently on 2.6.. so 2.7
13:30:15 <claudiub> as for timeline, we follow the same release timeline as the ovs timeline
13:30:40 <domi007> okay, good :) thank you
13:31:10 <claudiub> and just fyi: functionally, there is no difference between the "native" and "vsctl / ovs-ofctl" interfaces. they do the same thing.
13:31:21 <domi007> sure :)
13:31:57 <claudiub> the difference is in performance. the vsctl / ovs-ofctl interface does sys exec calls, while the native one connects to a tcp socket opened by ovs
13:32:24 <claudiub> s/performance/execution time
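The execution-time difference can be illustrated with a toy micro-benchmark — nothing here is ovs-specific; `true` merely stands in for `ovs-vsctl`, and the numbers are only indicative:

```python
import subprocess
import time


def exec_per_call(n):
    """Model of the vsctl / ovs-ofctl interface: one process
    spawn (fork+exec) for every single operation."""
    start = time.perf_counter()
    for _ in range(n):
        subprocess.run(["true"], check=True)  # stand-in for ovs-vsctl
    return time.perf_counter() - start


def single_connection(n):
    """Model of the native interface: pay a one-time connection cost,
    then each operation is just a message on the open channel."""
    start = time.perf_counter()
    messages = []
    for i in range(n):
        messages.append(i)  # stand-in for a JSON-RPC call on the socket
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"exec per call:     {exec_per_call(20):.4f}s")
    print(f"single connection: {single_connection(20):.4f}s")
```

The per-call fork+exec cost dominates once an agent processes many ports, which is why the native interface is preferred once it's supported.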
13:32:52 <claudiub> any questions on this subject?
13:33:01 <domi007> not from me
13:33:04 <sagar_nikam> no from me
13:33:09 <claudiub> cool. :)
13:33:38 <claudiub> #topic Hyper-V Cluster updates
13:34:10 <claudiub> just a minor update: we've found a few bugs, we've submitted their fixes
13:34:16 <claudiub> will link to them shortly
13:34:45 <sagar_nikam> gerrit link ?
13:34:56 <claudiub> yep. gerrit seems down to me
13:35:32 <sagar_nikam> the last time (which was a few months back) we had tried your patch. it worked quite well
13:35:44 <claudiub> sagar_nikam: that's good to hear. :)
13:35:49 <sagar_nikam> we can try it again once you give the gerrit
13:35:59 <sagar_nikam> and everything is fixed
13:36:03 <claudiub> those bugs are minor, and quite edge case, but it's still good to have the fixes and know about them
13:36:13 <sagar_nikam> ok
13:36:25 <sagar_nikam> and what happened to cinder-volume support ?
13:37:17 <claudiub> ok. first one is a very minor one. it happens if you have any non-nova clustered vms. the hyper-v cluster driver was trying to fail them over as well, and it shouldn't.
13:37:19 <claudiub> #link https://review.openstack.org/#/c/406131/
13:39:50 <claudiub> the second one: the nova-compute service listens for failover events, but if the service is down for any reason and a failover occurs during that period, it won't be able to process those failovers; meaning that if any instance failed over to the "current" host, the nova-compute service couldn't let nova know about it
13:40:03 <claudiub> and nova would still consider it on the "old" host.
13:40:06 <claudiub> #link https://review.openstack.org/#/c/406069/
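The idea behind that fix can be sketched roughly like this (hypothetical names — the actual change lives in the review linked above):

```python
def find_missed_failovers(clustered_here, nova_view, this_host):
    """Instances the cluster placed on this host while nova-compute
    was down: the cluster owns them here, but nova still records
    them on their old host.

    clustered_here: names of VMs the cluster currently runs on this host
    nova_view: mapping of instance name -> host nova believes it is on
    this_host: hostname of the current compute node
    """
    return [name for name in clustered_here
            if nova_view.get(name) != this_host]


# on service start-up, anything returned here would need its nova host
# record corrected before normal failover-event listening resumes
```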
13:40:40 <sagar_nikam> will check and get back
13:40:51 <claudiub> the first one was merged on master, newton, and mitaka
13:41:06 <claudiub> the second one, there's a minor comment that i'll have to address.
13:41:37 <claudiub> sagar_nikam: also, what did you mean about the cinder-volume support?
13:42:49 <sagar_nikam> nova instances running on the cluster, with cinder volumes attached
13:42:56 <sagar_nikam> if failover happens
13:43:03 <sagar_nikam> the volumes are not migrated
13:43:06 <claudiub> it always worked with smb volumes.
13:43:24 <sagar_nikam> i think instances are not migrated since it has volumes
13:43:34 <sagar_nikam> i meant iscsi volumes... i should have been clear
13:44:07 <claudiub> as for iscsi, it is in the pipeline at the moment
13:44:54 <sagar_nikam> ok
13:45:05 <sagar_nikam> let me know if you get it working
13:45:23 <sagar_nikam> also last time we used your patch... it was submitted upstream in nova
13:45:55 <sagar_nikam> these 2 new patches you mentioned are in compute-hyperv
13:46:00 <claudiub> sure, but it takes some time to get it right
13:46:08 <claudiub> sagar_nikam: yep, they are in compute-hyperv
13:46:27 <sagar_nikam> will these patches work as is in nova upstream ?
13:46:41 <sagar_nikam> after i apply your previous patch which we have tested
13:46:45 <claudiub> yeah
13:47:52 <sagar_nikam> ok... will let you know how it progresses
13:48:00 <claudiub> cool, ty. :)
13:48:17 <sagar_nikam> in the meanwhile if you get iscsi support working, let me know, i will apply that patch as well and test
13:48:27 <claudiub> sagar_nikam: yep, will do. :)
13:48:35 <claudiub> anything else on this topic?
13:49:10 <sagar_nikam> no
13:49:19 <claudiub> #topic open discussion
13:49:31 <claudiub> any news from you folks? :)
13:50:05 <domi007> nothing really :)
13:50:07 <sagar_nikam> yes.. i have started raising tickets with the ops team to get us moving to mitaka
13:50:27 <sagar_nikam> first thing... we need to clone os-win in our downstream repo
13:50:31 <domi007> we're gonna get new hw around february, gonna transition to openstack-ansible and hyperv 2016
13:50:58 <sagar_nikam> i have a question on os-win
13:51:02 <claudiub> awesome :)
13:51:06 <sagar_nikam> we will use stable/mitaka
13:51:06 <claudiub> sagar_nikam: sure, go ahead
13:51:44 <sagar_nikam> if you find bugs in ocata... do you plan to fix them in stable/mitaka as well ?
13:52:16 <claudiub> sagar_nikam: yep, while respecting the general openstack policy
13:52:52 <sagar_nikam> ok
13:53:35 <claudiub> which means that currently i'm backporting all bug fixes to newton, and to mitaka primarily the high impact / low risk ones
13:53:41 <sagar_nikam> it will take some time for us to get going with mitaka as creating downstream repos will take time
13:53:48 <sagar_nikam> will keep you posted
13:54:23 <sagar_nikam> next question
13:54:24 <claudiub> sure
13:54:27 <sagar_nikam> any news on this
13:54:30 <sagar_nikam> https://github.com/FreeRDP/FreeRDP-WebConnect/issues/149
13:56:25 <claudiub> it seems that there's some problem with cpprestsdk
13:57:11 <sagar_nikam> yes.. i remember that.. we had discussed it with c64cosmin
13:57:15 <claudiub> and it can't do https requests to keystone
13:57:33 <claudiub> he tried with a newer cpprestsdk, but the problem persists.
13:57:34 <sagar_nikam> any fixes planned ?
13:57:55 <sagar_nikam> ok
13:59:36 <claudiub> hm, other than that, I don't have any other info on this matter
13:59:42 <claudiub> any other questions?
13:59:42 <sagar_nikam> ok
13:59:51 <sagar_nikam> time almost over
13:59:53 <sagar_nikam> no...
13:59:57 <sagar_nikam> thanks for attending
14:00:04 <claudiub> thanks for joining. :)
14:00:08 <claudiub> see you next week!
14:00:16 <claudiub> #endmeeting