13:00:50 #startmeeting hyper-v
13:00:51 Meeting started Wed Jun 15 13:00:50 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:54 The meeting name has been set to 'hyper_v'
13:00:58 hellooo
13:01:00 hi all :)
13:01:04 o/
13:01:08 hello
13:01:18 hi all
13:01:28 Hi All
13:01:39 Hi
13:01:49 #topic designate status
13:02:12 hi
13:02:15 sooo... abalutoiu is at the testing phase of the designate stuff
13:02:41 he has already submitted a patch on os-win, and the one on designate is going to be submitted when it's fully reliable.
13:03:00 abalutoiu: links?
13:03:14 including the launchpad blueprint.
13:03:24 #link https://review.openstack.org/#/c/327846
13:03:59 that is the os-win patch which adds support for DNS server operations
13:04:14 blueprint on designate for msdns: https://blueprints.launchpad.net/designate/+spec/msdns-backend-support
13:04:26 #link https://blueprints.launchpad.net/designate/+spec/msdns-backend-support
13:04:29 thanks. :)
13:04:30 claudiub: we saw a mail from the designate PTL last week, and I saw your reply as well. Can we post the os-win changes to him?
13:05:03 this will let the designate team know that work is in progress
13:05:05 sagar_nikam: I think we should email him when abalutoiu says it's ready and the patch has been submitted
13:05:11 sagar_nikam: I agree.
13:05:19 sagar_nikam: I already talked today with Graham about this
13:05:37 abalutoiu: thanks
13:05:52 that should be fine
13:06:09 so, until then, we should review the os-win patch.
13:06:35 #action review the os-win dnsutils patch: https://review.openstack.org/#/c/327846
13:07:15 #monasca status
13:07:20 #topic monasca status
13:07:44 soo, no news here, just replying to comments mostly. fixing merge conflicts and so on.
13:07:58 ok
13:08:26 #topic nova patches
13:08:40 it seems nothing really merged recently.
:)
13:09:11 apparently, it was a miracle that so many patches merged a few weeks ago.
13:09:40 but as far as nova goes, I've been taking a look at the OVS Hyper-V vif plugin driver on nova.
13:09:44 claudiub: do we have a list of recently merged patches? we can check them
13:10:06 The plan is to use os-vif to create the OVS ports, if necessary.
13:10:39 hm, are you doing that to later enable security groups based on OVS flows?
13:10:39 sagar_nikam: hm, not off the top of my head. I'll try to create one.
13:10:56 claudiub: thanks
13:11:24 domi007: the security groups are a neutron-ovs-agent specific job. the VIF drivers should just create the OVS ports.
13:11:29 basically.
13:11:32 kvinod: have you seen claudiub's
13:11:35 patch ?
13:11:44 on the OVS vif plugin
13:11:58 claudiub: I understand, but I'm not sure I understand why you are looking at this, since creating OVS ports currently works fine :) sry
13:12:08 #link Hyper-V OVS VIF driver: https://review.openstack.org/#/c/140045/
13:12:15 ^ this is the patch.
13:12:27 sagar_nikam: no
13:12:50 will see it
13:12:52 kvinod: can you check please
13:13:02 still not fully functional. There are some Linux-specific issues in the current os-vif... I have to find a solution for them.
13:13:16 solved a few, but not all of them.
13:13:34 oh okay, I understand
13:13:52 anyways. This VIF driver has already been merged in compute-hyperv since Liberty
13:14:02 or even Kilo, if I'm not mistaken.
13:14:09 and that works well.
13:14:42 but as far as nova goes, we have to use os-vif for this; that's how all the VIF drivers are going to be used in nova.
13:15:28 domi007: to answer your question: at the moment, you cannot bind OVS ports on Hyper-V, unless you are using compute-hyperv. :)
13:16:31 moving on.
13:16:50 #topic networking-hyperv status
13:17:11 so, first of all, I found a bug
13:17:27 #link https://bugs.launchpad.net/networking-hyperv/+bug/1592777
13:17:27 Launchpad bug 1592777 in networking-hyperv "neutron-hyperv-agent shouldn't run if the OVS extension is active" [Medium,Confirmed]
13:17:47 basically, you cannot run neutron-ovs-agent and neutron-hyperv-agent on the same vSwitches.
13:18:28 it's not immediately obvious; you just see failures and exception traces in the neutron-hyperv-agent.log file.
13:18:43 so, this is also a warning, in case you've been using both. :)
13:19:21 claudiub: you mean only one of them should run at a time?
13:19:28 if you want to disable the OVS switch extension, run in powershell: Disable-VMSwitchExtension -VMSwitchName switch_name -Name "Open vSwitch Extension"
13:19:50 claudiub: I'm sure that they need to run at the same time to have security groups working, atuvenie worked on this recently
13:19:51 to enable: Enable-VMSwitchExtension -VMSwitchName switch_name -Name "Open vSwitch Extension"
13:21:07 kvinod: hm, I don't really know why you'd want to run both agents at the same time. But if you do, they shouldn't be configured to use the same switches.
13:21:33 claudiub: agreed
13:21:38 I can confirm this bug as well btw, if you run them simultaneously, but it's no surprise
13:21:41 domi007: for security groups, you only need the neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver
13:22:13 claudiub: wait, so if I configure this in neutron_ovs.conf it will work?
13:22:18 domi007: you only need the security groups driver, not the whole agent, as far as I understood from atuvenie.
13:22:25 in that case of course there is no need to run them together
13:22:32 would make a lot of sense
13:22:33 domi007: to my understanding, yes.
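For reference, the driver-only setup discussed above would look roughly like this; the file name neutron_ovs.conf comes from the conversation, while the exact section/option placement is an assumption and may differ per release:

```ini
# Sketch of running the Hyper-V security groups driver without the
# full neutron-hyperv-agent, as discussed above. File/section layout
# is an assumption, not a verified configuration.
[securitygroup]
enable_security_group = true
firewall_driver = neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver
```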
13:23:03 of course in liberty it doesn't work yet, but atuvenie is working on backporting mitaka patches
13:23:06 cool
13:23:32 fair warning: HyperVSecurityGroupsDriver on liberty doesn't have enhanced_rpc implemented, so it won't work.
13:23:32 claudiub: how will upgrade cases be handled? if a customer has an older version with neutron-hyperv-agent and wants to move to ovs
13:23:52 domi007: but you should be able to use the HyperVSecurityGroupsDriver on mitaka.
13:23:56 so basically we can fully forget the neutron-hyperv-agent; all of its functionality can be done using ovs
13:24:27 claudiub: that's why atuvenie is creating a new liberty installer with cherry-picked backported patches from mitaka to make it work, if I'm correct
13:25:34 I was about to apply the patches she sent me, but it seemed like a cumbersome job, so I decided to wait for the installer
13:25:58 sagar_nikam: I'm afraid that's a whole other story. it involves neutron network migrations. Basically, if you are using neutron-hyperv-agent, the neutron ports will be bound on the neutron-hyperv-agent, of course.
13:26:48 claudiub: so we can't support this kind of upgrade?
13:27:58 sagar_nikam: we are planning to do this kind of upgrade, but we are using only VLAN-based networks, so they should be easily transformed into OVS networks/ports (hopefully)... although current live machines might need to be rebooted, I'm not sure
13:28:06 sagar_nikam: I wouldn't say it's impossible. From what I can think of, you just have to change the port's owner, or rebind the ports to the ovs-agent.
13:28:24 OK
13:28:40 claudiub: but that will have a data part hit, i mean migration from hyperv to ovs
13:28:51 sorry, data path
13:28:57 claudiub: i think this support should be added in the future
13:29:00 domi007: I think it should work without rebooting them.
13:29:22 domi007: But I do think that in order to work, you will have to disable the neutron ports first, migrate them, then enable them.
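The disable / migrate / enable sequence suggested here can be sketched roughly as follows. The `binding:host_id` rebind step is an assumption (the meeting leaves the exact migration mechanism open), and the client is any object exposing a python-neutronclient-style `update_port(port_id, body)` method:

```python
# Rough sketch of the port migration flow discussed above.
# The binding:host_id rebind is an assumption, not a verified
# migration procedure; 'client' stands in for a neutron API client.

def migrate_port(client, port_id, target_host):
    # 1. Disable the port so the old agent (neutron-hyperv-agent) unwires it.
    client.update_port(port_id, {'port': {'admin_state_up': False}})
    # 2. Rebind it towards the host whose agent (neutron-ovs-agent)
    #    should now own it.
    client.update_port(port_id, {'port': {'binding:host_id': target_host}})
    # 3. Re-enable it so the new agent wires it up.
    client.update_port(port_id, {'port': {'admin_state_up': True}})
```

As noted later in the meeting, disabling a port also removes it from the agents, so some network downtime during the switch is expected.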
13:29:40 domi007: when you disable a neutron port, it will also be deleted from the agents.
13:29:55 claudiub: that's what I thought, it's like removing and adding a port to a machine, which if I'm right requires the machine to be turned off
13:29:55 lots of hyperv customers who are using the hyperv agent would like to move to ovs... and they will need this as a supported use case
13:29:59 domi007: anyways, let us know how it goes. :)
13:30:06 sure thing :)
13:30:16 domi007: ports, no. Only vNICs or other devices.
13:30:32 oh cool, I learned something new today :)
13:30:39 domi007: if you are using generation 2 VMs, you can even hot-plug vNICs and other devices.
13:31:06 kvinod: I agree. there are some network downtimes on this.
13:31:27 gen2 is a whole different story, not really wishing to go down that road yet :)
13:31:29 I wouldn't expect them to be long.
13:31:53 domi007: what do you mean? :)
13:33:15 sagar_nikam: Indeed. It would be interesting to investigate what is the best way to migrate those ports from one agent to another.
13:33:37 claudiub: is the gen2 code even ready and working in compute-hyperv? I just remember that, for example, UEFI makes them a lot harder to work with
13:33:53 gen2 support has been in since Kilo. :)
13:34:36 domi007: hm, by default, uefi is disabled, unless you request it via an image metadata property or flavor extra_spec.
13:34:44 claudiub: let us know how your investigation goes on it
13:35:10 image property os_secure_boot=required or flavor extra_spec os:secure_boot=required
13:35:16 sagar_nikam: sure. :)
13:35:30 claudiub: thanks
13:35:35 domi007: we've disabled it by default because not all linux guests support uefi.
13:35:52 as usual, whenever you have it working, we can pick it up and verify
13:36:06 and let you know our results
13:36:12 claudiub: I thought all gen2 machines need to be UEFI, but you can turn off signature verification... but it's possible that I'm wrong :)
13:36:45 but I'll dig into that then
13:36:49 so, gen2 VMs rely on EFI boot, instead of a generic BIOS.
13:37:32 indeed, that's what I'm saying. So you can disable signature verification = secure boot, but you will still be using EFI... and I'm not sure how well that plays with guests
13:37:39 having secure boot turned on basically assures that the EFI boot section has not been altered.
13:38:58 ok, I think we are on the same page with this :)
13:39:09 domi007: :)
13:39:19 anyways. moving on.
13:39:23 #link https://bugs.launchpad.net/networking-hyperv/+bug/1586354
13:39:23 Launchpad bug 1586354 in networking-hyperv "Intermittent Issue seen -- Associating a vm from one security group (having tcp rule) to another security group (not having tcp rule) does not stop ssh from happening" [Medium,Confirmed]
13:40:21 I've confirmed this bug. it should be a big commit for this.
13:40:35 i see it's confirmed
13:40:39 ok
13:40:56 so it wasn't about windows firewall not being on by default
13:41:00 so do we know what is causing the issue
13:41:23 basically, when the port's security group changes, the security groups driver's update_port_filter method is not called. prepare_port_filter is called instead, which just adds new rules.
13:41:39 update_port_filter actually checks what rules are to be added, and what rules are to be deleted.
13:42:03 ok
13:42:41 so has the fix implementation started?
13:42:56 I think this issue surfaced in our deployment as well, the symptoms are really similar
13:42:58 anyways. I am currently working on it. Now the rules are deleted / added properly. Will send it up for review when it's ready; I've also written some unit tests.
13:43:10 great!
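The prepare_port_filter vs. update_port_filter difference described above can be sketched as a set difference over rule sets. These helpers only mimic the behaviour discussed in the meeting; they are not the actual networking-hyperv code:

```python
# Minimal illustration of bug 1586354 as explained above: when a port
# changes security groups, prepare_port_filter only adds the new
# group's rules, so stale rules (e.g. the old group's TCP 22 rule)
# survive and SSH keeps working. update_port_filter computes both
# sides of the diff. Hypothetical helpers, not the real driver API.

def prepare_port_filter(current_rules, new_rules):
    # Only adds rules; nothing is ever removed.
    return current_rules | new_rules

def update_port_filter(current_rules, new_rules):
    # Determines what to add AND what to delete, then applies both.
    to_add = new_rules - current_rules
    to_remove = current_rules - new_rules
    return (current_rules - to_remove) | to_add
```

With `current_rules = {"ingress tcp 22", "ingress icmp"}` and `new_rules = {"ingress icmp"}`, prepare_port_filter leaves the stale TCP rule in place, while update_port_filter removes it, matching the symptom and the fix described in the log.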
13:43:18 when is the fix expected
13:43:19 ok
13:43:28 great, thanks
13:44:03 ETA: 1-2 days.
13:44:04 claudiub: is this networking-hyperv code or inside the FW driver?
13:44:05 any idea by when the review will be posted
13:44:20 ok
13:44:35 but it depends if there are any other urgent matters appearing
13:44:42 domi007: networking-hyperv code
13:44:54 anyways. moving on.
13:44:59 thanks
13:45:03 #topic os-brick status
13:45:28 one more bug is there
13:45:35 well, we've mostly been asked for CI results. They've been posted for each connector: iSCSI, Fibre Channel, SMB.
13:45:51 waiting for reviews.
13:45:52 #link https://bugs.launchpad.net/networking-hyperv/+bug/1591114
13:45:52 Launchpad bug 1591114 in networking-hyperv "Few Vm's not getting IP due to missing security group rules in scale scenarios" [Undecided,In progress] - Assigned to Krishna Kanth (krishna-kanth-mallela)
13:46:38 i do not see it's confirmed
13:46:52 claudiub: nice, so hopefully we will have the os-brick patches merged soon?
13:47:02 sagar_nikam: I really do hope so.
13:47:08 kvinod: #link https://review.openstack.org/#/c/328218/
13:47:47 claudiub: lpetrut: once we have os-brick, can we push for the nova patches to be merged as well?
13:48:03 but getting the reviews, that's the hard part, IMO. :)
13:48:07 sagar_nikam: ofc.
13:48:09 FC support for hyperv is something we have been trying to get for some time now
13:48:24 kvinod: will discuss it at the next topic.
13:48:32 k
13:48:52 sagar_nikam: well, until then, there's compute-hyperv. :)
13:49:04 claudiub: which we all love :)
13:49:08 <3
13:49:19 #topic open discussion
13:49:44 claudiub: is it easy to port this to mitaka? we are on mitaka now, we can try backporting if it is feasible
13:50:41 claudiub: one topic from me, could you check the BP of my teammate Paul Murray, does it have any impact on FreeRDP?
13:51:09 kvinod: sorry for missing that. So, for that one, it's harder to confirm / replicate, which is why I haven't done it yet. I see you've sent some patches up. Will take a look.
13:51:25 sagar_nikam: good thing you raised FreeRDP, we have been testing the new beta from c64cosmin and it works fine so far
13:51:36 I just finished the haproxy config as well
13:51:36 k
13:51:39 kvinod: but for this: https://review.openstack.org/#/c/328218/ I don't really see how it solves the issue, as it only removes the sleep.
13:51:44 and it looks stable so far
13:51:46 sagar_nikam: you mean the one with console objects?
13:52:12 sagar_nikam: sorry, I didn't understand, port what exactly?
13:52:19 I will talk to krishna, the committer, and update the bug
13:52:21 c64cosmin: yes, last time i gave the BP link
13:52:47 I've read the bp, shouldn't be a problem
13:53:07 domi007: thanks for the info
13:53:30 c64cosmin: claudiub: this is the BP http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/convert-consoles-to-objects.html
13:53:41 does it have any impact on freerdp
13:54:16 domi007: nice to hear that the new FreeRDP works for you. Do keystrokes work?
13:54:34 domi007: we are yet to pick up the new MSI, our QA is planning to test it soon
13:54:45 kvinod: thanks. :) Also, I do remember that the code in https://review.openstack.org/#/c/328218/ was added in order to yield to other threads before adding the rules. This way the neutron-hyperv-agent didn't miss reporting its alive state.
13:54:57 domi007, sagar_nikam: I have a pull request waiting; it will introduce a plugin system that will allow changing the query parameters
13:55:05 domi007: c64cosmin: i will be very keen to see how keystrokes work with this new MSI
13:55:09 sagar_nikam: we never had the keystroke issue, so can't really report on it
13:55:13 but we'll keep at it
13:55:17 and see how it performs
13:55:19 Also, it will reduce the CPU consumption
13:55:25 drastically
13:55:35 kvinod: anyways. I don't think sleeping is necessary anymore, since we have native threads, which run independently from the main thread.
13:56:06 kvinod: so, we can go forward with that patch.
13:56:18 sagar_nikam: please keep me posted on that
13:56:18 c64cosmin: indeed, I was able to see that as well; it never was eating too much CPU, but it improved indeed
13:56:49 c64cosmin: sure, will let you know how it works
13:57:11 It's not possible to type all the context behind the fix; I will probably update the bug report with the details.
13:57:11 domi007: for me the keystroke issue was a big one, let me check with the new MSI
13:57:19 sagar_nikam: thanks :)
13:57:44 c64cosmin: another usual question.... what about debian?
13:58:07 On its way
13:58:22 c64cosmin: thanks
13:58:39 Almost forgot, I will also implement v3 of the keystone api
13:58:52 great! good meeting today guys, thanks!
13:59:07 I need a merge first
13:59:13 c64cosmin: that would be very nice, keystone v3
13:59:30 is it planned soon ?
13:59:34 or will it take some time
14:00:00 It's the next step to take
14:00:05 ok
14:00:12 thanks, today it was actually a good meeting from the neutron front also
14:00:15 thanks all
14:00:30 Good day all
14:00:39 well, time's up. :)
14:00:42 i would prefer debian rather than v3, that would be my personal priority... however i am fine with the order in which you work
14:00:46 thanks all for joining. :)
14:00:49 see you next week!
14:00:51 for these features
14:00:56 #endmeeting