14:00:14 #startmeeting neutron_l3
14:00:15 Meeting started Wed Mar 4 14:00:14 2020 UTC and is due to finish in 60 minutes. The chair is liuyulong. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:19 The meeting name has been set to 'neutron_l3'
14:02:32 Hi there
14:02:40 #topic Announcements
14:03:13 hi
14:03:56 #link https://www.openstack.org/events/opendev-ptg-2020/
14:04:59 Hope I can get to Vancouver.
14:05:28 I need a visa.
14:05:51 I will try the community travel support.
14:07:08 For now we also don't know how it will go, mostly due to the coronavirus :/
14:07:37 #link https://etherpad.openstack.org/p/neutron-victoria-ptg
14:09:15 slaweq, maybe, but summer is coming.
14:09:27 Topics are wanted! ^^
14:11:00 OK, no more announcements from me.
14:11:05 Let's move on.
14:11:08 #topic Bugs
14:11:21 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012766.html
14:11:27 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-March/012926.html
14:11:52 Because I was not here last week, we have two lists now.
14:12:08 First one:
14:12:09 #link https://bugs.launchpad.net/neutron/+bug/1864963
14:12:10 Launchpad bug 1864963 in neutron "loosing connectivity to instance with FloatingIP randomly" [Undecided,New]
14:12:52 I have left some questions about the reporter's deployment that could help us find out the real problem.
14:13:35 These questions are mostly based on our local deployment; we have met some issues in those areas.
14:15:47 thx for taking care of this
14:16:17 slaweq, np
14:16:31 Next one:
14:16:33 #link https://bugs.launchpad.net/neutron/+bug/1865061
14:16:34 Launchpad bug 1865061 in neutron "When neutron does a switch-over between router 1 and router2, the router1 conntrack flows shoud be deleted" [Low,Confirmed]
14:17:10 That is something our QE team found during testing.
14:17:36 But it can be a problem only if the router fails over twice in a short period of time.
14:17:57 And that's why it's set to Low importance.
14:18:02 Yes, that is my question: how could that "twice" happen in the real world?
14:18:10 https://bugs.launchpad.net/neutron/+bug/1865061/comments/1
14:18:11 Launchpad bug 1865061 in neutron "When neutron does a switch-over between router 1 and router2, the router1 conntrack flows shoud be deleted" [Low,Confirmed]
14:18:30 We have "non-preemptive" settings for the HA router keepalived.
14:19:09 So typically the "new master" should work from then on.
14:19:28 The connections on the original host should all be broken.
14:19:47 Exactly, so I reported it there "just for the record" that such an issue can theoretically happen.
14:20:08 But that probably shouldn't be an issue in the real world.
14:23:04 An extreme case is when the HA networking is not stable; that can cause the HA router state to change rapidly. For deployments running HA routers on hypervisors, a bad connection state could be a potential trigger.
14:24:06 That could be another story.
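[Editor's note: the cleanup that bug 1865061 asks for amounts to dropping conntrack state in the old master's router namespace after an HA failover. A minimal sketch, not the actual fix, assuming the L3 agent's standard qrouter-<router_id> namespace naming and that the conntrack CLI is installed; the helper name is hypothetical.]

```python
import subprocess


def flush_router_conntrack(router_id: str) -> None:
    """Drop conntrack entries in a router's namespace after failover.

    Otherwise stale flows on the old master can match traffic again
    if that node becomes master a second time shortly afterwards.
    """
    namespace = 'qrouter-%s' % router_id  # standard L3 agent naming
    # 'conntrack -F' flushes the whole table in the namespace; a real
    # fix would likely delete selectively, e.g. 'conntrack -D -d <ip>'
    # for the affected addresses only.
    subprocess.run(
        ['ip', 'netns', 'exec', namespace, 'conntrack', '-F'],
        check=True)
```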
14:24:22 OK, next one
14:24:33 #link https://bugs.launchpad.net/neutron/+bug/1865891
14:24:34 Launchpad bug 1865891 in neutron "Race condition during removal of subnet from the router and removal of subnet" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
14:24:48 Yes, that one I'm working on now.
14:25:27 It seems that sometimes, if You plug a subnet into the router and remove the subnet in parallel, Your router port will end up as a port without fixed_ips (see the repro sketch at the end of this topic).
14:25:31 Alright
14:25:36 See my comment here:
14:25:36 https://bugs.launchpad.net/neutron/+bug/1865891/comments/2
14:25:38 Launchpad bug 1865891 in neutron "Race condition during removal of subnet from the router and removal of subnet" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
14:25:57 I can imagine another one: add a port as a router interface and concurrently delete the port.
14:26:23 I agree that maybe we will need to close it as "wontfix".
14:26:51 But I want to dig a bit more first and see what can be done there.
14:28:11 Yes, it is indeed an issue. We just want to find a balance. : )
14:28:42 OK, next one
14:28:43 #link https://bugs.launchpad.net/neutron/+bug/1865173
14:28:44 Launchpad bug 1865173 in neutron "Revision number not bumped after update of router's description" [Low,Confirmed]
14:29:33 Tested on stable/queens, it is not reproducible.
14:29:55 I was testing this on the master branch.
14:31:58 Alright, a regression on the router revision number.
14:32:08 Probably.
14:32:25 But I saw it only when I tried to bump the router's description.
14:32:59 Interesting...
14:33:00 Anyway, that's nothing really critical, so I think it can stay in our backlog until someone has some time to take a look at it.
14:33:24 np, makes sense to me
14:33:38 Next one:
14:33:40 #link https://bugs.launchpad.net/neutron/+bug/1865557
14:33:41 Launchpad bug 1865557 in neutron "Error reading log file from 'neutron-keepalived-state-change' in 'test_read_queue_send_garp'" [Low,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:33:59 Just a logging problem.
14:34:08 The fix is simple; it just fails the test case instead of raising an exception.
14:34:08 I found the problem only once, in a test.
14:34:15 As commented in the bug.
14:34:25 No no, we need to raise the exception.
14:34:26 So I've +2'd that.
14:34:57 https://review.opendev.org/#/c/710850/1/neutron/tests/functional/agent/l3/test_keepalived_state_change.py
14:34:58 OK, not an exception but a fail (the same effect).
14:35:03 Yes, I know.
14:35:22 Because we are executing a test, it's better to use self.fail.
14:35:31 But the core of this patch is the extra log.
14:35:38 OK, maybe I'm not being clear here.
14:36:07 The fix is to just fail the test case instead of raising an exception.
14:36:19 The effect is the same.
14:36:28 Yes.
14:36:34 The point is to increase the log info.
14:36:44 Now we have the device list with the IP addresses
14:36:50 inside the testing namespace.
14:37:36 ralonsoh, great, thanks for working on this.
14:37:43 yw
14:38:34 Alright, that's all bugs from me today.
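[Editor's note: a hypothetical repro sketch for the bug 1865891 race discussed above, using openstacksdk. The cloud name and resource names are illustrative assumptions, not taken from the bug report.]

```python
import threading

import openstack

conn = openstack.connect(cloud='devstack')  # assumed clouds.yaml entry

router = conn.network.find_router('router1')
subnet = conn.network.find_subnet('subnet1')


def plug():
    # Plug the subnet into the router; this creates the router port.
    try:
        conn.network.add_interface_to_router(router, subnet_id=subnet.id)
    except openstack.exceptions.SDKException:
        pass  # a conflict here is fine for a race reproducer


def unplug():
    # Delete the same subnet concurrently.
    try:
        conn.network.delete_subnet(subnet.id, ignore_missing=True)
    except openstack.exceptions.SDKException:
        pass


threads = [threading.Thread(target=plug), threading.Thread(target=unplug)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Losing the race can leave the router port with empty fixed_ips,
# which is what the bug describes.
```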
14:38:41 I would like to talk about one also
14:38:43 https://bugs.launchpad.net/neutron/+bug/1859832
14:38:44 Launchpad bug 1859832 in neutron "L3 HA connectivity to GW port can be broken after reboot of backup node" [Medium,In progress] - Assigned to LIU Yulong (dragon889)
14:39:02 OK
14:39:03 and the 2 alternative solutions proposed by me and liuyulong for it
14:39:50 liuyulong: generally, in Your approach I'm afraid of those errors about failing to send GARPs during failover
14:40:43 and the second potential issue is, IMO, whether we will increase the downtime during failover, as neutron-l3-agent has to notice that the failover happened and then bring the gateway up
14:40:58 so 2 questions:
14:41:18 1. Do You know if there is any way to delay sending the first GARP, to avoid those errors from keepalived?
14:41:53 2. You said that You tested it in Your cloud; how long is the downtime during failover with and without this patch?
14:42:41 I replied to the comments in the patch set. Allow me to quote it here:
14:42:45 We have run such code for a few months; no issue was found in the related logs. Keepalived will send GARP after a 60s delay by default [1]; by then the L3 agent should have done the qg-dev link-up action. Furthermore, during the first phase of keepalived GARPs we should not send GARPs with no interval; they could have a 1-second delay (vrrp_garp_interval [2]).
14:42:45 [1] https://github.com/openstack/neutron/blob/master/neutron/agent/linux/keepalived.py#L165
14:42:45 [2] https://www.keepalived.org/manpage.html
14:43:47 Your first question could have the answer: vrrp_garp_interval.
14:44:18 The link-up action is really quick; we have not seen any side effects from it.
14:44:49 It's quick, but if the router has many other things to do, isn't it queued to be processed like other events?
14:45:00 Also, the outside world has ARP too.
14:45:04 e.g. if many routers failed over at the same time
14:45:49 HA state change does not have a queue.
14:46:01 It's not like the L3 agent's main processing loop.
14:46:32 OK, but can we maybe move this "set device up" action to the neutron-keepalived-state-change monitor process?
14:46:53 so it would be done just after keepalived configures the VIP in the namespace
14:47:21 That "enqueue_state_change" actually does not have a "queue"; it's just a list of functions.
14:48:26 Yes, but how about doing it here: https://github.com/openstack/neutron/blob/master/neutron/agent/l3/keepalived_state_change.py#L89
14:48:37 slaweq, are we going to add net capabilities to the neutron-keepalived-state-change agent??
14:48:45 slaweq, I do not recommend it
14:48:56 this should be just a monitoring process
14:49:39 ralonsoh: look at the comment in https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha.py#L166
14:49:52 according to it, there were already such plans some time ago :)
14:49:56 That could be a heavy change.
14:50:10 I still don't recommend it
14:50:39 we'll have another service changing the net devices
14:50:51 this should be done in only one process: the L3 agent
14:51:04 We would need to pass router info from the l3-agent process to another monitor process.
14:51:13 we already have keepalived, which is also changing those interfaces
14:51:22 yes
14:51:39 but that is an external process, not managed/programmed by us
14:52:13 anyway, I really need to move forward with one of those potential fixes for this issue :)
14:52:20 I know
14:52:31 so first we should decide which one and then continue the work on it
14:53:33 I prefer one fix for all drivers.
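[Editor's note: for reference, a hedged sketch of the keepalived options under discussion. Values, the VIP, and interface names are illustrative placeholders; neutron's template in keepalived.py is the source of truth.]

```
global_defs {
    # Interval in seconds between the gratuitous ARP packets of a
    # burst; the vrrp_garp_interval option mentioned above.
    vrrp_garp_interval 1.0
}

vrrp_instance VR_1 {
    # The "non-preemptive" setting mentioned earlier: a recovered
    # node stays BACKUP instead of taking MASTER back.
    state BACKUP
    nopreempt
    interface ha-xxxxxxxx        # placeholder HA interface name
    virtual_router_id 1
    # Second GARP burst 60 s after the MASTER transition; this is the
    # 60 s default referred to in [1] above.
    garp_master_delay 60
    virtual_ipaddress {
        169.254.0.1/24 dev ha-xxxxxxxx
    }
}
```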
14:53:50 liuyulong: yes, that's an advantage of Your approach for sure
14:53:56 I still don't have a clear idea
14:54:05 sorry
14:54:18 What I'm afraid of is that this may cause longer failover times.
14:54:48 But apart from that, I think liuyulong's idea may really be better, as it's more generic.
14:54:57 And an L3 issue should be handled in its own scope by default.
14:55:17 slaweq, you have a QE team, I guess, as you mentioned earlier in this meeting. : )
14:55:33 so ralonsoh, what do You think about continuing with liuyulong's patch?
14:56:05 We also have a QA team; I will make sure they fully test the failover time.
14:56:15 I still need to check both again
14:56:34 ralonsoh: ok, thx
14:56:38 please check them
14:56:47 Another thing: I will try to add that "vrrp_garp_interval" to the VRRP config of the HA router.
14:57:09 liuyulong: and one more comment on this: can You remove the config option from it? I don't think we really need such a config option there.
14:57:18 It will be an independent change.
14:57:35 IMO this is an internal implementation detail of HA routers, and it shouldn't be configurable.
14:57:36 slaweq, sure
14:58:05 ok, liuyulong, please ping me when You add this vrrp_garp_interval option
14:58:12 I will test it again on my env
14:58:18 and thx for working on this
14:58:24 slaweq, the config option is for our local cloud; our operators would like to track code changes in the cloud.
14:58:37 slaweq, np
14:58:46 ok, that's all from my side
14:58:49 thx
14:59:02 All right, we are out of time.
14:59:12 Let's end here.
14:59:23 Thank you guys for attending.
14:59:25 Bye
14:59:27 bye
14:59:31 #endmeeting