15:01:07 <mlavalle> #startmeeting neutron_l3
15:01:08 <openstack> Meeting started Thu Jan 24 15:01:07 2019 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:11 <openstack> The meeting name has been set to 'neutron_l3'
15:02:01 <liuyulong> hi
15:03:31 <mlavalle> I think haleyb won't be attending today
15:05:34 <mlavalle> so let's get going
15:05:46 <mlavalle> #topic Announcements
15:07:30 <haleyb> mlavalle: i'll have one eye here :)
15:07:41 <mlavalle> haleyb: cool
15:08:55 <mlavalle> Our next milestone is Stein-3: Mar 4 to Mar 8
15:09:14 <mlavalle> #link https://releases.openstack.org/stein/schedule.html
15:09:55 <mlavalle> Any other announcements we should share with the team?
15:10:45 <liuyulong> I think it would be good to invite the authors of those big L3 refactor patches to attend this meeting.
15:11:06 <mlavalle> they are invited
15:11:27 <mlavalle> in fact we usually give them a topic during the meeting
15:11:50 <mlavalle> it's a good point, though
15:12:15 <mlavalle> ok, let's move on....
15:12:22 <mlavalle> #topic Bugs
15:13:21 <mlavalle> I've been working on https://bugs.launchpad.net/neutron/+bug/1795870
15:13:22 <openstack> Launchpad bug 1795870 in neutron "Trunk scenario test test_trunk_subport_lifecycle fails from time to time" [High,In progress] - Assigned to Miguel Lavalle (minsel)
15:14:06 <mlavalle> I have discovered that the problem is that the server is losing contact with the L3 agent on the controller node
15:14:23 <mlavalle> so routers are not being scheduled on the controller
15:15:11 <mlavalle> and as a consequence, the VMs scheduled on the controller don't get the metadata proxy, and they don't get fip connectivity either
15:15:45 <mlavalle> next step is to investigate why we are losing contact between the l3 agent and the neutron server
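For reference, the failure mode described above is visible from the server side: the Neutron server marks an L3 agent dead once its heartbeats stop arriving, and a dead agent stops having routers scheduled to it. A minimal sketch of checking that, assuming openstacksdk and a configured clouds.yaml entry (the cloud name "devstack" is illustrative):

    # Sketch: list L3 agents and flag any whose heartbeats have stopped,
    # which is how lost neutron-server <-> L3 agent contact shows up.
    # Assumes openstacksdk; the cloud name "devstack" is illustrative.
    import openstack

    conn = openstack.connect(cloud="devstack")
    for agent in conn.network.agents():
        if agent.agent_type != "L3 agent":
            continue
        state = "alive" if agent.is_alive else "DEAD"
        print(f"{agent.host}: {state} (last heartbeat: {agent.last_heartbeat_at})")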
15:16:12 <mlavalle> Next one is marked critical https://bugs.launchpad.net/neutron/+bug/1812404
15:16:14 <openstack> Launchpad bug 1812404 in neutron "test_concurrent_create_port_forwarding_update_port failed with InvalidIpForSubnet" [Critical,Fix released] - Assigned to Slawek Kaplonski (slaweq)
15:16:49 <mlavalle> ahh, but the fix was released right away: https://review.openstack.org/#/c/631815/
15:17:11 <mlavalle> good job liuyulong \o/
15:17:27 <liuyulong> Thanks
15:17:39 <slaweq> 697158
15:17:48 <slaweq> ups :)
15:17:51 <slaweq> sorry
15:19:36 <mlavalle> Next one critical is https://bugs.launchpad.net/neutron/+bug/1811873
15:19:37 <openstack> Launchpad bug 1811873 in neutron "get_l3_agent_with_min_routers fails with postgresql backend" [Critical,In progress] - Assigned to Andrew Karpow (andyonce)
15:21:24 <mlavalle> I wonder why this was marked critical....
15:21:56 <mlavalle> Proposed fix is here: https://review.openstack.org/#/c/631046/
15:22:17 <mlavalle> and I plan to dig deeper with the submitter into why the bug was marked critical
15:23:58 <mlavalle> Also from last week we have https://bugs.launchpad.net/neutron/+bug/1812168
15:23:59 <openstack> Launchpad bug 1812168 in neutron "Neutron doesn't delete Designate entry when port is deleted" [High,New] - Assigned to Miguel Lavalle (minsel)
15:24:19 <mlavalle> I just assigned it to myself and will investigate further
15:25:49 <mlavalle> Next one is https://bugs.launchpad.net/neutron/+bug/1812118
15:25:50 <openstack> Launchpad bug 1812118 in neutron "Neutron doesn't allow to update router external subnets" [Medium,New]
15:26:53 <mlavalle> I am not sure this is a legit bug. I'll work on it at the end of the meeting
15:27:11 <mlavalle> ok, it seems I have covered all the bugs I had
15:27:26 <mlavalle> haleyb: do you have any bugs to discuss today?
15:27:45 <mlavalle> liuyulong: do you?
15:28:01 <haleyb> mlavalle: nothing new i think
15:28:02 <liuyulong> yes
15:28:24 <mlavalle> go ahead liuyulong
15:28:26 <liuyulong> https://bugs.launchpad.net/neutron/+bug/1798475
15:28:27 <openstack> Launchpad bug 1798475 in neutron "Fullstack test test_ha_router_restart_agents_no_packet_lost failing" [High,In progress] - Assigned to LIU Yulong (dragon889)
15:29:04 <liuyulong> For this one, we've basically found the root cause, and the fix is under review now.
15:29:28 <liuyulong> https://review.openstack.org/#/c/627285/
15:30:05 <mlavalle> ah ok, I saw an abandoned patch in the bug
15:30:20 <liuyulong> Those patches are only for testing.
15:30:59 <mlavalle> is this patch ready for review?
15:31:16 <liuyulong> ^^yes, this is the fix. And I added a new function to give us more information about the fix
15:31:50 <mlavalle> cool, I'll take a look
15:31:52 <liuyulong> And I'm also keeping an eye on the Zuul failures.
15:32:20 <mlavalle> is this patch reported somewhere in the bug? if not, would you add a note pointing to it?
15:32:56 <liuyulong> Sure
15:33:43 <liuyulong> Done, and nothing else from me.
15:33:55 <mlavalle> thanks liuyulong :-)
15:34:01 <mlavalle> let's move on then
15:34:15 <mlavalle> #topic DVR openflow re-factoring
15:34:30 <mlavalle> do we have someone today to discuss this topic?
15:35:24 <liuyulong> Are those guys here?
15:35:42 <mlavalle> they usually attend, but apparently not today
15:35:50 <liuyulong> Anyway, I'd like to say something about this.
15:35:58 <mlavalle> go ahead
15:37:12 <liuyulong> Cloud deployments now all prefer to use OVS flows to implement traditional functionality, such as firewall, logging, etc.
15:37:24 <liuyulong> And now DVR.
15:38:18 <liuyulong> But those flow tables are growing well beyond what a human can process.
15:38:29 <mlavalle> that's a good point
15:39:12 <liuyulong> So I suggest we add some basic troubleshooting functions as part of such a refactor.
15:39:30 <mlavalle> it's a good suggestion
15:39:45 <mlavalle> do you suggest that the authors do that in their patch?
15:40:35 <liuyulong> Yes, that should be added to their dev list.
15:40:59 <mlavalle> would you leave a comment in the review outlining your suggestion?
15:41:27 <mlavalle> if you can provide implementation guidelines, that would be great
15:42:26 <liuyulong> OK, but I think this can be made more generic for all the flow-based functionality.
15:42:55 <mlavalle> that's ok
15:44:43 <mlavalle> anything else on this topic?
15:45:11 <liuyulong> No. We've hit some issues that are really difficult to locate, such as losing one VM's connectivity when we have 20K+ flows.
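The kind of basic troubleshooting helper suggested above could be built on OVS's own trace facility, which replays a synthetic packet through the flow tables and reports which rules it hits, rather than making a human read a 20K-flow dump. A minimal sketch, assuming local ovs-appctl access (the bridge name and match fields are illustrative):

    # Sketch of a flow-tracing helper: wrap "ovs-appctl ofproto/trace",
    # which simulates a packet through the bridge's flow tables and
    # prints the matched rules and resulting actions.
    import subprocess

    def trace_packet(bridge: str, flow: str) -> str:
        result = subprocess.run(
            ["ovs-appctl", "ofproto/trace", bridge, flow],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # Illustrative bridge and match: which flows would this VM's frame hit?
    print(trace_packet(
        "br-int",
        "in_port=1,dl_src=fa:16:3e:00:00:01,dl_dst=fa:16:3e:00:00:02",
    ))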
15:45:57 <liuyulong> Please move on, mlavalle
15:46:03 <mlavalle> great
15:46:11 <mlavalle> #topic On demand agenda
15:46:23 <mlavalle> I have one question for you liuyulong
15:46:52 <mlavalle> since we are planning to gradually expand your responsibility and oversight of the L3 sub-team
15:47:28 <mlavalle> would it benefit you if we moved this meeting, say, one hour earlier?
15:48:17 <liuyulong> That will be great for me, then I can go to bed an hour earlier. LOL
15:48:36 <mlavalle> I think haleyb is open to this possibility
15:48:49 <mlavalle> I am going to discuss it with him and slaweq tomorrow
15:49:06 <liuyulong> OK, thanks
15:49:26 <mlavalle> I'll keep you posted, liuyulong
15:49:38 <slaweq> one hour earlier is upgrades subteam meeting IIRC
15:50:17 <liuyulong> Yep, I noticed that
15:50:20 <mlavalle> slaweq: yeah, but the teams don't necessarily overlap
15:50:40 <slaweq> then it's fine for me :)
15:50:46 <slaweq> just wanted to mention
15:51:01 <mlavalle> we can also consider moving this one to another day, say Wednesday
15:51:09 <mlavalle> would that work for you liuyulong?
15:51:53 <liuyulong> Yes, both work for me
15:51:59 <mlavalle> cool
15:52:05 <mlavalle> I'll keep you posted
15:52:15 <mlavalle> that's it from me
15:52:27 <mlavalle> does anyone else want to discuss anything today?
15:53:52 <mlavalle> ok, thanks for attending
15:54:03 <mlavalle> good night liuyulong
15:54:17 <mlavalle> enjoy the rest of your day, haleyb, slaweq
15:54:22 <mlavalle> #endmeeting