15:00:28 <mlavalle> #startmeeting neutron_l3
15:00:29 <openstack> Meeting started Thu May 17 15:00:28 2018 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:32 <openstack> The meeting name has been set to 'neutron_l3'
15:00:36 <mlavalle> Hi there!
15:00:59 <haleyb> hi
15:02:01 <mlavalle> hey how are you?
15:02:36 <mlavalle> you've been super busy lately, haven't you?
15:03:04 <haleyb> good, yes, busy
15:03:16 <mlavalle> #topic Announcements
15:03:22 <haleyb> and even found an l3 issue yesterday thanks to slaweq
15:03:30 <mlavalle> that's cool
15:03:42 <mlavalle> Summit is next week
15:03:52 <mlavalle> so we will cancel next week's meeting
15:04:39 <haleyb> +1
15:04:43 <mlavalle> Next milestone is June 4 - 8
15:04:48 <mlavalle> Rocky 2
15:05:23 <mlavalle> and then Rocky 3 will be the week of July 23 - 28
15:05:31 <mlavalle> it will be here before we realize it
15:06:29 <mlavalle> any other announcements for today?
15:06:56 <mlavalle> ok, moving on
15:07:02 <mlavalle> #topic Bugs
15:07:29 <mlavalle> do you want to talk about that new L3 issue, haleyb ?
15:07:40 <haleyb> mlavalle: sure, think slaweq joined too
15:07:49 <slaweq> hi :)
15:08:20 <haleyb> in short, if you're running with l3-ha and restart the L3 agent on the backup/standby node, traffic to the master router is interrupted
15:08:30 <haleyb> it's an issue we've seen before
15:09:13 <mlavalle> do we have a bug filed for this?
15:09:14 <haleyb> the problem is that we toggle ipv6 forwarding on/off, which causes an MLDv2 packet; the switch notices it and starts sending packets out the wrong port
15:09:27 <slaweq> mlavalle: I'm just filing a bug
15:09:46 <haleyb> we just figured it out this morning, there is a WIP patch
15:11:03 <haleyb> https://review.openstack.org/569083
15:11:25 <slaweq> mlavalle: haleyb: bug filed: https://bugs.launchpad.net/neutron/+bug/1771841
15:11:26 <openstack> Launchpad bug 1771841 in neutron "packet loss during backup L3 HA agent restart" [Undecided,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
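For context, a minimal illustrative sketch of the toggle being described (this is not the code under review in 569083; the namespace, device name, and helper are hypothetical): the L3 agent flips the per-interface IPv6 forwarding sysctl, and per haleyb's explanation above that flip emits an MLDv2 packet, which the switch uses to learn the router MAC on the backup node's port.

    # Illustrative sketch only, not the contents of https://review.openstack.org/569083.
    # Flipping this sysctl is the operation that, per the discussion above,
    # triggers the MLDv2 packet and the switch relearning the port.
    import subprocess

    def set_ipv6_forwarding(namespace, device, enabled):
        """Set IPv6 forwarding for one device inside a router namespace."""
        value = '1' if enabled else '0'
        subprocess.check_call([
            'ip', 'netns', 'exec', namespace,
            'sysctl', '-w', 'net.ipv6.conf.%s.forwarding=%s' % (device, value),
        ])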
15:12:09 <mlavalle> thanks slaweq
15:12:12 <haleyb> this is one of those areas where we're good at moving the bug around, so we'll need to do some testing
15:12:35 <slaweq> yes, I was thinking about fullstack test for it
15:13:04 <slaweq> but as it might require a bit more work, I thought it could be done in a separate patch
15:13:13 <slaweq> after fix
15:13:51 <mlavalle> do we have a pressing need to fix this quickly?
15:15:00 <mlavalle> maybe we should wait for the test, to be sure this is the fix
15:15:06 <mlavalle> ???
15:15:07 <haleyb> yes, at least for us it is a priority
15:15:10 <slaweq> haleyb: what do You think? do we need to fix it quickly? IMO yes
15:16:00 <slaweq> I will try to add such test asap also
15:16:01 <mlavalle> in that case, let's merge it and then we create the fullstack test
15:16:41 <slaweq> I will add some UT for this patch now
15:16:41 <mlavalle> do we need unit tests in this patch?
15:16:48 <slaweq> and then will work on fullstack test
15:16:58 <mlavalle> ahh, we were thinking of the same thing
15:16:58 <slaweq> mlavalle: yes :)
15:17:04 <slaweq> LOL
15:17:32 <haleyb> yes, and we will do some additional testing downstream.  i'm still looking at the related codepaths to make sure we don't break anything, or see if there is a better way
15:17:50 <mlavalle> haleyb: in your experience, is fullstack a good way to test this?
15:17:51 <slaweq> haleyb: thx
15:18:02 <mlavalle> or would a scenario test be better?
15:18:37 <slaweq> mlavalle: but this particular case requires an agent restart - can I do that in a scenario test?
15:18:53 <slaweq> I think scenario tests are end-to-end tests from the user perspective only, no?
15:18:59 <mlavalle> ahh, right
15:19:16 <slaweq> that's why I thought about fullstack
15:19:21 <mlavalle> yeah, probably fullstack is the right way to go
15:19:21 <haleyb> i think with fullstack we could trigger it since even pinging the router external IP gets interrupted
15:19:47 <mlavalle> ok, carry on, then
15:19:59 <haleyb> i'm of course assuming in a test environment we could trigger it, with real hardware we can
15:20:11 <haleyb> darn switches
15:20:26 <slaweq> haleyb: I triggered it on virtual environment yesterday
15:20:33 <slaweq> it was all on one host and vms :)
15:21:02 <haleyb> slaweq: ah, so even the virtual switch was learning MACs, forgot about that
15:21:16 <slaweq> yes, so it should be fine
15:21:31 <slaweq> but of course I will test it
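As a rough illustration of the fullstack-style check discussed above (the agent fixture calls are hypothetical; the real neutron fullstack framework provides its own environment and agent fixtures), the idea is to keep pinging the master router's external IP while the backup L3 agent restarts and assert that nothing is lost:

    import subprocess
    import threading

    def count_lost_pings(target_ip, stop_event, losses):
        """Ping target_ip once per probe until stop_event is set, recording failures."""
        while not stop_event.is_set():
            rc = subprocess.call(['ping', '-c', '1', '-W', '1', target_ip],
                                 stdout=subprocess.DEVNULL)
            if rc != 0:
                losses.append(1)

    def test_backup_agent_restart_keeps_master_reachable(env, external_ip):
        stop, losses = threading.Event(), []
        pinger = threading.Thread(target=count_lost_pings,
                                  args=(external_ip, stop, losses))
        pinger.start()
        env.get_backup_l3_agent().restart()  # hypothetical fixture call
        stop.set()
        pinger.join()
        assert not losses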
15:22:52 <mlavalle> anything else on this bug?
15:23:33 <haleyb> nope
15:23:49 <mlavalle> thanks slaweq, haleyb
15:23:59 <mlavalle> Next bug is https://bugs.launchpad.net/neutron/+bug/1766701
15:24:00 <openstack> Launchpad bug 1766701 in neutron "Trunk Tests are failing often in dvr-multinode scenario job" [High,Confirmed] - Assigned to Miguel Lavalle (minsel)
15:24:30 <mlavalle> I think this is the same one we discussed during the CI meeting this past Tuesday
15:24:53 <mlavalle> I am investigating what is going on here
15:25:21 <mlavalle> I don't have anything to report yet
15:26:01 <mlavalle> Next one is https://bugs.launchpad.net/neutron/+bug/1717302
15:26:02 <openstack> Launchpad bug 1717302 in neutron "Tempest floatingip scenario tests failing on DVR Multinode setup with HA" [High,Confirmed] - Assigned to Brian Haley (brian-haley)
15:27:09 <mlavalle> For this one I've been making progress creating a DVR + HA environment in which to reproduce it
15:27:31 <mlavalle> I already have the environment. Haven't been able to reproduce it yet
15:27:59 <haleyb> now that i have a setup running i'll try again as well
15:27:59 <mlavalle> haleyb: do you want me to assign it to me?
15:28:21 <mlavalle> in that case, I'll leave the bug assigned to you
15:28:47 <mlavalle> That's all I have today
15:28:54 <mlavalle> any other bugs we should discuss?
15:29:01 <haleyb> damn, should have kept my mouth shut :)
15:29:07 <mlavalle> LOL
15:29:52 <haleyb> i don't see any new dvr ones
15:30:01 <mlavalle> ok
15:30:07 <mlavalle> moving on then
15:30:15 <mlavalle> #topic Openflow DVR
15:30:36 <mlavalle> it seems to me that the Intel guys have given up on this
15:31:04 <mlavalle> They haven't shown up over the past month or so to the meeting
15:31:19 <mlavalle> I still would like to pursue this
15:31:28 <mlavalle> what do you think haleyb?
15:32:02 <haleyb> mlavalle: yes on both counts, i'm hoping they have the time
15:32:32 <mlavalle> mhhh, I've kind of given up hope
15:32:55 <mlavalle> let's give them until the next meeting, after the summit
15:33:15 <mlavalle> if they don't give signs of life, let's assume they won't work on it anymore
15:33:33 <mlavalle> and at that point I will deploy the patch in my environment and start playing with it
15:33:42 <mlavalle> sounds fair?
15:34:04 <haleyb> fair enough, don't know where you will find the time...
15:34:36 <mlavalle> can I borrow your time making machine?
15:34:50 <mlavalle> you were boasting about it a few weeks ago
15:34:52 <janzian> I second this request :)
15:36:24 <haleyb> mlavalle: time machine broken, sorry :)
15:37:05 <haleyb> the Mr. Fusion is not working
15:37:15 <mlavalle> LOL
15:37:19 <mlavalle> ok, moving on
15:37:28 <mlavalle> #topic Port Forwarding
15:38:15 <mlavalle> The API definition landed recently:
15:38:19 <mlavalle> https://review.openstack.org/#/c/535638/
15:38:53 <mlavalle> and we have two patches on the Neutron side:
15:38:57 <mlavalle> https://review.openstack.org/#/c/535647/
15:39:14 <mlavalle> https://review.openstack.org/#/c/533850
15:39:35 <mlavalle> They are not passing zuul quite yet
15:39:46 <mlavalle> but they are moving in the right direction
15:39:56 <mlavalle> so keep an eye on them, please
15:40:01 <haleyb> ack
15:40:05 <mlavalle> and provide feedback if possible
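For anyone reviewing those patches, a sketch of what creating a port forwarding could look like under the proposed API (a sub-resource of a floating IP). The field names follow the API definition patch linked above and may still change before merge; the endpoint, token, and UUIDs are placeholders.

    import json
    import requests

    NEUTRON = 'http://controller:9696/v2.0'          # placeholder endpoint
    HEADERS = {'X-Auth-Token': 'REPLACE_WITH_TOKEN'}  # placeholder token
    FIP_ID = 'REPLACE_WITH_FLOATING_IP_UUID'

    body = {'port_forwarding': {
        'protocol': 'tcp',
        'external_port': 2222,
        'internal_port': 22,
        'internal_ip_address': '10.0.0.10',
        'internal_port_id': 'REPLACE_WITH_PORT_UUID',
    }}

    resp = requests.post('%s/floatingips/%s/port_forwardings' % (NEUTRON, FIP_ID),
                         headers=HEADERS, json=body)
    print(json.dumps(resp.json(), indent=2))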
15:40:18 <mlavalle> #topic
15:40:29 <mlavalle> #undo
15:40:30 <openstack> Removing item from minutes: #topic
15:40:37 <mlavalle> #topic Open Agenda
15:40:50 <mlavalle> any thing else we should discuss today?
15:41:40 <haleyb> nope
15:42:12 <mlavalle> ok, thanks for attending
15:42:14 <mlavalle> o/
15:42:18 <mlavalle> #endmeeting