14:01:50 <slaweq> #startmeeting networking
14:01:50 <opendevmeet> Meeting started Tue Jun  1 14:01:50 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:51 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:53 <opendevmeet> The meeting name has been set to 'networking'
14:02:22 <rubasov> hi
14:02:24 <lajoskatona> Hi
14:02:26 <jlibosva> hi
14:02:29 <ralonsoh> hi
14:02:29 <amotoki> o/
14:03:31 <obondarev> hi
14:03:43 <bcafarel> o/
14:04:18 <slaweq> ok, let's start
14:04:58 <slaweq> welcome to our new IRC "home" :)
14:05:00 <slaweq> #topic announcements
14:05:43 <slaweq> Xena cycle calendar https://releases.openstack.org/xena/schedule.html
14:06:02 <njohnston> o/
14:06:12 <slaweq> we have about 5 more weeks before Xena-2 milestone
14:06:46 <slaweq> and I want to ask everyone to try to review some of the open specs, so we can get them merged before milestone 2 and move on with the implementation later
14:07:01 <slaweq> next one
14:07:05 <slaweq> Move to OFTC is done: http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022724.html
14:07:14 <slaweq> You probably all know that already as You are here
14:07:27 <bcafarel> :)
14:07:34 <slaweq> but just for the sake of announcements
14:07:46 <slaweq> regarding that migration I have a couple of related things
14:07:59 <slaweq> if You can, please stay a few more days on the old channel, and redirect people to the new one if necessary,
14:08:19 <slaweq> but please remember not to explicitly say the name of the new server on the freenode channel
14:08:35 <slaweq> as that may cause our channel there to be taken over
14:08:58 <slaweq> neutron docs are updated: https://review.opendev.org/c/openstack/neutron/+/793846 - if there are any other places to change, please ping me or send a patch to update them
14:09:30 <slaweq> and last thing related to that migration
14:09:34 <mlavalle> o/
14:09:43 <slaweq> hi mlavalle - You were fast to connect here :)
14:10:00 <mlavalle> LOL, took me a few minutes to figure it out
14:10:04 <slaweq> :)
14:10:28 <slaweq> tc asked teams to consider moving from the meeting rooms to the project channels for meetings
14:10:39 <slaweq> so I wanted to ask You: what do You think about it?
14:10:56 <slaweq> do You think we should still have all meetings in the #openstack-meeting-X channels?
14:11:08 <slaweq> or maybe we can move all meetings to #openstack-neutron?
14:11:32 <amotoki> the only downside I see is that the openstack bot could post messages about reviews during the meeting. Otherwise sounds good.
14:12:02 <slaweq> amotoki: do You know if that is an issue for other teams maybe?
14:12:12 <amotoki> slaweq: unfortunately no
14:12:18 <slaweq> I know that some teams already have their meetings in the team channels
14:13:04 <obondarev> some ongoing discussions in the team channel would have to be paused during meeting time
14:13:10 <lajoskatona> Perhaps that would make it easier to gather people for the meetings
14:13:32 <slaweq> obondarev: yes, but tbh, do we have a lot of them? :)
14:13:48 <bcafarel> overall I think #openstack-neutron is generally "quiet" enough now to have meetings in it
14:14:04 <bcafarel> but no strong opinion, I will just update my tabs :) (or follow the PTL pings)
14:14:40 <obondarev> just trying to figure out the downsides, I don't think it should stop us :)
14:14:48 <amotoki> obondarev's point and mine are potential downsides. generally speaking, the #-neutron channel does not have much traffic, so I am fine with having meetings in the #-neutron channel.
14:14:55 <rubasov> I'm also okay with both options
14:15:12 <obondarev> +1
14:15:45 <slaweq> ok, I will try to get more info from the tc on why they want teams to migrate
14:15:56 <njohnston> The octavia team has met in their channel for a long time now without issues
14:16:10 <slaweq> njohnston: thx, that's good to know
14:16:35 <slaweq> do You maybe know why the tc wants teams to do such a migration? is there any issue with using the -meeting channels?
14:16:46 * slaweq is just curious
14:17:12 <njohnston> I'm not sure, I did not attend that discussion.
14:17:19 <slaweq> ok
14:18:43 <slaweq> so I will try to get that info, and as there are no strong opinions against it, I will update our meetings accordingly
14:19:06 <slaweq> so please expect an email with info about it during this week :)
14:19:24 <slaweq> ok, last reminder
14:19:38 <slaweq> Nominations for Y release name proposals are still open http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html
14:19:52 <slaweq> that's all announcements/reminders from me
14:20:02 <slaweq> do You have anything else You want to share now?
14:20:06 <amotoki> I have one thing I would like to share regarding the OVN switch and horizon. I don't want to dig into it in detail here though.
14:20:16 <amotoki> horizon sees consistent failures in the integration tests after switching the network backend to OVN. The failures happen in router related tests.
14:20:27 <amotoki> When we configure the job to use the OVS backend, the failures (at least the router related ones) are gone.
14:20:41 <amotoki> We don't understand what happens in detail yet, but if you notice some difference between the OVS/L3 and OVN cases please let vishal (PTL) or me know.
14:20:49 <amotoki> that's all I would like to share.
14:21:24 <slaweq> amotoki: there are for sure differences, as OVN/L3 doesn't support the "distributed" attribute - routers are distributed by default
14:21:51 <slaweq> and the other difference which I know of is that ovn/l3 doesn't have an l3 agent scheduler
14:22:07 <slaweq> so all "agent" related api calls will probably fail
14:22:28 <amotoki> slaweq: yeah, I know. I think horizon needs to investigate in more detail what's happening.
14:22:32 <slaweq> amotoki: do You have a link to the failed job? I can take a look after the meeting
14:23:23 <slaweq> amotoki: we also had similar issue with functional job in osc
14:23:27 <amotoki> I think https://zuul.openstack.org/builds?job_name=horizon-integration-tests&project=openstack%2Fhorizon&branch=master&pipeline=periodic is the easiest way.
14:23:34 <slaweq> I sent patch https://review.opendev.org/c/openstack/python-openstackclient/+/793142 to address issues in that job
14:23:46 <amotoki> I saw it too.
14:23:49 <slaweq> and there are also some fixes needed on the neutron side (see the depends-on in the linked patch)
14:24:17 <amotoki> anyway I will continue to follow up on the failure details :)
14:25:10 <slaweq> ok
14:25:48 <slaweq> #topic Blueprints
14:26:00 <slaweq> Xena-2 BPs: https://bugs.launchpad.net/neutron/+milestone/xena-2
14:26:11 <slaweq> do You have any updates about any of them?
14:27:28 <slaweq> I have only one short update about https://blueprints.launchpad.net/neutron/+spec/secure-rbac-roles
14:27:35 <slaweq> almost all UT patches are now merged
14:27:39 <slaweq> only one left
14:29:29 <bcafarel> nice work
14:30:09 <slaweq> now I need to check how to switch some devstack based job to use those new defaults
14:30:16 <slaweq> and then to check what will be broken
14:30:35 <slaweq> but tbh I'm not sure if we should mark the BP as completed now or keep it open?
14:30:44 <slaweq> all new default rules are there already
14:31:06 <slaweq> so IMO we can simply treat any issues which we find as bugs and report them on LP
14:31:08 <slaweq> wdyt?
14:31:23 <obondarev> makes sense
14:31:24 <slaweq> IMO it would be easier to track them
14:31:40 <bcafarel> well the feature is implemented, the code is in there etc, so marking the BP as completed makes sense
14:32:02 <ralonsoh> yeah, we can open new LP for new issues
14:32:05 <amotoki> makes sense to me too.
14:32:14 <slaweq> k, thx
14:32:49 <slaweq> and that's all the updates from my side regarding BPs
14:33:05 <slaweq> if there are no other updates, I think we can move on
14:33:09 <slaweq> to the next topic
14:33:13 <slaweq> #topic Bugs
14:33:20 <slaweq> I was bug deputy last week. Report http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022776.html
14:33:44 <slaweq> there are a couple of bugs which I wanted to highlight now
14:33:50 <slaweq> first of all, l3-ha issue:
14:33:55 <slaweq> https://bugs.launchpad.net/neutron/+bug/1930096 - this is related to L3 HA and seems to be a pretty serious issue which needs to be checked.
14:33:56 <opendevmeet> Launchpad bug 1930096 in neutron "Missing static routes after neutron-l3-agent restart" [High,Confirmed]
14:34:32 <slaweq> according to my test, it seems that this may be related to the change which sets the interfaces DOWN on the standby node
14:34:57 <slaweq> and brings them UP when the router becomes active on the node
14:35:40 <slaweq> I saw locally in the logs that keepalived was complaining that there is no route to host, and the extra routes were not configured in the qrouter namespace
14:35:53 <slaweq> if anyone has cycles, please try to check that
14:36:07 <slaweq> I may try but not this week for sure
14:36:10 <ralonsoh> I can check it
14:36:22 <slaweq> ralonsoh: thx
14:36:56 <obondarev> ralonsoh has infinite cycles it seems :)
14:37:08 <slaweq> obondarev: yes, that's true
14:37:12 <slaweq> :)
14:37:52 <mlavalle> There are several ralonsoh s in parallel universes all working together
14:38:10 <slaweq> mlavalle: that can be the only reasonable explanation ;)
14:38:18 <ralonsoh> hehehe
14:38:23 <amotoki> :)
14:38:43 <slaweq> ok, next one
14:38:49 <slaweq> https://bugs.launchpad.net/neutron/+bug/1929523 - this one is gate blocker
14:38:51 <opendevmeet> Launchpad bug 1929523 in neutron "Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time" [High,Confirmed]
14:39:10 <slaweq> and as our gate isn't in good condition now, we need to take a look at that
14:39:22 <ralonsoh> there was a related patch from Candido
14:39:23 <slaweq> I may try to check that tomorrow or on Friday
14:39:29 <ralonsoh> I'll send you the link
14:39:38 <slaweq> ralonsoh: but I'm not sure if it's the same issue really
14:39:50 <slaweq> please send me the link and I will take a look
14:39:56 <ralonsoh> it is, the DNS entries
14:40:17 <slaweq> would be great :)
14:40:33 <ralonsoh> https://review.opendev.org/c/openstack/tempest/+/779756
14:40:46 <ralonsoh> of course, maybe we need something more
14:42:05 <slaweq> thx ralonsoh
14:42:10 <slaweq> I will take a look at it
14:42:20 <slaweq> and the last bug from me:
14:42:45 <slaweq> https://bugs.launchpad.net/neutron/+bug/1929821 - that seems to be a low-hanging-fruit bug, so maybe there is someone who wants to take it :)
14:42:46 <opendevmeet> Launchpad bug 1929821 in neutron "[dvr] misleading fip rule priority not found error message" [Low,New]
14:43:57 <slaweq> and those are all the bugs which I had for today
14:44:02 <slaweq> any other issues You want to discuss?
14:44:21 <mlavalle> not me
14:45:19 <slaweq> bug deputy this week is hongbin
14:45:25 <slaweq> he is aware of it and should be ok
14:45:33 <slaweq> next week will be haleyb's turn
14:45:43 <haleyb> hi
14:46:00 <slaweq> hi haleyb :)
14:46:07 <mlavalle> hi haleyb!
14:46:18 <slaweq> are You ok with being bug deputy next week?
14:46:43 <haleyb> yes, that's fine
14:46:51 <slaweq> great, thx
14:46:58 <slaweq> so, let's move on
14:47:05 <slaweq> #topic CLI/SDK
14:47:12 <slaweq> OSC patch https://review.opendev.org/c/openstack/python-openstackclient/+/768210 is merged now
14:47:36 <slaweq> so with the next osc release we should have the possibility to send custom parameters to neutron
14:47:46 <bcafarel> \o/
14:47:51 <slaweq> thanks all for reviews of that patch :)
14:47:51 <amotoki> really nice
14:48:03 <slaweq> I proposed also neutronclient patch https://review.opendev.org/c/openstack/python-neutronclient/+/793366
14:48:13 <slaweq> please check that when You have some time
14:48:36 <slaweq> and I have 1 more question - is there anything else we should do to finish that effort?
14:49:03 <amotoki> generally no, but I'd like to check the --clear behavior consistency in detail. in some APIs, None needs to be sent to clear the list (instead of []).
14:49:07 <slaweq> AFAIR that OSC thing was the last missing piece but maybe I missed something
14:49:14 <amotoki> I am not sure whether it should be handled via the CLI or the API itself.
14:50:25 <slaweq> amotoki: You mean to check if it works ok for all our resources already?
14:51:02 <amotoki> slaweq: yeah, I would like to check some APIs which use None to clear the list, but I don't think it is a blocking issue for the final call.
14:51:28 <slaweq> amotoki: great, if You find any issues, please report a bug for OSC and we can fix them
14:51:35 <amotoki> slaweq: sure
14:51:43 <slaweq> amotoki++ thx
14:51:57 <slaweq> so we are finally approaching the EOL for the neutronclient CLI :)
14:52:01 <slaweq> please be aware
14:52:41 <slaweq> and those are all the topics from me for today
14:52:53 <obondarev> raise your hand if you still use neutronclient o/ :D
14:53:03 <slaweq> obondarev: o/
14:53:13 <slaweq> but it's time to move to the OSC now :)
14:53:25 <obondarev> yeah, sad but true
14:53:31 <amotoki> still using the neutron CLI in a mitaka env I need to take care of :p
14:53:45 <slaweq> amotoki: WOW, that's an old one
14:53:58 <slaweq> I hope You don't need to do backports from master to that mitaka branch :P
14:54:31 <amotoki> hehe. it's not so surprising in a telco env
14:54:36 <slaweq> :)
14:54:43 <bcafarel> true
14:54:45 <slaweq> ok, thx for attending the meeting
14:54:49 <slaweq> and have a great week
14:54:50 <mlavalle> o/
14:54:51 <ralonsoh> bye
14:54:53 <slaweq> o/
14:54:54 <amotoki> o/
14:54:54 <rubasov> o/
14:54:54 <bcafarel> o/
14:54:55 <slaweq> #endmeeting