15:00:15 <slaweq> #startmeeting neutron_ci
15:00:17 <slaweq> hi
15:00:20 <openstack> Meeting started Wed Sep  2 15:00:15 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:24 <openstack> The meeting name has been set to 'neutron_ci'
15:01:13 <bcafarel> o/
15:01:37 <lajoskatona> Hi bcafarel!
15:02:04 <slaweq> hi bcafarel and lajoskatona :)
15:02:10 <bcafarel> hey lajoskatona slaweq :)
15:02:19 <bcafarel> setuptools 50 is capped now so we can say CI is good right?
15:02:19 <slaweq> I think we can start as ralonsoh is on pto today
15:02:20 <lajoskatona> Hi everybody :-)
15:02:39 <slaweq> bcafarel: yes, I think we can possibly say that ;)
15:03:43 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:03:55 <slaweq> #topic Actions from previous meetings
15:04:01 <slaweq> ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:04:02 <openstack> Launchpad bug 1886807 in neutron "neutron-ovn-tempest-full-multinode-ovs-master job is failing 100% times" [High,Confirmed] - Assigned to Maciej Jozefczyk (maciejjozefczyk)
15:04:12 <slaweq> I will assign it to ralonsoh for next week
15:04:18 <slaweq> #action ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:04:27 <slaweq> next one
15:04:29 <slaweq> slaweq to ask jlibosva and lucasgomes if they can check https://bugs.launchpad.net/neutron/+bug/1890445
15:04:29 <openstack> Launchpad bug 1890445 in neutron "[ovn] Tempest test test_update_router_admin_state failing very often" [Critical,Confirmed]
15:04:34 <slaweq> I asked them today about it :)
15:04:43 <slaweq> so I hope jlibosva will check this issue this week
15:04:51 <slaweq> next one
15:04:53 <slaweq> slaweq to propose neutron-tempest-plugin switch to focal nodes
15:05:00 <slaweq> Patch https://review.opendev.org/#/c/748367
15:05:17 <slaweq> now I'm waiting for new results after all these setuptools issues are fixed
15:05:21 <slaweq> I hope it will be ok now
15:06:45 <bcafarel> nice, I had not seen that one
15:06:46 <slaweq> ralonsoh to check issue with pep8 failures like https://zuul.opendev.org/t/openstack/build/6c8fbf9b97b44139bf1d70b9c85455bb
15:07:11 <slaweq> that is the last one and I will assign it to ralonsoh for next week as I saw such an issue even today
15:07:17 <slaweq> #action ralonsoh to check issue with pep8 failures like https://zuul.opendev.org/t/openstack/build/6c8fbf9b97b44139bf1d70b9c85455bb
15:08:26 <slaweq> and those are all the actions from last week
15:08:34 <slaweq> let's move to the next topic
15:08:35 <slaweq> #topic Switch to Ubuntu Focal
15:08:40 <slaweq> Etherpad: https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal
15:08:56 <slaweq> except for that neutron-tempest-plugin patch, I didn't check anything else this week
15:08:59 <slaweq> did You check anything maybe?
15:09:15 <bcafarel> small fixes https://review.opendev.org/#/c/734304/ and https://review.opendev.org/#/c/748168/
15:09:31 <bcafarel> (sorry with rebasing I lost all the +2 on first one)
15:09:49 <bcafarel> it will get functional and lower-constraints out of the "to fix" list
15:10:35 <slaweq> +2'ed both already
15:11:02 <lajoskatona> I checked bagpipe last week but that's failing with some volume tests
15:11:13 <lajoskatona> https://64274739a94af6e87bea-e6e77a9441ae9cccde1b8ed58f97fc24.ssl.cf5.rackcdn.com/739672/3/check/networking-bagpipe-tempest/4a5e824/testr_results.html
15:12:16 <slaweq> lajoskatona: is it the same issue every time?
15:13:17 <lajoskatona> for the last few runs, yes
15:14:06 <slaweq> can You ask the cinder team to check that?
15:14:42 <lajoskatona> yes, I just rechecked and if it still exists I'll ask them with fresh logs
15:14:49 <slaweq> lajoskatona: ok, thx
15:15:18 <slaweq> ok, I think we can move on to the next topic
15:15:21 <slaweq> #topic standardize on zuul v3
15:15:26 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop
15:15:56 <slaweq> I made only small progress on the neutron-grenade-ovn job last week
15:16:10 <slaweq> I will try to continue it this week if I have some time
15:16:34 <tosky> of the 3 jobs in openstack-zuul-jobs, 2 have been removed, and one seems to be an artifact of my scripts (legacy-periodic-neutron-dynamic-routing-dsvm-tempest-with-ryu-master-scenario-ipv4) because it's not defined in master
15:17:21 <slaweq> tosky: great, thx for info
15:17:30 <bcafarel> tosky: for the last one that was a recent fix, I think I cleaned it at the same time as the unused playbooks
15:17:33 <slaweq> so we still have neutron-grenade-ovn and networking-odl-grenade to finish
15:17:38 <slaweq> and then midonet jobs
15:17:47 <tosky> the big open question about midonet :)
15:17:58 <slaweq> there are 2 guys who want to maintain this project
15:18:06 <bcafarel> there was some recent discussion on the ML about it
15:18:38 <slaweq> and I already discussed with them that making those jobs work on Ubuntu 20.04 and migrating to zuulv3 is the most important thing for now
15:20:03 <lajoskatona> I updated my todo list to go back to odl grenade....
15:20:26 <slaweq> thx lajoskatona
15:21:19 <slaweq> with that I think we can move on
15:21:23 <slaweq> next topic is
15:21:25 <slaweq> #topic Stable branches
15:21:39 <slaweq> Ussuri dashboard: http://grafana.openstack.org/d/pM54U-Kiz/neutron-failure-rate-previous-stable-release?orgId=1
15:21:40 <slaweq> Train dashboard: http://grafana.openstack.org/d/dCFVU-Kik/neutron-failure-rate-older-stable-release?orgId=1
15:21:49 <slaweq> bcafarel: any issues with stable branches?
15:22:38 <bcafarel> I have a few rechecks in the backlog after the setuptools issue, but from what I saw, all branches are back in working order :)
15:22:51 <slaweq> that's good news
15:23:39 <slaweq> so I think we can quickly move to the next topic
15:23:41 <slaweq> #topic Grafana
15:23:45 <slaweq> #link http://grafana.openstack.org/d/Hj5IHcSmz/neutron-failure-rate?orgId=1
15:24:23 <slaweq> in general, after the setuptools issue, things are getting back to normal now
15:25:12 <slaweq> the pep8 job has been failing too often recently but IMO it's related to the issue which is already assigned to ralonsoh
15:25:39 <slaweq> and the other issue is with periodic jobs
15:26:35 <slaweq> and TBH I didn't find any specific new issues in functional/fullstack or scenario jobs
15:27:06 <slaweq> the only issue worth mentioning here is related to the periodic job openstack-tox-py36-with-ovsdbapp-master
15:27:10 <bcafarel> I am not complaining that you did not find any new issues
15:27:22 <slaweq> 2 tests have been failing every day since 28.08
15:27:40 <slaweq> so we should definitely check and fix that before we release the new ovsdbapp version
15:27:50 <slaweq> and we should release the new version this week :/
15:27:59 <slaweq> does anyone want to check that?
15:28:33 <slaweq> most likely https://review.opendev.org/#/c/745746/ is the culprit of that issue
15:28:50 <bcafarel> I can try to look into it at the end of this week (or Monday probably)
15:29:16 <slaweq> bcafarel: Monday may be too late as this week is the final release of non-client libraries
15:29:30 <slaweq> so tomorrow we should do the ovsdbapp release for Victoria
15:29:39 <slaweq> maybe I will ask otherwiseguy to check that issue
15:29:41 <bcafarel> :/
15:29:54 <bcafarel> yeah I was about to suggest dragging otherwiseguy in
15:31:51 <slaweq> I just pinged him about it
15:33:28 <slaweq> ok, that's all from me for today
15:33:39 <slaweq> do You have anything else regarding our ci?
15:34:10 <bcafarel> nothing from me
15:35:08 <slaweq> lajoskatona: anything else from You?
15:35:23 <lajoskatona> nothing
15:35:34 <slaweq> ok, so let's finish the meeting earlier today
15:35:37 <slaweq> thx for attending
15:35:45 <slaweq> o/
15:35:47 <slaweq> #endmeeting