15:00:04 <slaweq> #startmeeting neutron_ci
15:00:05 <openstack> Meeting started Wed Sep 30 15:00:04 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:08 <openstack> The meeting name has been set to 'neutron_ci'
15:00:54 <ralonsoh> hi
15:01:32 <lajoskatona> Hi
15:01:35 <bcafarel> o/
15:03:22 <slaweq> ok, let's start
15:03:32 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:03:53 <slaweq> #topic Actions from previous meetings
15:04:13 <slaweq> ralonsoh to check timing out neutron-ovn-tempest-full-multinode-ovs-master jobs - bug https://bugs.launchpad.net/neutron/+bug/1886807
15:04:15 <openstack> Launchpad bug 1886807 in neutron "neutron-ovn-tempest-full-multinode-ovs-master job is failing 100% times" [High,Confirmed] - Assigned to Maciej Jozefczyk (maciejjozefczyk)
15:04:28 <ralonsoh> please read comment in https://bugs.launchpad.net/neutron/+bug/1886807/comments/3
15:04:50 <ralonsoh> long story short: I think the error is related to https://bugs.launchpad.net/nova/+bug/1863021
15:04:53 <openstack> Launchpad bug 1863021 in OpenStack Object Storage (swift) "[SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError" [Undecided,In progress] - Assigned to Chris MacNaughton (chris.macnaughton)
15:05:12 <ralonsoh> and I pushed https://review.opendev.org/#/c/755256/, I think this should fix this error
15:05:28 <ralonsoh> but I don't think this is related to OVN or neutron
15:05:44 <slaweq> that's great news, thx ralonsoh :)
15:05:47 <ralonsoh> yw
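(For context on bug 1863021: the "assert len(_active) == 1" AssertionError typically shows up when eventlet monkey patching happens after the threading module has already been imported, so the interpreter's thread bookkeeping and eventlet's disagree. Below is a minimal sketch of the safe ordering; it is only an illustration of the general pattern, not the content of review 755256.)

```python
# Illustrative only: the usual way to avoid the "assert len(_active) == 1"
# AssertionError from bug 1863021 is to monkey patch before anything imports
# threading. This is NOT the actual fix proposed in review 755256.
import eventlet

# Patch the standard library first so threading's _active bookkeeping stays
# consistent with eventlet's green threads.
eventlet.monkey_patch()

import threading  # noqa: E402  (deliberately imported after monkey patching)


def worker():
    print("running inside a green-thread-backed threading.Thread")


t = threading.Thread(target=worker)
t.start()
t.join()
```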
15:05:57 <slaweq> ok
15:06:00 <slaweq> next one is
15:06:02 <slaweq> slaweq to report bug with neutron-dhcp-agent and Fedora 32
15:06:11 <slaweq> I didn't, but the bug is reported: https://bugs.launchpad.net/neutron/+bug/1896945
15:06:12 <openstack> Launchpad bug 1896945 in neutron "dnsmasq >= 2.81 not responding to DHCP requests with current q-dhcp configs" [High,In progress] - Assigned to Dan Radez (dradez)
15:06:18 <slaweq> and radez already works on that
15:06:30 <slaweq> I think he found that it is a "fedora only" issue
15:06:51 <lajoskatona> yeah this was the one I checked with ubuntu 20.10, and on that it worked
15:07:21 <slaweq> and that is also good news for us, at least from u/s PoV :)
15:07:21 <radez> o/ sry stepped away and missed the start
15:07:33 <slaweq> radez: hi
15:07:49 <slaweq> don't worry, I was just saying that You are working on this issue with dnsmasq and F32
15:08:11 <radez> saw that thx :)
15:08:30 <slaweq> thank You for taking care of this issue :)
15:08:43 <slaweq> and with that I think we can move on to the next topic
15:08:45 <slaweq> which is
15:08:49 <slaweq> #topic Switch to Ubuntu Focal
15:09:00 <slaweq> Etherpad: https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal
15:09:04 <slaweq> I updated it today
15:09:16 <slaweq> we are basically done for most of the jobs
15:09:21 <slaweq> (finally)
15:09:46 <ralonsoh> good work everyone!
15:09:46 <slaweq> but we should double-check e.g. the stadium projects to make sure we didn't miss anything
15:11:10 <slaweq> I hope that next week we will be able to remove that topic from this meeting :)
15:11:25 <slaweq> ok, let's move on
15:11:28 <slaweq> #topic Stadium projects
15:11:33 <bcafarel> https://review.opendev.org/#/c/754068/ for sfc is still in queue :)
15:11:59 <slaweq> bcafarel: sorry, I missed that somehow
15:12:07 <slaweq> I will look at it just after this meeting
15:12:51 <slaweq> bcafarel: (and others) if You have any other patch related to migration to Focal which we missed, please add it to that etherpad
15:13:05 <slaweq> so it will be easier to find and review it (at least for me) :)
15:13:44 <slaweq> ok, I think we can move on to the next topic
15:13:59 <slaweq> so stadium projects
15:14:04 <slaweq> and migration to zuulv3
15:14:09 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop
15:14:20 <slaweq> we still have just those last 2 jobs to convert:
15:14:26 <slaweq> networking-odl: https://review.opendev.org/#/c/725647/
15:14:28 <slaweq> neutron: https://review.opendev.org/#/c/729591/
15:15:22 <slaweq> for neutron-ovn-grenade job it seems that it passed now :)
15:15:27 <ralonsoh> yes!
15:15:29 <slaweq> so please review this patch
15:15:55 <bcafarel> nice!
15:16:05 <lajoskatona> I just checked the odl logs, and now it seems better
15:16:08 <slaweq> and lajoskatona, regarding odl patch I see that grenade job is still failing
15:16:16 <slaweq> is it ready to review or not yet?
15:17:02 <lajoskatona> hmmm, it failed originally as well, and the same is true for the tempest jobs sadly
15:17:32 <lajoskatona> I'll check the logs in detail, and "report" the status on irc and on the etherpad
15:18:15 <slaweq> so, if it's failing the same way as the old job, and the failure isn't related to the job's configuration, then I think we should be good to go with it now
15:18:22 <slaweq> and fix job later
15:18:33 <lajoskatona> yeah
15:18:40 <slaweq> ok, so I will review it also
15:19:34 <slaweq> ok, anything else regarding stadium projects' CI?
15:20:52 <slaweq> if not, then let's move on
15:20:55 <slaweq> #topic Stable branches
15:21:33 <slaweq> first of all I think that we need to update our dashboards for the "previous" and "older" stable releases
15:21:36 <bcafarel> that makes me think we should switch the stable dashboards to victoria soon
15:21:40 <slaweq> :)
15:21:40 <bcafarel> :)
15:21:44 <slaweq> :D
15:21:51 <slaweq> every time I'm first :P
15:22:06 <bcafarel> I should train on fast typing :)
15:22:26 <slaweq> actually You are typing faster than me usually :)
15:22:44 <slaweq> bcafarel: can You take care of that?
15:23:38 <bcafarel> sure
15:23:41 <slaweq> thx a lot
15:24:02 <slaweq> #action bcafarel to update our grafana dashboards for stable branches
15:24:25 <slaweq> apart from that, do we have anything else regarding stable branches for today?
15:24:36 <slaweq> I think that ci for stable branches is pretty ok, isn't it?
15:26:01 <bcafarel> yes patches got in without too many rechecks
15:26:31 <bcafarel> and as the list of open backports is short, I sent https://review.opendev.org/#/c/755203/ for stable releases
15:26:57 <slaweq> thx
15:28:07 <slaweq> ok, so I think we are good with stable branches now
15:28:13 <slaweq> let's move on to the next topic
15:28:15 <slaweq> #topic Grafana
15:28:25 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:29:25 <slaweq> grafana is starting to look better now
15:29:37 <slaweq> failure rates for ovn jobs and neutron-tempest-plugin jobs are finally going down
15:29:55 <slaweq> the functional job is failing pretty often
15:30:08 <slaweq> but I haven't really found any specific issues there so far
15:30:53 <slaweq> anything else You want to add here?
15:31:19 <ralonsoh> no thanks
15:32:04 <slaweq> ok, so let's move on
15:32:14 <slaweq> #topic Tempest/Scenario
15:32:30 <slaweq> recently I opened 3 new bugs:
15:32:37 <slaweq> https://bugs.launchpad.net/neutron/+bug/1897898
15:32:38 <openstack> Launchpad bug 1897898 in neutron "Scenario test test_multiple_ports_portrange_remote is unstable" [Critical,Confirmed]
15:32:38 <slaweq> https://bugs.launchpad.net/neutron/+bug/1897326
15:32:39 <openstack> Launchpad bug 1897326 in neutron "scenario test test_floating_ip_update is failing often on Ubuntu 20.04" [Critical,In progress] - Assigned to Slawek Kaplonski (slaweq)
15:32:43 <slaweq> https://bugs.launchpad.net/neutron/+bug/1896735
15:32:44 <openstack> Launchpad bug 1896735 in neutron "Scenario tests from neutron_tempest_plugin.scenario.test_port_forwardings.PortForwardingTestJSON failing due to ssh failure" [Critical,In progress] - Assigned to Slawek Kaplonski (slaweq)
15:33:03 <slaweq> the first 2 started happening after the migration to Focal
15:33:09 <slaweq> at least I didn't see them before
15:33:32 <slaweq> and for https://bugs.launchpad.net/neutron/+bug/1897326  I just sent patch https://review.opendev.org/755314 which should hopefully fix it
15:33:51 <slaweq> and https://bugs.launchpad.net/neutron/+bug/1896735 is happening pretty often also
15:33:52 <openstack> Launchpad bug 1896735 in neutron "Scenario tests from neutron_tempest_plugin.scenario.test_port_forwardings.PortForwardingTestJSON failing due to ssh failure" [Critical,In progress] - Assigned to Slawek Kaplonski (slaweq)
15:33:59 <slaweq> so I will probably mark this test as unstable for now too
15:34:02 <slaweq> wdyt?
15:34:19 <bcafarel> sounds in line with the other 2 yes
15:34:35 <ralonsoh> last one you say?
15:35:29 <ralonsoh> why https://bugs.launchpad.net/neutron/+bug/1896735 is related to https://review.opendev.org/#/c/748367/?
15:35:30 <openstack> Launchpad bug 1896735 in neutron "Scenario tests from neutron_tempest_plugin.scenario.test_port_forwardings.PortForwardingTestJSON failing due to ssh failure" [Critical,In progress] - Assigned to Slawek Kaplonski (slaweq)
15:35:59 <slaweq> sorry, I mixed some links probably
15:36:05 <ralonsoh> np
15:36:58 <slaweq> ahh, it is related because it marks this test as unstable due to that bug
15:37:46 <ralonsoh> right, thanks!
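(For reference, marking a scenario test as unstable in neutron-tempest-plugin is normally done with tempest's unstable_test decorator. The sketch below is only a rough illustration with a placeholder test body; the real change is in neutron_tempest_plugin.scenario.test_port_forwardings.)

```python
# Rough sketch of how a test gets marked unstable; placeholder class and
# test names, not the actual port forwarding test.
from tempest.lib import decorators

from neutron_tempest_plugin.scenario import base


class PortForwardingExampleTest(base.BaseTempestTestCase):

    @decorators.unstable_test("bug 1896735")
    def test_port_forwarding_connectivity(self):
        # If this test fails, the decorator turns the failure into a skip
        # that references the bug, so the gate is not blocked while the
        # underlying issue is investigated.
        ...
```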
15:38:22 <slaweq> also, after our recent issues with merging patches to the neutron-tempest-plugin repo, I was checking some of the failures and I proposed some small patches
15:38:27 <slaweq> https://review.opendev.org/#/c/725526/
15:38:33 <slaweq> https://review.opendev.org/#/c/755112/
15:38:38 <slaweq> https://review.opendev.org/#/c/755122/
15:39:02 <slaweq> https://review.opendev.org/#/c/755112/  should help with some of the Authentication failures on stable branches
15:39:27 <slaweq> and the other 2 should at least add some extra logging of the console log and network config on the host in case of failures
15:39:43 <slaweq> if You have a few minutes, please check those patches
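(To give an idea of what that extra logging looks like, here is a rough sketch of failure-time diagnostics. The helper name, client usage and log wiring are assumptions for illustration, not the exact code in those reviews.)

```python
# Rough sketch of the kind of diagnostics the patches above add on failure;
# helper and client names are illustrative, not the exact implementation.
import subprocess

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def log_failure_diagnostics(servers_client, server_id):
    # Guest side: dump the instance console log, so boot and metadata
    # problems are visible when SSH to the VM fails.
    console = servers_client.get_console_output(server_id)['output']
    LOG.error("Console log for server %s:\n%s", server_id, console)

    # Host side: record the test node's network configuration.
    for cmd in (['ip', 'addr'], ['ip', 'route']):
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        LOG.error("Output of %s:\n%s", ' '.join(cmd), out)
```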
15:40:28 <slaweq> and that's all from my side regarding tempest/scenario jobs for today
15:40:33 <slaweq> do You have anything else?
15:40:45 <ralonsoh> no thanks
15:41:00 <bcafarel> so cirros 0.5.1 can help? will take a look at patch later
15:41:05 <slaweq> yes
15:41:30 <slaweq> because maciejjozefczyk did a patch to cirros to retry failed calls to metadata
15:41:34 <slaweq> and it is in 0.5.1
15:41:50 <slaweq> so if the call for public-keys fails once, it will be retried
15:42:00 <slaweq> and the key should be configured in the guest vm
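(The behaviour described here is a simple retry around the EC2-style metadata call. Below is a Python rendering of the idea; the actual cirros 0.5.1 change lives in its shell init scripts, so this is only an illustration.)

```python
# Illustration of the retry behaviour described above; the real cirros 0.5.1
# change is in its shell init scripts, not Python.
import time
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/public-keys"


def fetch_public_keys(retries=5, delay=2):
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                return resp.read().decode()
        except OSError:
            # A single failed call no longer breaks SSH key injection;
            # the request is simply retried after a short pause.
            if attempt == retries:
                raise
            time.sleep(delay)
```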
15:42:15 <bcafarel> oh nice!
15:42:46 <slaweq> ok, so I have one more topic for today
15:42:53 <slaweq> #topic Periodic
15:43:06 <slaweq> first of all thx bcafarel for fixing fedora job :)
15:43:48 <slaweq> but now it seems that openstack-tox-py36-with-ovsdbapp-master has been failing constantly for about a week
15:44:34 <slaweq> every time the same 2 tests are failing
15:44:45 <slaweq> anyone wants to report a bug and check it?
15:45:04 <ralonsoh> I can
15:45:09 <slaweq> thx ralonsoh
15:45:34 <slaweq> #action ralonsoh to report a bug and check failing openstack-tox-py36-with-ovsdbapp-master periodic job
15:45:50 <slaweq> ralonsoh: You can find logs in https://zuul.openstack.org/buildsets?project=openstack%2Fneutron&pipeline=periodic&branch=master
15:45:56 <ralonsoh> thanks
15:46:04 <slaweq> thank You
15:46:20 <slaweq> ok, and that was last thing from me for today
15:46:29 <slaweq> do You have anything else You want to discuss?
15:46:43 <bcafarel> not from me
15:47:53 <slaweq> so I think we can finish now :)
15:48:00 <slaweq> thx for attending
15:48:02 <slaweq> o/
15:48:05 <slaweq> #endmeeting