15:01:21 #startmeeting neutron_ci
15:01:21 Meeting started Tue Mar 12 15:01:21 2024 UTC and is due to finish in 60 minutes. The chair is ykarel. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:21 The meeting name has been set to 'neutron_ci'
15:01:32 ping bcafarel, lajoskatona, mlavalle, mtomaska, ralonsoh, ykarel, jlibosva, elvira
15:01:33 \o
15:02:11 o/
15:03:11 o/
15:03:26 Miro and Bernard will not be joining today
15:04:09 i think we can start with low attendance
15:04:17 #topic Actions from previous meetings
15:04:30 ykarel to check failure with test_connectivity
15:04:57 I checked it, and here the ping command didn't time out as expected; found an old issue with similar symptoms
15:05:05 https://bugs.launchpad.net/neutron/+bug/1588731
15:05:32 haven't seen the issue again in CI, and was not able to reproduce it locally
15:06:20 o/
15:06:49 will check again with the ping command options in the version shipped in Ubuntu Jammy
15:07:22 moving to next
15:07:27 ykarel to check failure in unmaintained/yoga, report a bug and raise it in the neutron meeting
15:07:39 reported https://bugs.launchpad.net/neutron/+bug/2056276
15:07:47 pushed https://review.opendev.org/q/topic:%22bug/2056276%22
15:07:59 and missed raising it in the neutron meeting :(
15:08:16 ykarel: thanks, will check the open patches
15:08:50 bcafarel, already commented on the bug: we will consider issues in unmaintained branches as best effort only
15:09:08 +1
15:09:09 like it used to be for extended maintenance releases
15:09:40 any comments on this? or do we all agree with the reasoning above?
15:10:18 I agree, and it is in sync with the previous discussion and with the old policy
15:10:20 it's good for me
15:11:00 ok thanks
15:11:18 * slaweq needs to leave for a while but will be lurking on the phone
15:11:31 k moving to next
15:11:32 mtomaska to check fullstack failure
15:12:02 Miro is not around, but he notified me he noticed some races in those tests and will continue looking into them this week
15:12:22 #action mtomaska to check fullstack failure from last week
15:12:30 #topic Stable branches
15:12:39 Bernard is not around
15:13:38 apart from some periodic job failures, stable branches look good
15:13:55 not much activity on the stable branches in the last week though
15:14:04 #topic Stadium projects
15:14:12 the periodic line was green
15:14:15 all green as I see from zuul
15:14:20 yes
15:14:29 nothing special from me
15:14:39 k thanks, it's good to see those green :)
15:14:50 #topic Rechecks
15:15:20 all good here, it's better than previous weeks
15:15:41 and only 2 bare rechecks out of 30
15:15:47 #topic fullstack/functional
15:15:49 great!
15:16:05 we are still seeing NetworkInterfaceNotFound in the functional job
15:16:37 that's a known issue, and rodolfo planned to push some patches to collect debug logs to trace it further
15:16:50 some sample failures
15:16:51 https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fd2/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-sqlalchemy-master/fd29a44/testr_results.html
15:16:51 https://6355e428223de2a8abab-7be64e411bd94d4a66b71f09ef725f83.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-pyroute2-master/07764d8/testr_results.html
15:16:51 https://2e1cede1d7956dc8ea10-c38099e6f683500395476cc7835a36a5.ssl.cf2.rackcdn.com/910889/6/gate/neutron-functional-with-uwsgi/37c7e1a/testr_results.html
15:17:05 next one
15:17:06 test_cleanup_on_vm_delete oslo_privsep.daemon.: privsep helper command exited non-zero (96)
15:17:15 https://149508874915607fdcac-8ec2ce4c808d48e5dc938b34ec8c6e3a.ssl.cf2.rackcdn.com/910968/2/gate/neutron-functional-with-uwsgi/724749a/testr_results.html
15:17:29 seen once
15:17:53 anyone recall seeing it in the past?
15:18:24 a question on NetworkInterfaceNotFound: is that visible only in DVR jobs?
15:18:34 or DVR tests?
15:19:49 don't recall exactly now, but most of the failures i recall were dvr ones
15:20:04 ykarel: ack
15:21:29 for the other failure, for now we will see if it reproduces again
15:21:51 #topic Periodic
15:21:59 stable/zed and stable/2023.1
15:22:09 the tobiko job is broken in these branches
15:22:18 by a recent tobiko patch
15:22:23 lajoskatona: I think I also saw them only in some DVR related tests
15:22:31 https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron&project=openstack%2Fneutron&branch=stable%2Fzed&branch=stable%2F2023.1&skip=0
15:22:40 reported https://bugs.launchpad.net/neutron/+bug/2057492
15:23:26 will reach out to someone from tobiko on how to handle it for old branches running on focal
15:23:54 #action ykarel to check on tobiko issue in stable branches
15:24:14 I can check that tobiko issue this week
15:24:30 Thanks slawek
15:24:32 #undo
15:24:32 Removing item from minutes: #action ykarel to check on tobiko issue in stable branches
15:24:44 #action slaweq to check on tobiko issue in stable branches
15:25:04 there are failures in unmaintained/yoga periodics too
15:25:21 which we already discussed and are being tracked in https://bugs.launchpad.net/neutron/+bug/2056276
15:25:31 that's it on failures
15:25:34 Btw haleyb: I see in that bug that Isabel is doing something like bug deputy 🙂
15:26:26 oh, ok
15:27:03 #topic Grafana
15:27:09 https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:28:41 overall looks good to me
15:28:54 yes, looks good to me too
15:29:08 there are some spikes in the check queue, but those are likely related to the patches themselves
15:29:39 #topic On Demand
15:29:47 anything else you would like to raise now?
15:30:11 not from me
15:31:12 nothing from me
15:31:46 k thanks everyone, in that case let's close early and have everyone 28 minutes back :)
15:31:49 #endmeeting
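
A note on the test_connectivity action item discussed above: the symptom was a ping command that did not time out as expected. Below is a minimal reproduction sketch, assuming the iputils ping shipped on Ubuntu Jammy; the exact options Neutron's test helper passes are not shown in this log, so the helper here (ping_once) and its parameters are illustrative only.

    import subprocess

    def ping_once(address, per_reply_timeout=1, deadline=5):
        """Ping once; return True on a reply, False on timeout or no route."""
        cmd = [
            "ping",
            "-c", "1",                     # send a single probe
            "-W", str(per_reply_timeout),  # seconds to wait for each reply
            "-w", str(deadline),           # hard overall deadline, so the call cannot hang
            address,
        ]
        # iputils ping exits 0 on a reply, 1 when no reply arrived before the
        # deadline, and 2 on other errors (e.g. name resolution failure).
        return subprocess.run(cmd, capture_output=True).returncode == 0

    # Example: 192.0.2.1 (TEST-NET-1) should normally time out, printing False.
    print(ping_once("192.0.2.1"))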
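
Similarly, for the NetworkInterfaceNotFound failures, the plan mentioned in the log was to push patches that collect debug logs to trace the problem further. Those patches had not been pushed at meeting time, so the sketch below is purely hypothetical: it shows one way such collection could look, dumping interface state when the error fires. The function name dump_link_state and the use of the ip command are assumptions, not Neutron's actual approach.

    import logging
    import subprocess

    LOG = logging.getLogger(__name__)

    def dump_link_state(namespace=None):
        """Log 'ip link' output so a vanished interface can be traced later."""
        cmd = ["ip", "link", "show"]
        if namespace:
            # Inspecting another network namespace requires root privileges.
            cmd = ["ip", "netns", "exec", namespace] + cmd
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
            LOG.debug("Interfaces in %s:\n%s", namespace or "root namespace", out.stdout)
        except (OSError, subprocess.TimeoutExpired):
            LOG.exception("Could not collect interface state for %s", namespace)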