15:00:23 #startmeeting qa
15:00:23 Meeting started Tue May 23 15:00:23 2023 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:23 The meeting name has been set to 'qa'
15:00:29 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:32 agenda ^^
15:01:10 o/
15:01:37 \o
15:05:47 o/
15:05:47 let's get started, the usual
15:05:48 #topic Announcement and Action Item (Optional)
15:06:06 \o\
15:06:38 no announcements this week
15:06:54 #topic Bobcat Priority Items progress
15:06:59 #link https://etherpad.opendev.org/p/qa-bobcat-priority
15:07:07 * kopecmartin checking the status
15:08:21 'New cirros image update' is almost done, the last patch is being merged now
15:08:44 \o/
15:09:35 regarding 'Tempest tests prepare the server with SSHABLE by default'
15:09:48 it seems that the following patch is the last one
15:09:52 #link https://review.opendev.org/c/openstack/tempest/+/842240
15:10:09 in merge conflict atm, gmann^
15:10:33 I don't see any other updates
15:10:55 #topic Gate Status Checks
15:11:00 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:11:48 one patch, already approved
15:12:02 anything urgent to review?
15:12:43 https://review.opendev.org/c/openstack/grenade/+/883742 would be good to get fixed
15:13:01 this is to resolve a zuul config error because the heat job was deleted
15:14:26 failing with neutron-fwaas missing, not sure why
15:15:19 we could make the job n-v to merge the above fix, or I could just force-merge
15:15:37 at least I don't care too much about those old branches
15:15:49 just want to get rid of config errors
15:17:56 well, clearly the patch is not related to the error in any way, the logs confuse me, no apparent failure
15:18:07 i'm ok with force merging it
15:18:21 although the issue with the job will remain
15:18:40 but i don't see a reason to suffer from config issues till it's resolved
15:19:10 ok, will do
15:19:23 dansmith: frickler: problems like that are why we added ansible to devstack to help with multinode. It is just easier to express that sort of thing in a tool like ansible
15:19:35 clarkb: ++
15:22:27 Merged openstack/grenade stable/stein: [stable-only] Remove grenade-heat job https://review.opendev.org/c/openstack/grenade/+/883742
15:22:46 great, moving on
15:22:49 #topic Bare rechecks
15:22:54 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:23:22 8 rechecks over the last 7 days, none of them bare \o/
15:23:34 #topic Periodic jobs Status Checks
15:23:34 periodic stable full
15:23:34 #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1
15:23:36 periodic stable slow
15:23:38 #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
15:23:40 periodic extra tests
15:23:42 #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
15:23:44 periodic master
15:23:46 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:24:48 seems like the centos9-stream-fips is broken
15:25:07 and tempest-all occasionally times out
15:25:42 *-no-admin-py3 job is a known issue, no progress on the LP yet
15:26:48 kopecmartin: Not yet. I'm sorry. I will try to focus on it during this week.
15:27:34 no worries, it's been broken forever
15:28:22 i don't understand why the fips job failed o.O
15:30:24 "/usr/bin/python3.9: No module named pip" ?
15:31:18 is there an earlier yum failure? iirc we still ignore those
15:31:43 maybe c9s mirroring is broken once again
15:32:12 don't see any; i see it reinstalled python3-setuptools, seemingly successfully, and then it failed on still not having pip
15:32:57 but even before that
15:32:59 "No match for argument: liberasurecode-devel"
15:33:15 "Error: Unable to find a match: liberasurecode-devel"
15:34:01 anyway, moving on
15:34:12 #topic Distros check
15:34:12 cs-9
15:34:12 #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:34:14 fedora
15:34:16 #link https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&skip=0
15:34:18 debian
15:34:20 #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&skip=0
15:34:22 focal
15:34:24 #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-focal&skip=0
15:34:26 rocky
15:34:28 #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:34:30 openEuler
15:34:32 #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
15:34:59 all green \o/, what a nice surprise
15:35:15 do we want to drop fedora if nobody makes f38 support?
15:35:30 ianw: ^^?
15:36:36 our "fedora-latest" job is actually "fedora-eol" now
15:37:18 right, fedora-36 has been EOL since last week
15:37:23 #link https://docs.fedoraproject.org/en-US/releases/eol/
15:37:52 well, if there won't be a fedora-38 label we could use in the job, we'll have to drop the support as it won't be tested in the CI
15:38:35 right, I sent an email asking if anyone needed fedora since the lack of updates indicated that the answer was probably no
15:38:44 we didn't get any feedback from anyone that needed or wanted fedora
15:39:21 that makes it easier
15:39:26 I've already started some of the safer fedora cleanups (for unused bits) but I expect that to progress through to complete removal
15:39:44 this is all in opendev's nodepool stuff, not devstack, but devstack cleanup would be a necessary step if we do that
15:41:12 o.k., I'll prepare some patches
15:41:24 sure, let me know when it starts, i'll help with patches and reviews too
15:41:40 that'll probably go backwards through some cycles
15:42:04 the main thing seems to be that for forward-looking CI centos stream is a better fit for what we do anyway
15:42:17 because it is a platform with multiple years of support unlike fedora
15:42:24 reduces churn and all that
15:42:58 +1
15:43:01 stream is continuously breaking, so I don't think it is too well suited either
15:43:25 well that too, but I think it breaks in smaller chunks than the major uplifts fedora does
15:43:33 which is a big reason why fedora updates have fallen behind recently
15:46:19 right
15:46:21 moving on
15:46:22 #topic Sub Teams highlights
15:46:26 Changes with Review-Priority == +1
15:46:31 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:46:33 no reviews there
15:46:37 #topic Open Discussion
15:46:40 anything for the open discussion?
15:47:42 the mysql tune down has been enabled by default
15:47:54 not sure if that was mentioned earlier, but thought it was worth calling out
15:48:23 good point, one thing to watch out for is whether there are timeouts because of that
15:48:31 in case mysql things get too slow
15:48:58 yep, although I've seen nothing but improvement from the jobs we enabled it on early :)
15:49:06 oh, so maybe that's why tempest-all started timing out occasionally?
15:49:24 kopecmartin: it merged middle of the day yesterday, so depends on when you're talking about
15:49:36 not related then :)
15:49:40 :)
15:52:27 if there isn't anything else
15:52:31 #topic Bug Triage
15:52:36 #link https://etherpad.openstack.org/p/qa-bug-triage-bobcat
15:52:54 one bug down in tempest - as the cirros update is going forward
15:53:10 other than that, it's been quite quiet on the bug front
15:53:33 and that's all from my side
15:54:20 thanks everyone
15:54:25 #endmeeting