15:00:23 #startmeeting qa
15:00:23 Meeting started Tue Aug 1 15:00:23 2023 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:23 The meeting name has been set to 'qa'
15:00:28 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:30 agenda &
15:00:33 ^^
15:01:54 o/
15:02:09 o/
15:04:34 \o
15:04:48 o/
15:04:55 let's get to it
15:05:02 #topic Announcement and Action Item (Optional)
15:05:31 btw, i've signed up the qa team for the upcoming virtual ptg
15:05:45 and i've registered myself for the event as well
15:06:05 #link https://ptg2023.openinfra.dev/
15:06:25 that's it regarding the announcements
15:06:33 #topic Bobcat Priority Items progress
15:06:37 #link https://etherpad.opendev.org/p/qa-bobcat-priority
15:06:43 anything new on this front?
15:06:47 * kopecmartin checking the doc
15:07:53 no further reviews for global venv :(
15:08:07 #link https://review.opendev.org/c/openstack/devstack/+/558930
15:08:15 gmann: dansmith ^^ what do you think?
15:08:50 about the venv thing?
15:09:06 Ihaven
15:09:12 yes, the patch looks ready (well, from what i can tell)
15:09:32 I haven't looked at that in a long time so I'd need to look again.. I'm bummed we have to do this, but I know something has to change
15:09:46 I guess I'll have a look, hopefully today
15:10:01 dansmith: you may also want to look at the wheel build issue mentioned in https://review.opendev.org/c/openstack/devstack/+/887547/5/.zuul.yaml
15:10:54 frickler: I don't see any issue mentioned there...
15:11:23 # TODO(frickler): drop this once wheel build is fixed - MYSQL_GATHER_PERFORMANCE: false
15:11:31 L373N
15:12:13 737?
15:12:29 737, yes, sorry
15:12:38 but ack, I understand we need to do something for this and other reasons
15:13:13 I'll pull this down and play with it while I wait for rechecks to need rechecking
15:15:17 sounds good, thanks
15:15:23 let's get moving
15:15:24 #topic Gate Status Checks
15:15:29 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:15:53 one patch, waiting for the CI
15:15:57 (again) :D
15:16:02 more like still
15:16:12 that failed on one thing I've seen lately,
15:16:30 which is the osc functional test failing with things that look like maybe apache running out of connections or otherwise having trouble connecting to backends
15:16:51 I've seen that a handful of times this week
15:17:06 I dunno really anything about that job, so I'm not sure why that might be happening
15:17:25 related to https://review.opendev.org/c/openstack/openstacksdk/+/888240 maybe?
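(A hedged aside on checking how often that job actually fails: the dashboard links that follow in the log are views onto Zuul's REST builds API, so the same data can be pulled programmatically. The endpoint path and the job_name/limit parameters below are assumptions inferred from the dashboard URLs quoted in this log; verify them against the deployment before trusting the numbers.)

    # Sketch: count recent FAILURE results for a Zuul job via the REST API.
    # Endpoint and parameters are assumptions based on the dashboard URLs
    # quoted in this log, not a verified client library.
    import requests

    ZUUL_BUILDS_API = "https://zuul.opendev.org/api/tenant/openstack/builds"

    def recent_results(job_name, limit=50):
        resp = requests.get(
            ZUUL_BUILDS_API,
            params={"job_name": job_name, "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return [build.get("result") for build in resp.json()]

    results = recent_results("openstacksdk-functional-devstack")
    failures = [r for r in results if r == "FAILURE"]
    print("%d failures in the last %d builds" % (len(failures), len(results)))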
15:17:49 maybe, don't know either, the job looks quite stable
15:17:49 I dunno, I've seen it with things other than placement
15:17:50 #link https://zuul.opendev.org/t/openstack/builds?job_name=openstacksdk-functional-devstack&project=openstack/devstack
15:17:56 in comparison to others at least
15:18:25 seems the same failure was in the original patch https://zuul.opendev.org/t/openstack/build/ae68c91aa4994e499bbbdfc10af0f576
15:18:40 ack, I've just seen failures on glance and devstack with it, not always with placement
15:18:48 and I'm not sure I've ever seen it fail before that
15:19:55 I can ping stephenfin over in sdks to have a look
15:20:38 kopecmartin: looking at it this way makes it look worse: https://zuul.opendev.org/t/openstack/builds?job_name=openstacksdk-functional-devstack&result=FAILURE&skip=0
15:20:58 :o
15:21:09 perhaps it's just general instability, I just never see it fail and have seen it multiple times lately, so figured something must be up
15:21:22 is that affected by the increased concurrency change?
15:22:10 that at least makes it look like it didn't start yesterday
15:22:35 yeah
15:22:49 re: concurrency, would it hit just one test?
15:23:15 but hard to tell, in openstack everything is connected in some way :D
15:23:22 yeah
15:23:42 anyway, it just struck me because I hadn't seen that fail on my stuff until recently
15:25:49 right, let's keep an eye on that
15:25:50 #topic Bare rechecks
15:25:55 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:26:03 considering all the rechecks we're quite ok
15:26:07 interesting fact though
15:26:30 even though there's a little cheating going on
15:26:31 QA had the most rechecks over the last 90 days ... 480
15:26:39 *cough*kopecmartin*cough* :)
15:26:51 :D yeah, i know, guilty as charged
15:27:10 I'm just kidding, it's pretty hard to keep caring on recheck 934
15:28:01 plus from what we can see, it's always a different job
15:28:04 which fails
15:29:10 #topic Periodic jobs Status Checks
15:29:11 periodic stable full
15:29:11 #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1
15:29:13 periodic stable slow
15:29:15 #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
15:29:17 periodic extra tests
15:29:19 #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
15:29:21 periodic master
15:29:23 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:32:27 tempest-full-parallel fails more often, but with different tests, no clear pattern
15:32:45 #topic Distros check
15:32:45 cs-9
15:32:45 #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:32:47 fedora
15:32:49 #link https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&skip=0
15:32:51 debian
15:32:53 #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&skip=0
15:32:55 focal
15:32:57 #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-focal&skip=0
15:32:59 rocky
15:33:01 #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:33:03 openEuler
15:33:05 #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
15:33:31 fedora started failing constantly, but we're about to drop that (more about that in a minute)
15:33:51 tempest-full-centos-9-stream has been failing for the last 2 days
15:34:14 yatin proposed openstack/devstack stable/2023.1: Use RDO official CloudSIG mirrors for C9S deployments https://review.opendev.org/c/openstack/devstack/+/890120
15:35:18 Failed to discover available identity versions when contacting http://localhost:5000/v3/. Attempting to parse version from URL
15:35:25 and it ends with Bad Request
15:35:42 yatin proposed openstack/devstack stable/zed: Use RDO official CloudSIG mirrors for C9S deployments https://review.opendev.org/c/openstack/devstack/+/890221
15:35:56 yatin proposed openstack/devstack stable/yoga: Use RDO official CloudSIG mirrors for C9S deployments https://review.opendev.org/c/openstack/devstack/+/890222
15:36:01 that ^^ might help actually :D
15:36:28 perfect timing
15:36:35 #topic Sub Teams highlights
15:36:39 Changes with Review-Priority == +1
15:36:44 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:37:03 seems like we're ready to drop fedora
15:37:04 #link https://review.opendev.org/c/openstack/devstack/+/885467
15:37:22 patches for all the consumers were proposed more than a month ago
15:37:24 #link https://review.opendev.org/q/topic:drop-old-distros
15:38:22 frickler: if you agree, i can +w it now
15:38:54 ack
15:39:25 thanks, done
15:39:36 #topic Open Discussion
15:39:40 anything for the open discussion?
15:39:48 I just have a quick question whether it would be ok to add an option to tempest.conf to indicate which backup driver cinder uses. Currently the volume backup tests do not clean up swift containers. This causes failure of object storage tests with pre-prov creds. We could use this option to clean up this container when swift is used as a backup driver.
15:40:02 #link https://bugs.launchpad.net/tempest/+bug/2028671
15:40:52 it's just strange that we would need an option for the cleanup of a test and not for the test itself
15:41:08 agree
15:41:16 Example of the error:
15:41:18 #link https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8a2/periodic/opendev.org/openstack/tempest/master/tempest-full-test-account-no-admin-py3/8a2ef81/testr_results.html
15:42:41 what about a try except block .. attempt to delete everything, except any errors, then move on and hope for the best
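(A minimal sketch of the best-effort "try/except" cleanup floated above. The client objects and method names are placeholders rather than the exact tempest object-storage client API, and the "volumebackups" prefix is an assumed naming convention for containers left behind by the cinder swift backup driver.)

    # Best-effort cleanup sketch: try to remove leftover backup containers,
    # swallow errors, and move on.  Client/method names are placeholders.
    def cleanup_backup_containers(account_client, container_client, object_client):
        try:
            containers = account_client.list_containers()
        except Exception:
            return  # can't even list containers; hope for the best
        for container in containers:
            name = container.get("name", "")
            if not name.startswith("volumebackups"):  # assumed convention
                continue
            try:
                for obj in container_client.list_objects(name):
                    object_client.delete_object(name, obj["name"])
                container_client.delete_container(name)
            except Exception:
                # another test may still be using the container; ignore
                pass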
15:42:46 I understand but without the option it is probably hard to tell whether swift is used as a backup driver (?). Maybe I can try to delete the containers if the swift service is present. But the presence of the swift service does not guarantee that it is used as a backup driver? I'm not sure.
15:43:13 maybe that swift test is bogus really? can't always assume a clean environment with no active containers
15:44:13 hmm, the test could check the content of the container at the beginning and check again at the end .. as far as the test is concerned there shouldn't be any additional resources
15:44:19 doesn't matter what was there before
15:44:25 that still won't make it stable
15:44:35 unless it runs in parallel with the backup test
15:44:42 because it could be running in parallel where more things are being created/destroyed
15:44:55 it really needs to be able to look at only the things it cares about and ignore anything else
15:45:11 I'm surprised tenant isolation doesn't just solve it though...
15:45:20 oh, right
15:45:35 dansmith: the issue only emerges when pre-prov creds are used.
15:45:43 yeah exactly :)
15:45:48 so the test is broken
15:45:59 then the test needs to be skipped in that scenario
15:46:11 So when the test reuses a user which was used for volume backup testing ... the test fails. That is my guess.
15:47:07 frickler++
15:47:10 frickler: So you think skipping the test when pre-prov creds are used?
15:47:15 yes
15:48:02 frickler: Ok, I will propose a patch :). Maybe we can continue the discussion there.
15:48:12 ach, the test is part of interop, which runs mostly with preprovisioned creds .. that means we'll need to kick it out
15:48:35 lpiwowar: agree, let's start with a patch and continue there
15:48:35 kopecmartin: That is a good point. I wasn't thinking about it.
15:50:46 hm, if the issue really is that the test reuses a user which has left some resources behind, in that case my proposal could work - if preprov creds, save the initial content of the container and compare at the end of the test - the parallel issue wouldn't be a problem as the user is used by this test at that time
15:50:57 unless other tests could use that user too
15:51:06 that's the point though, other tests use them
15:51:12 when cinder uses swift for backup
15:51:46 any test that asserts you start with a blank slate doesn't make sense with shared creds, IMHO
15:51:50 right, i hoped there was more clever logic .. then skipping the test is the easiest way out of it
15:52:55 kopecmartin: Are volume backup tests part of interop? I know it is not a nice solution but as long as volume backup tests are not in there then it should be ok.
15:53:24 lpiwowar: what if I have other swift containers present before I run, even with the backup tests disabled?
15:53:44 surely glance with swift would have the same problem?
15:54:23 dansmith: Not tested. But if the containers are visible for the user in the accounts.yaml then it would probably fail.
15:54:50 yeah, which will definitely happen (a lot) with glance being backed by swift, as any snapshot and potentially the test image itself would be visible
15:55:57 Ok, from what I'm reading it seems that the issue is really on the side of the object storage tests. I thought that the fix would be required for the volume backup tests.
15:56:23 So we have to either fix the object storage test(s) or skip them when pre-prov creds are used.
15:57:19 correct
15:57:21 I will take a look at it more and will propose a patch :)
15:57:27 thanks
15:57:45 let's move on
15:57:46 #topic Bug Triage
15:57:51 #link https://etherpad.openstack.org/p/qa-bug-triage-bobcat
15:57:59 numbers recorded ^
15:58:24 but that's all i had time for unfortunately
15:58:46 that's it
15:58:49 thanks everyone
15:58:54 #endmeeting
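(For reference, a sketch of the skip-under-pre-provisioned-credentials approach agreed above. The test class name is illustrative; CONF.auth.use_dynamic_credentials is the existing tempest option that is False when pre-provisioned (accounts.yaml) credentials are in use, and skip_checks() is the usual tempest hook for this kind of guard.)

    # Sketch: skip a blank-slate object storage test when pre-provisioned
    # credentials are in use, per the discussion above.  Class name is
    # illustrative; the config option and skip_checks() hook exist in tempest.
    from tempest import config
    from tempest.api.object_storage import base

    CONF = config.CONF


    class ContainerCleanSlateTest(base.BaseObjectTest):  # illustrative name

        @classmethod
        def skip_checks(cls):
            super(ContainerCleanSlateTest, cls).skip_checks()
            if not CONF.auth.use_dynamic_credentials:
                # pre-provisioned users may already own containers
                # (cinder backups, glance snapshots), so a clean account
                # cannot be assumed
                raise cls.skipException(
                    "cannot assume a clean account with pre-provisioned creds")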