15:00:17 #startmeeting qa
15:00:17 Meeting started Tue Oct 3 15:00:17 2023 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:18 The meeting name has been set to 'qa'
15:00:26 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:28 agenda ^^^^
15:01:38 o/
15:01:56 #topic Announcement and Action Item (Optional)
15:02:27 we're in release time
15:02:53 patches proposed by gmann
15:02:55 #link https://review.opendev.org/q/topic:qa-2023-2-release+status:open
15:03:05 #link https://review.opendev.org/q/topic:qa-2023-2-release+
15:03:09 https://review.opendev.org/q/topic:qa-2023-2-release
15:03:12 ah
15:03:14 #link https://review.opendev.org/q/topic:qa-2023-2-release+
15:03:18 ....
15:03:55 we released tempest 36.0.0 last week
15:04:37 gmann, anything I should do related to the release process? thank you for proposing all the patches btw
15:05:23 moving on
15:05:24 #topic Bobcat Priority Items progress
15:05:29 #link https://etherpad.opendev.org/p/qa-bobcat-priority
15:06:30 not many updates .. we'll triage that during the PTG
15:06:39 #topic OpenStack Events Updates and Planning
15:06:43 the next PTG will be held virtually, October 23-27, 2023
15:07:27 you can propose topics that we will discuss during the PTG here
15:07:28 #link https://etherpad.opendev.org/p/oct2023-ptg-qa
15:07:42 don't forget to register ..
15:07:43 #link http://ptg2023.openinfra.dev/
15:08:29 you may also influence the dates of the QA PTG sessions, just fill in this:
15:08:30 #link https://framadate.org/f26R3EcZ2BOo7r8Q
15:08:59 #topic Gate Status Checks
15:09:11 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:09:22 nothing there, anything urgent to review?
15:09:38 I know we discussed it already here. This change was reverted https://review.opendev.org/c/openstack/tempest/+/894269 because when ceph is used as a backup driver we cannot use the "container" parameter in the API call. I was wondering whether it would be ok to create a new config option that indicates which backup_driver is used by cinder.
15:09:58 I just wanted to mention it here, kopecmartin.
15:10:55 sure, why not?
15:11:33 Because the last time we talked about it someone was against the new option. I do not remember who.
15:11:42 I can maybe check the logs.
15:11:50 me neither, i don't remember the discussion at all :D
15:11:52 yeah
15:11:57 I understand :D
15:12:09 check that and we can discuss it in Open Discussion
15:12:15 #topic Bare rechecks
15:12:17 kopecmartin: +1
15:12:21 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:13:02 all good here .. although an interesting number - the QA team has the highest number of rechecks over the last 90 days
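To make the revert discussed under Gate Status Checks concrete, here is a hedged illustration of the call shape at issue. The client object, volume id, and container name are assumptions for this sketch; tempest's volume backups client forwards keyword arguments into the backup-create request body, which is why an unconditional "container" argument breaks Ceph-backed jobs:

```python
# Hedged illustration of the failure mode behind the revert of
# https://review.opendev.org/c/openstack/tempest/+/894269. The client
# object, volume id, and container name are assumed, not taken from
# the actual test code.

def create_backup_in_container(backups_client, volume_id):
    # Swift stores the backup objects in the named container, which lets
    # the test clean them up afterwards; the Ceph backup driver has no
    # concept of a container and rejects the request, so passing the
    # parameter unconditionally breaks Ceph-backed jobs.
    return backups_client.create_backup(
        volume_id=volume_id,
        container='tempest-backup-container')['backup']
```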
15:13:15 #topic Periodic jobs Status Checks
15:13:15 periodic stable full
15:13:15 #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2
15:13:17 periodic stable slow
15:13:19 #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
15:13:21 periodic extra tests
15:13:23 #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
15:13:25 periodic master
15:13:27 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:15:27 all seems as expected
15:16:02 #topic Distros check
15:16:02 cs-9
15:16:04 #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:16:06 debian
15:16:08 #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0
15:16:10 rocky
15:16:12 #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:16:14 openEuler
15:16:16 #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
15:16:18 jammy
15:16:20 #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
15:19:40 * kopecmartin still checking the results
15:21:20 i see a few failures that happened last week but i vaguely remember there were known failures due to all the releases happening right now ..
15:21:29 seems like now it's all on track
15:21:37 #topic Sub Teams highlights
15:21:41 Changes with Review-Priority == +1
15:21:45 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:21:53 no patches
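The Zuul dashboard links used in the periodic-jobs and distro checks above can also be queried programmatically. A small sketch, assuming that Zuul's REST endpoint mirrors the UI query parameters (`/api/builds` with a repeatable `job_name`, plus `pipeline` and `limit`), which is how the whitelabeled zuul.openstack.org UI is driven:

```python
# A small helper to spot failing periodic runs without clicking through
# the dashboard. Assumes the /api/builds endpoint and its query
# parameters mirror the UI URLs pasted in the meeting.
import requests

ZUUL_BUILDS_API = 'https://zuul.openstack.org/api/builds'


def failing_builds(job_names, pipeline='periodic-stable', limit=50):
    """Return recent non-SUCCESS builds for the given jobs."""
    # A list of tuples lets us repeat the job_name key, matching the
    # dashboard URLs above.
    params = [('pipeline', pipeline), ('limit', str(limit))]
    params += [('job_name', name) for name in job_names]
    resp = requests.get(ZUUL_BUILDS_API, params=params, timeout=30)
    resp.raise_for_status()
    return [b for b in resp.json() if b.get('result') != 'SUCCESS']


if __name__ == '__main__':
    jobs = ['tempest-full-2023-1', 'tempest-full-2023-2']
    for build in failing_builds(jobs):
        print(build['job_name'], build['result'], build['log_url'])
```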
15:21:56 #topic Open Discussion
15:22:00 anything for the open discussion?
15:23:02 Nothing from my side :)
15:24:00 did you find who was objecting to the approach in your patch?
15:25:17 #link https://meetings.opendev.org/meetings/qa/2023/
15:25:18 Not yet ...
15:25:27 no idea when we could discuss that :/
15:25:28 I'm not able to find the correct meeting. It was a long time ago
15:25:36 yeah :/
15:28:07 here
15:28:13 #link https://meetings.opendev.org/meetings/qa/2023/qa.2023-08-01-15.00.log.html
15:29:36 Thanks! I was searching for "config"
15:30:08 It looks like it was you and dansmith who were against it.
15:30:37 But I remember that I agreed with you.
15:30:49 which patch was I against?
15:31:56 Against a new config option in tempest. The option would tell which backup driver is used by cinder. It would help us do a proper cleanup for volume backup tests.
15:32:35 We are talking about this patch:
15:32:48 the original LP:
15:32:50 #link https://bugs.launchpad.net/cinder/+bug/2028671
15:32:50 #link https://review.opendev.org/c/openstack/tempest/+/890798
15:33:28 .. but we had to revert that because of a new LP:
15:33:30 #link https://bugs.launchpad.net/tempest/+bug/2034913
15:33:46 so we're practically at the beginning
15:33:59 okay, I don't see myself on any of that and don't recall any such conversation
15:34:03 * kopecmartin still loads the context of the issue
15:34:42 lpiwowar: please implement it as you think is right, we'll discuss it during review, as always :)
15:34:42 The conversation is here:
15:34:46 #link https://meetings.opendev.org/meetings/qa/2023/qa.2023-08-01-15.00.log.html
15:35:02 kopecmartin: ok :)
15:35:40 it's always easier to discuss a specific solution if it is executed in the CI - we have proof it works, etc.
15:36:44 oh, i'm starting to remember, lpiwowar you wanted to create a new opt just because of the cleanup, not a test
15:36:52 that's strange
15:37:10 and not a good approach
15:37:14 This is how it would look: https://review.opendev.org/c/openstack/tempest/+/896011/8/tempest/api/volume/base.py
15:37:42 Yeah, I agree. It is strange. But currently I'm not sure how to do it without it.
15:38:34 The issue is line 197 (previous link). I want to add this option only when Swift is used as a backup driver.
15:39:05 does the patch only revert the previous patch or are there some modifications on top of that?
15:39:58 can't we add 2 addCleanups? .. one for when swift is used, the other for when it isn't .. one will always fail but we can ignore that failure
15:40:17 I'm a little bit lost in all the patches. I do not know which one you mean right now :D
15:40:33 the one you shared
15:40:36 If I understand the issue correctly it will not work.
15:41:18 But the issue is not in the cleanup but in the creation of the backup itself.
15:42:08 When Swift is used as a backup driver we want to be able to tell through the API that we want the backup to be stored in a specific container.
15:42:15 So that we can clean it up properly later.
15:42:56 oh, ok, i see it now
15:42:58 It works fine when Swift is enabled. But when Ceph is used as a backup driver we get an error because Ceph does not understand the concept of a container.
15:43:08 ok :)
15:43:38 there is one danger in that, we will have 2 testing paths - when swift is enabled (or whatever) we create the container with different options
15:43:49 not saying it's an issue, it's just something that needs to be taken into account
15:44:26 ... in this case, it seems like another config opt makes sense
15:44:28 however
15:45:11 a new config opt means a new opt that needs to be set by the user as well as by our jobs in the CI .. so, if we go that way, how many jobs will we need to edit?
15:46:12 Well, I was thinking that we can set it to ceph by default. This should not influence any job because we will have different behaviour only when Swift is used as a backup driver.
15:46:57 And for the jobs which use Swift as a backup driver I think we can update the devstack/lib/tempest file so that it updates tempest.conf with the correct option (?)
15:47:14 sounds good, that would work
15:47:41 Ok, awesome :)
15:48:52 just avoid stating something like "adding a new option to clean up the container properly" .. it's more like adding a new option so that we can create a resource properly and avoid cleanup issues
15:49:51 Ack, I understand
15:50:36 also address https://bugs.launchpad.net/tempest/+bug/2034913 in your patch https://review.opendev.org/c/openstack/tempest/+/896011 .. and maybe it would be better to change the title as it's not a pure revert
15:50:48 it's more like a second try to resolve the original LP
15:51:01 while taking the new LP into account
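To summarize the direction agreed above in code: a minimal sketch, assuming a hypothetical `backup_driver` option in the `volume` group and a simplified helper; the actual change is the one under review in 896011, not this:

```python
# A minimal sketch of the agreed approach. The option name, its group,
# the 'ceph' default, and the helper below are illustrative assumptions,
# not the patch under review.
from oslo_config import cfg

from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils

CONF = config.CONF

# Hypothetical option; it would need to be registered under the
# "volume" group in tempest/config.py. Defaulting to 'ceph' keeps
# existing jobs unchanged, so only jobs deploying Swift as the backup
# driver have to set it.
backup_driver_opt = cfg.StrOpt(
    'backup_driver',
    default='ceph',
    choices=['ceph', 'swift'],
    help='Backup driver configured in cinder.')


def create_backup(test, backups_client, volume_id, **kwargs):
    """Create a volume backup and register its cleanup.

    The "container" parameter is passed only when Swift backs the
    backups: Swift uses it to scope the backup objects so they can be
    cleaned up later, while the Ceph driver rejects the parameter.
    """
    if CONF.volume.backup_driver == 'swift':  # hypothetical option
        kwargs.setdefault('container',
                          data_utils.rand_name('backup-container'))
    backup = backups_client.create_backup(
        volume_id=volume_id, **kwargs)['backup']
    # Ignore NotFound so a test that already deleted its backup passes.
    test.addCleanup(test_utils.call_and_ignore_notfound_exc,
                    backups_client.delete_backup, backup['id'])
    return backup
```

On the devstack side, the jobs that deploy Swift as the backup driver could then flip the option from devstack/lib/tempest with an iniset call along the lines of `iniset $TEMPEST_CONFIG volume backup_driver swift` (again an assumption, not an existing line).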
15:51:29 #topic Bug Triage
15:51:31 Ack
15:51:37 #link https://etherpad.openstack.org/p/qa-bug-triage-bobcat
15:51:52 numbers recorded, that's unfortunately all i had time for
15:51:59 that's all from my side
15:52:02 anything else?
15:52:20 Nothing from my side
15:52:51 cool, then we're done for today ..
15:52:53 thanks
15:52:56 #endmeeting