14:00:30 #startmeeting qa
14:00:31 Meeting started Tue Jan 26 14:00:30 2021 UTC and is due to finish in 60 minutes. The chair is masayukig. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 The meeting name has been set to 'qa'
14:01:03 elod: maybe we need to check with the train fix only, and the other EM branches also if the fix is already there
14:01:30 Hi, who all is here today?
14:01:31 sorry for interrupting, but it's time anyway
14:01:46 masayukig: hi
14:01:48 yeah
14:01:50 o/
14:01:56 o/
14:02:04 ok, let's start
14:02:18 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
14:02:23 Here's the agenda
14:02:44 #topic Announcement and Action Item (Optional)
14:03:03 I have no announcement and I don't see any items on the agenda.
14:03:07 So, let's skip
14:03:19 #topic Wallaby Priority Items progress
14:03:29 #link https://etherpad.opendev.org/p/qa-wallaby-priority
14:03:49 Any updates on the priority items?
14:04:07 I have some progress
14:04:13 cool
14:04:52 I need some feedback on the way I am doing the run_validation change. Is this the best way? #link https://review.opendev.org/c/openstack/tempest/+/763925 If you can take a look, that would be great
14:04:55 we have all the patches up for the tempest-horizon test move
14:04:59 #link https://review.opendev.org/q/topic:%22merge-horizon-test%22+(status:open%20OR%20status:merged)
14:05:19 the tempest one is merged, so the dashboard test runs as part of the tempest-full* jobs
14:06:13 paras333: thanks. ack, I hope I have time to look at it this week
14:06:23 masayukig: yeah, no rush
14:06:25 on RBAC testing, we have all the patches approved now
14:07:14 #link https://review.opendev.org/q/topic:%22admin-auth-system-scope%22+(status:open%20OR%20status:merged)
14:07:34 next we need to start moving tests to the new policy for compute and keystone
14:07:39 gmann: masayukig: for creating the gate job, what would be the place to start? I have never created gate jobs before, so I might need your help for the guest image work
14:08:01 gmann: great. I'll check the tempest-horizon ones.
14:08:14 I am thinking of using a Windows image for the guest image and running sanity tests on it
14:08:29 paras333: you can create it in https://github.com/openstack/tempest/blob/master/zuul.d/tempest-specific.yaml
14:08:49 as they will be tempest specific
14:09:01 gmann: ok, I will take a look
14:09:03 paras333: or are you planning to run those in other project gates too?
14:09:13 gmann: no, just on tempest for now
14:09:23 ok, then tempest-specific.yaml is fine
14:09:27 if we see value, we can add this to others as well
14:09:48 sure, at that time we can move it to the integrated file
14:09:55 gmann: ack
14:10:45 gmann: I didn't see errors in stable/ussuri as we specify USE_PYTHON3=True. stable/ussuri should fail if USE_PYTHON3=False, though I am not sure we need to cover such a case. It looks like it's during an office hour, so I won't comment more now.
14:11:14 amotoki: we will have that topic next, so you are right on time :)
14:11:46 ok, let's move on to the next topic if we have nothing else on this one.
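
(For reference, a new tempest-specific job like the guest-image one discussed above would be defined in zuul.d/tempest-specific.yaml roughly as in the sketch below. The job name, the image URL, and the localrc settings are illustrative assumptions, not the actual change; only the general layout and the devstack-tempest parent follow tempest's existing job definitions.)

    # Hypothetical job definition for zuul.d/tempest-specific.yaml
    - job:
        name: tempest-guest-image-sanity        # assumed name, for illustration only
        parent: devstack-tempest
        description: |
          Run a small set of sanity tests against an alternate guest image.
        vars:
          tox_envlist: full
          devstack_localrc:
            # assumed settings: point DevStack at a different guest image
            DOWNLOAD_DEFAULT_IMAGES: false
            IMAGE_URLS: https://example.com/path/to/guest-image.qcow2

(The job would also need to be listed in tempest's project stanza, e.g. in the check pipeline, for it to run on proposed changes.)
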
14:12:12 #topic OpenStack Events Updates and Planning
14:12:16 let's skip this topic
14:12:26 #topic Gate Status Checks
14:12:32 this is the topic
14:12:45 yeah
14:13:04 amotoki: I do not think we need to support USE_PYTHON3=False for ussuri onwards
14:13:50 and we have removed many py2 hacks from ussuri onwards, so supporting that case seems like reverting the python-only plan
14:13:51 gmann: I just proposed a fix for branches where python3_enabled() is conditional.
14:13:56 python3-only
14:14:15 maybe we should remove that in ussuri.
14:14:16 ++
14:14:20 yoctozepto: frickler ^^?
14:14:26 gmann: USE_PYTHON3 was dropped in victoria...
14:14:41 I am okay with either.
14:15:21 yeah, as we were transitioning to python3-only in ussuri, backporting the 'USE_PYTHON3 drop' to ussuri makes sense now
14:16:09 I think the backport is not necessary
14:16:14 simply ignore that
14:16:15 supporting the fix in train I agree with. the other EM branches are also fine if the fix is there
14:16:24 I mean the ussuri patch
14:16:30 elod: yeah, that also works, but sometimes it comes up again
14:16:54 yes, for train and older the patch is OK
14:17:18 amotoki: elod thanks for that, I will check those today
14:17:21 gmann: py2 is not even used in ussuri, so it's not necessary, I think
14:17:24 so should I drop my fix for ussuri?
14:17:44 gmann: thanks too, and thanks amotoki for the patch, too!
14:17:52 amotoki: yeah, we can drop it for ussuri, and if anyone needs py2 on ussuri then we say 'not supported'
14:18:01 ++
14:18:02 I mean drop the fix
14:18:21 okay, if so, we need to update the commit message of the train fix and the other backports (i.e. to drop the cherry-picked line)
14:18:37 amotoki: I can do that if you want
14:18:38 ah yeah.
14:18:41 is it worth rerunning CI for all the fixes in train and older?
14:19:14 good question :) maybe not? :)
14:19:28 I think it is tricky if the corresponding cherry-picked commit is abandoned...
14:19:41 ok, I am fine with that, and anyone can find out by checking the ussuri patch status
14:20:25 amotoki: anything is ok for me; fixing the commit message would be perfect, but it's up to you
14:21:43 gmann: okay, so I will keep the train fix as-is, and will abandon the ussuri fix.
14:22:15 ok
14:22:24 one minor note: currently the train patch requires the stein patch to merge first (grenade)
14:22:38 the others are not dependent on each other
14:22:42 elod: yeah, we need to merge them in reverse order
14:22:52 ++
14:23:20 thanks
14:23:51 the grenade job is non-voting. perhaps that allows us to land the fix in any order.
14:24:19 anyway, it is up to the QA team
14:24:37 amotoki: except @ train :) strange, but that is how it is now :)
14:24:39 let's see the gate result. I think we made it voting in stein, or it is not yet n-v there
14:24:51 yeah, and in train it is voting
14:25:22 tempest-full-train-py3 in periodic fails with the same get-pip error - https://zuul.openstack.org/build/dac028e6585b410c8bc108390b614f5a
14:25:30 this is a python3 job
14:25:40 I mean, strangely, in ussuri grenade is non-voting
14:25:55 elod: :) we forgot to undo that, maybe
14:26:03 ah. I did not notice elod added Depends-On to the train patch.
14:26:14 amotoki: elod tempest-full-train-py3 in periodic fails with the same get-pip error - https://zuul.openstack.org/build/dac028e6585b410c8bc108390b614f5a
14:26:18 ?
14:26:33 gmann: yes, the same error
14:26:46 gmann: yes, the same error
14:27:07 but it is the py3 path
14:27:38 in train, we always install pip for py27
14:27:46 and the error happens there.
14:27:55 the latest get-pip.py does not work with py35 either
14:28:02 ah right
14:28:15 py35 is only until stable/rocky afaik
14:28:39 ok. let's get those fixes in and we will have the stable/train gate back
14:28:40 tempest-full-train-py3 fails in rocky and queens
14:29:04 elod: yeah, they use py35, and from stein onwards it is py36 due to the bionic node migration
14:29:14 exactly
14:29:57 ok, anything else to discuss for this topic
14:29:59 ?
14:30:25 nothing from me, maybe we can move on
14:30:54 ok
14:31:06 #topic Periodic jobs Status Checks
14:31:22 This is a similar topic, though
14:31:23 we discussed it for train, which is failing now
14:31:27 yeah
14:31:35 for ussuri and victoria it is green
14:32:14 tempest-all is failing in the periodic pipeline on master
14:32:24 yeah, that is still broken
14:32:28 same issue
14:32:49 gmann: sorry if I missed it. Is this job fixed https://zuul.opendev.org/t/openstack/build/fdb70e77db1e43a8b793d4058bb8b8b8 ?
14:33:01 the lower-constraints one
14:33:21 paras333: no, we are still discussing in the TC and on the ML whether to drop the l-c job or fix it
14:33:30 gmann: ack, thanks
14:33:39 gmann: ok, we need to fix the issue anyway
14:33:42 paras333: we can make it n-v until then and unblock the gate
14:33:57 masayukig: yeah
14:34:04 let's move on to the next topic if there is nothing else
14:34:05 gmann: yeah, that makes sense, I will add the patch for hacking then
14:34:05 someone needs to debug it
14:34:12 paras333: cool.
14:34:29 yeah
14:34:53 paras333: ++
14:35:09 ok, let's move on to the next topic
14:35:10 #topic Sub Teams highlights (Sub Teams means individual projects under QA program)
14:35:47 #link https://review.openstack.org/#/q/project:openstack/tempest+status:open
14:35:52 #link https://review.openstack.org/#/q/project:openstack/patrole+status:open
14:35:52 #link https://review.openstack.org/#/q/project:openstack/devstack+status:open
14:35:52 #link https://review.openstack.org/#/q/project:openstack/grenade+status:open
14:35:52 #link https://review.opendev.org/#/q/project:openstack/hacking+status:open
14:35:59 we still have a slow review rate for tempest
14:36:26 yeah, there are a lot of reviews missing a second core vote :/
14:36:50 yeah
14:37:05 one way is to change the policy to single core approval
14:37:44 also, it is not only the 2nd core that is missing; many of them are not yet reviewed at all
14:38:24 yeah..
14:39:24 I feel it is time to go with single core approval, and if we feel we need another core to have a look, we can always have that
14:39:26 Artom Lifshitz proposed openstack/whitebox-tempest-plugin master: WIP: Different approach to TripleO job https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/762866
14:39:39 yeah, but I think we need to discuss changing the policy to single core approval, at least
14:39:54 on IRC and/or the ML
14:40:01 yeah, that's what we are doing here :)
14:40:16 or a patch for the policy document
14:40:18 yeah :)
14:40:20 I think we can discuss it on IRC
14:40:31 as it is up to the tempest team
14:40:43 if we decide, then I can push a patch
14:40:49 if all are ok with that
14:40:59 gmann: masayukig: If I understand correctly, we do need at least one other reviewer as well, right, even if he/she is not a core reviewer?
14:41:11 to merge the patch?
14:41:26 paras333: yeah, at least one core reviewer can merge it
14:41:34 paras333: currently it needs two cores
14:41:41 yes
14:42:21 correct
14:42:37 we need two +2s to approve the patch, basically
14:43:16 There are some exceptions, such as very urgent, very easy, etc., though
14:43:17 yeah, I know that. I am just thinking, do we need one more reviewer to merge it, even though they can just +1?
14:43:24 single core + domain expert (including non-core) is an option if you don't have enough core review bandwidth. I am not sure it works in your case
14:43:39 so one +1 and one +2, basically
14:43:58 paras333: yeah, with a +1 we will still need at least one +2
14:44:32 correct, yeah, I am totally on board with this
14:44:39 amotoki: yeah, that is the issue; the tempest team has been facing a review bandwidth problem (two cores to merge) for a year or so
14:44:40 amotoki: yeah, that's a good idea.
14:45:07 as amotoki suggested, this could be a great idea
14:45:17 amotoki: masayukig: we can always ask for a domain expert review from the tempest team or other teams anyway
14:45:21 to have one core expert reviewing as well
14:45:31 like sometimes I ask the neutron team to review neutron test changes
14:45:41 I haven't had the bw recently.. :(
14:45:43 gmann: yeah
14:46:21 mostly we are stuck with kopecmartin and myself waiting for the other +2 on our patches
14:47:13 sorry about that.. don't blame me :p
14:47:41 :) no, it's the complete team I think, not just you. you are already doing more than your current bw
14:48:07 so I propose the patch, and then we can review it and see if anyone opposes that
14:48:23 and if we merge it, then I can push it to the ML as a notification
14:48:27 is that fine?
14:48:40 sure
14:48:42 another idea is to apply an exception that a patch from a core reviewer can be approved by a single +2.
14:48:47 gmann: thanks :) yeah, that's good for me
14:49:55 amotoki: I see, but I am not sure that sounds good and fair for non-cores. do any other projects have such a policy?
14:50:39 that can solve things to some extent for core patches, but non-core patches will still face the issue
14:50:52 gmann: that's just an idea. horizon stable has such a policy that a backport from a stable core can be approved by a single +2, but it is just about stable backports.
14:51:08 ohk
14:51:33 interesting
14:51:38 we still have a few exceptions for gate fixes and trivial changes
14:51:53 yeah #link https://docs.openstack.org/tempest/latest/REVIEWING.html#when-to-approve
14:52:45 gmann: we kinda have that policy for stable in nova
14:52:51 in the case of a backport, a stable core is expected to be familiar with the stable policy, so such a rule can be reasonable, but it is not usually true for master changes.
14:52:57 right ^
14:53:10 it's a little different than a review on master, though, because the patch isn't net-new
14:53:34 but I think it probably makes sense to do what gmann is suggesting, especially for smaller things that are unlikely to generate problems and can be validated by an SME
14:53:35 yeah, for backports it might be easier, as the code changes were already reviewed by two cores on master
14:53:48 yeah
14:53:57 #topic Open Discussion
14:54:04 we're in this topic
14:54:13 can I bring up devstack things here?
14:54:33 dansmith: yes
14:54:44 so I have been working on this: https://review.opendev.org/c/openstack/devstack/+/771505
14:55:01 it makes devstack setup 25% faster in my local env
14:55:24 it's really not very complicated, and it's easy to disable so that it becomes exactly like the current code in terms of behavior
14:55:51 there is a lot left to optimize to get it even faster, I think; I've only really parallelized nova and some of the project setup
14:56:20 25% faster sounds great!
14:56:21 there's a lot of serialized work we can squeeze out of the devstack process, which (a) could help the gate and (b) makes a quick devstack run locally much more palatable
14:56:59 I would love to keep working on it, but I don't want this one patch to grow too large, and I don't want to put in the effort if people here aren't likely to accept it
14:57:20 dansmith: Thanks, I will check it sometime this week. so your change enables it by default, but there is a way to disable it too?
14:57:25 so, I'm just hoping to get some reviews on it (I know, right after the discussion about review bandwidth :)
14:57:33 gmann: thanks for the reply re: the devstack based barbican job
14:57:38 https://review.opendev.org/c/openstack/devstack/+/771505/6/inc/async
14:57:56 dansmith: I see
14:57:57 it's off by default, but I'd definitely like to enable it on some jobs and then flip it to default at some point
14:58:22 dansmith: do you have some job with it enabled?
14:58:44 gmann: it ran enabled by default before I added the toggle in the very last patch set
14:59:15 gmann: I haven't seen it fail the devstack job yet, and the tempest jobs have mostly worked, but some failed for other unrelated reasons
14:59:37 so there should be several logs
14:59:44 but I can push a patch on top to change the devstack job config to async if you want, just to make it obvious
15:00:01 dansmith: or maybe a new devstack-platform job can be added in the same patch to see the enabled behavior?
15:00:32 devstack-platform-async
15:00:40 sure, although you can't really compare the runtimes of two jobs to tell the improvement, you have to see it over many runs
15:00:44 sorry, we're running out of time for the office hour. So, I'm closing it.
15:01:00 Paras Babbar proposed openstack/hacking master: Updating lower-constarints job as non voting https://review.opendev.org/c/openstack/hacking/+/772556
15:01:01 yeah, we can close the office hour and continue
15:01:02 I was hoping to show those numbers, but the same worker will yield vastly different runtimes on the same job from minute to minute :)
15:01:08 #endmeeting
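
(The devstack-platform-async job gmann suggests would be a thin wrapper that turns on the parallel setup path from https://review.opendev.org/c/openstack/devstack/+/771505. A minimal sketch is below, assuming the toggle is exposed as a localrc variable; the variable name DEVSTACK_PARALLEL and the parent job shown here are assumptions, since the patch was still under review at this point.)

    # Hypothetical job definition, e.g. for devstack's Zuul config
    - job:
        name: devstack-platform-async
        parent: tempest-full-py3          # parent choice is an assumption
        description: |
          Same as the base job, but with DevStack's parallel (async)
          service setup enabled, to compare behavior and runtimes.
        vars:
          devstack_localrc:
            DEVSTACK_PARALLEL: true       # assumed name of the async toggle
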
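(Making the lower-constraints job non-voting, as in the hacking change announced at 15:01:00, is normally a small tweak to the project's Zuul configuration. A rough sketch follows; the actual layout of openstack/hacking's config may differ, so treat this only as an illustration of the voting: false knob.)

    # Sketch: marking openstack-tox-lower-constraints non-voting in the check pipeline
    - project:
        check:
          jobs:
            - openstack-tox-lower-constraints:
                voting: false
        # non-voting jobs are not allowed in the gate pipeline,
        # so the job is normally removed from gate as well
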