09:00:09 #startmeeting qa
09:00:10 Meeting started Thu Apr 20 09:00:09 2017 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:13 The meeting name has been set to 'qa'
09:00:18 who all here today
09:00:22 o/
09:00:25 o/
09:00:38 o/
09:01:08 \o
09:01:14 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_April_20th_2017_.280900_UTC.29
09:01:16 ^^ today's agenda
09:01:34 #topic Previous Meeting Action review
09:01:46 seems like no open action items from last meeting
09:01:57 #topic The Forum, Boston
09:02:13 we have 1 session approved
09:02:18 #link http://forumtopics.openstack.org/openid/login/?next=/cfp/details/111
09:03:06 gmann: yeah I'm curious to see how that one turns out
09:03:19 andreaf: was that decided by the TC?
09:03:31 I've just read the proposal, lgtm
09:03:34 there was a sessions review committee
09:03:39 I know Tempest is used in production at several places
09:03:44 the others seem rejected #link http://forumtopics.openstack.org/openid/login/?next=/cfp/details/112 http://forumtopics.openstack.org/openid/login/?next=/cfp/details/113
09:03:44 I don't think it was the TC
09:03:50 ohk
09:04:23 andreaf: what's the plan for that?
09:04:36 for the rejected ones?
09:04:46 no, the approved one
09:05:07 andreaf: is that the 40 min or the 15 min one?
09:05:24 * gmann still gets confused between Forum and onboarding sessions
09:05:39 I think it's a 40min one
09:05:45 ok
09:06:35 gmann: andreaf what about hosting a hangout session or recorded session for the rejected forum topics after the summit and putting it on youtube. just a thought
09:06:36 I plan to leave it really open ended - if no-one has feedback we can use it to answer questions
09:07:19 andreaf: i see, any presentation etc from our side?
09:08:00 gmann: I think this is more like a design session in the old summit - I will set up an etherpad with some details about our projects but that's about it
09:08:03 chandankumar: we can see how much space/time we have at the summit but I'm not sure about recording
09:08:15 andreaf: i see. cool
09:08:18 chandankumar: heh I was thinking about starting in the ML
09:08:47 chandankumar: if there is enough interest we can set up some kind of meeting, IRC or hangout or whatever works best
09:09:12 andreaf: that will be good.
09:09:28 #action andreaf to setup etherpad for Forum session
09:09:31 andreaf: ^^
09:09:32 chandankumar: the main idea behind this cross-session is to have interactive discussion with folks so a recorded session might not be that effective
09:09:53 Onboarding session: #link https://etherpad.openstack.org/p/BOS-QA-onboarding
09:09:55 andreaf: then the ML would be a better idea
09:10:03 chandankumar: if there is enough to discuss when we get to the next PTG we will re-submit them there
09:10:11 +1
09:10:15 feel free to put ideas on the Onboarding etherpad
09:10:28 andreaf: chandankumar yea. that will be nice
09:11:09 i am hoping to have at least half a day for a code sprint at the summit, or at least a QA area with feature discussions
09:11:27 including new developers
09:11:50 gmann: heh yeah I think there will be some space for unconference type of meetings
09:12:15 yea.
09:12:34 i have my presentation on Monday so i'm free for the next 3 days :)
09:13:02 as long as my boss does not bother me :)
09:13:21 #topic Gate Stability - status update
09:13:30 something went up in the check queue
09:13:32 #link https://goo.gl/ptPgEw
09:14:03 this one #link http://status.openstack.org/elastic-recheck/#1355573
09:14:19 but i did not get a chance to look into the detail
09:14:29 andreaf: jordanP any updates from your side
09:14:30 first I think we can say that most of the libvirt issues are gone
09:14:43 thanks to the usage of UCA packages
09:14:55 so the situation is much much better now
09:14:56 \o/
09:15:03 \o/
09:15:22 yay!!
09:15:33 the ceph job is failing due to the premature merge of an infra patch
09:15:43 i am reverting that #link https://review.openstack.org/#/c/458349/
09:15:56 in case anyone is curious about the failure.
09:16:02 i replied on the ML also
09:16:04 we also have a couple of periodic jobs with 100% failure, but it's not a big deal to fix them
09:16:43 yea, the 'all' job has been broken for a long time and nobody noticed
09:17:02 any suggestion to track those? track status in the weekly meeting?
09:17:30 periodic jobs are not top priority, not sure it's worth talking about them in meetings
09:17:39 just have a look at them from time to time
09:17:40 at least we will have eyes on periodic job status
09:17:42 yeah
09:18:10 maybe just status, not any further discussion which takes time in the meeting
09:19:01 ok
09:19:03 maybe under gate stability we can check. and in the agenda we can have a link to have a glance
09:19:07 #topic Specs Reviews
09:19:10 jordanP, gmann: we might want to have something like a periodic job liaison or a rotation like for bug triage - I'll think about it
09:19:34 andreaf: how about merging that into bug triage
09:19:46 gmann: yeah we could do that
09:19:57 and the same people can provide status instead of tracking separately
09:20:07 ok
09:20:27 gmann: ok can you add that on to the bug etherpad?
09:20:34 sure
09:20:52 #action gmann to add periodic job status tracking on bug triage etherpad
09:21:00 on specs
09:21:03 #link https://review.openstack.org/#/q/status:open+project:openstack/qa-specs,n,z
09:21:24 * andreaf forgot the power adapter and will run out of battery at some point...
09:21:36 we have the HA testing spec updated from samP
09:21:40 #link https://review.openstack.org/#/c/443504/
09:21:51 i doubt i can have a look into it before the summit
09:22:09 if anyone else has time feel free to provide feedback
09:22:27 ok will do, I didn't manage this time
09:23:08 andreaf: thanks
09:23:19 anything else on specs?
09:23:23 btw, the upgrade testing tools spec was abandoned - I assume the entire osic crew was affected by Intel pulling the plug on osic
09:23:38 #link https://review.openstack.org/#/c/449295/
09:23:54 I didn't manage to talk to castulo yet though
09:23:58 andreaf: yea, that was not good for upstream
09:24:27 OSIC folks were doing great contribution on the nova side too
09:25:15 #topic Tempest
09:25:27 #link https://review.openstack.org/#/q/project:openstack/tempest+status:open
09:25:55 open reviews ^^
09:26:33 1 thing we should merge first is the cinder API version things
09:26:55 i saw 1 patch doing cinder v3 tests and also adding a duplicate client for v3
09:27:28 i hope we will go with a non-versioned client but it's not clear that basing it on catalog_type is the best approach
09:27:55 oomichi might have some thinking about those
09:29:08 gmann: ok I need to check back on those
09:29:15 mainly we need to merge the v2 and v3 clients in a single place and continue on v3 testing without adding a duplicate client for v3
09:29:24 ideally the catalog should be unversioned and a version list can be retrieved and cached or so (but it must be project independent)
09:29:36 andreaf: yes,
09:29:42 I'm not quite sure what the status is on the catalog work
09:30:07 currently those are versioned and devstack registers different endpoints for v2 and v3 under different catalog_type
09:30:29 yeah in devstack - that is not very nice
09:31:17 from a Tempest side we still replace the version in the URL - an alternative could be to pull the list of versions and cache it somewhere to avoid the extra round trip every time
09:31:21 #link https://github.com/openstack-dev/devstack/blob/master/lib/cinder#L383-L398
09:31:41 but for now I think rewriting the version in the URL is still the best bet
09:31:53 I haven't heard of anyone having issues with that
09:32:14 andreaf: and will the version things be fetched from the test case or the clients?
09:32:56 andreaf: if from the client then we have to have separate client classes with the version overridden, which i do not like actually
09:33:03 gmann: if we wanted to avoid rewriting the version in the URL we could have a singleton which fetches the versioned URLs the first time
09:33:22 like this - https://review.openstack.org/#/c/442691/18/tempest/lib/services/volume/v3/volumes_client.py
09:34:26 andreaf: you mean depending on which version people/jobs want to run, tests can be loaded on the url at starting?
09:34:30 gmann: yeah basically we can continue to assume that the URL contains v2 or v3 and just append that to the base URL like we do today
09:35:02 andreaf: but that needs a dummy client class for each version like - https://review.openstack.org/#/c/442691/18/tempest/lib/services/volume/v3/volumes_client.py
09:36:02 gmann: I'm just talking about how we obtain the version specific endpoint
09:36:23 ok.
09:36:27 anyways let's discuss that separately. we know where the problem is and we can find the best solution
09:36:38 gmann: the proper way would be to get it from the catalog - but I think it's fine if we continue to just build it by hand like today
09:36:51 gmann: yeah let's move on
09:37:02 yea
09:37:03 next is Bug Triage:
09:37:15 #link https://etherpad.openstack.org/p/pike-qa-bug-triage
09:37:31 it was mkopec's turn this week
09:37:41 yes, I've confirmed two bugs
09:37:50 and created a report in the etherpad
09:37:54 thanks.
09:37:57 #link https://etherpad.openstack.org/p/tempest-weekly-bug-report
09:38:44 martinkopec: anything urgent we need to check on any bug, like a critical one etc
09:38:48 wow 8 high importance and 145 open!
09:39:20 gmann, I think it would be good to finish some bugs
09:39:24 martinkopec: thank you for doing the bug triage! :)
09:39:27 because the number is quite huge
09:39:36 martinkopec: +1
09:39:36 *the number of open bugs
09:39:47 martinkopec: yea. thanks a lot.
09:39:56 open bugs we should kill
09:40:12 martinkopec: did you look at new bugs only or old ones as well?
09:40:31 andreaf, I looked at new bugs mainly
09:40:39 martinkopec: ok
09:41:01 next turn is oomichi
09:41:17 we might want to do a bug smash at some point because the backlog is growing too much now
09:41:39 +1
09:41:47 and I'm not really sure the data is valid anymore - e.g. how can we have 8 high importance issues which are not addressed?
09:42:15 #link https://bugs.launchpad.net/tempest/+bugs?search=Search&field.importance=High&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
09:42:24 ^^ the high importance ones
09:42:36 a few of them are very old
09:43:15 for instance, https://bugs.launchpad.net/tempest/+bug/1609156, I forgot to mark it as resolved :P
09:43:17 Launchpad bug 1609156 in tempest "Test accounts periodic job is broken" [High,Fix committed] - Assigned to Andrea Frittoli (andrea-frittoli)
09:43:38 heh.
\o/ 1 less now :)
09:44:03 some are pending on the patches side like #link https://review.openstack.org/#/c/392464/
09:44:19 let's move next
09:44:38 #topic patrole
09:45:12 the patrole team needs suggestions on the patrole release model
09:45:19 there is a ML thread #link http://lists.openstack.org/pipermail/openstack-dev/2017-April/115624.html
09:45:30 is anyone around from Patrole?
09:45:52 gmann: yes thanks for replying to that
09:45:56 i think they
09:46:17 are online at 17utc
09:46:20 IMO it should be branchless and released the same way as Tempest
09:46:31 gmann: I guess patrole should run the release job eventually
09:46:36 but more feedback/suggestions are welcome
09:46:46 +1
09:47:11 gmann: yeah branchless is to ensure consistency across releases which applies to policy testing as well
09:47:21 yea.
09:47:50 gmann: one thing I'm really worried about with patrole though is that it is all done by folks working for the same company
09:48:10 gmann: and the rest of us really have little involvement / insight into it
09:48:24 the only burden might be feature flags but policy things change very rarely in that respect. at least i can tell that from the nova perspective
09:48:29 andreaf: yea that's true
09:48:39 gmann: we need to change that before we start investing more in Patrole I think
09:48:58 but that's just my feeling
09:49:08 andreaf: i occasionally review there but we should start putting review bandwidth there
09:49:35 well I would like the Patrole folks to actively seek new non-at&t contributors :)
09:49:43 I will raise the point at the next meeting
09:49:57 honestly I won't have much bw to review patrole code myself
09:50:11 true, contributors from different companies are key in upstream
09:50:44 10 min left, let's move next
09:50:47 #topic DevStack
09:51:07 jordanP's patch about swift services was merged :)
09:51:14 yeah, that's good
09:51:16 nice.
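[editor's note] The two options debated in the Tempest topic above (keep rewriting the version segment in the catalog URL, or fetch the version list once and cache the versioned endpoints) can be sketched roughly as below. Every name here (VersionedEndpointCache, rewrite_url_version, the injected lister callable) is a hypothetical illustration, not actual Tempest code.

```python
import urllib.parse


class VersionedEndpointCache:
    """Cache the version-specific endpoint per (base_url, version) so the
    version list only has to be fetched once, not on every request."""

    def __init__(self, version_lister):
        # version_lister: a callable(base_url) -> {version: endpoint_url}.
        # In a real client this would GET the unversioned endpoint and
        # parse the version document; here it is injected so the sketch
        # stays self-contained.
        self._version_lister = version_lister
        self._cache = {}

    def endpoint_for(self, base_url, version):
        key = (base_url, version)
        if key not in self._cache:
            self._cache[key] = self._version_lister(base_url)[version]
        return self._cache[key]


def rewrite_url_version(url, version):
    """The simpler approach the meeting leaned towards: assume the catalog
    URL already contains a version segment (v2, v3, ...) and replace it."""
    parsed = urllib.parse.urlparse(url)
    parts = [version if p.startswith('v') and p[1:2].isdigit() else p
             for p in parsed.path.split('/')]
    return parsed._replace(path='/'.join(parts)).geturl()
```

The cache avoids both the per-request URL surgery and the extra round trip andreaf mentioned, at the cost of one initial version-list fetch per service.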
09:51:30 jordanP: also sdague did a lot of work around uwsgi
09:51:40 yea
09:51:41 I think we are back to normal memory consumption, but we should always be aware of it
09:51:53 I am not sure what the impact of uwsgi will be
09:52:01 in terms of memory
09:52:41 also, given that I feel we should stick to serial scenario runs and concurrency = 2 for api, we also should be very aware of the overall run time of our jobs
09:53:12 1h20 is a lot already. And it's super hard to delete tests, so when new tests are added, we should make super sure it's not a duplicate test
09:53:40 yes, +1 for serial on scenario
09:54:04 jordanP: yeah well it depends a lot on the test node, it can be below 1h in the best case
09:54:16 but I agree 1h 20' is a lot
09:54:45 for instance https://review.openstack.org/#/c/458092/, it's all over 1 hour
09:54:53 jordanP: if we manage to make gate-tempest-dsvm-neutron-scenario-multinode-ubuntu-xenial-nv voting we could move all scenario tests there
09:55:05 1h05 is the fastest job
09:55:32 yeah
09:55:48 jordanP: but I'm not sure having two jobs is really ideal either
09:55:53 but that multinode job is already slow, so not sure it will help
09:56:01 no, we already have too many jobs :)
09:56:16 we should merge both
09:56:31 will it be too slow?
09:56:44 gmann: well we just separated them, I don't think we should merge them back
09:57:17 not the current scenario one. i mean multinode and scenario multinode
09:57:24 gmann, jordanP: if we increased concurrency to two in gate-tempest-dsvm-neutron-scenario-multinode-ubuntu-xenial-nv, or even better if we removed some unnecessary scenario tests, the job would be much faster
09:57:55 if there is coverage somewhere else then we can remove them, otherwise we should not i think
09:58:08 andreaf, +1. We should skip some scenarios
09:58:12 we should get rid of slow API tests if somehow we can
09:58:23 2 min left
09:58:25 let's move
09:58:31 gmann, it's hard, slow api tests make sense,
09:58:37 i will skip grenade and o-h
09:58:43 we can merge some API tests to not pay the setup cost twice
09:58:47 but this has some side effects
09:58:59 jordanP: humm
09:59:05 #topic Destructive Testing
09:59:10 we already talked about it
09:59:33 @topic open discussion
09:59:37 jordanP: perhaps we should have a review of our jobs with some proposals and discuss it in the next meeting
09:59:40 anything on open
09:59:47 jordanP: can you setup an etherpad for that?
09:59:50 #topic open discussion
09:59:51 andreaf, yes !
09:59:58 will do
10:00:02 gmann: Tempest 16.0.0 is out
10:00:04 jordanP: thanks
10:00:05 jordanP: thanks
10:00:07 #endmeeting