16:00:49 <eglute> #startmeeting defcore
16:00:49 <openstack> Meeting started Wed Mar 23 16:00:49 2016 UTC and is due to finish in 60 minutes.  The chair is eglute. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:50 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:52 <openstack> The meeting name has been set to 'defcore'
16:01:11 <eglute> Hello Everyone! here is the agenda for today: #link https://etherpad.openstack.org/p/DefCoreRing.16
16:01:12 <rockyg> o/ I'm here and I was early!
16:01:17 <dwalleck> o/
16:01:18 <catherineD> o/
16:01:21 <docaedo> o/
16:01:30 <luzC> hello
16:01:38 <gema_> o/
16:01:39 <eglute> Good to see everyone here! And if you haven't yet, raise your hand o/
16:01:40 <hogepodge> o/
16:01:59 <eglute> Please review agenda and amend as needed
16:02:09 <eglute> #chair markvoelker
16:02:10 <openstack> Current chairs: eglute markvoelker
16:02:16 <markvoelker> o/
16:02:19 <brunssen> o/
16:02:23 <eglute> #topic Tempest tests analysis results
16:02:49 <eglute> dwalleck did a lot of analysis on current tests, as discussed during the midcycle
16:02:57 <eglute> dwalleck, go ahead
16:03:53 <dwalleck> So based on feedback from the midcycle, I tinkered a bit with the Tempest subunit output and found we could easily get the steps a test took from the output
16:04:10 <dwalleck> Any HTTP requests, that is. SSH or other steps should be possible too
16:04:35 <dwalleck> This was the result: https://github.com/dwalleck/defcore-tools/blob/master/generated_test_analysis.txt (this was supposed to be markdown, but I had some last minute parsing issues)
16:05:26 <dwalleck> Some tests do some very weird and unexpected things. I'm still looking through the reporting now
16:05:56 <dwalleck> I also did some manual analysis of the tests, which I'll wrap up this week: https://github.com/dwalleck/defcore-tools/blob/master/manual_test_analysis.txt
16:06:19 <gema_> dwalleck: you generated that report automatically, right?
16:06:36 <gema_> dwalleck: could you do that for the whole of tempest? or is that all of it?
16:06:38 <dwalleck> There's definitely opportunities for refactoring based on what I'm seeing
16:07:08 <eglute> this also shows how non-atomic some of these tests are
16:07:23 <dwalleck> gema_: You could do it for all of Tempest. I have links to the code we put together to do this in the etherpad for this week's meeting
16:07:39 <gema_> dwalleck: awesome, thanks!
16:07:56 <dwalleck> The subunit parser: https://github.com/arithx/subunit-parser and the pretty printer/reporter: https://github.com/dwalleck/defcore-tools/blob/master/process_results.py
16:08:23 <gema_> yep, got it now, sorry I got carried away by the report
16:08:25 <dwalleck> This should live somewhere in an OpenStack repo if this turns out to be useful
16:08:30 <dwalleck> No worries!
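The per-test step extraction dwalleck describes could be sketched roughly like this. The log-line format and all names below are assumptions for illustration only; the actual tooling linked above parses Tempest's subunit stream rather than plain log text:

```python
import re
from collections import defaultdict

# Hypothetical log-line format loosely modeled on Tempest's rest_client
# logging; the real subunit-parser linked above works from subunit streams.
REQUEST_RE = re.compile(
    r"Request \((?P<test>[\w.:]+)\): (?P<status>\d{3}) "
    r"(?P<method>[A-Z]+) (?P<url>\S+)"
)

def requests_per_test(log_lines):
    """Group the HTTP calls each test made, in order of appearance."""
    steps = defaultdict(list)
    for line in log_lines:
        match = REQUEST_RE.search(line)
        if match:
            steps[match.group("test")].append(
                (match.group("method"), match.group("url"),
                 match.group("status"))
            )
    return dict(steps)

# Fabricated sample lines for the sketch:
sample = [
    "Request (ImagesOneServerTestJSON:test_create_delete_image): 200 POST http://cloud/v2/images",
    "Request (ImagesOneServerTestJSON:test_create_delete_image): 204 DELETE http://cloud/v2/images/42",
]
print(requests_per_test(sample))
```

A report like the one linked above is then just a pretty-printed walk over this per-test mapping.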
16:09:13 <rockyg> might be good in refstack repo.  or osops
16:09:29 <eglute> what does everyone think of this kind of analysis? I think tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image test is a good example to look at
16:09:33 <dwalleck> My goal for next week is to have an etherpad with issues I've encountered that I think we can improve on
16:10:04 <rockyg> It's a great start and a great tool.
16:11:02 <rockyg> QA could use it to prune/tweak tests to increase coverage and decrease duplication and run times
16:11:20 <eglute> i agree rockyg
16:11:34 <dwalleck> So, to make sure I'm doing the right thing, does what I'm doing match up with what we talked about in Austin?
16:11:39 <eglute> i also think we can use it for analysis what we want in defcore tests
16:11:56 <markvoelker> dwalleck: yes, this looks useful to me
16:12:19 <gema_> dwalleck: definitely very useful, we can see gaps (not only good/bad test cases)
16:13:01 <dwalleck> Good deal! Then I'll keep going down that path. I think slowrie and I are pretty close to done
16:13:19 <eglute> i think this really illustrates the need for defcore to own the tests. perhaps as a defcore tempest plugin. what does everyone think?
16:13:31 <hogepodge> eglute: why?
16:13:38 <gema_> dwalleck: I am a bit worried that some like line 564 don't have output
16:13:39 <eglute> why which part?
16:13:46 <gema_> dwalleck: not sure how to interpret that
16:13:58 <hogepodge> eglute: I don't see how defcore having to "own" the tests follows
16:14:28 <dwalleck> gema_: If I'm thinking of the right issue, that test may not have any HTTP requests. It may just be doing SSH work
16:14:46 <dwalleck> Ideally we'd have the asserts in here too, but that's phase 2 :)
16:14:52 <gema_> dwalleck: ok, will look into it, what if it is using a client, would it show up?
16:15:13 <eglute> right now we are dependent on these tests, that are part of the gate, etc. if we pull them into our own plugin, we could start fixing them, making them more atomic
16:15:31 <gema_> eglute: or we could do that leaving them in place also
16:15:34 <hogepodge> eglute: the tests aren't broken.
16:15:38 <dwalleck> eglute: I think with the list of potential refactors in hand, we can have a good conversation with the QA team to see if they find them useful
16:15:55 <eglute> the tests are not atomic. and not testing interop.
16:15:58 <hogepodge> eglute: and nothing is stopping anyone from contributing to tempest directly. It's what the tempest team wants
16:16:05 <catherineD> I see that the auto-generated report provides good statistics on API usage.  The manual report describes the tests themselves.  Both are useful.  How could we scale generating the manual report for all tests?
16:16:25 <rockyg> dwalleck,
16:16:27 <rockyg> ==
16:16:32 <rockyg> ++
16:16:34 <hogepodge> plus, we don't gate on plugins (yet)
16:16:59 <hogepodge> so a tempest plugin would not have the advantage of hundreds of tests running every day on every patch. Tempest has that directly
16:17:07 <eglute> we could have defcore gate job, no? either voting or non-voting?
16:17:27 <dwalleck> hogepodge: There's definitely more going on here than interop. Some of them don't even test what they say they're testing (The AutoDisk/ManualDisk config tests don't actually test anything related to disk size and partition schema)
16:18:08 <rockyg> also to add to what hogepodge says, it makes it very easy for our tests to diverge from tempest/gate tests and functionality
16:18:08 <dwalleck> We can try to fix them, but that's assuming the QA team agrees with the aspect of keeping tests atomic
16:18:12 <catherineD> dwalleck: Is there a plan to auto-generate the report that is currently generated manually?
16:18:34 <dwalleck> catherineD: Yes, but that's a bit more work
16:18:41 <hogepodge> dwalleck: I'm arguing to not throw away a valuable resource in tempest. If there's a weakness in the test suite, I want it fixed in the most effective way. Throwing away years of work and a highly functioning team is not the way to go about fixing the issues
16:18:51 <gema_> hogepodge: +1
16:18:56 <docaedo> If a test is supposed to do something, but it doesn't, we shouldn't we be working on fixing the actual test, not making a different one somewhere else?
16:19:17 <dwalleck> hogepodge: I'm not advocating to throw it away. I'm talking about growing it forward
16:19:20 <rockyg> I think QA is swamped and needs help, but right now they are trying to fix framework issues.  If they had people to help with framework, they could review and accept more tests
16:19:32 <hogepodge> docaedo: fix the test upstream, or add  a new test to the existing test suite.
16:19:38 <eglute> hogepodge i am not suggesting we depart from tempest... i am thinking of a tempest plugin, starting with existing tests.
16:19:50 <gema_> eglute: what is the benefit of that
16:19:51 <dwalleck> And I think that I have to demonstrate at least some examples of the issues I'm discussing for this discussion to have any meaning
16:19:55 <gema_> vs keeping them in tempest?
16:19:56 <docaedo> hogepodge: yes, I agree - work with tempest directly
16:20:11 <hogepodge> eglute: no, that's a fork, and a fork is not in the spirit of open collaboration
16:20:22 <catherineD> I think that these reports can at the minimum help DefCore gain insight into the tests ...  at least they are useful in selecting the must-pass tests
16:20:24 <docaedo> oh I typeoed - we SHOULD be working on fixing the actual test I meant
16:20:29 <rockyg> A spec from defcore for how to atomize (heheh) tests that defcore needs would go a long way in broadcasting both our needs and intentions for *tempest* tests
16:21:05 <dwalleck> rockyg: To rocky's point, if we defined a spec backwards from the tests as is right now, you'd have a lot of noise
16:21:19 <gema_> dwalleck: we should define the spec regardless of the tests
16:21:24 <gema_> and then make the tests match the spec
16:21:44 <dwalleck> If these tests are the strict guidelines of what defcore is, then they need to be strict and concise
16:21:45 <rockyg> We need to support QA, not go around it.  At least where it makes sense.  We should fix the tests that are important to us and are in tempest before we go and start a new test project
16:22:48 <dwalleck> My concern is that a functional test doesn't always equal a defcore test. I don't want to compromise what we're testing based on functional testing
16:23:00 <dwalleck> For example, the suggestion to remove all negative tests from Tempest
16:23:10 <rockyg> Let's first attempt to refactor/fix what we use, then we can discuss expanding/changing.  I think simply acting on the analysis will take stress off both defcore and QA
16:23:14 <gema_> I think we are talking about how to fix an issue that we don't even know whether we have, or what its extent is, until dwalleck finishes the analysis
16:23:37 <rockyg> dwalleck, ++
16:23:56 <rockyg> gema ++
16:24:00 <gema_> dwalleck: we'll have to talk to them about negative tests then
16:24:01 <dwalleck> Which I think now has been tabled, but could have heavily impacted DefCore. For my own sake outside of DefCore or gating, I would've made my own plugin repo because I use those tests
16:24:13 <hogepodge> dwalleck: I'm not disagreeing that we can have better API tests. I am disagreeing that we run off on our own with a plugin. It will be met with resistance, and part of our mandate is to be community driven. It's good analysis, but it does not follow that tempest must be abandoned, especially when they want to work with us
16:24:30 <catherineD> gema_: ++ At the minimum, dwalleck's complete reports can let DefCore know the types of must-pass tests chosen today
16:24:47 <eglute> perhaps i misunderstand the tempest plugins. just trying to figure out how we can have better tests without running into various barriers that were brought up during midcycle and other discussions.
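For context on the plugin idea being debated: a Tempest plugin is an ordinary Python package that Tempest discovers through a setuptools entry point, so its tests live outside the Tempest tree. A minimal sketch of the wiring (the package and class names below are hypothetical):

```ini
# setup.cfg of a hypothetical out-of-tree test package.
# Tempest scans the "tempest.test_plugins" entry-point namespace at run
# time and loads any plugin classes registered there.
[entry_points]
tempest.test_plugins =
    defcore_tests = defcore_tempest_plugin.plugin:DefCoreTempestPlugin
```

This is what makes the fork concern concrete: tests in such a package are discovered and run by Tempest, but reviewed and gated separately from it.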
16:24:53 <rockyg> I think defcore/QA need a joint design summit session
16:25:03 <markvoelker> hogepodge: generally I'm ++ on working with the existing tests when possible.  I really don't want to get into the business of having two definitions for what defines "working" for any given feature.
16:25:09 <gema_> eglute: we are not running into any barriers afaik we are not writing tests yet
16:25:12 <gema_> or are we?
16:25:13 <dwalleck> hogepodge: I'm not arguing that we run off and do something else. I'm saying I want to have this conversation with the QA team based on data and concepts
16:25:30 <rockyg> markvoelker, exactly
16:25:55 <rockyg> dwalleck, ++
16:26:03 <catherineD> dwalleck: we need to identify the set of tests to talk with QA
16:26:13 <catherineD> and your reports provide that
16:26:32 <rockyg> Let's see if we can schedule a discussion session with QA, then also a working session after to demo what dwalleck is talking about.
16:26:39 <dwalleck> At the end of the day, any of my analysis is my opinion. I'm not going to run off and work in a silo. But I think these are things worth discussing
16:26:52 <gema_> dwalleck: absolutely
16:27:10 <eglute> +1 on having a session with the QA team.
16:27:12 <hogepodge> dwalleck: I'm not disagreeing about that
16:27:15 <rockyg> POC.  any chance we could have a single/few tests refactored based on your tool analysis by summit?
16:27:20 <gema_> dwalleck: but the more we can engage QA in the discussion and in the improvements / interop requirements, the better
16:28:25 <markvoelker> So dwalleck: perhaps with the Summit just around the corner that would be a good time to circle up w/QA?
16:28:33 <markvoelker> That would give you some time to wrap up your analysis
16:28:41 <markvoelker> And for the rest of us to digest it
16:28:50 <dwalleck> to a lot of people: yes, I'm just talking about the analysis and a few examples to help drive the possible discussion with QA. I think I've talked a lot about these issues, and I think one or two examples would provide context
16:29:10 <catherineD> dwalleck: ++
16:29:13 <gema_> +1
16:29:16 <rockyg> ++
16:29:23 <luzC> ++
16:29:48 <eglute> #action markvoelker hogepodge eglute dwalleck schedule design session with QA team during the summit
16:29:57 <dwalleck> And if the decision is to do nothing, that's fine :)
16:30:22 <catherineD> Let's zero in on one or two examples and clearly articulate why DefCore likes and does not like the tests.
16:30:24 <eglute> i think so far everyone agrees that the analysis is great and want to see it done on all tests
16:30:34 <hogepodge> dwalleck: that report is very valuable, it's useful to see the sequence of events that happens in a test. I'm really happy about it
16:30:35 <catherineD> eglute: yes
16:30:40 <eglute> +1 catherineD suggestion too
16:31:11 <eglute> and it sounds like everyone would like a discussion with the QA team as well, correct?
16:31:29 <dwalleck> Thanks for the spirited discussion :) I have to duck out, but I'll follow up with an email in a bit
16:31:43 <eglute> dwalleck has links to scripts on the etherpad
16:31:48 <rockyg> eglute, ++
16:31:49 <markvoelker> thanks dwalleck
16:32:00 <eglute> now that we ran him off, time for next topic :)
16:32:00 <dwalleck> thanks folks!
16:32:07 <eglute> #topic scoring
16:32:32 <eglute> markvoelker, i will hand this off to you :)
16:32:43 <markvoelker> Patches have been posted for Neutron and for moving the existing advisory capabilities to required per discussion at the midcycle
16:32:48 <markvoelker> See etherpad for links
16:33:12 <markvoelker> On the Neutron side, after speaking with the PTL we actually found one test currently in advisory that needs to be dropped as it uses admin privs
16:33:12 <eglute> Subnet pools: #link https://review.openstack.org/#/c/296426/
16:33:20 <markvoelker> Patch for that is up and listed in the etherpad too
16:33:34 <eglute> #link https://review.openstack.org/#/c/295313/ which removes an admin test from advisory
16:33:50 <markvoelker> I'm actually not going to propose any new capabilities for Neutron this time...we did a pretty big addition last time, so we're fairly "caught up"
16:34:12 <markvoelker> The best candidate was subnet pools, but after investigating it's going to fall short on a couple of criteria.
16:34:36 <markvoelker> I've posted a patch to the scoring sheet with some of those findings so we'll have the info for next time...I think it's a capability we'll add down the road, just not quite yet.
16:35:27 <markvoelker> So with that: remember that we need to get all remaining scoring patches posted in the next few days!
16:35:36 <hogepodge> I'm going to be suggesting new capabilities for object storage/swift. I'll also test the swift plugin to see if it can be run from tempest. I don't think that can land this time around, it seems partially done
16:36:03 <hogepodge> plus new capabilities for cinder, some things we missed in the last round of scoring. no new information there.
16:36:36 <markvoelker> hogepodge: thanks.  Let us know when patches are up. =)
16:36:58 <markvoelker> I think dwalleck had to step out, but from the etherpad it looks like he's ID'd some possible additions for Nova
16:36:58 <catherineD> hogepodge: RefStack supports running with tempest plugins now ... it would be great if you use RefStack and let us know if there are issues
16:37:15 <markvoelker> gema: anything you need help with on Keystone?
16:37:27 <hogepodge> catherineD: +1
16:37:46 <catherineD> hogepodge: thx
16:38:25 <gema_> markvoelker: sorry, was commenting on your patch
16:38:28 <gema_> I have only one question
16:38:38 <gema_> can I add tests from keystone tempest plugin?
16:39:02 <gema_> or consider tests, rather
16:39:21 <markvoelker> gema_: RefStack should support running tests via tempest plugin now
16:39:28 <gema_> (I see scoring doesn't need fixes to the json just yet, right?)
16:39:44 <gema_> markvoelker: alright, will do then
16:39:52 <gema_> consider those capabilities as well :)
16:39:57 <markvoelker> gema_: Well, that depends...if you have a capability that you think scores high enough to be included, you should also add it to the json in the patch
16:40:12 <markvoelker> I didn't for the Neutron one because I don't think it'll make the cut
16:40:23 <gema_> markvoelker: what is high enough?
16:41:19 <gema_> (in your experience)
16:41:47 <markvoelker> gema_: We've generally used 74 as a rough cutoff
16:42:02 <markvoelker> Errr, >74 that is
16:42:13 <gema_> alright, will send the patch then soon, please review in depth as I have never done this before
16:42:21 <markvoelker> gema_: sure thing
16:42:24 <gema_> markvoelker: thanks!
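The cutoff markvoelker mentions refers to DefCore's weighted scoring of candidate capabilities. A toy illustration of the mechanic (the criterion names and weights below are invented for the sketch; only the ">74 means include" cutoff comes from the discussion):

```python
# Hypothetical illustration of weighted capability scoring.
# Real DefCore criteria and weights live in the scoring sheets, not here.
CUTOFF = 74

def score_capability(criteria_met, weights):
    """Sum the weights of the scoring criteria a capability satisfies."""
    return sum(weights[name] for name in criteria_met)

weights = {"widely_deployed": 30, "tools_use_it": 25,
           "stable": 25, "documented": 20}
met = {"widely_deployed", "tools_use_it", "stable"}

total = score_capability(met, weights)
print(total, "include" if total > CUTOFF else "defer")  # → 80 include
```

A capability scoring at or below the cutoff, like the subnet pools example above, gets its findings recorded in the sheet but is left out of the guideline JSON for the cycle.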
16:42:31 <markvoelker> Anyone else with scoring updates?
16:43:21 <markvoelker> Ok then.  Let's get those patches in folks. =)
16:43:47 <markvoelker> #topic Update refstack-client to latest Tempest
16:44:02 <markvoelker> catherineD: was this your topic?
16:44:08 <catherineD> yes
16:44:15 <markvoelker> The floor is yours madame. =)
16:44:37 <catherineD> thx.  Any recommendation of which SHA we should use?
16:45:10 <catherineD> or should I just take the latest ... QA has not published an official tag version yet
16:46:00 <markvoelker> catherineD: I haven't tested in a few weeks at least so I don't think I have a SHA recommendation at the moment. I usually use latest though, personally.
16:46:57 <catherineD> Ok .. once we identify a SHA I will check the test names against the JSON file ... I expect some changes in the test names
16:47:18 <catherineD> do we want to document which SHA is used in DefCore?
16:47:40 <rockyg> yeah, mtreinish already talked about a number of changes
16:47:42 <catherineD> Is there a place to hold the SHA name?
16:47:44 <markvoelker> catherineD: we actually don't have an official SHA to use per the interop site
16:48:40 <rockyg> markvoelker, being able to have a sha for a working set of tests is good, though
16:49:14 <rockyg> it gives folks with issues a known-good place to start debugging their setup
16:49:14 <markvoelker> rockyg: Sure.  Whenever I've noticed that the refstack default isn't working for some reason, I've submitted a patch to refstack to update its default
16:50:08 <markvoelker> rockyg: See https://review.openstack.org/#/c/203077/ for example
16:50:52 <markvoelker> I think what catherineD is asking is whether anyone knows of a reason to update that to something newer right now...which I don't. =)
16:51:22 <rockyg> thanks, markvoelker
16:51:28 <eglute> not me. catherineD i trust your decision here
16:52:06 <rockyg> yeah.  wait until scores and capabilities firm up.  we have time
16:52:08 <catherineD> Ok... I will update RefStack to the latest version of Tempest ... our current version is from September 2015 ... time to update
16:53:01 <markvoelker> catherineD: I think we have some folks running tests in the next couple of weeks, so I'll ask them to use latest and let me know if they run into any problems. Will relay to you if they find anything.
16:53:22 <catherineD> markvoelker: great ...
16:53:30 <gema_> catherineD: do you do testing around that?
16:53:37 <catherineD> gema_: yes
16:53:46 <gema_> catherineD: ++ thanks!
16:54:02 <markvoelker> Ok, anything else on this topic?
16:54:15 <catherineD> I have 3 environments with different OpenStack releases to test
16:54:30 <catherineD> nothing from me
16:54:46 <eglute> sounds like we can end a few minutes early
16:54:53 <gema_> \o/
16:55:02 <eglute> thanks everyone!!!
16:55:07 <eglute> #endmeeting