17:04:47 #startmeeting openstack-qa
17:04:48 Meeting started Thu Feb 7 17:04:47 2013 UTC. The chair is sdague. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:04:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:04:51 The meeting name has been set to 'openstack_qa'
17:04:59 sweet, off we go
17:05:13 ok lets start with reviews
17:05:22 #topic outstanding reviews
17:05:52 https://review.openstack.org/#/c/20746/
17:05:57 does anyone have particular reviews they are struggling with?
17:06:04 #link https://review.openstack.org/#/c/20746/
17:06:35 afazekas: ok, where do we stand on that?
17:06:41 Now we are skipping tests without a technical reason
17:07:08 "SKIP: Need multiple users for this test." "SKIP: FlavorExtraData extension not enabled." appears in the log message before this patch
17:07:33 ok, so the issue being that our new skip decorator doesn't tell us why?
17:08:29 The generic_setup_package is evaluated after the skip decision is made
17:08:42 ok
17:09:33 why is it not fixed as soon as possible?
17:09:46 ok, lets get mordred and lifeless on that review to see if there is a testrepository fix to make this right
17:10:08 we'll also need jaypipes to drop his -2
17:10:12 we probably have other solutions, but it can work now
17:10:21 sdague: aroo?
17:10:36 mordred: if you could have a look at https://review.openstack.org/#/c/20746/
17:10:40 (looking)
17:10:43 jay wanted your input
17:11:04 I seem to be confused by https://review.openstack.org/#/c/20681/ and hope another core reviewer can approve it or respond.
17:11:32 I will review that
17:11:48 davidkranz: I'll need to dive into that a bit deeper, haven't really looked at that one yet
17:11:55 #link https://review.openstack.org/#/c/20681/
17:12:07 https://review.openstack.org/#/c/20901/ - where can we place scenario-based tests that are not part of the normal smoke / gating tests in tempest?
I had submitted a blueprint for adding Nova VM lifecycle tests in tempest (a modified version of tests/compute/servers/test_server_basic_ops.py). Reviewers have commented not to add it as part of the normal gating tests.
17:12:11 sdague: Thx
17:12:21 #link https://review.openstack.org/#/c/20901/
17:12:36 Nithya: I think we need an attribute for non-gating tests.
17:12:38 Nithya: those should live in another directory, like the stress tests
17:12:59 I will do the changes and submit a patch. Thank you
17:13:27 Nithya: we should create a new folder for them
17:13:29 I think that we need to have a Havana design session on exactly how to handle tests and attributes, as lifeless has added some attr support to testrepository now
17:13:38 sdague: ++, but we need some job to run these tests. Nightly?
17:14:15 davidkranz: I'm not convinced the lifecycle tests are something we want to run
17:14:30 they are way too stateful
17:14:47 sdague: whoever wants to run them, can run them
17:14:50 I'm ok with them being available, but I think in a real environment they are going to break a lot
17:14:54 sdague: If we don't want to run them then why are they in tempest?
17:15:02 anyway it is not a gated test
17:15:06 davidkranz: we have stress/
17:15:11 those we don't run all the time
17:15:17 or in any automated way
17:15:23 davidkranz: for home usage
17:15:27 sdague: Because we can't right now. But that was the goal.
17:15:39 in general I think it would be good to have a repo for non-gating tests
17:15:56 yes. I would like to eventually run stress tests in some sort of gating manner
17:16:06 for instance tests which are targeting multinode environments - e.g. scheduler tests
17:16:14 a /nongating folder for these tests, without any sub folders
17:16:15 or race condition tests
17:16:17 my question would be: what are the characteristics of the non-gating tests you are talking about that would make them non-gating?
17:16:32 mordred: too many false negatives
17:16:35 multinode is something I intend to get working in the gate
17:16:43 IMO, nongating == "slow or flaky"
17:16:51 ^^^ or
17:16:57 yeh, basically
17:17:03 ok. I'm TOTALLY fine with "slow or flaky" as the definition
17:17:13 we had to do a lot of work to make current tempest extremely deterministic
17:17:19 I would like to create tests which have a higher chance of causing flaky issues, with high performance, but still in python
17:17:34 so things which we think aren't deterministic, we need to keep out of the gate
17:17:39 ++
17:17:43 actually running those tests in a non-gating job would highlight whether those tests are slow and/or flaky
17:17:45 afazekas: yes, agreed
17:18:04 afrittoli: fair
17:18:08 sdague: We need to have a way to "incubate" tests in non-gating to be moved to gating when stable.
17:18:14 ++
17:18:20 ++
17:18:21 however, I'm going to table the philosophy for the Havana summit if we could
17:18:32 sdague: Sure.
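The "attribute for non-gating tests" idea discussed above can be sketched roughly as follows. This is a minimal illustration only; the decorator name, attribute names, and test name are assumptions for the example, not tempest's actual implementation.

```python
def attr(**kwargs):
    """Attach classification attributes to a test function so a runner
    can include or exclude it (e.g. gating vs. non-gating)."""
    def decorator(func):
        for name, value in kwargs.items():
            setattr(func, name, value)
        return func
    return decorator

# Hypothetical usage: mark a stateful lifecycle test as non-gating.
@attr(type='nongating', reason='too stateful for the gate')
def test_server_lifecycle():
    pass
```

A runner (or a nose attrib-style plugin) could then filter the collected tests on these attributes to build a gating suite versus an "incubating" nightly suite.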
17:18:41 I think we need a session on this, but there is a lot to discuss, especially details wise, so lets do it there :)
17:18:45 I think they can live in the same git repository anyway
17:18:52 afazekas: yes, agreed
17:19:17 sdague: ++
17:19:37 ok, so I wanted to chat about this - https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/speed-up-tempest,n,z
17:19:42 #link https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/speed-up-tempest,n,z
17:20:06 afazekas brings up the good point that we should make sure we don't lose the smoke functionality as that comes in before testr is ready
17:20:13 even though we aren't running it
17:20:46 so I'm going to propose to the list (as soon as I get time) that chris redo those as a series, and wrap the decorator so that it does the right thing for nose and testr
17:20:57 so it's all linked together
17:21:04 afazekas: does that sound reasonable to you?
17:21:23 ok
17:22:08 ok cool
17:22:33 afazekas: so if you can rebase this https://review.openstack.org/#/c/20091/ - that can go in
17:22:39 looks like jenkins just failed to merge it
17:23:15 otherwise people should get Jenkins to pass, I'm not looking at any Jenkins fails. :)
17:23:25 and I think those are most of the outstanding reviews
17:23:34 sdague: I need to violate the T302 rule, since otherwise I get a circular import issue
17:23:47 afazekas: do you have an example of where that happened?
17:25:01 I'll trace it, but it is because of the __init__ and clients cross-references. (some parts were functions originally in another location, but I had to move them in a review)
17:25:43 the big T302 patch merges the clients.py into the __init__
17:25:55 ok, lets figure that one out, would be good to get to the bottom of it
17:26:07 ok, anything else on reviews?
17:26:14 going once...
17:26:22 going twice...
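The skip-reason problem raised earlier (skips no longer saying why) and the "wrap the decorator" proposal could look roughly like the sketch below, assuming unittest-style tests. The decorator name and test are illustrative assumptions, not tempest's actual code; the point is only that raising SkipTest with the message keeps the reason visible in the runner's output.

```python
import functools
import unittest


def skip_because(reason):
    """Skip a test while keeping the human-readable reason in the result
    output, so logs show e.g. "SKIP: Need multiple users for this test."."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            raise unittest.SkipTest(reason)
        return wrapper
    return decorator


# Hypothetical test that needs a second user account configured.
@skip_because("Need multiple users for this test.")
def test_two_user_quota():
    pass
```

A wrapper like this can also be the single place that adapts between nose's and testr's skip mechanisms, so tests only ever use one decorator.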
17:26:34 https://review.openstack.org/#/c/20681/
17:26:48 I'd like to talk about that review
17:27:02 ok, davidkranz did bring it up before, but go for it
17:27:16 Currently the Keystone API does not return a token for /v3/token
17:27:57 so currently the submitted review is to create a new v3restclient
17:28:03 donald_hp: right
17:28:13 no changes for token in v3
17:28:18 it is the same as v2
17:28:19 is /v3/token going to be implemented?
17:29:09 since no tests exist for token, we are trying to submit tests. we need to check the version. but nothing changed for v3
17:29:49 ravikumar_hp: is devstack bringing up v3 in the gate? (I thought I saw dean doing something with that recently)
17:30:07 donaldngo_hp: ok, will take some time to look at it
17:30:12 so for that review all tests depending on the /v3/ keystone api will need to use the new restclient api
17:30:23 sdague: not sure
17:30:29 sorry, not api but implementation, like /v3/domain
17:30:31 also, for people asking for reviews... if you could spend some time reviewing other patches in the queue, those of us with +2 would have more to go on
17:31:13 ok, lets move on from reviews
17:31:21 #topic coverage and additional tests
17:31:43 mtreinish, how about you talk about the coverage analysis and some of the additional test lists
17:31:55 sdague: ok
17:32:06 #link https://etherpad.openstack.org/MissingTempestTests
17:32:17 #link https://etherpad.openstack.org/coverage-analysis
17:32:48 so using the results from the periodic coverage runs I've done some analysis of gaps in the tempest tests
17:33:00 then compiled a list of proposed tests to fill the gaps
17:33:24 the list is pretty big so far, but it is still incomplete
17:33:41 timello got the first patch merged for this effort
17:34:07 so if people want to tackle one just mark your name next to a test name on the list
17:34:19 awesome
17:34:29 nice
17:34:36 what does "test_multiple_create" do?
17:35:02 server create allows you to create more than one guest at a time
17:35:05 donaldngo_hp: multiple create is an extension that does multiple server creates in one action
17:35:51 just to understand, this is collecting coverage from the API servers, but not from compute yet? I've seen a blueprint on the nova side to enable a backport in compute for coverage reports... is this in place and used already?
17:36:19 afrittoli: yeah, this is just based on the api server.
17:36:20 for instance coverage for the virt driver is zero, so I assume that's not collected
17:36:30 mtreinish: ok thanks
17:36:44 the coverage extension supports using backdoor ports for other services but it's not turned on in the periodic run yet
17:37:02 are we continuing to accept negative tests?
17:37:17 most of the needed tempest tests on the etherpad are for negative tests
17:37:34 donaldngo_hp: yes, negative testing is important
17:37:41 donaldngo_hp: that is where the biggest gaps were
17:37:57 sdague: some time ago a hold was put on negative tests
17:37:59 we exposed some interesting nova bugs because of them
17:38:04 in the past we put a hold on negative tests
17:38:10 and there was talk about fuzz testing
17:38:10 afrittoli: http://wiki.openstack.org/Nova/CoverageExtension gives some info on setting up the coverage extension
17:38:12 we have added a lot of negative tests
17:38:36 at this point no one's stepped up to do fuzz testing, so I'm all for negative tests
17:38:54 especially as they tend to be pretty cheap on execution
17:39:03 sdague: ok
17:39:39 negative tests are fast if we do not need a booted machine..
17:40:40 #topic open discussion
17:40:54 ok, other topics of note, or anything else people want to chat about?
17:41:00 I'd like to bring up the glance client discussion
17:41:01 python -c "import this"
17:41:13 "Flat is better than nested."
17:41:24 #topic glance client
17:41:29 our code structure is more like java style than python now
17:41:39 so there is a ML thread started on this at: http://lists.openstack.org/pipermail/openstack-qa/2013-February/000199.html
17:42:06 the big open question is whether using the http lib from python-glanceclient is different enough for writing a tempest glance client
17:42:32 or is it too similar to testing using python-glanceclient (like we do currently)
17:42:39 what does the http lib give us?
17:42:47 is that the chunking implementation?
17:43:19 sdague: Back. The hold on negative tests was for the sort that were written the same as positive tests.
17:43:22 sdague: that is in it. which is the main motivation for wanting to use it
17:43:51 if we can just take the chunking implementation, I'm ok with that
17:43:51 sdague: https://github.com/openstack/python-glanceclient/blob/master/glanceclient/common/http.py
17:43:55 mtreinish: Do you get the merged response and the status code?
17:43:58 sdague: The idea was to have negative tests be expressed more concisely and declaratively.
17:44:13 sdague: Like in fuzz testing.
17:44:28 davidkranz: ok, so that's probably a design summit session as well
17:44:40 the reality is, no one stepped up to do fuzz testing in grizzly
17:44:47 afazekas: I don't think so, we'd have to wrap around the response to convert it
17:44:48 sdague: daryl said they had something almost ready to submit.
17:45:05 so I'd say lets move forward with actually adding tests for release
17:45:11 sdague: We should ping him about that.
17:45:24 sdague: i have a question on test submission for incubated projects like load balancer as a service or database as a service
17:45:38 Do we want to test whether the chunked encoding is well formatted, or just that it is accepted by the lib and working for some reason?
17:45:47 can we submit? or wait until those become core projects?
17:46:01 ravikumar_hp: those at least need to end up in a separate directory
17:46:16 sdague: that sounds good
17:46:38 afazekas: that's a good question
17:46:45 do we have a way to maintain test suites?
17:46:48 afazekas: it was more for functional testing of the glance api. I wasn't planning on verifying the chunk encoding formatting.
17:46:54 mtreinish: is there another upstream python lib that does the chunking?
17:47:11 afrittoli: I'm not sure I understand the question
17:47:13 if the answer is no, we do not need to reinvent the wheel.
17:47:32 afazekas: I think we're mostly concerned with API testing at this point
17:47:54 sdague: yeah httplib works fine with chunking. (That's what the glanceclient lib uses)
17:47:58 so lets solve the first issue, that we don't test the glance api at all
17:48:08 because we always use python-glanceclient
17:48:16 afrittoli: attributes are a way to maintain test suites
17:48:46 then if we decide later we don't want to trust the chunking implementation, we can tackle that
17:48:59 attributes are going away with testtools
17:49:00 sdague: ok, sounds reasonable
17:49:18 afrittoli: they will still be there, but different
17:49:30 sdague: ok that's great
17:49:32 testtools: not fully, they implemented something..
17:49:53 afrittoli: not fully, they implemented something..
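The chunking the glance discussion keeps returning to is, at its core, a fixed-size read loop over the image body, so large images are streamed rather than held in memory. A minimal sketch under that assumption; the function name and chunk size are illustrative, not glanceclient's actual code:

```python
import io

CHUNK_SIZE = 64 * 1024  # illustrative default; real clients tune this


def iter_chunks(body, chunk_size=CHUNK_SIZE):
    """Yield fixed-size pieces of a file-like body -- the core read loop
    behind a chunked transfer-encoded upload."""
    while True:
        piece = body.read(chunk_size)
        if not piece:
            return
        yield piece


# Example: a 150 KiB payload splits into 64 + 64 + 22 KiB pieces.
pieces = list(iter_chunks(io.BytesIO(b"x" * (150 * 1024))))
```

Whether a tempest client should verify the wire formatting of each chunk, or only that uploads built this way are accepted, is exactly the open question in the discussion above.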
17:49:59 a big part of it is lifeless wants to see the use cases a little more to enhance the implementation
17:50:07 that will also be a portland conversation
17:50:26 it's also something we could start to hash out on the mailing list
17:50:33 I think it would be good to have a list of tests (or folders) which are part of gating
17:50:56 afrittoli: right, and right now that's tempest/tempest
17:51:08 as of last week we are gating on the full set
17:51:16 afrittoli: just grep any jenkins log
17:52:00 ok, so as cyeoh, ivan, and lifeless are on the other side of the planet, and they are doing a lot of this, lets take this to the mailing list, as that's the right place to hash it out
17:52:09 this meeting is in the middle of the night for all of them
17:52:24 ok, mtreinish you have enough to move forward on glanceclient?
17:52:37 sdague: yeah I should
17:52:41 cool
17:52:48 #topic open discussion
17:52:55 ok, anything else from folks?
17:53:10 about multinode tests
17:53:36 ?
17:53:42 do we have plans to split tests across multiple nodes, or to run them against a multinode environment?
17:54:13 ++ multinode gating test
17:54:23 that could be a way of speeding up on one side
17:54:27 We are using CPU time even before tempest starts
17:55:24 and it would allow us to run tests which make sense only in multinode environments
17:55:24 I think on multinode someone just needs to start working on the approach with the CI team
17:55:26 One tempest instance could load a big environment, if it could run multithreaded
17:55:42 afrittoli: I encourage folks who are interested to work on that
17:56:06 I think getting us to full gate in grizzly was a huge step forward
17:56:20 and that would be another great step forward
17:56:30 ok, we're about at the end of our time
17:56:34 anything else from folks?
17:56:40 sdague: yes, going full gate was a great step forward indeed
17:57:13 python way of code structuring
17:57:29 last question about pep8: there are a lot of strict requirements, such as alphabetical order of imports
17:57:31 afazekas: ok, go ahead
17:57:48 afazekas: sorry I got in the middle
17:57:58 afrittoli: no worries
17:58:01 python -c "import this" tells us the basic rules
17:58:24 "Flat is better than nested." means we should use fewer folders
17:58:36 we should not put a single class in a single file
17:58:52 afazekas: I don't really read it that way
17:59:01 I read it as complex nesting in a file
17:59:03 We should not have duplicated words in an absolute path
17:59:34 the directory and file structure I think is the least of my concerns right now :)
17:59:59 yes, it has multiple explanations :), but generally we have too long absolute paths with repeated words
18:00:18 afazekas: ok, how about we take it to the mailing list, as we're kind of out of time
18:00:36 if we start applying the T302 style guide it will be more obvious
18:00:40 ok
18:00:41 afazekas: ok
18:00:47 ok, I'm going to call it a meeting
18:00:58 follow-on discussions, jump on #openstack-qa
18:01:00 or the mailing list
18:01:04 thanks everyone
18:01:07 #endmeeting