17:02:31 #startmeeting qa
17:02:32 Meeting started Thu Feb 21 17:02:31 2013 UTC. The chair is davidkranz. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:35 The meeting name has been set to 'qa'
17:03:10 First, remember that summit.openstack.org is open for proposals.
17:03:16 hi, i'm new.
17:03:24 fnaval: Welcome!
17:03:32 davidkranz: thanks
17:03:35 I'll take the chance, I'm also new
17:03:44 There are currently two proposals in the QA track.
17:04:00 I encourage everyone to submit proposals for topics of interest.
17:04:04 davidkranz: i am going to add 2 proposals
17:04:17 Ravikumar_hp: Excellent
17:04:21 davidkranz: yeh, I've got a few, but it will take me a week to sort it out
17:04:30 sam and I have a few as well
17:04:34 still burned out from the rush of nova reviews
17:04:39 sdague: We still have some time. I just wanted to remind folks.
17:04:45 so my brain's a little broken right now :)
17:05:09 #topic Managing Reviews
17:05:34 Looking at the list, we seem to be doing better on timely reviews.
17:06:20 There is currently only one review without a -1 that is older than yesterday.
17:06:33 https://review.openstack.org/#/c/22112/ <- this ?
17:06:51 afazekas: Yes.
17:06:59 davidkranz: sometimes new reviewers are added very late in a review and it goes through some more patch sets
17:07:47 Ravikumar_hp: Sure. I mentioned it because last week we said we would evaluate after two weeks whether we needed some kind of review days for core reviewers.
17:08:06 suggest 3 reviewers from start to end
17:08:25 Ravikumar_hp: I thought two core reviewers was sufficient.
17:08:39 yeh, 2 cores is fine
17:08:41 davidkranz: ++
17:09:05 OK. I have a few topics but does anyone else have a topic to bring up first?
17:09:07 though I highly encourage other folks to review that aren't core. I definitely take that input into account
17:09:28 davidkranz: I want to report on the assignment the team gave me last week. The Jenkins problem with Quantum: https://jenkins.openstack.org/view/Tempest/job/gate-tempest-devstack-vm-quantum-full/
17:09:32 sdague: Yes, I agree. But the core reviewers are the ones on the hook.
17:09:53 mlavalle: Please.
17:10:07 davidkranz: the problem is test_network.py. It's a year-old test that uses the minimal network REST client provided by Tempest. The client is for version 1.1 of the Quantum API, which is not available in DevStack anymore.
17:10:30 davidkranz: I can patch the client to use Quantum API version 2. But before doing that, I want the team's feedback. For a Jenkins gate test, shouldn't we use the Quantum Python client instead of the minimal REST client in Tempest?
17:10:47 mlavalle: no
17:11:02 because that hides bugs that the clients paper over
17:11:13 tempest should always have its own REST client implementations
17:11:15 mlavalle: you should use the rest client. we are testing the api not the project client implementations
17:11:15 sdague: ++
17:11:37 sdague: I agree, but the tempest rest client could be a copy of the "real" one as was done pretty much with glance.
17:11:59 davidkranz: we only copied one piece for chunked encoding upload
17:12:02 davidkranz: the glance one is only there for one thing that httplib2 couldn't do
17:12:03 davidkranz: ok in that case I will upgrade the rest client to use version 2 of the Quantum API
17:12:05 Are we going to rewrite all tests in tempest/tests which use the libraries?
17:12:07 because it's kind of complicated
17:12:33 davidkranz: It really should be more than a copy. Logging, serialization, metrics. There's a lot of data we can get from clients
17:12:33 My point was that tempest should not need to reinvent the client code.
17:12:35 afazekas: what other tests use client libraries, I thought it was only glance
17:12:42 smoke
17:12:50 It just needs to "own" its own client code.
17:12:52 whitebox
17:13:16 I think that core tempest should not use the clients
17:13:19 davidkranz: I should have a patch ready for the Quantum rest client early next week
17:13:25 I'm ok with other directories of tests that do
17:13:33 mlavalle: OK, great.
17:13:49 we're adding other things like cli tests, so we have precedent
17:14:59 sdague: what does cli test mean here?
17:15:04 dwalleck: I don't disagree. But the client is a really good starting point in most cases.
17:15:21 Anyway, that is up to the contributor of the tempest client.
17:15:48 sdague: someone should announce it on the ML, before I start to eliminate these client library anomalies ..
17:15:51 hi
17:15:53 chunwang: Testing the cli for the real clients in tempest.
17:16:01 davidkranz: I partially agree. However, it would be better to have consistency across projects
17:16:11 afazekas: sure
17:16:31 dwalleck: Yes, reducing it to a previously unsolved problem :)
17:16:46 chunwang: command line, jog0 has been working to add tests that run the openstack command line tools to make sure they don't blow up
17:16:58 which they do under some surprising circumstances
17:17:28 sdague: That reminds me of the negative testing with fuzzing issue?
17:17:41 Anyone have any status on that?
17:17:57 davidkranz: as far as I know no one's worked on it
17:18:14 matt tesauro is leading that up from my group
17:18:36 dwalleck: Any status worth mentioning?
17:18:37 I believe he has a working prototype, bouncing it around a bit before pushing it in
17:19:03 dwalleck: Cool. I'd love to get a look at it.
17:19:04 He'll actually be online for the sec meeting next. I can ping him to see what his schedule looks like
17:19:11 it sort of raises another topic. I was hoping in the havana cycle we could lean on launchpad a bit more and actually set h-1, h-2, h-3 goals for blueprints we wanted to land
17:19:30 sdague: +1
17:19:34 dwalleck: would be nice to see that in progress to understand how it fits with the rest of things
17:19:36 sdague: do you mean tests for something like nova-client? Does it mean the tempest script will call the nova command line directly and then validate whether the result is as expected?
17:19:41 incremental is good
17:20:05 chunwang: yes, it's a separate directory, so it won't be in the main tempest tests, but yes
17:20:07 sdague: ++
17:20:22 sdague: ++
17:20:41 as for the discussion on the regular python client libraries from before... we have had that discussion probably 15 times over the last couple years.
17:20:43 * sdague tries to figure out what the value of sdague is now
17:20:49 people want it both ways.
17:21:02 sdague: 3?
17:21:04 jaypipes: well cli tests kind of give us both ways :)
17:21:15 sdague: got it.
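A minimal sketch of the kind of CLI check discussed above (hypothetical; jog0's actual tests live in their own tempest directory and may look quite different — the command, environment assumptions, and test names here are only illustrative):

import subprocess
import unittest


class NovaCLISmokeTest(unittest.TestCase):
    """Run an openstack command line tool and make sure it doesn't blow up."""

    def test_nova_list_does_not_explode(self):
        # Assumes credentials are already exported in the environment
        # (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL).
        proc = subprocess.Popen(['nova', 'list'],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                universal_newlines=True)
        out, err = proc.communicate()
        # A non-zero exit code or a traceback on stderr counts as "blowing up".
        self.assertEqual(0, proc.returncode, err)
        self.assertNotIn('Traceback', err)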
17:21:37 sdague: ++
17:21:45 sdague: along that note are we going to branch a grizzly version of tempest?
17:22:03 mtreinish: yes, we should at milestone proposed
17:22:05 mtreinish: We have to
17:22:09 mtreinish: another thing we always say we're going to do, then do it, and nobody maintains it.
17:22:12 so there is a version that works on stable
17:22:27 jaypipes: well it's more important that it doesn't change
17:22:33 it's still used on stable gate jobs
17:22:37 ok, because I didn't think we did for folsom
17:22:51 Can we backport tests to Folsom too?
17:22:51 Right. Its purpose is to prevent regressions on stable releases.
17:22:56 oh, maybe not
17:23:01 but the gate is only smoke
17:23:07 for < grizzly
17:23:15 so the chance of break is smaller
17:23:33 sdague: For grizzly we should make it behave just as it did before becoming a stable branch.
17:23:58 sdague: Gating on all stable/grizzly projects.
17:24:01 nm, it was just hidden in the github ui for folsom
17:24:34 davidkranz: right, so I assume we cut stable at rc1 for the rest of the projects?
17:24:44 actually, I don't know what previous policy was there
17:25:22 sdague: That would be reasonable.
17:26:03 IMO, we should not spend time backporting tempest changes to stable branches without a compelling reason.
17:26:41 I would like to bring up the topic of how we are going to test both v2 and v3 of keystone.
17:27:13 Though we will have a similar issue with other projects that have more than one version, perhaps glance.
17:27:36 Has anyone thought about that?
17:27:40 davidkranz: for right now all the glance tests are v1
17:28:00 mtreinish: That is not a good situation if v2 is part of grizzly.
17:28:13 davidkranz: it would be good to test both versions I think
17:28:31 nova's going to have the same desire in havana as we cut a v3
17:28:42 which is going to change some of the tempest tests as we normalize return codes
17:28:44 davidkranz: yeah it would be good to test both versions. I'll start working on that.
17:29:04 sdague: Right. But it is tough to do with this being specified in tempest.conf unless we run a separate gate for each version, which doesn't really scale.
17:29:09 note: our glance endpoint definition does not contain a version..
17:29:10 davidkranz: that's a good topic. in general how are we going to test multiple configurations on the nova side? e.g. using autoassigned IPs or not, using the libvirt driver or xen
17:29:52 I believe this is actually a big architectural issue (and performance one) that we have not addressed.
17:29:57 davidkranz: for the gate it doesn't scale, but perhaps we need other jobs to verify different configs
17:29:59 davidkranz: ok, well lets go experiment a bit and figure out if we have a compelling way to address it
17:30:06 davidkranz: can we consult the Keystone PTL
17:30:21 Will the tempest developer team still recommend users use latest master to test different versions after the grizzly branch?
17:30:27 we are adding V3 tests as the functionality is getting complete
17:30:30 Ravikumar_hp: Sure, but I think this is really a tempest issue.
17:30:52 davidkranz: the keystone endpoint should support BOTH v2 and v3 APIs simultaneously.
17:31:14 jaypipes: agreed
17:31:23 yes. it does
17:32:01 The question is do we run all the tests in both versions, plus new tests for the new version?
17:32:06 davidkranz: so IMHO, we should just have the tempest keystone identity client query the endpoint base URI and get the v2 and v3 API root endpoints from the returned 301
17:32:25 davidkranz: we should run whatever the components support
17:32:28 jaypipes: Sounds good.
17:32:36 eventually they deprecate the old apis, then we can pull them out
17:32:36 davidkranz: we run all tests for whatever versions are returned in the 301
17:32:41 as I mentioned on the ML, the services (clients) should not decide the endpoint on their own, they should get the base url from the constructor
17:33:05 afazekas: they should get the base URI from the uri value in the config file :)
17:33:28 jaypipes: one for v2 and one for v3 ?
17:33:41 jaypipes: So we just need to change the code that processes the admin_url and remove the /2 in the config?
17:34:01 afazekas: no, the root URI is auth.example.com:5000/
17:34:16 jaypipes: Are all OpenStack APIs going to behave the same way with multiple versions?
17:34:18 afazekas: have the rest_client subclasses tack on the v2.0/ and v3/
17:34:19 yes, and it gets back service endpoint
17:34:28 davidkranz: everything other than swift should.
17:34:35 jaypipes: :)
17:35:21 So we just need to find out from the keystone devs when this will be ready to work. Or does it already?
17:35:22 davidkranz: yeh, it's supposed to in the spec. nova does it in theory, though we only have one API version right now
17:35:36 davidkranz: should work already.
17:35:45 jaypipes: +1, v3 auth merged yesterday
17:35:45 davidkranz: glance, too.
17:35:54 jaypipes: now every client acquires a token and parses the token for the endpoint, do we really need to do it in the "clients" ?
17:36:12 afazekas: no, we should cache it.
17:36:18 So now we just need to know who is planning to make this change.
17:37:14 We should cache it with a global cache used by all rest clients to hide design glitches ?
17:37:23 afazekas: but I will say... that the more we write code in these tempest rest clients, the more similar they end up looking like the darn upstream python client libs, so other than fuzz testing, I'm really beginning to question the need for them, frankly.
17:37:48 afazekas: especially with certain folks' push to make them object-oriented/resource-exposing.
17:38:07 afazekas: which essentially will just make them identical to the upstream python client libs, for good or bad.
17:39:12 jaypipes: If we could eavesdrop on the traffic done by the upstream client, we would not need to implement our own clients in most cases
17:39:29 afazekas: we can easily eavesdrop.
17:39:48 afazekas: monkeypatch the base request call with a simple decorator.
17:40:44 jaypipes: but in this case we need to modify the code, which is supposed to be the code being analyzed
17:41:08 afazekas: I'm not following you, could you elaborate what you mean by that?
17:42:06 If we use the client libraries, we should try to use them unmodified, and still be able to check the traffic at the same time
17:43:12 afazekas: I'm talking about adding a simple decorator on the base client class' request() method that listens and records (with fixtures.Fixture.addDetail()) the incoming and outgoing HTTP request calls
17:43:38 in other words...
17:43:47 import novaclient.v1_1.client as c
17:43:52 from mock import patch
17:43:57 after module load, you would hook them..
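A minimal sketch of the listener idea being described here, assuming python-novaclient exposes its base HTTP call as novaclient.client.HTTPClient.request(url, method, **kwargs) returning (resp, body); record_requests and recorded_calls are hypothetical names, and a real version would record into fixtures details as mentioned above rather than a module-level list:

import functools

from novaclient import client as nova_client

# Hypothetical place to collect (method, url, status, body) tuples.
recorded_calls = []


def record_requests(func):
    """Wrap a client request() method so every HTTP call is recorded
    without changing what the client actually does."""
    @functools.wraps(func)
    def wrapper(self, url, method, **kwargs):
        resp, body = func(self, url, method, **kwargs)
        recorded_calls.append((method, url, getattr(resp, 'status', None), body))
        return resp, body
    return wrapper


# After module load, hook the base request call.
nova_client.HTTPClient.request = record_requests(nova_client.HTTPClient.request)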
17:44:02 it is good
17:44:07 patch(client.request, some_listener)
17:44:18 that's all...
17:44:32 nothing more than a simple listener/recorder, nothing changing the way the client worked.
17:44:36 At first it sounds good..
17:46:07 We can discuss the details later
17:47:28 So is there a proposal to change tempest policy in favor of "real" clients?
17:47:48 It needs a detailed plan before implementing anything
17:48:11 afazekas: I meant a proposal in concept.
17:48:20 afazekas: Or policy.
17:48:21 Maybe it will not fly, because of some unforeseen detail
17:49:07 afazekas: In order to evaluate that we would need a real requirement list for the client to determine if real clients could meet it.
17:49:31 davidkranz: let's consider it just an idea until we have a more detailed plan
17:49:48 I'm not sure the list of objections to real clients was ever explicit.
17:49:50 davidkranz: I think it's a good discussion for the summit.
17:49:58 jaypipes: +1
17:50:18 jaypipes: Can you or afazekas submit one?
17:50:24 maybe they do not let us do really bad things, and some cases are not testable
17:50:46 like violating pre-conditions
17:50:56 davidkranz: the more we add to the tempest rest_client(s), the more they look virtually identical to the upstream libs. and until fuzz testing is handled in an automated/grammar-based way, the only way we can do negative testing is to use the tempest rest client and not the upstream libs.
17:50:57 afazekas: We have already accepted negative tests with fuzzing.
17:51:31 jaypipes: I understand.
17:51:36 davidha: exactly, so IMHO, we can't get rid of the tempest rest client until we have a grammar-based tool for negative testing
17:51:46 do we have a good fuzzer anywhere ?
17:52:08 afazekas: Daryl is going to report on that. See above in this meeting.
17:52:40 ok
17:52:55 jaypipes: Agreed. I was thinking about new tests and new projects coming into integration.
17:53:10 jaypipes: Whether we make them write their own client.
17:53:23 jaypipes: tempest client, that is.
17:53:47 davidkranz: yeah... good point. not sure... I'd lean on the side of saying no.
17:54:13 jaypipes: :) At the beginning of this meeting we told mlavalle "yes" I think.
17:55:03 Obviously this needs more discussion.
17:55:14 Any other issues before we close?
17:55:43 does anybody know about any swift destruction tests?
17:56:06 simulating disk failures and such
17:56:13 mkollaro: I think all we know about is what is in Tempest now.
17:56:21 :/
17:57:13 https://review.openstack.org/#/c/22112/ <- please review it :)
17:58:44 OK, ready to close...
17:59:10 Thanks all.
17:59:20 #endmeeting