17:00:52 #startmeeting qa
17:00:53 Meeting started Thu Aug 8 17:00:52 2013 UTC and is due to finish in 60 minutes. The chair is sdague. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:56 The meeting name has been set to 'qa'
17:01:03 ok, who's around for the QA meeting?
17:01:06 hi
17:01:10 Hi!
17:01:17 I am here
17:01:35 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_August_8_2013
17:01:41 ok, agenda for today
17:01:46 hi
17:01:50 first up...
17:01:50 here
17:02:00 #topic Volunteers to run meeting while sdague out (sdague)
17:02:12 sdague: When?
17:02:12 ok, I'm going to miss the next 3 meetings due to vacation
17:02:30 so basically for the rest of August
17:02:33 sdague: I'm out next week.
17:02:40 sdague: I can take care of that I guess
17:02:42 do we have volunteers for the various weeks?
17:02:59 mtreinish: ok, you volunteer for running the next 3 weeks?
17:03:01 sdague: I can do the week after next
17:03:09 or you and dkranz want to take turns?
17:03:18 sdague: whatever
17:03:33 mtreinish: Can you do next week?
17:03:40 dkranz: yeah I'll be here
17:03:47 mtreinish: I know I can do the third week.
17:04:00 mtreinish: Not sure about the second.
17:04:09 dkranz: I can just handle all 3 weeks, it's not a problem for me
17:04:11 ok
17:04:27 #info mtreinish to run the QA meeting for the rest of August
17:04:42 ok, I think that's easiest, one person to drive through the whole time
17:04:55 I think so too ;)
17:04:59 #topic Blueprints (sdague)
17:05:11 ok, let's hit the big ones first, then ask for other updates
17:05:25 #topic testr progress (mtreinish)
17:05:31 mtreinish: you first
17:05:52 ok, so I'm still moving along with fixing races
17:06:07 last Friday a nova change got merged that completely destroyed az and aggregates
17:06:20 (when running in parallel)
17:06:32 I've got a fix to nova for that and added extra locking to get around it
17:06:49 other than that the only blocking race is the security group one in scenarios
17:06:51 cool, how's the state of the nova patch?
17:07:07 it will be very interesting to see what's hiding behind these races
17:07:23 sdague: it's got a -1 from cyeoh about v3 tests but I don't think there is any need for them
17:07:26 mtreinish: which scenario?
17:07:38 mtreinish: you have the review?
17:07:43 afazekas: all of them, there is an inherent race between the scenario tests and security groups
17:07:46 Do the security group deletions happen after the server is reported as deleted?
17:07:59 they all try to add the same rules to the same default sec group
17:08:05 tenant isolation will fix it
17:08:10 I've just got to work through it
17:08:19 sdague: https://review.openstack.org/#/c/40424/
17:08:29 mtreinish: Are we running each test in parallel, or each class?
17:08:39 Do we have a reason not to use a new security group in the scenario tests?
17:08:39 dkranz: this is class level parallel
17:08:40 it's each class (which for scenario is each test)
17:08:52 mtreinish: OK, good.
17:09:10 afazekas: that's essentially what I'm doing by adding tenant isolation to the scenario tests
17:09:20 each tenant has its own default sec group
17:09:39 yes
17:09:55 cool, good stuff.
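
A minimal sketch of the tenant-isolation idea discussed above: give each scenario test class its own throwaway tenant, so the default security group it modifies is never shared with another parallel worker. The helper here is a stand-in, not tempest's actual isolated-credentials API.

import unittest
import uuid


def create_isolated_tenant(prefix):
    # Stand-in for the real helper, which would create a tenant and user
    # via Keystone and return usable credentials.
    return "%s-%s" % (prefix, uuid.uuid4().hex[:8])


class ScenarioTestBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        super(ScenarioTestBase, cls).setUpClass()
        # One tenant per class: its default security group is private, so
        # parallel classes adding the same rules can no longer collide.
        cls.tenant = create_isolated_tenant(cls.__name__)

    @classmethod
    def tearDownClass(cls):
        # The real helper would delete the tenant and everything it owns.
        cls.tenant = None
        super(ScenarioTestBase, cls).tearDownClass()
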
17:09:56 it's a good thing to add to the scenario tests to prevent this kind of contention on any tenant level resources
17:10:18 agreed
17:10:23 also, back to overall testr status, I've got a patch up to move to serial testr
17:10:41 this will help people get used to the new ui before we make the switch to parallel
17:11:01 mtreinish: great! link for that review?
17:11:02 I would prefer an isolation per worker process solution in the future
17:11:18 sdague: https://review.openstack.org/#/c/40723/
17:11:30 but I need to respin it, adalbas found a small issue with it
17:12:11 mtreinish: yep see that, cool
17:12:15 that will be nice
17:12:22 mtreinish: Looks like saving the logs is not solved yet
17:12:39 afazekas: it is, I'm going to add the debug flag to devstack
17:13:02 mtreinish: ok
17:13:15 ok, anything else on the testr front?
17:13:34 sdague: I don't think so
17:14:02 #topic stress tests (mkoderer)
17:14:09 ok
17:14:15 mkoderer: how goes stress tests?
17:14:32 it uses the common logger now ;)
17:14:45 nice :)
17:14:57 we had general discussions about the purpose in some reviews
17:15:09 I think we should discuss them
17:15:17 so about https://review.openstack.org/#/c/38980/
17:15:35 https://review.openstack.org/#/c/40566/ this should be approved for the logger :)
17:15:50 afazekas: thanks ;)
17:16:19 so about https://review.openstack.org/#/c/38980/ ... IMHO every test could be a stress test
17:16:37 I would like to be able to run several api tests as a stress test
17:16:45 and scenario tests as well
17:17:02 I already started a discussion on that in the mailing list
17:17:10 mkoderer: ok, that sounds interesting
17:17:19 giulivo: are you there?
17:17:22 what are the concerns on it?
17:17:36 mkoderer, I'm all +1 on all you said from me
17:18:31 so I mean we could completely remove all tests in tempest/stress
17:18:39 giulivo: It's got your -1 at present
17:18:40 and use the existing ones
17:18:58 mkoderer: oh, and just use the existing tests. And stress would just be a framework?
17:18:59 so we would stop duplicating code
17:19:01 I like that idea
17:19:04 I think the only issue was about a particular submission https://review.openstack.org/#/c/39752 where we were trying to find out how to organize the tests proposed
17:19:20 sdague: that would be my final idea
17:19:24 dkranz, I know but that is only because of minor suggestions with the code, not the actual feature
17:19:46 and maybe we could add a decorator to specify which tests can be used as a "stresstest"
17:20:12 I think not all tests are useful for stress testing
17:20:13 giulivo: ok, so it's just a details thing? I think we can work it through in the normal review process then.
17:20:20 and maybe some of them don't work
17:20:58 mkoderer: right, it also seems like you'd want to know if some tests allocate a lot of resources, so they could only be run a few N way, and not 100x at a time
17:21:01 yes, if we all agree that this is the way to go.. let's use the reviews to finalise it
17:21:16 sdague, definitely we can, the discussion was about the scope of the tests, the link to the review was more an example of the potential problems we're facing now
17:21:37 mkoderer: in the existing stress framework we can make tests more parametric https://review.openstack.org/#/c/40680/4/tempest/stress/etc/ssh_floating.json
17:21:43 I think that's good. I imagine that we'll probably have plenty of work to do post havana for this, but I think having a summit session on it would be very cool
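
A hedged sketch of the "stresstest" decorator idea floated above: tag existing API and scenario tests so a stress runner can discover and loop them, instead of keeping duplicate copies under tempest/stress. The decorator name, attribute, and discovery helper are illustrative, not the framework's actual interface.

def stresstest(func):
    # Mark a normal gate test as safe to reuse as a stress test.
    func._stress_capable = True
    return func


class ServersTest(object):  # stand-in for a real tempest test class
    @stresstest
    def test_create_delete_server(self):
        pass  # normal gate test body lives here

    def test_live_migration(self):
        pass  # untagged: too resource-hungry to run 100x concurrently


def stress_candidates(test_class):
    # What a stress runner might do: collect only the tagged tests and
    # feed them to its N-way worker loop.
    return sorted(name for name, member in vars(test_class).items()
                  if getattr(member, '_stress_capable', False))


print(stress_candidates(ServersTest))  # ['test_create_delete_server']
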
17:21:58 and I like the idea of using the existing tests in another way, so we aren't duplicating our work
17:22:00 I'm fine with this but we should still allow there being a stress case that does not go in api or scenario.
17:22:21 So if the current stress tests in the stress dir are redundant we can remove them
17:22:25 dkranz: well is there a stress case that you don't think would fit into either one?
17:22:29 dkranz: yes ok that's right
17:22:43 mkoderer: for example we can add random wait times for certain tasks
17:22:54 sdague: Not off the top of my head.
17:23:06 it seems like we backed up a little because we want to make sure things get run in the normal gate to make sure they work, so if they are scenario tests, we can do that
17:23:18 then in stress we can run those scenarios N-way over and over again
17:23:19 sdague: I just don't like saying that if you add a stress test it must fit into the other infrastructure.
17:23:40 dkranz: I sort of do, otherwise we have to figure out how to not bit rot them in a different way :)
17:24:12 sdague: Well, I guess we can cross that bridge when we come to it.
17:24:12 but regardless, we can decide that later
17:24:16 agreed
17:24:21 ok great
17:24:32 this, however, I think is a very fruitful direction, which I like
17:24:33 so we have something to do ;)
17:24:39 nice job mkoderer
17:24:39 sdague: agreed
17:24:47 sdague: thanks
17:25:01 cool, ok, other blueprints we should talk about?
17:25:10 those two have been the top recurring ones
17:25:22 sdague: quantum, but don't know what there is to say.
17:25:36 #topic quantum in gate
17:25:41 sdague: It is really odd that no one is fixing this.
17:25:45 well, it's worth an update
17:25:47 so the way I'm reading it, it means we would not accept submissions like this https://review.openstack.org/#/c/39752/
17:25:49 so smoke tests are back on
17:26:07 * giulivo was typing too slow, sorry
17:26:15 giulivo: I think that's right.
17:26:18 giulivo: right, not for now, as we want to go down mkoderer's path
17:26:34 #topic neutron in gate
17:26:42 I have to get used to using the new name
17:26:49 so the neutron smoke gate is back on
17:26:50 :)
17:26:55 though there are still some races
17:27:02 I haven't looked at full runs recently
17:27:18 anyone working on neutron full gate runs around?
17:27:44 IMHO there are some not yet implemented, not too core features on the full neutron gate
17:28:05 I know I've put my foot down on the devstack side that neutron isn't going to be the default there until it makes the whole gate
17:28:29 sdague: But is anyone actually working on it?
17:28:30 afazekas: if we end up with a handful of tests that aren't applicable, I'm ok with skipping those
17:28:35 We should skip them until implemented, and try to get the full neutron gate job to be a job that can say success
17:28:49 but I'd like to see someone working on it
17:29:38 it's the same approach on the cells side, if we can narrow it down to a few skips based on high priority nova bugs, we can do that.
17:29:55 but we'll leave it to the project teams to own getting us there
17:30:04 ok, any other blueprints?
17:30:22 sdague: Just heat, but that is its own agenda topic.
17:30:26 sure
17:30:36 #topic high priority reviews
17:30:52 ok, who's got reviews that need extra attention?
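
A small sketch of the skip-until-implemented approach mentioned for the full neutron and cells jobs: tie each skip to a tracked bug so the job can go green while the gaps stay visible and easy to revisit. The decorator, the class name, and the bug number are placeholders, not tempest's actual skip helper.

import functools
import unittest


def skip_until_fixed(bug):
    # Skip the test but keep the tracked bug number in the skip reason,
    # so removing the skip later is a one-line change tied to that bug.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            raise unittest.SkipTest("Skipped until bug %s is fixed" % bug)
        return wrapper
    return decorator


class NeutronFullRunTest(unittest.TestCase):  # illustrative class name
    @skip_until_fixed(bug="000000")  # placeholder bug number
    def test_feature_not_yet_supported(self):
        self.fail("never reached while the skip is in place")
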
17:30:52 sdague: I will reread the etherpad related to the neutron blueprint
17:31:00 afazekas: great, thanks
17:31:38 https://review.openstack.org/#/c/35165/ looks like here I got the opposite comment to the one I got at the beginning
17:31:43 sdague: Not sure any are special, just a lot not yet approved.
17:32:30 sdague: I will get to some today.
17:32:30 dkranz: yeh, that happens. But I think we're keeping up not too bad
17:32:37 yes, I will as well
17:32:42 https://review.openstack.org/#/c/39346/ still belongs to the leaking stuff
17:33:03 afazekas: great, that one I wanted to come back to again today
17:33:15 that's an important one to get right and get in
17:33:32 I'll review again today
17:33:36 other reviews?
17:33:56 afazekas: I made a comment that one of sdague's comments was not addressed.
17:34:27 sdague: BTW, are we saying that tearDownClass must never throw and always call super?
17:34:28 dkranz: he noted a sub function
17:34:46 dkranz: yes, I think that's what we need
17:35:01 otherwise we won't get to the end of the inheritance chain
17:35:16 sdague: Sure, but I think we are way short of that currently.
17:35:24 dkranz: agreed
17:35:29 I never found a try block inside of a tearDown
17:35:30 this is going to go in phases
17:35:41 sdague: And there is the question of whether that applies to "expected" exceptions or if all need a blanket try
17:36:00 dkranz: yes, we're going to need to tackle these one by one
17:36:23 I guess the question is should we fix tearDowns with a blanket decorator?
17:36:37 we could probably do that pretty generically
17:36:41 sdague: Yes, if we are going to be hard core about this requirement
17:36:55 and it would be easier to catch people's mistakes
17:37:01 sdague: +1
17:37:25 And easier to not make them in the first place
17:37:28 yep
17:37:44 we could probably get away with 2, one for tearDown, one for tearDownClass
17:38:14 Sounds reasonable.
17:38:17 afazekas: you want to tackle that as part of this?
17:38:34 sdague: There is also setup
17:38:39 sdague: that can be added after that
17:38:47 afazekas: ok
17:38:53 which should call super and call teardown if failing
17:39:08 dkranz: I'm less concerned with setup right now
17:39:13 That's another thing we do only sporadically
17:39:19 sdague: OK
17:39:27 teardown is the big racy bit right now
17:39:44 decorator makes things more tractable.
17:39:57 yep
17:40:06 ok, other reviews to talk about?
17:40:37 #topic Status of slow heat tests (dkranz)
17:40:43 ok, it's your show dkranz
17:40:53 So I've been working with sbaker on this
17:41:03 I pushed a new heat-slow tox job today
17:41:30 I just need a new yaml job and a change to devstack-gate to set a new env variable used by the slow tests.
17:41:34 cool
17:41:54 The only question I had was if we should run the new job serial or parallel?
17:41:57 I'm assuming the heat job will need to go on all the projects, as it drives them all
17:41:58 I submitted it as serial
17:42:17 sdague: Yes, but I was going to start it as non-voting on tempest
17:42:20 dkranz: does it run in parallel?
17:42:38 dkranz: at first I would at least add it non-voting on heat and tempest
17:42:41 sdague: Not with my pending patch (please review :)
17:42:51 dkranz: ok :)
17:43:00 sdague: Did you see my last two comments? :)
17:43:05 dkranz: how many test classes are there with the slow tag?
17:43:15 mtreinish: There are just two tests now.
17:43:28 mtreinish: Steve is waiting on some image build stuff for the rest.
17:43:34 ok then probably not much benefit to running in parallel...
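
A hedged sketch of the blanket-decorator idea from the tearDown discussion above. The meeting suggests two decorators, one for tearDown and one for tearDownClass; a single generic wrapper applied to both is shown here for brevity. It only guarantees "never throw"; calling super() stays in the method body. Names are illustrative, not tempest's actual helpers.

import functools
import logging
import unittest

LOG = logging.getLogger(__name__)


def safe_teardown(func):
    # Log and swallow unexpected exceptions so a failing cleanup can never
    # cut the teardown chain short for the classes further up the MRO.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            LOG.exception("Exception suppressed in %s", func.__name__)
    return wrapper


class BaseTestCase(unittest.TestCase):
    @classmethod
    @safe_teardown
    def tearDownClass(cls):
        # class-level resource cleanup goes here, then hand off to the parent
        super(BaseTestCase, cls).tearDownClass()

    @safe_teardown
    def tearDown(self):
        # per-test cleanup; nothing raised here escapes the wrapper
        super(BaseTestCase, self).tearDown()
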
17:43:40 dkranz: right, I'd like to also run it non-voting on heat, so as they make changes it will trigger there
17:43:44 but it'd be nice to try it
17:43:51 sdague: Sure
17:43:58 but other than that, seems cool
17:44:04 sdague: That's it.
17:44:08 great
17:44:15 #topic Open Discussion
17:44:20 o/
17:44:20 any other topics?
17:44:27 any news regarding the core reviewers topic? ;)
17:44:42 sdague: Did you ever put out the call for reviewers?
17:44:54 dkranz: I didn't, my bad. I'll do that today
17:45:00 sdague: np
17:45:19 sdague: Also contributors :)
17:45:31 I think we'll look at core reviewer nominations at H3, that gives us the rest of the month
17:45:50 and will give us potentially additional people during the rc phase to help review tests
17:46:06 as tempest work kind of peaks post H3
17:46:17 sdague: Sounds good.
17:46:38 I'm still running after these global requirements issues in the gate
17:46:47 sry.. what is H3?
17:46:53 which we have mostly nailed in devstack/tempest
17:46:58 mkoderer: havana 3
17:47:00 mkoderer: havanna-3 milestone
17:47:01 Havana-3 milestone
17:47:08 which is the first week of Sept
17:47:10 ahh sorry ;)
17:47:13 looks like I still can't spell :)
17:47:16 no worries :)
17:47:24 mtreinish: that's not new :)
17:48:18 we have a new way we can break the gate though, with clients releasing and breaking other people's unit tests
17:48:23 which... was fun yesterday
17:48:35 mordred and I are trying to come up with solutions
17:48:55 ok, anything else from folks?
17:48:57 trying trying
17:48:58 not sure if it is my turn, I just want to let you know that I would like to attend a -qa bootcamp
17:49:02 sdague: There is a blueprint for better testing of python clients
17:49:11 so for me to attend one, there needs to be one
17:49:18 sdague: But no progress yet.
17:49:27 so I'd like there to be a -qa bootcamp, please
17:49:37 devstack still not OK on fedora :(
17:49:49 anteaya: yeh, I suspect that probably won't happen before the Icehouse summit
17:49:56 but something we can figure out if it makes sense after
17:49:58 sdague: yeah, I agree
17:50:04 after works for me
17:50:04 it's a lot to organize though
17:50:06 thank you
17:50:08 it is
17:50:16 let me know if I can do anything to help
17:50:23 afazekas: yeh, did you look at dtroyer's patch?
17:50:32 I think that got russellb fixed yesterday
17:50:36 if you want to have it in Canada, icehouse in February, just let me know
17:50:37 there is still the requirements issue
17:50:55 afazekas: will try to get that sorted today
17:51:02 I know that's biting fedora folks a lot
17:51:13 sdague: Bit me :)
17:51:14 dkranz, that blueprint about the python clients, how about after we have some more scenario tests we try to use the scenario tests with different versions of the client libraries?
17:51:19 sdague: I will try that
17:51:29 sdague: thx
17:51:30 giulivo: Yes, that was the idea.
17:51:40 afazekas / dkranz: from the red hat side, any volunteers to figure out how to get f19 in the gate so we don't break ourselves like this in the future?
17:51:48 yeah i got it working ... i had to change one thing, but i think it's either a fedora issue or a rackspace fedora mirror issue
17:52:05 sdague: We're working on that.
17:52:10 dkranz: cool
17:52:17 sdague: Is there any doc about exactly what needs to be done?
17:52:21 commented out the python-libguestfs requirement, because it was pulling in qemu, which was pulling in qemu-system-*, and some weren't available
17:52:39 and they're not all needed, so i just didn't install libguestfs for now
17:52:50 other than that one line, devstack works for me right now on f19 fwiw
17:52:58 sdague: It's been waiting on fedora/devstack working.
17:53:03 russellb: ok, cool. any chance of laying a patch on top of dtroyer's fixes?
17:53:05 russellb: Great.
17:53:25 not sure it's a devstack fix for what i saw, so no patches coming
17:53:31 ok
17:53:35 no worries
17:54:02 dkranz: on what's needed, run at mordred :)
17:54:10 sdague: OK
17:54:17 arro?
17:54:21 and the rest of -infra, they can tell you. I think the lack of a public glance api was still an issue
17:54:34 mordred: We will do the work to get a fedora/devstack gating job.
17:54:42 ah - great.
17:54:45 mordred: on how we test fedora in the gate with devstack, so we don't break those guys all the time
17:54:46 mordred: But not sure exactly what needs to be done.
17:55:07 dkranz: cool. find us in infra and let's chat about it
17:55:12 but we can take that offline from here
17:55:15 mordred: Will do.
17:55:19 I've got a hard stop at the top of the hour
17:55:20 sdague: IMHO there will be a volunteer soon, it is just summer and vacations ..
17:55:28 afazekas: totally understood
17:55:40 sdague: It is high priority
17:55:52 I just look forward to the day when we've got fedora in the gate, I think it will be good for the project
17:56:15 ok, anything else from folks?
17:56:15 Yes
17:56:33 I mean it would be good for the project :)
17:56:35 :)
17:56:39 ok, let's call it
17:56:43 * afazekas end
17:56:49 thanks everyone for coming, see you on other irc channels
17:56:52 #endmeeting