17:00:52 <sdague> #startmeeting qa
17:00:53 <openstack> Meeting started Thu Aug  8 17:00:52 2013 UTC and is due to finish in 60 minutes.  The chair is sdague. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:56 <openstack> The meeting name has been set to 'qa'
17:01:03 <sdague> ok, who's around for the QA meeting?
17:01:06 <afazekas> hi
17:01:10 <mkoderer> Hi!
17:01:17 <mtreinish> I am here
17:01:35 <sdague> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_August_8_2013
17:01:41 <sdague> ok, agenda for today
17:01:46 <giulivo> hi
17:01:50 <sdague> first up...
17:01:50 <dkranz> here
17:02:00 <sdague> #topic Volunteers to run meeting while sdague out (sdague)
17:02:12 <dkranz> sdague: When?
17:02:12 <sdague> ok, I'm going to miss the next 3 meetings due to vacation
17:02:30 <sdague> so basically for the rest of august
17:02:33 <dkranz> sdague: I'm out next week.
17:02:40 <mtreinish> sdague: I can take care of that I guess
17:02:42 <sdague> do we have volunteers for the various weeks?
17:02:59 <sdague> mtreinish: ok, you volunteer for running the next 3 weeks?
17:03:01 <dkranz> sdague: I can do week after next
17:03:09 <sdague> or you and dkranz want to take turns?
17:03:18 <mtreinish> sdague: whatever
17:03:33 <dkranz> mtreinish: Can you do next week?
17:03:40 <mtreinish> dkranz: yeah I'll be here
17:03:47 <dkranz> mtreinish: I know I can do the third week.
17:04:00 <dkranz> mtreinish: Not sure about the second.
17:04:09 <mtreinish> dkranz: I can just handle all 3 weeks; it's not a problem for me
17:04:11 <sdague> ok
17:04:27 <sdague> #info mtreinish to run the QA meeting for the rest of August
17:04:42 <sdague> ok, I think that's easiest, one person to drive through the whole time
17:04:55 <mkoderer> I think so too ;)
17:04:59 <sdague> #topic Blueprints (sdague)
17:05:11 <sdague> ok, lets hit the big ones first, then ask for other updates
17:05:25 <sdague> #topic testr progress (mtreinish)
17:05:31 <sdague> mtreinish: you first
17:05:52 <mtreinish> ok, so I'm still moving along with fixing races
17:06:07 <mtreinish> last friday a nova change got merged that completely destroyed az and aggregates
17:06:20 <mtreinish> (when running in parallel)
17:06:32 <mtreinish> I've got a fix to nova for that and added extra locking to get around it
17:06:49 <mtreinish> other than that the only blocking race is the security group one in scenarios
17:06:51 <sdague> cool, how's the state of the nova patch?
17:07:07 <sdague> it will be very interesting to see what's hiding behind these races
17:07:23 <mtreinish> sdague: it's got a -1 from cyeoh about v3 tests but I don't think there is any need for them
17:07:26 <afazekas> mtreinish: which scenario ?
17:07:38 <sdague> mtreinish: you have the review?
17:07:43 <mtreinish> afazekas: all of them; there is an inherent race between the scenario tests and security groups
17:07:46 <afazekas> Do the security group deletions happen after the server is reported as deleted?
17:07:59 <mtreinish> they all try to add the same rules to the same default sec group
17:08:05 <mtreinish> tenant isolation will fix it
17:08:10 <mtreinish> I've just got to work through it
17:08:19 <mtreinish> sdague: https://review.openstack.org/#/c/40424/
17:08:29 <dkranz> mtreinish: Are we running each test in parallel, or each class?
17:08:39 <afazekas> Do we have a reason not to use a new security group in the scenario tests?
17:08:39 <sdague> dkranz: this is class level parallel
17:08:40 <mtreinish> it's each class (which for scenario is each test)
17:08:52 <dkranz> mtreinish: OK, good.
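[For context on the class-level grouping mtreinish describes: testr-style schedulers use a group regex to collapse each test id down to its class name, so all tests in a class land on the same worker. A minimal Python sketch of the idea; the regex here is illustrative, not necessarily the exact one in tempest's .testr.conf:]

    import re

    # group regex: everything up to (and including) the final dot, i.e.
    # the fully qualified class name; the method name is stripped off
    GROUP_REGEX = re.compile(r'([^\.]*\.)*')

    def group_id(test_id):
        """Return the scheduling group (the class path) for a test id."""
        return GROUP_REGEX.match(test_id).group(0)

    tests = [
        'tempest.api.compute.test_servers.ServersTest.test_create',
        'tempest.api.compute.test_servers.ServersTest.test_delete',
        'tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_ops',
    ]
    for t in tests:
        # both ServersTest tests map to the same group -> same worker
        print('%s -> %s' % (group_id(t), t))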
17:09:10 <mtreinish> afazekas: that's essentially what I'm doing by adding tenant isolation to the scenario tests
17:09:20 <mtreinish> each tenant has its own default sec group
17:09:39 <afazekas> yes
17:09:55 <sdague> cool, good stuff.
17:09:56 <mtreinish> it's a good thing to add to the scenario tests to prevent this kind of contention on any tenant level resources
17:10:18 <sdague> agreed
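[To make the race concrete, here is a self-contained toy model of what the parallel scenario runs hit; the class and method names are illustrative stand-ins, not tempest's real clients:]

    class DuplicateRule(Exception):
        pass

    class SecGroup(object):
        """Stand-in for a tenant's default security group."""
        def __init__(self):
            self.rules = set()

        def add_rule(self, proto, port):
            if (proto, port) in self.rules:   # nova answers with a 400 here
                raise DuplicateRule((proto, port))
            self.rules.add((proto, port))

    # shared tenant: every parallel worker mutates the SAME default group
    shared = SecGroup()
    shared.add_rule('tcp', 22)                # worker 1 wins the race
    try:
        shared.add_rule('tcp', 22)            # worker 2 loses it
    except DuplicateRule:
        print('shared tenant: second worker failed')

    # tenant isolation: each worker gets its own tenant, hence its own
    # private default group, so there is no shared state to race on
    for worker in (1, 2):
        SecGroup().add_rule('tcp', 22)
    print('isolated tenants: both workers succeeded')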
17:10:23 <mtreinish> also back to overall testr status I've got a patch up to move to serial testr
17:10:41 <mtreinish> this will help people get used to the new ui before we make the switch to parallel
17:11:01 <sdague> mtreinish: great! link for that review?
17:11:02 <afazekas> I would prefer an isolation per worker process solution in the future
17:11:18 <mtreinish> sdague: https://review.openstack.org/#/c/40723/
17:11:30 <mtreinish> but I need to respin it; adalbas found a small issue with it
17:12:11 <sdague> mtreinish: yep see that, cool
17:12:15 <sdague> that will be nice
17:12:22 <afazekas> mtreinish: Looks like saving the logs is not solved yet
17:12:39 <mtreinish> afazekas: it is, I'm going to add the debug flag to devstack
17:13:02 <afazekas> mtreinish: ok
17:13:15 <sdague> ok, anything else on the testr front?
17:13:34 <mtreinish> sdague: I don't think so
17:14:02 <sdague> #topic stress tests (mkoderer)
17:14:09 <mkoderer> ok
17:14:15 <sdague> mkoderer: how goes stress tests?
17:14:32 <mkoderer> it uses the common logger now ;)
17:14:45 <sdague> nice :)
17:14:57 <mkoderer> we had general discussions about the purpose in some reviews
17:15:09 <mkoderer> I think we should discuss them
17:15:17 <mkoderer> so about https://review.openstack.org/#/c/38980/
17:15:35 <afazekas> https://review.openstack.org/#/c/40566/ this should be approved for the logger :)
17:15:50 <mkoderer> afazekas: thanks ;)
17:16:19 <mkoderer> so about https://review.openstack.org/#/c/38980/  ... IMHO every test could be a stress test
17:16:37 <mkoderer> I would like to be able to run several api tests as stress tests
17:16:45 <mkoderer> and scenario tests as well
17:17:02 <mkoderer> I already started a discussion on that in the mailing list
17:17:10 <sdague> mkoderer: ok, that sounds interesting
17:17:19 <mkoderer> giulivo: are you there?
17:17:22 <sdague> what are the concerns on it?
17:17:36 <giulivo> mkoderer, I'm all in, +1 on all you said from me
17:18:31 <mkoderer> so I mean we could completely remove all tests in tempest/stress
17:18:39 <dkranz> giulivo: It's got your -1 at present
17:18:40 <mkoderer> and use the existing ones
17:18:58 <sdague> mkoderer: oh, and just use the existing tests. And stress would just be a framework?
17:18:59 <mkoderer> so we would stop duplicating code
17:19:01 <sdague> I like that idea
17:19:04 <giulivo> I think the only issue was about a particular submission https://review.openstack.org/#/c/39752 where we were trying to find out how to organize the proposed tests
17:19:20 <mkoderer> sdague: that would be my final idea
17:19:24 <giulivo> dkranz, I know but that is only because of minor suggestions about the code, not the actual feature
17:19:46 <mkoderer> and maybe we could add a decorator to specify which tests can be used as "stresstest"
17:20:12 <mkoderer> I think not all tests are useful as stress tests
17:20:13 <sdague> giulivo: ok, so it's just a details thing? I think we can work it through in the normal review process then.
17:20:20 <mkoderer> and maybe some of them don't work
17:20:58 <sdague> mkoderer: right, it also seems like you'd want to know if some tests allocate a lot of resources, so they could only be run a few at a time, and not 100x
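[A hedged sketch of what such a decorator could look like, reusing testtools attributes and adding a hypothetical concurrency hint for resource-heavy tests; none of this is an agreed tempest interface:]

    import testtools

    def stresstest(max_workers=None):
        """Mark an existing api/scenario test as usable by the stress
        framework, optionally capping how many copies run at once."""
        def decorator(f):
            f = testtools.testcase.attr('stress')(f)  # tag via testtools attrs
            f.stress_max_workers = max_workers        # hypothetical resource hint
            return f
        return decorator

    class ServerActionsTest(testtools.TestCase):

        @stresstest(max_workers=4)  # allocates a server: run only a few-way
        def test_resize_server(self):
            pass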
17:21:01 <mkoderer> yes if we all agree that this is the way to go.. let's use the reviews to finalise it
17:21:16 <giulivo> sdague, definitely we can, the discussion was about the scope of the tests, the link to the review was more an example of the potential problems we're facing now
17:21:37 <afazekas> mkoderer: in the existing stress framework we can make tests more parametric https://review.openstack.org/#/c/40680/4/tempest/stress/etc/ssh_floating.json
17:21:43 <sdague> I think that's good. I imagine that we'll probably have plenty of work to do post havana for this, but I think having a summit session on it would be very cool
17:21:58 <sdague> and I like the idea of using the existing tests in another way, so we aren't duplicating our work
17:22:00 <dkranz> I'm fine with this but we should still allow for a stress case that does not go in api or scenario.
17:22:21 <dkranz> So if current stress tests in the stress dir are redundant we can remove them
17:22:25 <sdague> dkranz: well is there a stress case that you don't think would fit into either one?
17:22:29 <mkoderer> dkranz: yes ok thats right
17:22:43 <afazekas> mkoderer: for example we can add random wait times for certain tasks
17:22:54 <dkranz> sdague: Not off the top of my head.
17:23:06 <sdague> it seems like we backed up a little because we want things run in the normal gate to make sure they work, so if they are scenario tests, we can do that
17:23:18 <sdague> then in stress we can run those scenarios N-way over and over again
17:23:19 <dkranz> sdague: I just don't like saying that if you add a stress test it must fit into the other infrastructure.
17:23:40 <sdague> dkranz: I sort of do, otherwise we have to figure out how to not bit rot them in a different way :)
17:24:12 <dkranz> sdague: Well, I guess we can cross that bridge when we come to it.
17:24:12 <sdague> but regardless, we can decide that later
17:24:16 <sdague> agreed
17:24:21 <mkoderer> ok great
17:24:32 <sdague> this, however, I think is a very fruitful direction, which I like
17:24:33 <mkoderer> so we have something to do ;)
17:24:39 <sdague> nice job mkoderer
17:24:39 <dkranz> sdague: agreed
17:24:47 <mkoderer> sdague: thanks
17:25:01 <sdague> cool, ok other blueprints we should talk about?
17:25:10 <sdague> those two have been the top recurring ones
17:25:22 <dkranz> sdague: quantum, but don't know what there is to say.
17:25:36 <sdague> #topic quantum in gate
17:25:41 <dkranz> sdague: It is really odd that no one is fixing this.
17:25:45 <sdague> well, it's worth an update
17:25:47 <giulivo> so the way I'm reading it, it means we would not accept submissions like this https://review.openstack.org/#/c/39752/
17:25:49 <sdague> so smoke tests are back on
17:26:07 * giulivo was typing too slow, sorry
17:26:15 <dkranz> giulivo: I think that's right.
17:26:18 <sdague> giulivo: right, not for now, as we want to go down mkoderer's path
17:26:34 <sdague> #topic neutron in gate
17:26:42 <sdague> I have to get used to using the new name
17:26:49 <sdague> so neutron smoke gate is back on
17:26:50 <afazekas> :)
17:26:55 <sdague> though there are still some races
17:27:02 <sdague> I haven't looked at full runs recently
17:27:18 <sdague> anyone working on neutron full gate runs around?
17:27:44 <afazekas> IMHO there are some not-yet-implemented, not-too-core features hit by the full neutron gate
17:28:05 <sdague> I know I've put my foot down on the devstack side that neutron isn't going to be default there until it makes the whole gate
17:28:29 <dkranz> sdague: But is any one actually working on it?
17:28:30 <sdague> afazekas: if we end up with a handful of tests that aren't applicable, I'm ok in skipping those
17:28:35 <afazekas> We should skip them until implemented, and try to get the full neutron gate job to where it can report success
17:28:49 <sdague> but I'd like to see someone working on it
17:29:38 <sdague> it's the same approach on the cells side, if we can narrow it down to a few skips based on high priority nova bugs, we can do that.
17:29:55 <sdague> but we'll leave it to the project teams to own getting us there
17:30:04 <sdague> ok, any other blueprints?
17:30:22 <dkranz> sdague: Just heat, but that is its own agenda topic.
17:30:26 <sdague> sure
17:30:36 <sdague> #topic high priority reviews
17:30:52 <sdague> ok, who's got reviews that need extra attention?
17:30:52 <afazekas> sdague: I will reread the etherpad related to the neutron blueprint
17:31:00 <sdague> afazekas: great, thanks
17:31:38 <afazekas> https://review.openstack.org/#/c/35165/ looks like here I got the opposite comment to the one I got at the beginning
17:31:43 <dkranz> sdague: Not sure any are special, just a lot not yet approved.
17:32:30 <dkranz> sdague: I will get to some today.
17:32:30 <sdague> dkranz: yeh, that happens. But I think we're keeping up not too bad
17:32:37 <sdague> yes, I will as well
17:32:42 <afazekas> https://review.openstack.org/#/c/39346/ still belongs to the leaking stuff
17:33:03 <sdague> afazekas: great, that one I wanted to come back to again today
17:33:15 <sdague> that's an important one to get right and get in
17:33:32 <sdague> I'll review again today
17:33:36 <sdague> other reviews?
17:33:56 <dkranz> afazekas: I made a comment that one of sdague's comments was not addressed.
17:34:27 <dkranz> sdague: BTW, are we saying that tearDownClass must never throw and always call super?
17:34:28 <afazekas> dkranz: he noted a sub function
17:34:46 <sdague> dkranz: yes, I think that's what we need
17:35:01 <sdague> otherwise we won't get to the end of the inheritance chain
17:35:16 <dkranz> sdague: Sure, but I think we are way short of that currently.
17:35:24 <sdague> dkranz: agreed
17:35:29 <mkoderer> I never found a try block inside of a tearDown
17:35:30 <sdague> this is going to go in phases
17:35:41 <dkranz> sdague: And there is the question of whether that applies to "expected" exceptions or if all need blanket try
17:36:00 <sdague> dkranz: yes, we're going to need to tackle these one by one
17:36:23 <sdague> I guess the question is: should we fix tearDowns with a blanket decorator?
17:36:37 <sdague> we could probably do that pretty generically
17:36:41 <dkranz> sdague: Yes, if we are going to be hard core about this requirement
17:36:55 <sdague> and it would be easier to catch people's mistakes
17:37:01 <dkranz> sdague: +1
17:37:25 <dkranz> And easier to not make them in the first place
17:37:28 <sdague> yep
17:37:44 <sdague> we could probably get away with 2, one for teardown, one for teardownclass
17:38:14 <dkranz> Sounds reasonable.
17:38:17 <sdague> afazekas: you want to tackle that as part of this?
17:38:34 <dkranz> sdague: There is also setup
17:38:39 <afazekas> sdague: it can be added after that
17:38:47 <sdague> afazekas: ok
17:38:53 <dkranz> which should call super and call teardown if failing
17:39:08 <sdague> dkranz: I'm less concerned with setup right now
17:39:13 <dkranz> That's another thing we do only sporadically
17:39:19 <dkranz> sdague: OK
17:39:27 <sdague> teardown is the big racey bit right now
17:39:44 <dkranz> decorator makes things more tractable.
17:39:57 <sdague> yep
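[A minimal sketch of the blanket-decorator idea under discussion: swallow and log cleanup failures so the real test result isn't masked, and always continue the inheritance chain. clear_resources is a hypothetical per-class cleanup hook, and the logging choice is an assumption:]

    import functools
    import logging

    import testtools

    LOG = logging.getLogger(__name__)

    def never_raises(f):
        """Log cleanup failures instead of raising them: an exception
        escaping tearDownClass hides the actual test outcome."""
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception:
                LOG.exception("cleanup in %s failed", f.__name__)
        return wrapper

    class BaseTestCase(testtools.TestCase):

        @classmethod
        def clear_resources(cls):
            pass  # subclasses delete servers, volumes, etc. here

        @classmethod
        @never_raises
        def tearDownClass(cls):
            try:
                cls.clear_resources()
            finally:
                # always continue down the inheritance chain
                super(BaseTestCase, cls).tearDownClass()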
17:40:06 <sdague> ok, other reviews to talk about?
17:40:37 <sdague> #topic Status of slow heat tests (dkranz)
17:40:43 <sdague> ok, it's your show dkranz
17:40:53 <dkranz> So I've been working with sbaker on this
17:41:03 <dkranz> I pushed a new heat-slow tox job today
17:41:30 <dkranz> I just need a new yaml job and a change to devstack-gate to set a new env variable used by the slow tests.
17:41:34 <sdague> cool
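[The env-variable plumbing might look something like this; the TEMPEST_RUN_SLOW name is a guess for illustration, not the variable dkranz actually wired into devstack-gate:]

    import os
    import unittest

    def skip_unless_slow_enabled(f):
        """Only run the decorated test when the gate job opts in."""
        run_slow = os.environ.get('TEMPEST_RUN_SLOW', 'false').lower() == 'true'
        return f if run_slow else unittest.skip('slow heat tests disabled')(f)

    class SlowHeatStackTest(unittest.TestCase):

        @skip_unless_slow_enabled
        def test_autoscaling_stack(self):
            pass  # long-running stack create/update would go here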
17:41:54 <dkranz> The only question I had was if we should run the new job serial or parallel?
17:41:57 <sdague> I'm assuming the heat job will need to go on all the projects, as it drives them all
17:41:58 <dkranz> I submitted it as serial
17:42:17 <dkranz> sdague: Yes, but I was going to start it as non-voting on tempest
17:42:20 <sdague> dkranz: does it run in parallel?
17:42:38 <sdague> dkranz: at first I would at least add it non-voting on heat and tempest
17:42:41 <dkranz> sdague: Not with my pending patch (please review :)
17:42:51 <sdague> dkranz: ok :)
17:43:00 <dkranz> sdague: Did you see my last two comments? :)
17:43:05 <mtreinish> dkranz: how many test classes are there with the slow tag?
17:43:15 <dkranz> mtreinish: There are just two tests now.
17:43:28 <dkranz> mtreinish: Steve is waiting on some image build stuff for the rest.
17:43:34 <mtreinish> ok then probably not much benefit to running in parallel...
17:43:40 <sdague> dkranz: right, I'd like to also run it non-voting on heat, so as they make changes it will trigger there
17:43:44 <mtreinish> but it'd be nice to try it
17:43:51 <dkranz> sdague: Sure
17:43:58 <sdague> but other than that, seems cool
17:44:04 <dkranz> sdague: That's it.
17:44:08 <sdague> great
17:44:15 <sdague> #topic Open Discussion
17:44:20 <anteaya> o/
17:44:20 <sdague> any other topics?
17:44:27 <mkoderer> any news regarding the core reviewers topic? ;)
17:44:42 <dkranz> sdague: Did you ever put out the call for reviewers?
17:44:54 <sdague> dkranz: I didn't, my bad. I'll do that today
17:45:00 <dkranz> sdague: np
17:45:19 <dkranz> sdague: Also contributors :)
17:45:31 <sdague> I think we'll look at core reviewer nominations at H3; that gives us the rest of the month
17:45:50 <sdague> and will give us potentially additional people during the rc phase to help review tests
17:46:06 <sdague> as tempest work kind of peaks post H3
17:46:17 <dkranz> sdague: Sounds good.
17:46:38 <sdague> I'm still chasing these global requirements issues in the gate
17:46:47 <mkoderer> sry.. what is H3?
17:46:53 <sdague> which we have mostly nailed in devstack/tempest
17:46:58 <dkranz> mkoderer: havana 3
17:47:00 <mtreinish> mkoderer: havanna-3 milestone
17:47:01 <sdague> Havana-3 milestone
17:47:08 <sdague> which is first week of Sept
17:47:10 <mkoderer> ahh sorry ;)
17:47:13 <mtreinish> looks like I still can't spell :)
17:47:16 <sdague> no worries :)
17:47:24 <sdague> mtreinish: that's not new :)
17:48:18 <sdague> we have a new way we can break the gate though, with clients releasing and breaking other people's unit tests
17:48:23 <sdague> which... was fun yesterday
17:48:35 <sdague> mordred and I are trying to come up with solutions
17:48:55 <sdague> ok, anything else from folks?
17:48:57 <mordred> trying trying
17:48:58 <anteaya> not sure if it is my turn, I just want to let you know that I would like to attend a -qa bootcamp
17:49:02 <dkranz> sdague: There is a blueprint for better testing of python clients
17:49:11 <anteaya> so for me to attend one, there needs to be one
17:49:18 <dkranz> sdague: But no progress yet.
17:49:27 <anteaya> so I'd like there to be a -qa bootcamp, please
17:49:37 <afazekas> devstack still not OK on fedora :(
17:49:49 <sdague> anteaya: yeh, I suspect that probably won't happen before Icehouse summit
17:49:56 <sdague> but something we can figure out if it makes sense after
17:49:58 <anteaya> sdague: yeah, I agree
17:50:04 <anteaya> after works for me
17:50:04 <sdague> it's a lot to organize though
17:50:06 <anteaya> thank you
17:50:08 <anteaya> it is
17:50:16 <anteaya> let me know if I can do anything to help
17:50:23 <sdague> afazekas: yeh, did you look at dtroyer's patch?
17:50:32 <sdague> I think that got russellb fixed yesterday
17:50:36 <anteaya> if you want to have it in Canada, icehouse in February, just let me know
17:50:37 <sdague> there is still the requirements issue
17:50:55 <sdague> afazekas: will try to get that sorted today
17:51:02 <sdague> I know that's biting fedora folks a lot
17:51:13 <dkranz> sdague: Bit me :)
17:51:14 <giulivo> dkranz, about that blueprint for the python clients: how about, once we have some more scenario tests, we try running them with different versions of the client libraries?
17:51:19 <afazekas> sdague: I will try that
17:51:29 <afazekas> sdague: thx
17:51:30 <dkranz> giulivo: Yes, that was the idea.
17:51:40 <sdague> afazekas / dkranz: from the red hat side, any volunteers to figure out how to get f19 in the gate so we don't break ourselves like this in the future?
17:51:48 <russellb> yeah i got it working ... i had to change one thing, but i think it's either a fedora issue or rackspace fedora mirror issue
17:52:05 <dkranz> sdague: We're working on that.
17:52:10 <sdague> dkranz: cool
17:52:17 <dkranz> sdague: Is there any doc about exactly what needs to be done?
17:52:21 <russellb> commented out the python-libguestfs requirement, because it was pulling in qemu, which was pulling in qemu-system-*, and some weren't available
17:52:39 <russellb> and they're not all needed, so i just didn't install libguestfs for now
17:52:50 <russellb> other than that one line, devstack works for me right now on f19 fwiw
17:52:58 <dkranz> sdague: It's been waiting on fedora/devstack working.
17:53:03 <sdague> russellb: ok, cool. any chance of laying a patch on top of dtroyer's fixes?
17:53:05 <dkranz> russellb: Great.
17:53:25 <russellb> not sure it's a devstack fix for what i saw, so no patches coming
17:53:31 <sdague> ok
17:53:35 <sdague> no worries
17:54:02 <sdague> dkranz: on what's needed, run at mordred :)
17:54:10 <dkranz> sdague: OK
17:54:17 <mordred> arro?
17:54:21 <sdague> and the rest of -infra, they can tell you. I think the lack of public glance api was still an issue
17:54:34 <dkranz> mordred: We will do the work to get a fedora/devstack gating job.
17:54:42 <mordred> ah - great.
17:54:45 <sdague> mordred: on how we test fedora in the gate with devstack, so we don't break those guys all the time
17:54:46 <dkranz> mordred: But not sure exactly what needs to be done.
17:55:07 <mordred> dkranz: cool. find us in infra and let's chat about it
17:55:12 <sdague> but we can take that offline from here
17:55:15 <dkranz> mordred: Will do.
17:55:19 <sdague> I've got a hard stop at the top of the hour
17:55:20 <afazekas> sdague: IMHO there will be a volunteer soon, it's just that it's summer and vacation time ..
17:55:28 <sdague> afazekas: totally understood
17:55:40 <dkranz> sdague: It is high priority
17:55:52 <sdague> I just look forward to the day where we've got fedora in the gate, I think it will be good for the project
17:56:15 <sdague> ok, anything else from folks?
17:56:15 <afazekas> Yes
17:56:33 <afazekas> I mean it would be good for the project :)
17:56:35 <sdague> :)
17:56:39 <sdague> ok, lets call it
17:56:43 * afazekas end
17:56:49 <sdague> thanks everyone for coming, see you on other irc channels
17:56:52 <sdague> #endmeeting