17:02:09 #startmeeting qa
17:02:10 Meeting started Thu Oct 17 17:02:09 2013 UTC and is due to finish in 60 minutes. The chair is mtreinish. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:13 The meeting name has been set to 'qa'
17:02:21 hi, sorry we're a couple minutes late, who is here?
17:02:26 hi
17:02:56 hey, folks
17:03:04 yeh, who all is around... o/
17:03:05 hi
17:03:12 o/
17:03:13 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
17:03:19 hi, I want to report on Neutron testing
17:03:23 today's agenda is a single item?
17:03:41 o/
17:03:42 mtreinish: being release week, I didn't populate it from last week
17:03:45 my bad
17:03:49 ok, let's start with neutron
17:03:56 #topic neutron
17:03:56 then the ceilometer one
17:04:00 mlavalle: take it away
17:04:15 sdague: I've been working on https://bugs.launchpad.net/bugs/1209446
17:04:18 Launchpad bug 1209446 in nova "nova security group extension doesn't handle neutron exception properly" [Medium,Fix released]
17:04:24 hi
17:04:35 hi
17:04:43 mlavalle: ok, great
17:04:43 hi
17:04:45 sdague: I did a very thorough trace of network traffic on all the ports
17:05:06 sdague: involved. So I'm pretty sure the network piece is ok
17:05:41 sdague: last night I ran the test but added my own tearDownClass to keep the instance alive
17:06:01 sdague: and attempted a login with the key pair created by the test
17:06:29 and it fails, so the problem is somewhere in the key pair machinery
17:06:31 * kashyap waves hi
17:07:02 mlavalle: wasn't that a similar issue to what was happening before in the basic network scenario test?
17:07:07 the ssh I ran was manual
17:07:22 yeah, it's the same test
17:07:28 mlavalle: does cirros cloud-init handle key injection properly? perhaps logging the console-log from the VM may help
17:07:46 andreaf: that's my suspicion now
17:08:02 because the ssh was done manually by me
17:08:11 andreaf: You cannot inject a key into the cirros image, but it should get it via metadata
17:08:53 (Just a small note - cirros does run a bunch of networking commands as part of boot & throws them in its serial console log)
17:08:54 afazekas: so, any advice as to how to proceed?
17:09:25 afazekas: yes, that's what I meant, wrong wording, sorry. The metadata goes through cloud-init, neutron, and nova, so there are many points at which it could fail
17:09:28 afazekas: were you using config drive to populate it in your tests?
17:09:36 afazekas: what approach were you taking?
17:09:45 kashyap: I'll take a look at that
17:10:46 ok, cool
17:10:50 anything else on neutron?
17:10:57 jd__: you around?
17:11:02 mlavalle, if you're curious what commands it runs, when I started exploring OpenStack I noted them down here (scroll to the end of the post) - https://kashyapc.wordpress.com/2013/04/06/finding-serial-console-log-of-a-nova-instance/
17:11:15 sdague: yeah…
17:11:19 sdague: The instances are able to get the key in three ways: one is file injection (not working), another is via the metadata service, and the third is the config drive
17:11:32 just arrived
17:11:36 afazekas: right, which way were you doing it?
17:11:39 We are using the metadata service in almost all cases
17:11:52 sdague: one more thing… doing this exercise I had to set up a lot of tcpdump traces manually across all the ports involved…
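
To illustrate the console-log check andreaf and kashyap suggest above, here is a minimal, hypothetical sketch (not the Tempest code under discussion) that uses python-novaclient to dump a guest's console output when an SSH login fails; the helper name and the use of OS_* environment variables are assumptions for the example.

```python
# Hypothetical helper, not the Tempest test under discussion: fetch the
# guest console log so cloud-init / metadata failures (e.g. the SSH key
# never arriving) show up in the test output.
import os

from novaclient import client as nova_client


def dump_console_log(server_id, lines=100):
    """Print the tail of an instance's console log via python-novaclient.

    Assumes the usual OS_* credentials are set in the environment.
    """
    nova = nova_client.Client('2',
                              os.environ['OS_USERNAME'],
                              os.environ['OS_PASSWORD'],
                              os.environ['OS_TENANT_NAME'],
                              os.environ['OS_AUTH_URL'])
    server = nova.servers.get(server_id)
    print(server.get_console_output(length=lines))
```

Since cirros logs its boot-time networking commands to the serial console, seeing (or not seeing) its attempts to reach the metadata service in that output is usually enough to tell whether the key ever made it to the guest.
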
17:12:02 we should probably change that to config drive, which is a little more reliable in my experience
17:12:08 sdague: yeah, it's normally from the metadata service (except in the config drive tests :) )
17:12:10 especially when we are playing with networking
17:12:27 sdague: would there be interest in adding debug code to tempest to be able to trace network traffic in tests?
17:12:27 I could imagine us doing something totally crazy that gets in the way of the metadata service
17:12:32 mlavalle: yes
17:12:39 sdague: I would implement it
17:12:49 mlavalle: do you need sudo for tcpdump?
17:13:06 sdague: +1, there's a lot of the network debugging which could be automated
17:13:07 sdague: yes, I do need sudo for tcpdump
17:13:08 we are currently running under a separate tempest user in the gate
17:13:21 but we have a sudoers file for it, which has ip and iptables in it
17:13:27 adding tcpdump is probably an option
17:13:45 mlavalle: let's take that over to -qa after the meeting
17:13:47 we should log the nova console-log on unexpected failure, but AFAIK the metadata service is always expected to work when the instance has a connection to a router
17:13:48 sdague: ok, I will create a blueprint and target it at icehouse
17:14:03 mlavalle: great
17:14:14 sdague: that's all I have
17:14:32 mlavalle: actually that brings up another thing I probably should have said earlier.
17:14:39 afazekas: we had some interesting issues with the metadata service in real environments, so we moved to config drive
17:14:40 This morning we branched tempest stable/havana
17:15:11 so all merged commits are now for the icehouse release of tempest
17:15:21 right
17:15:27 #info Tempest master is now icehouse
17:15:35 #info stable/havana open for backports
17:15:38 sdague: that's more because of our weird network setup in the lab getting in the way
17:15:45 mtreinish: cool
17:15:56 sdague: AFAIK we went back to the metadata service
17:16:13 mtreinish: yeh, but still, when the tests are manipulating the networks, it seems like removing moving parts might be good
17:16:17 anyway, just a thought
17:16:29 ok, let's get on to the ceilo item
17:16:37 #topic Unblocking of Ceilometer QA testing (jd__, sileht)
17:16:48 jd__, sileht: you're up
17:16:53 thanks
17:17:09 so we've been trying to add tests for Ceilometer using tempest for a while now
17:17:25 we hit a bug; it took a _very_ long time to track it down and we're trying to solve it
17:17:49 the main point of this agenda item today is to stress to you a bit how important it is for Ceilometer :)
17:18:09 so can you guys reorder the patches to put this one at the bottom - https://review.openstack.org/#/c/51623/ - that way it won't be caught up on top of the others
17:18:51 I'll work with dtroyer to land that one today
17:18:55 ok, cool
17:19:06 but it will be simpler if it's not 3 deep in a patch queue
17:19:08 I think sileht can do that, it shouldn't take long, sileht?
17:19:16 I'll do it now :)
17:19:24 sdague: you'd prefer just one patch?
17:19:38 cool, thanks
17:20:02 actually, it's good to have all of them, but that's the only one blocking you guys, so let's put it at the bottom and do it first
17:20:14 okay :)
17:20:17 as the others are less urgent
17:20:36 indeed
17:20:50 cool, great
17:20:56 so that's it for us, if it's solved quickly without much debate :)
17:21:01 thanks!
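
A rough, hypothetical sketch of the kind of Tempest debug hook mlavalle proposes in the Neutron discussion above; the function names are invented for illustration, and it assumes a sudoers entry for tcpdump alongside the existing ones for ip and iptables.

```python
# Hypothetical sketch of the proposed debug hook, not an existing Tempest
# API: capture traffic on an interface for the duration of a test.
import signal
import subprocess


def start_capture(interface, pcap_path):
    """Start a background tcpdump via sudo (requires a sudoers rule)."""
    return subprocess.Popen(
        ['sudo', 'tcpdump', '-i', interface, '-w', pcap_path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)


def stop_capture(proc):
    """Stop the capture so the pcap can be collected with the test logs."""
    proc.send_signal(signal.SIGINT)
    proc.wait()
```

A test (or a teardown triggered on failure) could start a capture before exercising the network and attach the resulting pcap to the job artifacts, roughly automating the manual tcpdump tracing described earlier.
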
17:21:06 sdague, jd__: I have reordered the topic
17:21:11 great
17:21:17 jd__: ok, cool
17:21:19 that was fast :)
17:21:28 sdague, don't hesitate to ping me on IRC if you want another change
17:21:32 quickly
17:21:37 sileht: will do
17:21:43 #topic Design Summit Initial Planning (sdague)
17:21:49 let's move on then
17:21:53 sounds good
17:22:08 So I've been processing http://summit.openstack.org/ so far
17:22:26 and trying to align it with this - https://etherpad.openstack.org/p/icehouse-qa-session-planning
17:22:37 #link https://etherpad.openstack.org/icehouse-qa-session-planning
17:22:52 I have added one session today
17:23:01 when will it be finalized?
17:23:10 so far we have 6 for-sure topics that we need to hit, and my feeling is scenario comes up on the list as well, and there is another grenade talk (yet proposed)
17:23:20 ravikumar_hp: end of next week
17:23:44 there are some crossover topics with infra & process that I need to coordinate with jeblair and ttx to make sure are covered
17:24:42 on topics we've seen before but haven't seen code delivery on in the cycle, I'm going to require a lot of up-front justification and detail to put them in
17:24:58 because I want to make sure we are maximizing our time on things that we can make progress on in icehouse
17:26:07 so now is a good time to propose, both in the etherpad and on the summit page. I suppose I should send out an email shaking out any other proposals :)
17:26:19 I'll look to do that tomorrow
17:26:29 any questions on summit?
17:26:35 also, who all expects to be there?
17:26:35 #action sdague to send out an email shaking out any other proposals
17:26:39 o/
17:26:47 sdague: o/
17:27:26 man... that will be a small summit :)
17:27:39 guess we are missing a bunch of folks in the meeting today anyway
17:27:43 heh, yeah
17:27:49 ok, I guess open discussion time
17:27:59 #topic open discussion
17:28:14 does anyone have anything they'd like to bring up?
17:28:18 so, anything else on folks' minds?
17:29:01 Maybe fault injection testing can be an additional topic at the summit
17:29:27 afazekas: possibly, is there a real plan for attacking that in icehouse?
17:29:47 afazekas: actually, something I'd *really* like to see early in icehouse is fedora in the gate
17:30:05 any idea if there is anyone at Red Hat who would take that on?
17:30:36 sdague: yes
17:30:51 one topic that interests me is how to collect and do stats on test results - on the gate we expect all tests to pass most of the time, but running tempest against a large-scale cloud can bring up ~random errors, and I don't have the tools now to correlate such errors properly
17:31:00 I will add it to the etherpad
17:31:09 afazekas: great
17:31:24 should we target f19 or f20?
17:31:36 andreaf: would that be covered under the elastic-recheck session?
17:31:55 afazekas: the details right now are less important to me than someone who's committed to driving that forward
17:32:23 afazekas, (Side note - F20 release date is around the 1st week of Dec)
17:32:26 sdague: ok
17:32:32 I'm totally happy setting aside a summit session to sort out all that would be required to get fedora into the pipeline
17:32:36 andreaf: it also might have overlap with the parallel testing moving forward session
17:32:44 part of that will be about tooling to figure out what is going on
17:32:46 making sure all the right infra folks are in the room
17:33:06 and what makes sense
17:33:38 andreaf: yeh, do you feel like we could cover it in one of those sessions? or should we plan something else for it?
17:34:35 ok, other topics?
17:34:47 sdague, mtreinish: yes, it fits partly with both
17:34:47 sdague, I participate in Fedora work from time to time. Is there an etherpad that has any issues that are specific to Fedora here? I may be able to help in some small way (or ping the right people w/ expertise).
17:35:15 kashyap: well, more importantly we need someone to actually work through the details of getting it into the devstack gate
17:35:35 I'm not sure that there is a list of issues per se, it's just a bunch of work that no one has done
17:36:00 and I feel that we as a community have said Ubuntu and Fedora are our targets, but there isn't any fedora upstream testing
17:36:08 which means things like devstack break on fedora all the time
17:36:29 Noted. I currently don't have hands-on expertise with gating (and I'm focusing my energies on a couple of other things at the $ day job). But I use Fedora for all my work, and will see what I can do here.
17:36:33 it really needs a leader to do the integration
17:36:54 cool, thanks
17:37:13 ok, any other topics?
17:37:34 I think we can probably take other discussions over to #openstack-qa, and call it a meeting
17:37:43 and Happy Release Day, folks!
17:37:58 all your efforts are a huge part of what made Havana a successful release
17:38:15 #endmeeting
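
As a possible starting point for the test-result statistics topic andreaf raised during open discussion (an editor's sketch, not something agreed in the meeting), the snippet below tallies failures per test from a subunit v2 stream such as `testr last --subunit` produces; the class name and file handling are assumptions.

```python
# Hypothetical sketch: count failures per test across a subunit v2 stream,
# as a first step toward correlating the ~random errors seen when running
# tempest against a large cloud. Requires python-subunit and testtools.
import collections
import sys

import subunit
import testtools


class FailureTally(testtools.StreamResult):
    """Collect a failure count per test id from a result stream."""

    def __init__(self):
        super(FailureTally, self).__init__()
        self.failures = collections.Counter()

    def status(self, test_id=None, test_status=None, **kwargs):
        # 'fail' is the status recorded for a failing test case.
        if test_status == 'fail':
            self.failures[test_id] += 1


if __name__ == '__main__':
    tally = FailureTally()
    # Hypothetical usage: python tally_failures.py results.subunit
    with open(sys.argv[1], 'rb') as stream:
        subunit.ByteStreamToStreamResult(stream).run(tally)
    for test_id, count in tally.failures.most_common():
        print('%d %s' % (count, test_id))
```

Feeding the streams from several runs through the same tally would give a first, crude signal of which tests fail intermittently, which could feed into the elastic-recheck or parallel-testing summit sessions mentioned above.
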