17:00:50 #startmeeting ironic_qa
17:00:50 Meeting started Wed Jan 20 17:00:50 2016 UTC and is due to finish in 60 minutes. The chair is jlvillal. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:54 The meeting name has been set to 'ironic_qa'
17:01:14 o/
17:01:15 o/
17:01:19 o/
17:01:24 Hello all.
17:01:26 o/
17:01:32 As always the agenda is at: https://wiki.openstack.org/wiki/Meetings/Ironic-QA
17:01:46 #topic Announcements
17:02:02 I don't have any announcements. Does anyone else?
17:02:33 hey \o
17:02:39 Okay. Moving on in 5
17:02:42 4
17:02:43 3
17:02:45 2
17:02:46 1
17:02:54 #topic Grenade testing
17:04:03 So I am continuing to work on Grenade testing. Currently at the point where five tempest tests fail for the stable/liberty portion of Grenade. All say node unavailable.
17:04:26 #info jlvillal continues to work on Grenade testing. Currently at the point where five tempest tests fail for the stable/liberty portion of Grenade. All say node unavailable.
17:04:30 * jroll lurking
17:04:47 jlvillal: node unavailable as in out of capacity, or?
17:05:06 jroll: Let me see if I can find the message
17:05:31 ok, we can dig in after the meeting too if you like
17:06:21 jroll: Okay, let's do that.
17:06:31 Rather than me hunting around for log files right now :)
17:06:44 Any other comments/questions on Grenade before moving on?
17:07:05 Okay, moving on
17:07:10 #topic Functional testing
17:08:15 So I haven't heard any updates since last week
17:08:38 Does anyone have anything to add?
17:08:53 I think Serge was investigating, but he had some higher priority things to work on.
17:09:13 #info No updates this week
17:09:18 If nothing else, moving on
17:09:40 #topic 3rd Party CI (krtaylor)
17:09:50 krtaylor: For you :)
17:10:00 No updates this week for me either
17:10:19 unfortunately, I have had other priorities as well
17:10:24 Okay.
17:10:32 Anyone else have anything for 3rd Party CI?
17:10:34 so, M-2 is this week.
17:10:45 supposedly third party CIs are supposed to have accounts
17:10:52 and be sandbox commenting by M-3
17:11:05 krtaylor: wondering if you can send a reminder to the dev list, cc vendor peeps
17:11:33 jroll, sure, but I don't know the list of drivers that have responded
17:11:40 #info M-2 is this week. 3rd Party CIs should have accounts and be doing sandbox commenting by M-3. krtaylor to send out email reminder
17:11:47 but I can send a global reminder
17:11:48 There is an etherpad and we updated the status. https://etherpad.openstack.org/p/IronicCI
17:11:57 krtaylor: yeah, thingee might know, not sure
17:12:30 #info Etherpad available at: https://etherpad.openstack.org/p/IronicCI
17:12:33 rajinir, y, but not sure how current that is
17:12:36 yeah, might need to double check sandbox and do some emailing
17:13:03 let me get back to you. I won't be present next week, but can drop a note on the ML to reference in this meeting
17:13:22 Anything else from anyone?
17:13:31 thingee, great, and I'll follow up on that, thanks again
17:13:51 krtaylor: You good?
17:13:56 Okay to move on?
17:14:01 yep, all for me
17:14:08 Thanks
17:14:11 Okay, moving on
17:14:20 #topic Open discussion / General QA topics
17:14:31 If anyone has anything, speak up :)
17:14:47 I'll give it a couple minutes
17:14:51 so how about that gate?
17:15:19 http://tinyurl.com/j5yc4yr
17:15:47 there's the devstack bug hurting us, but our fail rate is very high without that
17:15:50 It doesn't look that much better. Maybe a little better than last week.
17:16:17 IMO if we're going to have a QA subteam, the gate should be that subteam's #1 focus
17:16:17 jroll: is there a known root cause at the moment?
17:16:22 Do we have any people who are interested in trying to figure out why our gate fails so much? People with free time?
17:16:29 no sense in working on other things if the gate is falling apart
17:16:45 mjturek1: timeouts, timeouts everywhere
17:16:53 tl;dr nested virt is slow
17:17:01 #info Looking for people to help troubleshoot gate issues as we have a very high failure rate: http://tinyurl.com/j5yc4yr
17:17:03 right
17:17:19 mjturek1, re: timeouts ^^
17:17:37 it makes me sad to see people constantly rechecking things instead of actually working on the real issue :(
17:17:55 #info Gate has many failures due to timeouts.
17:18:22 part of this is nested virt, but Ironic has been using nested virt for a long time, so something else has most likely also changed
17:18:23 fair enough, I have a patch in review that failed the gate; I'll use that as an opportunity to see if I notice anything
17:19:01 #link https://review.openstack.org/#/c/259089/
17:19:03 #link https://review.openstack.org/#/c/234902/
17:19:11 ^ tinyipa work in an effort to speed things up
17:19:26 jroll: I thought they added new cloud providers for the build. Is that true?
17:19:34 It used to just be RackSpace and HP
17:19:42 But I think they have added OVH and maybe others?
17:19:47 jlvillal: over the last 6 months or so, yes
17:19:58 I think that may be slightly related, but there's nothing we can do about that
17:20:27 Okay, so the timeline doesn't quite line up with our failures.
17:20:51 Any other opens?
17:20:58 new question: is there a timeframe for the tempest plugin to land / is that definitely going to happen?
17:21:05 oh right
17:21:10 devananda has been working on that a bit
17:21:15 we'd like it in very soon
17:21:16 ohhai
17:21:19 :)
17:21:27 a wild deva
17:21:28 it's blocked on the devstack fix that has been in the gate
17:21:40 devananda: which fix specifically?
17:21:52 once our gate is unblocked, I want to land the tempestlib changes immediately
17:21:53 oh, right, for the agent_ssh job
17:22:13 mjturek1: https://review.openstack.org/#/c/268960/
17:22:24 devananda: cool, thanks!
17:22:27 #info tempest plugin is waiting to land. Currently blocked waiting for a fix to devstack to get merged. devananda will try to get it merged ASAP
17:22:38 #link https://review.openstack.org/#/c/268960/
17:22:55 Anything else?
17:23:07 devananda: you still need reviews on tempest things, yeah?
17:23:18 jroll: the first two patches look good
17:23:36 I am still a bit nervous that we haven't tested them in the gate env yet, but that's a chicken-and-egg problem right now
17:23:39 devananda: let me rephrase, do they have +A? :)
17:23:56 no - but they have enough +2's
17:24:01 cool
17:24:14 no need to put them in the queue until they have a chance of passing
17:24:16 we need yuiko to un-wip the one, it seems
17:24:18 right
17:24:53 * jroll leaves a comment
17:25:05 Feel free to add a #info if needed for meeting minutes, if not already covered.
17:25:20 so, going back a moment
17:25:35 Any other comments?
17:25:38 I would really like to request that anyone working on QA things focus on the gate problems, rather than something else
17:25:45 ++
17:26:08 jroll: Okay. I can switch over to that from Grenade.
17:26:17 it's the biggest detriment to our velocity
17:26:18 our gate's failure rate from simple timeouts is really troubling
17:26:19 See what I can figure out.
17:26:41 do we have any indication of how much tinyipa will help with that?
17:26:43 But would love to have multiple people collaborating on this! :)
17:26:58 devananda: I don't have data, but it's a non-negligible improvement
17:27:02 cool
17:27:30 we should, you know, get data on that :)
17:27:58 yah
17:28:02 Of course, the devstack gate breakage is making it hard to troubleshoot at the moment :(
17:28:07 the n-v job will help with that
17:28:15 jlvillal: that only affects the agent_ssh job
17:28:25 or rather, things using the agent driver
17:28:29 pxe driver is fine
17:28:58 Ah, right. I guess it is good then that they affect two different jobs.
17:29:11 'good' being debatable...
17:29:24 Anything else?
17:29:28 rather, the devstack breakage only affects the agent driver
17:29:36 no driver is safe from the timeout stuff
17:29:45 #info devstack breakage only affects the agent driver
17:30:06 #info timeout issues are seen across multiple drivers
17:30:27 Okay. I think we are done.
17:30:34 Any objections to ending the meeting?
17:30:53 Thanks everyone! Talk to you next week.
17:31:04 #endmeeting