17:00:29 <jlvillal> #startmeeting ironic_qa
17:00:30 <openstack> Meeting started Wed May 18 17:00:29 2016 UTC and is due to finish in 60 minutes.  The chair is jlvillal. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:34 <openstack> The meeting name has been set to 'ironic_qa'
17:00:37 <vdrok> o/
17:00:38 <sambetts> o.
17:00:40 <jlvillal> Hello everyone
17:00:41 <mjturek1> o/ hey
17:00:42 <sambetts> o/
17:00:46 <jlvillal> As always the agenda is at: https://wiki.openstack.org/wiki/Meetings/Ironic-QA
17:01:16 <jlvillal> #topic Announcements
17:01:27 <thiagop> o/
17:01:39 * thiagop watches more than participates :)
17:01:41 <wajdi> hello
17:01:49 <jlvillal> I will save my Grenade related announcements until the Grenade section
17:01:55 <jlvillal> Any other announcements from anyone?
17:02:07 <jlvillal> No audio bridge this week, as an FYI
17:02:32 <jlvillal> Will move on in 10 seconds if no response :)
17:02:33 * devananda notices the time and runs over to the room
17:02:51 <jlvillal> #topic Grenade
17:03:04 <jlvillal> #info Great work and progress this week on Grenade!!!
17:03:18 <rloo> CLAP CLAP
17:03:51 <jlvillal> #info Big thanks to vsaienko and vdrok for their work.
17:03:53 <mjturek1> \o/
17:04:00 <sambetts> \o/
17:04:29 <jlvillal> #info vsaienko's network patch https://review.openstack.org/#/c/317082/  was the big key in unblocking us
17:04:35 <cdearborn> o/
17:04:44 * jlvillal thinks vdrok was involved in that too...
17:05:16 <vdrok> jlvillal: in all the other things except that networking bit :)
17:05:46 <jlvillal> #info jlvillal does not fully understand the patch or why we need to create a second network. Hoping to get a better description from vsaienko. But it fixes things :)
17:05:56 <krtaylor> o/
17:06:02 <jlvillal> #info Ironic Grenade whiteboard: https://etherpad.openstack.org/p/ironic-newton-grenade-whiteboard
17:06:06 <sambetts> So seeing this patch got me thinking, should we be running devstack + ironic with a real flat neutron network?
17:06:39 <jlvillal> We are working on the list of patches that have got Grenade to pass. Trying to get them merged into various projects.
17:07:07 <jlvillal> #info jlvillal has a goal (possibly unrealistic) to have Grenade working by end-of-next week.
17:07:18 <vdrok> sambetts: what exactly do you mean?
17:07:19 <jlvillal> sambetts: I'm not sure.
17:07:42 <jlvillal> sambetts: I guess I want to get what we have working to be merged. And then we can work on enhancements later.
17:07:53 <rloo> jlvillal: so with the patches that are 'in flight', we are fairly sure that once they land, grenade will be working for ironic?
17:07:55 <jlvillal> sambetts: But I don't understand the question/suggestion
17:08:17 <sambetts> we are sort of "fudging" the networking in devstack right now (going behind neutron's back and connecting into a tenant network), and I was wondering what would happen if we ran ironic as intended, by turning off tenant networking and running a flat network
17:08:24 <jlvillal> rloo: We have a pretty good feeling about it. vdrok and vsaienko got a green test pass early today in Jenkins.
17:08:36 <vdrok> rloo: I'm running all of them (I hope) locally so will see
17:08:36 <sambetts> would it try to make a new tenant network if we aren't using it
17:09:05 <rloo> jlvillal, vdrok, vsaienko: awesome!
17:09:14 <mjturek1> jlvillal: is there a list of the in-flight patches? The only one I know of that isn't merged is https://review.openstack.org/#/c/317082/
17:09:24 <jlvillal> mjturek1: https://etherpad.openstack.org/p/ironic-newton-grenade-whiteboard
17:09:35 <mjturek1> ahhh thanks
17:09:39 <vdrok> jlvillal: that was a false positive - n-cpu failed, and because we were running only the baremetal test after the upgrade we didn't notice it; only the API tests were run
17:10:03 <rloo> sambetts: good question. probably worth investigating.
17:10:20 <jlvillal> vdrok: Ah and oh :(  But I still feel pretty good. A lot of progress has happened this week and we are very very close now.
17:10:27 <rloo> sambetts: but fudging is good :)
17:10:48 <jlvillal> Like the 2nd-to-last line of the script is the current failure.
17:10:53 <rloo> sambetts: (for now anyway)
17:11:25 <sambetts> rloo: I'm just wondering if these patches are actually required if we made our devstack deployment work that way :/
17:11:27 <jlvillal> sambetts: No objections from me on exploring that area. I will defer to you and others on networking stuff.
17:11:56 * jlvillal does not understand enough about neutron and how things should be done :(
17:12:03 <sambetts> jlvillal: These are the settings I have in my devstack config in my CI to make devstack setup a flat network to work with Ironic http://paste.openstack.org/show/497564/
17:12:30 <jlvillal> sambetts: Can you add that to the whiteboard? as an FYI or something?
17:12:35 <sambetts> sure
17:12:54 <jlvillal> #info sambetts has a devstack config for a flat network setup for Ironic: http://paste.openstack.org/show/497564/
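(For reference: a devstack local.conf fragment of the kind sambetts links to might look like the following. The variable values here are illustrative guesses, not copied from the paste.)

```shell
# Illustrative local.conf fragment: ask devstack to set up a flat
# provider network instead of the default tenant networking.
# All values below are hypothetical placeholders.
Q_USE_PROVIDER_NETWORKING=True
PROVIDER_NETWORK_TYPE=flat
PHYSICAL_NETWORK=ironicnet
OVS_PHYSICAL_BRIDGE=brbm
PROVIDER_SUBNET_NAME=provider-subnet
IP_VERSION=4
FIXED_RANGE=10.1.0.0/24
NETWORK_GATEWAY=10.1.0.1
```

With Q_USE_PROVIDER_NETWORKING enabled, devstack maps the named physical network onto the given bridge rather than creating a private tenant network.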
17:13:27 <jlvillal> So overall I think we are in a good place. Still a fair amount of work to be done.
17:13:52 <jlvillal> Have to work on getting the patches merged into various projects and probably responding to review comments on the patches.
17:14:25 <vdrok> sambetts: AIUI, the grenade case is special: grenade creates a new net during the resource create phase and tries to boot servers in it. In a usual devstack run there is no such problem, and grenade will still be doing so no matter how we set up devstack
17:14:40 <jlvillal> Overall I feel quite good about where we are today compared to last week.
17:15:04 <jlvillal> Any questions/comments about the Grenade stuff?
17:15:23 <jlvillal> We still need to keep pushing forward and get it finished.
17:15:25 <sambetts> vdrok: what I'm thinking though is that if you define a flat network, then making a new network doesn't make sense, because a flat network normally maps onto the real world, so I wondered if grenade would leave it alone
17:16:02 <jlvillal> sambetts: The resource phase creates this new network and then has nova use it. Outside of the devstack stuff.
17:16:26 <jlvillal> sambetts: Look in grenade/projects/*/resources.sh files   Especially 50_neutron and 60_nova.
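(For anyone unfamiliar with the files jlvillal mentions: grenade invokes each project's resources.sh once per phase, and resources made in the create phase must survive the upgrade and pass verify afterwards. A minimal stub of that dispatch shape follows — the function name and messages are hypothetical, not the actual 50_neutron/60_nova code.)

```shell
# Hypothetical stub of the grenade projects/*/resources.sh convention:
# grenade calls the script with a phase name, and anything created in
# "create" must still work when "verify" runs after the upgrade.
phase_dispatch() {
    case "$1" in
        create)  echo "create: make a network and boot a server in it" ;;
        verify)  echo "verify: server still ACTIVE after upgrade" ;;
        destroy) echo "destroy: delete the server and network" ;;
        *)       echo "unknown phase: $1" >&2; return 1 ;;
    esac
}

# A real script would run openstack/neutron/nova commands in each branch.
phase_dispatch create
```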
17:16:31 <devananda> one thing to consider with the grenade network things -- how _should_ this work, when we introduce proper neutron integration and multitenant network support?
17:16:38 <sambetts> isn't grenade aware of what was previously defined though?
17:16:49 <devananda> it sounds like there are two options right now in how we're implementing grenade support
17:17:08 <vdrok> sambetts: it tries to create things itself to check that upgrade went smoothly and everything newly created is preserved
17:17:11 <devananda> I'm curious if one of them will set us up better for the neutron integration and upgrade testing to that
17:18:59 <jlvillal> I don't know. At the moment I want to continue on to get grenade working. Making it better in the future is good. But getting it working is 1st priority to me. Then improving it without breaking it.
17:19:01 <sambetts> vdrok: right, but we currently don't support multitenant networking, so should we be fudging testing of it? If our current real-world deployment is a flat network, which can't just pop out of thin air, shouldn't we be testing that case?
17:19:59 <vdrok> sambetts: yes, we should not :) but ironic is different from all the other projects in this case I guess
17:20:09 <vdrok> I guess grenade itself should be changed
17:20:38 <sambetts> does grenade not have support for if Q_USE_PROVIDER_NETWORKING=True is turned on?
17:21:26 <jlvillal> sambetts: git grep Q_USE_PROVIDER_NETWORKING   comes back with nothing in grenade
17:21:50 <vdrok> I think it will do whatever is said in local.conf
17:22:07 <vdrok> whatever is written to local.conf by grenade plugin settings
17:22:15 <devananda> jlvillal: that's sort of my question. if we get grenade working with a fudged network setup that is not how we actually recommend it to be deployed, how do we go from that to proper multitenant network testing?
17:22:26 <jlvillal> I want to continue on with our current course of action to get basic grenade functionality working.
17:22:52 <jlvillal> devananda: I worry that we will get derailed for a long time if we change plans.
17:23:12 <devananda> jlvillal: perhaps I missed it in catching up on this. what is the current course of action w.r.t. network setup?
17:23:30 <vdrok> devananda: https://review.openstack.org/#/c/317082/2/devstack/upgrade/resources.sh
17:23:38 <devananda> thanks, reading
17:23:46 <jlvillal> I am worried about: https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good
17:24:03 <jlvillal> I would like to get something working. Then improve it.  iterate.
17:24:19 <devananda> jlvillal: agreed. I'm not suggesting perfection, but would prefer we also don't do something that prevents upgrades :)
17:24:30 <devananda> it may be that we just introduce a new test
17:24:49 <devananda> when multitenant network support lands, we need to know that we can still do flat networks too
17:24:59 <sambetts> ++
17:25:09 <vdrok> devananda: so we'll be testing upgrade for both flat-flat and flat-multitenant?
17:25:15 <jlvillal> That is fine with me. I just want to get to that grenade test up and running. Then we have a base that we can improve upon and make sure it keeps working.
17:25:38 <devananda> vdrok: flat->flat && multitenant->multitenant
17:25:51 <vdrok> ah, ok
17:26:44 <devananda> grenade doesn't need to cover flat->multitenant migration in tests ... I'm not even sure we need to support that migration path, or what it would look like for a deployer right now ...
17:27:11 <vdrok> yep, that would be harder
17:27:32 <jlvillal> #info Continue to get grenade working. Need to figure out how to test upgrades for flat-to-flat and multitenant-to-multitenant. jlvillal would prefer that happen after grenade is up and running.
17:27:49 <devananda> jlvillal: ++
17:28:06 <jlvillal> Any other questions/comments?
17:28:30 <jlvillal> Okay moving on :)
17:28:40 <jlvillal> Thanks for all the good feedback :)
17:28:45 <jlvillal> #topic Functional testing
17:28:55 <jlvillal> #info No progress as focus has been on Grenade work.
17:29:07 <jlvillal> Unless someone has been working on it???
17:29:13 * jlvillal assumes not
17:29:26 <jlvillal> #topic 3rd Party Testing (krtaylor)
17:29:38 * jlvillal hands the mic and #info action to krtaylor :)
17:29:45 <krtaylor> sure, not much to report, I should update the requirements status table in the driver wiki
17:30:06 <krtaylor> but I did push https://review.openstack.org/#/c/314768/
17:30:06 <krtaylor> based on some discussions
17:30:09 <krtaylor> around new drivers
17:30:20 <rloo> krtaylor: don't know if this has already been discussed but at the summit, you had mentioned that you needed help/info for something (i don't remember what).
17:30:25 <rloo> krtaylor: are you blocked on anything?
17:30:52 <krtaylor> rloo, that was for the status table
17:31:08 <krtaylor> rloo, but I think I need to send email for correctness
17:31:33 <krtaylor> and get responses, then I'll start organizing a doc patch
17:31:39 <rloo> krtaylor: ok. i think there is some other driver list that OpenStack has, but to get into it, one has to add their driver via stackalytics?
17:32:13 * rloo fuzzy on the details (if that wasn't obvious)
17:32:17 <krtaylor> yes, their driver list, that is input for the marketplace
17:32:35 * krtaylor learned that at summit
17:32:46 <rajinir> https://www.openstack.org/marketplace/drivers/
17:32:47 <rloo> krtaylor: right. so we probably need to put a list of things-to-do for driver maintainers/marketeers/whatever.
17:33:22 <krtaylor> rloo, well, in the spec we decided not to, but hm...
17:33:28 <krtaylor> maybe we need to update that
17:33:40 <krtaylor> based on the marketplace input
17:33:43 <devananda> ++ to having a document for what driver-maintainers are expected to do
17:33:50 <krtaylor> agreed
17:33:56 <devananda> right now, it's institutional knowledge at best, which means it really should be written down
17:33:58 <rloo> krtaylor: 'spec' is not user documentation.
17:34:03 <devananda> and yea, not a spec
17:34:04 <rajinir> we noticed, dell drac driver is missing
17:34:10 <devananda> but actually in our docs/ tree
17:34:17 <sambetts> I think this aligns with the conversation we had about what makes a 3rd party CI verified too
17:34:20 <krtaylor> yes, all good input for the doc, but not replacement for
17:35:15 <rloo> rajinir: i could be wrong but i thought someone ping'ed jroll about that driver being missing and he said he'd do 'something' about it (more fuzziness)
17:35:40 <rajinir> that must be chris dearborn
17:36:02 <rloo> rajinir: possibly and probably :)
17:36:13 <cdearborn> yes, i pinged jroll about it, and he said that what was listed in the marketplace was very old and out of date, and that he was trying to get to updating it this week
17:36:15 <krtaylor> um, drac is there
17:36:29 <krtaylor> rajinir, sambetts among others, I need email on what to change specifically
17:36:31 <krtaylor> drac -> https://wiki.openstack.org/wiki/Ironic/Drivers#3rd_Party_CI_required_implementation_status
17:37:00 <cdearborn> krtaylor: it's not listed here: https://www.openstack.org/marketplace/drivers/
17:37:10 <krtaylor> ah
17:37:19 <krtaylor> ok, yeah, that's the driver log add
17:37:29 <sambetts> Didn't we work out at the summit that the marketplace pulls from stackalytics?
17:37:47 <krtaylor> yes, see scrollback :)
17:38:04 <sambetts> ah :)
17:38:07 <sambetts> missed that
17:38:08 <krtaylor> and that we need to document that
17:38:16 <krtaylor> I agree, I'll get on it
17:39:26 <krtaylor> #info krtaylor will get email thread going about wiki driver completeness
17:40:06 <krtaylor> #info krtaylor will add information about the need for drivers to add to stackalytics to be listed in marketplace
17:40:50 <krtaylor> ok, jlvillal seems like we are winding down on CI
17:40:53 <krtaylor> anything else?
17:41:00 <jlvillal> Not from me
17:41:24 <jlvillal> Okay moving on then. Thanks krtaylor
17:41:31 <jlvillal> #topic Open Discussion
17:41:47 <jlvillal> Anything anyone wants to discuss?  Now is your chance.
17:42:39 <jlvillal> I'll give it another minute of silence before ending the meeting...
17:43:26 <jlvillal> Thanks everyone for attending.
17:43:31 <jlvillal> #endmeeting