17:00:41 <devananda> #startmeeting ironic
17:00:41 <openstack> Meeting started Mon Aug  8 17:00:41 2016 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:44 <openstack> The meeting name has been set to 'ironic'
17:00:45 <vdrok> o/
17:00:46 <devananda> hi folks! i'm standing in for jroll as he's had to go AFK unexpectedly
17:00:49 <rpioso> o/
17:00:50 <davidlenwell> o/
17:00:57 <jlvillal> o/
17:00:58 <mat128> o/
17:00:59 <milan> o/
17:01:00 <lucasagomes> o/
17:01:00 <alineb> o/
17:01:12 <rama_y> o/
17:01:14 <TheJulia> o/
17:01:19 <NobodyCam> o/
17:01:27 <jaoh> o/
17:01:31 <devananda> as usual, our agenda can be found here: https://wiki.openstack.org/wiki/Meetings/Ironic
17:01:37 <krtaylor> o/
17:01:39 <devananda> today looks fairly light
17:01:43 <devananda> #chair NobodyCam
17:01:44 <openstack> Current chairs: NobodyCam devananda
17:01:47 <mjturek1> o/
17:01:49 <rloo> o/
17:02:03 <devananda> #topic announcements
17:02:31 <devananda> I believe jroll plans to cut 6.1.0 this week, pending the gate getting fixed
17:02:43 <devananda> oops, I should info that
17:02:51 <devananda> #info jroll plans to cut 6.1.0 release this week
17:02:56 <Madasi> o/
17:03:11 <devananda> also, I see an announcement from TheJulia (thanks for putting it on the wiki!)
17:03:15 <devananda> #info The driver composition defaults call will be this Wednesday at 4PM UTC. Bridge #7777
17:03:42 <vdrok> gate should be fixed soon, g-r update for the os-client-config is on review already
17:03:53 <lucasagomes> o/
17:04:02 <devananda> vdrok: that's great, thanks!
17:04:36 <devananda> lots of our newton-critical work landed last week - let's finish polishing those with follow ups in the next few days
17:04:40 <devananda> anyone else have announcements to share?
17:05:32 <devananda> ok, moving on
17:05:34 <devananda> #topic subteam status reports
17:05:35 <mgould> o/
17:05:38 <devananda> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:05:58 <devananda> please take a minute or two to update your sections -- the status report starts around line 80
17:08:21 <devananda> jlvillal: anything change in our gate tests (jobs promoted or removed) last week?
17:08:24 <rloo> oh.. so there is a chance that the serial console will land in newton? i see that the -2 was removed from nova patch.
17:08:43 <devananda> rloo: oh? I think that would be fantastic
17:08:48 <jlvillal> devananda: Last week the python-ironicclient had functional tests added as a voting job
17:09:10 <rloo> devananda: yeah, how did that happen?
17:09:34 <devananda> rloo: not sure?
17:09:40 <lucasagomes> devananda, rloo yes! jroll just pointed me to it today
17:09:45 <jlvillal> Not sure if anyone else knows about gate changes related to us.
17:10:14 <lucasagomes> #link https://review.openstack.org/#/c/328157/
17:10:26 <rloo> well, let's review that nova patch and help get it in! ^
17:10:37 <lucasagomes> rloo, ++
17:11:01 <rloo> any oneview folks here? i thought i saw a bunch of oneview-related patches get merged. but i don't see any status
17:11:22 <TheJulia> thiagop: xavierr: you guys there?
17:11:52 <thiagop> I am
17:12:27 <thiagop> rloo: was helping infra folks here, didn't have time to update the whiteboard
17:12:41 <thiagop> doing it now
17:13:24 * devananda is looking for jroll's nova multicompute patch
17:14:01 <mat128> devananda: https://review.openstack.org/#/c/348443/
17:14:08 <thiagop> rloo: done
17:14:21 <rloo> thx thiagop
17:14:49 <devananda> mat128: got it - thanks
17:15:06 <devananda> updates look good, thanks everyone!
17:15:49 <rloo> the bug stats haven't been updated?
17:15:57 <devananda> dtantsur is out sick today
17:17:48 <devananda> going to move on, but feel free to continue updating the whiteboard
17:18:15 <devananda> #topic discussion: API v2?
17:18:43 <devananda> I didn't plan on running the meeting and also introducing the only topic on the agenda ... but here we go :)
17:19:01 <devananda> back at the last summit, and then again at the midcycle, the topic of a possible v2 API was brought up
17:19:12 <devananda> I took the action item of drafting a detailed "problem description"
17:19:22 <devananda> with the goal of presenting that at the barcelona summit
17:19:31 <devananda> I've written up many of the points here:
17:19:33 <devananda> #link https://etherpad.openstack.org/p/ironic-v2-api
17:19:38 <devananda> and would like to solicit feedback and discussion
17:19:59 <devananda> I'll cap this at 20 minutes during the meeting today so there's time for open discussion too
17:20:28 <devananda> there are 13 problems identified so far, listed in that 'pad in no particular order
17:20:32 * lucasagomes looks
17:20:34 <NobodyCam> devananda: wow awesome work
17:20:42 <JayF> devananda: At the nova mid-cycle, it was suggested by some nova folks that instead of a true v2 API, we just add more things to v1 and cycle out old
17:20:55 <devananda> JayF: yep
17:21:16 <devananda> one of the things that has stood out to me, as I've documented these "problems", is that a lot of them appear to be things we _could_ fix incrementally
17:21:21 <JayF> devananda: I know that's more about the journey than the destination, but I'm curious if this is intended to be a "true v2" API version change or just a general document on where we want the api to evolve to
17:21:34 <mat128> devananda: awesome =D
17:21:58 <devananda> JayF: this document is not intended to be a description of where we want to end up, or how we get there
17:22:21 <JayF> devananda: just sorta a "list of issues with v1 api" then?
17:22:23 <devananda> merely "here are the usability / operability pain points we've seen"
17:22:24 <devananda> yes
17:22:39 <devananda> if we don't describe the problems we want to fix, we can't fix them
17:22:46 <JayF> ++
17:22:58 <devananda> conversely, if we all agree on what the problems are, we can then have a good discussion on how to fix them
17:23:09 <JayF> just a little curious about the wording, b/c we keep calling this a v2 api effort which sounds maybe a little heavier than what it is today :)
17:23:20 <mat128> JayF: API-next ;D
17:23:21 <devananda> JayF: agreed
17:23:33 <devananda> I'm calling it v2 because that is what we called it at the last two meetings
17:23:40 <devananda> but I agree that it may not actually become /v2/
17:23:53 <devananda> it might be v1.99 ... dunno :)
17:24:42 <devananda> if you've faced one of the problems listed there, please add a +1. if you don't think it's a problem, please -1 and explain why.
17:30:16 <thiagop> mat128: indeed, it's more like #6, thanks
17:30:22 <mat128> :)
17:31:52 <devananda> I see some folks discussing on the pad - thanks!
17:32:11 <devananda> I also want to ask, specifically, are any of the things described there NOT problems for someone?
17:32:43 <mat128> well #7 was odd for me (callback vs long-polling)
17:33:03 <mat128> I do not feel it as a problem, if the intent is really using ironic via curl, a callback is probably even harder
17:33:18 <devananda> mat128: fair point
17:33:28 <devananda> callbacks are useful for intra-service coordination, but not for client-server
17:33:30 <mat128> long-polling (a la nova's '--poll') feels like the easiest way out
17:33:30 <thiagop> I think that once we have tasks, polling becomes cheaper...
17:34:06 <mat128> so if I take thiagop's proposal, nova could create a deployment task and nova would track its progress?
17:34:11 <mat128> optionally having a callback on it?
17:34:52 <mat128> then I guess Ironic could emit notifications (like nova's instance.create.start/end/error) and it's 'just' a matter of catching those (i know, i know.. not that easy ;))
17:35:49 <devananda> mat128: that would improve the nova.virt.ironic driver significantly: rather than polling ironic continuously during the deploy, it could sleep / listen for an event or callback
17:36:05 <devananda> as Nova does with Neutron now for some port actions
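The poll-vs-listen tradeoff mat128 and devananda describe above can be sketched as a plain long-polling loop with a timeout. This is illustrative only, not Ironic or nova.virt.ironic code; `wait_for_deploy`, the state names, and the callable parameters are all hypothetical stand-ins for a real API client:

```python
import time


def wait_for_deploy(get_state, timeout=60, interval=1.0, sleep=time.sleep):
    """Long-poll a node's provision state until it leaves 'deploying'.

    get_state is any callable returning the current provision state; in a
    real consumer it would be an API call (e.g. fetching the node resource).
    A notification/callback-based consumer would replace this loop entirely
    by sleeping until an event arrives.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state != 'deploying':
            # Deploy finished (or failed): return whatever state we saw.
            return state
        sleep(interval)
    raise TimeoutError('node did not finish deploying within %ss' % timeout)


# Simulated usage: a fake state source that goes active on the third poll.
states = iter(['deploying', 'deploying', 'active'])
final = wait_for_deploy(lambda: next(states), timeout=5,
                        interval=0, sleep=lambda s: None)
```

The `interval`/`sleep` knobs make the cost of polling explicit, which is the point raised in the discussion: every consumer pays it continuously, whereas a notification moves that cost to the service side.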
17:36:26 <thiagop> maybe a hybrid solution? Tasks to track, and notifications once the task is done; then a consumer may not need to poll the API, just listen on the bus for the task...
17:37:19 <TheJulia> The consumer can't be expected to have access to the message bus, if the consumer in the context is an end user with an api client
17:37:35 <devananda> TheJulia is correct
17:37:54 <thiagop> uhmm
17:38:23 <devananda> the internal rabbit message bus is not suitable for communicating with non-service clients
17:38:44 <thiagop> might work for nova, but not for other purposes (that may continue polling...)
17:38:54 <mat128> devananda: agreed, but I fear this turns into a "if you're not a service, there are some parts of Ironic you cannot use"
17:38:57 <thiagop> or we can come up with another solution :)
17:39:21 <devananda> mat128: indeed...
17:39:26 <JayF> mat128: Sure, but are you going to configure your Ironic APIs to allow callbacks to arbitrary URLs? I certainly won't...
17:39:59 <mat128> JayF: nope
17:40:01 <JayF> That's my concern, right? I don't want to have internal details of my ironic-apis leaking out
17:40:15 <mat128> JayF: even if I wanted, ir-api won't be able to reach anyone
17:40:20 <JayF> exactly
17:40:28 <mat128> other than bus/db for ir-cond communications
17:40:43 <mat128> but if Ironic starts emitting notifications like node.deploy.end
17:40:51 <mat128> as a regular API user I can't "catch" those
17:40:52 <JayF> Yep, exactly. And I would not want my individual apis spitting out api calls back to callers
17:40:54 <thiagop> JayF: might not be that hard; what's harder is assuming/expecting/demanding that a consumer service will have that sort of "arbitrary endpoint" to be notified
17:40:59 <JayF> and giving up their internal ip addresses, etc
17:41:26 <JayF> mat128: I see that as a /good thing/
17:41:51 <devananda> thank you all for the feedback. I'd like to cap this discussion in a couple minutes, though I'm happy to continue after the meeting / in the 'pad
17:41:58 <devananda> if you'd like to have a longer focused discussion on this before the summit, please add your name to the list at the top of the pad
17:42:11 <mat128> JayF: that's fine, but if tomorrow we remove the node field for deployment status, the *only* way to get it becomes the notification
17:42:18 <mat128> devananda: ok sorry :)
17:42:21 <devananda> I will collate all this into a spec about two weeks before the summit
17:42:31 <devananda> mat128: no apology needed :)
17:42:52 <devananda> I'm just making sure we have time for open discussion
17:42:55 <JayF> mat128: that's why we shouldn't do that, lol
17:43:43 <devananda> #topic open discussion
17:45:24 <devananda> I, for one, am really looking forward to the remainder of this cycle. we've got so many great features almost done right now :)
17:45:41 <thiagop> https://review.openstack.org/#/c/340596/ 3-line patch with some tests that will make our CI happier! If someone can take a look (without rush now) :)
17:46:03 <chopmann> hi everyone,  we would like to get some feedback https://review.openstack.org/#/c/350570/ :-) Besides the spelling and wakeonlan, how can we improve the ref ?
17:46:10 <JayF> thiagop: that's failing oneview ci right now though
17:46:43 <JayF> As I emailed the list about last week; I'm going to abandon any ironic-specs with no updates in the last six months. I'll do that immediately after this meeting so if you have an objection speak now :)
17:47:03 <thiagop> JayF: that failure was due to our gitlab problems last week
17:47:04 <devananda> chopmann: interesting
17:47:21 <thiagop> JayF: but I can recheck
17:47:28 <JayF> thiagop: I'd love to see that passing before voting +2 on a patch affecting that driver
17:47:30 <mat128> chopmann: so you turned that idea into a spec, good :)
17:47:34 <chopmann> we proposed the ref some time back, we were working on a poc
17:47:50 <JayF> Question somewhat related to metrics: do we have a definitive list and/or timeline for when drivers being dropped out of tree this cycle are being dropped?
17:47:53 <chopmann> and understanding more of openstack :p
17:47:56 <devananda> chopmann: describing the changes would be good
17:48:02 <JayF> Mainly I'm trying to avoid writing metrics for drivers that are going to fall out of tree soon
17:48:15 <devananda> chopmann: the first things that come to my mind are, beyond a new boot and deploy driver, and a new ramdisk, are there other changes you'd need?
17:48:47 <devananda> JayF: I believe jroll plans on sending an email about dropping drivers early this week
17:48:57 <devananda> but I do not recall the timeline to actually remove them, sorry
17:49:06 <JayF> Along those lines, we do need to change our ipmitool jobs to voting
17:49:20 <JayF> is anyone working on stabilizing them? Or has anyone looked at metrics around passing % of them lately?
17:49:24 <chopmann> the thing with the ramdisk is: "black-box" switches or embedded devices rarely have a ramdisk
17:49:39 <lucasagomes> JayF, I have to look at that yes
17:50:01 <JayF> lucasagomes: IMO we probably should just make them voting now, at least for ironic projects, and feel the pain if they fail
17:50:10 <JayF> lucasagomes: otherwise it'll be deprioritized naturally
17:50:21 <devananda> chopmann: you could no-op the boot phase, and have the deploy phase write the new firmware
17:50:31 <JayF> lucasagomes: and use that pain to improve the jobs before pushing them to other projects (like nova)
17:50:33 <mat128> ^ exactly
17:50:42 <thiagop> #info I'm doing a patch to remove our (out of date) tests from tempest for good
17:50:46 <devananda> JayF: +1 to making it voting so we need to fix it
17:50:58 <lucasagomes> JayF, yeah that would def help
17:51:08 <devananda> JayF: also +1 to doing that before we get too close to FF / end of cycle
17:51:09 <mat128> chopmann: I commented in your review, but I'd like to know more about the devices that this driver will support, along with new tests required
17:51:23 <thiagop> I was unable to work on it last week, but plan to resume it later today
17:51:28 <JayF> We should leave ssh jobs running until after we have the nova tests moved off them, right?
17:51:33 <lucasagomes> btw, does anyone know where jarrod (from pyghmi) is?
17:52:06 <JayF> lucasagomes: honestly, I have a little concern about virtualbmc being a dep of our tests, with a SPOF project as a major piece :x
17:52:06 <devananda> chopmann: another thing to point out - for in-tree drivers, we will require either the ability to do CI testing in a virtualized environment using entirely open source components, or third-party CI jobs
17:52:10 <chopmann> devananda: no-op would be fine i guess. "Power Control" needs to be optional too (no-op) or with a second device ("remote power")
17:52:13 <JayF> lucasagomes: because of you having to ask questions like that.
17:52:44 <lucasagomes> thiagop, code-wise it looks good, but that said, it would be good to have a bug ticket for that problem as well :-)
17:52:46 <chopmann> devananda: got that. :-)
17:53:08 <devananda> JayF: lucasagomes: does anyone else understand virtualbmc well enough to fix/update it without jarrod?
17:53:11 <devananda> chopmann: awesome
17:53:19 <chopmann> we'll do our best for it to be open-testable
17:53:27 <thiagop> lucasagomes: will do
17:53:36 <mat128> chopmann: it's (now) a requirement for any in-tree driver
17:53:42 <lucasagomes> JayF, yeah it's concerning... I was thinking whether we could take ownership of pyghmi as part of the ironic project really
17:53:43 <devananda> chopmann: perhaps you could describe, in the spec, what the provisioning workflow will look like for this class of hardware
17:54:06 <lucasagomes> devananda, virtualbmc is rather simple, the bulk of the logic converting the IPMI commands lives in pyghmi
17:54:15 <JayF> lucasagomes: I think devananda's question is telling: do we have enough expertise in that realm to manage it?
17:54:24 <JayF> lucasagomes: I certainly don't :/
17:54:27 <mat128> chopmann: I can probably help you as we have shared mid-term goals
17:54:42 <devananda> lucasagomes: ah. thx for the clarification - I mean the whole stack that we need to do that testing
17:54:55 <lucasagomes> JayF, I don't either. I've fixed a couple of things in pyghmi but that's nowhere near enough to say I'm an expert on it
17:55:05 <chopmann> mat128: what are those goals? :-)
17:55:06 <mat128> lucasagomes, devananda: what's the current problem with pyghmi?
17:55:15 <mat128> chopmann: appliance provisioning
17:55:20 <mat128> blackbox mostly
17:55:25 <chopmann> ahh :-)
17:55:41 <devananda> mat128: its development is not managed by anyone on the ironic core team
17:56:00 <lucasagomes> mat128, there's only one maintainer at the moment
17:56:06 <mat128> ah
17:56:09 <devananda> though I do have commit rights on the project, I don't claim to understand it enough to meaningfully review / contribute to it
17:56:11 <lucasagomes> and I'm not sure anybody else has a good understanding of that code base
17:56:17 <devananda> it's really just jarrod right now
17:56:47 <devananda> so making that a required component in ironic's gate is raising some concerns
17:57:34 <chopmann> mat128: i'll get back to you later this week. Once we had our team meeting :-)
17:57:42 <mat128> one guy on my team and I can handle the IPMI stuff if all you need is more ironic presence in there, but not sure that helps getting ir-core people onboard
17:58:06 <JayF> mat128: at least from my perspective, the concern is SPOF, not that "all of us can't do it"
17:58:18 <mat128> JayF: yeah
17:58:29 <devananda> mat128: thanks. that would make me feel more comfortable
17:58:30 <JayF> if we have 2/3 regular contributors, in addition to jarrod, working on it, most of my concerns are alleviated
17:58:41 * JayF doesn't mention the self-merge that happens constantly in that project :/
17:58:50 <mat128> JayF: don't tell me about it :(
17:58:56 <mat128> we call it insta-merge here
17:59:15 <mat128> but thanks to lucasagomes we have the virtualbmc gate running
17:59:18 <lucasagomes> JayF, I would like to add console support for virtualbmc, that will require changes in pyghmi
17:59:24 <TheJulia> 1 minute
17:59:28 <lucasagomes> I probably will understand it more once I start playing with it
17:59:35 <mat128> lucasagomes: I think we can submit those
17:59:38 <mat128> (patches)
18:00:15 <devananda> lucasagomes, mat128: let's start a discussion with jarrod about adding more cores to that project
18:00:21 <lucasagomes> devananda, ++
18:00:44 <devananda> we're at time - thanks everyone!
18:00:56 <devananda> #endmeeting