05:00:33 <devananda> #startmeeting ironic
05:00:34 <openstack> Meeting started Tue Jan  6 05:00:33 2015 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
05:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
05:00:37 <openstack> The meeting name has been set to 'ironic'
05:00:38 <NobodyCam> hey all
05:00:43 <jroll> \o
05:01:06 <devananda> happy new year, and good evening/morning/afternoon/other time of day :)
05:01:24 <NobodyCam> :-p
05:01:27 <devananda> as usual, our agenda is up on the wiki here: https://wiki.openstack.org/wiki/Meetings/Ironic
05:02:00 <devananda> apologies in advance if I'm typing slower than usual, it's late for me -- but I'm glad to see a couple folks that I don't usually see :)
05:02:26 <devananda> #chair NobodyCam
05:02:27 <openstack> Current chairs: NobodyCam devananda
05:02:29 <devananda> #topic announcements
05:02:43 * NobodyCam will also be slow
05:03:09 <devananda> only announcement for me is just a reminder to folks
05:03:33 <devananda> that I've posted details for an early February meetup / sprint to the mailing list
05:03:44 <devananda> during the break, so I wanted to draw attention to it in case anyone missed it
05:03:49 <devananda> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/053617.html
05:04:18 <devananda> and some further thoughts, since it seems like a lot of US folks may not make it, are posted here
05:04:25 <devananda> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/053618.html
05:04:46 <NobodyCam> are there any details on the SF meetup? like where should I look for a hotel?
05:05:01 <wanyen> is remote dial-in available?
05:05:24 <devananda> NobodyCam: no. TBD. I just floated the idea and it seems people liked it, so need to arrange that this week
05:05:39 <NobodyCam> lol ack :)
05:05:43 <devananda> wanyen: it will be a developer sprint, not a planning meeting
05:06:28 <jroll> devananda: NobodyCam: fwiw, we have plenty of room at our office in soma
05:06:50 <jroll> I think russell even booked a room, plus we have this big open area we could hack at
05:06:58 <jroll> free to use it if you'd like
05:07:05 <devananda> jroll: let's iron out the details of that so we can announce at next week's (more US-time-friendly) meeting
05:07:12 <jroll> devananda: yep, just a heads up :)
05:07:17 <devananda> jroll: ack. ty
05:07:21 <naohirot> devananda: If I had an issue to be discussed with core members, is it possible to attend the SF meeting via WebEX or something?
05:07:21 <NobodyCam> jroll: with a white board or two too?
05:07:44 <jroll> NobodyCam: yessir
05:07:53 <jroll> and TVs for code pairing
05:08:15 <naohirot> devananda: I can provide WebEX.
05:08:48 <devananda> naohirot: I'm trying specifically to avoid having the sprint be(come) a planning meeting
05:09:18 <devananda> naohirot: so if there are design issues, we should try to discuss those via IRC, email, etherpad, etc -- all the usual means
05:09:35 <wanyen> deva, what will be covered in the sprint?
05:10:07 <naohirot> devananda: Yeah, Okay. What does "sprint" mean?
05:10:08 <devananda> wanyen: it's a sprint - whoever is there will work on writing code for the open specs. or something like that, I hope :)
05:10:18 <jroll> write all the code.
05:10:23 <jroll> hack the planet
05:10:28 <naohirot> jroll: I see.
05:10:28 <JayF> It's much less interesting than I suspect you all expect it will be :)
05:10:37 <devananda> ah, I see. I'd assumed everyone knew what a "code sprint" means
05:11:18 <devananda> let's move on for now, and come back to this if there's time at the end
05:11:23 <devananda> #topic subteam status reports
05:11:25 <jroll> +1
05:11:35 <devananda> #link https://etherpad.openstack.org/p/IronicWhiteBoard
05:11:55 <devananda> hm, bug stats look quite old
05:12:09 <devananda> I count 19 NEW bugs right now :(
05:12:20 <jroll> "dtantsur on PTO/holidays, back on Jan 5th"
05:12:25 <devananda> yea...
05:12:32 <jroll> he was in meetings all day today, probably didn't catch a break to count
05:12:34 * devananda makes a note to go do bug triage
05:12:36 <Haomeng> devananda: yes, I raised some bugs during my testing
05:13:16 <lintan> devananda: I see some old but high-priority bugs; are these still important for Ironic?
05:13:42 <devananda> lintan: I suspect they are miscategorized
05:13:56 <NobodyCam> eek, about 20+ NEW bugs
05:14:08 <devananda> launchpad doesn't separate "impact" from "urgency"
05:14:15 <jroll> at least people are filing them :)
05:14:16 <devananda> so we end up with bugs that are high impact but low urgency
05:14:29 <devananda> s/urgency/priority/
05:14:39 <jroll> we should poke dtantsur in the morning and see if he wants help, maybe a bug day in the next couple weeks
05:15:14 <devananda> re-visiting all the bugs and re-assessing them would be good
05:15:23 <jroll> yeah
05:15:28 <devananda> probably some stale, definitely a lot of new ones
05:15:58 <devananda> wanyen: any updates on third-party CI ?
05:16:32 <devananda> #info it looks like we still need to get reviews on the iRMC and AMT drivers
05:16:58 <wanyen> deva: tried to set up 3rd-party CI but found out that we need more hardware.
05:16:59 <naohirot> devananda: Yes, please
05:17:14 <devananda> #info many new bugs, and lots of stale bugs. we should clean this up before kilo-3 (at the latest)
05:17:14 <NobodyCam> devananda: code, specs, or both?
05:17:23 <devananda> NobodyCam: code, it looks like
05:17:36 <JayF> iirc iRMC has a pending spec
05:17:46 <NobodyCam> at least one
05:18:07 <lintan> AMT driver has both....
05:18:08 <JayF> https://review.openstack.org/#/c/134865/ yeah; I've been looking at this. Others should too.
05:18:27 <devananda> wanyen: hm, I see
05:18:52 <devananda> NobodyCam: ok, both it is.
05:19:00 <NobodyCam> :p
05:19:10 <devananda> any other status updates?
05:19:46 <JayF> I have a patch for IPA we should talk about; but I called it out specifically on the agenda. Other than that nothing notable for IPA
05:20:00 <devananda> JayF: ack, I'll make sure we get to it
05:20:01 <naohirot> JayF: I appreciated your view :)
05:20:35 <devananda> ok, thanks everyone for keeping the status page up to date!
05:20:38 <devananda> moving on ...
05:20:47 <devananda> #topic new state machine code reviews
05:20:54 <devananda> NobodyCam: that's you. well, and me ...
05:21:05 <devananda> NobodyCam: but you put it on the agenda :p
05:21:46 <NobodyCam> I was looking to get eyes on the state machine reviews
05:22:03 * jroll makes a note to review those this week
05:22:11 <jroll> NobodyCam: what's the topic for those?
05:22:25 <NobodyCam> I must admit I forgot to look them over today with all the catch up
05:22:30 <devananda> #link https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/new-ironic-state-machine,n,z
05:22:32 <JayF> jroll: it's linked from agenda
05:22:35 <jroll> thanks
05:22:45 <devananda> NobodyCam: I haven't posted a new rev since the break
05:22:45 <jroll> oh, maybe I should look at the agenda then :P
05:22:54 * jroll thought he was just here for moral support :P
05:23:23 <devananda> short version - these are laying the foundation for the Finite State Machine work that we need to move forward
05:23:29 <NobodyCam> I will look them over first thing tomorrow over coffee
05:23:34 <devananda> but so far, there have been very few reviews on the code I wrote
05:23:53 <devananda> mostly rloo and NobodyCam, with a few from Shrews
05:24:15 <devananda> it would be good to have more eyes on it, since it is refactoring some central parts of the ConductorManager
05:24:33 <NobodyCam> ++
05:24:42 <naohirot> devananda: In case of iRMC deploy, is it enough to follow the implementation of https://review.openstack.org/#/c/140883/?
05:25:30 <devananda> naohirot: excellent question -- this is one reason I'd really like to get more eyes on it, and hopefully land that
05:25:40 <naohirot> devananda: I'm wondering  how new state machine affects iRMC deploy implementation.
05:26:25 <devananda> naohirot: in my opinion, yes, but other cores must approve of it as well
05:26:51 <naohirot> devananda: Yes, of course
05:27:02 <devananda> ok. moving on because of time
05:27:05 <jroll> approve of using process_event() in iRMC? or approve of iRMC following that patch in the chain?
05:27:46 <devananda> jroll: the implementation pattern. my patches are affecting drivers
05:27:52 <jroll> ok, yeah
05:27:55 <devananda> jroll: so naturally anyone who is writing a driver now is affected ...
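A minimal, generic sketch of the process_event() / finite-state-machine pattern being discussed above, using illustrative names (FSM, add_transition) that are assumptions for this log and not Ironic's actual implementation:

```python
# Generic sketch only -- not Ironic's real state machine code.
class FSM(object):
    def __init__(self, start_state):
        self.current_state = start_state
        self._transitions = {}  # (state, event) -> next state

    def add_transition(self, state, event, next_state):
        self._transitions[(state, event)] = next_state

    def process_event(self, event):
        # Reject events that are not valid in the current state.
        key = (self.current_state, event)
        if key not in self._transitions:
            raise ValueError("event %r not allowed in state %r"
                             % (event, self.current_state))
        self.current_state = self._transitions[key]
        return self.current_state


# Example: a tiny provisioning lifecycle.
fsm = FSM('available')
fsm.add_transition('available', 'deploy', 'deploying')
fsm.add_transition('deploying', 'done', 'active')
fsm.add_transition('active', 'delete', 'deleting')

fsm.process_event('deploy')  # -> 'deploying'
fsm.process_event('done')    # -> 'active'
```

The point of the pattern is that drivers (including new ones like iRMC) trigger transitions by raising events rather than setting states directly, which is why the refactor touches driver code.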
05:28:01 <devananda> #topic stable branch maintenance
05:28:07 <devananda> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/053366.html
05:28:07 <jroll> naohirot: assuming that patch goes through as-is, yes, it will follow
05:28:14 <devananda> I posted this question to the list last month
05:28:35 <devananda> should we officially say we're no longer supporting icehouse?
05:28:38 <JayF> +1
05:28:43 <naohirot> jroll: Okay, I got it, thank!
05:28:48 <jroll> :)
05:29:05 <devananda> and how long should we commit to supporting juno?
05:29:10 <devananda> but didn't get any replies on the ML
05:29:57 <devananda> opinions now, or discuss on the ML ?
05:30:12 <NobodyCam> I saw questions in the channel about Icehouse support at least a few days before the break
05:30:18 <jroll> with my trunk-chaser hat on, I don't care much for stable maintenance... I think we should support juno for two cycles like other projects do
05:30:28 <jroll> icehouse seems fairly useless to support
05:31:07 <devananda> jroll: so juno until "L" is released?
05:31:25 <jroll> devananda: I guess so, yeah, that's what other integrated projects do, yes?
05:31:39 <devananda> also, fwiw, I think we need adam_g to weigh in on this, as he's been doing the lion's share of stable maint for us so far
05:31:47 <devananda> and it'd be great to have >1 person doing that
05:31:48 <jroll> indeed
05:32:17 <devananda> jroll: yea, generally I think it's 2 cycles after a release
05:32:30 <devananda> so that distros and users have a reasonable migration window
05:32:41 <devananda> where reasonable is somehow defined as "one year"
05:33:11 <jroll> right
05:33:21 <jroll> I don't really understand it, but willing to do the right thing :)
05:33:33 <devananda> jroll: apt-get install
05:33:35 <devananda> jroll: that's why
05:33:44 <NobodyCam> I would agree with where reasonable is somehow defined as "one year"
05:33:57 <jroll> devananda: I know that, I still don't understand :P
05:34:15 <devananda> jroll: heh :)
05:34:28 <JayF> I think if distros want Juno maintained for a year, they should help do it :)
05:34:38 <NobodyCam> lb -s /bin/apt-get /usr/bin/pip
05:34:44 <NobodyCam> ln not lb
05:34:46 <jroll> JayF: they do, to be fair
05:35:03 <JayF> I know; it's just I wish we were able to spend all our time on the future instead of the past :)
05:35:49 <devananda> jroll: hm, but to be precise, they aren't doing stable maint for ironic, afaik
05:36:01 <devananda> anyway
05:36:38 <devananda> I can draft something more formal-like and share at the next meeting
05:37:04 <devananda> at least no one seems to have initially objected :)
05:37:09 <devananda> moving on ..
05:37:14 <devananda> #topic AMT spec
05:37:19 <devananda> lintan: hi!
05:37:32 <lintan> Hi all guys
05:37:51 <lintan> I need your opinions about the design
05:38:18 <lintan> One thing to discuss is where to put amt_boot_device
05:38:26 <lintan> in driver_info or extra
05:39:11 <JayF> If it's driver specific, wouldn't it belong in driver_info?
05:39:24 <devananda> it shouldn't be exposed to users via either JSON field, though
05:40:11 <devananda> lintan: i'm not sure why it needs to be stored at all
05:40:36 <lintan> AMT/vPro only accepts the first boot device and ignores the rest if we send multiple _set_boot_device_order requests to AMT nodes.
05:40:53 <jroll> devananda: AMT doesn't support persistent=true, and doesn't have a "get boot device" command, AIUI
05:40:58 <devananda> lintan: the hardware ignores repeated requests?
05:41:07 <devananda> jroll: right. I think that can be worked around
05:41:07 <lintan> yes
05:41:23 <devananda> but if the hardware only accepts the FIRST one
05:41:31 <devananda> then we can't change it before the next reboot
05:41:33 <devananda> that's a problem
05:41:36 <devananda> lintan: is ^ what you mean?
05:42:09 <lintan> devananda: I mean it doesn't support that
05:42:18 <jroll> I don't see that as a problem, when do you need to change it twice between boots?
05:42:35 <jroll> my question is, if you do "set boot device pxe", and then "set boot device pxe", does it pxe boot?
05:43:09 <lintan> jroll: it does pxe boot in your case
05:43:26 <devananda> jroll: if I do "set boot device hdd" then "set boot device pxe" then "reboot" -- which one does it boot?
05:43:33 <devananda> oops, lintan ^
05:43:37 <jroll> devananda: why would you do that
05:44:04 <jroll> as long as this works: "set boot device hdd" then "reboot" then "set boot device pxe" then "reboot"
05:44:06 <jroll> should be fine
05:44:36 * devananda waits for lintan's answer before stating why that would bork things
05:44:43 <lintan> jroll: but other drivers support "set boot device hdd" then "set boot device pxe" then "reboot"
05:45:04 <jroll> when/why do we do that?
05:45:12 <devananda> our API exposes set-boot-device, so if a user (for whatever reason) issues such a command manually, it would, i
05:45:12 <jroll> why would you set bootdev to hdd if you're not going to boot from hdd
05:45:15 <jroll> oh.
05:45:17 <jroll> that.
05:45:20 <devananda> yep
05:45:45 <lintan> jroll: another critical concern is for persistent boot
05:45:47 <jroll> which this whole "remember the boot device" doesn't help with
05:46:02 <lintan> in PXE boot processing, we have two PXE boots
05:46:06 <jroll> lintan: that's a different issue that can be solved generically, I think
05:46:46 <devananda> lintan: I'm sad that AMT hardware doesn't support changing this option multiple times
05:47:21 <jroll> another point to make here: Haomeng managed to find ipmi hardware that doesn't support persistent=true
05:47:31 <devananda> lintan: can it be worked around in the hardware somehow, e.g. by issuing another command just beforehand to "erase" a previous request?
05:47:33 <Haomeng> jroll: YES
05:47:34 <jroll> which means the latter problem isn't just an AMT problem
05:47:43 <devananda> oh :(
05:47:48 <devananda> ok then
05:47:50 <Haomeng> jroll: for some hardware, it ignores persistent=true
05:47:57 <devananda> that's awesome
05:48:00 <jroll> Haomeng: right
05:48:02 <jroll> so awesome
05:48:09 <lintan> for most cases it should work, right?
05:48:12 <jroll> on the plus side, it forces us to solve this generically
05:48:21 <jroll> most isn't good enough, unfortunately
05:48:34 <devananda> ok - lintan, can you propose that as a separate change?
05:48:51 <lintan> for persistent issue?
05:48:51 <Haomeng> jroll: I just tested with two machines; setting bootdev to pxe with persistent=true is not working
05:48:56 <devananda> I think we'll need a new table to store "requested but not applied changes"
05:49:07 <devananda> or "persistent things we need to set every time"
05:49:08 <devananda> or something
05:49:12 <jroll> Haomeng: yeah, I believe you. hardware is bad.
05:49:24 <Haomeng> jroll: :)
05:49:35 <Haomeng> jroll: maybe:)
05:50:15 <NobodyCam> devananda: uggh :(
05:50:15 <devananda> lintan: I think the general approach you have is fine, but this shouldn't be saved in a JSON field like driver_info or extra
05:50:27 <devananda> lintan: and it needs to be available for other drivers to leverage as well
05:50:29 <wanyen> Haomeng, is this a problem in BIOS or UEFI mode?
05:50:31 <jroll> ok, so someone (maybe lintan) should propose a spec to add a table or something to deal with this
05:50:41 <Haomeng> jroll: I understand some hardware does not implement the full IPMI standard, so it doesn't support options such as persistent=true
05:50:45 <lintan> OK, I am willing to do that
05:50:53 <Haomeng> wanyen: bios mode
05:51:12 <wanyen> Haomeng, I see.
05:51:19 <devananda> #agreed we need a generic way to store a user-requested persistent boot device setting which has not been applied yet, and then only apply it during the reboot phase
05:51:45 <devananda> #note this issue affects some IPMI-based hardware as well as AMT hardware
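A hypothetical sketch of the #agreed approach: record a requested boot device instead of sending it to the BMC immediately, and apply it only at reboot time. The function names, the power_driver parameter, and the in-memory dict below are illustrative assumptions; a real implementation would use a database table and Ironic's driver interfaces:

```python
# Sketch only -- illustrates the "store now, apply at reboot" idea.
_pending_boot_devices = {}  # node_uuid -> (device, persistent); a real
                            # implementation would use a DB table instead


def set_boot_device(node_uuid, device, persistent=False):
    # Do not talk to the hardware yet; just record the request, so a
    # user can change their mind any number of times before rebooting.
    _pending_boot_devices[node_uuid] = (device, persistent)


def reboot(node_uuid, power_driver):
    # Apply the pending request right before the reboot, which is the
    # only point where hardware like AMT (or IPMI BMCs that ignore
    # persistent=true) will actually honour it.
    pending = _pending_boot_devices.get(node_uuid)
    if pending is not None:
        device, persistent = pending
        power_driver.set_boot_device(device)
        if not persistent:
            del _pending_boot_devices[node_uuid]
    power_driver.reboot()
```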
05:51:45 <lintan> I have another issue for AMT
05:51:48 <jroll> do we have time for JayF?
05:51:50 <jroll> mmm
05:52:00 <rameshg87> devananda, do we expect the user to reboot only through Ironic every time?
05:52:05 <jroll> lintan: what's the issue
05:52:08 <devananda> lintan: we're almost out of time - can you be quick?
05:52:12 <lintan> During PXE deploy processing, the target machine will boot by itself.
05:52:13 <JayF> Not having time for me is not an option after staying up :)
05:52:30 <wanyen> deva, for hardware that does support persistent boot, there's no need to use the table. So the use of the table is optional, right?
05:52:43 <jroll> rameshg87: that's an interesting question, I don't think we can expect that, unfortunately :(
05:52:48 <lintan> The AMT driver has to call ensure_next_boot_device again in _continue_deploy().
05:53:02 <jroll> wanyen: I think that's irrelevant
05:53:02 <devananda> wanyen: dunno. we'll discuss that on the relevant spec, when it's proposed
05:53:08 <rameshg87> jroll, and ironic cli doesn't have an option for reboot or soft-reboot
05:53:23 <jroll> rameshg87: node-set-power-state reboot (also nova reboot)
05:53:34 <rameshg87> jroll, oh :D
05:53:46 <devananda> ok - I am going to need to cap this so we can get to JayF
05:53:46 <Haomeng> devananda: another one - should ironic support force-delete to follow the nova command?
05:53:48 <rameshg87> jroll, soft reboot is missing afaik
05:53:51 <jroll> rameshg87: though both are hard power off, power on
05:53:53 <jroll> yeah
05:54:00 <jroll> I'd love to add this
05:54:06 * jroll wants to #topic
05:54:07 <rameshg87> jroll, +1 me too
05:54:12 <devananda> #note need to discuss this more
05:54:17 <devananda> #topic breaking change for IPA
05:54:20 <devananda> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/053662.html
05:54:26 <NobodyCam> jroll: you want soft poweroff?
05:54:29 <jroll> JayF: gogogo
05:54:36 <jroll> NobodyCam: yes
05:55:14 <JayF> So tl;dr: when looking at splitting up our hardware manager into smaller, more sharable pieces, I realized our hardware manager loading didn't work the way I expected it to -- or the way josh and I modeled it at the summit
05:55:31 <JayF> so I proposed https://review.openstack.org/#/c/143193 to allow for multiple simultaneous hardware managers
05:55:42 <JayF> the only downside is this would be a breaking-api change for any out of tree hardware managers
05:55:58 <JayF> Of which; the only one I know exists today is the one I maintain, heh
05:56:12 <JayF> So I wanted to generally get visibility on this and a general "OK" from the group for doing this
05:56:37 <rameshg87> JayF, very soon I would like to use this - for RAID configuration
05:56:46 <devananda> #note If you are using Ironic-Python-Agent with an out-of-tree hardware manager, please respond to JayF's email (linked above)
05:57:08 <JayF> Also reviews very much appreciated; particularly by those who may want to consume this interface later
05:57:09 <jroll> I've voiced my opinion to JayF elsewhere, but again here: posted to mailing list, no response, should be ok
05:57:27 <devananda> rameshg87: is there any reason you will not submit your hardware manager upstream?
05:57:49 <rameshg87> devananda, there isn't. I will want to submit it upstream :)
05:57:55 <JayF> devananda: you think highly hardware specific managers should go upstream?
05:58:14 <devananda> JayF: yes
05:58:38 <JayF> devananda: what about in cases (almost all, in cases I've seen) that they require a proprietary tool to work?
05:58:41 <jroll> I tend to agree
05:58:51 <devananda> JayF: if Ironic supports a given vendor's hardware, IPA ought to as well ...
05:58:53 <JayF> that's always been what has slowed me down when looking at upstreaming things
05:58:55 <rameshg87> JayF, adding ironic-agent element in dib solves this
05:58:57 <jroll> but when it comes to firmware things, that will almost certainly be downstram in most cases
05:59:16 <jroll> though you could just say "requires crappy-vendor-tool.sh"
05:59:17 <rameshg87> JayF, i can build a ironic-python-agent ramdisk by including my custom element
05:59:38 <JayF> rameshg87: sure, that's what I do now; but should we ship code that doesn't work without a custom element that we probably couldn't even open source?
05:59:43 <devananda> if the separation is clean -- custom element is just "include proprietary utility.sh"
05:59:54 <devananda> and the interface to taht is in IPA -- I think it's good
05:59:57 <jroll> rameshg87: it's also straightforward to build IPA with a custom manager without DIB
06:00:07 <devananda> JayF: yes. we do that in Ironic today.
06:00:13 <devananda> JayF: look at all the third-party drivers
06:00:14 <wanyen> JayF, the iLO team is interested in adding a Proliant hardware manager to IPA
06:00:32 <jroll> devananda: so you're ok with IPA shelling out to "utility.sh", where utility.sh is unspecified?
06:00:42 <JayF> wanyen: rameshg87: The big thing that matters about this change (which you should apparently review) is that you can do small HardwareManager pieces
06:00:47 <devananda> jroll: or is specified by the hardware manager
06:01:01 <JayF> i.e. I'd rather see 5 hardware managers for 5 components than a single, large hardware manager targeted at specific hardware mixes
06:01:03 <JayF> but that's just my vision :)
06:01:07 <rameshg87> JayF, sure i will take a look asap
06:01:11 <devananda> JayF: ++
06:01:17 <JayF> Thanks all
06:01:18 <jroll> devananda: sure, but is quanta-modelxyz-hwmanager ok? this could grow extremely large
06:01:27 <jroll> and keep in mind everything in tree bloats the ramdisk
06:01:36 <jroll> I'm not saying we shouldn't, it's just a somewhat hard problem to solve
06:02:01 <devananda> it shouldn't install all the drivers' requirements all the time -- that does need to be configurable in some way
06:02:13 <devananda> or -- on large platforms, maybe that's fine
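A hypothetical sketch of the "many small hardware managers" idea JayF describes, where each manager handles one component and only claims support when it detects its hardware plus any vendor tool it needs; the class, method names, paths, and support levels below are assumptions for illustration, not the real ironic-python-agent API:

```python
# Sketch only -- one tiny, component-specific hardware manager.
import os
import subprocess


class ExampleNicFirmwareManager(object):
    """Handles exactly one thing: firmware on one hypothetical NIC model."""

    def evaluate_hardware_support(self):
        # Only claim support when the specific device (and the vendor
        # utility it depends on) is actually present on this machine.
        if (os.path.exists('/dev/example-nic')
                and os.path.exists('/usr/local/bin/vendor-nic-tool')):
            return 2  # assumed "mainline" support level
        return 0      # assumed "none" -- let other managers handle it

    def update_firmware(self):
        # Shell out to the (possibly proprietary) vendor utility; the
        # utility itself would be shipped via a custom ramdisk element
        # rather than in-tree.
        subprocess.check_call(['/usr/local/bin/vendor-nic-tool', '--update'])
```

With loading that supports multiple simultaneous managers, several such small managers can coexist in one ramdisk, and only the ones that detect their hardware take effect.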
06:02:16 <NobodyCam> we're over time
06:02:20 <devananda> anyway, we're over time
06:02:26 <jroll> right
06:02:27 <wanyen> JayF, the bottom line is to allow vendors to add hardware management functionality. We will take a look at your proposal.
06:02:32 <JayF> Good meeting, thanks all, see you tomorrow
06:02:35 <jroll> ok, thanks everyone :)
06:02:39 <devananda> please go check out the change / respond on the ML if you're interested in IPA and hardware managers
06:02:41 <NobodyCam> ty all
06:02:46 <Haomeng> ok
06:02:47 <devananda> thanks all! see you next time
06:02:55 <Haomeng> see you
06:02:56 <lintan> see you
06:02:59 <devananda> #endmeeting