17:00:23 <jroll> #startmeeting ironic
17:00:24 <openstack> Meeting started Mon Dec 14 17:00:23 2015 UTC and is due to finish in 60 minutes.  The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:28 <jroll> it's that time again!
17:00:28 <openstack> The meeting name has been set to 'ironic'
17:00:31 <vdrok> good morning
17:00:36 <devananda> o/
17:00:37 <TheJulia> o/
17:00:39 <rloo> o/
17:00:39 <rpioso> o/
17:00:51 <lucasagomes> hi all
17:00:57 <cinerama> o/
17:01:01 <mjturek1> o/
17:01:06 <jroll> welcome to the party, everyone
17:01:08 <davidlenwell> o/
17:01:10 <jroll> as always, agenda is here:
17:01:12 <krtaylor> o/
17:01:14 <jlvillal> o/
17:01:14 <jroll> #topic https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:01:17 <NobodyCam> o/
17:01:17 <stendulker> o/
17:01:18 <jroll> ...
17:01:19 <jroll> #undo
17:01:20 <JayF> o/
17:01:20 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x9c7b450>
17:01:23 <jroll> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:01:26 <NobodyCam> welcome davidlenwell :)
17:01:42 <jroll> let's jump right in
17:01:47 <jroll> #topic announcements / reminders
17:02:10 <jroll> just a couple things here
17:02:26 <cdearborn> o/
17:02:27 <jroll> I'm working on moving to devstack and grenade plugins, to make changing that stuff up easier
17:02:31 <jroll> those patches are here
17:02:33 <jroll> #link https://review.openstack.org/#/q/status:open++branch:master+topic:ironic-devstack-plugin,n,z
17:02:37 <jroll> good progress so far
17:02:46 <jroll> I'd love to do the same thing with tempest soon
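[Editor's note: for readers following along, devstack's external-plugin mechanism is driven from local.conf. A minimal sketch, assuming the plugin lands in the ironic tree; the repo URL and branch are illustrative:]

```
# local.conf -- enable an out-of-tree devstack plugin
[[local|localrc]]
enable_plugin ironic https://git.openstack.org/openstack/ironic master
```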
17:03:18 * lucasagomes ported a patch from devstack to ironic today
17:03:22 <lucasagomes> good work with that jroll
17:03:23 <jroll> second thing, our gate failure rates are getting worse and worse
17:03:28 <jroll> #link http://tinyurl.com/p8t8osb
17:03:28 <jlvillal> jroll, Thanks for the work on that!
17:03:46 <jroll> we've been talking about some ways to mitigate, I just wanted to point out to folks how terrible that is
17:03:51 <NobodyCam> ya, great to have that in our tree under our control :) ++ on the great work
17:04:09 <jroll> terrible use of the gate resources there :(
17:04:31 <jroll> I'm going to make that my number one priority once devstack is running from our tree, would love some help if folks are free
17:04:47 * jlvillal is willing to help out
17:04:58 <jroll> sweet
17:05:07 <lucasagomes> ++ I can help as time permits, I took a quick look last week on it
17:05:18 <jroll> anyone have other announcements and things?
17:05:19 <krtaylor> jroll, we will be testing in our environment also
17:05:30 <lucasagomes> and we talked about it already so I won't repeat it here (we can talk on #openstack-ironic)
17:05:40 <NobodyCam> will there be a meeting next week?
17:05:53 <jroll> krtaylor: I have a feeling it's a VM gate thing, but thanks
17:05:54 <jlvillal> Or week after?
17:05:59 <jroll> NobodyCam: sure, I'll be around for both
17:06:10 <jroll> if nobody shows up, it'll be easy mode :)
17:06:11 <lucasagomes> I won't be around for the 28th
17:06:38 <betherly> o/ sorry I'm late!
17:06:47 <jroll> don't feel like you need to be here, but I'll start a meeting and if folks are around we can chit chat
17:06:48 * jlvillal On vacation next week and week after...
17:06:56 <NobodyCam> hp starts holiday shutdown after this week
17:07:38 * devananda will probably be around, hacking on stuff
17:07:56 <jroll> devananda: if everyone else is gone, me and you can just rewrite everything
17:08:02 * TheJulia anticipates being around
17:08:08 <devananda> jroll: sounds good
17:08:15 <rloo> jroll, devananda: you need one more core :)
17:08:32 <lucasagomes> devananda, jroll just don't do it in JS
17:08:47 <jroll> :P
17:08:56 <NobodyCam> I may be around :/
17:09:02 <devananda> lucasagomes:  ....
17:09:42 * jroll moves on before a language war happens
17:09:51 <NobodyCam> lol
17:09:53 <jroll> #topic subteam status reports
17:09:57 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:10:04 <jroll> ^ updates are there as usual
17:10:16 * jroll gives folks a few minutes
17:11:01 <jroll> ironic-lib is done \o/
17:11:04 <rloo> wrt bugs vs features (RFE). Do we want them all lumped under 'bugs'?
17:11:05 <jroll> just being stomped on by the gate
17:11:36 <rloo> i mean, maybe dmitry can just report bug numbers w/o rfe's. or do folks care?
17:11:39 <jroll> rloo: they're marked as wishlist, right? so we can do the math there
17:11:51 <rloo> jroll: they're tagged i thought, with 'rfe'.
17:11:56 <jroll> or that
17:12:01 <lucasagomes> yeah there's a tag
17:12:04 <jroll> I think I'd like them separated
17:12:07 <lucasagomes> and usually the name has a [RFE] as well
17:12:09 <rloo> i suppose it doesn't matter. i just look at the relative number/changes anyway :)
17:12:19 <jroll> oh, no dmitry today
17:12:24 <devananda> we should be able to filter in LP - it has a pretty rich query syntax
17:12:25 <NobodyCam> if it's not a bunch of extra work it would be good to know the numbers split out
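[Editor's note: a minimal sketch of splitting RFEs out of the bug counts with launchpadlib, per devananda's point about Launchpad's query syntax; the script name and output format are illustrative:]

```python
from launchpadlib.launchpad import Launchpad

# Anonymous read-only login against the production Launchpad instance.
lp = Launchpad.login_anonymously('ironic-bug-stats', 'production')
ironic = lp.projects['ironic']

# Open bug tasks tagged 'rfe' vs. all open bug tasks.
rfes = ironic.searchTasks(tags=['rfe'])
all_open = ironic.searchTasks()
print('RFEs: %d, other bugs: %d' % (len(rfes), len(all_open) - len(rfes)))
```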
17:12:35 <jlvillal> We are using ironic-lib now?
17:12:42 <jroll> jlvillal: change is in the gate
17:12:47 <lucasagomes> jlvillal, patch's being merged at the moment
17:12:51 <lucasagomes> *fingers crossed*
17:12:55 <jlvillal> Woot! :)
17:13:25 <jroll> now the next question - who's going to work on the IPA side of that? :)
17:13:34 <NobodyCam> also should remove rameshg87 from the RAID?
17:13:37 <jroll> I think zer0c00l expressed some interest
17:13:53 <NobodyCam> TY
17:14:03 <rloo> jroll: right, but i also thought syed or some other HP person wanted to do the ipa side too but i could be wrong
17:14:19 <jroll> rloo: that's fine too, we should sync them up :)
17:14:28 <lucasagomes> cool there's enough volunteers... also that means partition image support for IPA
17:14:31 <rloo> jroll: yup. as long as *someone* does it :)
17:14:39 <lucasagomes> IPA == agent_*
17:14:40 <lucasagomes> sorry
17:15:03 <jroll> rloo: I made a note to bug people
17:15:12 <rloo> lucasagomes: yup. hopefully in M*?
17:15:19 <rloo> thx jroll
17:15:19 <lucasagomes> rloo, ++ hope so
17:15:32 <jroll> RAID question: that's done except needs manual cleaning and docs, right? lucasagomes are you planning to do the docs for that?
17:15:48 <rloo> jroll: there is a patch for the RAID docs.
17:16:01 <JayF> lucasagomes: I can't help with the development of it, but feel free to point any of the relevant IPA changes to me and I'll help with reviews on it.
17:16:03 <jroll> oh, awesome
17:16:05 <rloo> jroll: i think it is in pretty good shape; i looked a while ago. just waiting for manual cleaning to be done first.
17:16:06 <JayF> lucasagomes: super excited about ironic-lib
17:16:11 <lucasagomes> jroll, yeah since ramesh is gone I will do that... but I don't actually have an environment for RAID
17:16:12 <NobodyCam> rloo: you have the number handy?
17:16:18 <jroll> rloo: ++ ty
17:16:25 <lucasagomes> JayF, that would be extremely useful, thank you!
17:17:21 <rloo> NobodyCam: not handy, just found it: https://review.openstack.org/#/c/226330/
17:17:39 <NobodyCam> awesome ty rloo :)
17:17:52 <lucasagomes> thanks rloo
17:18:28 <rloo> there's a patch for CLI part of RAID too although I haven't looked at that one yet
17:19:11 <NobodyCam> I've opened that one .. will review today :)
17:19:24 <stendulker> rloo, lucasagomes: Nisha is working on both the RAID patches, CLI and doc
17:19:39 <rloo> stendulker: thx!
17:19:48 <lucasagomes> nice thank you stendulker
17:19:52 <wanyen> RAID CLI https://review.openstack.org/#/c/226234/ is still under review
17:20:59 <rloo> i have a question about the official docs. i'm not sure i understand the part about 'to change old versions of such docs'...
17:21:19 <rloo> is that to change the config reference only?
17:21:45 <NobodyCam> wanyen: that patch is in merge conflict
17:22:03 <jroll> rloo: context?
17:22:18 <rloo> NobodyCam, wanyen: maybe I should put a -2 on those RAID patches; they shouldn't merge before manual clean
17:22:26 <rloo> jroll: just reading in the subteam report
17:22:29 <jroll> rloo: or depends-on
17:22:35 <rloo> jroll: yeah, or depends-on
17:22:58 <jroll> oh, liliars doesn't seem to be here
17:23:06 <stendulker> rloo: Will add depends-on for both the RAID patches
17:23:15 <rloo> thank you stendulker
17:23:26 <stendulker> rloo: Which manual cleaning patch can I add for the same?
17:23:31 <rloo> jroll: i can ask her later.
17:23:48 <thiagop> liliars is at physiotherapy right now, couldn't attend
17:23:49 <jroll> rloo: so yeah, I think it's just config reference. which shouldn't really need to change much on stable versions :)
17:23:54 <rloo> stendulker: will irc with you later about that
17:24:09 <stendulker> rloo: ok. Thanks
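[Editor's note: the Depends-On mentioned above is a Gerrit/Zuul commit-message footer; the gate will test the change with the dependency applied and won't merge it first. The IDs below are placeholders, not the real manual-cleaning patch:]

```
Add RAID commands to the CLI

<commit message body>

Depends-On: I0000000000000000000000000000000000000000
Change-Id: I1111111111111111111111111111111111111111
```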
17:24:25 <rloo> jroll: ok, i don't think we have a need to update the config. it is the install guide that is of concern.
17:24:36 <jroll> rloo: right, which doesn't have ironic stuff
17:24:48 <jroll> there's that -docs thread going on
17:25:02 <rloo> jroll: so it looks like IF it is supported, it will be in a second install guide. i'll take a look at that docs thread later.
17:25:23 <jroll> yep
17:25:49 <rloo> wrt testing, 'discussion' about valuable drivers, who/where is that discussion taking place?
17:26:12 <jroll> there was some discussion in the spec, and we punted for now
17:26:42 <rloo> jroll: ah, ok. as long as we have that discussion at some point. punting for now is fine.
17:26:47 <jroll> there's no ongoing discussion atm
17:26:55 * lucasagomes likes devananda's options on that spec
17:27:37 <jroll> okay, should we open up open discussion?
17:27:44 <jroll> or anything else on this topic?
17:28:03 <jroll> #topic open discussion
17:28:08 <jlvillal> Anyone know the status on fixing the gate-tempest-ironic-ilo-driver-iscsi_ilo job?
17:28:13 * jroll throws the mic in the air for whoever can catch it
17:28:27 <jroll> jlvillal: deray is the point of contact for that, don't see them here
17:28:49 <lucasagomes> jlvillal, that's the 3rd party CI from HP right? Not sure who's working on that but it's great that it's already voting in the patches
17:28:55 <NobodyCam> jroll: you're supposed to hold it straight out in front of you and drop it, to be kewl :p
17:29:15 <jlvillal> lucasagomes, It is voting. But I only see it vote -1 so far.
17:29:16 <stendulker> jlvillal: We are working on it.. on the baremetal we are getting failures on heartbeat.
17:29:30 <lucasagomes> jlvillal, yeah, probably it's not completed yet
17:29:33 <jroll> it was working for a little bit
17:29:42 <jlvillal> stendulker, Great. Good to hear that it is being worked on :)
17:29:58 <rloo> stendulker: are you updating the status of those tests on that third party CI web page?
17:30:09 <stendulker> sorry for the delay and failures on every patch...
17:30:17 <lucasagomes> since it's open-discussion I would like to ask for reviews at https://review.openstack.org/#/c/253605/ , this is to make some JSON fields indexable in the DB which will later be used to build the filter API
17:30:32 <lucasagomes> there's a link in the comments to a POC patch which you can test the latest version of that spec
17:30:44 <lucasagomes> and a step by step on how to do it (to see the migration on-the-fly and so on)
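[Editor's note: a hypothetical sketch of the general "make JSON fields indexable" idea — key/value pairs pulled out of a node's JSON blob into a separate indexed table so the API can filter with ordinary WHERE clauses. This is not the spec's actual schema; see the review above for that:]

```python
from sqlalchemy import Column, ForeignKey, Index, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class NodeFact(Base):
    """One extracted key/value pair from a node's JSON properties."""
    __tablename__ = 'node_facts'
    id = Column(Integer, primary_key=True)
    node_id = Column(Integer, ForeignKey('nodes.id'), nullable=False)
    key = Column(String(255), nullable=False)
    value = Column(String(255))

# Composite index so "WHERE key = :k AND value = :v" filters stay cheap.
Index('node_facts_key_value_idx', NodeFact.key, NodeFact.value)
```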
17:30:49 <stendulker> rloo: I'm not sure... Can you pls share the link to the same?
17:30:58 <jlvillal> stendulker, As long as work is being done on it, that is good. I was just worried it wasn't being worked on. But good to hear it is in progress. Thanks.
17:31:11 <jroll> stendulker: https://wiki.openstack.org/wiki/ThirdPartySystems
17:31:36 <stendulker> jroll: thanks
17:32:51 <rloo> lucasagomes: will try to look at your spec this week. i really want the capabilities separated but one step at a time :)
17:33:01 <gabriel> Oh, I have a question regarding CI servers: are they expected to be on the 'check experimental' pipeline? For how long? Would it be a good idea to have a "check smthngelse" pipeline just for the 3rd party driver CIs?
17:33:12 <lucasagomes> rloo, right, yeah the spec will serve as plumbing work for that (the fact table)
17:33:26 <jroll> gabriel: I'd do whatever the third party CI docs say
17:33:37 <lucasagomes> rloo, but since capabilities needs backward compat I prefer to do it in a separated spec
17:33:39 <jroll> gabriel: I suspect experimental is the wrong thing
17:33:50 <rloo> lucasagomes: yup, it makes sense as a separate spec
17:33:53 <devananda> gabriel: main pipeline. it's all in the third-party docs
17:34:01 <jlvillal> stendulker, If you update the status info, can dates be put in on when the status was updated?
17:34:05 <lucasagomes> rloo, thanks!
17:34:08 <jroll> I'd also love to invite folks to chime in on devananda's thread: http://lists.openstack.org/pipermail/openstack-dev/2015-December/082011.html
17:34:16 <devananda> gabriel: if you want to start it in the experimental pipeline while you iron out anything on your end, that's also reasonable
17:34:16 <jroll> very important discussion we need to have there
17:34:34 <jroll> it's about the multiple compute work that is currently pretty stuck
17:34:37 <rloo> jroll: yuppers, VERY important
17:34:37 <devananda> jroll: thanks. I was going to bring that up.
17:34:46 <gabriel> I see. Thank you, devananda and jroll .
17:34:50 <krtaylor> gabriel, start with http://docs.openstack.org/infra/system-config/third_party.html
17:35:00 * lucasagomes will look
17:35:09 <krtaylor> gabriel, and come to the wednesday meetings for ironic-qa :)
17:35:23 <mgould> jroll, do I understand right that the nova scheduler won't scale to the level Ironic needs?
17:35:32 <devananda> jroll: should we go poke some nova folks too?
17:35:34 <jroll> mgould: correct
17:35:44 <jroll> devananda: yes please
17:35:56 <mgould> jroll, in that case I'm surprised it scales well enough for Nova...
17:36:06 <gabriel> I guess liliars and others from our team are already participating, krtaylor. Thanks for inviting, though.
17:36:07 <jlvillal> gabriel, https://wiki.openstack.org/wiki/Meetings/Ironic-QA
17:36:12 <jroll> mgould: well, about that... :)
17:36:15 <devananda> mgould: well, ask anyone running a large openstack cloud ....
17:36:16 <mgould> heh
17:36:19 <jroll> mgould: it doesn't scale to thousands of things
17:36:22 <jroll> hence, cells
17:36:25 <mgould> aha
17:36:26 <devananda> ^ right
17:36:30 <lucasagomes> yeah :-(
17:36:35 <devananda> but cells are a poor[er] fit for ironic
17:36:42 <jroll> we can run ironic in cells, but that requires an ironic deployment per cell
17:36:43 <mgould> so it scales iff you use cells, but we can't use cells for ironic
17:36:47 <jroll> we can
17:36:55 <jroll> just, not one ironic for all cells
17:37:01 <mgould> OK
17:37:14 <mgould> so we'd need an ironic per cell, which would be a PITA
17:37:23 <jroll> it is a PITA, yes
17:37:28 <devananda> and that creates a problem for folks who want to integrate ironic with other inventory mgmt systems at large scale
17:38:00 <lucasagomes> cells are being refactored in nova right? Are they considering the ironic use case when designing it?
17:38:01 <mgould> right, I think I understand now
17:38:02 <devananda> mgould: also, performance of nova-sched isn't the only issue there
17:38:08 <devananda> lucasagomes: not afaik
17:38:14 <lucasagomes> (thinks he knows the answers)
17:38:17 <jroll> lucasagomes: idk, but the architecture will be fine for ironic I believe
17:38:17 <lucasagomes> devananda, yeah :-/
17:38:33 <mgould> I was wondering if the Right Thing was to make nova-scheduler scale better
17:38:41 <devananda> mgould: well, about that .... :)
17:38:57 <jroll> lucasagomes: cells v2 isn't terribly different as to what ironic cares about
17:39:45 <jroll> it's still scheduler/conductor/computes segregated by cells
17:39:46 <lucasagomes> right, ty. I will try to take a look at their specs and so on
17:39:53 <jroll> but the way the communication works and such is different
17:40:11 <mgould> devananda, right, there's all the filtering stuff as well
17:40:25 <devananda> mgould: a) there are already folks working on that. if you want to do that - I encourage you to. b) that performance isn't the only issue. nova-scheduler makes assumptions that resources are subdivisible
17:40:41 <mgould> aaaah
17:41:30 <mgould> which obviously isn't the case with baremetal
17:41:43 <devananda> mgould: really, the way nova scales (limit failure to a single n-cpu host; if that host dies, assume all instances on it are dead) and the way ironic scales (redundant control plane, instances are independent and manageable by any part of the control plane) are different.
17:41:52 <devananda> mgould: that creates another layer of friction for scheduling
17:41:56 <gabriel> How is the work going on the neutron integration part? Do you see it being done by Mitaka? Or earlier? I'm looking at the WhiteBoard but can't find work yet to be done except for some patches with -1 on nova and ironic
17:42:02 <lucasagomes> mgould, it's most ironic, since the same conductor can potentially manage all nodes in the datacenter
17:42:08 <gabriel> I mean, it is not made explicit there
17:42:39 <jroll> gabriel: the code works afaik, just needs reviews
17:42:43 <jroll> and testing :)
17:42:46 <devananda> jroll: I started reviewing the neutron patches last week, but it looks like some of them are >1mo old now
17:42:56 <lucasagomes> and nova assumes each nova-compute only manages a specific number of nodes/VMs
17:42:56 <jroll> devananda: yeah, they need updates too I guess
17:42:58 <mgould> devananda, thanks!
17:42:59 <devananda> like they stopped getting updated / rebased
17:43:41 <gabriel> jroll: do you mean automated testing by that?
17:43:55 <rloo> do you think they stopped updating/rebasing cuz they didn't think anyone cared?
17:43:58 <jroll> gabriel: both, I guess. there's devstack patches there
17:45:43 <jroll> not sure, rloo
17:45:51 <jroll> I feel like some of the people disappeared
17:46:05 <rloo> jroll: oh, that isn't good
17:46:23 <devananda> jroll: hm. what can we do to get that going again / landed?
17:46:31 <gabriel> In the face of ironic's (and maybe neutron's and nova's) priorities, do you see it being left for N instead of Mitaka?
17:46:41 * jlvillal hopes not
17:46:44 <jroll> devananda: get people working hard on it
17:46:56 <jroll> it's on my list to pick up if nobody else does
17:47:14 <jroll> my top priorities right now: 1) devstack plugin, 2) fix our crappy gate, 3) land neutron stuff
17:47:28 <devananda> jroll: is the devstack side written? I can poke at it, if there's a way to test it -- but I do not know the networking aspects well enough to write them
17:47:46 <jroll> devananda: yep, see the etherpad, there are patches to ironic's devstack plugin
17:47:52 <devananda> jroll: link?
17:47:54 <jroll> I haven't had a chance to look at them
17:48:00 <jroll> devananda: whiteboard, subteam updates
17:48:09 <jroll> https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/ironic-ml2-integration,n,z
17:48:29 <gabriel> #link https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/ironic-ml2-integration,n,z
17:48:36 <devananda> jroll: ahhh. I was looking at the other etherpad, linked in #TODO's -- that looks wrong
17:48:55 <devananda> jroll: perhaps it should be removed from the whiteboard ?
17:49:11 <jroll> devananda: which etherpad? removed from where on the whiteboard?
17:49:23 <jroll> oh
17:49:25 <jroll> I see it
17:49:25 <rloo> devananda: just delete it :)
17:49:26 <devananda> jroll: L91
17:49:45 <devananda> cool
17:49:48 * jroll fixed
17:50:08 <jroll> I wonder how up to date that entire todo section is
17:50:18 * jroll wants to nuke the whole whiteboard and start over sometimes
17:50:23 <devananda> i'm guessing it's not
17:51:01 <devananda> jroll: maybe on day 1 of the midcycle?
17:51:07 <jroll> sure
17:51:49 <jroll> we're almost at time here, does anyone have anything else to discuss?
17:52:52 <NobodyCam> great meeting everyone!
17:53:01 <gabriel> I'm done for today. Thanks, folks.
17:53:04 <jroll> yep, thanks all for joining :)
17:53:06 <jroll> #endmeeting