17:00:34 <dtantsur> #startmeeting ironic
17:00:35 <openstack> Meeting started Mon Sep 12 17:00:34 2016 UTC and is due to finish in 60 minutes.  The chair is dtantsur. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:37 <lucasagomes> o/
17:00:38 <openstack> The meeting name has been set to 'ironic'
17:00:41 <dtantsur> #chair devananda
17:00:44 <openstack> Current chairs: devananda dtantsur
17:00:55 <rama_y> o/
17:00:59 <rpioso> o/
17:01:01 <dtantsur> welcome all! our pretty light agenda can be found at
17:01:02 <NobodyCam> o/
17:01:03 <dtantsur> #link https://wiki.openstack.org/wiki/Meetings/Ironic
17:01:04 <jroll> \o
17:01:07 <rloo> o/
17:01:11 <dtantsur> #chair jroll
17:01:13 <openstack> Current chairs: devananda dtantsur jroll
17:01:18 * jroll has most of his attention on another meeting right now
17:01:48 <dtantsur> #topic Announcements / Reminders
17:01:52 <gabriel-bezerra> o/
17:02:03 <mjturek> o/
17:02:04 <sambetts> o/
17:02:06 <dtantsur> This week is an OpenStack RC week
17:02:22 <xhku> o/
17:02:23 <dtantsur> while we don't follow the schedule strictly, it's really a good time to wrap up all major work
17:02:32 <jroll> ++
17:02:35 <dtantsur> and prepare to mostly do bug fixing the remaining time
17:02:49 <dtantsur> jroll, I think we plan on services releases this week, right?
17:03:00 <rloo> what is a 'services' release?
17:03:06 <dtantsur> ironic and ironic-inspector
17:03:11 <dtantsur> and ironic-python-agent
17:03:13 <jroll> dtantsur: was considering it, not sure
17:03:20 <jroll> maybe just wait til next week and call it final?
17:03:26 <dtantsur> ok, so it *might* happen :)
17:03:35 <dtantsur> anyway, it's coming soon, jroll to decide :)
17:03:42 <jroll> :P
17:03:52 <rloo> so we're targeting next week as the newton-RC, right?
17:03:59 <dtantsur> https://trello.com/b/ROTxmGIc/ironic-newton-priorities still lists three priorities
17:04:16 <dtantsur> two of them are CI-related, the last one is notifications
17:04:17 <rloo> we punted port groups
17:04:18 <jroll> rloo: that was a random thought, not a question :)
17:04:34 <jroll> er, s/question/decision
17:04:56 <rloo> jroll: well, let's see what is 'left' to do in newton
17:05:01 <dtantsur> #link https://releases.openstack.org/newton/schedule.html Newton release schedule
17:05:21 <jroll> indeed
17:05:24 <dtantsur> Sep 26-30 is stated as a final week for intermediary releases
17:05:27 <rloo> wrt trello, am moving portgroups to ocata, yes?
17:05:38 <jroll> +1
17:05:41 <rloo> i moved it :)
17:05:42 <dtantsur> meaning, we have up to 2 weeks before the final release
17:05:48 <lucasagomes> virtualbmc should be much more stable now. I will look at some data tomorrow and, if it holds up, should we already propose the patch to change the jobs, removing pxe_ssh and replacing it with the ipmitool drivers?
17:06:03 * lucasagomes means *_ssh
17:06:09 <dtantsur> lucasagomes, let's maybe leave it for open discussion?
17:06:10 <rloo> jroll: wrt the three left in trello, are they must-haves for newton?
17:06:16 <lucasagomes> dtantsur, sure
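(A minimal sketch of what the *_ssh → ipmitool switch looks like on the node side, assuming a virtualbmc instance is already listening for the node's libvirt domain on localhost:6230. The auth settings, node name, port and credentials below are placeholders, not values from the actual gate jobs.)

    # Repoint an existing node from an *_ssh driver to agent_ipmitool
    # backed by a virtualbmc endpoint; all values are illustrative.
    from ironicclient import client

    ironic = client.get_client(
        1,
        os_auth_url='http://127.0.0.1:5000/v2.0',  # placeholder
        os_username='admin',
        os_password='secret',
        os_tenant_name='admin',
    )

    patch = [
        {'op': 'replace', 'path': '/driver', 'value': 'agent_ipmitool'},
        {'op': 'add', 'path': '/driver_info/ipmi_address', 'value': '127.0.0.1'},
        {'op': 'add', 'path': '/driver_info/ipmi_port', 'value': '6230'},
        {'op': 'add', 'path': '/driver_info/ipmi_username', 'value': 'admin'},
        {'op': 'add', 'path': '/driver_info/ipmi_password', 'value': 'password'},
    ]
    ironic.node.update('node-0', patch)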
17:06:47 <jroll> so here's my thought on these
17:06:50 <rloo> keystone policy support, CI:switch to virtualbmc, security groups for provisioning/cleaning network
17:07:25 <jroll> keystone could go either way, there's nothing major there but it's failing CI
17:07:32 <jroll> virtualbmc stuff we should be doing
17:07:41 <jroll> security groups, I want to talk as a team about the riskiness
17:07:54 <jroll> then in nice-to-have:
17:08:09 <jroll> active node creation only has adding tests left, I'd like to do that
17:08:27 <jroll> I think agent/partition should probably bump
17:08:41 <jroll> notifications I would like, it isn't risky, but not sure if it will make it
17:09:30 <jroll> feel free to agree/disagree/etc
17:09:32 <jroll> :)
17:09:35 <lucasagomes> heh
17:09:38 <lucasagomes> I'm +1 on the notifications
17:09:47 <lucasagomes> I think it's really nice to have it
17:10:00 <rloo> i think the notifications will make it. i was going to do it last week but decided i wanted to spend a bit more time looking at nova's first.
17:10:07 <lucasagomes> there are many patches for it, but if we get at least the base ones I'm already happy with it
17:10:22 <jroll> yeah, I'd like to add the power notifications too
17:10:30 <dtantsur> as soon as we agree on the base patch, the remaining should be simple
17:10:43 <rloo> i haven't looked at security groups yet. i don't see many reviews on it though.
17:11:03 <jroll> it's small but seems risky to land it this late
17:11:04 <rloo> security groups patch, is just this one? https://review.openstack.org/#/c/361451/
17:11:11 <jroll> but people could persuade me otherwise
17:11:16 <rloo> jroll: if you think it is risky, i don't think it should go in
17:11:22 <gabriel-bezerra> not in the trello, but how about ironic consuming neutron notifications for synchronizing the port operations? https://review.openstack.org/#/c/345963/
17:11:25 <jroll> yes, that's it - it's SG for provisioning/cleaning networks
17:11:41 <jroll> gabriel-bezerra: we're talking about the rest of newton, there's no way that will happen
17:12:29 <devananda> security groups on prov/clean network looks small and relatively low risk to me, however, we don't have CI for it right now AFAIK
17:12:38 <gabriel-bezerra> ok
17:12:44 <sambetts> jroll: without the security groups changes we need to ensure that we document that the default security groups for those networks need to be open enough to allow the communication we need
17:13:05 <jroll> yeah, lack of CI combined with "if this breaks, it breaks the world" makes me think it's risky
17:13:16 <jroll> sambetts: yeah good point
17:13:24 <rloo> here's my question: who (which cores) want to look into the securit groups, test it, etc, this week?
17:13:43 * dtantsur wants to look into bugs rather
17:13:49 <jroll> I'm not sure I can, my calendar is slammed this week :(
17:13:57 <devananda> either way, we're missing documentation on what the security group settings should be for those networks
17:14:00 <rloo> i think if we don't have two cores willing to commit, then it is too risky
17:14:03 <jroll> ok, you've all convinced me to move from -0.5 to -1
17:15:06 <dtantsur> any other *features* people want to see in newton?
17:15:19 <rloo> so if it doesn't go in, who will update the doc? volunteer please? :)
17:16:09 <jroll> dtantsur: nope!
17:16:17 <dtantsur> :)
17:16:25 <dtantsur> rloo, let's start with filing a bug maybe?
17:16:33 <rloo> dtantsur: ok
17:16:42 <dtantsur> good think about documentation is that it does not have to fit into newton timeline
17:16:45 <dtantsur> * the good thing
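(On the documentation gap devananda and sambetts point out above: a rough sketch, using python-neutronclient, of the kind of ingress rules a provisioning/cleaning network's default security group would need to keep open. The exact set of ports depends on the deployment; DHCP 67-68/udp, TFTP 69/udp for PXE, and the ironic API port 6385/tcp for agent lookup/heartbeat are typical. All auth values and the security group ID are placeholders.)

    # Sketch of typical ingress rules for a provisioning/cleaning network;
    # ports, credentials and the security group ID are illustrative only.
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username='admin', password='secret',  # placeholders
        tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0',
    )

    sg_id = 'PROVISIONING-SG-UUID'  # placeholder
    rules = [
        ('udp', 67, 68),      # DHCP
        ('udp', 69, 69),      # TFTP for PXE
        ('tcp', 6385, 6385),  # ironic API (agent lookup/heartbeat)
    ]
    for protocol, port_min, port_max in rules:
        neutron.create_security_group_rule({
            'security_group_rule': {
                'security_group_id': sg_id,
                'direction': 'ingress',
                'ethertype': 'IPv4',
                'protocol': protocol,
                'port_range_min': port_min,
                'port_range_max': port_max,
            }
        })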
17:16:50 <gabriel-bezerra> dtantsur: driver specific included?
17:17:00 <dtantsur> gabriel-bezerra, at this stage - yes
17:17:06 <gabriel-bezerra> in the features for newton?
17:17:13 <gabriel-bezerra> we want inband inspection in oneview
17:17:19 <rloo> i bumped security groups to Ocata (in trello)
17:17:20 <gabriel-bezerra> RFE and patch sent for review
17:17:44 <dtantsur> if it hasn't had a few reviews already, chances are not too high it will get in...
17:17:47 <dtantsur> anyway, link?
17:17:55 <rloo> i moved 'keystone policy support' to 'nice to have' in trello
17:17:56 <gabriel-bezerra> just a second
17:18:18 <gabriel-bezerra> #link https://review.openstack.org/#/c/367065/
17:18:32 <gabriel-bezerra> #link https://bugs.launchpad.net/ironic/+bug/1621530
17:18:33 <openstack> Launchpad bug 1621530 in Ironic "[RFE] Inband inspection for the OneView drivers" [Undecided,New]
17:19:27 <dtantsur> gabriel-bezerra, does not look huge, but the RFE must be approved first. torture jroll after the meeting please :)
17:19:31 <devananda> rloo: to be fair, keystone policy support is already in and documented, but one of the CI patches for it isn't passing yet
17:19:37 <jroll> +1
17:19:43 <gabriel-bezerra> lol. OK
17:19:58 <dtantsur> folks, do you think we can move on?
17:20:01 <devananda> that being the preventative unit test that would prevent future API changes from landing without a policy check
17:20:04 <gabriel-bezerra> thanks
17:20:15 <rloo> devananda: so there is no coding? and/or we are talking about a bug(s) that need to be fixed?
17:20:15 <lucasagomes> dtantsur, ++
17:20:30 <rloo> devananda: oh.
17:20:47 <dtantsur> a unit test can land at any stage before stable/newton is branched IMO
17:20:52 <jroll> dtantsur: +1 for moving on
17:20:58 <dtantsur> #topic Review subteam status reports (capped at ten minutes)
17:21:07 <dtantsur> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:21:14 <dtantsur> the status report starts at line 90
17:21:16 <jroll> rloo: devananda: there's also a feature on the gerrit topic for keystone stuff
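(On the "preventative unit test" devananda mentions: a hypothetical illustration, not the actual patch under review, of how such a check can work — enumerate the policy rules the API is expected to enforce and fail if any are missing, so a new endpoint cannot land without a corresponding policy entry. It assumes a list_policies() helper along the lines of the one added by ironic's policy-in-code work; the rule names are illustrative.)

    # Hypothetical sketch of a "preventative" policy-coverage unit test;
    # the expected rule names are illustrative, not ironic's full list.
    import unittest

    from ironic.common import policy


    class TestPolicyCoverage(unittest.TestCase):

        # In a real test this list would be derived from the API
        # controllers rather than hard-coded.
        EXPECTED_RULES = [
            'baremetal:node:get',
            'baremetal:node:create',
            'baremetal:node:delete',
            'baremetal:port:get',
        ]

        def test_every_endpoint_has_a_policy_rule(self):
            registered = {rule.name for rule in policy.list_policies()}
            missing = [r for r in self.EXPECTED_RULES if r not in registered]
            self.assertEqual([], missing,
                             'API endpoints missing policy rules: %s' % missing)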
17:21:23 <dtantsur> please update, if you haven't updated your items yet
17:21:57 * dtantsur is worried by the bug count
17:23:01 <rloo> dtantsur: the 35 high bugs? :-(
17:23:12 <dtantsur> and overall number too
17:23:23 <rloo> dtantsur: are most/all of those bugs triaged?
17:23:31 <dtantsur> I wonder, however, if we should adapt Nova's procedure of closing too old bug reports
17:23:41 <dtantsur> rloo, what isn't "New" is usually triaged
17:24:08 <lucasagomes> dtantsur, I thought we had something already
17:24:14 <dtantsur> lucasagomes, not really
17:24:23 <dtantsur> they close even valid bugs if they are too old
17:24:32 <dtantsur> it looks strange at first glance, but makes sense to me on second thought
17:24:34 <lucasagomes> oh, what? hmm
17:24:47 <rloo> i'm not sure I like closing valid bugs if they are too old
17:24:53 <dtantsur> if nobody fixed a bug in 1-2 years, who will figure it out now?
17:25:03 <rloo> a bug is a bug
17:25:10 <mat128> state: Abandoned
17:25:10 <lucasagomes> yeah I'm still at the "first glance" stage, heh it doesn't look like a good solution to me
17:25:12 <mat128> or smt
17:25:13 <rloo> whether it gets fixed or not, is another question
17:25:14 <dtantsur> maybe? or maybe not a bug any more? or maybe a different bug now?
17:25:36 <dtantsur> somebody has to spend a lot of time figuring out, and it looks like people are not too keen on figuring out ancient bugs
17:25:42 <dtantsur> anyway, this is just a thought aloud :)
17:25:59 <lucasagomes> yeah
17:26:16 <rloo> maybe we need a different 'class' then; "old bugs that may not be valid". is that the 'abandon' thing? :)
17:26:33 <mat128> rloo: yeah, like abandoned ironic-specs ;)
17:26:34 <persia> I've long been an advocate of preservation of bugs: if they change, but are reproducible, let them be until someone can fix them.  That said, if a bug was a bug once, and is no longer reproducible with the given description, closing in some time period (unless someone documents how to reproduce) is very attractive (unless one wishes to support all historical branches, ever).
17:26:50 <dtantsur> "if they change, but are reproducible" who knows it?
17:26:56 <mat128> persia: that means we should filter old bugs for "are still reproducible" ?
17:27:05 <persia> dtantsur: Anyone who tries them.
17:27:06 <dtantsur> who is going to go over the list and reproduce the bugs from time to time?
17:27:10 <persia> mat128: That would be my recommendation, yes.
17:27:18 <devananda> persia: that implies all bug reports include sufficient instructions to reproduce them :)
17:27:37 <persia> devananda: And if they do not, they aren't reproducible on first triage, and the timer starts...
17:27:37 <devananda> I would support closing any bug > 1yr old that isn't reproducible any longer
17:27:42 <rloo> not just to reproduce them (and yes, they ought to), but we don't have a bot that will reproduce them.
17:28:16 <dtantsur> also, before we go too far with this discussion: infra wants us to move our dsvm jobs to Xenial for *newton*
17:28:18 <rloo> we're lucky that dtantsur even does triaging etc and I thank him for that
17:28:22 <dtantsur> :)
17:28:52 <rloo> dtantsur: xenial for newton and beyond?
17:28:56 <dtantsur> rloo, correct
17:29:28 <rloo> dtantsur: what does 'wants us to' mean? we're going to try to do it in the next 2 weeks?
17:29:30 <dtantsur> just FYI, it might start happening at any moment. they're pressing hard.
17:29:50 <dtantsur> rloo, it means: soon the jobs will start switching. maybe tomorrow, maybe next week, not sure.
17:30:10 <rloo> dtantsur: so it isn't our choice? infra will switch it when they want?
17:30:23 <dtantsur> well, more or less so, at least IIUC
17:30:24 <devananda> dtantsur: is there a way we can test that on patches, eg. with "check experimental" ?
17:30:29 <lucasagomes> dtantsur, but we don't have to do anything do we? Maybe test on xenial and tell infra it works!?
17:30:36 <lucasagomes> devananda, ++
17:30:37 <dtantsur> lucasagomes, that would be awesome
17:30:58 <dtantsur> devananda, we need at least one job for that, but first they asked us to clean up the current list of our jobs
17:31:05 <dtantsur> which is something I've been doing for 2 days already :)
17:31:56 <dtantsur> we're past 10 minutes cap for the status report. any questions, except for what I raised?
17:32:01 <rloo> zuul is already running most/all? in xenial, right?
17:32:06 <sambetts> dtantsur: thanks for doing that, I notice you fixed some mistakes I made when refactoring :D
17:32:24 <jroll> right, once this is cleaned up, infra is going to start testing our jobs on xenial and going from there
17:32:31 <dtantsur> rloo, a lot of tox jobs are xenial-only or xenial/precise
17:32:44 <dtantsur> but not dsvm jobs, these haven't switched
17:32:57 <rloo> jroll, dtantsur: thx for clarifying
17:33:05 <clarkb> the "stock" dsvm jobs are switched but working through all of the random specific ones
17:33:27 <dtantsur> aha, thanks clarkb
17:34:01 <dtantsur> #topic Stuck specs
17:34:05 <rloo> thx JayF and mat128 for migrating the install guide
17:34:13 <dtantsur> not sure we have stuck specs at this stage :)
17:34:19 <jroll> ditto
17:34:19 <mat128> rloo: =)
17:34:24 <dtantsur> anyway, anything to bring *except* for driver composition update? ;)
17:34:45 <rloo> one thing that came up that i don't have an answer for, osc plugin vs openstackclient
17:34:48 * jroll usually just skips this if nobody put any on the agenda
17:34:56 <dtantsur> #topic RFE review
17:35:01 <dtantsur> ditto, anything here?
17:35:09 <jroll> wait, what's rloo 's question?
17:35:11 <rloo> our python-ironicclient pulls in openstackclient. some people don't want that.
17:35:15 <lucasagomes> not that I know of, I think it can be skipped as well
17:35:25 <dtantsur> #topic Open discussion
17:35:30 <dtantsur> rloo, now go ahead please :)
17:35:34 <jroll> lol
17:35:35 <lucasagomes> rloo, isnt the idea to replace the ironic CLI with OSC ?
17:35:41 <rloo> but if we don't pull in openstackclient, it means that anyone currently using our plugin, will have to do something extra to use our plugin
17:35:48 <dtantsur> so yeah, different projects do it differently
17:36:07 <dtantsur> for ironic-inspector-client and tripleoclient, OSC is the only CLI, so they definitely pull in OSC dependency
17:36:18 <rloo> there isn't any consistency (yet) wrt how projects do it.
17:36:23 <lucasagomes> should this https://bugs.launchpad.net/ironic/+bug/1562258 be critical? I'm wondering because this may cause Ironic to hammer flaky BMCs, because we can't configure a longer interval for periodic tasks like sync_power_states
17:36:24 <openstack> Launchpad bug 1562258 in Ironic "Ironic does not honor the intervals passed to the @periodic decorators" [High,Triaged] - Assigned to Lucas Alvares Gomes (lucasagomes)
17:36:32 <dtantsur> note, however, that OSC folks are discussing having a basic CLI functionality in osc-lib
17:36:37 <dtantsur> if it happens, the problem will be solved
17:36:40 <lucasagomes> (btw I've a fix for that locally, just need to write the commit message)
17:36:45 <rloo> dtantsur: OH. that might work then.
17:36:55 <sambetts> ++ on the osc-lib thing
17:37:09 <jroll> lucasagomes: if it broke during this cycle, maybe yeah
17:37:11 <gabriel-bezerra> I'd put the RFE I mentioned above in the RFE requests :)
17:37:16 <rloo> dtantsur: i only brought it up cuz there was a patch wanting to remove osc from the requirements.
17:37:22 <dtantsur> lucasagomes, I usually put "critical" only when EVERYTHING IS BROKEN OH NOES
17:37:33 <rloo> dtantsur: and if we wanted to do that, i figured we should do, have a story sooner rather than later
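(Background on the plugin question: openstackclient — or, if the osc-lib idea pans out, a lighter standalone shell — discovers a project's commands through a small plugin module loaded via setuptools entry points; pulling in python-openstackclient is only needed to get the `openstack` command itself. A rough sketch of that plugin interface follows; the constants and the client instantiation are illustrative, not python-ironicclient's actual code.)

    # Rough sketch of the OSC plugin module interface; values are
    # illustrative, not copied from python-ironicclient.
    from osc_lib import utils

    API_NAME = 'baremetal'
    API_VERSION_OPTION = 'os_baremetal_api_version'
    API_VERSIONS = {'1': 'ironicclient.v1.client.Client'}


    def make_client(instance):
        """Return a baremetal client for the requested API version."""
        client_class = utils.get_client_class(
            API_NAME, instance._api_version[API_NAME], API_VERSIONS)
        # Illustrative instantiation; the real plugin wires auth/session
        # details in from the ClientManager instance.
        return client_class(session=instance.session)


    def build_option_parser(parser):
        """Add baremetal-specific options to the main CLI parser."""
        parser.add_argument('--os-baremetal-api-version',
                            metavar='<baremetal-api-version>',
                            default='1')
        return parser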
17:37:38 <gabriel-bezerra> as dtantsur said, I'll check that out with jroll later
17:37:41 <lucasagomes> jroll, right, yeah idk the answer :-) I have to investigate to see if it was this cycle or a past cycle
17:37:57 <jroll> gabriel-bezerra: feel free to link me now
17:38:17 <lucasagomes> dtantsur, right, yeah maybe leave as high then
17:38:19 <gabriel-bezerra> jroll: https://bugs.launchpad.net/ironic/+bug/1621530
17:38:20 <openstack> Launchpad bug 1621530 in Ironic "[RFE] Inband inspection for the OneView drivers" [Undecided,New]
17:38:35 <jroll> gabriel-bezerra: thanks, and the question was if we do this in newton?
17:38:40 <lucasagomes> dtantsur, I was wondering because there's no workaround for that, except changing the code
17:38:58 <gabriel-bezerra> jroll: dtantsur asked who had more features expected for newton. I mentioned that one.
17:38:58 <jroll> gabriel-bezerra: no patches for this?
17:39:01 <gabriel-bezerra> yes
17:39:04 <jroll> where?
17:39:09 <gabriel-bezerra> https://review.openstack.org/#/c/367065/
17:39:17 <jroll> thanks
17:39:22 <dtantsur> lucasagomes, I think high is high enough (pun intended). critical to me is really like "everyone drops everything they've been doing"
17:39:33 <jroll> hm, where's oneview CI on this
17:39:41 <gabriel-bezerra> I registered the RFE after the patch. xavierr will still add the reference in the commit message.
17:39:48 <gabriel-bezerra> oneview CI under maintenance
17:40:00 <gabriel-bezerra> we had a major network change in the lab last week
17:40:00 <rloo> lucasagomes, dtantsur: I agree, high is good. if it was critical, it would have been noticed in march when it was reported. I think.
17:40:09 <lucasagomes> dtantsur, rloo ack, high is it then
17:40:09 <gabriel-bezerra> things went awry
17:40:25 <rloo> lucasagomes: but we/you can fix it in newton, right?
17:40:40 <dtantsur> landing vendor stuff without CI... is something we wanted to avoid. no other objections to this RFE.
17:40:44 <lucasagomes> rloo, yeah I've found out the problem few minutes before this meeting
17:40:54 <gabriel-bezerra> We're working on fixing it
17:41:04 <gabriel-bezerra> I understand the concern
17:41:11 <lucasagomes> rloo, tho the solution is either a flaky one (reordering some imports) or an ugly one (reloading the module)
17:41:12 <rloo> lucasagomes: great, thx for fixing it. i'll take a look at your patch when you have it fixed!
17:41:30 <lucasagomes> rloo, I've an idea for a good solution too but it requires some changes in futurist
17:41:47 <rloo> lucasagomes: we can't change futurist now, right? too late for newton.
17:42:08 <lucasagomes> rloo, right. So yeah I will leave a FIXME in our code
17:42:22 <lucasagomes> until we get the futurist stuff in, which is ok because can be done in parallel no problem
17:43:08 <lucasagomes> rloo, I mean I will fix the problem with the ugly-ish/flaky solution (probably the ugly-ish) and leave a FIXME
17:43:21 <rloo> lucasagomes: :)
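(For reference on bug 1562258: the decorator in question comes from the futurist library. A minimal sketch of a periodic task declaring its own interval — the spacing the bug says is not being honored. The 300-second value and the task body are illustrative, not ironic's actual conductor code.)

    # Minimal futurist periodic-task sketch; spacing and body are examples.
    import time

    from futurist import periodics


    @periodics.periodic(spacing=300)  # intended to run every 5 minutes
    def sync_power_states():
        print('syncing power states at %s' % time.time())


    # A worker collects (callable, args, kwargs) tuples and runs them
    # on their declared schedule until stopped.
    worker = periodics.PeriodicWorker([(sync_power_states, (), {})])
    try:
        worker.start()
    except KeyboardInterrupt:
        worker.stop()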
17:46:10 <dtantsur> anything else for today? wanna talk about releases?
17:46:46 <rloo> dtantsur: do we want to meet to review eg high bugs for newton?
17:47:01 <sambetts> I think it would be cool to organise a bug smash
17:47:09 <lucasagomes> sambetts, ++
17:47:22 <lucasagomes> we recently had one downstream (for bugzilla) and it was quite helpful
17:47:23 <dtantsur> ++ for a bug smash. we're already having ones downstream, just discussing having upstream ones as well today
17:47:28 <dtantsur> yes, this :)
17:47:49 <rloo> my focus (now) is to see if there are any bugs that we MUST/should fix for newton.
17:48:28 <rloo> although HIGH bugs aren't usually quick fixes :-(
17:48:31 <dtantsur> somebody wants to come up with a doodle for such event?
17:49:43 <dtantsur> I guess this silence means "please do it yourself", right? ;)
17:49:57 <lucasagomes> dtantsur, just a card on the "to do" column ?
17:50:03 <sambetts> rloo: there are certainly some bugs looking at the list that are more like RFEs
17:50:19 <dtantsur> lucasagomes, sorry?
17:50:36 <lucasagomes> dtantsur, oh I was looking at the newton priorities board
17:50:50 <dtantsur> roll call: who wants to participate in a bug smash this one next week?
17:50:56 <dtantsur> s/one/or/
17:51:03 <lucasagomes> ++
17:51:04 <sambetts> o/
17:51:07 <lucasagomes> next week if possible
17:51:15 <rloo> am thinking. next week is too late
17:51:26 <rloo> i mean, if we want to identify any bugs that need to be fixed for newton
17:51:28 * sambetts is on PTO from mid next week
17:51:34 <lucasagomes> rloo, oh right, yeah agreed
17:51:39 <dtantsur> we can do both, if enough people attend
17:51:41 <rloo> anyone avail wed?
17:52:06 <sambetts> I should be
17:52:13 <rloo> or i can take a look wed am my time and ping anyone who is around
17:52:14 <dtantsur> Wed 3:30 pm UTC - roughly 5:00 pm UTC is fine by me
17:52:26 <lucasagomes> I might be around too, but I won't stay until late
17:52:40 <krtaylor> maybe use the ironic-qa meeting timeslot?
17:52:41 <dtantsur> Europe morning is also fine
17:52:44 <rloo> that works for me, or even earlier.
17:52:48 <devananda> I'll be here wed.
17:53:11 <rloo> krtaylor: the ironic-qa meeting time is a bit late for the europeans i think.
17:53:24 <dtantsur> we can declare the whole wed the "ironic bug smash day", and coordinate on irc. wdyt?
17:53:30 <krtaylor> yeah, 1700UTC
17:53:30 <devananda> dtantsur: ++
17:53:32 <rloo> dtantsur: ++
17:53:55 <lucasagomes> dtantsur, ++
17:54:07 <lucasagomes> krtaylor, too late, I'll probably start earlier than that
17:54:07 <jroll> wednesday is good for me
17:54:36 <dtantsur> okay, I'll send an announcement to the ML
17:54:50 <rloo> thx dtantsur
17:55:35 <dtantsur> anything else folks want to bring up?
17:55:49 <sambetts> oh dear... /me just found a giant chain of duplicate bugs, some medium, some high, that I would say are an RFE (to do with mapping neutron ports to ironic ports), not a bug
17:56:14 <dtantsur> yes, this RFE-bug border is understood very differently by different contributors
17:56:51 <jroll> well, those are from before RFEs :)
17:58:11 <rloo> meeting over?
17:58:17 <sambetts> yeah... I think we might need to do quite a bit of cleanup for this sort of thing. /me opened a bug just now and it began "I have a cluster of nodes in an Icehouse OpenStack"
17:58:17 <lucasagomes> yeah seems so
17:58:18 <dtantsur> yes, I think we can wrap up
17:58:28 <dtantsur> thanks all!
17:58:29 <NobodyCam> thank you all
17:58:34 <dtantsur> #endmeeting ironic