17:00:49 <jroll> #startmeeting ironic
17:00:50 <openstack> Meeting started Mon Nov 28 17:00:49 2016 UTC and is due to finish in 60 minutes.  The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:52 <vdrok> o/
17:00:53 <openstack> The meeting name has been set to 'ironic'
17:00:54 <sambetts> o/
17:00:55 <dtantsur> o/
17:00:55 <xek> o/
17:00:56 <lucasagomes> o/
17:01:00 <aarefiev> o/
17:01:01 <xavierr> o/
17:01:01 <yuriyz|2> o/
17:01:05 <rloo> o/
17:01:12 <JayF> o/
17:01:21 <jroll> hey everyone :)
17:01:27 <rajinir> o/
17:01:31 <jroll> #topic announcements and reminders
17:01:35 <rpioso> o/
17:01:45 <jroll> #info don't forget to sign up for the PTG
17:01:47 <jroll> #link https://www.eventbrite.com/e/project-teams-gathering-tickets-27549298694
17:01:54 <rama_y> o/
17:01:55 <stendulker> o/
17:02:03 * jroll has no other announcements or reminders, does anyone else have one?
17:02:34 <jlvillal> o/
17:03:22 <NobodyCam> o/
17:03:28 <vgadiraj> o/
17:03:49 <TheJulia> o/
17:03:56 <jroll> #topic subteam status reports
17:04:04 <jroll> as always, those are here:
17:04:08 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:04:11 <jroll> line 70 this time
17:04:47 <joanna> o/
17:05:30 <rloo> dtantsur: how far behind are you with bug triaging? should we hold some bug-triage-thing?
17:05:32 <jroll> #info dtantsur needs help with bug triage, volunteers very welcome
17:05:46 <rloo> ^^ volunteers would be great
17:05:55 <dtantsur> rloo, I've not done essentially anything for a while, and probably won't be able to any time soon.
17:06:06 <jroll> looks like we're making good progress on lots of this stuff
17:06:10 <dtantsur> at least cleaning up "New" and checking health of "In Progress" things
17:06:21 <JayF> I don't want to become sole person in charge, but I'd be glad to help generally with bug triage
17:06:28 <rloo> dtantsur: so it'd be good to have a volunteer catch up and keep an eye on new ones
17:06:38 <rloo> thx JayF!!!
17:06:46 <lucasagomes> I'm adding it to my list, tomorrow I'll help triage some of these bugs
17:06:54 <dtantsur> JayF, lucasagomes, feel free to check any of these 42 new bugs
17:07:57 <vgadiraj> I'm down to spend some time helping with bug triage. I am new but if I could be of help, I'll be glad to contribute.
17:08:40 <dtantsur> thanks all!
17:09:09 <lucasagomes> vgadiraj, sure thing and welcome :-)
17:09:28 <jroll> thank you all :)
17:10:20 <rloo> I see that we have 5? specs that need reviews. Are any at the point where spending eg 1 hour will get it done?
17:10:45 <jroll> I haven't looked lately, but wonder if rolling upgrades is at that point
17:11:08 <dtantsur> might be
17:11:22 <rloo> jroll: oh, i skimmed the latest revision but it looks close.
17:11:25 <jlvillal> jroll: I have been away from the rolling upgrade testing work for about the last two weeks. Starting to work on it again now.
17:11:30 <rloo> xek: what do you think?
17:11:32 <yuriyz|2> is there a list of specs with high priority?
17:11:38 <jroll> jlvillal: yeah, wondering about the spec :)
17:11:40 <jlvillal> jroll: Not sure on the rolling upgrade development work.
17:11:58 <rloo> yuriyz|2: i am looking at the subteam reports, where they say spec needs reviews.
17:12:05 <jroll> yuriyz|2: also http://specs.openstack.org/openstack/ironic-specs/priorities/ocata-priorities.html#smaller-things and anything on that page without the spec merged
17:13:42 <jroll> ok, so I trimmed up the priorities for the week a bit
17:13:51 <jroll> anything that feels missing there? maybe the next driver comp patch?
17:13:53 * gabriel-bezerra wondering about the status of events from neutron stuff
17:13:59 <dtantsur> jroll, yes please
17:14:13 <jroll> dtantsur: the move node_create thing, yes?
17:14:19 <dtantsur> jroll, yep
17:14:20 <jroll> or maybe just both
17:14:22 <jroll> thanks
17:14:30 <JayF> It's not urgent, and probably shouldn't go on this week's priorities
17:14:34 <dtantsur> I've just finished another huge patch, but let's start with node_create
17:14:45 <JayF> but the specific-faults spec is "close enough" to get some design input from others before it goes too much further
17:15:00 <JayF> so I don't think it should be added to that list, but if anyone wants to take a look it'd be helpful
17:15:04 <dtantsur> link?
17:15:17 <JayF> https://review.openstack.org/#/c/334113/
17:15:29 * dtantsur adds to his todo list for tomorrow
17:15:52 <jroll> rloo ran away before RFE review, heh
17:16:02 <dtantsur> heh
17:16:05 <jroll> anything else here before that?
17:16:26 <jroll> #topic RFE review
17:16:34 <jroll> rloo: hey :)
17:16:34 <yuriyz|2> if we move node create to the conductor, looks like we should have a start-end-fail notification schema in this case
17:16:50 <yuriyz|2> what you think?
17:17:02 <lucasagomes> yuriyz|2, ++ (but it's off topic)
17:17:07 <jroll> yuriyz|2: let's take it to the patch :)
17:17:15 <dtantsur> yuriyz|2++
17:17:22 <yuriyz|2> agreed, I will prepare a spec change
17:17:34 <rloo> https://bugs.launchpad.net/ironic/+bug/1630442
17:17:34 <openstack> Launchpad bug 1630442 in Ironic "[RFE] FSM event for skipping automatic cleaning" [Wishlist,New] - Assigned to Varun Gadiraju (varun-gadiraju)
17:17:41 <rloo> so, does that need a spec? ^^
17:17:56 <jroll> I think so, it's changing the state machine (which usually requires a spec yes?)
17:18:01 <dtantsur> ++ for usually requires
17:18:03 <rloo> i'm not even sure we need it ?
17:18:16 * dtantsur also thinks that "skip" is a horrible name for a transition
17:18:30 <rloo> so folks think the idea has merit, just needs spec?
17:18:37 <TheJulia> dtantsur: ++
17:18:38 <lucasagomes> yeah, usually when it touches the FSM we require a spec
17:18:41 <yuriyz|2> ++
17:18:44 <NobodyCam> dtantsur: ++ on the name thing
17:18:44 <lucasagomes> I think we should have a spec for this case
17:18:50 <dtantsur> rloo, how to put it.. I'd not reject it right away before hearing their use case.
17:18:53 <jroll> rloo: I'm not sure we need it either, but I'm willing to hear it out
17:18:55 <jroll> dtantsur: ++
17:19:07 <rloo> ok thx.
17:19:10 <rloo> https://bugs.launchpad.net/ironic/+bug/1633299
17:19:10 <lucasagomes> and I kinda like the simplicity of always transitioning through all the states (even if it's no-op, like cleaning in this case)
17:19:10 <openstack> Launchpad bug 1633299 in Ironic "[RFE] Overcloud deploy resiliency " [Undecided,New]
17:19:17 <lucasagomes> not sure if we will discuss it here tho
17:19:32 <jroll> I'd like to rename this one before doing anything with it :P
17:19:47 <dtantsur> ++
17:20:19 <jroll> I'm inclined to say this isn't impossible in the general case and reject it on that premise
17:20:31 <TheJulia> agreed, this seems like several issues rolled into one
17:20:32 <dtantsur> I don't know what to do about this request (as you can guess it's not the first time I hear it)
17:20:41 <jroll> isn't possible****
17:21:09 <JayF> Yeah, I'm thinking that RFE is difficult/impossible for Ironic
17:21:20 <rloo> this is like an ops thing. can we make it easier/better for them? documentation?
17:21:31 <JayF> unless you have a BMC that can somehow confirm, in a way that ironic can detect, that a reboot has actually happened
17:21:38 <vgadiraj> rloo: I assigned bug 1630442 to myself months ago when I was looking for something to contribute on, let me move it to unassigned.
17:21:38 <openstack> bug 1630442 in Ironic "[RFE] FSM event for skipping automatic cleaning" [Wishlist,New] https://launchpad.net/bugs/1630442 - Assigned to Varun Gadiraju (varun-gadiraju)
17:21:42 <dtantsur> maybe we should reduce the timeout when we wait for the first heartbeat
17:21:45 <lucasagomes> yeah, especially with network segregation. If it wasn't for that, a simple ping test could be used
17:21:50 <rloo> thx vgadiraj
17:21:58 <JayF> lucasagomes: ping test makes lots of really bad assumptions
17:21:58 <sambetts> lucasagomes: ++
17:22:00 <jroll> well, the RFE is to ensure that the instance booted properly
17:22:13 <jroll> which yeah, I agree isn't really possible
17:22:22 <lucasagomes> JayF, right, totally. But, it would be better than having no check at all
17:22:24 <JayF> lucasagomes: is it Ironic's problem, and should a node not go active, if a deployer chooses to deploy an image which doesn't set up networking, or firewalls by default?
17:22:31 <dtantsur> our pingtest is heartbeat
17:22:31 <JayF> lucasagomes: I think a bad check is 100x worse than no check at all
17:22:44 <lucasagomes> jroll, would it be possible to somehow tell neutron to perform a ping test (in the network segregation case) for us?
17:22:46 <jroll> dtantsur: heartbeat is outside of "make sure the instance booted correctly", though
17:22:48 <dtantsur> maybe we should try rebooting if we don't receive a heartbeat in e.g. 20 minutes?
17:22:52 <dtantsur> ah, the instance
17:22:56 <lucasagomes> JayF, right
* dtantsur confused this with another complaint
17:22:59 <JayF> Right now we make no guarantees, I think that's the sanest way to remain. Some higher-level function (like OOO?) could use something like pingchecking to trigger a redo
17:23:27 <lucasagomes> JayF, let's maybe keep it as a "generic" test... I think the main problem with this RFE is that we don't have a way for ironic to reach the node after the deployment
17:23:28 <rloo> hmm. if we provided some generic step/hook so the operator could specify what tests to run to test if it booted up... ?
17:23:33 <jroll> as much as I wish we could do this, we honestly can't do it in a way that works in all cases
17:23:39 <TheJulia> JayF: That is my feeling as well
17:23:42 <lucasagomes> if we get past that we could perform <some> test to make sure it has booted
17:23:56 <dtantsur> well, it can just as well happen outside of ironic, right?
17:24:02 <JayF> dtantsur: exactly.
17:24:05 <dtantsur> run test, tear down and reschedule the node if it fails
17:24:06 <lucasagomes> dtantsur, could yeah
17:24:10 <sambetts> +1
17:24:15 <JayF> And since seeing if the node is 'up' is completely image/deploy/node independent
17:24:18 <rloo> well, if you are using ironic via nova?
17:24:35 <dtantsur> rloo, tripleo does
17:24:38 <JayF> I think it's difficult/impossible for Ironic to tackle generically.
17:25:02 <rloo> this seems maybe? worthy of a design session...
17:25:05 <dtantsur> I think it belongs to tripleo-heat-templates. maybe wrap the Nova resource into something running the validation
17:25:09 <TheJulia> I think this is really a problem that can only be implemented from nova up.  But I don't think the semantics exist for "I've started and now I need to reschedule"
17:26:11 <dtantsur> I'd not do something in ironic that is just as easy to implement outside of ironic..
17:26:27 <TheJulia> I don't think we can without the ability to support rescheduling
17:26:33 <TheJulia> and knowing what the user wanted to schedule on precisely
* JayF notes that he and TheJulia, the two most operator-ish folks in this meeting, both think implementing this RFE is hard/impossible to do in ironic
17:26:47 <lucasagomes> dtantsur, I agree in parts, even outside ironic, I don't think it's easy
17:26:54 <dtantsur> right, rescheduling is our of question completely
17:26:57 <dtantsur> * out
17:27:00 <lucasagomes> dtantsur, the node will go to active first then gets rescheduled, it's kinda odd
17:27:36 <dtantsur> well, outside of ironic, you can delete the nova instance, (optionally move node to maintenance) and create it again
17:27:42 <jroll> so, sounds like we should reject this yes?
17:27:44 <sambetts> How does nova handle it if you try to boot an image that doesn't boot successfully?
17:27:59 <lucasagomes> sambetts, good q, idk either
17:28:00 <JayF> sambetts: nova agent is how we did it at Rackspace
17:28:03 <jroll> sambetts: it goes to active, boots the vm
17:28:07 <jroll> does nothing else for you
17:28:12 <JayF> sambetts: IDK if that was upstream or not, but downstream at Rackspace we had a nova-agent callback
17:28:29 <TheJulia> jroll: I agree we should reject it
17:28:33 <JayF> ++ Reject
17:28:35 <sambetts> me too
17:29:00 <gabriel-bezerra> how about the '1) The node just doesn't reboot (seen on HP, Dell and Supermicro)' part?
17:29:12 <JayF> gabriel-bezerra: that, I think, is a lot more interesting
17:29:33 <dtantsur> gabriel-bezerra, this bit can be converted to a bug
17:29:36 <jroll> well, we use power off/on now, so hopefully that's mitigated
17:29:40 <jroll> but yeah, sounds like a bug
17:29:46 <JayF> jroll: don't we also have an in-band reboot option?
17:29:46 <lucasagomes> yeah
17:29:59 <jroll> JayF: in-band power off
17:30:04 <JayF> jroll: aha
17:30:18 <jroll> and then bmc poll until off
17:30:36 <gabriel-bezerra> I wonder if it is a soft-power-off vs press-and-hold issue
17:30:43 <TheJulia> There is a possibility the BMC may have decided to go on a vacation at this point and there is not much we can do then.
17:30:58 <dtantsur> TheJulia++ this happens way too often
17:31:36 <JayF> TheJulia: dtantsur: or even a crazy saturated BMC network problem or similar ... ipmi is udp :x
17:31:56 <TheJulia> JayF: That also is a possibility
17:32:00 <dtantsur> yeah. if BMC is not reliable, we can't do much
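[editor's sketch] The "power off, poll the BMC until it reports off, power on" sequence jroll describes above can be sketched roughly as follows. The function name and the callable-based BMC interface are hypothetical stand-ins, not ironic's actual power driver API:

```python
import time

def reboot_via_power_cycle(get_power_state, set_power_state,
                           poll_interval=1.0, timeout=60.0):
    """Power the node off, wait until the BMC actually reports 'off',
    then power it back on. get_power_state/set_power_state stand in
    for real BMC (e.g. IPMI) calls."""
    set_power_state("off")
    deadline = time.monotonic() + timeout
    while get_power_state() != "off":
        if time.monotonic() >= deadline:
            # Mirrors the concern above: an unresponsive or flaky BMC
            # leaves us with nothing better to do than time out.
            raise TimeoutError("BMC never reported power off")
        time.sleep(poll_interval)
    set_power_state("on")
```

Polling for the confirmed "off" state is what mitigates the "node just doesn't reboot" symptom, but as noted above it cannot help when the BMC itself stops responding.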
17:32:03 <jroll> okay, I've marked this wontfix with a message
17:32:07 <dtantsur> thanks!
17:32:16 <jroll> rloo: what's next?
17:32:20 <rloo> did you see comment #1, about provisioning network?
17:32:41 * dtantsur probably participated in the discussion resulting in this comment
17:32:50 <jroll> rloo: what about it?
17:32:57 <rloo> dtantsur: if that looks familiar to you, maybe you could comment.
17:33:08 <rloo> jroll: the idea about keeping provisioning network around
17:33:19 <jroll> rloo: that's a massive security issue
17:33:33 <dtantsur> rloo, I didn't like the idea back then, nor do I like it now
17:33:38 <rloo> jroll: right. which they mention too.
17:33:47 <rloo> jroll: just wanted us to give our opinion on it :)
17:33:50 <rloo> ok, next up
17:33:51 * lucasagomes doesn't like it either
17:33:53 <rloo> https://bugs.launchpad.net/ironic/+bug/1633756
17:33:53 <openstack> Launchpad bug 1633756 in ironic-lib "RFE: Add initial static type hint checking support" [Wishlist,In progress] - Assigned to John L. Villalovos (happycamp)
17:33:53 <jroll> rloo: IMO it isn't acceptable
17:33:53 <jroll> ok
17:34:05 <rloo> i think jlvillal mentioned it at the summit
17:34:12 <lucasagomes> the "best" way I can think of, is if neutron could perform a test for us in any of the networks
17:34:16 <jlvillal> :)
17:34:18 * lucasagomes is moving on
17:34:26 <jroll> I think I stated my feelings on this at the summit, I don't think it's a very good use of our time, but not going to block it myself
17:34:29 <rloo> but i don't recall what we decided
17:34:38 <lucasagomes> I don't think it needs a spec
17:34:42 <lucasagomes> seems straightforward
17:34:47 <JayF> I sorta wonder what's the point of this, especially if it's not going to be openstack-wide
17:34:51 <lucasagomes> and agree with jroll about priority
17:34:56 <dtantsur> I'm with jroll and JayF on it
17:35:01 <jlvillal> It is low on my priority list. But hope to get more time to play around with it in my spare time.
17:35:03 <jroll> JayF: helps us avoid a certain class of bugs
17:35:05 <JayF> Is there any movement to get this sort of type checking in any other OpenStack project?
17:35:22 <JayF> I'm just curious if this is something we should do at a larger-than-single-project level, if it gets done
17:35:22 <rloo> JayF: movement sometimes starts with one ball rolling
17:35:26 <jlvillal> I don't know. Someone has to be first.
17:35:27 <jroll> not that I'm aware of
17:35:29 <lucasagomes> JayF, we could start the trend :-)
17:35:35 <jroll> but yeah, this sort of thing would prove it out
17:35:45 <JayF> Then I'm with the other opinions of "I don't personally care, but don't wanna block either"
17:35:46 <dtantsur> worth at least raising to other folks
17:35:48 <rloo> unless folks are against it, why don't we approve and see what (if anything) happens
17:36:07 <dtantsur> my only concern is whether it's going to bloat the code and confuse contributors
17:36:07 <jroll> idk, I think as a 'thing to play with' there's plenty of other python projects to play with this in
17:36:27 <dtantsur> I'm not sure I'd like to -1 patches from newcomers saying "please add static type hints"
17:36:40 * dtantsur already does it for release notes often enough
17:36:51 <jroll> dtantsur: yep, that's a good point
17:37:00 <jlvillal> I'm hoping that this will catch some bugs and improve code quality. Once it is implemented.
17:37:13 <lucasagomes> overall I think knowing what the method returns/receives kinda helps with code quality and understanding of it
17:37:14 <jlvillal> Hard to say though, until the work is done.
17:37:31 <dtantsur> jlvillal, tbh I'm not convinced that a substantial share of our bugs is due to a wrong type sent into or out of a call
17:38:12 <jlvillal> Could be. I'm not sure.
17:38:19 <rloo> do we want to vote? i love votes :)
17:38:27 <TheJulia> Maybe we bless it for ironic-lib, and kind of see where it goes?
17:38:51 <TheJulia> rloo: only if it is not a boolean voting option :)
17:39:11 <dtantsur> TheJulia, my only question is: are we going to require it for all patches from now on?
17:39:22 <rloo> I like TheJulia's idea too. start with ironic-lib.
17:39:29 <TheJulia> dtantsur: I don't think so, but maybe add a test in eventually
17:39:36 <rloo> dtantsur: we'd only require it if we felt like it was worthwhile.
17:39:39 <TheJulia> that way we self-require later on
17:39:49 <NobodyCam> who is going to do the first retrofit
17:39:51 <TheJulia> IF we see value
17:39:58 <gabriel-bezerra> IIRC, it is optional even for the checker
17:40:00 <jlvillal> I don't think we would require initially. See what happens. Is it worthwhile.
17:40:00 <rloo> dtantsur: i think after ironic-lib is done, we can evaluate it and decide then.
17:40:05 <JayF> NobodyCam: the RFE was filed by jlvillal, and aiui he's the most interested party
17:40:16 * dtantsur is fine with that
17:40:25 <NobodyCam> okay :)
17:40:27 <jlvillal> I would do the work. Though it is not at the top of my priority list either.
17:40:37 <jlvillal> As there are much more important things to work on.
17:40:48 <jlvillal> More of a spare time thing for me.
17:40:50 <jroll> okay, if others are fine, I'm fine
17:41:01 * jlvillal needs to find more interesting things to do on weekends ;)
17:41:03 <rloo> sounds good. thx all. one more ...
17:41:05 <rloo> https://bugs.launchpad.net/ironic/+bug/1634118
17:41:06 <openstack> Launchpad bug 1634118 in Ironic "[RFE] Auto cleaning after instance deletions: secure and non-secure projects" [Undecided,New]
17:41:47 <jroll> jlvillal: fyi left a comment on that RFE about it being approved for ironic-lib only
17:41:53 <jlvillal> jroll: Thanks
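[editor's sketch] For context on the static type hint discussion above, a minimal example of what such annotations buy. The checker is assumed to be mypy (one common option; the RFE patches may use different tooling), and `find_port` is a hypothetical helper, not real ironic-lib code:

```python
from typing import Dict, List, Optional

def find_port(ports: List[Dict], address: str) -> Optional[Dict]:
    """Return the first port dict whose MAC address matches, or None."""
    for port in ports:
        if port.get("address") == address:
            return port
    return None

# A static checker flags any caller that indexes the result without
# handling the None case, e.g. find_port(ports, mac)["uuid"] -- the
# class of bug dtantsur and jlvillal debate the prevalence of above.
```

The annotations are ignored at runtime, so adopting them in ironic-lib first (as agreed) carries no behavioral risk; the cost discussed above is purely in review overhead for contributors.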
17:42:01 <yuriyz|2> there is no support now on the keystone side
17:42:30 <jroll> rloo: I think for me this one is similar to the state machine rfe, I don't like it but willing to hear it out in a spec
17:42:31 <dtantsur> hmm, so it boils down to selecting clean steps for automated cleaning, right?
17:42:40 <TheJulia> I'm really not a fan of having any concept of classification of data, because people misclassify data all the time
17:42:53 <jroll> my biggest problem is the ironic operator setting it
17:42:55 <lucasagomes> but we can do it already, with priorities ?
17:43:00 <jroll> rather than the project/user themselves
17:43:01 <lucasagomes> not sure if I grasp the RFE correctly
17:43:06 <JayF> I'm really nervous about adding more knobs to disable automated cleaning, because I think it adds more ambiguity around data security and guarantees
17:43:08 <jroll> lucasagomes: this is per-tenant decision
17:43:11 <jroll> JayF: ++
17:43:17 <lucasagomes> ohh
17:43:18 <dtantsur> jroll, which brings us back to passing complex data from nova :D
17:43:20 <yuriyz|2> we should have keystone project metadata (not present now)
17:43:25 <jroll> dtantsur: :|
17:43:44 <xavierr> JayF++
17:43:44 <dtantsur> JayF, actually I thought about proposing removing automated_clean option.. I hate that TripleO disabled it.
17:43:48 <rloo> yuriyz|2: keystone project == based on the tenant of the instance?
17:43:58 * dtantsur wants to see a spec for sure
17:44:00 <TheJulia> JayF: +1000
17:44:00 <yuriyz|2> tenant based
17:44:02 <JayF> My suggestion to whoever wrote this RFE: Write a custom hardware manager
17:44:11 <JayF> that skips cleaning based on some metadata on the node, or on disk
17:44:18 <rloo> JayF: yuriyz|2 proposed it
17:44:29 <JayF> rather than adding another mechanism to ironic to skip cleaning steps and make whether cleaning happened more ambiguous
17:44:39 <lucasagomes> JayF, I agree with it
17:44:45 <JayF> lucasagomes: with me or with the rfe?
17:44:50 <lucasagomes> with you
17:44:54 <JayF> cool, ty :D
17:45:30 <rloo> yuriyz|2: you have a usecase for that?
17:45:34 <jroll> so, reject or hear this out in a spec?
17:45:36 <lucasagomes> having N interfaces to disable/enable the same things sounds like a bad UX, terrible for troubleshooting as well
17:46:00 <yuriyz|2> no, this is not my priority currently
17:46:03 <TheJulia> I'm with JayF, I just think it is asking for trouble to have more knobs and to expect each tenant to be set up correctly with some sort of metadata eventually.  I would immediately see operators demand an override knob.
17:46:06 <JayF> I mean, I am -1 to the RFE as written, and am skeptical that a spec could change my mind
17:46:16 * dtantsur remembers that we have an RFE now to skip only in-band cleaning.. which adds even more mess to the picture.
17:46:28 <jroll> dtantsur: @_@
17:46:32 <rloo> ok. yuriyz|2 said it isn't a priority so even if we ask for a spec, it probalby won't happen soon. why don't we reject and someone can bring it up again if they need/want it
17:46:37 <jroll> rloo++
17:46:44 <rloo> you ok with that yuriyz|2?
17:46:45 <dtantsur> jroll, I've requested a spec on that... but I don't believe it's going to land
17:46:51 <yuriyz|2> agree
17:46:53 <lucasagomes> dtantsur, hah
17:47:02 <TheJulia> I'm for rejecting as well.
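[editor's sketch] JayF's suggested alternative (a custom hardware manager that skips cleaning based on node metadata) might look roughly like this. The base class here is a stub standing in for ironic-python-agent's `HardwareManager`, and the `skip_disk_erase` property name is invented for illustration:

```python
class HardwareManager(object):
    """Stub; a real manager subclasses
    ironic_python_agent.hardware.HardwareManager and is registered
    via an entry point."""

class SkipCleaningHardwareManager(HardwareManager):
    def get_clean_steps(self, node, ports):
        # 'node' is the node dict the conductor passes to the agent.
        # If the operator flagged this node, advertise no erase steps,
        # making in-band automated cleaning a no-op for it -- without
        # adding a new ironic-side knob.
        if node.get("properties", {}).get("skip_disk_erase"):
            return []
        return [{
            "step": "erase_devices",
            "priority": 10,
            "interface": "deploy",
            "reboot_requested": False,
            "abortable": True,
        }]
```

This keeps the decision with the deployer, which is JayF's point above: it avoids another ironic mechanism that makes "did cleaning happen?" more ambiguous.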
17:47:08 <rloo> thx yuriyz|2 and everyone else. That's it for my 4 today :)
17:47:09 <dtantsur> jroll, what I would love to see though, is conductor not starting IPA if all requested *manual* clean steps are OOB
17:47:13 <dtantsur> but this is offtopic
17:47:24 * rloo passes the baton back to jroll
17:47:25 <gabriel-bezerra> i don't think the tenant is the place for this. even within a single tenant there might be classified and unclassified instances.
17:47:28 <jroll> thanks rloo
17:47:41 <jroll> rloo: are you marking that rejected then?
17:47:48 <NobodyCam> rloo: thank you for this section!
17:47:51 <jroll> #topic open discussion
17:47:54 <jroll> anybody have a thing?
17:47:56 <rloo> jroll: yeah, i'm going to go through them all and make sure they're marked or whatever
17:48:02 <jroll> cool, thanks
17:48:17 <JayF> dtantsur: that sounds sorta like a bug to me, presuming you can specify interfaces in manual cleaning (i.e. rather than just assuming a step existing in OOB precludes it from existing IB)
17:49:02 <rloo> NobodyCam: yw :)
17:49:03 * jroll waits a minute
17:49:21 <krtaylor> sorry if it was already discussed, just wanted to make sure all ( jlvillal ) are ok with merging ironic-qa back into this meeting and handling the work via subteam reports
17:49:21 <dtantsur> JayF, yeah.. we have this problem with drac RAID which is fully OOB
17:49:29 <xavierr> do we have a topic for 3rd party CI?
17:49:31 <jroll> krtaylor: oh yeah, jlvillal pinged me this morning
17:49:35 <jlvillal> I am okay with canceling the QA meeting.
17:49:49 <krtaylor> sweet, I'll make a note in the meeting page
17:49:50 <jlvillal> Thanks krtaylor
17:49:52 <jroll> krtaylor: do you mind doing the irc-meetings patch?
17:50:04 <krtaylor> jroll, sure, will do
17:50:08 <jlvillal> krtaylor: If you need help, let me know.
17:50:11 <jroll> xavierr: we don't have a standing topic here, but if someone has something to bring up they are welcome to add it to the agenda
17:50:47 <krtaylor> xavierr, did you have a CI question?
17:50:52 <JayF> jroll: rloo: Thought maybe for these updated RFEs: Should we maybe link to the meeting log so whoever filed the bug can see the discussion that led to the decision?
17:51:05 <rloo> JayF: yup, that's my plan!
17:51:09 <gabriel-bezerra> btw, anyone doing zuul + ansible for 3rd party ci?
17:51:15 <JayF> rloo: awesome
17:51:21 <xavierr> OneView CI is back. we are working to fix https://bugs.launchpad.net/ironic/+bug/1503855 and bring the agent_pxe_oneview job back
17:51:21 <openstack> Launchpad bug 1503855 in Ironic "Set boot device while server is on" [High,In progress] - Assigned to Galyna Zholtkevych (gzholtkevych)
17:51:32 <TheJulia> xavierr: \o
17:51:33 <TheJulia> err
17:51:34 <gabriel-bezerra> instead of zuul + oneview
17:51:36 <TheJulia> \o/
17:51:39 <dtantsur> maybe we need subsection for each CI status?
17:51:41 <xavierr> \o/
17:51:47 <dtantsur> or just report it under your driver
17:51:48 <gabriel-bezerra> \o/
17:51:54 <dtantsur> we already have sections for them
17:51:55 <rloo> ++ report under the driver
17:51:57 <xavierr> dtantsur: yeah, good catch
17:52:17 <krtaylor> gabriel-bezerra, we use puppet
17:52:18 <gabriel-bezerra> ++ report under the driver
17:52:31 <gabriel-bezerra> oops, zuul + jenkins**
17:53:04 <dtantsur> gabriel-bezerra, last time I've heard about it, this method was not yet recommended for 3rd party CI
17:53:04 <krtaylor> infra requires status on the test systems page, can we link to that? I'd hate to start a new place to record status
17:53:12 <dtantsur> gabriel-bezerra, check with #openstack-infra
17:53:17 <krtaylor> ++
17:53:24 <gabriel-bezerra> I've seen something about upstream infra changing jenkins for ansible.
17:53:34 <gabriel-bezerra> dtantsur: thanks. i'll check it
17:53:45 <jroll> gabriel-bezerra: I believe the current recommendation is keep using jenkins until zuul v3
17:54:29 <gabriel-bezerra> thanks
17:54:35 <krtaylor> gabriel-bezerra, there have been several talks about ansible over the years, but not sure latest status
17:55:13 <xavierr> v3 openstack zuul or netflix zuul? :P lol
17:55:21 <jroll> anything else here?
17:55:30 <xavierr> nope
17:55:34 <NobodyCam> thank you all great meeting!
17:55:42 <gabriel-bezerra> thank you
17:55:46 <jroll> yes, thanks all, see you next time
17:55:48 <TheJulia> Thank you everyone
17:55:48 <jroll> #endmeeting