17:00:00 <jroll> #startmeeting ironic
17:00:02 <openstack> Meeting started Mon Mar 27 17:00:00 2017 UTC and is due to finish in 60 minutes.  The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:05 <openstack> The meeting name has been set to 'ironic'
17:00:08 <vdrok> o/
17:00:08 <lucasagomes> o/
17:00:09 <baha> o/
17:00:12 <xavierr> o/
17:00:12 <NobodyCam> o/
17:00:14 <mariojv> hi o/
17:00:14 <milan> o/
17:00:18 <rama_y> o/
17:00:19 <jroll> mmm, this chair is comfy, it's been a while
17:00:22 <jlvillal> o/
17:00:29 <rloo> o/
17:00:30 <stendulker> o/
17:00:31 <TheJulia> o/
17:00:32 <NobodyCam> :p
17:00:34 <joanna> o/
17:00:36 <jroll> hi everyone!
17:00:42 * rloo hopes jroll doesn't fall asleep in comfy chair...
17:00:48 <TheJulia> heh
17:00:59 <jroll> pft, I'm wide awake :D
17:01:03 <vgadiraj_> o/
17:01:04 <jroll> #topic announcements and reminders
17:01:08 <jroll> couple things
17:01:17 * NobodyCam now has a monty python skit running through his head
17:01:17 <mjturek> o/
17:01:22 <crushil> \o
17:01:30 <jroll> dmitry is out this week
17:01:49 <jroll> I'm also planning to do a nice small release of ironic before we start slamming features home
17:01:52 <jroll> and ironicclient
17:01:53 <Michael-ZTE> \o
17:01:54 <jroll> and ironic-lib
17:01:58 <jroll> still need to check the rest
17:02:07 <jroll> will do those this afternoon
17:02:13 <TheJulia> jroll: I think bifrost as well, will double check in a little bit
17:02:23 <lucasagomes> nice, I plan to release sushy this week as well
17:02:23 <jroll> TheJulia: nice, thanks
17:02:23 <rloo> jroll: hmm, can we wait til we land the api for dynamic drivers?
17:02:48 <rloo> jroll: ah, maybe not. i'm not sure it'll be ready in time.
17:02:51 <jroll> rloo: maybe, let's talk later
17:02:52 <TheJulia> One other announcement.  I sent an email about a half hour ago to the cores that I would appreciate feedback on in the next day or two.  Thanks in advance!
17:02:55 <rpioso> o/
17:03:23 <jroll> thanks TheJulia
17:03:29 <jroll> any other announcements?
17:04:18 <jroll> #topic review subteam status updates
17:04:24 <jroll> as always, those are here
17:04:28 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:04:35 <jroll> line 79
17:04:58 * jroll gives a friendly reminder to do these before the meeting, not during :)
17:05:38 <jroll> vsaienk0: nice work on standalone stuff :)
17:05:47 * rloo just updated Bugs stats
17:06:49 <lucasagomes> ++ vsaienk0
17:06:51 <alezil> o/
17:06:53 * TheJulia glares at etherpad for eating her update
17:07:04 <jlvillal> ++ on stand-alone :)
17:07:43 <jlvillal> Also looking forward to vsaienk0's increased concurrency patch, if it works. Big time savings :)
17:08:04 <rloo> vdrok: are you taking over the node tags stuff?
17:08:22 <vsaienk0> thanks folks!
17:08:31 <vdrok> rloo: i can't say taking over, but I'll be updating them if zhenguo is not around
17:08:34 <rloo> great work vsaienk0
17:08:37 <rloo> vdrok: thx!
17:08:51 <rloo> vsaienk0, sambetts: any idea what the status is wrt routed networks support? (L188)
17:10:14 <vsaienk0> rloo: I'm still working on initial commits for networking-baremetal project
17:10:42 <rloo> vsaienk0: ok, i'll add that to the status, thx.
17:10:54 <jroll> routed networks support is mostly about scheduling, right? I can't remember the difference between that and "physical network awareness"
17:11:29 <TheJulia> we have to be aware of the networks to be able to schedule on them correctly
17:11:31 <jroll> oh and more
17:11:37 * jroll reads http://specs.openstack.org/openstack/ironic-specs/priorities/pike-priorities.html#routed-networks-support
17:11:38 <sambetts> jroll: I think there is some other weird things there that require research but the physical awareness is a big part of it
17:11:47 <vdrok> btw routed networks spec has merged today
17:11:56 <jroll> sambetts: right, they're two separate priorities/subteams, that's why I ask
17:12:18 <vsaienk0> jroll: no not only about scheduling, we need to create a mechanism which will populate nova hypervisor (ironic node) to neutron segments mapping
17:12:24 <rloo> vdrok: is the link to that spec, in the etherpad (subteam report par)?
17:12:35 <jroll> vsaienk0: right
17:12:36 <jroll> rloo: it is
17:12:41 <jroll> \o/ for merging that
17:12:43 <vdrok> yup
17:13:05 <rloo> jroll: it is? oh, do you mean the physical network spec?
17:13:29 <jroll> rloo: yes, I know you weren't directing the question at me but I answered it
17:13:37 <mariojv> yeah, physnet spec was landed - i didn't see the routed nets one though ?
17:13:57 <rloo> jroll: thx. and mariojv ++, that's what i thought we were talking about but guess not.
17:14:04 <jroll> anything else on subteam updates? they seem legit to me
17:14:09 <jroll> mariojv: yeah, I haven't seen a spec on that yet
17:14:35 * jroll waits a sec
17:14:39 <mariojv> cool - RFE is here, i'll put a link to that for tracking on the status page for now: https://bugs.launchpad.net/ironic/+bug/1658964
17:14:39 <openstack> Launchpad bug 1658964 in Ironic "[RFE] Implement neutron routed networks support in Ironic" [Wishlist,Confirmed]
17:15:05 <rloo> thx mariojv
17:15:21 <jroll> #topic Deciding on priorities for the coming week
17:15:30 <jroll> looks like we were 1/5 last week
17:15:36 <vdrok> I think we need to prioritize the rolling upgrades
17:15:41 <jroll> so I suggest just leaving the same 4 there
17:16:09 <vdrok> otherwise any patch bumping object version or rpc version breaks the grenade multinode job
17:16:12 <vdrok> eg https://review.openstack.org/233357
17:16:35 <mariojv> vdrok: i'm fine with adding rolling upgrades to that list, too - maybe at the bottom so folks review stuff that's lagging behind first?
17:16:52 <jroll> though... did folks find blockers with those? or is it just slow review/code cycle? are the existing things there close to ready?
17:16:53 <mariojv> maybe top is better to prevent that breakage, though
17:16:56 <NobodyCam> ++ for rolling upgrades
17:17:02 <jroll> vdrok: yikes
17:17:18 <jroll> that seems... wrong
17:17:29 <rloo> wrt rolling upgrades. i rebased them last week, and fixed some stuff in one patch. there's *one* patch I'm not sure about. although others are welcome to review. i'd like to test this week.
17:17:42 <vdrok> I think this includes the first patch in the chain for rolling upgrades + adding some pins to our devstack plugin
17:17:43 <TheJulia> That kind of does, but I'm +1 on rolling upgrades
17:18:09 * jroll would love a doc on how the multinode grenade works
17:18:12 <rloo> vdrok: will talk to you later on irc about the rolling upgrades stuff
17:18:20 <rloo> ++ jroll. that is on my list this week to understand.
17:18:21 <vdrok> rloo: ok sure
17:18:38 <vdrok> basically, we need to be able to pin things
17:19:04 <jroll> yeah, it just seems like a compatible rpc bump should work anyway
17:19:08 <jroll> we can discuss that later
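[editor's note: the "pinning" vdrok and rloo describe above — keeping new services from sending payloads an old service can't parse during a rolling upgrade — can be sketched roughly as an oslo.versionedobjects-style backport. This is a minimal, hypothetical illustration; the names (`RELEASE_MAPPING`, `serialize_for_pin`, the `Node` fields) are invented for this sketch and are not ironic's actual code.]

```python
# Hypothetical sketch of RPC/object version pinning for rolling upgrades.
# Idea: each release maps to the highest object version it understands;
# when pinned, a newer service backports (downgrades) objects before
# sending them, so version bumps don't break mixed-version deployments.

RELEASE_MAPPING = {
    # pin value -> highest Node object version that release understands
    "ocata": {"Node": "1.21"},
    "pike": {"Node": "1.22"},
}


class Node:
    VERSION = "1.22"

    def __init__(self, name, traits=None):
        self.name = name
        self.traits = traits or []  # pretend this field was added in 1.22

    def obj_to_primitive(self, target_version=None):
        """Serialize, downgrading to target_version if it is older."""
        primitive = {"name": self.name, "traits": self.traits,
                     "version": self.VERSION}
        if target_version is not None and target_version < self.VERSION:
            # backport: drop the field the older release doesn't know about
            primitive.pop("traits")
            primitive["version"] = target_version
        return primitive


def serialize_for_pin(node, pin):
    """Serialize a node at the version the pinned release can handle."""
    target = RELEASE_MAPPING[pin]["Node"]
    return node.obj_to_primitive(target_version=target)
```

With a pin of "ocata", the 1.22-only field is dropped and the payload claims version 1.21; with "pike", the object is sent unmodified. Without such a pin, any compatible-looking version bump still produces payloads the not-yet-upgraded node in a grenade multinode job cannot deserialize, which is the breakage vdrok points at.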
17:19:10 <jroll> so rolling upgrades priority 1 or "last"?
17:19:19 * TheJulia thinks priority 1
17:19:34 * NobodyCam would like to see 1
17:19:41 <mariojv> +1
17:19:52 <jroll> okay
17:20:39 <jroll> if someone could throw the gerrit link there on the whiteboard, that would be awesome
17:20:49 <jroll> any other priority change requests or are we good?
17:21:05 <rloo> jroll: done
17:21:17 <jroll> thanks
17:21:20 <NobodyCam> ty rloo
17:21:22 <NobodyCam> :)
17:21:35 <jroll> no stuck specs, discussion time
17:21:43 <jroll> #topic CI failure rates
17:21:45 <jroll> this is TheJulia
17:21:58 <jroll> #link http://paste.openstack.org/show/603960/
17:22:51 <TheJulia> So dmitry put together a report at http://paste.openstack.org/show/603960/ that shows CI failure rates, and a major point of concern is the third party CI jobs.
17:23:14 <TheJulia> I guess the biggest question is if anyone has any insight as to why the failure rates are so much higher, and how we can make it better?
17:23:29 <rpioso> We've been actively working on the issue -- http://lists.openstack.org/pipermail/openstack-infra/2017-March/005263.html
17:23:55 <vdrok> I think we can remove the parallel job now that we have the standalone tests?
17:24:05 <lucasagomes> (just a note, the UEFI job is now voting in gate)
17:24:16 <mariojv> TheJulia: are the top 4 jobs there (starting L151) expected to have 100% failure rates?
17:24:17 <vdrok> gate-tempest-dsvm-ironic-parallel-ubuntu-xenial-nv
17:24:27 <vdrok> or is it about the third party ci only? :)
17:24:33 <rloo> TheJulia: I thought Dmitry said he'd get in touch with the 4rd party CI folks about their tests failing
17:24:41 <rloo> s/4/3/ :)
17:24:50 <jroll> mariojv: they're experimental, so probably a WIP or abandoned WIP
17:25:02 <jlvillal> A lot of those top jobs consist of our experimental jobs.
17:25:18 <jlvillal> Some could probably be pruned
17:25:55 <TheJulia> rloo: He indicated he was going to reach out, but I'm wondering if any of us in the larger community have any insight into the third party ci job failures, since the rates do seem rather high across the board.  If it is something we're doing, we should likely fix it :)
17:25:55 <jlvillal> But they only get run if someone does "check experimental"
17:26:07 <mariojv> ok cool, just wanted to be sure there's not some super serious breakage there
17:26:18 <jroll> I think we can probably kill parallel like vdrok said. the py3 jobs need to get working. the full, I would like to keep around, but I don't have much time lately to work on it
17:26:26 <jroll> the third party stuff, those parties will need to speak for :)
17:26:47 <vdrok> also, I don't see the ibm ci here in this list
17:26:54 <jlvillal> I think JayF and hurricanerix are working or will be working on the Python3 jobs. Based on owners for Pike priorities.
17:26:58 <rloo> I think we should/could either 1. wait for dmitry to get back to find out where/what he's done and/or 2. send email to the dev list.
17:27:07 <rajinir> A bunch of devstack changes broke our CI. Not sure about others
17:27:19 <jroll> I'd wait for dmitry on third party stuff
17:27:32 <TheJulia> rloo: That is reasonable, I was just kind of hoping that people might have gained some insight by looking at failed third party CI logs when doing reviews
17:27:35 <xavierr> rajinir: ++
17:27:50 <rloo> TheJulia: honestly, they seem to fail so often that I don't look at them :-(
17:28:00 <jlvillal> rloo: +1 on that.
17:28:04 <mariojv> same rloo
17:28:15 <milan> vdrok, maybe it means it's passing all the time (passing as in def test(self): pass) ;)
17:28:16 <jroll> rloo: right, that's one of dmitry's goals to fix this cycle, it seems
17:28:27 <mariojv> i basically have 0 confidence voting on a lot of driver patches because of the CI flakiness there
17:28:36 <vdrok> milan: hopefully not :)
17:28:48 <rloo> i think it was discussed before so i don't want to go over it now, but we need some definitive place to see the status of the 3rd party tests. that would at least give us an indication if we should look or not, to see if our patch is causing a failure.
17:28:52 <TheJulia> Okay, well it sounds like we have work to do.  Lets see what dmitry gets back from the 3rd party CI operators and go from there.
17:29:30 <TheJulia> Thank you everyone
17:29:39 <jroll> thanks for bringing it up, TheJulia :)
17:29:45 <rloo> TheJulia: maybe add agenda item (again) for next week :)
17:29:51 <mariojv> ++
17:29:59 <rloo> TheJulia: although dmitry might not be ready to give any update
17:29:59 <TheJulia> rloo: excellent idea!
17:30:12 <rloo> that'll teach him to go on PTO
17:30:24 <jroll> rough
17:30:28 <TheJulia> heh
17:30:29 <lucasagomes> lol
17:30:30 <rloo> ha ha
17:30:36 <NobodyCam> :p
17:30:39 <jroll> #topic open discussion
17:30:41 <jroll> anything else?
17:30:58 * TheJulia hears crickets
17:31:01 <rloo> jroll: oh, just remembered. do we have a bug triager?
17:31:14 <jroll> rloo: not on the agenda, not my job!
17:31:17 <jroll> :D
17:31:20 <NobodyCam> just a thank you to lucasagomes: for the redfish work
17:31:27 <rloo> jroll: ha ha. it got deleted from the agenda :)
17:31:28 <jroll> who wants to triage the bugs?
17:31:31 <lucasagomes> NobodyCam, :-) cheers man
17:31:32 <TheJulia> I have brain cells, I can go through the new bugs
17:31:33 <jroll> thanks again to mjturek for doing it again
17:31:34 <vdrok> I can help with bug triage this week
17:31:42 <jroll> TheJulia: vdrok: battle
17:31:49 <vdrok> :D
17:31:51 <TheJulia> lol
17:31:56 <jroll> I'll just mark both of you, thanks!
17:32:00 <vdrok> yup
17:32:18 * TheJulia suddenly thinks of "The Princess Bride"
17:32:22 <rloo> oh, wrt the brainstorming/forum thing. i'm guessing nothing happened there?
17:32:25 <jroll> muahaha
17:32:29 <rloo> for the summit?
17:32:42 <TheJulia> rloo: nope, I believe jroll submitted the tc inspired session
17:32:50 <rloo> i forgot when the deadline was. apr 1? or already past?
17:32:52 <jroll> yeah, the vm/baremetal session is proposed
17:32:53 <TheJulia> or someone did
17:33:06 <jroll> and stig telfer proposed a "Baremetal BIOS/RAID reconfiguration according to instance" topic
17:33:07 <mariojv> #link https://etherpad.openstack.org/p/BOS-ironic-brainstorming
17:33:11 <jroll> http://forumtopics.openstack.org/ for the record
17:33:17 <mariojv> ^ there's the brainstorming etherpad, not a ton of stuff there
17:33:45 <jroll> rloo: april 2
17:33:45 <mariojv> looks like a couple ideas about ops feedback
17:33:52 <xavierr> will you ironicers attend boston summit?
17:33:55 * jroll makes a todo to add these things
17:33:59 <TheJulia> Yeah, I had no braincells last week.
17:34:01 <rloo> thx jroll et al!
17:34:06 <jroll> xavierr: I know at least two people are going, maybe 3, maybe more
17:35:12 * jroll counts down from 10 before closing the meeting, chirp up if you have something
17:35:12 <rloo> crickets?
17:35:24 <jroll> #endmeeting