15:00:12 <TheJulia> #startmeeting ironic
15:00:16 <openstack> Meeting started Mon Jul  9 15:00:12 2018 UTC and is due to finish in 60 minutes.  The chair is TheJulia. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:18 <TheJulia> Good morning everyone!
15:00:20 <openstack> The meeting name has been set to 'ironic'
15:00:28 <jiapei> o/
15:00:32 <rpioso> \o
15:00:34 <TheJulia> Our agenda can be found on the wiki, as always!
15:00:35 <TheJulia> o/
15:00:46 <hshiina> o/
15:01:01 <TheJulia> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
15:01:01 <stendulker> o/
15:01:03 <bdodd> o/
15:01:13 <mgoddard> o/
15:01:21 <rloo> o/
15:01:28 <jroll> \o
15:01:35 <TheJulia> #topic Announcements/Reminders
15:01:43 <TheJulia> #Info Ironic version 11 was released last week!
15:02:12 <TheJulia> Thanks to everyone who has helped contribute to that milestone. Hopefully we'll be able to get another few major things merged in the next couple weeks!
15:02:25 <jroll> woot
15:02:31 <rloo> great work, lots of goodies in 11 :)
15:02:31 <TheJulia> #undo
15:02:32 <openstack> Removing item from minutes: #info Ironic version 11 was released last week!
15:02:37 <TheJulia> #info Ironic version 11 was released last week!
15:02:49 <hjensas> o/
15:02:50 <TheJulia> Heh, commands must be case insensitive
15:02:54 <TheJulia> Anyway, moving on!
15:03:59 <TheJulia> #info A "mid-cycle" call is slated for July 17th. Anticipate the agenda to consist of a quick review of the cycle so far and discussion of any topics for the upcoming PTG.
15:04:04 <TheJulia> Speaking of the upcoming PTG!
15:05:43 <TheJulia> #info Stein PTG planning etherpad has been posted for a couple of weeks. Thoughts, ideas, and needs for time with the group should be added to the list before August 23rd.
15:05:56 <TheJulia> #link https://etherpad.openstack.org/p/ironic-stein-ptg
15:06:21 <jroll> nice, didn't realize we had one
15:06:49 <rloo> TheJulia: wrt the mid-cycle, is there going to be an etherpad/agenda/whatever for that, or do we decide at the time?
15:07:39 <TheJulia> #info This week is R-7, which means we realistically have just a few more weeks to be ready for ironic 11.1.
15:07:51 <rloo> also wrt the mid-cycle, there was a doodle-thingy for it; when will the date/time be decided?
15:08:34 <TheJulia> #info Non-core reviewers are absolutely welcome and encouraged to review items on our weekly priority list.
15:08:46 <TheJulia> rloo: I'll send out an email and create an etherpad
15:08:59 <TheJulia> #action TheJulia to send out an email and create an etherpad for the "midcycle" call
15:09:02 <rloo> thx TheJulia
15:09:36 <TheJulia> rloo: The doodle was posted a week ago, and we've had limited feedback, but the time is based upon the mutually available time selected.
15:09:41 <mjturek> o/
15:09:42 <TheJulia> or, I should say, will be
15:10:01 <TheJulia> #link https://doodle.com/poll/nu7bf2t8ygxfm6vf
15:10:05 <TheJulia> \o mjturek
15:10:32 <TheJulia> If there is nothing anyone wishes to raise, we can move on
15:11:33 <TheJulia> #topic Review action items for our past meeting
15:11:51 <TheJulia> #info No action items since the last meeting.
15:12:52 <TheJulia> #info TheJulia sent out a summit summary email last week, which was an action item from the prior week.
15:12:54 <TheJulia> #link http://lists.openstack.org/pipermail/openstack-dev/2018-July/131972.html
15:12:59 <TheJulia> Anyway, Moving on!
15:13:22 <TheJulia> #topic Review subteam status reports
15:13:27 <rloo> thanks for that summary email TheJulia!
15:13:39 <TheJulia> #link https://etherpad.openstack.org/p/IronicWhiteBoard
15:13:55 <TheJulia> Starting around line 149
15:14:53 <TheJulia> re: deploy steps, I saw we still ended up splitting the deploy steps patch and I've not had a chance to review it since I've been on PTO. I'll try to review the series in the next few days.
15:16:37 <rloo> thx TheJulia. I have two questions (so far) about deploy steps. There is only one big deploy step -- I am good with that but want to make sure others are. AND... deprecation period? 2 cycles instead of 1?
15:16:42 <TheJulia> hshiina_: Do you happen to know the status of the python-scciclient changes presently blocking the irmc raid interface? Curious when we should expect those to be merged
15:17:32 <rloo> TheJulia: wrt reference arch guide -- i think we should do it at the next PTG; i am highly doubtful it will happen before then.
15:17:34 <hshiina_> TheJulia, this will be merged this week
15:17:36 <TheJulia> rloo: I'm good with one big deploy step this cycle to get the framework landed, even though we discussed two. We can expose more as time goes on, the important thing is beginning to offer the flexibility.
15:17:42 <TheJulia> hshiina_: awesome
15:17:50 <jroll> +1 for one big deploy step
15:18:21 <mgoddard> +1 - keep it simple as time is short
15:18:23 <rloo> TheJulia, jroll: thx. Makes sense to do deprecation as 2 cycles (min) then.
15:18:30 <TheJulia> rloo: also, w/r/t deprecation, I think two cycles is ideal, especially as the staging drivers are going to need some time and attention.
15:18:44 * TheJulia feels like we have consensus
15:18:47 <jroll> how much work is it to switch over for a driver?
15:18:51 <rloo> TheJulia: oh yeah, staging drivers... will try to get those done in rocky too.
15:19:02 <TheJulia> Not too much, but we need review bandwidth for those to change...
15:19:07 <rloo> shouldn't be much work but I'm not familiar with the staging drivers :)
15:19:26 <jroll> I'm more thinking about other out of tree drivers
15:19:36 <rloo> not sure how much/any testing we have with staging drivers
15:19:41 <TheJulia> Staging moves very slowly due to limited reviewers
15:19:54 <TheJulia> And it is outside of our governance...
15:20:06 <jroll> we could just remove after staging drivers get done
15:20:32 <rloo> for the one big deploy step -- there are two main changes, add the decorator and change the 'last part' of the deployment code, to call the conductor. I'll add docs for that... somewhere...
15:20:37 <jroll> or the people that contributed those staging drivers can come fix it, and if they don't it'll be broken
15:20:41 <TheJulia> I know there are some out of tree deploy drivers that operators have created, so it would likely be good to give them time to adapt.
15:21:32 <jroll> also this only affects deploy drivers, not sure if we have any of those in staging
15:21:36 <TheJulia> rloo: re: refarch, I don't know. *sigh*
15:21:42 <jroll> looks relatively trivial: https://review.openstack.org/#/c/578649/8/ironic/drivers/modules/iscsi_deploy.py
15:21:43 <patchbot> patch 578649 - ironic - Deploy steps - conductor & drivers
15:22:00 <TheJulia> that is a good point jroll, I don't think we do
15:22:02 <rloo> TheJulia: yeah, it is important. so we MUST? set time aside at PTG to do it if it isn't done before hand.
15:22:08 <jroll> oh and this bit: https://review.openstack.org/#/c/578649/8/ironic/drivers/modules/agent_base_vendor.py
15:22:08 <patchbot> patch 578649 - ironic - Deploy steps - conductor & drivers
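(Annotation: for readers following the log, here is a minimal, self-contained sketch of the deploy-steps mechanism being discussed. The decorator and helper names mirror ironic's existing clean-step pattern and are purely illustrative; the real names and conductor wiring live in patch 578649 and are not reproduced here.)

```python
# Illustrative sketch only -- not the actual code from patch 578649.
# Pattern: a decorator tags driver methods as deploy steps, and the
# conductor collects and runs them in priority order.

def deploy_step(priority):
    """Mark a method as a deploy step with the given priority."""
    def decorator(func):
        func._is_deploy_step = True
        func._deploy_step_priority = priority
        return func
    return decorator


class FakeDeploy:
    """Stand-in for a deploy interface such as ISCSIDeploy."""

    @deploy_step(priority=100)
    def deploy(self, task):
        # With "one big deploy step", the whole existing deploy() body
        # simply becomes a single step like this one.
        print('deploying %s' % task)


def collect_deploy_steps(interface):
    """Roughly what the conductor does: gather tagged steps, highest priority first."""
    steps = [getattr(interface, name) for name in dir(interface)
             if getattr(getattr(interface, name), '_is_deploy_step', False)]
    return sorted(steps, key=lambda s: s._deploy_step_priority, reverse=True)


if __name__ == '__main__':
    for step in collect_deploy_steps(FakeDeploy()):
        step('node-0')
```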
15:22:12 * jroll thinks one cycle is plenty
15:22:18 <TheJulia> rloo: ++
15:23:00 <TheJulia> jroll: Yeah, I'm leaning towards one cycle as long as we do it promptly
15:23:24 <rloo> wrt deprecation of non-deploy-steps; we could do one cycle, and change to longer if it becomes necessary?
15:23:27 <jroll> TheJulia: do what promptly? land the deploy steps code?
15:23:40 <TheJulia> We absolutely can do that rloo
15:23:41 <jroll> rloo: yeah, we definitely can change later
15:24:34 <TheJulia> jroll: land... although didn't we also talk about the default being like automatic cleaning, so a user could still just say deploy and things happen? At which point, what exactly are we deprecating?
15:25:19 * TheJulia senses the call of task.driver.deploy.deploy() in conductor.py
15:25:25 <rloo> TheJulia: deprecating the code that supports non-deploy-steps
15:26:16 <TheJulia> I guess I need to see in-line deprecation warnings to really be able to wrap my head around it properly
15:26:26 <rloo> TheJulia: maybe we should talk more at midcycle next week?
15:26:50 <TheJulia> possibly, I'll need to re-review the patches as well which is already on my list
15:26:52 <TheJulia> \o/ for PTO
15:27:09 <rloo> :)
15:27:13 <TheJulia> Anyway, Anything else status update wise that we need to discuss?
15:27:56 <rloo> ready to move on :)
15:28:01 <TheJulia> Alright
15:28:05 <TheJulia> #topic Deciding on priorities for the coming week
15:28:09 * TheJulia massages the list
15:28:34 <rloo> wrt hw-type cleanup. do we need it as a weekly priority?
15:28:45 <jroll> probably not anymore
15:28:53 <jroll> we landed a lot of the final removals last week
15:28:58 <rloo> (esp since CI doesn't pass and it is a WIP)
15:29:13 <jroll> that's in staging-drivers anyhow
15:29:13 <TheJulia> I was just about to note that I've seen some doc updates, but that we can remove it from the priority list
15:30:02 <rloo> dtantsur|afk: if you are reading this, ^^ please add the topic 'hw-types' for your doc patches too
15:30:30 * TheJulia lets jroll type
15:30:34 <jroll> TheJulia: sorry, was just trying to s/4/3
15:30:54 <TheJulia> jroll: Oh, I thought it was 4 when I looked like an hour ago :\
15:30:57 * jroll is done there
15:30:59 <TheJulia> awesome
15:31:01 <jroll> one is the merged spec
15:31:03 * jroll fixes link
15:31:05 <rloo> what is the 'Notes for the meeting' section (L117)? Is that for discussion in a few min, not part of weekly priorities?
15:31:15 <jroll> well, I'll leave the link
15:31:29 <TheJulia> rloo: That was for me, that I've not yet deleted
15:31:45 <jroll> do we want specs in priorities given the limited time left to land code?
15:32:25 <TheJulia> That is a good question
15:32:40 <TheJulia> I'd like to try and land a ramdisk interface given the number of operators that have been asking for a very long time for one
15:32:53 <jroll> in rocky?
15:33:06 <TheJulia> yeah
15:33:11 <rloo> is there code already for that? (i should probably read the spec first)
15:33:15 <jroll> there is
15:33:23 <jroll> I'm okay with trying for that
15:33:42 <TheJulia> Okay, I'll move the rest down in priority since they are also on that list to raise visibility for reviews
15:33:50 <rloo> is the code 'small' ? I'm guessing it is.
15:34:05 <rloo> i'd like to get the neutron event stuff, but i think the code there is nontrivial.
15:34:28 <TheJulia> rloo: very
15:34:28 <jroll> rloo: it's +291, -17, not bad
15:35:20 <rloo> probably too late in the cycle to land that anyway; would like it to be used a bit first before we cut a release.
15:35:24 <TheJulia> Anyway, I'm good with the list, does anyone have anything else to add?
15:35:39 <jroll> I'm good
15:35:57 <rloo> +1
15:35:57 <TheJulia> rloo: re: neutron event, agreed.
15:36:18 <TheJulia> Okay, moving on then!
15:36:21 <TheJulia> #topic RFE Review
15:36:59 <TheJulia> Dmitry has asked for us to review and discuss an RFE that he would like to get in ASAP (at least, that is the way I interpreted the discussion from the logs).
15:37:13 <TheJulia> #link https://storyboard.openstack.org/#!/story/2002868
15:37:53 <jroll> there's a comment from rloo that it needs more thought, anything specific?
15:37:53 <rloo> is sambetts here? where are they wrt their discussions?
15:38:14 <rloo> jroll: that's what sambetts was discussing with dmitry
15:38:26 <jroll> IIRC it ended with this being fine for now
15:38:45 <TheJulia> I'm good with both options, the first being the fastest and easiest to implement for users.
15:38:54 <sambetts> Hi, yup we decided this was best for now, and i opened a new story (RFE) for improving the API for interfaces later on
15:38:59 <rloo> so --reset-deploy-interface is *ONLY* avail with --driver
15:39:04 <jroll> sambetts: awesome, thanks
15:39:16 <TheJulia> I feel like we might actually have a bug based on original discussions for driver composition, but I'd have to go back and dig through notes
15:39:27 <sambetts> rloo: yup, only with --driver, because you can't send an empty patch request for a node
15:39:36 <rloo> sambetts: exactly.
15:40:21 <sambetts> ditto for the reset-all-interfaces one
15:40:22 <rloo> i think my question before was whether 'reset_interfaces' was the same as 'unset_X_interface' and whether we should rename it 'unset_interfaces' to be more consistent. but ...
15:40:47 <rloo> oh, i mean 'openstack baremetal node unset --X-interface'
15:41:21 <TheJulia> That doesn't seem super ideal. I thought we agreed a long time ago that we should catch and reset interfaces if someone tried to change the hardware type
15:41:59 <TheJulia> because that change is to a completely different profile of sorts
15:42:06 <rloo> TheJulia: that's what #1 is for, 'openstack baremetal node set --driver <hwtype> --reset-deploy-interface'
15:42:13 <sambetts> oh yeah, that's for the other use case. I think doing unset deploy_interface should reset it; the only problem with that is there is no way to discover which interfaces exist today
15:42:19 <sambetts> for all API versions
15:42:19 <TheJulia> rloo: I thought under the hood deep inside of the conductor
15:42:21 <jroll> IIRC we didn't do it that way because it's so implicit
15:44:15 <rloo> oh wait. we're adding to our REST API: PATCH /v1/nodes/<node>?reset_interfaces=True
15:44:32 <TheJulia> jroll: yeah, but even then I'm just not sure there is a reason not to honor the requested action as a high level change.
15:44:36 <sambetts> my personal preference is for non-driver-changing default resets to be done via unset, but resetting during a set --driver change to be done with a --reset-interfaces
15:44:50 <rloo> which allows one to reset all interfaces (except any explicitly specified in the patch). so it has nothing to do with user setting driver=diff-hw-type
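(Annotation: a hedged sketch of what option #2's request could look like on the wire. The query parameter spelling, microversion, endpoint, and token are all assumptions taken from the RFE discussion, not a settled API.)

```python
# Sketch of the proposed reset_interfaces request -- every literal here
# (endpoint, token, microversion) is a placeholder/assumption.
import requests

IRONIC = 'http://ironic.example.com:6385'  # placeholder endpoint
node = 'node-0'

resp = requests.patch(
    '%s/v1/nodes/%s' % (IRONIC, node),
    params={'reset_interfaces': 'True'},  # proposed: reset any interface not
                                          # explicitly set in this patch
    headers={
        'X-Auth-Token': 'TOKEN',                   # placeholder credentials
        'X-OpenStack-Ironic-API-Version': '1.45',  # hypothetical microversion
    },
    # A JSON-PATCH body cannot be empty, hence at least the driver change:
    json=[{'op': 'replace', 'path': '/driver', 'value': 'ipmi'}],
)
resp.raise_for_status()
```

On the CLI side this maps to the quoted 'openstack baremetal node set --driver <hwtype> --reset-interfaces' form.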
15:45:29 <sambetts> rloo: except you can't have an empty patch request, so you need to send at least --driver even if it's for the same driver
15:46:02 <rloo> sambetts: no, i mean, you could have a patch that eg. changes the node's name to foo, and also reset_interfaces.
15:46:15 <rloo> sambetts: nothing about changing the node's driver.
15:46:23 <sambetts> yeah, me and dtantsur|afk decided against that
15:46:43 <rloo> oh, it isn't mentioned in the RFE (#2). if i understand what it sez there.
15:46:44 <sambetts> but open for opinions
15:47:07 <TheJulia> It really feels like we are overthinking something that should be kept as simple as possible
15:47:20 <rloo> TheJulia: what do you suggest?
15:47:22 <TheJulia> the simpler it is for the user, the better the experience
15:47:26 <sambetts> 100% I really want this to be as simple as possible because IMO it's a band-aid for a bad API
15:47:45 <sambetts> and this: https://storyboard.openstack.org/#!/story/2002901 is the real fix
15:47:56 <rloo> i think the proposed openstack CLI part is simple?
15:48:00 <TheJulia> sambetts: agree
15:49:48 <jroll> I think we all agree on the basic premise. the current questions are: 1) should the name be different? 2) should the full reset be implicit instead?
15:49:50 <TheJulia> rloo: I feel like the simplest thing for 90% of users is a command that just explicitly unsets all interfaces, or replaces the values with None, which should reset them back to the defaults upon the next task. The pure API side, which could land next cycle if needed, could then be wired up to the client switch and not have any task fire in the case of an invalid configuration
15:50:53 <sambetts> the problem with a command that explicitly unsets all the interfaces (my original suggestion to dtantsur|afk) was that older clients won't be able to know about new interfaces
15:50:57 <rloo> TheJulia: so the command == CLI? I think what is proposed in #2 'openstack baremetal node set --driver xx --deploy-interface foo --reset-interfaces' satisfies that?
15:51:03 <TheJulia> Although I think dmitry fixed the code that would prevent a patch with an existing invalid configuration from completely preventing changing interfaces
15:51:05 <sambetts> so they'll reset some not all of the interfaces
15:51:43 <rloo> Oh, does TheJulia mean no one thing that resets more-than-one interface?
15:51:47 <TheJulia> sambetts: people have to upgrade to get the command anyway, and we operate on the latest known API version now...
15:52:02 <TheJulia> I was thinking just complete reset
15:52:04 <sambetts> TheJulia: but new interfaces after the new command is added will be hidden
15:52:14 <TheJulia> reset back to what the conductor would calculate based on the hardware type
15:52:25 <sambetts> new CLI or API for that?
15:52:48 <sambetts> CLI can't do it because it doesn't have a way to discover all the interfaces to unset them
15:53:01 <TheJulia> sambetts: ahh, that is true, which is why I'm thinking just reset the entire node back to the calculated default
15:53:19 <TheJulia> actually... it could call validate
15:53:29 <sambetts> new interfaces are hidden to older clients
15:53:36 <TheJulia> not in validate if memory serves
15:53:55 <rloo> so if i understand, TheJulia wants 'openstack baremetal node set --driver xx --deploy-interface direct' to change the driver, set deploy interface, and unset all other interfaces?
15:54:02 <TheJulia> regardless, you are right, ideal is an api method to do it
15:55:03 <TheJulia> rloo: that means we have to understand the precedence of all the set actions, seems better to just offer a reset-driver-interfaces method or something
15:55:08 * jroll is just confused now as to what people want
15:55:12 * TheJulia is too
15:55:17 <rloo> now i am confused.
15:55:28 <rloo> i am only trying to understand TheJulia, thought she said we were overthinking.
15:55:58 <rloo> i don't see how we can add a reset-driver-interfaces method that is separate from a method to change the driver.
15:56:02 <jroll> well, we've 5 minutes left, maybe TheJulia can summarize her take in a comment on the RFE?
15:56:06 <rloo> i mean we can add, but i don't see how it solves the problem.
15:56:28 <rloo> and i thought that's what the proposed 'reset_interfaces' etc was meant to address.
15:56:28 <TheJulia> jroll: I'll do so
15:56:38 <jroll> thanks :)
15:56:38 <rloo> thx TheJulia!
15:56:40 <TheJulia> #action TheJulia to summarize possible third option in the RFE
15:56:50 <sambetts> rloo: yup, think of it as --force-reset-driver
15:57:20 <rloo> so this is why we need (sorry) an RFE spec.
15:57:38 <TheJulia> heh
15:57:39 <rloo> OR everyone add comments to that story so we know why we end up choosing what we choose.
15:57:40 <jroll> spec would also work, yeah, we do better with these async
15:57:55 <TheJulia> Anyway!
15:57:59 <TheJulia> #topic Open Discussion
15:58:04 <TheJulia> Anyone have anything else to discuss today?
15:58:10 <mgoddard> I'd like to introduce w-miller
15:58:25 <TheJulia> mgoddard: *looks around*
15:58:33 <mgoddard> or rather willm, looking at the members
15:58:39 <jroll> \o willm
15:58:42 <rloo> welcome willm!
15:58:54 <TheJulia> Greetings and welcome to the party willm!
15:58:54 <mgoddard> he's just started at StackHPC as an intern and will be with us over the summer
15:58:56 <rpioso> Hey willm :-)
15:59:05 <sambetts> o/ willm
15:59:05 <willm> Hi all! \o
15:59:11 <rloo> (how long is your summer?)
15:59:14 <stendulker> o/ willm
15:59:18 <mgoddard> working on inspector and https://review.openstack.org/579583
15:59:19 <patchbot> patch 579583 - ironic-specs - [WIP]: Add virtual Bare Metal Clusters spec
15:59:20 <NobodyCam> welcome to Ironic Willm
15:59:41 <TheJulia> mgoddard: willm: Awesome!
15:59:51 <willm> I’m here until the 4th week of September and am looking forward to getting stuck in :)
15:59:51 <jiapei> I'd like to give a brief update on the xclarity CI status: we've been blocked for the last week since zuul v3 can't pull ironic from gerrit; we got responses on IRC, but we're still debugging
16:00:33 <rloo> jiapei: is it only ironic that it can't pull, or any openstack project?
16:00:46 <jiapei> any openstack project
16:00:57 <jiapei> we tried to pull cinder, but that failed too
16:01:17 <jiapei> however, the CLI worked
16:01:21 <TheJulia> jiapei: Are logs being posted anyplace for test jobs? Maybe one of us can take a look and possibly offer some insight or ideas?
16:01:21 <rloo> jiapei: so before last week, it was working, then it stopped?
16:01:42 <TheJulia> rloo: I believe they were unable to get to the gerrit event streamer previously
16:01:49 <rloo> oh, so never worked
16:01:57 <jiapei> rloo: no, it didn't work before...
16:01:58 <rloo> can you go on the test machine and manually try it?
16:03:02 <jiapei> If I manually run the command that zuul throws an exception on, it can pull the code
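(Annotation: a small reproduction script along the lines of what jiapei describes -- performing the same fetch Zuul's merger would, but by hand. The git.openstack.org URLs were the public ones at the time of this meeting; a third-party CI may need its local mirror instead.)

```python
# Hypothetical sanity check for the third-party CI issue: try the fetch
# Zuul's merger performs, outside of Zuul. Swap in a local mirror URL if
# the CI uses one.
import subprocess

for project in ('openstack/ironic', 'openstack/cinder'):
    url = 'https://git.openstack.org/%s' % project
    result = subprocess.run(
        ['git', 'ls-remote', url, 'HEAD'],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True, timeout=120,
    )
    status = 'OK' if result.returncode == 0 else 'FAILED: ' + result.stderr.strip()
    print('%s -> %s' % (project, status))
```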
16:03:09 <rloo> bzzzz... time is up :-(
16:03:25 <jiapei> TheJulia: I'll post the latest logs tomorrow
16:03:37 <TheJulia> jiapei: Thanks
16:03:37 <mjturek_> like I mentioned last week - delayed bug day by a week. Will people be able to attend this Thursday at 1:00 PM UTC?
16:03:50 <TheJulia> mjturek_: checking calendar
16:04:02 <mjturek_> (sorry really wanted to squeak that in)
16:04:02 <rloo> (i'm not avail, esp with midcycle next week)
16:04:09 <TheJulia> mjturek_: I am available
16:04:22 <mjturek_> rloo :( TheJulia \o/
16:04:26 <TheJulia> mjturek_: let's do it and welcome all those available
16:04:38 <mjturek_> awesome, I'll send out an invite today
16:04:43 <TheJulia> Anyway, time is up for today. Thanks everyone!
16:05:05 <TheJulia> #endmeeting