15:00:03 #startmeeting ironic
15:00:04 Meeting started Mon Mar 4 15:00:03 2019 UTC and is due to finish in 60 minutes. The chair is dtantsur. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:07 The meeting name has been set to 'ironic'
15:00:13 Who's here for our exciting weekly meeting?
15:00:18 \o
15:00:18 o/
15:00:19 o/
15:00:23 o/
15:00:28 o/
15:00:32 o/
15:00:39 o/
15:00:43 \o
15:00:45 Julia is out today, so you have an opportunity to see me in this chair once again :)
15:00:57 o/
15:00:58 o/
15:01:00 o/
15:01:23 #link https://wiki.openstack.org/wiki/Meetings/Ironic our agenda for today
15:01:48 * rpioso is happy to see dtantsur, but hopes TheJulia feels better soon
15:02:02 #topic Announcements / Reminder
15:02:13 #info This week is R-5. Client library release deadline and feature freeze.
15:02:36 This is about the right time to stop writing new features and start finishing whatever is proposed already.
15:02:48 o/
15:03:01 After Thursday (pending Julia's decision) new features will need an exception.
15:03:49 o/
15:04:12 #link https://etherpad.openstack.org/p/DEN-train-ironic-brainstorming Please keep suggesting your ideas for the Forum/PTG
15:04:28 #info dtantsur is at the Ops Meetup Wednesday-Thursday
15:04:43 #link https://etherpad.openstack.org/p/BER19-OPS-BAREMETAL baremetal section of the ops meetup, ideas welcome
15:04:59 anything else to announce? any questions?
15:05:10 o/
15:05:13 TC elections - vote. Ends tomorrow.
15:05:17 oh right
15:05:23 Hi folks, this is my last meeting as the OpenStack Outreachy intern. During my internship I have acquired a lot of knowledge thanks to my mentors etingof, dtantsur and my team. Before the internship I didn't know anyone from the IT industry. Thanks to my mentors & Outreachy I got the chance to meet and work with super talented and nice people from around the world, which I never would have been able to
15:05:29 do before.
15:05:31 looks like Train PTL self-nomination starts this week (according to the Stein schedule)
15:05:38 I just want to say, thank you Ironicers! It's been a privilege working with you all :)
15:05:51 dnuka: thank YOU for working with us :)
15:05:52 AND... the Stein schedule has 'Stein Community Goals Completed' this week
15:05:57 tks for the reminder rloo
15:06:04 tks for the work dnuka
15:06:11 dnuka: thanks for spending time with us, glad to hear you enjoyed it :) have fun out there!
15:06:14 thanks dnuka, and good luck! :)
15:06:17 * dtantsur tries to remember the community goals for this cycle
15:06:22 :)
15:06:27 dnuka, thanks for your help and good luck :)
15:06:30 python3 XD
15:06:58 python3 and upgrade checkers, i guess
15:07:07 * etingof hopes that dnuka won't get far from us
15:07:09 dnuka: Well done! Thank you for your contributions.
15:07:40 okay, both look pretty good
15:07:44 anything else?
15:08:02 thanks rpioso, dtantsur, rpittau, jroll, iurygregory :)
15:08:10 btw, we got two new ironic projects approved for the next Outreachy round
15:08:52 \o/
15:09:05 thank you rloo :)
15:09:06 https://governance.openstack.org/tc/goals/stein/index.html
15:09:07 gtz
15:09:17 i think the two
15:09:24 thanks kaifeng
15:09:30 We have no action items from the previous meeting. Moving on to the statuses?
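For context on the "upgrade checkers" community goal mentioned above: the Stein goal gives every project a "$PROJECT-status upgrade check" command built on the oslo.upgradecheck library. A minimal sketch of what such a check looks like follows; the check itself and the 'myproject' name are placeholders, not ironic's actual checks:

    # Minimal sketch of an upgrade check built on oslo.upgradecheck.
    # The placeholder check always succeeds; a real check would inspect
    # configuration or the database and return WARNING/FAILURE as needed.
    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck


    class Checks(upgradecheck.UpgradeCommands):

        def _placeholder_check(self):
            return upgradecheck.Result(upgradecheck.Code.SUCCESS,
                                       'Nothing to report')

        # (name, method) pairs run by "<project>-status upgrade check".
        _upgrade_checks = (('Placeholder check', _placeholder_check),)


    def main():
        return upgradecheck.main(cfg.CONF, project='myproject',
                                 upgrade_command=Checks())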
15:09:36 ++ movin'
15:09:41 ++
15:09:45 #topic Review subteam status reports
15:09:45 let's move :)
15:10:03 #link https://etherpad.openstack.org/p/IronicWhiteBoard starting around line 272
15:10:47 those who read bug stats may notice that I triaged quite a few stories this morning :)
15:12:28 yeah, got quite a lot of email notifications
15:12:45 heh
15:13:47 not many status updates so close to the release, I'm done.
15:14:54 how's everyone?
15:14:57 so python3 is one of the community goals.
15:15:18 there are a bunch of maybe's there... L295+
15:15:18 rloo, correct :)
15:15:28 what is 'minimum'?
15:15:44 I think minimum is py36
15:15:57 rloo: minimum is voting jobs on py36, both integration and unit tests
15:16:12 so the overall openstack plan is py2 and py3 fully supported in Train, dropping py2 in U
15:16:13 I think everything else is a stretch goal (py37, zuul v3, etc)
15:16:16 (for context)
15:16:16 what i meant is, of all that stuff there, what should we get done to satisfy the goal?
15:16:44 lines 300-305
15:16:49 at a minimum we *need* py3 support *fully* completed by Train
15:17:12 I guess grenade cannot be switched until Train..
15:17:50 Ah, the Stein goal page has a list of 'Completion Criteria': https://governance.openstack.org/tc/goals/stein/python3-first.html
15:18:43 it seems ironicclient is the only one left?
15:19:28 we should prioritize whatever is left... is derekh here?
15:20:06 anyway, we can take that offline. would be good to know what exactly needs to be done and if we need volunteers to work on it, etc.
15:20:10 here, reading back
15:20:47 rloo, I'm actually working on that :)
15:20:59 jroll: quickly, for nova conductor group awareness -- the nova patch merged, so done?
15:21:09 rloo: ah yes, sorry
15:21:16 * jroll forgets that is on there
15:21:26 jroll: no worries, i just updated
15:21:31 jroll: and yay!
15:21:38 thanks!
15:22:29 iirc we could switch some of the non-grenade ironic jobs with a flag, not sure about the grenade ones though
15:22:40 the neutron event processing stuff is out of date too.
15:23:13 other than that, i'm good
15:23:47 Nikolay Fedotov proposed openstack/ironic-inspector master: Use getaddrinfo instead of gethostbyname while resolving BMC address https://review.openstack.org/626552
15:23:50 #topic Deciding on priorities for the coming week
15:24:17 I guess we need it on the priorities list, so hjensas could you update the events patches on the whiteboard and throw them on the priority list?
15:25:23 What ironicclient changes do we still need? Only deploy templates?
15:25:43 dtantsur: i think so. that's what i wanted to know too :)
15:26:12 the configdrive change I pushed does not (for now?) have a client part
15:26:29 dtantsur: oh. neutron events will need client (python API) changes
15:26:36 rloo: I *think* it's done
15:26:58 dtantsur: oh, then we're good there. (need to catch up on neutron events)
15:27:02 https://github.com/openstack/python-ironicclient/commit/e8a6d447f803c115ed57064e6fada3e9d6f30794
15:27:13 we did API+client before implementing the actual processing
15:27:18 dtantsur: awesome.
15:28:09 so, seems only deploy templates, and it seems very close. good!
15:28:09 dtantsur: i think the only thing left besides deploy templates is to go over any bugs/PRs against the client to see if we want to land any of those. and do the zuulv3 migration?
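For reference on the deploy templates work discussed just above, a rough sketch of the REST API it adds. The endpoint, token, and BIOS setting name are placeholder assumptions, and this assumes the API shape from the deploy templates work in progress at the time (introduced with API microversion 1.55); a real client would authenticate via keystoneauth rather than a raw token header:

    # Sketch: creating a deploy template via the bare metal REST API.
    import requests

    IRONIC = 'http://ironic.example.com:6385'  # placeholder endpoint
    HEADERS = {
        'X-Auth-Token': 'placeholder-token',
        'X-OpenStack-Ironic-API-Version': '1.55',
    }

    template = {
        'name': 'CUSTOM_HYPERTHREADING_ON',
        'steps': [{
            'interface': 'bios',
            'step': 'apply_configuration',
            # The setting name/value are made up for illustration.
            'args': {'settings': [{'name': 'LogicalProc',
                                   'value': 'Enabled'}]},
            'priority': 150,
        }],
    }

    resp = requests.post(IRONIC + '/v1/deploy_templates',
                         json=template, headers=HEADERS)
    resp.raise_for_status()
    print(resp.json()['uuid'])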
https://review.openstack.org/#/c/633010/
15:28:10 patch 633010 - python-ironicclient - Move to zuulv3 - 16 patch sets
15:28:22 rloo: yep
15:30:03 it feels like the events work will have to request an FFE
15:30:06 who is doing L170: check missing py36 unit test jobs on all projects?
15:30:13 volunteers ^^?
15:30:30 dtantsur: wrt events, let's see where it is at on Thursday?
15:30:40 yeah. for now two patches are WIP.
15:31:19 dtantsur: so either FFE or punt to Train. Guess Julia can make that decision on Thurs since you'll be at that Ops thing.
15:31:27 yep
15:32:07 is hjensas around?
15:32:58 I guess not. Would just be nice to get a feel for how far off he thinks it is
15:33:02 iurygregory, rpittau, any of you want to check all our projects for the presence of a py36 unit test job?
15:33:09 yeah..
15:33:12 dtantsur, i can do that
15:33:17 =)
15:33:32 thanks!
15:33:32 dtantsur, I'm already doing that :)
15:33:41 hah, okay, so I'll leave it up to you two :)
15:33:56 I had a slightly awkward thought about doing the neutron event processing via deploy steps
15:34:04 hmmmmmm
15:34:13 main downside is that it would complicate the steps seen by the user
15:34:17 when i was suspecting some project might not have a py36 job, i found that ngs requires one..
15:34:29 * mgoddard stops derailing
15:34:38 mgoddard: is it only at deployment? don't we get events (or won't we) when deleting?
15:34:45 and probably cleaning
15:34:46 mgoddard: and yeah, we can discuss later
15:34:53 and potentially inspection \o/
15:34:54 kaifeng, most of the projects are good already
15:34:59 * dtantsur stops derailing as well
15:35:08 okay folks, how's the list looking?
15:35:13 anything that must be there?
15:35:41 i wonder what's the status of molteniron, it seems it hasn't been maintained
15:35:45 dtantsur, can we add one patch for zuulv3?
15:35:46 so there are as-yet-unwritten patches to make deploy templates more useful
15:35:57 iurygregory: add it after line 166
15:36:05 mgoddard: for example?
15:36:14 i.e. to add deploy_step decorators for RAID
15:36:22 and BIOS
15:36:27 that fast track support is a feature. anyone know the status of that? (L196ish).
15:37:12 I'm out until Thursday, so it would have to be an FFE if we wanted to look at that
15:37:13 rloo: I think the status is "Julia is close to getting it working"
15:37:28 nvm, both are there
15:37:33 I doubt it's making Stein, but we'll see..
15:37:37 it's not critical to the deploy templates feature, but we did talk about doing it at the PTG
15:37:49 mgoddard: so do you think we shouldn't merge the existing deploy templates work w/o the other stuff?
15:38:04 rloo: I think it's fine to have the foundational bits/API in place
15:38:16 dtantsur: yeah, me too. just wanted mgoddard's opinion
15:38:22 and I'll personally be open to an FFE for RAID/BIOS steps
15:38:31 rloo: I think we should merge - it could be used with a 3rd party driver that has some steps, for example
15:38:44 that's probably what I'd demo if we don't get that part into Stein
15:39:12 mgoddard: Is it the decorator infrastructure or adding them to the individual driver methods?
15:39:13 mgoddard, dtantsur: ok, let's proceed then. and yes to FFE for raid/bios. depends on the work involved there but should be minimal. i am wondering though if we need to split up the existing big deploy step for anything to work.
15:39:41 rloo: I think splitting the core step requires careful discussion. I'd postpone it until the PTG.
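To make the RAID/BIOS deploy-step idea above concrete, a sketch of what such a PoC might look like using ironic's existing deploy step decorator (mentioned below as possibly needing tweaking). The interface class, step body, and argsinfo are illustrative assumptions, not the code that was eventually merged, and this assumes deploy_step accepts argsinfo the way clean_step does:

    # Illustrative only: exposing a RAID operation as a deploy step
    # via the decorator in ironic.drivers.base.
    from ironic.drivers import base


    class ExampleRAID(base.RAIDInterface):  # placeholder interface class

        @base.deploy_step(priority=0, argsinfo={
            'raid_config': {
                'description': 'The RAID configuration to apply.',
                'required': True,
            },
        })
        def apply_configuration(self, task, raid_config):
            # priority=0 would mean the step only runs when explicitly
            # requested, e.g. via a deploy template. A real step would
            # validate raid_config and talk to the BMC or deploy agent.
            pass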
15:39:43 I could probably put together a PoC for BIOS/RAID this evening, then we can use that to evaluate the FFE
15:39:47 mgoddard: wanna add the topic for ^^^?
15:40:31 rpioso: individual driver methods, possibly even base driver interface classes if we can do it generically
15:40:35 dtantsur: definitely. i was thinking that if we did have to split up the deploy step, then it won't get done until Train.
15:40:54 I think we can do it later without breaking the API
15:41:08 dtantsur: you mean add the deploy templates topic to my PoC?
15:41:09 * rpioso would like to chat with mgoddard after the meeting
15:41:10 we already have a deploy step decorator, it might need tweaking if anything.
15:41:18 mgoddard: I mean, splitting the core step
15:41:41 also, ready to move on? I have a few RFEs to review (for Train, I guess).
15:41:47 sure
15:41:49 ++ move on
15:42:12 #topic RFE review
15:42:19 I'll start with the one I posted
15:42:26 #link https://storyboard.openstack.org/#!/story/2005126 Updating name and extra for allocations
15:42:49 This is mostly an oversight on my side, the description should be pretty clear
15:43:20 seems reasonable
15:44:04 any comments? objections?
15:44:23 works for me
15:44:32 added rfe-approved
15:44:53 thanks!
15:45:13 #link https://storyboard.openstack.org/#!/story/2005119 iDRAC OOB inspection to set boot_mode capability
15:45:48 IB introspection sets that flag, which allows UEFI boot mode to "just work"
15:46:05 the idea is to bring OOB introspection to parity
15:46:20 this seems consistent with what ironic-inspector already does
15:46:23 so that UEFI boot mode will just work with the iDRAC driver
15:46:26 yes
15:46:37 The story's headline is mislabeled. Remove [RFE].
15:46:47 rpioso: no, it's an RFE
15:46:59 Seems like a bug fix to me.
15:47:18 If inspection should set boot_mode.
15:47:30 it's a bit of a stretch to call it a bug fix, given that it never worked..
15:48:19 how many drivers are actually setting boot_mode during inspection?
15:48:33 idrac
15:48:54 well, currently none
15:49:04 (only ironic-inspector)
15:49:20 so it does sound like a new feature to me, even though I'm open to approving it right away
15:50:10 no one has objections to it
15:50:11 it should be ok i suppose, as drac does not have the ability to perform the management operation set_boot_mode()
15:50:46 this would provide an option to update the ironic node with the available boot mode on the node
15:50:51 * dtantsur marks as approved
15:51:17 let's debate rfe vs bug a bit later, I have a few more items to go over
15:51:19 * rpioso meant idrac does not set it
15:51:33 #link https://storyboard.openstack.org/#!/story/2005060 Add option to control if 'available' nodes can be removed
15:51:34 dtantsur: ty
15:51:39 arne_wiebalck_: if you're around ^^^
15:52:05 yes
15:52:13 Iury Gregory Melo Ferreira proposed openstack/python-ironicclient master: Move to zuulv3 https://review.openstack.org/633010
15:52:13 I can imagine why people may want to prevent deletion of available nodes
15:52:22 This is to protect nodes in 'available' from being deleted.
15:52:42 Happened to us, so we thought some protection would be nice.
15:52:55 I'm +1, jroll apparently as well. Objections?
15:53:22 should we allow a force mode to bypass the restriction?
15:53:35 rpittau: in maintenance probably?
15:53:36 * jroll has totally been there
15:53:48 bypass is the default and current behavior
15:53:52 arne_wiebalck: wdyt about still allowing deletion in maintenance?
15:54:02 even with this option?
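One possible shape of the option under discussion, as an oslo.config sketch; the option name and config group are assumptions pending the naming discussion below:

    # Sketch of the proposed option; name and group are assumptions.
    from oslo_config import cfg

    opts = [
        cfg.BoolOpt('allow_delete_available_nodes',
                    default=True,  # keeps the current behaviour by default
                    mutable=True,
                    help='Whether nodes in the "available" state may be '
                         'deleted from ironic.'),
    ]

    cfg.CONF.register_opts(opts, group='conductor')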
15:54:09 dtantsur, sounds good
15:54:34 sounds good
15:54:44 so to be clear, what does 'removing' mean?
15:54:47 'available' means ready for use
15:54:53 rloo: 'baremetal node delete'
15:54:56 cuz we use 'delete', 'destroy'.
15:54:56 remove == delete from ironic
15:55:08 ah, so destroy
15:55:19 rloo: :-D
15:55:35 maybe call it allow_destroy_available_nodes?
15:55:36 annihilate
15:55:52 (no comment on the terms we already use, delete & destroy)
15:56:03 rloo: the downside is that we never use "destroy" in user-visible parts, only internally
15:56:18 dtantsur: oh, we use 'delete'.
15:56:24 the API is `DELETE /v1/nodes/`, the CLI is 'baremetal node delete'. not sure about the Python API though..
15:56:37 dtantsur: that's why I used delete (but then realised it's destroy internally)
15:56:44 ok, allow_delete_available_nodes. and of course, we will have words to describe it...
15:57:24 yeah, let's settle on "delete". the Python API also uses it.
15:57:36 arne_wiebalck: please update the RFE, then we can approve it.
15:57:45 dtantsur: ok
15:57:54 next?
15:58:18 #link https://storyboard.openstack.org/#!/story/2005113 Redfish firmware update
15:58:32 This one may need more details, and I've increased its scope a bit from the initial version by etingof
16:00:09 * dtantsur hears crickets
16:00:25 I guess we can leave it till next time. Thanks all!
16:00:42 oh wow, time already
16:00:44 yeah
16:00:46 Will there be a spec?
16:00:49 thanks! o/
16:00:50 #endmeeting