16:01:00 #startmeeting ironic_bfv
16:01:01 Meeting started Thu May 25 16:01:00 2017 UTC and is due to finish in 60 minutes. The chair is TheJulia. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:04 The meeting name has been set to 'ironic_bfv'
16:01:12 o/
16:01:16 o/
16:01:23 Greetings everyone!
16:01:41 hey TheJulia :)
16:01:58 As always, our agenda for today is on our wiki page.
16:01:59 #link https://wiki.openstack.org/wiki/Meetings/Ironic-BFV#Agenda
16:02:11 #topic Announcements/Reminders
16:02:30 o/
16:02:57 I have one announcement/ask today: I'll potentially be in meetings on Thursday, and wonder if someone would be willing to run this meeting next week if I'm unavailable.
16:03:08 o/ dtantsur
16:03:08 I can run it!
16:03:15 mjturek: Awesome
16:03:38 #info mjturek has volunteered to run the next meeting if TheJulia is unavailable.
16:03:44 Anyone have anything else?
16:03:48 thanks mjturek!
16:04:14 np, I'll ping you beforehand to confirm, TheJulia
16:04:27 mjturek: excellent, thanks!
16:04:52 So if nobody has anything else for announcements, we can move on.
16:05:15 #topic Current Status
16:05:28 #link https://etherpad.openstack.org/p/Ironic-BFV
16:05:51 It looks like we managed to get a few patches landed this week \o/
16:05:54 so the cinder driver has merged - thanks dtantsur and sambetts for the reviews
16:06:00 np :)
16:07:07 hshiina: Regarding the status etherpad, I noticed you updated a patch or two. When I went through the revisions about an hour ago, I noted their current state, so the etherpad may be slightly out of date for those revisions.
16:08:10 sorry, I started rebasing a few patches.
16:08:12 mjturek: you indicated you only rebased and put a change in on the wire-in patch. I think I have some minor changes locally for that one as well, and I'll have to reconcile that and push it up.
16:08:26 hshiina: no apologies necessary :)
16:09:14 cool, yep, I added 'cinder' to the default drivers list (and updated the commit message to indicate that). Tests were failing.
16:09:19 And mjturek has a few discussion items for the ipxe patch, so I guess we should move along. :)
16:09:38 Awesome
16:10:14 Time to discuss priorities?
16:10:15 oh - I uploaded a follow-up patch for the cinder driver as well
16:10:29 I saw, thank you
16:10:36 take a look, but I don't like that the detach_volumes signature is different from the base class :-\
16:11:10 #topic Planning/Priorities
16:11:52 Priorities as I see them are the ipxe patch, the wire-in patch (which I should be able to reconcile and get updated quickly), and then the deploy skip patch. The last of these may be impacted by one of the discussion items.
16:12:12 sounds good to me
16:12:42 sounds good to me, too
16:13:02 Sounds like we are in agreement.
16:13:11 ++
16:13:28 And with that, if nobody has anything else, we should move on to discussion :)
16:14:08 #topic Discussion
16:14:30 * TheJulia hands the microphone to mjturek
16:14:39 So the first item is on the wire-in patch
16:14:57 dtantsur mentioned that we should probably move the storage ops up from the driver level
16:15:04 https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/pxe.py@291
16:15:11 yeah
16:15:32 seems like the right approach, as the code will need to be in each deploy module the current way
16:15:39 otherwise we're doomed to copy-paste it
16:15:42 :)
16:15:51 like with networking, which is something vdrok wants to address
16:16:32 I can volunteer to look into it as a follow-up, unless you'd like to address it in the patch, TheJulia
16:17:00 (or if there's a disagreement here - maybe there's a good reason for it being at the driver level?)
16:17:08 I'm fine with a follow-up, as long as it happens sooner rather than later (e.g. as the next patch in the chain)
16:17:43 So move any helpers into centralized helpers, and move the wire-in calls into the conductor?
16:18:02 Have we identified where we would call the networking in the conductor?
16:18:25 networking is a separate thing, I guess. or do we want to solve both at once?
16:18:27 * TheJulia feels like it would be best as a follow-up for the conductor part
16:18:52 I'm just suggesting that we don't place volume actions in the deploy drivers (or move them away soon)
16:18:53 I feel like we are going to step on each other's feet if we don't have a basic agreement on how to do it before we try
16:19:04 yeah, I'd think we'd want to do it around the same spot as the networking
16:19:16 maybe vdrok has a plan there already, I can ask him
16:19:22 he has a spec
16:19:30 oh cool
16:19:43 #link https://review.openstack.org/#/c/468000/
16:19:54 * dtantsur is not sure he can use #link here
16:19:56 I guess there are only two logical places for any of it to really go in the conductor
16:20:20 #link https://review.openstack.org/#/c/468000/
16:20:29 I think anyone can, but we can look at the logs later to be sure :)
16:21:11 I feel like the resolution is: talk to vdrok, get on the same page, and determine the happy path from there.
16:21:21 sounds good
16:21:23 and well, read the spec too
16:21:58 :)
16:22:00 So, your next item, mjturek?
16:22:22 sure
16:22:31 #link https://review.openstack.org/#/c/406290/25/ironic/drivers/modules/agent.py@396
16:22:46 whoops
16:23:07 I swapped the links by accident :)
16:23:12 https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/pxe.py@291
16:23:21 this is the one wrt sanhook
16:23:37 I just figured that out too, but the other item does kind of fit within the same topic in a way
16:23:41 in part
16:23:49 yep, true!
16:23:52 Anyway!
16:24:19 so I think Anton is correct about the separate sanhooks, but I was wondering if people had more insight
16:24:37 basically, for multipath the ipxe documentation shows a single call for both paths
16:24:50 while we're doing one sanhook call for each
16:25:17 should we be anticipating multipath here and doing the one call instead?
16:25:39 are there any reasons not to?
16:26:44 honestly I'm not sure. I would think there aren't any reasons not to, but I'm not sure whether it's worth the effort.
16:26:51 So sanhook should be explicitly executed for each "device"; a single device could be defined twice, which would represent another possible path, but the OS has to handle that.
16:27:03 https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/ipxe_config.template <-- does sanhooks and a sanboot
16:27:38 mjturek: last I looked it didn't reference multiple paths
16:27:59 dtantsur: no reason not to, we should pass it if we can
16:28:06 ack
16:30:00 hmm okay
16:30:03 mjturek: so I'm not sure we have a good way of de-duplicating all of the possible paths. Conceptually for us, they would be unique targets.
16:30:59 okay, so basically the OS will handle it
16:31:06 the OS is the only thing that can truly consolidate that back together, by looking at the data it gets via the connection and realizing whether it is a duplicate that is locked or a unique device
16:31:08 yeah
16:31:18 * TheJulia gets out the "There be dragons here" sign
16:31:22 fair enough :)
16:31:39 * TheJulia makes a nice concrete foundation for the sign, so it stays around.
16:31:57 hahaha, okay, I'm cool on this then. I'll reply to Anton.
16:32:23 I'm feeling like ipxe may also have just added multipath iscsi to their docs....
16:32:44 as in, they added it recently?
16:32:56 yes
16:33:02 interesting
16:33:03 * dtantsur prepares his usual complaint about ipxe versioning and releases
16:33:23 * TheJulia gives dtantsur the ipxe soapbox
16:33:41 :D
16:33:47 last modified 2017/03/26, so quite possible :-|
16:34:04 Yeah, I think I wrote that code late last year, so yeah
16:34:09 :(
16:34:28 Anyway, we're in a catch-22 there, but the OS should be responsible for MPIO path management
16:34:42 I hope we won't start depending on the iPXE version from this March :)
16:34:46 The device should get locked along one path anyway
16:34:48 yeaaaah, sounds reasonable to me
16:34:51 :)
16:35:07 dtantsur: +^one_million
16:36:03 If the device is not locked, then either the OS is doing something stupid or the storage is doing something stupid, and chaos can then ensue regardless, because it would have on its own anyway.
16:36:18 (as in, in no way, shape, or form MPIO safe)
16:36:50 I _think_ we're good
16:37:36 Anything else?
16:38:05 I'm good
16:38:12 ditto
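
To illustrate the one-sanhook-per-target approach discussed above, here is a minimal Python sketch. It is not ironic's actual ipxe_config.template rendering; the function name, the target tuple layout, and the drive numbering are illustrative assumptions. It emits one iPXE sanhook line per iSCSI target, followed by a single sanboot:

    # Minimal sketch (not ironic's actual template code): render an iPXE
    # script that attaches each iSCSI target with its own sanhook, then boots.
    def render_ipxe_san_script(initiator_iqn, targets):
        """targets: iterable of (portal_ip, port, target_iqn, lun) tuples."""
        lines = ['#!ipxe', 'set initiator-iqn %s' % initiator_iqn]
        for index, (portal, port, iqn, lun) in enumerate(targets):
            # One sanhook per target. iPXE SAN URIs have the form
            # iscsi:<server>:<protocol>:<port>:<lun>:<target-iqn>
            lines.append('sanhook --drive 0x%x iscsi:%s::%s:%s:%s'
                         % (0x80 + index, portal, port, lun, iqn))
        # Boot from the first hooked drive; any multipath consolidation is
        # left to the booted OS, as discussed above.
        lines.append('sanboot --no-describe --drive 0x80')
        return '\n'.join(lines)

    print(render_ipxe_san_script(
        'iqn.2017-05.org.openstack:initiator',
        [('192.0.2.10', 3260, 'iqn.2017-05.org.example:volume-1', 0)]))

With multiple targets the script simply gains additional sanhook lines; per the discussion above, de-duplicating paths to the same device is the operating system's job, not iPXE's.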
16:38:13 #topic Open Discussion
16:38:19 Anyone have anything for open discussion?
16:39:06 I noticed cinder doesn't accept metadata with a dict
16:39:21 when ironic sets metadata in attach/detach
16:40:02 we need to change the metadata not to use a dict
16:40:10 to just be a string?
16:40:11 hshiina: what should it be using?
16:40:31 TheJulia, yes
16:41:11 hshiina: I think you mean a non-nested dict, no? /me tries to remember the patch
16:41:39 dtantsur, yes, non-nested
16:41:50 hshiina: https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/pxe.py@291 referring to the connector?
16:42:43 mjturek: I think he is referring to the metadata posted to the cinder API upon attachment
16:43:05 of the extra metadata information, which I thought was listed as a dictionary :(
16:43:18 Easy enough to look it up and double-check cinder's API code
16:43:56 sorry, wrong link
16:43:59 https://review.openstack.org/#/c/366197/47/ironic/drivers/modules/storage/cinder.py@327
16:45:19 ironic creates metadata here: https://github.com/openstack/ironic/blob/767ed8002045defa0a3e982c22a0c4856796df7e/ironic/common/cinder.py#L425
16:45:40 ohhh got it
16:45:41 thanks
16:46:24 http://git.openstack.org/cgit/openstack/ironic/tree/ironic/common/cinder.py#n312
16:47:50 https://developer.openstack.org/api-ref/block-storage/v3/index.html?expanded=create-metadata-for-volume-detail#create-metadata-for-volume
16:47:58 so a string of JSON, I guess....
16:50:05 hshiina: Could you create a patch to fix it for your testing?
16:50:26 It seems super simple, since we have the utilities to do it
16:51:50 TheJulia, I created a patch. https://review.openstack.org/#/c/467930/
16:52:13 This issue is not critical; a warning is always logged.
16:53:03 I reported it to cinder: https://bugs.launchpad.net/cinder/+bug/1690363
16:53:04 Launchpad bug 1690363 in Cinder "Nested metadata to volume causes an internal server error" [Undecided,Fix released] - Assigned to luqitao (qtlu)
16:53:42 only the response code was fixed; the cinder API doesn't accept nested metadata.
16:53:59 okay
16:54:04 That makes sense then, I guess
16:54:08 Thank you!
16:54:16 cool
16:54:23 Well, if there is nothing else, we have six minutes left...
16:55:28 Well, I'm taking the silence as it is time to call the meeting to an end
16:55:31 Thank you everyone!
16:55:36 thanks all!
16:55:38 ttyl
16:55:41 thank you
16:55:46 #endmeeting
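
On the nested-metadata issue raised in open discussion: the Block Storage API rejects nested volume metadata, so any structured values have to be serialized to plain strings before being posted. The following is a minimal sketch of that flattening, assuming a hypothetical flatten_metadata helper and example attachment fields rather than ironic's actual cinder.py code:

    import json

    # Minimal sketch (not ironic's actual implementation): Cinder volume
    # metadata must be a flat mapping of string keys to string values, so
    # nested values are serialized to JSON strings before the API call.
    def flatten_metadata(metadata):
        flat = {}
        for key, value in metadata.items():
            if isinstance(value, str):
                flat[key] = value
            else:
                # Nested dicts, lists, numbers, etc. become JSON strings.
                flat[key] = json.dumps(value)
        return flat

    # Hypothetical attachment metadata with one nested value.
    attach_info = {
        'instance_uuid': 'hypothetical-uuid',
        'attached_at': '2017-05-25T16:45:00',
        'connector': {'ip': '192.0.2.20', 'initiator': 'iqn.2017-05.org:node'},
    }
    print(flatten_metadata(attach_info))

The flattened dict can then be sent to the volume metadata endpoint referenced above; consumers that need the original structure can json-load the string values back.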