16:01:00 <TheJulia> #startmeeting ironic_bfv
16:01:01 <openstack> Meeting started Thu May 25 16:01:00 2017 UTC and is due to finish in 60 minutes.  The chair is TheJulia. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:04 <openstack> The meeting name has been set to 'ironic_bfv'
16:01:12 <mjturek> o/
16:01:16 <hshiina> o/
16:01:23 <TheJulia> Greetings everyone!
16:01:41 <mjturek> hey TheJulia :)
16:01:58 <TheJulia> As always, our agenda for today is on our wiki page.
16:01:59 <TheJulia> #link https://wiki.openstack.org/wiki/Meetings/Ironic-BFV#Agenda
16:02:11 <TheJulia> #topic Announcements/Reminders
16:02:30 <dtantsur> o/
16:02:57 <TheJulia> I have one announcement/ask today from myself, I'll potentially be in meetings on Thursday and wonder if someone would be willing to run this meeting next week if I'm unavailable.
16:03:08 <TheJulia> o/ dtantsur
16:03:08 <mjturek> I can run it!
16:03:15 <TheJulia> mjturek: Awesome
16:03:38 <TheJulia> #info mjturek has volunteered to run the next meeting if TheJulia is unavailable.
16:03:44 <TheJulia> Anyone have anything else?
16:03:48 <dtantsur> thanks mjturek!
16:04:14 <mjturek> np, I'll ping you beforehand to confirm, TheJulia
16:04:27 <TheJulia> mjturek: excellent, Thanks!
16:04:52 <TheJulia> So if nobody has anything else for announcements, we can move on
16:05:15 <TheJulia> #topic Current Status
16:05:28 <TheJulia> #link https://etherpad.openstack.org/p/Ironic-BFV
16:05:51 <TheJulia> It looks like we managed to get a few patches landed this week \o/
16:05:54 <mjturek> so cinder driver has merged - thanks dtantsur and sambetts for the reviews
16:06:00 <dtantsur> np :)
16:07:07 <TheJulia> hshiina: Regarding the status etherpad, I noticed you updated a patch or two.  When I went through the revisions about an hour ago, I noted their current state, so the etherpad may be slightly out of date for those revisions.
16:08:10 <hshiina> sorry, i started rebasing a few patches.
16:08:12 <TheJulia> mjturek: you indicated you only rebased and made one change to the wire-in patch. I think I have some minor changes locally for that one as well, so I'll have to reconcile them and push it up
16:08:26 <TheJulia> hshiina: no apologies necessary :)
16:09:14 <mjturek> cool, yep I added 'cinder' to the default drivers list (and updated the commit message to indicate that). Tests were failing without it
16:09:19 <TheJulia> And mjturek has a few discussion items for the ipxe patch, so I guess we should move along. :)
16:09:38 <TheJulia> Awesome
16:10:14 <TheJulia> Time to discuss priorities?
16:10:15 <mjturek> oh - I uploaded a follow up patch for the cinder driver as well
16:10:29 <TheJulia> I saw, thank you
16:10:36 <mjturek> take a look, but I don't like that the detach_volumes signature is different from the base class's :-\
16:11:10 <TheJulia> #topic Planning/Priorities
16:11:52 <TheJulia> Priorities as I see them is the ipxe patch, the wire-in patch which I should be able to reconcile and get updated quickly, and then the deploy skip patch.  The last of which may be impacted by one of the discussion items.
16:12:12 <mjturek> sounds good to me
16:12:42 <hshiina> sounds good to me, too
16:13:02 <TheJulia> Sounds like we are in agreement.
16:13:11 <dtantsur> ++
16:13:28 <TheJulia> And with that, if nobody has anything else, then we should move on to discussion :)
16:14:08 <TheJulia> #topic Discussion
16:14:30 * TheJulia hands the microphone to mjturek
16:14:39 <mjturek> So first item is on the wirein patch
16:14:57 <mjturek> dtantsur mentioned that we should probably move the storage ops up from the driver level
16:15:04 <mjturek> https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/pxe.py@291
16:15:11 <dtantsur> yeah
16:15:32 <mjturek> seems like the right approach; the current way, the code would need to be duplicated in each deploy module
16:15:39 <dtantsur> otherwise we're doomed to copy-paste it
16:15:42 <mjturek> :)
16:15:51 <dtantsur> like with networking, which is something vdrok wants to address
16:16:32 <mjturek> I can volunteer to look into it as a followup, unless you'd like to address it in the patch TheJulia
16:17:00 <mjturek> (or if there's a disagreement here - maybe there's a good reason for it being in the driver level?)
16:17:08 <dtantsur> I'm fine with a follow-up, as long as it happens rather sooner than later (e.g. as the next patch in the chain)
16:17:43 <TheJulia> So move any helpers into centralized helpers, and move the wire-in calls into the conductor?
16:18:02 <TheJulia> Have we identified where we would call the networking in the conductor?
16:18:25 <dtantsur> networking is a separate thing, I guess. or do we want to solve both at once?
16:18:27 * TheJulia feels like it would be best as a follow-up for the conductor part
16:18:52 <dtantsur> I'm just suggesting that we don't place volume actions in the deploy drivers (or move it away soon)
16:18:53 <TheJulia> I feel like we are going to step on each other's toes if we don't have a basic agreement on the how in place before we try to do it
16:19:04 <mjturek> yeah I'd think we'd want to do it around the same spot as the networking
16:19:16 <mjturek> maybe vdrok has a plan there already, I can ask him
16:19:22 <dtantsur> he has a spec
16:19:30 <mjturek> oh cool
16:19:43 <dtantsur> #link https://review.openstack.org/#/c/468000/
16:19:54 * dtantsur is not sure he can use #link here
16:19:56 <TheJulia> I guess there are only two logical places for any of it to really go in the conductor
16:20:20 <TheJulia> #link https://review.openstack.org/#/c/468000/
16:20:29 <TheJulia> I think anyone can, but we can look at the logs later to be sure :)
16:21:11 <TheJulia> I feel like the resolution is talk to vdrok, get on the same page, and determine the happy path from there.
16:21:21 <mjturek> sounds good
16:21:23 <TheJulia> and well, read the spec too
16:21:58 <mjturek> :)
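(For reference, a minimal sketch of what lifting the storage calls out of the deploy drivers and into the conductor might look like. The attach_volumes/detach_volumes names come from the storage interface under review; the surrounding conductor-side flow is illustrative only, not the agreed design.)

    def do_node_deploy(task):
        # Illustrative conductor-side deploy flow: attach once here,
        # rather than inside each deploy interface (cf. the pxe.py
        # hunk linked above).
        try:
            task.driver.storage.attach_volumes(task)
            task.driver.deploy.prepare(task)
            task.driver.deploy.deploy(task)
        except Exception:
            # Cleanup mirrors the attach call, so no deploy driver
            # has to remember to detach on failure.
            task.driver.storage.detach_volumes(task)
            raise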
16:22:00 <TheJulia> So, your next item mjturek ?
16:22:22 <mjturek> sure
16:22:31 <mjturek> #link https://review.openstack.org/#/c/406290/25/ironic/drivers/modules/agent.py@396
16:22:46 <mjturek> whoops
16:23:07 <mjturek> I swapped the links by accident :)
16:23:12 <mjturek> https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/pxe.py@291
16:23:21 <mjturek> this is the one wrt sanhook
16:23:37 <TheJulia> I just figured that out too, but the other item does kind of fit within the same topic in a way
16:23:41 <TheJulia> in part
16:23:49 <mjturek> yep true!
16:23:52 <TheJulia> Anyway!
16:24:19 <mjturek> so I think Anton is correct about the separate sanhooks but I was wondering if people had more insight
16:24:37 <mjturek> basically, for multipath the ipxe documentation shows a single call for both paths
16:24:50 <mjturek> while we're doing one sanhook call for each
16:25:17 <mjturek> should we be anticipating multipath here and doing the one call instead?
16:25:39 <dtantsur> are there any reasons not to?
16:26:44 <mjturek> honestly I'm not sure. I would think there aren't any reasons not to, but I'm not sure it's worth the effort
16:26:51 <TheJulia> So sanhook should be explicitly executed for each "device". A single device could be defined twice, which would represent another possible path, but the OS has to handle reconciling that
16:27:03 <TheJulia> https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/ipxe_config.template <-- does sanhooks and a sanboot
16:27:38 <TheJulia> mjturek: last I looked it didn't reference multiple paths
16:27:59 <TheJulia> dtantsur: no reason not to, we should pass it if we can
16:28:06 <dtantsur> ack
16:30:00 <mjturek> hmm okay
16:30:03 <TheJulia> mjturek: so I'm not sure we have a good way of de-duplicating all of the possible paths.  Conceptually for us, they would be unique targets
16:30:59 <mjturek> okay so basically the OS will handle it
16:31:06 <TheJulia> the OS is the only thing that can truly consolidate that back together, by looking at the data it gets via the connection and working out whether it is a duplicate of a locked device or a unique device
16:31:08 <TheJulia> yeah
16:31:18 * TheJulia gets out the "there be dragons here" sign
16:31:22 <mjturek> fair enough :)
16:31:39 * TheJulia makes a nice concrete foundation for the sign, so it stays around.
16:31:57 <mjturek> hahaha, okay I'm cool on this then. I'll reply to Anton
16:32:23 <TheJulia> I'm feeling like ipxe may also have just added multipath iscsi to their docs....
16:32:44 <mjturek> as in, they added it recently?
16:32:56 <TheJulia> yes
16:33:02 <mjturek> interesting
16:33:03 * dtantsur prepares his usual complaint about ipxe versioning and releases
16:33:23 * TheJulia gives dtantsur the ipxe soap box
16:33:41 <dtantsur> :D
16:33:47 <mjturek> last modified 2017/03/26 so quite possible :-|
16:34:04 <TheJulia> Yeah, I think I wrote that code late last year, so yeah
16:34:09 <TheJulia> :(
16:34:28 <TheJulia> Anyway, we're in a catch-22 there, but the OS should be responsible for MPIO path management
16:34:42 <dtantsur> I hope we won't start depending on the iPXE version from this march :)
16:34:46 <TheJulia> The device should get locked anyway along one path
16:34:48 <mjturek> yeaaaah, sounds reasonable to me
16:34:51 <mjturek> :)
16:35:07 <TheJulia> dtantsur: +^one_million
16:36:03 <TheJulia> If the device is not locked, then the OS is either doing something stupid, or the storage is doing something stupid, and chaos can then ensue regardless because it would have ensued on its own anyway.
16:36:18 <TheJulia> (as in, in no way shape or form mpio safe)
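(For reference, a minimal iPXE sketch of the two shapes discussed above. Addresses, IQNs, and drive numbers are illustrative, not from the actual template linked at 16:27.)

    #!ipxe
    # Per-target form (what the patch does today): each target/path is
    # hooked as its own SAN drive, and the OS reconciles duplicates.
    sanhook --drive 0x80 iscsi:192.0.2.10::3260:1:iqn.2017-05.org.openstack:volume-1
    sanhook --drive 0x81 iscsi:192.0.2.11::3260:1:iqn.2017-05.org.openstack:volume-2
    sanboot --no-describe --drive 0x80

    # Multipath form from the (recently updated) iPXE docs: both paths
    # to the *same* volume passed in a single call, e.g.:
    # sanboot iscsi:192.0.2.10::3260:1:iqn.2017-05.org.openstack:volume-1 iscsi:192.0.2.11::3260:1:iqn.2017-05.org.openstack:volume-1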
16:36:50 <TheJulia> I _think_ we're good
16:37:36 <TheJulia> Anything else?
16:38:05 <mjturek> I'm good
16:38:12 <dtantsur> ditto
16:38:13 <TheJulia> #topic Open Discussion
16:38:19 <TheJulia> Anyone have anything for open discussion?
16:39:06 <hshiina> i noticed cinder doesn't accept metadata with dict
16:39:21 <hshiina> when ironic sets metadata in attach/detach
16:40:02 <hshiina> we need to change the metadata not to use a dict
16:40:10 <TheJulia> to just be a string?
16:40:11 <mjturek> hshiina: what should it be using?
16:40:31 <hshiina> TheJulia, yes
16:41:11 <dtantsur> hshiina: I think you mean a non-nested dict, no? /me tries to remember the patch
16:41:39 <hshiina> dtantsur, yes, non-nested
16:41:50 <mjturek> hshiina: https://review.openstack.org/#/c/413324/20/ironic/drivers/modules/pxe.py@291 referring to the connector?
16:42:43 <TheJulia> mjturek: I think he is referring to the metadata post to the cinder api upon attachment
16:43:05 <TheJulia> of the extra metadata information, which I thought was listed as a dictionary :(
16:43:18 <TheJulia> Easy enough to look it up and double check cinder's api code
16:43:56 <mjturek> sorry wrong link
16:43:59 <mjturek> https://review.openstack.org/#/c/366197/47/ironic/drivers/modules/storage/cinder.py@327
16:45:19 <hshiina> ironic creates metadata here: https://github.com/openstack/ironic/blob/767ed8002045defa0a3e982c22a0c4856796df7e/ironic/common/cinder.py#L425
16:45:40 <mjturek> ohhh got it
16:45:41 <mjturek> thanks
16:46:24 <TheJulia> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/common/cinder.py#n312
16:47:50 <TheJulia> https://developer.openstack.org/api-ref/block-storage/v3/index.html?expanded=create-metadata-for-volume-detail#create-metadata-for-volume
16:47:58 <TheJulia> so a string of JSON, I guess....
16:50:05 <TheJulia> hshiina: Could you create a patch to fix it for your testing?
16:50:26 <TheJulia> It seems super simple, since we have the utilities to do it
16:51:50 <hshiina> TheJulia, i created a patch. https://review.openstack.org/#/c/467930/
16:52:13 <hshiina> This issue is not critical; a warning is always logged.
16:53:03 <hshiina> i reported it to cinder: https://bugs.launchpad.net/cinder/+bug/1690363
16:53:04 <openstack> Launchpad bug 1690363 in Cinder "Nested metadata to volume causes an internal server error" [Undecided,Fix released] - Assigned to luqitao (qtlu)
16:53:42 <hshiina> only the response code was fixed; the cinder API still doesn't accept nested metadata.
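(For reference, a hedged sketch of the kind of flattening hshiina's patch needs: serializing any nested values to JSON strings before posting to the volume-metadata API, since cinder only accepts flat string values per bug 1690363. The helper name is illustrative, not the actual patch.)

    import json

    def flatten_metadata(metadata):
        # Cinder's volume-metadata API rejects nested dicts, so
        # serialize any non-string value to a JSON string.
        # (On Python 2, six.string_types would replace str here.)
        return {key: value if isinstance(value, str) else json.dumps(value)
                for key, value in metadata.items()}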
16:53:59 <TheJulia> okay
16:54:04 <TheJulia> That makes sense then I guess
16:54:08 <TheJulia> Thank you!
16:54:16 <mjturek> cool
16:54:23 <TheJulia> Well if there is nothing else, we have six minutes left...
16:55:28 <TheJulia> Well, I'm taking the silence as it is time to call the meeting to an end
16:55:31 <TheJulia> Thank you everyone!
16:55:36 <mjturek> thanks all!
16:55:38 <mjturek> ttyl
16:55:41 <hshiina> thank you
16:55:46 <TheJulia> #endmeeting