16:00:06 <ildikov> #startmeeting cinder-nova-api-changes
16:00:07 <openstack> Meeting started Thu Jan 11 16:00:06 2018 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
16:00:18 <mriedem> o/
16:00:18 <ildikov> johnthetubaguy jaypipes e0ne jgriffith hemna mriedem patrickeast smcginnis diablo_rojo xyang xyang1 raj_singh lyarwood jungleboyj stvnoyes
16:00:30 * johnthetubaguy lurks
16:00:50 <stvnoyes> o/
16:01:01 <ildikov> johnthetubaguy: you can lurk if you're reviewing in the meantime :)
16:01:07 <jungleboyj> @!
16:01:07 <_pewp_> jungleboyj ( *՞ਊ՞*)ノ
16:01:40 <ildikov> so, thanks to mriedem among others, we have a thorough tracking etherpad
16:01:43 <ildikov> #link https://etherpad.openstack.org/p/multi-attach-volume-queens
16:02:19 <ildikov> we have the Nova side patches almost good to go
16:03:14 <ildikov> we've just talked about swap in the Nova channel as that does not use the attach code in block_device.py hence we need to pass the multiattach info separately in that case
16:03:27 <ildikov> I believe we agreed it's an easy fix
16:04:21 <ildikov> otherwise this is the top of the series in Nova: https://review.openstack.org/#/c/271047/
16:04:45 <mriedem> i've also got the nova-multiattach ci job patch here https://review.openstack.org/#/c/532689/
16:04:48 <mriedem> but annoyingly,
16:04:54 <mriedem> i can see it queued up in the experimental queue in zuul,
16:05:02 <mriedem> but it doesn't seem to run, and zuul didn't post results on it last night
16:05:02 <mriedem> i don't know if that's a problem in the patch, or with zuul
16:05:28 <mriedem> i'm assuming the latter because the gate has been borked all week
16:05:34 <ildikov> mriedem: if you saw the queues they are in rough shape in general
16:05:49 <mriedem> yes the constant restarts don't help, but it is what it is,
16:05:50 <mriedem> thanks intel
16:06:05 <ildikov> yeah :/
16:06:38 <ildikov> I did a recheck as well and then realized it won't go anywhere today...
16:06:59 <ildikov> mriedem: thanks for the CI job!
16:07:21 <mriedem> stvnoyes: were you still working on adding some tempest patches on top of https://review.openstack.org/#/c/266605/ for resize?
16:07:44 <mriedem> the two tempest patches below that are approved, so that's nice
16:07:58 <stvnoyes> yes, that's what I'm doing today
16:08:25 <mriedem> ok i'll see if the qa people really need me to split up https://review.openstack.org/#/c/266605/25
16:09:05 <mriedem> re: cinder patches, i commented on https://review.openstack.org/#/c/531569 last night
16:09:08 <ildikov> it's not that big...
16:09:09 <mriedem> looks like jgriffith is going to update that
16:09:34 <ildikov> he has a couple of related patches; jgriffith is working on a series of Cinder changes as well: https://review.openstack.org/#/q/topic:bp/multi-attach-v3-attach+(status:open+OR+status:merged)
16:09:54 <mriedem> so my comments on the policy change were,
16:10:07 <mriedem> 1. should be the same as volume create so non-admins can create a multiattach volume
16:10:10 <mriedem> otherwise the tempest patch won't work
16:10:29 <mriedem> i mean we could hack devstack, but it just seems like it should be open to start and ops can disable if they don't want to support it when upgrading
16:10:44 <mriedem> and 2. the nova spec said we'd have a cinder policy for disabling multiattach+bootable for the bfv case,
16:11:06 <mriedem> i think that becomes just a check in the set_bootable volume action API or whatever
16:11:38 <ildikov> yeah, we talked about the bootable case back at the time of the Nova spec
16:12:00 <mriedem> https://github.com/openstack/cinder/blob/master/cinder/api/contrib/volume_actions.py#L362
16:12:09 <mriedem> it looks like there is no existing policy check on creating bootable volumes
16:12:48 <mriedem> but it would be easy to add a policy check there for just, if you're setting bootable=True and volume is multiattach=True, check the policy
16:12:54 <jungleboyj> Yes, we agreed it wasn't supported for BFV.
16:12:55 <mriedem> and default the policy to allow it
16:13:06 <mriedem> jungleboyj: no we did agree we'd support bfv
16:13:21 <mriedem> jungleboyj: you might be thinking of pike when we said we wouldn't :)
16:13:34 <ildikov> jungleboyj: and we add policy to turn it off
16:13:35 <jungleboyj> mriedem:  What?  I thought we said no multi-attach for BFV?
16:13:47 <jungleboyj> ildikov: Oh, ok.  I am just going to be quiet now.
16:13:50 <mriedem> we said nova wouldn't do any bfv checks and just leave it to cinder policy
16:13:54 <jungleboyj> You guys are the experts.
16:14:17 <mriedem> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/cinder-volume-multi-attach.html#rest-api-impact
16:14:19 <ildikov> jungleboyj: mriedem is right, we said if someone wants to properly shoot themselves in the foot, we gladly support them :)
16:14:22 <mriedem> "When we enable the feature we will have a ‘multiattach’ policy to enable or disable the operation entirely on the Cinder side as noted above. Read/Only policy is a future work item and out of the scope of this spec."
16:14:34 <mriedem> "The following policy rules will be added to Cinder: Enable/Disable multiattach=True; Enable/Disable multiattach=True + bootable=True"
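(A rough sketch of the check mriedem describes above: in the os-set_bootable volume action, gate bootable=True on a multiattach volume behind a policy rule that defaults to allow, so operators can disable the combination on upgrade. The names here — the "volume:multiattach_bootable" rule, PolicyNotAuthorized, the dict-based volume — are illustrative stand-ins, not the actual Cinder identifiers.)

```python
class PolicyNotAuthorized(Exception):
    """Raised when the request context fails the policy check."""


# Default-open policy, per the discussion: allow bootable multiattach
# volumes out of the box; ops flip this to False if they don't want
# to support the combination.
POLICIES = {"volume:multiattach_bootable": True}


def check_policy(context, action):
    """Minimal stand-in for an oslo.policy-style enforcement call."""
    if not POLICIES.get(action, False):
        raise PolicyNotAuthorized(action)


def set_bootable(context, volume, bootable):
    """Volume action handler sketch.

    Only the multiattach=True + bootable=True combination needs the
    extra policy gate; every other case behaves exactly as before.
    """
    if bootable and volume.get("multiattach"):
        check_policy(context, "volume:multiattach_bootable")
    volume["bootable"] = bootable
    return volume
```

(With the default policy this is a no-op for existing callers; only a deployment that disables the rule sees the new rejection path.)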
16:15:03 <jungleboyj> mriedem:  Ok, that makes sense.
16:15:53 <ildikov> and the bootable part is currently missing from the proposed changes
16:17:15 <mriedem> yeah comments are in the cinder patch, so i think we're good
16:17:29 <ildikov> mriedem: do you see anything else missing?
16:18:06 <ildikov> mriedem: or should I continue begging for reviews next week and fix whatever needs to be fixed, while you're away from the office and workaholism?
16:18:23 <mriedem> swap volume is the only thing that came to me late last night
16:18:30 <mriedem> and the policy thing
16:18:39 <mriedem> i won't know about other stuff unless / until it comes to me
16:18:57 <mriedem> i'm hoping that johnthetubaguy and gibi could get a review pass on the nova patches yet this week,
16:19:00 <ildikov> I remembered the policy one, just wasn't sure what's in the queue jgriffith hasn't uploaded yet
16:19:00 <mriedem> before i have to leave
16:19:15 <ildikov> gibi started already, will continue tomorrow
16:19:24 * johnthetubaguy nods that he hopes to do that ASAP
16:19:39 <ildikov> he also helps with getting some use cases from the E/// folks who need this for the PTG
16:19:44 <johnthetubaguy> bit distracted with some ironic pieces, but will carve out time for that
16:20:00 <ildikov> johnthetubaguy: let me know if I should talk to your boss about priorities :)
16:20:46 <johnthetubaguy> :)
16:20:51 <ildikov> mriedem: I guess you will whip up the swap patch
16:20:59 <mriedem> yeah
16:21:09 <ildikov> cool, thanks
16:21:36 <ildikov> mriedem: as for Tempest tests, are we good on that front including what stvnoyes is working on or there are more cases you would like to see covered?
16:22:04 <stvnoyes> as I'm adding the resize test, I'll see if I notice anything missing...
16:22:09 <mriedem> ildikov: swap volume needs to be covered as well
16:22:19 <stvnoyes> ok, I can do that next
16:22:22 <mriedem> just, whatever the todos are in the tempest patch
16:22:46 <ildikov> mriedem: stvnoyes: ok, that sounds good
16:24:18 <ildikov> jungleboyj: when is the freeze for the cinderclient?
16:24:36 <jungleboyj> Let me double check.  I think we have 2 weeks.
16:24:42 <ildikov> I mean the lib freeze because of which we had to wait a month last time
16:24:43 <smcginnis> Two weeks.
16:24:57 <jungleboyj> Yeah, 1/26
16:24:59 <ildikov> ok, sounds tight but doable
16:25:05 <smcginnis> os-brick next Thursday, python-cinderclient the following Thursday.
16:25:26 <mriedem> 1/25 is cinderclient freeze
16:25:27 <mriedem> and FF
16:25:59 <ildikov> Thursday resonated better with me, but oh well
16:26:12 <ildikov> and that's the 25th, right
16:26:46 <ildikov> this is just one of those days when there's not enough caffeine... :/
16:27:18 <ildikov> ok, I think we're good for today?
16:27:23 <mriedem> so let's end this meeting so you can get some coffee
16:27:44 <ildikov> mriedem: +1 :)
16:27:57 <jungleboyj> ++ for coffee
16:28:25 <ildikov> so if anyone finds anything concerning, keep in touch on the channel; plus here's the tracking etherpad: https://etherpad.openstack.org/p/multi-attach-volume-queens
16:28:32 <ildikov> thanks everyone!
16:28:46 <ildikov> mriedem: have a great vacation next week!
16:28:49 <jungleboyj> ildikov:  Thank you and thanks to everyone for continuing to work this!
16:28:50 <mriedem> thanks
16:29:03 <jungleboyj> mriedem:  Enjoy.  I hope you get warmer weather than I did in RTP.
16:29:14 <ildikov> :)
16:29:15 <mriedem> puerto vallarta
16:29:21 <mriedem> ceviches and sand
16:29:27 <jungleboyj> mriedem: Nice!
16:29:47 <ildikov> that sounds pretty awesome! :)
16:29:58 <ildikov> and with that
16:30:02 <ildikov> #endmeeting