16:00:26 <jungleboyj> #startmeeting Cinder
16:00:27 <openstack> Meeting started Wed Oct 10 16:00:26 2018 UTC and is due to finish in 60 minutes.  The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:30 <openstack> The meeting name has been set to 'cinder'
16:00:47 <whoami-rajat> Hi
16:00:51 <smcginnis> o/
16:00:55 <jungleboyj> @!
16:00:55 <_pewp_> jungleboyj |。・ω・|ノ
16:00:56 <xyang> hi
16:00:56 <yikun> o/
16:01:04 <LiangFang> hi
16:01:05 <e0ne> hi
16:01:30 <ganso> hello
16:01:34 <walshh_> hi
16:01:37 <jungleboyj> Give people a minute to join up.
16:01:47 <tpsilva> hello
16:01:50 <rosmaita> o/
16:02:25 <jungleboyj> Hope no one is in Florida.
16:03:02 <jungleboyj> Ok.  Let's get started.
16:03:06 <tbarron> hi
16:03:13 <jungleboyj> #topic announcements
16:03:55 <jungleboyj> I didn't get any concerns with adding Gorka as a stable core.  So, we will be doing that.
16:04:07 <jungleboyj> Thank you to all of you that had input.
16:04:10 <e0ne> geguileo_PTO: my congratulations!
16:04:21 <LiangFang> congratulations!
16:04:24 <jungleboyj> geguileo_PTO:  Thank you for continuing to help out!
16:04:35 <enriquetaso> Congratulations!
16:05:10 <jungleboyj> enriquetaso:  Hey!  Thanks for joining!
16:05:37 <enriquetaso> Hi jungleboyj ! how are u?
16:05:55 <jungleboyj> Also, just a friendly reminder to put any topics you have for the mid-cycle into the etherpad.
16:06:00 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning
16:06:21 <jungleboyj> enriquetaso: Good.  Thank you.
16:06:46 <smcginnis> Looking ahead, walshh_'s topic might be a good one for the midcycle.
16:07:25 <LiangFang> yes, vmware support this
16:07:27 <jungleboyj> smcginnis:  ++
16:07:45 <jungleboyj> I have been putting topics in there as well.
16:08:16 <jungleboyj> Some other topics.
16:08:52 <jungleboyj> People should have seen that I did os-brick releases for the current and stable releases.
16:09:11 <jungleboyj> I also did a python-cinderclient release.
16:10:08 <jungleboyj> Hopefully no one has seen issues there.
16:10:22 <smcginnis> Nothing so far.
16:10:31 <jungleboyj> Good.
16:11:21 <jungleboyj> Ok.  I think that was all I had for announcements.
16:11:56 <jungleboyj> #topic Add volume re-image API
16:12:04 <jungleboyj> yikun:
16:12:15 <yikun> https://review.openstack.org/#/c/605317/
16:12:15 <yikun> ^^ yes, it’s the re-image spec, now ready for review, and you could take a look when you have time.
16:12:25 <yikun> https://review.openstack.org/#/c/606346/
16:12:31 <yikun> Also, there is a POC implementation on ^; the core implementation is now OK. You could also take a look if you are interested.
16:12:37 <jungleboyj> #link https://review.openstack.org/#/c/605317/
16:12:56 <jungleboyj> #link https://review.openstack.org/#/c/606346/
16:14:03 <jungleboyj> That is great that we have PoC together already.
16:14:44 <yikun> :), and also have some discussion on "force" parameter
16:14:48 <yikun> #link http://lists.openstack.org/pipermail/openstack-dev/2018-October/135509.html
16:15:11 <jungleboyj> yikun:  Yes.  I feel like we have landed on the answer there unless there were more concerns raised since?
16:16:07 <yikun> yes, maybe "ignore_attached" is the answer if there's no other disagreement. :)
16:16:52 <jungleboyj> yikun:  Trying to catch up here.
16:18:07 <yikun> anyway, if you have any question or suggestion, feel free to leave the comments in spec. : ), thanks
16:18:10 <jungleboyj> yikun:  Where did the ignore_attached come from?
16:18:16 <jungleboyj> I didn't see that on the mailing list.
16:18:37 <yikun> here, https://review.openstack.org/#/c/605317/6/specs/stein/add-volume-re-image-api.rst@52
16:18:47 <yikun> suggestion from mriedem.
16:19:13 <jungleboyj> yikun:  Ah, ok.  That makes more sense.
16:19:20 <jungleboyj> Ok, I need to read through that spec.
16:19:30 <smcginnis> Personally I preferred "force", but I can see where the concern on that name comes from.
16:19:31 <yikun> OK, thanks. :)
16:19:33 <jungleboyj> So, team, please look at the patch and spec and make comments.
16:19:48 <jungleboyj> #action Cinder Team to review the Spec and Patch.
16:20:10 <jungleboyj> Think we can move to the next topic yikun ?
16:20:20 <yikun> yep, sure~
16:20:42 <jungleboyj> Ok.  Good.
16:21:14 <jungleboyj> #topic Support for Revert to any Snapshot
16:21:34 <jungleboyj> #link https://blueprints.launchpad.net/cinder/+spec/support-revert-to-any-snapshot
16:21:58 <jungleboyj> walshh_:
16:22:03 <jungleboyj> All yours.
16:22:26 <walshh_> just wondering if this is going to be implemented in the future?
16:22:36 <e0ne> how many backends except ceph support this feature?
16:23:10 <walshh_> I can only speak for VMAX, it does support it
16:23:52 <smcginnis> I think a lot support it, but we still have the concern from Barcelona that prompted the current design that some backends "lose" later snapshots once they've gone to earlier ones.
16:23:57 <jungleboyj> Anyone else know of backends that support it?
16:24:17 <e0ne> IMO, it could be useful if more backends supported this. I don't like features only for 2-3 backends
16:24:28 <smcginnis> I don't think the issue was support for doing it, it was the very high potential for data loss.
16:24:45 <e0ne> smcginnis: good point
16:25:00 <jungleboyj> smcginnis:  I am remembering that now.
16:25:11 <jungleboyj> We would have to have a safe way to implement this for everyone.
16:25:27 <e0ne> +1
16:26:06 <jungleboyj> walshh_:  Is this something that your team would potentially implement?
16:26:26 <walshh_> we can certainly look into it, wouldn't mind the challenge
16:27:26 <jungleboyj> This one would definitely require a spec.
16:27:52 <jungleboyj> It would be one of those functions that drivers would have to opt-in for to ensure that they can verify that data loss won't happen.
16:28:35 <smcginnis> Do we really want to introduce something like that?
16:28:57 <walshh_> is the data loss risk not also there for revert to last snapshot?
16:29:15 <jungleboyj> smcginnis:  Not really.  We have kind of stabilized things away from those types of features.
16:29:32 <smcginnis> No, the data loss risk is in losing snapshots, not in the delta of data between now and when the snapshot was taken.
16:29:50 <smcginnis> And how Cinder can know if that's the case and accurately reflect that in our database.
16:30:05 <walshh_> ok, thanks
16:30:24 <smcginnis> "Let's go back 5 snaps. Oops, that was too far, let's go 4. Where'd my snap go?" - that kinda thing.
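The backend behavior smcginnis describes can be sketched with a toy model (hypothetical code, not any real driver): reverting to an earlier snapshot silently drops every snapshot taken after it.

```python
# Toy model (hypothetical, not real Cinder or driver code) of a backend
# where reverting to an earlier snapshot discards all later snapshots.

class HypotheticalBackendVolume:
    def __init__(self, data):
        self.data = data
        self.snapshots = []            # ordered oldest -> newest

    def snapshot(self, name):
        self.snapshots.append((name, self.data))

    def revert_to(self, name):
        idx = next(i for i, (n, _) in enumerate(self.snapshots) if n == name)
        self.data = self.snapshots[idx][1]
        # The problematic part: everything newer than `name` is gone.
        self.snapshots = self.snapshots[:idx + 1]


vol = HypotheticalBackendVolume("v1")
for ver, snap in [("v1", "snap1"), ("v2", "snap2"), ("v3", "snap3")]:
    vol.data = ver
    vol.snapshot(snap)

vol.revert_to("snap1")
print([n for n, _ in vol.snapshots])   # -> ['snap1']; snap2/snap3 are lost
```

If Cinder's database still listed snap2 and snap3 after such a revert, it would no longer reflect reality on the backend, which is the bookkeeping problem raised above.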
16:31:21 <jungleboyj> Is that a limitation in Cinder or in the way that the backends handle the snaps?
16:31:48 <smcginnis> The backends.
16:32:08 <jungleboyj> Ok.  That is concerning.
16:32:19 <smcginnis> I think someone had a specific example of a backend that behaves that way at the design summit, but I don't recall which it was.
16:33:06 <smcginnis> We can try to get input on whether that is still the case. Especially since there are now fewer backends than before.
16:33:24 <jungleboyj> So, two approaches.  We can say 'No'.  Or we can make people aware that there has been a request here and then  talk about it at the Mid-Cycle.
16:33:37 <jungleboyj> smcginnis:  ++
16:34:45 <jungleboyj> So that would be a vote for option two.
16:34:50 <jungleboyj> Any other votes?
16:35:19 <walshh_> +1
16:35:27 <jungleboyj> e0ne:  ?
16:35:32 <rosmaita> midcycle option sounds good to me
16:35:53 <jungleboyj> walshh_:  Your team can join remotely for the mid-cycle?
16:35:59 <jungleboyj> We will be a little closer to your timezone.
16:36:02 <walshh_> yes, we hope to
16:36:39 <jungleboyj> Ok.  Let's do that.
16:36:41 <e0ne> It would be good to have more feedback from driver maintainers.
16:36:52 <jungleboyj> e0ne:  Agreed.
16:37:18 <jungleboyj> So, walshh_  can you add this to the etherpad for the mid-cycle and also send a note about it to the mailing list?
16:37:54 <walshh_> sure
16:38:04 <jungleboyj> walshh_:  Great.
16:38:26 <jungleboyj> #action walshh_  to add topic of reverting to any snapshot to mid-cycle etherpad.
16:38:47 <jungleboyj> #action walshh_  to send note to the mailing list about this in hopes of getting attention from other driver maintainers.
16:39:06 <jungleboyj> walshh_:  You ok with moving on?
16:39:18 <walshh_> yes, I am...thanks
16:39:37 <jungleboyj> walshh_:  Thanks.
16:40:02 <jungleboyj> #topic bug on migration of attached volumes.
16:40:10 <jungleboyj> tpsilva: Your floor.
16:40:15 <tpsilva> okay, thanks
16:41:05 <tpsilva> so I came across this last week... when testing migration I noticed a rather weird behavior and would like to confirm if we have an issue on cinder or nova
16:41:06 <e0ne> sorry, need to disconnect now. see you next week
16:41:53 <tpsilva> migrating a detached volume works fine, but the migration of an attached volume does not seem to finish, ever
16:41:54 <jungleboyj> tpsilva:  Ok.
16:42:15 <smcginnis> Thanks e0
16:42:22 <tpsilva> cinder calls nova, nova creates the new volume, moves the data, attaches it to the instance and then it should call back cinder to delete the volume
16:42:38 <tpsilva> but the old volume never gets deleted
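The hand-off tpsilva describes can be mirrored in a toy sketch (none of these names are real Cinder or Nova APIs), with the final delete being the step that reportedly never happens:

```python
# Toy sketch of the attached-volume migration hand-off. All names are
# hypothetical; this only mirrors the sequence described above.

class ToyVolume:
    def __init__(self, vol_id):
        self.id = vol_id
        self.migration_status = None

def migrate_attached(volumes, src):
    src.migration_status = "migrating"
    new = ToyVolume(src.id + "-new")   # cinder creates the new volume
    volumes.append(new)
    # nova moves the data and attaches `new` to the instance, then
    # calls back into cinder to complete the migration...
    new.migration_status = "success"
    volumes.remove(src)                # ...which should delete the old volume
    return new                         # (the step that reportedly never runs)

vols = [ToyVolume("vol-1")]
migrate_attached(vols, vols[0])
print([v.id for v in vols])            # -> ['vol-1-new']
```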
16:42:45 <jungleboyj> tpsilva:  Ok.
16:42:57 <tpsilva> I can see the call on nova code, nothing's apparently wrong there
16:42:58 <whoami-rajat> tpsilva: can you share the bug link? would be helpful.
16:43:06 <jungleboyj> That functionality is quite new so wouldn't be surprised if there are issues.
16:43:14 <tpsilva> jungleboyj: oh, is it?
16:43:23 <tpsilva> on both sides? nova and cinder?
16:43:30 <jungleboyj> I thought it was relatively new.
16:43:43 <jungleboyj> Maybe I am thinking of attached extend though.
16:43:53 <tpsilva> whoami-rajat: didn't create it... I didn't know how it should work or if it's indeed a bug, so I wanted to confirm it first
16:44:16 <ganso> jungleboyj: I remember seeing it in Queens, and the same bug exists in Queens and master, so it appears it hasn't been thoroughly tested
16:44:37 <tpsilva> maybe there are some issues with the status of the old volume, or the migration_status that prevents it from being deleted
16:44:45 <tpsilva> but I didn't dig too deep trying to debug it
16:44:53 <jungleboyj> ganso:  Do you know of a bug for it?
16:45:12 <tpsilva> jungleboyj: we tested it on queens, rocky and master
16:45:23 <ganso> jungleboyj: I haven't seen one logged
16:45:41 <tpsilva> so I'll log the bug then
16:45:52 <jungleboyj> tpsilva:  Yes, I think that is what needs to be done.
16:46:03 <tpsilva> alright
16:46:04 <jungleboyj> If you have specific steps for recreating that would be helpful.
16:46:12 <jungleboyj> tpsilva: Which backend are you using?
16:46:22 <tpsilva> tested on NetApp and LVM
16:46:31 <tpsilva> ah, interesting part that I forgot
16:46:35 <jungleboyj> Same result either way?
16:46:40 <tpsilva> jungleboyj: yep
16:46:47 <tpsilva> but, retyping works fine, which is odd
16:47:19 <jungleboyj> tpsilva:  That isn't totally surprising.
16:47:25 <jungleboyj> Was it a retype without migration?
16:47:33 <tpsilva> with migration
16:47:43 <ganso> and attached
16:47:46 <tpsilva> yep
16:47:53 <jungleboyj> Now, that is interesting.
16:48:47 <tpsilva> alright, I'm creating the LP bug
16:48:47 <jungleboyj> So, there must be some small thing in the last step that is lost with migration only.
16:49:21 <smcginnis> There have been changes on the nova side for the new attach flow and multiattach. Maybe something got missed there.
16:49:54 <jungleboyj> #action tpsilva to open an LP bug.
16:50:11 <jungleboyj> tpsilva: Can you add the bug you open into the notes from the meeting please?
16:50:17 <tpsilva> will do
16:51:20 <jungleboyj> tpsilva:  Thank you.
16:51:27 <jungleboyj> Appreciate you bringing this up as well.
16:51:44 <tpsilva> jungleboyj: thanks
16:52:09 <jungleboyj> tpsilva: That was quick.  Thanks.
16:52:51 <jungleboyj> Ok.  So that covers that part of the agenda.
16:53:11 <jungleboyj> #topic Bug Triage
16:53:28 <jungleboyj> whoami-rajat:  Any updates on the bugs in the list?
16:54:03 <jungleboyj> We have a few minutes left.
16:54:15 <jungleboyj> Or if anyone has other topics.
16:54:27 * lbragstad has something policy related but will wait if needed
16:54:38 <jungleboyj> lbragstad:  You win.  Please go.
16:54:51 <lbragstad> first - thanks for the reviews on https://review.openstack.org/#/c/602489/
16:54:58 <whoami-rajat> geguileo_PTO: hasn't been available for the past week, so no updates. Also no updates on the other bugs.
16:55:12 <jungleboyj> whoami-rajat:  Ok.  Thanks.
16:55:21 <lbragstad> ^ that's essentially a template for what we were talking about on Thursday at the PTG about having safer ways to upgrade policies without regression
16:56:02 <lbragstad> i wanted to prove that the tests actually work when changing the policy - which is what https://review.openstack.org/#/c/604115/1 does
16:56:31 <lbragstad> and that's when i noticed https://review.openstack.org/#/c/604115/1/cinder/tests/unit/policy.json
16:57:01 <jungleboyj> #link https://review.openstack.org/#/c/602489/
16:57:05 <lbragstad> i can pull that out into its own cinder change if folks think it's useful
16:57:12 <jungleboyj> #link https://review.openstack.org/#/c/604115/1
16:57:51 <jungleboyj> lbragstad:  Oh.  That is good.
16:57:57 <smcginnis> lbragstad: You mean get rid of the unit test policy file?
16:57:59 <lbragstad> ideally - the more changes like https://review.openstack.org/#/c/602489/4 land, the more policies can be removed from https://review.openstack.org/#/c/604115/1
16:58:05 <lbragstad> correct
16:58:15 <jungleboyj> lbragstad:  That probably should have been done back when we moved policies into code.
16:58:18 <jungleboyj> smcginnis:  Right?
16:58:20 <lbragstad> since 1.) you have defaults in code and 2.) the defaults are being tested
16:58:27 <smcginnis> lbragstad: That would be great!
16:58:35 <jungleboyj> smcginnis:  ++
16:58:47 <lbragstad> it's cool that it wasn't
16:59:10 <lbragstad> all of this is a lot of work, so small, iterative changes are nice
16:59:27 <jungleboyj> lbragstad:  ++
16:59:38 <jungleboyj> Bite-size changes that we can try to digest are good.
16:59:39 <lbragstad> but i can propose the removal of that policy from the testing policy file
17:00:04 <jungleboyj> Yeah, let's start removing those overrides that we can.  I think that is good.
17:00:09 <lbragstad> otherwise https://review.openstack.org/#/c/604115/1/cinder/policies/volumes.py should show how those tests protect your API
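The pattern lbragstad describes can be illustrated with a toy enforcer (plain Python, not oslo.policy; the rule name is made up): once defaults are registered in code, tests can exercise those defaults directly, so entries in the unit-test policy.json override file become unnecessary.

```python
# Toy illustration of in-code policy defaults vs. a policy.json override.
# Hypothetical rule name; real defaults live under cinder/policies/.

DEFAULTS = {
    "volume:example_action":
        lambda creds, target: creds["project_id"] == target["project_id"],
}

def enforce(rule, target, creds, file_rules=None):
    """Prefer a policy-file override; otherwise fall back to the in-code
    default - so with no override file, the default itself is what's tested."""
    check = (file_rules or {}).get(rule, DEFAULTS[rule])
    return check(creds, target)

# The owner passes the default check; another project does not.
print(enforce("volume:example_action",
              {"project_id": "p1"}, {"project_id": "p1"}))   # -> True
print(enforce("volume:example_action",
              {"project_id": "p1"}, {"project_id": "p2"}))   # -> False
```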
17:00:14 <jungleboyj> lbragstad:  Thank you for the work you are doing there!
17:00:25 <lbragstad> yep - anytime.. let me know if you have questions
17:00:34 <jungleboyj> lbragstad: Will do.
17:00:35 <lbragstad> or if anyone starts porting other tests and wants eyes on it
17:00:49 <jungleboyj> Ok.  We are at the top of the hour and need to wrap up.
17:01:01 <jungleboyj> Thank you everyone for meeting!
17:01:07 <jungleboyj> #endmeeting