16:00:26 #startmeeting Cinder
16:00:27 Meeting started Wed Oct 10 16:00:26 2018 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:30 The meeting name has been set to 'cinder'
16:00:47 Hi
16:00:51 o/
16:00:55 @!
16:00:55 <_pewp_> jungleboyj |。・ω・|ノ
16:00:56 hi
16:00:56 o/
16:01:04 hi
16:01:05 hi
16:01:30 hello
16:01:34 hi
16:01:37 Give people a minute to join up.
16:01:47 hello
16:01:50 o/
16:02:25 Hope no one is in Florida.
16:03:02 Ok. Let's get started.
16:03:06 hi
16:03:13 #topic announcements
16:03:55 I didn't hear any concerns about adding Gorka as a stable core, so we will be doing that.
16:04:07 Thank you to all of you who had input.
16:04:10 geguileo_PTO: my congratulations!
16:04:21 congratulations!
16:04:24 geguileo_PTO: Thank you for continuing to help out!
16:04:35 Congratulations!
16:05:10 enriquetaso: Hey! Thanks for joining!
16:05:37 Hi jungleboyj! How are you?
16:05:55 Also, just a friendly reminder to put any topics you have for the mid-cycle into the etherpad.
16:06:00 #link https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning
16:06:21 enriquetaso: Good. Thank you.
16:06:46 Looking ahead, walshh_'s topic might be a good one for the mid-cycle.
16:07:25 yes, VMware supports this
16:07:27 smcginnis: ++
16:07:45 I have been putting topics in there as well.
16:08:16 Some other topics.
16:08:52 People should have seen that I did os-brick releases for the current and stable branches.
16:09:11 I also did a python-cinderclient release.
16:10:08 Hopefully no one has seen issues there.
16:10:22 Nothing so far.
16:10:31 Good.
16:11:21 Ok. I think that was all I had for announcements.
16:11:56 #topic Add volume re-image API
16:12:04 yikun:
16:12:15 https://review.openstack.org/#/c/605317/
16:12:15 ^^ yes, that's the re-image spec, now ready for review. Please take a look when you have time.
16:12:25 https://review.openstack.org/#/c/606346/
16:12:31 Also, there is a PoC implementation at ^; the core implementation is done now. Please take a look if you are interested.
16:12:37 #link https://review.openstack.org/#/c/605317/
16:12:56 #link https://review.openstack.org/#/c/606346/
16:14:03 It is great that we have a PoC together already.
16:14:44 :), and there has also been some discussion about the "force" parameter
16:14:48 #link http://lists.openstack.org/pipermail/openstack-dev/2018-October/135509.html
16:15:11 yikun: Yes. I feel like we have landed on the answer there, unless more concerns have been raised since?
16:16:07 yes, maybe "ignore_attached" is the answer if there is no other disagreement. :)
16:16:47 .
16:16:52 yikun: Trying to catch up here.
16:18:07 anyway, if you have any questions or suggestions, feel free to leave comments on the spec. :) thanks
16:18:10 yikun: Where did ignore_attached come from?
16:18:16 I didn't see that on the mailing list.
16:18:37 here: https://review.openstack.org/#/c/605317/6/specs/stein/add-volume-re-image-api.rst@52
16:18:47 a suggestion from mriedem.
16:19:13 yikun: Ah, ok. That makes more sense.
16:19:20 Ok, I need to read through that spec.
16:19:30 Personally I preferred "force", but I can see where the concern about that name comes from.
16:19:31 OK, thanks. :)
16:19:33 So, team, please look at the patch and spec and make comments.
16:19:48 #action Cinder team to review the spec and patch.
16:20:10 Think we can move to the next topic, yikun?
16:20:20 yep, sure~
16:20:42 Ok. Good.
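For context on the parameter debate above, here is a hypothetical sketch of the request body for the proposed re-image volume action. The action and field names ("os-reimage", "ignore_attached") follow the names used in the spec discussion but were not final at the time of this meeting; the spec under review is authoritative.

```python
# Hypothetical request body for the proposed re-image volume action.
# "os-reimage" and "ignore_attached" mirror the names discussed in the
# spec review; neither was final at the time of this meeting.
reimage_request = {
    "os-reimage": {
        # Image to write over the volume's current contents.
        "image_id": "71543ced-a8af-45b6-a5c4-a46282108a90",
        # The flag debated on the mailing list: originally "force",
        # with "ignore_attached" suggested by mriedem as a clearer name
        # for allowing re-image of an in-use volume.
        "ignore_attached": True,
    }
}
```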
16:21:14 #topic Support for Revert to any Snapshot
16:21:34 #link https://blueprints.launchpad.net/cinder/+spec/support-revert-to-any-snapshot
16:21:58 walshh_:
16:22:03 All yours.
16:22:26 just wondering if this is going to be implemented in the future?
16:22:36 how many backends besides Ceph support this feature?
16:23:10 I can only speak for VMAX; it does support it
16:23:52 I think a lot support it, but we still have the concern from Barcelona that prompted the current design: some backends "lose" later snapshots once they've reverted to earlier ones.
16:23:57 Anyone else know of backends that support it?
16:24:17 IMO, it could be useful if more backends supported this. I don't like features that only work for 2-3 backends.
16:24:28 I don't think the issue was support for doing it; it was the very high potential for data loss.
16:24:45 smcginnis: good point
16:25:00 smcginnis: I am remembering that now.
16:25:11 We would have to have a safe way to implement this for everyone.
16:25:27 +1
16:26:06 walshh_: Is this something that your team would potentially implement?
16:26:26 we can certainly look into it; wouldn't mind the challenge
16:27:26 This one would definitely require a spec.
16:27:52 It would be one of those functions that drivers would have to opt in to, so they can verify that data loss won't happen.
16:28:35 Do we really want to introduce something like that?
16:28:57 is the data loss risk not also there for revert to the last snapshot?
16:29:15 smcginnis: Not really. We have kind of stabilized things away from those types of features.
16:29:32 No, the data loss risk is in losing snapshots, not in the delta of data between now and when the snapshot was taken.
16:29:50 And how Cinder would know if that's the case and accurately reflect it in our database.
16:30:05 ok, thanks
16:30:24 "Let's go back 5 snaps. Oops, that was too far, let's go 4. Where'd my snap go?" - that kind of thing.
16:31:21 Is that a limitation in Cinder or in the way that the backends handle the snaps?
16:31:48 The backends.
16:32:08 Ok. That is concerning.
16:32:19 I think someone had a specific example of a backend that behaves that way at the design summit, but I don't recall which it was.
16:33:06 We can try to get input on whether that is still the case, especially since there are fewer backends now than before.
16:33:24 So, two approaches: we can say 'No', or we can make people aware that there has been a request here and then talk about it at the mid-cycle.
16:33:37 smcginnis: ++
16:34:45 So that would be a vote for option two.
16:34:50 Any other votes?
16:35:19 +1
16:35:27 e0ne: ?
16:35:32 the mid-cycle option sounds good to me
16:35:53 walshh_: Your team can join remotely for the mid-cycle?
16:35:59 We will be a little closer to your timezone.
16:36:02 yes, we hope to
16:36:39 Ok. Let's do that.
16:36:41 It would be good to have more feedback from driver maintainers.
16:36:52 e0ne: Agreed.
16:37:18 So, walshh_, can you add this to the etherpad for the mid-cycle and also send a note about it to the mailing list?
16:37:54 sure
16:38:04 walshh_: Great.
16:38:26 #action walshh_ to add the topic of reverting to any snapshot to the mid-cycle etherpad.
16:38:47 #action walshh_ to send a note to the mailing list about this in hopes of getting attention from other driver maintainers.
16:39:06 walshh_: You ok with moving on?
16:39:18 yes, I am... thanks
16:39:37 walshh_: Thanks.
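The data-loss concern discussed above is easiest to see with a toy model. The sketch below assumes the backend behavior described in the meeting (reverting to an older snapshot silently discards every newer one); it does not model any particular driver.

```python
# Toy model of the backend behavior described above: on some arrays,
# reverting a volume to an older snapshot discards every snapshot taken
# after it. This illustrates the concern; it is not real driver code.

class ToyBackendVolume:
    def __init__(self):
        self.snapshots = []  # ordered oldest -> newest

    def create_snapshot(self, name):
        self.snapshots.append(name)

    def revert_to(self, name):
        i = self.snapshots.index(name)
        lost = self.snapshots[i + 1:]
        # The backend silently drops the newer snapshots...
        self.snapshots = self.snapshots[:i + 1]
        # ...while Cinder's database would still believe they exist.
        return lost


vol = ToyBackendVolume()
for n in ("snap1", "snap2", "snap3", "snap4", "snap5"):
    vol.create_snapshot(n)

print(vol.revert_to("snap1"))  # ['snap2', 'snap3', 'snap4', 'snap5'] -- gone
```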
16:40:02 #topic bug on migration of attached volumes
16:40:10 tpsilva: Your floor.
16:40:15 okay, thanks
16:41:05 so I came across this last week... when testing migration I noticed rather weird behavior and would like to confirm whether we have an issue in Cinder or Nova
16:41:06 sorry, need to disconnect now. see you next week
16:41:53 migrating a detached volume works fine, but the migration of an attached volume never seems to finish
16:41:54 tpsilva: Ok.
16:42:15 Thanks e0ne
16:42:22 cinder calls nova, nova creates the new volume, moves the data, attaches it to the instance, and then it should call back cinder to delete the old volume
16:42:38 but the old volume never gets deleted
16:42:45 tpsilva: Ok.
16:42:57 I can see the call in the nova code; nothing's apparently wrong there
16:42:58 tpsilva: can you share the bug link? It would be helpful.
16:43:06 That functionality is quite new, so I wouldn't be surprised if there are issues.
16:43:14 jungleboyj: oh, is it?
16:43:23 on both sides? nova and cinder?
16:43:30 I thought it was relatively new.
16:43:43 Maybe I am thinking of attached extend, though.
16:43:53 whoami-rajat: didn't create it... I didn't know how it should work or whether it's indeed a bug, so I wanted to confirm it first
16:44:16 jungleboyj: I remember seeing it in Queens, and the same bug exists in Queens and master, so it appears it hasn't been thoroughly tested
16:44:37 maybe there are some issues with the status of the old volume, or the migration_status, that prevent it from being deleted
16:44:45 but I didn't dig too deep trying to debug it
16:44:53 ganso: Do you know of a bug for it?
16:45:12 jungleboyj: we tested it on Queens, Rocky, and master
16:45:23 jungleboyj: I haven't seen one logged
16:45:41 so I'll log the bug then
16:45:52 tpsilva: Yes, I think that is what needs to be done.
16:46:03 alright
16:46:04 If you have specific steps for recreating it, that would be helpful.
16:46:12 tpsilva: Which backend are you using?
16:46:22 tested on NetApp and LVM
16:46:31 ah, an interesting part that I forgot
16:46:35 Same result either way?
16:46:40 jungleboyj: yep
16:46:47 but retyping works fine, which is odd
16:47:19 tpsilva: That isn't totally surprising.
16:47:25 Was it a retype without migration?
16:47:33 with migration
16:47:43 and attached
16:47:46 yep
16:47:53 Now, that is interesting.
16:48:47 alright, I'm creating the LP bug
16:49:02 So, there must be something small in the last step that is lost with migration only.
16:49:21 There have been changes on the nova side for the new attach flow and multiattach. Maybe something got missed there.
16:49:54 #action tpsilva to open an LP bug.
16:50:11 tpsilva: Can you add the bug you open to the notes from the meeting, please?
16:50:17 will do
16:51:20 tpsilva: Thank you.
16:51:27 Appreciate you bringing this up as well.
16:51:44 jungleboyj: thanks
16:52:09 tpsilva: That was quick. Thanks.
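A condensed sketch of the flow tpsilva describes, marking the step that never completes. The function names are illustrative stand-ins rather than the actual Cinder/Nova call signatures; the real code paths are Cinder's generic volume migration and Nova's volume swap plus the migrate-volume-completion callback.

```python
# Illustrative outline of attached-volume migration as described above.
# Names are stand-ins, not the actual Cinder/Nova APIs.

def migrate_attached_volume(cinder, nova, old_vol, dest_host):
    # 1. A new volume is created on the destination backend.
    new_vol = cinder.create_volume(host=dest_host, size=old_vol.size)

    # 2. Nova swaps the attachment: the data is copied and the instance
    #    is re-attached to the new volume.
    nova.swap_volume(old_vol, new_vol)

    # 3. Nova should then call back so Cinder can finish the migration
    #    and delete the source volume. Per the report, this step never
    #    completes: the old volume is never deleted, possibly because
    #    its status/migration_status no longer allows the deletion.
    cinder.migrate_volume_completion(old_vol, new_vol, error=False)
```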
16:52:51 Ok. So that covers that part of the agenda.
16:53:11 #topic Bug Triage
16:53:28 whoami-rajat: Any updates on the bugs in the list?
16:54:03 We have a few minutes left.
16:54:15 Or if anyone has other topics.
16:54:27 * lbragstad has something policy related but will wait if needed
16:54:38 lbragstad: You win. Please go.
16:54:51 first - thanks for the reviews on https://review.openstack.org/#/c/602489/
16:54:58 geguileo_PTO hasn't been available for the past week, so no updates there. Also no updates on the other bugs.
16:55:12 whoami-rajat: Ok. Thanks.
16:55:21 ^ that's essentially a template for what we were talking about on Thursday at the PTG: having safer ways to upgrade policies without regressions
16:56:02 i wanted to prove that the tests actually work when changing the policy - which is what https://review.openstack.org/#/c/604115/1 does
16:56:31 and that's when i noticed https://review.openstack.org/#/c/604115/1/cinder/tests/unit/policy.json
16:57:01 #link https://review.openstack.org/#/c/602489/
16:57:05 i can pull that out into its own cinder change if folks think it's useful
16:57:12 #link https://review.openstack.org/#/c/604115/1
16:57:51 lbragstad: Oh. That is good.
16:57:57 lbragstad: You mean get rid of the unit test policy file?
16:57:59 ideally - the more changes like https://review.openstack.org/#/c/602489/4 land, the more policies can be removed from https://review.openstack.org/#/c/604115/1
16:58:05 correct
16:58:15 lbragstad: That probably should have been done back when we moved policies into code.
16:58:18 smcginnis: Right?
16:58:20 since 1.) you have defaults in code and 2.) the defaults are being tested
16:58:27 lbragstad: That would be great!
16:58:35 smcginnis: ++
16:58:47 it's cool that it wasn't
16:59:10 all of this is a lot of work, so small, iterative changes are nice
16:59:27 lbragstad: ++
16:59:38 Bite-size changes that we can try to digest are good.
16:59:39 but i can propose the removal of that policy from the testing policy file
17:00:04 Yeah, let's start removing those overrides where we can. I think that is good.
17:00:09 otherwise https://review.openstack.org/#/c/604115/1/cinder/policies/volumes.py should show how those tests protect your API
17:00:14 lbragstad: Thank you for the work you are doing there!
17:00:25 yep - anytime.. let me know if you have questions
17:00:34 lbragstad: Will do.
17:00:35 or if anyone starts porting other tests and wants eyes on it
17:00:49 Ok. We are at the top of the hour and need to wrap up.
17:01:01 Thank you everyone for meeting!
17:01:07 #endmeeting
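To illustrate lbragstad's point above: once default policies are registered in code, tests can enforce directly against those defaults, with no tests/unit/policy.json override file. Below is a minimal sketch using oslo.policy; the rule name and check string are hypothetical examples, not Cinder's actual definitions (those live under cinder/policies/).

```python
# Minimal sketch of exercising an in-code policy default with oslo.policy.
# The rule name and check string are hypothetical examples.
from oslo_config import cfg
from oslo_policy import policy

GET_VOLUME = policy.RuleDefault(
    name="volume:get",
    check_str="is_admin:True or project_id:%(project_id)s",
)

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults([GET_VOLUME])

# With defaults registered in code, a test can enforce them directly --
# no tests/unit/policy.json override file is needed.
creds = {"project_id": "p1", "roles": ["member"]}
owner_target = {"project_id": "p1"}
other_target = {"project_id": "p2"}

assert enforcer.enforce("volume:get", owner_target, creds)
assert not enforcer.enforce("volume:get", other_target, creds)
```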