16:00:20 #startmeeting Cinder
16:00:21 Meeting started Wed Aug 29 16:00:20 2018 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:24 The meeting name has been set to 'cinder'
16:00:29 o/
16:00:39 hi
16:00:41 hello
16:00:49 hi
16:01:51 hi
16:01:56 Hey.
16:02:21 o/
16:02:21 Give people another minute.
16:02:29 Good morning.
16:02:42 Hey woojay. Welcome!
16:02:53 jungleboyj: Thank you.
16:03:40 Ok. Guessing this is who we are getting for today, so let's get started.
16:03:51 #topic announcements
16:04:09 The usual reminder that we have the PTG Planning Etherpad.
16:04:27 #link https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
16:04:38 Hi
16:04:53 If you have topics please add them.
16:05:01 If you are planning to come please add your name. :-)
16:05:25 A good list of people and some good topics.
16:05:37 Late next week I will start organizing the times.
16:06:25 Also, we landed on RC3 for our Rocky release. I know we found one more issue after that, but c'est la vie.
16:06:46 Overall I feel the release went pretty well. Thank you to everyone for your efforts.
16:06:57 Hoping to set aside a bit of time for a post-mortem at the PTG.
16:07:49 I think that is all I have for announcements.
16:07:52 anyway, we can release it in rocky later with the next stable release
16:08:02 e0ne: Right.
16:08:24 #topic Follow-up on Cinder/Manila Team Dinner
16:09:09 If you want to come to dinner and haven't voted please do so:
16:09:13 #link https://doodle.com/poll/8rm3ahdyhmrtx5gp#table
16:09:52 Looks like the only day that works for everyone is Tuesday. So, thinking we will plan for then, after the welcoming happy hour.
16:10:08 tbarron: ^^^ Any concerns with that plan?
16:11:22 Ok, guessing he will catch up later.
16:11:49 I will look at some options for food. Did people like the place we went to last time we were in Denver?
16:12:46 * e0ne doesn't remember what place we met at
16:13:01 jungleboyj: I did. The place across the street was also good
16:13:03 Oh, looking at the Etherpad it looks like they already landed on Tuesday.
16:13:25 It was pretty good, except they didn't really have enough space for us.
16:13:39 I will look around and land on a proposal for next week's meeting.
16:13:52 hey
16:14:35 Ok. I think that is all I needed to say there.
16:14:56 #topic Pruning Cinder backups with Ceph RBD driver
16:15:01 cmart: You here?
16:15:04 yep
16:15:15 #link http://lists.openstack.org/pipermail/openstack/2018-August/046878.html
16:15:20 cmart: Take it away.
16:15:30 cool. I realize that part of this discussion may be more for operators than developers -- but bear with me :)
16:15:52 cmart: No worries. All are welcome. :-)
16:16:23 Do we have eharney here?
16:16:39 jbernard is not. He could also help with this question.
16:16:45 hi
16:16:49 very short version: many of my servers are 'pets' running non-cloud-native workloads. I run them as volume-backed instances, and use the Cinder backup service to take volume backups of them every day
16:17:01 all is good until I want to start pruning old backups. (backend here is Ceph RBD)
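For reference, below is a minimal sketch of the kind of daily backup job cmart describes, using python-cinderclient and keystoneauth1. Credentials are assumed to come from the usual OS_* environment variables, and the helper names, the backup naming scheme, and the force/incremental choices are illustrative assumptions, not part of any existing tool.

    import os

    from cinderclient import client as cinder_client
    from keystoneauth1 import loading
    from keystoneauth1 import session as ks_session


    def get_cinder():
        # Build a Cinder API client from the usual OS_* environment variables.
        loader = loading.get_plugin_loader('password')
        auth = loader.load_from_options(
            auth_url=os.environ['OS_AUTH_URL'],
            username=os.environ['OS_USERNAME'],
            password=os.environ['OS_PASSWORD'],
            project_name=os.environ['OS_PROJECT_NAME'],
            user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
            project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'),
        )
        return cinder_client.Client('3', session=ks_session.Session(auth=auth))


    def daily_backup(volume_id):
        # One backup per day of the given volume. With the Ceph RBD backup
        # backend the first call yields a full backup and every later call
        # a diff-based one, whatever the incremental flag says (the behavior
        # being discussed in this meeting).
        cinder = get_cinder()
        backup = cinder.backups.create(
            volume_id,
            name='daily-%s' % volume_id,
            incremental=True,  # intent only; see the discussion below
            force=True,        # allow backing up an in-use (attached) volume
        )
        return backup.id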
16:17:43 it seems that Cinder backup service + Ceph RBD = you can only delete the *latest* backup of a given volume
16:17:59 which makes it basically impossible to manage recurring backups in a reasonable way
16:18:24 and it seems this limitation is not inherent to the Ceph backend, so I'm not sure why Cinder imposes it
16:18:51 was hoping to connect with people who are either:
16:18:51 yeah, i think this is a limitation of cinder backup with how it can manage chains of incremental backups, and not really a Ceph issue
16:18:55 1. taking recurring volume backups like I am, and may have advice/experience to share
16:19:13 e0ne: Have you bumped into this with the work you have done on backups?
16:19:15 2. familiar with this architecture in Cinder, and could help me understand and maybe I can help work on a fix
16:20:08 iirc cinder backup doesn't provide what would be needed for an "incremental forever" kind of backup strategy, i think it assumes you will regularly create full backups, but i haven't looked at this in depth in a bit
16:20:09 Could you resolve the issue by doing full backups instead of incremental?
16:20:24 I would if I could
16:20:24 jungleboyj: I didn't face this issue, but I agree with eharney
16:20:38 Cinder ignores the `--incremental` flag when you create backups with the Ceph RBD backend.
16:20:55 the first backup of a given volume is *always* a full backup, and each subsequent backup is *always* an incremental backup.
16:21:17 Hmmm, ok. Good to know.
16:21:50 cmart: it could be an issue with the ceph driver if it ignores the 'incremental' flag
16:22:12 isn't the inability to delete older backups an api-layer restriction?
16:22:14 That sounds like a ceph bug actually.
16:22:20 e0ne: Yeah, that sounds buggy.
16:22:25 e0ne and eharney: yes and yes
16:22:46 Any backup program behaves this way if you only ever do incremental backups.
16:23:09 But you need to be able to create full backups at least from time to time, or incremental really becomes useless.
16:23:28 smcginnis: the nomenclature is loosely applied here -- I believe that Cinder "incremental" backups with Ceph RBD are really *differential* backups
16:23:31 smcginnis: ++
16:23:39 this is solved in some backup software by periodically synthesizing new full backups out of a chain of incremental backups (which you need to do anyway if you want to be able to restore in a reasonable amount of time)
16:24:18 hrmm, i forgot about that terminology detail
16:24:28 If Cinder itself doesn't allow full backups, then that's a bug in cinder. If Ceph ignores it and only does incremental or differential, that's a bug in that driver.
16:24:50 smcginnis: ++ I think that is where we need to start.
16:25:32 cmart: If we are able to resolve the inability to ever create a new full backup, would that help resolve your issues?
16:26:25 yes, I think so, if Cinder then allows deletion of old full / "incremental" backups from before the newer full backup.
16:26:43 That's how it should work.
16:27:08 on the backend, interestingly, "incremental" backups are stored as snapshots of the RBD image, meaning they don't depend on each other
16:27:36 how backups are stored on the ceph backend is a bit confusing compared to how they are represented in cinder -- but some of that is a feature, not necessarily a bug
16:27:54 yeah. each "incremental" backup (according to cinder) actually contains a complete diff from the base in Ceph. so it's kinda standalone
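To make the pruning problem concrete, the sketch below shows the cleanup step cmart is after, reusing a client built as in the earlier sketch. It assumes older backups become deletable once a newer full backup exists, which is exactly the behavior being questioned here; the retention count, the function name, and the volume_id search filter are assumptions worth verifying against your cinderclient version.

    from cinderclient import exceptions as cinder_exc


    def prune_backups(cinder, volume_id, keep=7):
        # Keep only the newest `keep` backups of a volume and try to delete
        # the rest. With the current Ceph RBD behavior described above, the
        # delete call is where removing anything but the latest backup fails.
        backups = cinder.backups.list(search_opts={'volume_id': volume_id})
        backups.sort(key=lambda b: b.created_at, reverse=True)
        for old in backups[keep:]:
            try:
                cinder.backups.delete(old)
            except cinder_exc.ClientException as exc:
                print('could not delete backup %s: %s' % (old.id, exc))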
16:27:56 so we need to be clear about what exactly we're trying to sort out here
16:28:22 eharney: +1
16:28:58 yeah. I think a straightforward solution would be if the `--incremental` flag were respected
16:29:29 I'm not really familiar with the existing code but happy to try to help
16:29:49 cmart: ++ I think that makes sense.
16:29:52 with the goal being that if you didn't supply that flag, you would end up with a fully independent new backup, which is not based on the older previous backups, right?
16:30:00 correct, eharney
16:30:06 eharney: ++
16:30:29 so, that's one possible solution; another would be to add something to break the dependency chain when you are performing the delete, to preserve the space optimization, etc.
16:30:45 but regardless of how it gets fixed it sounds like an interesting thing to look into
16:30:56 Well, the first is how it should work, but the second may be useful in some cases.
16:31:07 But a little odd as far as backup software behavior goes.
16:31:11 as to your last point, eharney, Cinder would need a new concept of a "differential" backup (as opposed to incremental)
16:31:28 but we only define how it should work in terms of what data is backed up and restorable and how it behaves from the Cinder API layer
16:31:51 this driver does some optimizations in the backend to avoid transferring more data than is needed, which is fine as long as the right semantics are preserved at the higher layer
16:32:02 ++
16:32:42 so i guess the other question is, can you actually prune old backups when using other drivers?
16:32:51 eharney: ++
16:33:03 my only experience here is with RBD
16:33:05 I don't think we want to create a special case.
16:33:39 sounds like it's worth writing a bug about for more investigation
16:34:00 Sorry that I may have missed some of the earlier discussion. If I remember correctly, the Ceph backup driver implemented its own way of doing incremental backups, not using Cinder's incremental backup
16:34:47 So Ceph added support for incremental backup of ceph volumes, even before we added incremental backup support to Cinder
16:34:57 it still works with incremental backups in cinder, and tracks parent ids, etc
16:35:23 (at least it's supposed to, it was kind of a mess a while ago but got a lot of bug fixes a couple of cycles ago)
16:35:29 but the Ceph driver does not inherit from the chunked driver mechanism
16:35:40 Let's get a bug filed and verify that things are working correctly for all of them, or whether this is just an issue with ceph.
16:35:44 unless someone has changed that
16:35:48 that's true, but i don't think it's really relevant?
16:35:58 smcginnis: ++
16:36:02 Yeah, chunked shouldn't matter, should it?
16:36:09 That's just how the data gets transferred.
16:36:11 yes, it does matter
16:36:23 Ceph has its own way of doing incremental
16:36:28 if that matters then we messed up somewhere, because that's just an implementation of how to keep track of the blocks of data
16:36:34 so you need to take a look at that driver
16:36:56 unless someone rewrote that to use cinder's incremental backup model
16:37:07 I thought Gorka may be familiar with this
16:37:14 I was hoping he might be around
16:37:27 this is a bit old https://gorka.eguileor.com/inside-cinders-incremental-backup/#Ceph-Incremental-Backup but maybe still relevant
16:37:55 So, I have a proposal ...
16:37:56 cmart: I think you should check with Gorka about this
16:38:20 Let's get a bug opened for this. Gorka and the important players will be in Denver.
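As a rough illustration of the first option eharney describes (respecting the --incremental flag so a fully independent backup is created when the flag is not supplied), here is a hypothetical sketch of the decision the driver would need to make. This is not the actual cinder.backup.drivers.ceph code; the class name, method names, and the is_incremental attribute are placeholders for illustration only.

    class CephBackupDriverSketch(object):
        # Hypothetical only -- not the real cinder.backup.drivers.ceph code.
        # Method names and the is_incremental attribute are assumed here
        # purely to illustrate the decision being proposed.

        def backup(self, backup, volume_file):
            if getattr(backup, 'is_incremental', False) and self._has_base(
                    backup.volume_id):
                # What the driver effectively does today: export only the
                # RBD diff against the existing base backup.
                self._diff_backup(backup, volume_file)
            else:
                # The proposed path: no incremental backup requested (or no
                # base exists), so create a new, fully independent full
                # backup that lets the older chain be deleted.
                self._full_backup(backup, volume_file)

        # Placeholders for whatever the real driver would use.
        def _has_base(self, volume_id):
            raise NotImplementedError

        def _diff_backup(self, backup, volume_file):
            raise NotImplementedError

        def _full_backup(self, backup, volume_file):
            raise NotImplementedError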
16:38:40 If we can make geguileo aware of the issue and then talk it over at the PTG, we can figure out where to go.
16:38:41 +1 I'll file a bug and reference the questions asked here.
16:38:58 cmart: Thanks!
16:39:02 jungleboyj: sorry, forgot about the meeting (I missed the ping)
16:39:27 geguileo: Sorry.
16:39:36 I just can't get booted every week. ;-)
16:40:32 lol
16:40:38 I was catching up
16:40:50 So this is an issue with the backups being linked when incremental, right?
16:40:55 They want to remove that?
16:41:01 cmart: just checked. The Ceph backup driver is still not using the chunked driver
16:41:18 geguileo: Basically, after a new full backup, you should be able to delete previous backups.
16:41:22 xyang: we don't use it because it's faster to ask the Ceph cluster to get us the diff
16:41:35 geguileo: It doesn't look like that's possible, at least with the way ceph is implementing things.
16:41:49 geguileo: not a problem. trying to differentiate Ceph's implementation vs Cinder's incremental
16:41:52 smcginnis: yup, we have to start checking the parameter passed by Cinder
16:41:57 I don't think we do it right now
16:42:15 Ah, good. Not that it doesn't work now, but that it is something simple like that.
16:42:23 I would have to check how easy it is to do
16:42:42 Because I believe Ceph always does it incrementally if it can
16:42:52 regardless of the parameters passed to Cinder
16:42:55 geguileo: yes, that's my experience
16:43:11 it's great until you want to remove old backups while keeping newer ones
16:43:32 Ok, so it sounds like we have a bug that needs to be investigated.
16:43:33 (I think pretty much everyone doing recurring daily backups will want to do that)
16:43:41 got it. thanks for the context, all :)
16:43:41 jungleboyj: yup
16:43:45 jungleboyj: +1
16:43:59 the bug is that Ceph cannot create full backups after the first full backup
16:44:25 geguileo: ++
16:44:59 geguileo: Are you able to look into that?
16:45:12 jungleboyj: I can at least look to see if it's an easy fix
16:45:14 lol
16:45:22 geguileo: I will take that.
16:45:39 cmart: is there a bug open?
16:45:39 geguileo: If not we can talk about it more at the PTG?
16:45:49 geguileo: not yet, but i'll summarize the above ^ and write one up for you
16:45:57 cmart: That would be great.
16:46:10 #action cmart to open a bug summarizing the issue.
16:46:25 #action geguileo to look into it and see how hard it would be to change.
16:46:27 cmart: thanks!
16:46:49 cmart: Thanks for bringing this up!
16:47:13 of course. thanks for accommodating my use case
16:47:31 cmart: Sounds like a totally valid one. :-)
16:47:52 cmart: Anything else?
16:48:00 that's all for now, jungleboyj
16:48:05 cmart: Great.
16:48:20 eharney, geguileo, e0ne: Thanks for your input on that as well.
16:48:38 #topic Cross Project Times Added for PTG
16:48:57 So, we have time planned at the PTG, 4-5 on Wednesday, to chat with Ironic.
16:49:37 Julia contacted me and asked if we had any topics for her. I didn't know that we did, but thought a discussion about what Ironic is doing with Cinder and what they are planning to do in the future could be useful.
16:50:05 If you have specific topics for that time please add them to the etherpad and mark them accordingly.
16:50:22 Also, we have our traditional 9 to 11 am time on Thursday with Nova.
16:50:48 If you have anything to discuss with Nova please add the topic and I will schedule it for the cross project time.
16:51:51 Anyone have anything else to share there?
16:52:18 Wednesday will be great for me. It would be great if we get full support of boot from volume and attach for ironic instances
16:52:47 e0ne: Cool. Glad that will be a useful time.
16:52:59 I figured if we got the teams together something useful could come from it. :-)
16:53:11 jungleboyj: I added something to Nova
16:53:19 Plus, who doesn't like hanging out with TheJulia?
16:53:20 :-)
16:53:33 geguileo: Cool.
16:53:49 jungleboyj: and I assume they'll be adding the cross-cell migration discussion
16:53:58 if we don't clear it all out on the ML
16:54:06 geguileo: I would assume so. If not, we should add it.
16:54:35 That is all I had there.
16:54:41 #topic Open Discussion
16:54:58 Anyone have anything else to discuss?
16:55:36 Nothing here
16:56:00 Ok. Cool. Looking forward to seeing people f2f in less than 2 weeks.
16:56:11 Talk to you all here next week.
16:56:18 #endmeeting