16:00:30 #startmeeting Cinder
16:00:30 Meeting started Wed Jan 23 16:00:30 2019 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 The meeting name has been set to 'cinder'
16:00:41 Courtesy ping: jungleboyj diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlon tpsilva ganso patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro lpetrut lseki _alastor_ whoami-rajat yikun rosmaita enriquetaso
16:00:45 hello
16:00:47 Hi
16:00:48 hi
16:00:51 o/
16:00:58 hi! o/
16:00:58 hi
16:00:59 hi
16:01:00 hello
16:01:00 @!
16:01:00 <_pewp_> jungleboyj ( ・_・)ノ
16:01:06 hi
16:01:14 o/
16:02:25 Pretty good showing. Do we have smcginnis, as he has a couple of topics?
16:02:41 hi
16:02:48 walshh_: Welcome.
16:03:31 Hmmm. Ok. Guess we will get started.
16:03:48 o/
16:03:51 #topic announcements
16:03:55 smcginnis: Yay! Welcome.
16:04:04 So, announcements ...
16:04:11 o/
16:04:32 We did not get any dissent to the proposal of having yikun and whoami-rajat as cores, so they have now been added to the core list.
16:04:48 congratulations!
16:04:50 Sorry that took a little longer to get done but you should now see a +2 option for your reviews.
16:04:59 Welcome!
16:05:07 whoami-rajat, yikun: welcome!
16:05:21 Yes, welcome. Thank you for your commitment as of late. It is great to have you onboard.
16:05:23 ha, thanks. :)
16:05:25 jungleboyj: yes, Thanks!
16:05:32 Thanks everyone.
16:06:01 So, it is great to grow the team.
16:06:35 Also, friendly reminder that our mid-cycle planning continues:
16:06:37 jungleboyj: +1
16:06:39 #link https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning
16:06:50 hey
16:07:09 Details on our hotels have been added in the etherpad.
16:07:48 Also, there are a number of meet-up sessions that just happen to be happening while we are in town, so those will be good for us to attend.
16:08:13 I think that is all I had for announcements.
16:08:56 #topic Default value of backend_url vs tested value
16:08:59 smcginnis:
16:09:24 I just wanted to make sure folks were aware of this and see if anyone had any thoughts if we should change anything.
16:09:45 Right now, in code our lock coordination through tooz uses a local file.
16:10:03 That is also what you end up with if you do a distro package based install of Cinder.
16:10:08 Not sure about other deployments.
16:10:09 smcginnis: is it backend_url for locks?
16:10:15 e0ne: Correct
16:10:40 The issue is, devstack sets the backend url to use etcd.
16:10:58 At least by default. It is possible to override that, but I don't see anywhere where we do.
16:11:14 So all gate testing is using etcd.
16:11:23 just using files by default seems right to me -- but i'm not sure why we've moved to only testing etcd in the gate
16:11:24 So our default settings are not being tested, but that's it.
16:11:53 eharney: My guess is when we declared etcd as an expected service, they wanted to get coverage on that.
16:12:27 So we can change the devstack default, but seems like we would want to have both local file and etcd tested. I'm just not sure where to divide that up.
16:12:45 Or if it's really worth changing a bunch here.
16:12:56 So just wanted to point that out in case anyone else has any thoughts or ideas.
16:13:04 Since not tested equals broken
16:13:09 :-)
16:13:11 we could consider switching the lio-barbican job to not use etcd, since it's been serving as a place to test "the other option" for a few things already
16:13:18 assuming it's easy to turn off
16:13:31 Yeah, just a flag.
16:13:35 eharney: That is a good idea.
16:13:37 That might be best.
16:13:53 eharney: great idea.
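(Editor's aside: the backend_url discussion above hinges on the URL scheme selecting the tooz lock driver. The sketch below illustrates that dispatch with a hypothetical lookup table; the scheme names mirror real tooz drivers such as `file` and `etcd3+http`, but tooz itself loads drivers dynamically rather than through a dict like this.)

```python
from urllib.parse import urlparse

# Illustrative map from backend_url scheme to lock backend; the scheme
# names match tooz drivers mentioned in the discussion, but this table
# is an editorial sketch, not tooz's actual driver-loading code.
KNOWN_DRIVERS = {
    "file": "local file locks (Cinder's packaged default)",
    "etcd3+http": "etcd v3 over HTTP (what devstack configures)",
    "memcached": "memcached-backed locks",
}

def describe_lock_backend(backend_url: str) -> str:
    """Return a description of the lock backend a backend_url selects."""
    scheme = urlparse(backend_url).scheme
    try:
        return KNOWN_DRIVERS[scheme]
    except KeyError:
        raise ValueError(f"unknown coordination backend: {scheme!r}")

# The two configurations contrasted in the meeting:
print(describe_lock_backend("file:///var/lib/cinder"))
print(describe_lock_backend("etcd3+http://127.0.0.1:2379"))
```

The point smcginnis raises is visible here: the first URL is what the default code path and distro packages produce, while the second is what devstack sets, so only the second was exercised in the gate.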
I like this option
16:13:56 It didn't seem like the "correct" place to do that, but it is somewhere that we could easily switch it.
16:14:29 smcginnis: the other option is to add one more job with tempest and local locks
16:14:34 Maybe that should be the 'other options' job?
16:14:55 e0ne: With the current infra state do we want to add more jobs?
16:14:56 e0ne: Yeah, that's another possibility. Seemed overkill though. We already have so many jobs.
16:15:03 smcginnis: ++
16:15:11 it also seems like overkill to deploy etcd in a bunch of jobs that don't really need it...
16:15:34 It's a "base OpenStack service" :)
16:15:34 jungleboyj: I don't want more jobs either. I just pointed out another option
16:15:59 e0ne: Understood. :-) No worries
16:16:04 :)
16:16:15 Well, if folks are OK with the barbican_lio idea, I can try to put up a patch later to set this flag.
16:16:27 Then at least we have *something* covering that config scenario.
16:16:33 https://review.openstack.org/#/c/632773/ will see what happens
16:16:40 smcginnis: ++
16:17:04 eharney: What took you so long?
16:17:09 eharney: Awesome!
16:17:25 jungleboyj: OK, that's enough for now unless anyone else has questions.
16:17:36 #link https://review.openstack.org/#/c/632773/
16:17:52 Ok. Thanks for catching that smcginnis
16:18:12 #topic Alembic instead of sqlalchemy-migrate
16:18:16 smcginnis: You again.
16:18:26 * jungleboyj defers to shadow PTL
16:19:00 I like this idea in general but do we have some easy way to move existing migrations to alembic?
16:19:05 So this came up in the nova channel.
16:19:33 zzeeek (I may have missed some z's and e's there) is the maintainer of both and had actually deprecated sqlalchemy-migrate years ago.
16:19:43 The direction has been to get off of that and use alembic.
16:20:01 I don't think that was communicated too widely. Or at least I wasn't really aware of that.
16:20:15 So he would like to stop maintaining it, but there are still a few OpenStack services using it.
16:20:15 I had no idea.
16:20:37 I know glance had done the migration, so at least there are some examples of it being done.
16:20:53 And he sounded very willing to help with doing the migration.
16:20:54 we had 2 people working on it for about 1.5 cycles
16:21:09 rosmaita: Oh wow, I was hoping it was less effort than that. :/
16:21:27 well, we were doing the rolling upgrade stuff at the same time
16:21:36 Well, if we need to get off of it, we probably should get started then if it's going to take that long.
16:21:50 Ah, so it might be easier for us then. That's good.
16:21:53 and it was maybe ocata? hopefully by now it's a bit easier
16:22:15 I was hoping he would just say "run this tool and it does it for you", but no such luck. ;)
16:22:27 what is the procedure to re-write current migrations?
16:22:36 There are guides out there.
16:23:18 Maybe someone is more of an expert, but my impression is you now have UUID migration scripts instead of numbered and it's a little more flexible on how you manage those.
16:23:23 rosmaita: Any experience there?
16:24:04 we didn't duplicate all migrations, just started with a liberty db (this was in ocata) and did the few migrations from there
16:24:17 That makes sense.
16:24:34 our migration scripts are named by release (which actually may be a problem)
16:24:51 And I was collapsing the cinder migrations for a while too so we could just start with a base supported schema and not have to do step by step since Folsom anyway.
16:25:03 that's the way to go
16:25:04 is there any test helper in oslo.db to test alembic migrations?
16:25:22 yeah, there are some mixins
16:25:29 smcginnis: ++
16:25:40 rosmaita: great
16:26:15 we carried both old and new migration scripts for one release, "just in case", but it didn't turn out to be necessary (as far as i've heard)
16:26:39 my impression is that alembic is really solid
16:26:47 So if we agree, I think we need to get started scoping this work.
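(Editor's aside: the "UUID migration scripts instead of numbered" point above is the key conceptual difference between the two tools. sqlalchemy-migrate orders scripts by a numeric prefix, while alembic chains opaque revision ids, each pointing at its predecessor via down_revision. The revision ids below are invented for illustration only.)

```python
# sqlalchemy-migrate style: ordering is the sorted numeric prefix of
# the script filenames (table names here are illustrative).
migrate_scripts = ["001_add_volumes", "002_add_snapshots", "003_add_groups"]
assert migrate_scripts == sorted(migrate_scripts)

# alembic style: a linked list of revision -> down_revision pairs;
# the ids are opaque, so order comes from the links, not the names.
alembic_revisions = {
    "9afc3e0f83d5": None,            # base revision
    "1b2c3d4e5f6a": "9afc3e0f83d5",
    "7e8f9a0b1c2d": "1b2c3d4e5f6a",  # head
}

def upgrade_path(revisions, head):
    """Walk down_revision links from head back to base, then reverse
    to get the order in which migrations would be applied."""
    path, rev = [], head
    while rev is not None:
        path.append(rev)
        rev = revisions[rev]
    return list(reversed(path))

print(upgrade_path(alembic_revisions, "7e8f9a0b1c2d"))
```

This is also why rosmaita's approach of starting from a known base schema works well: the chain only needs one base revision with down_revision = None, rather than every historical numbered script since Folsom.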
We can probably enlist zzzeeeeeeeeeek for some help. Not sure if we want to write a spec, but we should probably have a blueprint to at least track it all. And any volunteers to lead the effort would help.
16:27:18 rosmaita: Unrelated, but that reminds me. We can probably clean up that sqlalchemy-migrate stuff out of the glance repo.
16:27:20 looks like a good midcycle topic
16:27:25 rosmaita: ++
16:27:28 smcginnis: +1. I'll take a look at it if I can help with this effort
16:27:36 e0ne: Great!
16:27:37 rosmaita: ++
16:27:53 I have some DB experience. Will help if I can.
16:28:10 I can also help with it. :)
16:28:12 Let's do a little research and regroup at the midcycle to hammer out a plan.
16:28:22 sounds good
16:28:23 smcginnis: +1
16:28:25 smcginnis: ++
16:28:38 That's all from me then.
16:28:40 smcginnis: count me in too.
16:28:59 I could help too
16:29:01 smcginnis: Cool. Thank you.
16:29:08 Awesome, thanks everyone.
16:29:23 #topic Cinderlib
16:29:30 geguileo: You here?
16:29:37 jungleboyj: yup
16:29:50 Cool. The floor is yours.
16:29:59 I just wanted to ask for reviews on the cinderlib patces
16:30:02 patches
16:30:15 though right now the gate seems to be failing for unrelated issues
16:30:34 I also wanted to know if anybody had any questions related to cinderlib
16:30:50 I know that hemna started looking at it and had a couple...
16:30:52 hemna: Did the other day.
16:31:07 He was wondering why the DB was there.
16:31:22 jungleboyj: I answered him afterwards on the channel, but he was away
16:31:34 geguileo: Ok. Were my answers close?
16:31:34 in case anybody is wondering the same question
16:31:49 jungleboyj: I don't remember XD
16:32:01 *sad trombone.wav*
16:32:14 jungleboyj: :)
16:32:16 I don't have a great memory XD
16:32:22 Ok.
16:32:22 basically, the thing is that cinderlib implements a persistence plugin system
16:32:42 so you can either keep the metadata in memory and then the user of cinderlib can store this data wherever they want
16:32:50 using the json serialization mechanism
16:33:02 or they can use a plugin to store it in a DB (included plugin)
16:33:07 or write their own plugins
16:33:13 like I did for the Ember-CSI project
16:33:26 where I store the Cinder metadata into CRDs in the k8s deployment
16:33:30 Ok. Makes sense.
16:33:42 I roughly said that to hemna
16:33:50 jungleboyj: thanks! :-)
16:34:11 I know that smcginnis also had a look at some of the patches and made some suggestions
16:34:19 Anyone else have questions?
16:34:27 jungleboyj: I have a question
16:34:36 When is the deadline to get these patches merged?
16:34:47 I would say milestone-3
16:35:09 ok
16:35:23 smcginnis: You agree?
16:36:27 I mean, as per our processes it can't go in any later than that.
16:36:44 It isn't really a driver so I hadn't enforced ms-2.
16:37:03 It is just a tech preview but we don't want to put anything in later than ms-3.
16:37:07 sounds reasonable to me
16:37:17 * jungleboyj is thinking out loud.
16:37:37 Yeah, sounds good.
16:37:43 So, milestone-3 sounds like the answer.
16:37:49 thanks
16:38:04 geguileo: Thanks. Sorry I haven't reviewed everything yet.
16:38:08 I will work on that.
16:38:15 jungleboyj: thanks!!!
16:38:39 Ok. So that is all from you geguileo?
16:38:44 yup
16:38:53 Ok. That was all we had on the agenda.
16:39:00 #topic OpenDiscussion
16:39:10 Anything else to talk about today?
16:39:16 i have something
16:39:23 rosmaita:
16:39:26 Go for it.
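(Editor's aside: the persistence-plugin idea geguileo explains above can be sketched as follows. The class and method names here are hypothetical, not cinderlib's actual interface; the sketch only shows the shape he describes: metadata kept in memory, with a JSON serialization mechanism so the caller can store it wherever they want — a file, a DB, or CRDs as Ember-CSI does.)

```python
import json

class InMemoryPersistence:
    """Hypothetical persistence plugin: hold resource metadata in a
    dict and let the caller decide where the serialized form lives."""

    def __init__(self):
        self._store = {}

    def save(self, resource_id, metadata):
        # A real plugin could write to a DB or a k8s CRD here instead.
        self._store[resource_id] = metadata

    def get(self, resource_id):
        return self._store[resource_id]

    def dump_json(self):
        """Serialize all metadata; the caller persists the blob anywhere."""
        return json.dumps(self._store, sort_keys=True)

    @classmethod
    def load_json(cls, blob):
        """Rebuild the in-memory store from a previously saved blob."""
        obj = cls()
        obj._store = json.loads(blob)
        return obj

p = InMemoryPersistence()
p.save("vol-1", {"size": 10, "status": "available"})
restored = InMemoryPersistence.load_json(p.dump_json())
print(restored.get("vol-1"))
```

The answer to hemna's "why is the DB there" question falls out of this shape: the DB is just one included implementation of the plugin interface, not a hard requirement.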
16:39:32 i need some stable cores to take a look at https://review.openstack.org/#/c/629463/2
16:39:41 it's a squash of 4 cherry picks backported from rocky to queens
16:39:49 i explain in the commit message why i did it like that
16:39:58 (though my practical reason is that this is going to have to go into pike, too)
16:40:02 rosmaita: I saw that earlier and was waiting for the check to pass before looking closer.
16:40:07 but i can do 4 separate cherry picks if that's preferable
16:40:11 Makes sense. I'll take a look at it.
16:40:15 jungleboyj: it has passed, thanks for the recheck
16:40:23 rosmaita: Yep.
16:40:38 Can it easily be separated into separate changes? Or does it need to be together to make tests pass?
16:40:40 I thought we had talked about that before and agreed to do the squashed patch.
16:41:32 yes, we had discussed it on the bug
16:41:43 Yeah, this looks good to me.
16:41:55 ok, cool
16:42:15 If it were more lines of code I would be worried but I think it is ok once it passes tests.
16:42:57 Anything else on that?
16:43:12 nope, that's all from me ... thanks!
16:43:22 rosmaita: Thanks.
16:43:29 Any other topics?
16:43:40 jungleboyj: i've a request
16:43:47 whoami-rajat: Go ahead.
16:44:52 if anyone has time to look into this bug https://bugs.launchpad.net/cinder/+bug/1811663
16:44:53 Launchpad bug 1811663 in Cinder "Gate failure : AssertionError: Lists differ: [] != []" [Undecided,In progress] - Assigned to Rajat Dhasmana (whoami-rajat)
16:45:39 it was proposed in the last meeting.
16:45:42 #link https://bugs.launchpad.net/cinder/+bug/1811663
16:45:49 * smcginnis sees a Thread and looks in geguileo's direction
16:45:50 :)
16:46:01 XD
16:46:07 * jungleboyj did that as well
16:46:19 smcginnis: I think I agreed to look into it and I didn't...
16:46:26 Hah
16:46:26 * jungleboyj hopes geguileo's threads don't unravel
16:47:11 yup, I agreed in a previous meeting to fix that one...
16:47:19 let's see if I can get it fixed this time
16:47:29 :-)
16:47:31 geguileo: Thanks!
16:47:33 Yay for my notes!
16:48:05 geguileo: Thank you.
16:48:14 whoami-rajat: Any other bugs that need attention?
16:48:28 All of them.
16:48:36 :-)
16:48:47 Hear no evil, see no evlil.
16:48:50 *evil
16:49:23 jungleboyj: not right now. will prepare some for next week.
16:49:55 Ok. Sounds good. No worries. We have been running out of time in meetings lately.
16:50:14 whoami-rajat: i misread that as you would create some new bugs in cinder!
16:50:26 * jungleboyj is laughing
16:50:35 We have plenty of people doing that. Don't need help.
16:50:47 lol
16:50:56 rosmaita: haha, will prepare a list*
16:51:03 :)
16:51:14 Ok. Other topics then?
16:51:55 We may plan to do a review of the mid-cycle topics when we meet next week.
16:52:32 So, please get your topics in there.
16:53:03 ++
16:53:09 It has gotten quiet so I think we can wrap up. :-)
16:53:17 ++ :)
16:53:29 :-) +++++++++++++++++++++
16:53:37 Thank you for joining, team.
16:53:54 Stay warm, stay out of the snow and have a good rest of your week.
16:54:06 Thanks!
16:54:07 bye!
16:54:11 Bye all!
16:54:19 #endmeeting