16:00:30 #startmeeting Cinder
16:00:31 Meeting started Wed Jul 24 16:00:30 2019 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 The meeting name has been set to 'cinder'
16:00:47 Hey
16:00:53 hi! o/
16:00:53 Courtesy ping: jungleboyj whoami-rajat rajinir lseki carloss pots woojay erlon geguileo eharney rosmaita enriquetaso e0ne smcginnis davidsha walshh_ xyang hemna _hemna tosky
16:01:08 Hi
16:01:08 o/
16:01:11 hi
16:01:12 hi
16:01:14 hi
16:01:34 @!
16:01:35 <_pewp_> jungleboyj (=゚ω゚)ノ
16:01:40 o/
16:01:41 o/
16:01:55 #link https://etherpad.openstack.org/p/cinder-train-meetings
16:02:08 hi
16:02:34 * smcginnis is here but not really
16:02:46 smcginnis: NOOOOO! You must be here. ;-)
16:03:07 * jungleboyj is lost without ShadowPTL
16:03:38 hah!
16:03:43 :-)
16:03:49 * jungleboyj defers to rosmaita instead
16:04:25 Ok. Quite a few things to get to today.
16:04:27 that won't do you any good
16:04:32 :-p
16:04:45 So, a reminder that Train Milestone 2 is this week.
16:05:01 #link https://releases.openstack.org/train/schedule.html
16:05:25 That means that I will start looking at CI runs to see if they are running Python 3.
16:05:28 this week == tomorrow
16:05:47 e0ne: ;)
16:05:48 e0ne: ++
16:06:59 Any questions about those two questions?
16:07:08 Two items...
16:07:32 Take that as a no.
16:07:46 Cinder Mid-Cycle Topics ... Please take a look at the etherpad.
16:07:58 #link https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning
16:08:20 Please add topics there so that we can make sure to have a productive mid-cycle.
16:09:32 #topic Spec: Leverage compression hardware accelerator
16:09:40 hi
16:10:11 thanks jungleboyj and rosmaita for giving +2
16:10:24 So, my comments have been addressed. I think we just need Eric to sign off.
16:10:24 eharney: ^^
16:10:52 i haven't caught up with the last round of discussion between LiangFang and Brian on this one
16:11:35 Ok. If you can take a look that would be good. rosmaita gave a +2
16:11:38 +2
16:11:56 it was software fallback if no accelerator, and a config option about whether to allow any compression or not
16:12:09 rosmaita: Good.
16:12:18 sounds good
16:12:47 LiangFang: Anything else to address?
16:13:10 rosmaita asked me about nova impact last week
16:13:41 zhengMa has implemented the code, and it seems there is no impact
16:14:01 that's good
16:14:14 he has successfully created a VM using container format 'compressed'
16:14:50 that's surprising, but ok
16:15:05 rosmaita: Surprising?
16:15:37 yeah, nova had to know to decompress the image before trying to use it
16:16:00 thought it might just fail with unsupported format or something
16:16:27 when cinder downloads the image, cinder knows the image container format
16:16:39 so cinder helps to decompress it
16:17:14 so what nova gets is a decompressed volume
16:17:17 ok, so it was a boot from volume VM
16:17:33 yes
16:17:44 we need to check what happens if you try to just boot from image the normal way with container_format == 'compressed'
16:17:57 What about when Nova doesn't use a Cinder volume?
16:18:08 what smcginnis said
16:18:30 because you know some user will try to use this the wrong way
16:19:13 rosmaita: ++
16:19:20 i am thinking nova will fail gracefully, we just want to verify that
16:19:38 so to be clear, we aren't expecting you to implement boot from compressed image in nova
16:19:42 oh, we have not verified this yet
16:19:48 just want to make sure nothing breaks badly
16:20:59 rosmaita: Any other concerns there?
16:21:07 no, that's all
16:21:24 Ok. So, I think we just need eharney to review.
16:21:33 it's not really a problem with the spec, just a courtesy check on behalf of nova
16:21:52 thanks
16:22:06 we will check as soon as possible
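(For readers following along: a minimal sketch of the decompress-on-download flow described above, assuming a gzip-compressed image and a software fallback when no accelerator is present. The helper names, the allow_compression flag, and the accelerator interface are illustrative assumptions, not the actual spec or Cinder code.)

```python
# Illustrative sketch only -- helper names, the allow_compression flag, and
# the accelerator interface are assumptions, not the real spec/Cinder code.
import gzip
import shutil


def decompress_image(src_path, dst_path, accelerator=None):
    """Decompress a downloaded image, preferring a hardware accelerator.

    Falls back to a pure-software (gzip) path when no accelerator is
    available, matching the "software fallback" behaviour discussed above.
    """
    if accelerator is not None:
        # Hypothetical accelerator object; not a real Cinder interface.
        accelerator.decompress(src_path, dst_path)
        return
    with gzip.open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)


def fetch_image(image_meta, download, dst_path, allow_compression=True,
                accelerator=None):
    """Download an image, decompressing it when Glance reports the
    'compressed' container format, so Nova only ever sees plain volume data.
    """
    if image_meta.get('container_format') != 'compressed':
        download(dst_path)
        return
    if not allow_compression:
        # Corresponds to the config option controlling whether any
        # compression support is allowed at all.
        raise ValueError('compressed images are disabled by configuration')
    tmp_path = dst_path + '.gz'   # illustrative temporary location
    download(tmp_path)
    decompress_image(tmp_path, dst_path, accelerator=accelerator)
```

(This mirrors why a boot-from-volume VM works without Nova changes: the data Nova attaches has already been decompressed by Cinder. Booting directly from a 'compressed' image, with no Cinder volume involved, is the path that still needs the graceful-failure check mentioned above.)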
16:22:11 #topic Status and Zuul v3 porting of the remaining legacy Zuul jobs
16:22:28 hi!
16:22:37 Hi.
16:22:56 do you want me to show what I found out so far, or should I add some notes to the etherpad and then we can discuss them?
16:23:16 Etherpad link?
16:23:39 to the etherpad of the meeting, I mean, unless you prefer a separate document
16:23:49 Go ahead and share.
16:26:38 sorry for the initial mess, I think it should be readable now on https://etherpad.openstack.org/p/cinder-train-meetings
16:26:56 Ok. Anything that you need to highlight?
16:27:47 first, whether there is anything I don't know and should consider :) like some important undocumented reasons behind design decisions from the past
16:28:55 second, there are a few open questions (namely, whether I can go forward with replacing the LIO jobs with their already native counterpart from cinder-tempest-plugin, and other small architectural questions)
16:29:20 (like whether it's fine to nuke the zeromq job from all existing branches)
16:29:44 I think so. As far as I understand, the ZMQ stuff is all dead.
16:29:48 Ok. Trying to understand all this. Didn't know about this effort.
16:29:59 replacing the LIO jobs should be fine as long as we end up with something that runs the same configuration and tests somewhere (LIO, Barbican, and maybe one other thing that's turned on in there that i forget)
16:30:43 eharney: and that's the point; if you check cinder-tempest-plugin-lvm-lio, it is basically doing that already
16:30:52 right
16:30:59 the blacklist is a bit different and it lacks the cinderlib tests, but those are easy to fix
16:31:01 jungleboyj: Infra has stated the support for those legacy jobs will be going away. Not sure on the timeframe, but we need to get updated to Zuul v3 native jobs as soon as we can.
16:31:15 smcginnis: Ah, I see.
16:31:58 smcginnis: thanks for this update!
16:32:04 also, the native jobs are easier to deal with; there is no more devstack-gate in between, and in my experience modifying them is easier
16:32:18 ++
16:32:28 tosky: +1
16:33:47 of course there are many questions to digest but we can talk about this anytime; I'm now hanging around on #openstack-cinder too, so feel free to ping me anytime I'm connected (and/or comment on the reviews)
16:34:02 tosky: Sounds good.
16:34:02 Thanks for looking at that, tosky.
16:34:08 smcginnis: ++
16:34:24 tosky: Is there anything else you need from us right now?
16:34:44 no, I guess I have a general "go on, let's figure out the details", so that's fine, thanks!
16:34:59 tosky: Ok great. Thank you for working on this.
16:35:36 Ok if we move on?
16:36:44 Take that as a yes.
16:36:47 yep
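(A hedged sketch of what a Zuul v3 native replacement for a legacy LIO job could look like, assuming it builds on the existing cinder-tempest-plugin-lvm-lio job mentioned above. The job name, required projects, and variables are illustrative guesses, not the final layout.)

```yaml
# Illustrative sketch only: everything except the parent job name is an
# assumption about how the native replacement might be wired up.
- job:
    name: cinder-tempest-lvm-lio-barbican        # hypothetical job name
    parent: cinder-tempest-plugin-lvm-lio        # existing native job
    description: |
      Native Zuul v3 counterpart of the legacy LIO job: LVM with the LIO
      target and Barbican enabled, plus the cinderlib tests noted above.
    required-projects:
      - opendev.org/openstack/barbican
      - opendev.org/openstack/cinderlib
    vars:
      devstack_plugins:
        barbican: https://opendev.org/openstack/barbican

# The legacy job entry in the project pipeline would then be swapped for
# the native one, e.g.:
- project:
    check:
      jobs:
        - cinder-tempest-lvm-lio-barbican
```

(Because there is no devstack-gate layer, adjusting the blacklist or devstack variables becomes a matter of editing these job vars directly, which is the "easier to modify" point made above.)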
16:37:06 #topic stable branches release update
16:37:15 rosmaita: All yours.
16:37:19 there was a discussion yesterday among most of the cinder stable-maint cores in #openstack-cinder about the holdup releasing cinder stable/rocky (and hence stable/queens)
16:37:25 #link http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2019-07-23.log.html#t2019-07-23T16:50:35
16:37:31 the tl;dr is we agreed to revert the patch declaring multiattach support for HPE MSA from stable/rocky
16:37:38 i restored mriedem's reversion patch
16:37:45 #link https://review.opendev.org/#/c/670086/
16:37:49 but the commit message is kind of hyperbolic, particularly the part about no 3rd party CI running on the dothill driver
16:37:54 (there is CI, it's just run on subclasses of the driver)
16:38:02 i don't know how much the commit message matters, TBH
16:38:08 but i do know that we (the cinder team) tend to be a bit un-conservative with respect to what we consider to be appropriate driver backports
16:38:15 and in fact, we're not rejecting the backport because it could be considered to be adding a feature
16:38:22 but because further testing has indicated that multiattach isn't working for that driver
16:38:29 like i said, i don't know if anyone pays attention to commit messages
16:38:36 but it might give us more flexibility in the future if it's more accurate in this case
16:38:55 (this is where i stop to see if anyone has a comment)
16:39:15 I think that driver code should follow the same policy as the rest of cinder: no feature backports
16:39:36 with some exceptions when we need to introduce some config option to fix some bug :(
16:40:46 i think the comment about the 3rd party CI was referring to whether the specific multiattach tests were being run correctly, which was a fair criticism.
16:41:16 ok, maybe i am being too sensitive
16:41:37 pots: Right.
16:41:52 afaik, we don't have good 3rd-party CI coverage for stable branches
16:42:04 it would be great if I'm wrong here
16:42:04 rosmaita: I think we would find most people haven't added the multi-attach tests.
16:42:10 Probably something we should check into.
16:42:23 they don't seem to be running on a lot of drivers
16:42:25 e0ne: There is also that.
16:42:37 We don't have that as a requirement though.
16:42:49 ok, i withdraw my comment about the commit message
16:42:55 :-)
16:43:11 but i will need a stable core to +2 the revert so i can update the release patch
16:43:26 rosmaita: Patch?
16:43:35 #link https://review.opendev.org/#/c/670086/
16:44:21 ok, that's all from me
16:44:44 Ok, yeah, the commit message isn't really accurate anymore here. I will update that and then we can get that patch in.
16:44:55 rosmaita: Make sense?
16:45:16 Makes sense to me.
16:45:25 ok
16:45:28 Okie dokie. Will do that after the meeting.
16:45:51 Ok. So now we can move on.
16:46:14 #link https://review.opendev.org/#/c/523659/21
16:46:29 A few comments have been addressed on the Infortrend driver patch.
16:46:31 * e0ne is waiting for geguileo's review
16:46:54 e0ne: on which patch?
16:47:06 I think it is OK; the remaining issue, if there is one, could be fixed with a follow-up.
16:47:14 geguileo: actually, you did it already for my spec. thanks!
16:47:30 e0ne: yeah, I just did that review XD
16:47:57 We had some discussion around the Seagate driver earlier this week:
16:48:08 #link https://review.opendev.org/#/c/671195/
16:48:33 I think that we can let this slip a bit as it is a rename and pots has other patches to backport first.
16:48:42 Does anyone have an issue with that?
16:48:50 Yeah, I don't see that as a new driver.
16:49:14 smcginnis: Ok. Good you agree there given I am kind-of close to that one. ;-)
16:49:45 I see that the MacroSan driver was added to the list.
16:49:48 It really is just a rebrand. DotHill is gone, it is now Seagate. It makes sense to get that updated to show that.
16:49:56 jungleboyj: yep, i added it
16:49:58 Cool.
16:50:06 whoami-rajat: So, it has a -2 from eharney
16:50:32 jungleboyj: the maintainer keeps querying about reviews almost every day and updates the patch regularly
16:50:40 jungleboyj: it's an old -2
16:51:00 I haven't looked. Have they gone through the new driver checklist and addressed everything? Is there CI reporting now?
16:51:11 Ok. I guess I had missed those pings.
16:51:12 Last I looked it was quite a way off.
16:51:40 Yeah, I don't see CI reporting.
16:51:49 smcginnis: they've had the CI reporting turned off for quite a while now, but it has been reporting on other patches; it needs to report on this one too.
16:52:21 smcginnis: seemingly, the driver checklist has been addressed (when I last checked; maybe I missed something)
16:52:25 If it's the deadline and there hasn't been CI reporting on the new driver and other patches for at least several days, that's concerning.
16:53:11 smcginnis: ++
16:53:28 smcginnis: yeah, it is
16:54:49 So, I am concerned about trying to get that one through.
16:55:24 Folks should review it and put specific issues/concerns on the review so they know what and why.
16:55:33 I also haven't followed that one... So, I defer to those who have looked at it/followed it.
16:56:44 Let me take a look at that driver and we can follow up in the channel after the meeting.
16:57:06 eharney says he fixed his volume rekey spec. I think I am good with getting that in.
16:57:53 #topic Final run-through of open patches for milestone-2
16:58:09 e0ne: geguileo has comments on your spec.
16:58:21 #link https://review.opendev.org/#/c/556529/
16:58:57 Can we get eyes on the encryption one and see if we can merge it? https://review.opendev.org/#/c/608663/
16:59:38 Not sure if there is more discussion on those specs.
16:59:49 Please review and respond to the comments.
17:00:10 Ok. That is all our time for today.
17:00:27 \o
17:00:30 Thanks for joining the meeting.
17:00:35 Thanks!
17:00:41 Talk to you all next week.
17:00:42 thanks!
17:00:48 #endmeeting