16:00:03 #startmeeting Cinder
16:00:04 Meeting started Wed Jun 7 16:00:03 2017 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 The meeting name has been set to 'cinder'
16:00:12 Hi
16:00:18 hi!
16:00:29 hi
16:00:30 ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino karlamrhein diablo_rojo jay.xu jgregor lhx_ baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao,tommylikehu mdovgal ildikov wxy
16:00:31 hi
16:00:36 viks ketonne abishop sivn
16:00:37 hi
16:00:38 o/
16:00:42 @!
16:00:42 hemna |ʘ‿ʘ)╯
16:00:43 hi
16:00:50 <_alastor_> o/
16:00:56 hemna: That bot is calm today. :)
16:01:13 hi
16:01:30 @!
16:01:30 jungleboyj (。Ő▽Ő。)ノ゙
16:01:38 o/
16:01:50 smcginnis: It can just do a friendly wave now.
16:02:10 jungleboyj: A kinder, gentler pewpbot. :)
16:02:29 #topic Announcements
16:02:34 smcginnis: pewpbot doesn't have to always be angry.
16:02:56 Our P-2 deadline is today.
16:03:10 I will probably request that cut later in the day.
16:03:21 * diablo_rojo_phon sneaks in
16:03:23 hi
16:03:32 oh
16:03:35 Last chance for drivers and specs. Anything that didn't make it can get in line for Queens.
16:03:40 smcginnis: do we have anything to get merged before P-2?
16:03:44 .o/
16:03:59 e0ne: I think all the remaining drivers still had issues to resolve.
16:04:03 Either code or CI.
16:04:43 I haven't looked too closely at specs in the last few days, but I have the feeling we've already bitten off more than we can chew for this cycle with new things.
16:05:19 * e0ne forgot about the specs deadline :(
16:05:36 #link https://releases.openstack.org/pike/schedule.html Pike release schedule
16:06:00 So we are 12 weeks out from release.
16:06:07 Still some good time to get things done.
16:06:25 But let's not forget about it until it's too late.
16:06:26 Could fit in an entire ocata2 in that time.
16:06:33 it's only 2 months before feature freeze
16:06:39 Would be great not to have a crunch at the end, but I know that's inevitable.
16:06:51 e0ne: Yeah, that'll go quick.
16:07:25 #topic targetd driver for remote LVM management
16:07:30 e0ne: All yours.
16:07:38 smcginnis: thanks
16:07:46 #link https://github.com/open-iscsi/targetd
16:08:04 I just want to know if the community is interested in this ^^
16:08:32 I played with this last week and got a working target driver for create/delete/attach/detach
16:08:42 i think it's a good idea, i was long ago attempting to write a driver for this as well
16:08:47 it's remote LVM management
16:08:52 I like it.
16:09:06 so we can use the LVM driver as any other
16:09:30 i don't like the idea of trying to move from the LVM driver to this
16:09:35 but having both, sure
16:09:40 e0ne, nice. I could have used that for my rpi demo :)
16:09:49 hemna: +1
16:09:55 Yeah, seems like a cool idea.
16:09:57 eharney: sure, it's just adding a new (target) driver
16:10:00 I would hope this would be a completely separate driver
16:10:04 target driver?
16:10:09 it should be a volume driver...
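(A minimal sketch of the separate-driver idea under discussion: targetd exposes a JSON-RPC API over HTTP, so a standalone Cinder volume driver could manage a remote LVM volume group through it. The method names, default port, and URL path below are taken from targetd's API documentation and should be treated as assumptions to verify; the client class and the driver mapping are hypothetical illustrations, not e0ne's actual patch.)

```python
# Hypothetical sketch only -- not the proposed patch. targetd method names
# (vol_create, export_create, ...), port 18700, and the /targetrpc path come
# from targetd's API docs; verify them against your targetd version.
import requests


class TargetdClient:
    """Tiny JSON-RPC 2.0 client for a remote targetd daemon."""

    def __init__(self, host, user="admin", password="secret", port=18700):
        self.url = "http://%s:%d/targetrpc" % (host, port)
        self.auth = (user, password)  # targetd uses HTTP basic auth
        self._id = 0

    def call(self, method, **params):
        self._id += 1
        body = {"jsonrpc": "2.0", "method": method,
                "params": params, "id": self._id}
        resp = requests.post(self.url, json=body, auth=self.auth, timeout=30)
        resp.raise_for_status()
        result = resp.json()
        if result.get("error"):
            raise RuntimeError(result["error"])
        return result.get("result")


# How a separate volume driver might map Cinder operations onto targetd:
client = TargetdClient("targetd-host.example.com")
# create_volume()          -> vol_create(pool, name, size in bytes)
client.call("vol_create", pool="vg-targetd", name="volume-1234",
            size=1 * 1024 ** 3)
# initialize_connection()  -> export_create(pool, vol, initiator_wwn, lun)
client.call("export_create", pool="vg-targetd", vol="volume-1234",
            initiator_wwn="iqn.1994-05.com.example:client", lun=1)
# terminate_connection()/delete_volume() -> export_destroy + vol_destroy
client.call("export_destroy", pool="vg-targetd", vol="volume-1234",
            initiator_wwn="iqn.1994-05.com.example:client")
client.call("vol_destroy", pool="vg-targetd", name="volume-1234")
```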
16:10:16 eharney, +1
16:10:31 eharney, hemna: it makes sense
16:10:49 I'll propose a patch early next week
16:10:52 doing it as a target driver implies it would still use local lvm, which i think is not the goal unless i'm misinterpreting this
16:11:12 Seems like it should be handled as a separate driver.
16:11:30 eharney: target driver is not enough for snapshot-related operations
16:11:55 So queens and we would need an associated gate job running to test it, just to state that explicitly.
16:12:08 it could be easier to implement a separate driver
16:12:17 it should be a separate driver
16:12:23 smcginnis: sure, it's too late for P
16:12:27 eharney: ok
16:12:45 as far as gate jobs, does targetd function on ubuntu?
16:12:55 eharney: yes
16:12:59 good to know
16:13:09 eharney: I tried it on my ubuntu-based devstack
16:13:25 thanks everybody for the feedback. That's everything from me if there are no more questions
16:13:43 e0ne: Great, thanks for looking into it.
16:13:51 smcginnis: np
16:13:52 I think that could address a few people's needs.
16:14:04 i'll be happy to review
16:14:36 That was all on the agenda. Anything else to bring up? Or do we get to get out of here early for once?
16:14:39 eharney: thanks! I'll let you know when it will be ready for review
16:15:03 smcginnis,
16:15:12 * jungleboyj brings up replication so we can rathole for 45 minutes.
16:15:14 hemna:
16:15:19 :)
16:15:48 I tried +A'ing a few of the mark-drivers-unsupported patches
16:15:57 there was one that got a response from the maintainer
16:16:04 he said he was working on the CI
16:16:04 hemna: I saw that.
16:16:10 #topic Open discussion
16:16:16 jungleboyj, Still think replication v3 should be no replication.
16:16:17 so I didn't issue a recheck on it, as it would go through
16:16:24 hemna: Probably OK to wait a few days for that since they appear to be so close.
16:16:25 Swanson, +A
16:16:27 smcginnis, hemna: the same for the virtuozzo driver
16:16:29 Swanson: +1
16:16:38 lets remove replication.
16:16:48 I believe we can mark drivers as supported again once the CI issues are fixed
16:16:54 * jungleboyj shakes my head.
16:17:11 hemna, from everywhere. good idea
16:17:12 smcginnis, https://review.openstack.org/#/c/463029/
16:17:13 that one
16:17:14 e0ne: Yes, as soon as we get good responses we can either abandon or revert those.
16:17:45 smcginnis: +1. that's what I replied in IRC
16:17:57 hemna: I'm fine if that really is just over the weekend. But if there's no movement there by late Monday I will do the recheck if no one else does.
16:17:59 hemna, let custom extra specs handle it for each driver and cinder can wash its hands of it.
16:18:09 hemna: here is one more https://review.openstack.org/#/c/463030/
16:18:21 smcginnis, ok cool. I just wanted to raise visibility on it, and that's why I didn't recheck.
16:18:27 Swanson: jgriffith must not be here or I think he would be blowing up. :)
16:18:39 smcginnis, I know, right?
16:18:42 hemna: Cool, thanks.
16:18:55 smcginnis, I doubt it. I think he'd be on the bandwagon :)
16:18:57 on CI... we currently have https://review.openstack.org/#/c/471352/ pending to fix the Ceph job
16:19:13 also ci related, one thing we may want to consider is that the upstream openstackci stuff is pretty broken for anyone new coming in, doesn't look like it's really being maintained
16:19:16 hemna: I think he'd blow up because that was what he was saying all along. :)
16:19:21 hrmm "I promise that this cycle our CI will start reporting again before summer ends."
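(For reference, a sketch of the mechanism behind the mark-drivers-unsupported patches being +A'd above, such as the reviews hemna and e0ne linked. The driver name is hypothetical; the SUPPORTED class attribute and the enable_unsupported_driver backend option reflect Cinder's unsupported-driver convention, but verify the exact spelling against your release.)

```python
# Hypothetical driver; illustrates Cinder's unsupported-driver convention.
from cinder.volume import driver


class FooBarISCSIDriver(driver.ISCSIDriver):
    # A "mark driver unsupported" patch flips this to False when the vendor's
    # third-party CI stops reporting; it gets reverted (or the patch
    # abandoned) once CI is healthy again, as discussed above.
    SUPPORTED = False

# Operators who still depend on such a driver can opt in per backend
# in cinder.conf (assuming the option exists in their release):
#
#   [foobar-iscsi]
#   volume_driver = cinder.volume.drivers.foobar.FooBarISCSIDriver
#   enable_unsupported_driver = true
```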
16:19:23 somewhat affects us since we have hard requirements for one
16:19:26 that's like months from now.
16:19:45 hemna: Yeah, wrong answer
16:20:22 eharney: Would be nice to have ceph passing again. Since apparently everyone in the user survey uses it.
16:20:35 * smcginnis would really, really love to see the NFS job passing again too
16:20:49 smcginnis: we're reaching out to find some folks on the tempest side now -- we have a regular problem of tempest landing changes which break the Ceph job
16:20:51 patrickeast, the CI projects aren't being maintained?
16:21:02 smcginnis: and we need to find a way to address it in that project
16:21:10 eharney: +1
16:21:11 hemna: the one for like jenkins jenkins is, but not the 3rd party ci portion of it
16:21:15 eharney: Seems to be happening more and more.
16:21:16 huh
16:21:28 how are those tempest patches landing in the first place then?
16:21:32 hemna: like these directions would 100% not work https://github.com/openstack-infra/puppet-openstackci/blob/master/doc/source/third_party_ci.rst
16:21:34 eharney: What is broken now?
16:21:37 hemna: Ceph job on tempest is non-voting and it gets ignored.
16:21:39 patrickeast: Aren't there still regular weekly meetings for third party CI?
16:21:47 :(
16:21:58 smcginnis: not sure, i haven't been involved with the 3rd party ci stuff much the last few months :(
16:22:05 eharney, that seems like a major problem since ceph is the most widely used backend.
16:22:18 smcginnis: Yes, there is still a 3rd Party CI group that meets.
16:22:22 * smcginnis is just going to casually drop this right here: https://github.com/stmcginnis/openstack-ci
16:22:32 I would like to have the ceph job voting once it's fixed
16:22:33 my current hope is to turn off broken tests and try to get Ceph back to voting status in Cinder, Nova, Tempest, etc, so this stops happening
16:22:34 hemna: right
16:22:45 sadness
16:22:45 smcginnis: yea thats probably a better option at this point
16:22:48 currently things break quicker than they get fixed
16:22:54 eharney: +1
16:23:03 eharney: it's a good plan!
16:23:20 @!r
16:23:20 hemna (ノಠ益ಠ)ノ彡┻━┻
16:23:21 eharney: +1
16:23:26 Out of your mind to use infra's stuff. Yours works great (still).
16:23:34 Uh uh, pewpbot has gotten angry again.
16:23:41 Swanson: ;)
16:24:16 Nothing like 3rd Party CI to extend a meeting.
16:24:33 jungleboyj: No kidding.
16:24:42 jungleboyj, we can go back and rathole on removing replication.....
16:25:01 * jungleboyj shuts up
16:25:18 hemna: I guess, since we have time - I've been asking folks at ops meetups and things, and have yet to find anyone that uses or wants replication. :/
16:25:29 * diablo_rojo_phon imagines a cinder block puking cheesecake
16:25:32 not surprising really
16:25:32 At least via cinder
16:25:42 diablo_rojo_phon: hah!
16:25:44 smcginnis, Exactly! v3, pull it!
16:25:47 <_alastor_> On the subject of 3rd-party CI, I've recently written a backup driver, do we require CI or anything beyond unit tests?
16:25:59 smcginnis: i've got one that is interested... the biggest problem is all the "big" deployments are all like on kilo
16:26:07 so they don't have it as a feature yet
16:26:15 I think karbor had talked about using it at one point, but IIRC there was some technical hurdle to doing that.
16:26:18 _alastor_: not yet. we haven't agreed on CI for backup drivers yet
16:26:22 does the ceph driver support replication (or does that even make sense?) ?
16:26:35 <_alastor_> e0ne: Thanks. At least it makes my life easy for now :)
16:26:40 patrickeast, do they know we only cover meteor strike of the primary?
16:26:44 hemna: it does
16:26:45 if it doesn't, that might be why nobody is asking for it/using it
16:26:52 patrickeast: Interested in it being an official Cinder API? Or do they just have a need that could easily be addressed by extra specs?
16:26:54 Swanson: no, thats the problem : /
16:26:55 huh, I stand corrected then. dang
16:27:23 smcginnis: good point, i suspect they wouldn't really care with the right marketing pitch
16:27:24 smcginnis: a customer asked me about it after our presentation in Boston. they want to be able to protect workloads. so group replication would be handy for that
16:28:45 xyang2: That could be more useful than what we have now.
16:28:50 * lbragstad sneaks in late
16:29:08 I think we are barely 1/4 a$$ed on DR with this stuff and it is hard to explain all the limitations.
16:29:15 _alastor_: We had wanted to have backup driver CI, but last I looked there's only like two backup tempest tests, so it wasn't that interesting.
16:29:45 lbragstad: I suppose you want to talk about policy, huh? :)
16:29:59 Swanson, yah I believe we will always be chasing a ghost with replication and not provide something that's usable for real world scenarios.
16:30:09 smcginnis: only if there is a few minutes in open discussion :)
16:30:29 lbragstad: We've just been kind of rambling for the last several minutes, so sure.
16:30:30 s/is/are/
16:30:38 lbragstad, we have at least 30)
16:30:51 smcginnis, _alastor_: we covered all backup APIs via tempest since the last conversation, AFAIK
16:30:51 sounds like i get to ramble!
16:31:12 e0ne: Oh good
16:31:42 smcginnis: I need to check if all patches were merged to tempest
16:33:03 so - quick update on the policy efforts
16:33:21 i've attempted to document the current state of things and how we can make them better
16:34:06 my goal was to generate discussions that result in different solutions to the problems at hand
16:34:16 (which i think has happened)
16:34:19 smcginnis: https://github.com/openstack/cinder/blob/master/cinder/tests/tempest/api/volume/test_volume_backup.py
16:34:46 it looks like we're getting close to an optimal solution
16:35:19 lbragstad: Great! I'm hoping to pick up my policy-in-code spec for queens.
16:35:19 in the interest of cross-project open-ness, it'd be awesome to get some feedback from the cinder folks on the approach
16:35:20 smcginnis: and there are some tests in a tempest repo
16:35:28 smcginnis: ++ sweet!
16:35:57 #link https://review.openstack.org/#/c/460344/ attempts to document where we are with policy today
16:36:08 e0ne: Cool, thanks. Still not a ton, but it's not a lot to cover I guess.
16:36:18 #link https://review.openstack.org/#/c/462733/ is an attempt to generate discussion around improving security with policy
16:36:29 #link https://review.openstack.org/#/c/464763/ is a specification for global roles
16:36:34 lbragstad: Nice!
16:36:42 ^ that last one will eventually impact cinder to some extent
16:37:32 i've tried to highlight how projects will be impacted through examples in each of those documents
16:37:45 if you have specific questions though, i'd be happy to field them here
16:38:25 lbragstad: Cool, I'll have to read through those to see what kind of questions come up.
16:38:34 lbragstad: The policy meeting overlaps with this one, right?
16:38:43 smcginnis: yes, unfortunately
16:38:49 lbragstad: Thanks for putting that all together.
16:39:00 we ended early today so i was able to swing by
16:39:03 jungleboyj: o/
16:39:06 jungleboyj: anytime
16:39:16 lbragstad: Maybe one of these times where we have a short agenda we can get it on the list to discuss some more.
16:39:47 since policy is scattered across all the projects - my goal with those first two was to level set to try and get everyone on the same page
16:40:08 lbragstad: Good goal, IMO.
16:40:15 smcginnis: yeah - that would be a good idea
16:40:51 i'm hoping the work we have staged for queens and rocky will help projects achieve capability APIs
16:41:11 in a way that makes sense
16:41:25 i thought i remember cinder having an interest int hat
16:41:31 in that*
16:41:33 Yep, definitely
16:41:54 awesome
16:42:23 OK, I'll try to follow up on those. Anyone have anything else to discuss today?
16:42:42 smcginnis: awesome - thanks for the time!
16:42:57 lbragstad: Thanks for working on that. Glad to see someone pushing it forward.
16:43:19 absolutely - ping me anytime if you, or anyone else, has questions
16:43:34 Thanks!
16:43:50 OK, I guess we're done for today. Thank you everyone.
16:43:56 Thanks!
16:43:58 #endmeeting
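(Postscript for readers: the policy-in-code work smcginnis mentions picking up for Queens amounts to registering policy defaults with oslo.policy inside each service, so policy files only need to carry operator overrides. A minimal sketch follows; the rule name and check string are illustrative, not Cinder's actual defaults.)

```python
# Illustrative rule only -- not Cinder's actual default policy.
from oslo_config import cfg
from oslo_policy import policy

rules = [
    policy.DocumentedRuleDefault(
        name='volume:create',
        check_str='rule:admin_or_owner',
        description='Create a volume.',
        operations=[{'path': '/v3/{project_id}/volumes', 'method': 'POST'}],
    ),
]

# With defaults registered in code, the policy file becomes optional and
# oslopolicy-sample-generator can emit a fully documented sample.
enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults(rules)
```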