16:00:16 #startmeeting Cinder
16:00:19 Meeting started Wed Dec 13 16:00:16 2017 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:23 The meeting name has been set to 'cinder'
16:00:38 o/
16:00:44 hey
16:00:46 courtesy ping: jungleboyj DuncanT diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlon patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro
16:00:49 Hi
16:00:50 @!
16:00:50 <_pewp_> jungleboyj (。・д・)ノ゙
16:00:56 hi
16:00:57 heyump!
16:00:58 xyang1: Yay! Welcome!
16:01:04 hi
16:01:07 o/
16:01:08 :)
16:01:12 o/
16:01:24 felipemonteiro_: Welcome.
16:01:30 Agenda for today: https://etherpad.openstack.org/p/cinder-queens-meeting-agendas
16:02:00 ./o
16:02:03 .o/
16:02:15 bswartz: Kinda turned into a mutant for a second there.
16:02:25 bswartz: Ouch.
16:03:01 Ok, we have a good number of people here and have a few topics to cover today.
16:03:06 bswartz, dislocate your head?
16:03:26 #topic Announcements
16:03:40 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:03:47 I'm okay -- just typing too fast
16:04:13 We are working towards Queens-3 ... So, let's keep the reviews going and please keep getting reviews for your specs into that file.
16:04:24 I have closed down the new driver reviews.
16:04:40 Unfortunately had a few drivers that missed, but they weren't ready.
16:04:57 Wanted to let everyone know that we released Cinderclient 3.3.0 last week.
16:05:27 There were a number of missing microversions in the client and this release brings us up to date there so that we can keep tackling the multi-attach work.
16:05:51 In sad news we have had some cores stepping down.
16:05:56 :(
16:06:05 :(
16:06:09 I started a discussion due to the disconnect between the number of cores and actual activity.
16:06:24 Was wondering who would not be able to get back to Cinder.
16:06:38 dulek, scottda, we'll miss all of you
16:06:49 Unfortunately Michal Dulko and Scott D'Angelo needed to step back.
16:07:06 e0ne: ++1000
16:07:08 seems that DuncanT will step down too :(
16:07:11 DuncanT: ?
16:07:28 DuncanT: Yeah, he hadn't officially announced yet so I wasn't going to say anything.
16:07:40 Yup, sadly, I'm not able to do any reasonable number of reviews
16:07:53 DuncanT: :-(
16:07:56 DuncanT: You can do an unreasonable number if you want. :)
16:08:07 smcginnis: Hey, I like that idea.
16:08:13 DuncanT: You always found such good stuff.
16:08:16 DuncanT: :( I hope you'll find some time for specs reviews. they were extremely useful
16:08:25 e0ne: ++
16:08:31 smcginnis: Jay's email made it clear that reviews are needed by cores. It's fair enough but means I have to step down
16:08:45 e0ne: +1
16:09:17 I'll keep my eye on specs when I can for sure... if I ever come back to cinder I don't want to find it full of things I don't like ;-)
16:09:18 I think it is important that the number of cores we have accurately reflects activity. That, however, doesn't mean that I don'
16:09:29 DuncanT: :)
16:09:39 DuncanT: ;)
16:09:54 t greatly appreciate any input that people can give and obviously we can add people back in, in the future.
16:10:05 Another option is to keep more cores on cinder-specs.
16:10:07 jungleboyj: +2
16:10:35 jungleboyj: for both your messages above
16:10:45 Just checked, and cinder-spec-core includes cinder-core, but explicitly has DuncanT. You're a double core there.
16:10:52 cinder-specs is the only thing I have any input on these days, and would probably work for some of the cores who were avoiding +2s due to being out of touch with the code
16:11:10 Double core? Awesome! /me updates his CV
16:11:13 smcginnis: Cool.
16:11:19 DuncanT: Is hard core.
16:11:44 I would be fine with leaving DuncanT as specs core given the help you have been giving there.
16:11:57 jungleboyj: +1
16:12:15 ++
16:12:33 +1
16:12:34 DuncanT: How does that sound?
16:12:42 Sounds great to me. Cheers.
16:13:02 DuncanT: Cheers! Would appreciate if you can continue to help with specs.
16:13:03 DuncanT: hope to see you at PTG
16:13:32 e0ne: ++
16:13:42 e0ne: Definitely, it's only up the road
16:13:47 :)
16:14:08 Anyway, for those stepping down, thank you for letting us know where your work has taken you.
16:14:18 ++
16:14:20 Thank you for all of your previous contributions and time on Cinder!
16:14:42 And as said before, we can always bring cores back in if priorities change.
16:15:02 and we'll be happy once you return to Cinder
16:15:09 e0ne: +1
16:15:12 ++
16:15:42 I am still waiting to hear back from a few people. Will keep you all updated.
16:16:05 Any more questions/comments on this topic?
16:16:45 Ok, moving on.
16:17:10 #topic Potential bug in API Return Values
16:17:22 #link https://bugs.launchpad.net/cinder/+bug/1737609
16:17:23 Launchpad bug 1737609 in Cinder "Some APIs return None when missing permissions instead of raising 403" [Undecided,New]
16:17:28 felipemonteiro_:
16:17:58 felipemonteiro_: You still there?
16:18:01 Thanks, provided launchpad link which should go into all necessary detail. Just want others' opinion before starting on the work.
16:18:09 hey
16:18:14 I can summarize here too.
16:18:21 looks like we need to fix it
16:18:23 erlon: Welcome.
16:18:32 felipemonteiro_: Thank you for opening the bug.
16:18:32 felipemonteiro_: A quick summary would be good.
16:18:34 thanks
16:18:43 it's a bug
16:19:17 For two endpoints, policy enforcement is done with fatal=False; if it passes, a response body is returned.
16:19:18 tommylikehu: That was my feeling as well.
16:19:28 If it fails, None is returned. Usually the behavior is to raise 403.
16:19:51 And fatal=False is mostly reserved for adding additional attributes to a response body... as far as *I* can tell
16:21:04 So the problem is if policy does not allow it, we return None.
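[Editor's sketch of the fatal=False behavior described above. This is a minimal illustration, not Cinder's actual code: `authorize`, `PolicyNotAuthorized`, and the endpoint functions are hypothetical stand-ins for Cinder's policy-enforcement helpers.]

```python
class PolicyNotAuthorized(Exception):
    """Stand-in for the exception the API layer maps to HTTP 403."""


def authorize(allowed, fatal=True):
    # With fatal=True (the usual case), a failed check raises -> 403.
    # With fatal=False, the check merely reports the result.
    if not allowed:
        if fatal:
            raise PolicyNotAuthorized("policy does not allow this action")
        return False
    return True


def show_volume_buggy(allowed):
    # The buggy pattern from the bug report: with fatal=False, a failed
    # check falls through and the endpoint silently returns None.
    if authorize(allowed, fatal=False):
        return {"volume": {"id": "vol-1"}}
    # implicit return None


def show_volume_fixed(allowed):
    # The agreed fix: let the failed check raise, so the caller gets a 403
    # instead of an empty body.
    authorize(allowed)  # fatal=True raises on failure
    return {"volume": {"id": "vol-1"}}
```

As noted in the discussion, fatal=False is appropriate when a policy check only gates extra attributes in a response body, not when it gates the whole response.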
16:21:14 I agree it's a bug. We should return {}
16:21:27 This behavior is currently caught by Tempest since the rules are admin_or_owner rather than admin_only. But it is detected when you override the defaults to admin_only
16:21:35 currently _not_ caught
16:22:19 smcginnis: Shouldn't we return a 403?
16:23:42 jungleboyj: Oh, right. Still need to really look at it, but yes, if they don't have permission, 403 would be accurate.
16:23:52 smcginnis: Ok, good.
16:24:44 felipemonteiro_: You mean not caught by tempest.
16:25:02 Yes, sorry. not caught
16:25:17 felipemonteiro_: Ok good.
16:25:31 Stacktrace in launchpad link was generated in my local env after changing the rules to admin_only
16:25:57 felipemonteiro_: Ok.
16:26:13 So, I think the consensus is that it should return a 403 and that this is a valid bug.
16:26:23 jungleboyj: +1
16:26:28 jungleboyj: +1
16:26:42 +1
16:26:42 Only two questions I had were: does the fix need a new microversion
16:26:43 Cool. Do we have someone to assign the bug to?
16:26:51 And: I can pick it up if need be, but don
16:27:01 't mind letting someone else do it either
16:27:13 felipemonteiro_: No, shouldn't need a microversion since it is a bug.
16:27:23 smcginnis: *phew* Good answer.
16:27:26 ;)
16:27:47 We do need to make sure that the documentation matches.
16:28:26 felipemonteiro_: If you are able to pick this up that would be great.
16:28:51 jungleboyj: np, I'll assign it to myself
16:29:20 felipemonteiro_: Already done.
16:29:21 :-)
16:29:27 thanks :)
16:29:36 #action jungleboyj to update bug with notes from today's discussion.
16:29:47 #action felipemonteiro_ to work on fixing the bug.
16:29:59 felipemonteiro_: Thank you for bringing this up and fixing it!
16:30:08 Glad to have your help.
16:30:24 np, we each have to chip in from time to time
16:30:38 Indeed. A community.
16:31:01 Ok, so I think we are closed on that one.
16:31:10 #topic NVMET target
16:31:15 #link https://review.openstack.org/#/c/505556/10
16:31:25 e0ne: ...
16:31:28 thanks
16:31:50 so, moshele is working on an NVMeT target driver and CI
16:32:09 e0ne: What is the state of the CI?
16:32:48 smcginnis: I saw it reported on some patches for os-brick
16:33:11 smcginnis: honestly, I'm not the right contact person to ask about CI
16:33:44 as I know, Mellanox team wants to get CI for it too
16:33:53 So I do want this feature. But we've previously stated something like this is considered a new driver, and it has definitely not met the deadline to be included in Queens.
16:34:24 smcginnis: Yeah. I wasn't even aware this was coming along.
16:34:26 smcginnis: fair enough
16:34:48 e0ne: Can we get this on the 'new driver' list?
16:35:02 #link: https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:35:08 jungleboyj: sure, I'll add it there
16:35:44 and I still have some concerns about the implementation
16:35:56 e0ne: Thank you.
16:36:08 the patch proposes 2 new config options: target_type and nvmeof_helper
16:36:19 #action e0ne to add the NVMeT support as a new driver in the spec-review-tracking etherpad.
16:36:43 what do you think about changing the current 'iscsi_helper' option to 'target_helper' or 'target_driver'?
16:37:08 of course, following all deprecation policies
16:37:32 e0ne: Wow, that would be quite the impact.
16:38:02 e0ne: We could probably get those config deprecations/changes in this cycle, then pick up the rest of it in Rocky.
16:38:11 jungleboyj: I was afraid of it :(
16:38:33 smcginnis: Yeah, if we are going to go that way, that would make sense.
16:38:51 smcginnis: we can start work on it early in Rocky to get more time to fix things if something gets broken
16:39:21 Works for me.
16:39:38 e0ne: I think that sounds like the best plan. We should be stabilizing and it sounds like this could be destabilizing.
16:39:40 @action e0ne to propose a patch with deprecation to see what will be broken. e.g. grenade
16:39:45 #action e0ne to propose a patch with deprecation to see what will be broken. e.g. grenade
16:40:08 jungleboyj: thanks
16:41:09 #action driver to be targeted to merge early in Rocky
16:41:41 In the interest of continuing to reduce option sprawl it does make sense to rename the option.
16:41:58 ++
16:42:15 Though I don't know where that work is at. I am guessing we will need someone to pick that work up from diablo_rojo
16:42:41 e0ne: Anything more on that topic?
16:42:54 jungleboyj: no, that's all from my side
16:43:10 thank you for the feedback
16:43:15 e0ne: Great. Thank you for being willing to let that go to Rocky.
16:43:37 I do think it will be good to get that support in. I know we are interested in it.
16:43:42 jungleboyj: I don't have an option. it's the cinder rules and deadlines :)
16:43:47 :)
16:43:49 :-)
16:44:20 #open discussion.
16:44:29 #topic open discussion
16:44:32 I want to make sure everyone is aware of this:
16:44:34 #link http://lists.openstack.org/pipermail/openstack-dev/2017-December/125473.html
16:44:55 * e0ne didn't read it yet
16:45:12 * jungleboyj hasn't either
16:45:12 We've been having discussions with a few vendors and users, but want to get this out there into the broader community.
16:46:16 I'll take the silence to mean that everyone is reading it. :)
16:46:35 Please feel free to respond to that thread or hit me up with any questions or concerns.
16:47:20 including summit?
16:47:31 The biggest one has been devs not wanting to lose the chance to get together as often. So just want to point out that this does bring back the option for midcycles.
16:47:31 * jungleboyj is reading.
16:47:40 And they would actually be encouraged for some teams.
16:47:55 tommylikehu: Summits and the Forum would still be twice yearly. At least for now.
16:48:02 Cool. So next release after queens is March 2019?
16:48:19 Swanson: If we can get buy-in on this, yes.
16:48:43 Well, I should kind of correct that actually.
16:48:55 Next *coordinated* release would be March 2019.
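[Editor's sketch of the iscsi_helper -> target_helper rename discussed under the NVMET topic above. In real Cinder this would go through oslo.config (a StrOpt registered with deprecated_name); the `get_opt` helper and config dicts here are purely illustrative stand-ins for that pattern.]

```python
import warnings


def get_opt(conf, new_name, deprecated_name, default=None):
    """Prefer the new option name; fall back to the deprecated one.

    Reading the old name still works during the deprecation window,
    but emits a DeprecationWarning so operators know to migrate.
    """
    if new_name in conf:
        return conf[new_name]
    if deprecated_name in conf:
        warnings.warn(
            "Option '%s' is deprecated; use '%s'" % (deprecated_name, new_name),
            DeprecationWarning,
        )
        return conf[deprecated_name]
    return default


# An old deployment keeps working (with a warning) during the transition:
old_conf = {"iscsi_helper": "tgtadm"}
# A migrated deployment uses the new name directly:
new_conf = {"target_helper": "lioadm"}
```

This is why the change can land as a deprecation in one cycle and complete in Rocky: both spellings resolve to the same setting until the old one is removed.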
16:49:08 But we would actually be encouraging teams to release more frequently.
16:49:20 Just not necessarily as a coordinated release across all projects.
16:49:40 smcginnis, Nice. Should mean that I have 11 1/2 months of what they want me on before frantically powering hardware back on.
16:49:46 Lack of a coordinated release is likely to shake up downstreams, but that is likely not a bad thing
16:49:47 So March 2017 will be Cinder 13.0.0 I think. We could potentially have a 13.1.0 release a few months later that adds some drivers.
16:50:18 You mean March 2018?
16:50:21 DuncanT: Agreed. But we have spoken to several of the big distros, and so far at least most are happy with this change.
16:50:29 jungleboyj: Bah! Yep.
16:50:35 :-)
16:51:03 So, this really starts a process that has been in the making where the projects are more under their own control.
16:51:20 jungleboyj: Exactly! At least that's one big aspect of it.
16:51:20 swift has been doing this for a while and they seem to do fine.
16:51:49 Users will be happy with fewer upgrades to do
16:51:50 I would miss having two PTGs, but we can then address that as a team.
16:51:54 xyang1: ++
16:51:59 So we would at least have once-a-year sync-up points for all the projects, but in the meantime, projects have more control over what they want to do.
16:52:02 We are growing up.
16:52:10 I like this proposal from the vendor's side. need to think a bit more from the developer's perspective
16:52:14 I'm always a fan of looser coupling between projects, though it makes nova/keystone deprecations more complex
16:52:36 jungleboyj: We just need someone to host a midcycle in Ft Collins in Aug/Sept now.
16:52:38 DuncanT: Yeah, I am worried about the impact this will have on integration with Nova.
16:53:03 smcginnis: :-) Well, I am sure I can get Lenovo to host if we can't get anyone in Ft Collins
16:53:15 Wait, where is jgriffith !?!
16:53:28 * jungleboyj takes the crew to his farm
16:53:31 jgriffith's barn should be big enough for us all.
16:53:38 :)
16:53:39 smcginnis: ++
16:53:53 He isn't here to fight ...
16:54:52 smcginnis: Thank you for bringing this up. I thought it was kind of silly at first but when it is put down this way it seems like it could be a good thing.
16:55:00 5 minute warning.
16:55:12 jungleboyj: Yeah, the more we talked about it, the more it's made sense to me.
16:55:25 Cool.
16:55:33 Anything else to cover?
16:55:43 e0ne: What was your comment about Mellanox?
16:55:54 They want the NVMeT driver in as well?
16:56:23 jungleboyj: yes. actually, they are working on the nvme target driver now
16:56:40 e0ne: Ok, so you are just supporting that effort?
16:56:41 They make NVMe adaptors, so yes, they want it in. :)
16:56:47 smcginnis: Makes sense.
16:56:54 jungleboyj: unfortunately, moshele didn't manage to attend this meeting and asked me to help him
16:56:59 e0ne: Is that moshele?
16:57:10 e0ne: Ok, where is he based?
16:57:13 or she
16:57:16 I have zero concerns about integrations including Nova
16:57:22 jungleboyj: moshele is from the Mellanox Tel Aviv team
16:57:27 irrelevant
16:57:53 Ok. Hopefully moshele can attend the meeting more as we get closer to integrating.
16:57:59 jungleboyj: a few years ago, I started work on an NVMe PoC for the Barcelona summit, so I just want to finish this :)
16:58:03 jgriffith: You mean as far as NVMe support on the nova side?
16:58:17 jgriffith: Good. And about hosting the mid-cycle? ;-)
16:58:32 no, sorry.. the looser coupling/scheduling thing
16:58:44 :)
16:58:54 smcginnis: here is a patch to nova https://review.openstack.org/#/c/482640/
16:58:56 jgriffith: With how long it takes to get things in, it will probably be no different.
16:59:11 Theoretically, now that we have microversions it shouldn't really matter about coordinated releases for some of those kinds of things.
16:59:17 Either the support is there or not.
16:59:23 jungleboyj: that and the fact that things usually are driven in the opposite direction
16:59:32 smcginnis: and the os-brick part https://review.openstack.org/#/c/482642/
16:59:34 Time check.
16:59:35 and hey... that's why we have mv's :)
16:59:36 e0ne: Thanks
16:59:37 You'll probably have to do something different for adding drivers.
16:59:41 jgriffith: :)
16:59:44 which I will propose again we abandon :)
17:00:03 * jgriffith ducks
17:00:12 :-) Ok. That is time. I am stopping this while I can.
17:00:16 Thanks for a good meeting all.
17:00:18 haha
17:00:23 toodles
17:00:26 Swanson: We will need to discuss that some more. But we could theoretically just include drivers whenever they are ready. If that's what we want.
17:00:28 Thanks for bringing up the new schedule Sean. Good stuff.
17:00:37 smcginnis: ++
17:00:45 #endmeeting