16:02:18 #startmeeting Cinder
16:02:19 Meeting started Wed Jan 4 16:02:18 2017 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:24 The meeting name has been set to 'cinder'
16:02:26 There is the man!
16:02:33 hi
16:02:34 hi all
16:02:37 hi
16:02:38 hi all
16:02:40 Sorry, I need a door on my cubicle. :)
16:02:44 hi
16:02:55 Or just work from home on Wednesdays...
16:02:58 #link https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:03:03 o/
16:03:08 hi
16:03:11 Hello :)
16:03:14 smcginnis: Working from home is the way to go.
16:03:17 e0ne: Doesn't do much good to capture that in the logs since it's changed every week. :)
16:03:18 hi
16:03:32 jungleboyj: Especially when you have a ridiculously large monitor, right?
16:03:41 hi
16:03:43 smcginnis: ;-)
16:03:48 #topic Announcements
16:03:48 maybe two
16:03:57 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:04:01 hi
16:04:16 I believe scottda has done more testing of the A/A patches. Would be nice to get those through.
16:04:41 yeah, I crashed my Devstack and need to retest.
16:04:45 #info Need to register for the PTG if you have not already done so
16:04:50 hopefully not caused by A/A HA :)
16:04:58 #link http://www.openstack.org/ptg PTG info and registration
16:05:02 scottda: Hah
16:05:14 Please get registered for the PTG if you can.
16:05:21 We need participation to make this useful.
16:05:24 smcginnis: wish so
16:05:30 hi
16:05:35 tommylikehu_: Long trip for you. :)
16:05:36 Finally getting to a good number of cinder folks.
16:05:41 tommylikehu_: Would be great if you could make it though.
16:05:52 diablo_rojo_phon: Oh good, do you know the numbers off hand?
16:05:55 geguileo: How's the rebasing of the A/A stuff? You need any help there?
16:06:02 smcginnis: thanks, I would like to have a try
16:06:44 #info Summit call for presentations is open
16:06:51 #link https://www.openstack.org/summit/boston-2017/call-for-presentations/ Summit CFP
16:07:09 If you have any ideas for Summit topics, please submit them.
16:07:28 #info Need to start planning for the PTG
16:07:35 #link https://etherpad.openstack.org/p/ATL-cinder-ptg-planning PTG topic planning
16:07:57 smcginnis: I put a placeholder in there for the cinder-nova meeting...
16:07:59 smcginnis: not offhand but I can check the spreadsheet later and let you know.
16:08:13 smcginnis: Not sure about when that needs to happen, how to get a big room, etc.
16:08:23 erlon here?
16:08:33 I think there's some cross-project designated day or something?
16:09:03 scottda: no designated day.
16:09:08 * patrickeast sneaks in late
16:09:21 scottda: From what I understand, there will be some space for ad hoc groups.
16:09:24 diablo_rojo_phon: ok, cool. What about a big room?
16:09:36 scottda: We will have our dedicated rooms Wed-Fri
16:09:53 scottda: http://lists.openstack.org/pipermail/openstack-dev/2017-January/109608.html
16:09:54 smcginnis: Well, we might have a smaller cinder-nova api meeting, that might work.
16:09:59 So any and all PTG ideas, please add to that etherpad. We'll sort it out once we get closer.
16:10:04 scottda: here's ttx's answer on that.
16:10:05 not sure about how many from both teams are interested.
16:10:29 scottda: yeah the rooms are kind of going to be scaled like how they were for the design summits.
16:10:30 scottda: Yeah, probably a small enough group that actually cares.
16:10:37 Small announcement from me - since today Cinder's listed here: https://governance.openstack.org/tc/reference/tags/assert_supports-rolling-upgrade.html :)
16:10:53 dulek: +1 Nice to see that.
16:11:14 dulek: thx
16:11:20 #topic APIs for encryption types
16:11:29 Not sure who added this one, no nick listed
16:11:46 maybe Steve Martinelli?
16:12:00 stevemar: Was this from you? ^^
16:12:08 #link https://bugs.launchpad.net/cinder/+bug/1562043
16:12:09 Launchpad bug 1562043 in Cinder "[api-ref]API doc reference missing "volume type encryption create"" [Medium,In progress] - Assigned to wangxiyuan (wangxiyuan)
16:12:09 o/
16:12:16 oh yeah
16:12:22 that was me, a while ago
16:12:39 i was trying to find them and got frustrated that they are not there
16:12:40 Been two weeks, so it's hard to remember sometimes? :)
16:12:48 the bug's been open for several months now
16:12:53 stevemar: Ah, we're just missing the api-ref?
16:13:01 * jungleboyj hands stevemar a coffee
16:13:10 looks like we just need to review https://review.openstack.org/#/c/415320/ ?
16:13:17 smcginnis: yes, i was looking here: http://developer.openstack.org/api-ref/block-storage/v2/index.html
16:13:18 #link https://review.openstack.org/#/c/415320/
16:13:22 tabbed
16:13:38 stevemar: Unfortunately, there's probably a lot more than that missing.
16:13:48 smcginnis: that makes me sad :(
16:13:56 We added the in-tree api-ref last release, but one person was doing it and they've moved on to another job.
16:14:08 smcginnis: do a sprint :P
16:14:10 stevemar: Yeah... on the list of things that need attention.
16:14:11 So is there any work list for the api-ref?
16:14:29 wxy|_: Not that I have or have seen.
16:14:47 wxy|_: If that's something you are interested in, feel free to start an etherpad or something if you think it would help.
16:14:57 Any attention to the api-ref would be beneficial.
16:15:13 it's unfair to expect users to know the internals of cinder to use an API, gotta doc it!
16:15:20 I think nova had some tool or something they were using to track progress on their api-ref work. Not sure though.
16:15:30 stevemar: Yes, definitely.
16:15:34 (also the help and docstrings in cinderclient aren't much more helpful)
16:16:03 I think there are a few Cinder bugs filed for api-ref work that needs to be done. So if that interests anyone, they're out there.
16:16:06 stevemar: Thanks for keeping us on our toes.
16:16:16 scottda: np :)
16:16:16 stevemar: Keeping us in line! :)
16:16:22 smcginnis: Is there a tag for that?
16:16:34 I think it's [api-ref]
16:16:38 jungleboyj: mmmmm, maybe? :)
16:16:52 That's in the title, but not sure if a tag has been set, at least on all of them.
16:16:57 we had the same issue in keystone (when we moved our APIs in-tree), we did a 2 day sprint near the end of the cycle, it worked really well
16:17:04 Will try to make sure it is next time I do a pass through them.
16:17:17 it was all doc'ed here: https://etherpad.openstack.org/p/keystone-api-sprint
16:17:17 :-) Sounds like a good idea.
16:17:29 stevemar: Good tip. That sounds like something good to do closer to the end of a cycle.
16:17:43 stevemar: Nice, thanks!
16:17:47 anyway, i'll step off my soapbox now
16:18:00 stevemar: Are you becoming a Cinder guy?
16:18:10 stevemar: While you're up, want to do a quick mention for the driver maintainers?
16:18:25 jungleboyj: We're forcing him to be by not having good docs. ;)
16:18:59 smcginnis: Ah, so that is our hidden agenda. Cool.
16:19:18 oh sure
16:19:24 We're sneaky like that.
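For readers trying to locate the API the bug above refers to: the "volume type encryption create" call does exist even though its api-ref entry is missing. Below is a minimal sketch of invoking it through python-cinderclient; the endpoint, credentials, and volume type name are illustrative assumptions, not values from the meeting.

    # Sketch only: create an encryption type for an existing volume type.
    # Endpoint, credentials, and the 'encrypted-lvm' type name are placeholders.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'ADMIN_PASS', 'admin',
                           'http://controller:5000/v2.0')

    # Hypothetical volume type assumed to already exist.
    vol_type = cinder.volume_types.find(name='encrypted-lvm')

    # Wraps POST /v2/{project_id}/types/{volume_type_id}/encryption,
    # the call whose api-ref entry bug 1562043 asks for.
    encryption = cinder.volume_encryption_types.create(
        vol_type,
        {'provider': 'nova.volume.encryptors.luks.LuksEncryptor',
         'cipher': 'aes-xts-plain64',
         'key_size': 256,
         'control_location': 'front-end'})
    print(encryption)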
16:19:32 so, i had another ask
16:20:17 the TC is doing some work around drivers, smcginnis has been keeping up to date on it
16:20:22 i can't find the ML post atm...
16:20:48 but is anyone interested in helping us out here? we're looking to talk to a few cinder driver maintainers
16:21:22 the TC wants to make sure we're asking the right questions and proposing solutions that'll help driver maintainers
16:21:33 Might be this one: http://lists.openstack.org/pipermail/openstack-dev/2016-December/108410.html
16:21:53 thanks smcginnis!
16:22:05 stevemar: I will put the call out to our internal driver maintainers.
16:22:11 Have them contact you?
16:22:26 jungleboyj: just shoot me their email and name
16:22:39 stevemar: Can do, buddy.
16:22:49 do driver maintainers normally not attend the meeting?
16:23:04 yes maybe
16:23:14 stevemar: Many do, but some companies have a different set of driver maintainers.
16:23:18 stevemar: Attendance is a bit light today...
16:23:18 stevemar: Some do, but for IBM most of them are in Asia, so not a good time.
16:23:28 ah
16:23:34 various reasons
16:23:36 cool
16:23:49 #action Driver maintainers contact stevemar to get involved in driver team discussion
16:23:51 any suggestions for names i can poke? from the esteemed core team? :)
16:24:07 stevemar: Hopefully the mention in here will get read in the meeting logs and get some activity
16:24:11 stevemar: I'd be interested in talking to you about this, do you need my email?
16:24:36 stevemar: maybe you can leave your email here
16:24:52 the guys could check this log and contact you later
16:24:54 sure, folks can reach me at s.martinelli@gmail.com or send me a PM
16:25:05 stevemar: thanks
16:25:16 neutron has a "lieutenant" list of drivers: http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#core-review-hierarchy
16:25:16 stevemar: Now the spam bots have you. ;)
16:25:22 (if you scroll down a bit)
16:25:28 lol
16:26:07 ok, now i'm really done :P
16:26:15 stevemar: Thanks!
16:26:28 #topic Reset-status for all resources with single command in Cinder-client and OSC
16:26:37 great, I added this topic for eharney
16:26:38 eharney and tommylikehu_: Take er away
16:26:46 eharney: here?
16:26:49 yep
16:27:43 I think the idea comes from eharney
16:27:57 #link https://review.openstack.org/#/c/413156/ Proposed approach
16:28:11 and it's about resetting different resources' statuses with a single command
16:28:23 So the idea is rather than adding reset commands for everything, just have one general command that can be pointed at whatever needs to be reset?
16:28:24 for context, the start here is that i voted against adding group-reset-state and group-snapshot-reset-state to the cinder CLI
16:28:33 for a couple of reasons
16:28:40 smcginnis: right
16:29:10 1) we don't need a CLI full of X reset-state commands, one for every resource, IMO, because it's a lot of clutter
16:29:26 2) copying the way we do reset-state now is not a good plan because what we do now is not really a great design
16:29:33 eharney: I like this idea
16:29:38 and i'd rather fix it than propagate it further
16:29:42 (we have 3 *reset* commands now, and are proposing adding 2 more)
16:29:52 eharney: +1
16:29:55 scottda: yeah
16:30:02 if we move to a new command, we can smoothly fix the other issues with the current scheme
16:30:16 such as the fact that it defaults to "available", which IMO is rarely a safe choice
16:30:16 what's the downside of collapsing it all down to 1 command?
16:30:45 bswartz: it could be ugly maybe
16:30:57 Or it could be beautiful
16:30:59 :)
16:31:06 bswartz: the only argument really presented against it so far was kind of "we do it in one command in osc, but in cinder CLI we already do this, so let's keep doing it the current way"
16:31:15 sorry maybe different from others
16:31:16 which IMO is not really much of an argument
16:31:24 eharney: I think that makes sense. Not add additional commands, and set things up so that it is easier to fix in the future.
16:31:50 I lean in favor of this
16:31:51 eharney: I haven't looked at that patch, but is it working? Or does it need more work?
16:32:17 It's going to need a spec. What will the API look like? Will we have a ResetManager?
16:32:17 eharney: can you get it done in Ocata?
16:32:18 smcginnis: it works for volume and snap
16:32:20 we are gathering more responses and then we'll take the next steps
16:32:51 Or maybe no spec? What do people think?
16:32:56 eharney: So it would just need a little more work to add other object support? Seems worth spending a little more time on.
16:33:01 i'm also concerned about whether we are relying on reset-state too much in general rather than making things work in a way where it isn't needed so often
16:33:05 smcginnis: right
16:33:06 for the case of resetting a volume that is 'in-use' to 'available' (or the other way around), we need the reset-state API to be able to deal with more than just the 'status' field.
16:33:18 winston-d: Yup, same with migration status.
16:33:23 scottda: Good question. I'm kind of on the fence on a spec. Could be useful to document what's being done, but could also be overkill.
16:33:44 eharney: Madness!
16:33:49 Well, this is ultimately about an admin API to change the DB....
16:33:50 winston-d: Good point. Maybe a spec is needed to flesh out some of these details.
16:34:19 i don't mind writing specs if people agree on the general direction
16:34:30 Only the clients are involved in this?
16:34:41 I agree with eharney that ideally we would fix things to prevent bad state, but that's been a bit hard.
16:35:01 eharney: I tentatively agree with the general direction. :)
16:35:08 We got pushback from nova for this 2 years ago, and we're still trying to fix the api...
16:35:10 scottda: well, we keep not doing things we could be doing to avoid it. like using async error messages rather than the pattern of "leave the object in error_X and ask the admin to reset it"
16:35:16 dulek: yeah, so basically it should support almost every status-like field for different resources.
16:36:05 Designing such a table can be challenging.
16:36:22 I think it's better to design it right, and take our time.
16:36:25 reset-state already deals with attach status, it can be made to deal w/ migration status too
16:36:33 scottda: That's crazy talk.
16:37:02 If we start the spec now, we can beat on it at the PTG and have something in Pike
16:37:15 Good call.
16:37:21 scottda: sounds great
16:37:29 My two cents are that we need to eventually get the problems that lead to the need for these resolved.
16:37:44 +1
16:37:44 In the short term though, getting a good solution for reset-state is desirable.
16:37:49 jungleboyj: +1
16:37:54 Yeah, but sh*&t happens, and we'll always need a tool.
16:38:09 scottda: Exactly.
16:38:10 scottda: SQL
16:38:16 I don't know if we can ever get rid of the root causes, but worth making things more robust.
16:38:18 :)
16:38:22 winston-d: ;)
16:38:24 scottda: So let's make the tool decent. :-)
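To make winston-d's point above concrete: today each resource type exposes its own reset admin action, and the volume one already accepts more than the plain 'status' field. A rough sketch of the per-resource calls a single generalized command would have to wrap, assuming the v2 admin actions; the endpoint, token, and IDs are placeholders, not values from the meeting.

    # Sketch only: the existing per-resource os-reset_status admin actions.
    # Endpoint, token, and IDs below are placeholders.
    import requests

    CINDER = 'http://controller:8776/v2/PROJECT_ID'
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN', 'Content-Type': 'application/json'}

    def reset_volume(volume_id, status, attach_status=None, migration_status=None):
        # Volumes: status, attach_status, and migration_status can all be reset.
        fields = {'status': status}
        if attach_status is not None:
            fields['attach_status'] = attach_status
        if migration_status is not None:
            fields['migration_status'] = migration_status
        return requests.post('%s/volumes/%s/action' % (CINDER, volume_id),
                             json={'os-reset_status': fields}, headers=HEADERS)

    def reset_snapshot(snapshot_id, status):
        # Snapshots expose a separate, nearly identical admin action.
        return requests.post('%s/snapshots/%s/action' % (CINDER, snapshot_id),
                             json={'os-reset_status': {'status': status}},
                             headers=HEADERS)

    # Example usage: reset_volume('VOLUME_ID', 'error', attach_status='detached')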
16:38:37 we can certainly stop adding new root causes
16:38:39 smcginnis: +1
16:38:50 eharney: Madness again!
16:38:50 eharney: :)
16:38:57 i -1'd a spec this morning on revert to snapshot because it wanted to add a new state of "error_reverting"... this is the design trap we tend to go toward by default
16:39:21 eharney: I'm gonna check this
16:39:22 If things get to the point where it's rare enough, SQL is probably OK. But until then, I think having a decent tool to help admins with this is worth it.
16:39:38 Not every admin can access the DB directly with SQL
16:39:46 SQL isn't good enough because then people will break things like resetting the volume status and not the attachment status
16:39:50 Accessing the production DB is generally bad...
16:40:03 scottda: +1
16:40:16 Yeah, only if it's a very, very rare occurrence. If that's ever possible.
16:40:18 And I have done it countless times :)
16:40:33 cause sh*!t breaks.
16:40:33 yeah, and schema changes are a nightmare for a SQL-based tool
16:41:06 eharney: So a spec then?
16:41:24 smcginnis: i guess, i'll start with the proposed CLI ideas and see what else ends up in there
16:41:25 spec +1
16:41:41 eharney: Thanks. Might be good to add to here too: https://etherpad.openstack.org/p/ATL-cinder-ptg-planning
16:41:59 eharney: Would be nice to have a high bandwidth discussion on things.
16:42:05 sure
16:42:07 smcginnis, eharney: thanks
16:42:23 eharney, tommylikehu_: Thanks! Anything else on this topic?
16:43:11 #topic Open Discussion
16:43:16 Anything else?
16:43:44 Did we get a netsplit or something?
16:43:53 no
16:43:53 * jungleboyj waves
16:43:55 no
16:44:01 :)
16:44:01 still here
16:44:05 https://review.openstack.org/#/c/337814/ would really like some more reviews
16:44:17 tabbed
16:44:46 Oh, another reminder, the non-client lib deadline is in two weeks.
16:45:00 O-3
16:45:12 So if you are working on anything in os-brick, or have anything you know we need to get through, now's the time to push that.
16:45:23 tommylikehu_: non-client library freeze is before o-3
16:45:34 smcginnis: ok~
16:45:56 tommylikehu_: Gives a little time to make sure there are no dependency issues before the bigger freeze.
16:46:07 #link https://releases.openstack.org/ocata/schedule.html Ocata schedule
16:46:08 we landed some os-brick stable/newton fixes. i/we need to cycle through those again and see about doing a release there, i suppose
16:46:37 eharney: Or can we just raise the Newton upper constraints to use the newer lib?
16:46:59 I don't think we could do that with Mitaka since we added privsep, but I think we should be able to now.
16:47:11 Though it probably doesn't hurt to just do a stable release.
16:47:12 smcginnis: well we already merged them into stable, might as well cut a release
16:47:19 eharney: True
16:47:22 smcginnis: any reminder about the drivers' deprecation strategy?
16:47:27 for those people who actually "use" this code that i'm always hearing about :)
16:47:36 eharney: :)
16:47:40 tommylikehu_: Good point.
16:47:51 There are a few remaining patches marking drivers as unsupported.
16:48:00 And a few went through already.
16:48:04 Yeah, looks like a lot of driver unsupported patches that just need +A
16:48:13 If we get CI stable on those before RC1, I think we can revert those.
16:48:25 smcginnis: great
16:48:33 Otherwise, some are queued up for removal in Pike.
16:48:38 If it comes to that.
16:48:50 https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:ci_unsupported
16:48:59 Open reviews ^^
16:49:13 any thoughts on NFS snapshots? i still owe erlon some testing there but it hasn't seen any other review lately
16:49:41 eharney: I've kind of been waiting for that to shake out. If it's ready, I would like to get that pushed through.
16:49:59 The one question I had there is any impact with drivers that build on top of the NFS driver.
16:51:35 i don't know of anything really worrying there, but it's been a while since i've looked at it really closely
16:52:10 eharney: *Sigh* I have been wanting to play with those but been in turmoil lately. :-)
16:52:16 eharney: I'm sure we'll find out eventually. Really won't be in any worse shape than we are now, so I don't think anything needs to be held back by it.
16:52:18 I will see if I can play with the latest patches.
16:52:26 jungleboyj: cool
16:52:29 smcginnis: +2
16:52:54 hi, minor topic, can this be merged now https://review.openstack.org/#/c/378105/? Sry to bring it again. Trying to save one more merge conflict
16:53:25 viks: I'll take a look later. Got it open in a tab now. Hopefully should be good now.
16:53:43 correct link is https://review.openstack.org/#/c/378105/
16:53:49 thanks
16:53:56 OK, anything else for the meeting?
16:54:23 OK, thanks everyone!
16:54:30 Thanks smcginnis !
16:54:34 #endmeeting