16:00:01 #startmeeting Cinder
16:00:01 Meeting started Wed Mar 22 16:00:01 2017 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:04 ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao,tommylikehu mdovgal ildikov
16:00:05 The meeting name has been set to 'cinder'
16:00:10 wxy viks ketonne abishop sivn breitz
16:00:10 hi
16:00:11 Hi
16:00:11 o/
16:00:12 hi
16:00:13 o/
16:00:13 hi
16:00:14 hi!
16:00:15 <_alastor_> o/
16:00:20 hi
16:00:24 #topic Announcements
16:00:25 How is it Wednesday again?
16:00:26 The usual:
16:00:27 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:00:28 hi
16:00:28 o/
16:00:41 I've seen some new driver reviews going - good to see that.
16:00:42 yough
16:00:44 hello
16:00:53 Hello
16:00:57 hi
16:00:59 hi
16:01:02 hey
16:01:07 DuncanT, Are you surprised?)
16:01:27 It is DuncanT again!
16:01:47 mdovgal: Very much so. I entirely didn't notice until I got the pop up from smcginnis announcing the meeting
16:02:03 DuncanT: Glad it's useful. :D
16:02:06 DuncanT, you aren't the only one :)
16:02:33 Just a couple of other quick announcements, then we can discuss more if we have open time at the end.
16:03:08 If there are any Cinder-specific sessions we want at the Summit, apparently we were supposed to have been discussing that and brainstorming.
16:03:23 But since that's completely opposite of the original message, I missed that one.
16:03:36 hi
16:03:42 So just an FYI. Let me know if anyone has anything.
16:04:15 And just for awareness, Pike-1 is coming up in a few weeks.
16:04:25 smcginnis: I'll not be there, but "what is broken" is always the most useful feedback... we mostly hear from vendors wanting to add stuff, rarely from users who're trying to make use of what we have
16:04:40 smcginnis you are spreading untruths
16:04:44 smcginnis it CANT be already :)
16:04:57 jgriffith: Not an alternative fact!
16:05:02 haha
16:05:04 #fakenews
16:05:08 DuncanT: +1
16:05:24 #topic 3rd party CI
16:05:39 jgriffith: I think we brought it up last week, but ran out of time.
16:05:41 shhhweeet
16:05:52 #link https://etherpad.openstack.org/p/cinder-ci-proposals Changing 3rd party CI requirements
16:05:52 Yeah... https://etherpad.openstack.org/p/cinder-ci-proposals
16:06:12 was hoping to see outrage and general dissent on that etherpad...
16:06:26 but since there's nothing there I guess that means we're all in agreement and life is good?
16:06:28 Or everyone is just happy with it. :)
16:06:42 smcginnis you're the eternal optimist!
16:06:49 ;)
16:06:58 * jungleboyj sees unicorns and rainbows
16:07:02 jgriffith: Can we filter on CI results?
16:07:10 jgriffith: I'm all for these changes
16:07:10 Bahhhh!!!!
16:07:12 hehe
16:07:16 * jgriffith 's head just exploded
16:07:19 lol
16:07:26 He he he.
16:07:39 Should we do an official vote?
16:07:43 ok... so seriously; does anybody have any thoughts or objections to lowering the bar and taking a stricter approach to CI?
16:07:52 * smcginnis frantically looks up the voting syntax
16:07:56 smcginnis oh.. yes! We haven't had an official vote in a long time
16:08:04 It has to be worth a try... what we have is a complete mess
16:08:12 DuncanT agreed
16:08:13 stricter how?
16:08:45 eharney if you can't pass a nightly it's easy to track and I don't think we need to be very lenient
16:09:03 eharney driver gets listed as unsupported or whatever
16:09:10 jgriffith: So no long grace period if they are not passing?
16:09:15 eharney and it can be automated
16:09:16 the puzzling part to me is why the overall results are so bad after this many years... do we have some vague understanding of why so many CIs don't work?
16:09:26 eharney no more "somebody send an email to xyz"
16:09:30 Are we doing Nightly or Weekly?
16:09:38 jungleboyj that's a great question
16:09:48 eharney, there are just an infinite list of reasons why they fail it seems
16:09:55 it's not really clear to me what being stricter would actually accomplish
16:09:57 How strict it is very much depends on that.
16:10:00 the environment is always a moving target
16:10:19 IMO, nightly should be good
16:10:21 eharney the only thing it would gain is automating the process
16:10:33 I agree, there can be multiple reasons for failure
16:10:37 HTTP501 Enclosure on Fire
16:10:43 I have to mention not every country has as good a network as the US does
16:10:54 tommylikehu_ yeah, that's fair
16:10:55 and if we could get CI automatically triggering on a driver's patch - that would be great
16:11:26 eharney so if you don't like the proposal or have input I'd love to get your feedback
16:11:32 I think weekly would be fine, and we can trigger on specific relevant patches.
16:11:40 Occasionally they run off with our CI equipment. 404, I guess for that one.
16:11:46 eharney I'm just trying to come up with a way to manage the disaster we currently have and perhaps make it something more useful
16:11:53 404 - CI rig not found
16:11:59 I know a few of us have hit that one.
16:12:01 currently IMHO it's only slightly more than useless
16:12:36 We had originally talked weekly. Doing the passing and a forced failure run.
16:12:38 as a side note, we should also discuss any ramifications for os-brick patches and CIs.
16:12:51 eharney thoughts?
16:13:22 I'm not entirely sure what we will gain by changing "test every patch" to "test nightly". Not a lot of drivers are "unstable"; most are either "always not working" or "always working". So basically we will change from "fail on every commit"/"pass on every commit" to "fail every night"/"pass every night".
16:13:33 Ok... so maybe let folks ponder this for a week and vote or propose changes next week?
16:13:40 my thoughts are more around what exactly the goals are with third party CI and how we should approach it in general
16:13:59 eharney do you have any interest in sharing those thoughts with the group :)
16:14:09 because years of evidence seem to suggest that just banging on CIs to work better is not going to be an easy road
16:14:25 I thought we discussed this at the PTG?
16:14:32 eharney and FWIW that's exactly why I wrote that up and it's exactly what we discussed in ATL
16:14:46 hemna +1 we did
16:15:30 anyway, if nobody actually has anything to say about it I don't want to waste meeting time on it
16:15:54 I think it's clear what we have now is not working the way we had envisioned.
16:16:03 And I think we all want the CI results to be more useful
16:16:06 smcginnis +100000000000000000000000000
16:16:11 I t++
16:16:22 In that respect I think it's worth exploring ways to make the signal to noise ratio better.
16:16:36 smcginnis if nothing else that's all I'd like to accomplish
16:16:53 Let's let it stew for a week and bring it up again after more folks have had time to mull it over.
16:17:12 works for me
16:17:34 Sounds good.
16:17:35 Alrighty...
16:17:40 maybe we should just add it as a topic for Australia now :)
16:17:40 #topic Filtering and the API
16:17:45 jgriffith: Head explosion time again. :)
16:17:48 haha
16:18:08 so many of you may have noticed that this has become a bit of a hot button for me
16:18:11 * jungleboyj puts on my blast shield
16:18:28 We have all sorts of convenience filters being added here and there for different resources
16:18:38 I certainly get that it's a nice thing to have and can be useful
16:18:44 based on jgriffith and eharney's spec I think we would finally have one command for create, one command for delete ........
16:19:06 Good thing we have DuncanT here, the patron saint of large deployments. :D
16:19:10 but it makes for a rather inconsistent mess of things in the API
16:19:14 smcginnis not any more :)
16:19:26 jgriffith: I've still got the scars... they never heal
16:19:30 #link https://review.openstack.org/#/c/441516/
16:20:15 anyway... my proposal may not be clear
16:20:40 the point was basically that we already have list filters on most of the resources at the DB layer
16:21:02 rather than add another 27 microversion bumps over the next release for a filter here and there
16:21:17 I'd like to propose instead we just leverage the db filter mechanism
16:21:35 allow a *generic* --filter arg to the resource list calls
16:21:49 jgriffith: +1
16:21:51 generic in terms of db keys
16:22:18 also, give the admin the ability to specify what they want to allow via a filters list file
16:22:21 is that a direct line into sql? re: sql injection
16:22:27 * DuncanT likes jgriffith's spec. Need to check again the list is sane (it was missing tenantid last time I looked, and I forgot to leave a review comment) but the idea looked great. Being able to grab a list of what you can filter by from the API is nice. We should get good defaults, since almost nobody will change them
16:22:54 that could then be used not only to make sure things are valid, and they're things the admin wants to allow, BUT also it could be used for the end user to query the system and see what's available/valid
16:23:16 jgriffith: that makes sense, +1
16:23:17 DuncanT the current example just takes what was added for volumes to the config file
16:23:17 hemna: That was my initial reaction, but I believe it's not an issue in this case.
16:23:43 also, tenant id doesn't make sense for an end user cuz... well they can only view their tenant id resources :)
16:23:52 ok, we should just be careful and make sure we can't allow end users to bork the DB with a filter.
16:23:59 jgriffith: Yeah. I think it just needs tenantid adding (for admins and nested project types)
16:24:11 hemna it's not, it's just allowing you to set the filter k/v pairs
16:24:22 hemna: I think we will still take advantage of the current filter logic in the db layer
16:24:26 hemna if you provide something that's not in the valid list file it gets popped and ignored
16:24:35 ok sweet
16:24:42 thanks, I just wanted to raise it and ask.
16:25:03 hemna DuncanT eharney https://review.openstack.org/#/c/444598/
16:25:07 jgriffith: Nova has something similar. Right?
16:25:08 we will need to validate filter value types as well
16:25:14 jgriffith: users in nested projects can see other people's volumes
16:25:15 jungleboyj I don't know
16:25:22 i mean certain filter values are key/value pairs themselves
16:25:26 DuncanT ok
16:25:27 e.g. metadata filters
16:25:44 jgriffith: I thought we had talked about this at the PTG and concluded they did. I could be wrong though.
16:26:02 jungleboyj don't know for sure, don't know that I care either :)
16:26:17 jgriffith: I saw that coming. ;-)
16:26:56 anyway, sounds like eharney doesn't like this idea... are there questions that I can answer maybe?
16:27:37 i haven't looked at the code, i think the spec just didn't give me the right idea of what was being proposed
16:27:46 eharney fair enough
16:27:50 I suck at writing specs
16:28:09 It seems impossible to really convey the ideas without code
16:28:27 eharney maybe that link to the code will help
16:28:53 DuncanT I think you're missing an important point here
16:29:02 https://review.openstack.org/#/c/444598/2/etc/cinder/resource_filters.json
16:29:11 we can add anything and even everything here
16:29:16 or a cloud operator can do it
16:29:18 BUT
16:29:27 what I put there for now is strictly an example
16:29:41 and it's based off of what people already approved/merged here:
16:29:49 jgriffith: We should have good defaults... most operators won't and shouldn't change it
16:29:59 DuncanT that's fine
16:30:46 DuncanT run a tox -egenconfig
16:31:12 we already merged a change that puts those things as the default in the config file directly
16:31:16 I just copied them over
16:31:24 jgriffith: Ah, got you
16:31:35 I don't care if we just provide a default list of *everything* or keep it compatible, or whatever
16:32:01 https://review.openstack.org/#/c/444598/2/cinder/api/common.py
16:32:03 jgriffith: Why the json then? Why not just read from the config file if it is already there?
16:32:09 DuncanT line #48
16:32:37 DuncanT: that could be another option
16:32:50 We can undeprecate them from the config file if that makes more sense than a json file
16:33:06 I kind of like having it separate from the config file.
16:33:13 DuncanT tommylikehu_ yeah, the main reason was that I thought it would be pretty fugly by the time you added all of that in the config
16:33:40 smcginnis: ++
16:33:42 Kind of like the policy file.
16:33:49 smcginnis +1
16:33:52 Yeah, it reminds me of that.
16:33:59 Not a fan of fugly.
16:34:07 jgriffith: Our config file is already pretty fugly :-) I'm not religious about it either way, as long as it has been vaguely considered
16:34:24 DuncanT I considered it, thought it sucked so didn't do it
16:34:31 DuncanT: Hey now. Don't talk about our baby that way.
16:34:44 jgriffith: Fair enough
16:35:07 jgriffith: Should we move on to the next one?
16:35:32 jgriffith: if we have the default values and the user is only required to add or remove filter keys, will that help with maintaining the config?
16:36:07 smcginnis: sorry, slow typing speed
16:36:13 tommylikehu_: the problem there is you can't ever update the default list later
16:36:15 tommylikehu_: That makes the config file hard to read though
16:36:17 tommylikehu_: No problem!
16:36:29 DuncanT, eharney, ok
16:37:16 jgriffith: Did we lose you?
16:37:25 jgriffith: IMO make the defaults at least a bit more inclusive, and ship it
16:37:31 Head exploded.
16:37:38 (Maybe his head really did explode.)
16:37:43 sorry
16:37:46 jungleboyj: I'm afraid of that!
16:37:50 * jungleboyj plays taps
16:37:59 :)
16:38:00 jungleboyj you wish!
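For illustration only, a minimal Python sketch of the mechanism jgriffith describes above: a generic filter argument checked against an operator-editable resource_filters.json-style whitelist, with unknown keys popped and ignored. The example keys and the helper name below are assumptions, not the code in the linked review (https://review.openstack.org/#/c/444598/).

# Illustrative whitelist in the style of resource_filters.json; in practice
# this mapping would be loaded from the operator-editable JSON file.
EXAMPLE_RESOURCE_FILTERS = {
    "volume": ["name", "status", "bootable", "migration_status"],
    "snapshot": ["name", "status", "volume_id"],
    "backup": ["name", "status"],
}


def strip_invalid_filters(resource, filters, allowed=EXAMPLE_RESOURCE_FILTERS):
    """Drop any query filter not whitelisted for this resource.

    Unknown keys are popped and ignored rather than rejected, matching
    the behavior described in the meeting above.
    """
    valid = set(allowed.get(resource, []))
    return {key: value for key, value in filters.items() if key in valid}


if __name__ == "__main__":
    query = {"status": "available", "bogus_key": "x"}
    print(strip_invalid_filters("volume", query))
    # -> {'status': 'available'}  ('bogus_key' was silently dropped)

The same whitelist mapping could also back the other idea raised above: letting an end user query the system to see which filter keys are valid for each resource.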
16:38:02 You all wish!
16:38:03 :)
16:38:27 so this one kinda piggybacks on the whole filtering madness
16:38:33 :-) Nah ... Wouldn't be the same without you.
16:38:50 So I have an opinion that an API isn't something to just have crap bolted on to over the years to hack things in
16:39:20 in other words, it shouldn't try to deliver every convenience wrapper that you can think of to users
16:39:40 but instead it should be well designed, stable, infrequently changing and give you everything you need
16:39:50 more exotic things should be left to the consuemr
16:39:52 consumer
16:40:05 jgriffith: I don't think things like filtering are convenience wrappers... try doing a detail list with 10k volumes. Our API performance sucks enough already.
16:40:10 as such, it would be interesting to consider returning *data* to callers to enable them
16:40:50 so I thought it might be worth considering just returning data without the fancy view builders etc for users that want it
16:40:52 DuncanT: that's why we need the like filter
16:41:03 are we on the client/shell topic now?
16:41:10 eharney yeah
16:41:12 #topic Client/Shell output format
16:41:14 at least I thought we were :)
16:41:15 Now we are. :)
16:41:29 jgriffith: are you around? :)
16:41:45 e0ne, :D
16:41:59 e0ne :)
16:42:05 smcginnis: Wondered when you were going to transition.
16:42:14 anyway...
16:42:21 jungleboyj: Wasn't sure if we were ready or not.
16:42:32 jgriffith: Yes, let's please move on. :)
16:42:37 jgriffith: in general, this idea looks like a good option for operators who don't want to use curl
16:42:46 so we're talking about formatting data that's been processed by the client, right, not raw HTTP responses?
16:42:46 regardless of all the debate, is there value in providing an option to return json data in the client shell?
16:43:04 eharney yeah, basically the json payload
16:43:19 eharney otherwise just use pycurl :)
16:43:26 jgriffith: IMO, it's needed only for the CLI
16:43:27 jgriffith: Could be useful with jq. Easy to write bash scripts for things.
16:43:27 which json payload?
16:43:38 eharney TBF though it's the same as just doing "import client.... xyz lajdslfjaldsf "
16:43:51 Syntax error on line 1
16:43:52 it would be great to take what the CLI is printing into tables now and do that as json
16:44:05 eharney: +1
16:44:05 eharney so for example instead of passing the resultant volume list through the view builder just return a json blob
16:44:12 but not raw server responses
16:44:14 eharney +10000
16:44:15 eharney: +1
16:44:18 eharney: +1 again
16:44:19 exactly what I mean
16:44:22 great
16:44:33 holy shit eharney and I agree on something today!
16:44:37 yay :)
16:44:43 now if DuncanT and I agree the world is sure to end
16:44:46 jgriffith: I see value. I wouldn't consider it a high priority thing to get in, but seems like it could be useful.
16:44:47 Nice.
16:44:49 LOL
16:44:52 smcginnis yeah
16:44:56 unbelievable!
16:44:57 smcginnis the nice thing is it's super easy
16:44:59 qemu-img has this feature, it's nice
16:45:18 Maybe we should just stop here. :D
16:45:57 Well, in the interest of time, since there's no major "hell no"
16:46:05 's, move on?
16:46:14 15 mins reminder
16:46:20 #topic Backup service initialization
16:46:20 Yeah, seems like everyone is ok with the idea.
16:46:28 Not sure who added this one.
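Referring back to the Client/Shell output format topic above, a minimal sketch of the idea the team agreed on: take the already-processed rows the CLI would normally print as a table and optionally emit the same data as JSON. The helper and sample rows are illustrative assumptions, not the actual cinderclient interface.

import json

# Hypothetical rows, i.e. data the client has already processed and would
# normally feed to its table printer (not the raw HTTP response body).
ROWS = [
    {"id": "vol-id-1", "name": "vol-1", "status": "available", "size": 10},
    {"id": "vol-id-2", "name": "vol-2", "status": "in-use", "size": 20},
]


def print_list(rows, output_format="table"):
    """Print rows as a plain table, or dump the same data as JSON."""
    if output_format == "json":
        print(json.dumps(rows, indent=2, sort_keys=True))
        return
    headers = sorted(rows[0]) if rows else []
    print(" | ".join(headers))
    for row in rows:
        print(" | ".join(str(row[h]) for h in headers))


if __name__ == "__main__":
    # JSON output is easy to pipe into jq or other shell tooling,
    # per DuncanT's point above.
    print_list(ROWS, output_format="json")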
16:46:31 yes
16:46:34 #link https://review.openstack.org/#/c/367439 - feature's spec
16:46:40 #link https://review.openstack.org/#/c/446518 - Problems with swift driver for this feature
16:47:02 so e0ne added a spec about the backup service driver init check
16:47:44 and while implementing this feature in code I came across some problems with the swift driver
16:48:31 the idea was to at least check the connection to the backend at startup, but for swift we don't have enough information
16:48:46 it's mostly because we use the user context to connect to swift
16:49:07 and at the moment of startup we don't have this info
16:49:24 we can just verify that the swift-related config options are configured well
16:49:29 could it just check to see that the swift endpoint is alive/reachable instead?
16:49:37 Ah yes. Fun. You don't even know what server to talk to if you're (sensibly) using the one from the catalogue
16:49:43 eharney: yes, we can
16:50:29 and this spec was created due to a bug connected to swift
16:51:04 so we can just check the config
16:51:05 we've got one more problem with the implementation
16:51:24 we use a new backup driver instance for each call
16:51:44 so, we have to re-initialize it
16:52:11 that's why I propose refactoring the manager to be similar to the volume manager
16:53:17 e0ne: Seems reasonable.
16:53:19 now we don't have one best solution for this situation
16:54:33 So I believe that currently you can tell backup to use whatever swift endpoint is in the catalogue?
16:54:54 DuncanT: yes, we can
16:54:55 yes
16:55:10 In which case, two different users can be backing up to different swift endpoints...
16:55:24 5 minute warning.
16:55:53 DuncanT: oh... I understood the issue
16:56:14 we can't do it. With one backend driver object we'd need to reconnect every time
16:56:18 DuncanT, +1
16:56:56 mdovgal: I would like to play with your code a bit more
16:57:05 Ok, having stated the problem I'll hide in the corner until somebody has a good fix then :-)
16:57:21 DuncanT: you're welcome to fix it :)
16:57:28 DuncanT, :)
16:57:34 so, we have 2 items now:
16:57:55 1) we're ok to refactor the backup manager to be like the volume manager
16:58:18 2) we have to resolve the issue with the swift driver and different users/swift servers
16:58:21 2 minutes
16:58:47 tommylikehu_: I'll leave your topic on the agenda for next time.
16:58:56 smcginnis: sure
16:58:57 I'll work with mdovgal on a PoC for this fix
16:59:01 is everything ok with e0ne's proposal?
16:59:09 e0ne, mdovgal: Sounds good.
16:59:33 smcginnis, ok. tnx)
16:59:35 Sounds ok to me.
16:59:41 OK, thanks everyone. Time's up.
16:59:52 e0ne: I wrote the current code, you can see what a mess I made :-)
17:00:00 :|
17:00:04 #endmeeting
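As a footnote to the backup service initialization topic above: a minimal, hedged sketch of the config-only startup check mdovgal and e0ne describe, where the service cannot reach swift at startup (no user context yet) and so can only sanity-check the swift-related options. The option names, class, and check hook below are assumptions for illustration, not the actual Cinder backup driver interface.

# Sketch of a config-only startup check for a swift-style backup driver.
# Because the driver normally authenticates with the end user's context,
# no token exists at service startup; the most we can verify here is that
# the swift-related options are set.


class BackupDriverConfigError(Exception):
    """Raised when required backup driver options are missing."""


class ExampleSwiftBackupDriver(object):
    # Hypothetical option names an operator would set in cinder.conf.
    REQUIRED_OPTIONS = ("backup_swift_auth_url", "backup_swift_container")

    def __init__(self, conf):
        self.conf = conf

    def check_for_setup_error(self):
        """Verify configuration sanity at service startup."""
        missing = [opt for opt in self.REQUIRED_OPTIONS
                   if not self.conf.get(opt)]
        if missing:
            raise BackupDriverConfigError(
                "backup driver misconfigured, missing: %s" % ", ".join(missing))


if __name__ == "__main__":
    driver = ExampleSwiftBackupDriver({"backup_swift_container": "backups"})
    try:
        driver.check_for_setup_error()
    except BackupDriverConfigError as exc:
        print(exc)  # backup driver misconfigured, missing: backup_swift_auth_url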