16:03:25 #startmeeting Cinder
16:03:26 Meeting started Wed Jul 30 16:03:25 2014 UTC and is due to finish in 60 minutes. The chair is DuncanT-. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:29 sweet
16:03:30 The meeting name has been set to 'cinder'
16:03:33 hope
16:03:56 o/
16:04:03 #topic cinder client
16:04:27 subject says it all :)
16:04:34 Hey guys, sorry I am late.
16:04:42 * jungleboyj is getting beaten up today.
16:05:34 anybody have anything on cinderclient?
16:05:35 jungleboyj: Try importing _, that'll get you beaten up less ;-)
16:05:49 DuncanT-: :-p
16:06:12 hi
16:06:15 jgriffith: Just looking at the open reviews... Quite a few there
16:06:16 I'm just asking folks to take a look today at things that want in the tag
16:06:33 I have a patch in cinderclient. what do I need to do?
16:06:34 jgriffith: I have something for Cinderclient.
16:06:39 https://review.openstack.org/#/c/104056/ I'd quite like, will review now, it already has one +2
16:07:46 I would like https://review.openstack.org/107512 and https://review.openstack.org/107153 to get in if possible.
16:08:18 hey sorry I'm late
16:08:40 jungleboyj: DuncanT- none of those look awful
16:08:43 https://review.openstack.org/#/c/107153/ I don't want to see land. I *like* seeing the token in debug mode, it is really handy
16:08:57 the token one is the only one I question
16:09:00 I have this in cinderclient: https://review.openstack.org/#/c/104743/, but it depends on https://review.openstack.org/#/c/104732/
16:09:01 ha!
16:09:03 DuncanT-: +1000!
16:09:06 lag lag lag
16:09:23 IMO debug output isn't designed to be "easy" to read :)
16:09:27 xyang1: We don't land client changes until the server changes are in
16:09:34 not sure I get the security concern... jungleboyj
16:09:36 ?
16:09:42 xyang1: blocked by jgriffith till your server code lands
16:09:49 DuncanT-: yes, that makes sense
16:10:03 yeah... please mark client changes that depend on server side changes as WIP
16:10:06 navneet: sure, I see that
16:10:09 Was raised internally here. Shouldn't have passwords coming back in debug or the tokens from keystone.
16:10:11 that should just be standard practice
16:10:19 Nova has already made that change.
16:10:20 jungleboyj: it's not a password
16:10:31 jungleboyj: Why not?
16:10:31 jungleboyj: token should be fine
16:10:35 jungleboyj: it's a token which expires, and it's "your" token
16:10:43 jgriffith: can you remove your WIP so I can mark it? thanks
16:10:55 xyang1: sure... also we can both mark it :)
16:11:17 yo
16:11:28 jungleboyj: see the comment for a similar change: https://review.openstack.org/#/c/109808/
16:11:29 I think we'll pass on the token one.... at least for today and circle back after this tag
16:11:30 jgriffith: having your WIP looks like you want to block the server side change as well :). that's why I prefer you remove it
16:11:34 It also makes the debug output a lot easier to read jgriffith
16:11:50 xyang1: well... maybe I do :)
16:11:52 just kidding
16:12:26 jgriffith: that's exactly what I'm trying to figure out :)
16:12:32 jungleboyj: I use the curl in the debug info all the time, losing it would be really, really inconvenient
16:12:34 jgriffith: I marked it as WIP now
16:12:39 I'm with DuncanT-, I like seeing the keystone token in the debug output
16:12:43 jungleboyj: masking passwd is OK, but token in debug mode really is useful for me.
16:12:52 xyang1: saw that.. thanks
16:13:10 easier to read +1, but if some people prefer seeing the token, I won't go for removing it. Maybe something like [TOKEN TRUNCATED] works? :)
16:13:32 jgriffith: thanks
16:13:34 rushiagr: Then there's no point having the token in there at all
16:13:34 winston-d_: can someone not use keys and certificates to figure out some info?
16:13:48 rushiagr, but if you actually use the token outside for curl calls, then truncating it makes it useless.
16:13:52 DuncanT-: Ok, so, the main complaint internally was with regards to the password being displayed and the fact that it can be put in a log file and reused.
16:13:55 just guessing security concerns
16:14:07 DuncanT-: oh, I see
16:14:11 jungleboyj: Why are you keeping client log files like that?
16:14:20 DuncanT-: winston-d_: If I scope it down to that, are you ok? And if the token comes back around again we readdress it.
16:14:31 Ok... I don't want to spend 20 minutes on this if we can avoid it :(
16:14:42 * hemna zips it.
16:14:42 DuncanT-: I am not, but our consumers are worried about a security issue.
16:14:44 jungleboyj: Removing the password is ok I guess...
16:14:46 I think there are valid justifications for leaving it as is
16:14:47 I thought someone just 'sees' it for confirmation or something. Anyways, let's move on
16:14:51 at least for this week
16:15:04 and making debug output "easier" to read is nice
16:15:06 but...
16:15:08 jungleboyj: +1 for masking passwd, please leave token alone.
16:15:09 dang, I shouldn't have missed this
16:15:20 jgriffith: ^^ You ok with that?
16:15:21 +2 for leaving token alone
16:15:24 jungleboyj: you could always add a debug level
16:15:44 jungleboyj: sorry... ok with what?
16:15:51 masking passwd but leaving token alone?
16:15:57 jgriffith: Yes
16:15:59 ^^ for sure
16:16:04 +1
16:16:15 although I myself have needed that in debug output :(
16:16:27 jgriffith: Ok, I will redo the patch and scope it that way.
16:16:34 ie my creds were jacked up and I didn't know it until I ran with debug... but whatevs
16:17:02 again, I think there are plenty of "real" fixes and additions that should take priority for the next push to pypi (tomorrow)
16:17:10 this certainly isn't on my list right now
16:17:26 we'll push another release I promise :)
16:17:27 jgriffith: When will the next push be after that?
16:17:41 jungleboyj: when I feel like it ;)
16:17:46 jungleboyj: probably next milestone
16:17:52 Shall we say get fixes reviewed and in before midnight jgriffith's time or they miss the cut?
16:17:54 jgriffith: Ok. Fair enough.
16:17:58 +1. I see a majority for non-removal of tokens, for now
16:18:05 DuncanT-: that's what I'm sayin :)
16:18:28 Shout on the channel if you want something specific reviewed
16:18:34 Ok... let's see if we can move to the second of the 7 topics
16:18:41 if you do the math we're going to have a problem :)
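(Editor's note: the scoped-down fix agreed above, mask the password but keep the keystone token so the printed curl command stays replayable, amounts to filtering the request body before the client logs it. A minimal sketch, assuming a JSON body; the helper name and regex are illustrative, not the actual cinderclient patch.)

```python
import re

# Hypothetical helper: mask only the password value in the debug body;
# token headers are left untouched so the curl output stays usable.
_PASSWORD = re.compile(r'("password"\s*:\s*")[^"]*(")')

def mask_password(body):
    """Return the request body with any password value replaced."""
    return _PASSWORD.sub(r'\1***\2', body)

print(mask_password('{"auth": {"passwordCredentials": {"password": "s3cret"}}}'))
# {"auth": {"passwordCredentials": {"password": "***"}}}
```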
16:18:48 #topic Inheritance model change
16:18:56 18 minutes per topic * 7 = screwed
16:19:03 Lol
16:19:11 So thanks to the folks that started looking at these
16:19:11 jgriffith: Get on with it then
16:19:27 I've spun these around a number of times now based on input etc
16:19:30 jgriffith, so did you see my review comments last night
16:19:34 wrt the db object?
16:19:40 I did and responded in disagreement
16:19:44 ok
16:19:48 haven't seen it yet...
16:19:49 hemna: including the grep that shows where it's used in other places :)
16:20:03 gah, ok. I blame chrome's search.
16:20:05 arguing over where people want it to live aside
16:20:15 I'd really like to get this moving along if we could
16:20:30 i've casually looked over the code a bit, it seems reasonable to me so far. Will take a closer look
16:20:33 it keeps falling out of date but most of all we need to make sure it gets good time in test
16:20:41 eharney: thanks!
16:20:50 jgriffith: perhaps it would help to set a goal of when we would like it in by. That would give enough time for it to be tested in the gate, etc
16:20:51 iser and iet are the big sticking points
16:21:03 DuncanT-: I'll have to look at your concern about volume_group
16:21:20 should work since it passed the gate, which uses "stack-volumes" instead of the default "cinder-volumes"
16:21:27 but I may not follow exactly what you noticed
16:21:48 thingee: yeah
16:21:50 jgriffith: Look at the sample conf change.... the option is no longer in the sample conf at all
16:21:50 good idea
16:21:52 any way we could pull the db object out of these classes?
16:21:57 I'd like it in last week please ;)
16:22:11 hemna: Why are you fighting this?
16:22:28 how about we aim for friday?
16:22:29 hemna: If there's a concern/issue I'm open
16:22:30 Just thinking how we can get it back into brick.
16:22:32 thingee: +1
16:22:41 hemna: I don't think it belongs in brick at all
16:23:07 the target classes would be a good fit IMO
16:23:14 DuncanT-: you mind setting that action? get this patch in by friday https://review.openstack.org/#/c/105923/
16:23:23 as I said yesterday... I can't see a case where another project in OpenStack is attaching Targets to Volumes
16:23:33 #action get this patch in by friday https://review.openstack.org/#/c/105923
16:23:47 ok cool. I'll move on. Just wanted to float the idea.
16:24:07 next item?
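(Editor's note: to make the inheritance-model debate concrete, a minimal sketch of the composition approach under review: the target helper becomes an attribute of the driver rather than a mixin in its inheritance chain, with the contested db handle passed in explicitly. All class and method names here are illustrative, not the actual patch.)

```python
class ISCSITarget(object):
    """Target helper owned by the driver, not inherited by it."""

    def __init__(self, db, configuration):
        self.db = db                        # e.g. to look up provider_auth
        self.configuration = configuration

    def create_export(self, context, volume):
        raise NotImplementedError()


class LVMDriver(object):
    def __init__(self, db, configuration, target_class=ISCSITarget):
        self.db = db
        # composition instead of "class LVMISCSIDriver(LVMDriver, ISCSIDriver)"
        self.target = target_class(db, configuration)

    def create_export(self, context, volume):
        return self.target.create_export(context, volume)
```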
16:24:14 #topic CGs
16:24:27 jgriffith: can we set a date for this one as well :)? https://review.openstack.org/#/c/104732/
16:25:29 xyang1: we should have a cut off probably
16:25:34 xyang1, so what's holding it up now? just need more reviews?
16:25:41 I had some discussion with navneet on the CG patch. just want to bring it up here so that we are on the same page
16:25:44 xyang1: I'm a bit unconvinced of a few things
16:25:57 jgriffith: go ahead
16:26:12 well... I fear we're going down the wrong path with some of these features
16:26:22 xyang1: I saw the spec and so my comments are late for discussion
16:26:22 especially after asking for input from folks on the operators ML
16:27:10 http://lists.openstack.org/pipermail/openstack-operators/2014-July/004789.html
16:27:21 xyang1: but it would be good if some core guy looked at them as well
16:27:39 anyway....
16:27:48 Just needs reviewers
16:27:56 and testers more importantly
16:28:39 the scheduler part of the CG patch looks fine now after two iterations
16:28:43 I've been looking at it, it feels like an ugly change with a hard to use interface, but I also can't come up with any better suggestions :-( Has anybody else tried to develop a working driver for it? I don't know much about backend interfaces for this sort of thing
16:28:46 jgriffith: I'll have to read the email thread. can you give a quick summary here? you are saying users don't want replication and cg?
16:28:49 I'm unconvinced that for CG or for replication the whole API track and cross-backend approach is worthwhile
16:29:00 do we want to expose a new create call which creates all volumes in a CG?
16:29:27 jgriffith: +1... I feel the same too
16:29:29 Arkady_Kanevsky: Bulk operations are hard
16:29:41 Arkady_Kanevsky: From a rollback PoV
16:29:59 jgriffith: do you think cg implements that currently?
16:30:11 cross backend approach
16:30:18 It is hard to ensure that volumes created in a CG are on a back end that can take a CG snapshot. Thus offloading it to the backend (driver) will be almost impossible.
16:30:41 jgriffith, DuncanT-: so now you don't think any of these features should land at all
16:31:05 xyang1: I think it should land, personally, I can see the use
16:31:22 xyang1: I'm asking if anybody has tried to write a driver for your design yet?
16:31:50 DuncanT-: I see blueprints on ceph and the IBM driver already
16:31:52 xyang1: not us... but it should not be difficult as per the driver api
16:32:01 DuncanT-: I think it would be good for new features like this to have a supporting driver. much like what jgriffith was doing with the new connector work + LVM
16:32:03 Arkady_Kanevsky, the backends can do snapshots today, I'm not sure how CG complicates it that much, other than coordinating the snapshots for the backends in the CG.
16:32:03 DuncanT-: we are also looking at it internally
16:32:14 xyang1: I'll look at those blueprints, thanks
16:32:32 DuncanT-: it'll expose issues in your design by actually trying to use it
16:32:42 xyang1: I'm not saying don't merge, I'm saying 'get feedback from the people writing the drivers'
16:32:45 thingee: I have an lvm implementation. It is just that the quiesce call needs to wait
16:32:59 OK. I will comment in review
16:33:04 xyang1: got a link?
16:33:21 thingee: https://review.openstack.org/#/c/104732/
16:33:25 thanks
16:33:35 thingee: I have a (hacky) dm module that I'm trying to code up the driver support for.... it is feeling clunky, but it might just be me
16:34:07 xyang1: oh I see. I missed that it was in the initial patch
16:34:42 so we have a couple of people unsure of the approach.
16:34:48 DuncanT-: https://blueprints.launchpad.net/cinder/+spec/consistency-groups, you can see the dependent blueprints for other drivers here
16:35:15 xyang1: how do you feel about the approach with the feedback you have?
16:35:30 thingee: which one? from navneet?
16:35:38 xyang1: jgriffith and DuncanT-
16:35:42 I'm unsure of navneet's feedback
16:36:19 thingee: on the review page if you want to have a look
16:36:24 jgriffith, DuncanT-: I'm not sure what your feedback is. what is the alternative?
16:36:33 I'm only having issues at the detail level, not the whole approach, and it might be me.... short on time to spend on it, hence the lack of negative reviews
16:37:07 DuncanT-: the bigger question is if the community is ok with the design and approach?
16:37:24 23 minute warning
16:37:38 navneet: Are you ok with it?
16:37:43 thingee: jgriffit1 seems to have concerns not just for cg, but also volume replication
16:37:50 but
16:37:59 ok, let's move on, but it looks like the overall feedback from people paying attention to the design doesn't seem positive. and jgriffit1 you mentioned the operators list being unsure of the usage of it.
16:38:01 navneet: There's no point being 'not ok' without details
16:38:11 DuncanT-: not sure... it does not seem like a general approach
16:38:19 I'll read the operator's mailing list thread
16:38:26 DuncanT-: details on the review page
16:38:31 thingee: the spec was thoroughly reviewed and approved
16:38:33 I'm reading through the thread now
16:38:33 navneet: I'll read your review
16:38:33 DuncanT-: from navneet's comments it doesn't appear so. Honestly I haven't really taken a look at it myself to really weigh in.
16:38:53 xyang1: let's take this offline after the replication walkthrough meeting
16:38:59 #action Duncan to read all the current feedback and summarise by tomorrow
16:39:04 I want to understand why people changed their mind from the approved spec.
16:39:18 thingee: ok, thanks
16:39:20 DuncanT-: I think interested parties should all do that :)
16:39:25 next topic?
16:39:27 thingee: yeah that's not good... sorry I did not read the spec
16:39:30 xyang1's patch and approach looks OK to me. I've also read the operator thread, couldn't come up with a better idea so far.
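(Editor's note: a sketch of the driver-facing surface in the CG patch, following the consistency-groups blueprint; simplified, see the review for the real signatures. It shows the point debated above: each call lands on a single backend, so the scheduler must place every member volume on a backend that can snapshot them as a group.)

```python
class ConsistencyGroupDriverMixin(object):
    """Illustrative CG hooks a backend driver would implement."""

    def create_consistencygroup(self, context, group):
        """Create the group container on this backend."""
        raise NotImplementedError()

    def delete_consistencygroup(self, context, group):
        """Delete the group and its member volumes."""
        raise NotImplementedError()

    def create_cgsnapshot(self, context, cgsnapshot):
        """Quiesce and snapshot all members at one point in time."""
        raise NotImplementedError()

    def delete_cgsnapshot(self, context, cgsnapshot):
        raise NotImplementedError()
```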
16:39:43 #topic Hitachi driver
16:39:49 hi
16:40:04 I'm the code submitter of the Hitachi driver
16:40:14 saguchi: hi
16:40:15 howdy saguchi
16:40:23 Nice to meet you.
16:40:35 hey
16:40:35 I would like you to review my driver. I've updated it in accordance with hemna's comments.
16:40:44 thanks for adding the test results. I haven't looked at them yet today
16:40:46 winston-d_: thanks for all your reviews!
16:41:10 saguchi: thanks for the submission. we got about 8 drivers submitted that want the same thing. it looks like you're targeted for juno though
16:41:31 saguchi: was there anything else you wanted to mention?
16:41:36 and I want to know if there are any other requirements. Will my driver be merged just by passing review?
16:41:46 I've just spotted a minor rootwrap filter issue
16:41:52 thingee, the cert results were missing, and they are up on the review now.
16:42:00 saguchi: Passing review merges it
16:42:11 saguchi: Have you got a CI setup ready to go?
16:42:13 DuncanT-: understood
16:42:32 a strawman alternative: https://review.openstack.org/#/c/109774/
16:42:32 As for CI, I'm working on this with steven.
16:42:57 saguchi, awesome
16:42:58 I think you've talked with steven about Hitachi CI by email.
16:43:16 saguchi: Yes, I've got him on my list, thanks
16:44:02 DuncanT-: Please put your review comment about rootwrap. And I will update based on it.
16:44:10 jgriffit1, so that's SolidFire to SolidFire replication yes?
16:44:16 hemna: yes
16:44:17 Hi Duncan, on the call with Steve S. He's waiting for service account approval
16:44:25 ok, thanks saguchi and apologies on the review taking time. We're low on reviewers, and it's definitely appreciated if you help with reviews on other approved blueprints to help us get to drivers. next topic?
16:44:26 sorry... I have internet again, not sure how long it will last
16:44:28 saguchi: Done
16:44:35 Set up for CI is in progress.
16:44:57 CI nightmare :(
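(Editor's note: for readers unfamiliar with the rootwrap issue DuncanT- flags above: every command a driver runs as root must have a matching filter entry shipped under etc/cinder/rootwrap.d/. The entry below shows the format with an illustrative command, not the actual Hitachi filter.)

```ini
[Filters]
# <name>: CommandFilter, <executable>, <run-as user>
tgtadm: CommandFilter, tgtadm, root
```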
16:44:58 #topic replication
16:44:58 saguchi, I'll look at your driver again today.
16:45:05 Hi
16:45:09 we have a walkthrough today with ronenkat
16:45:20 Replication walkthrough for driver owners at the top of the hour, everyone is invited.... details on the Cinder meeting page...
16:45:32 jgriffit1, so we could do basically the same thing with our drivers. we could do 3par -> 3par replication with just driver changes
16:45:45 hemna: that's sort of my point
16:46:03 hemna: yeah
16:46:04 (That's all from my side)
16:46:04 do operators want to replicate from a solidfire to a 3par?
16:46:22 or maybe lvm -> solidfire?
16:46:25 hemna: people would probably love it if it was "good"
16:46:38 hemna: they don't want it if it's crap (ie dd over wire)
16:46:42 yah
16:47:05 that's pretty much what I thought as well.
16:47:05 I think that is a massive undertaking, and would need developing and testing out-of-tree for a good long time
16:47:19 DuncanT-: +1
16:47:36 But really that's a real "value add" from Cinder
16:47:43 wrapping vendor calls is... meh
16:47:51 I'd love to work on it.
16:48:00 Not sure if my management would approve
16:48:06 I would think we would also want to see two supporting drivers, separate vendors really showing this working.
16:48:14 yah I think it would be pretty amazing if it worked well
16:48:15 action... DuncanT- implements cross platform rep this weekend :)
16:48:33 that's what I'd call speed :p
16:48:34 Looking at core value add ideas might be a good chat for the meetup?
16:48:40 jgriffith: not enough whiskey in the world to help DuncanT- with that.
16:48:45 DuncanT-, +1
16:48:50 thingee: LOL
16:48:52 DuncanT-: +1
16:49:05 thingee: +1
16:49:10 Damn it, now I'm busy thinking about how it could work! You're all bad people!
16:49:19 I just wanted to throw out that sometimes devs implement things that make no sense to end users
16:49:21 :)
16:49:26 I'd like to be better about that
16:49:38 by users I mean OS Operators
16:49:42 So, what do we do about the existing replication patch?
16:49:49 we're lucky to have DuncanT- and thingee representing some of that community here
16:50:08 DuncanT-: I dunno... I'm not proposing anything necessarily
16:50:14 DuncanT-: reviews would be nice... :-)
16:50:26 DuncanT-: I just wanted to have everybody think about the long term implications
16:50:34 and what we're delivering vs what people need/want
16:50:53 ronenkat: I'm in the 'I don't understand this enough to review it' camp.... maybe your call will help
16:51:13 also I've said before, complex features for the sake of complex vendors or exposing one feature in a specific vendor's product don't seem worthwhile
16:51:20 thingee's session about what Cinder is may again be required for new folks
16:51:51 jgriffith: DRBD will help with cross vendor replication
16:51:59 anyway... I'm also a fan of start small and grow
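(Editor's note: to make "dd over wire" concrete: a generic cross-backend replica with no array support degenerates into a full block copy through the host, with none of the incremental or point-in-time semantics of array-native replication. A sketch of that naive baseline; the helper is hypothetical, purely to illustrate what is being dismissed as "crap" above.)

```python
import shutil

def naive_replicate(src_path, dst_path, chunk=8 * 1024 * 1024):
    """Full copy of every block from the source to the replica device.

    Correct but slow: every byte crosses the host, every sync is a
    full pass, and there is no consistent point-in-time view.
    """
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        shutil.copyfileobj(src, dst, chunk)
```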
16:52:02 #topic SecureNFS
16:52:07 jgriffith: I think no one with common sense would see the benefit for cinder of a vendor-specific feature.
16:52:16 bswartz: ^
16:52:19 ronenkat: ummm... dunno about that
16:52:28 bswartz: Are you going to speak on this?
16:52:33 joa: You'd be surprised ;)
16:52:45 okay so I'm virtual glenng today
16:52:46 bswartz????
16:52:50 jgriffith: maybe my definition of common-sense is a bit too narrow then :p
16:52:58 joa: LOL
16:53:12 as many of you know we're changing the NFS drivers to create files as 0660 instead of 0666 to reduce the security risk
16:53:37 however a number of users will have a broken configuration on upgrade if they don't make changes to their NFS environment
16:54:00 bswartz: that was my next question :)
16:54:13 so we're thinking that we will make the new driver config flag a REQUIRED option and force users to select either the old or the new behaviour
16:54:17 bswartz, well everyone reads documentation right?
16:54:35 *haem*
16:54:50 bswartz: that might be reasonable
16:54:51 rather than simply defaulting to the old behaviour (which is less secure) or defaulting to the new behaviour (which will break people who don't read release notes)
16:55:02 hemna: What are you smoking?
16:55:03 that seems almost as broken to me, the upgrade path is still not smooth enough to just work...
16:55:05 bswartz: is there a way to determine "if existing use x; else use y"
16:55:09 jungleboyj, :P
16:55:31 hemna: Please share
16:55:34 dynamic detection would be much better IMO
16:55:35 jgriffith: I would love it if we could detect whether it's a new install vs an upgrade
16:55:42 jgriffith: Too many variables for automation. Plus, NFS server side changes are required to be secure.
16:55:47 so far we haven't come up with a way that works more than 90% of the time
16:55:50 bswartz: so what's Cinder actually managing here? Nothing?
16:55:59 jungleboyj, ask jgriffit1. CO is where it's legal :P
16:56:03 bswartz: Oh... 90%
16:56:13 bswartz: ls -l all the files, and if they are 600 use the new behaviour, otherwise spam the logs and fall back?
16:56:14 That's better than I thought you were going to say :)
16:56:24 the idea is that it's more user friendly to force a choice than to pick a bad default
16:56:29 bswartz: any thoughts on an upgrade helper script?
16:56:32 i kinda like Duncan's idea there...
16:56:47 hemna: You had to go there, didn't you? :-)
16:56:49 eharney, +1 I'm with you on this one.
16:56:56 DuncanT-: it's a decent first try IMO
16:56:56 Make that the config option 'auto' and make that the default?
16:57:06 forcing a choice doesn't seem viable to me as it just breaks upgrades... i don't think we ever do that...
16:57:07 DuncanT-: that will screw up in the 10% case
16:57:10 DuncanT: Think about multiple Cinder nodes where some have volumes and some don't. Then you end up with mixed behavior.
16:57:11 although I still wonder if we could provide a clever "upgrade" script
16:57:11 DuncanT-: That sounds reasonable.
16:57:14 DuncanT-: what if the share has files not managed by cinder?
16:57:20 bswartz: What is the 10% case?
16:57:29 the problem is that the things that need to be changed are outside the control of cinder
16:57:36 navneet: Cinder should know what it's managing shouldn't it? :)
16:57:43 navneet: Then you'll get the old behaviour and screaming in the logs... that is better than breakage
16:57:49 we're aware of a lot of scenarios where stuff only works because of the 0666 permissions
16:57:50 and thus the flaw in the entire design IMO
16:57:55 we should still default to the old behavior with a config option for the new way. then folks can upgrade cinder and not be broken. when they read the release notes, they do what they need to and then bump the config option to use the new perms.
16:57:57 jgriffith: can't guarantee for all nfs vendors..
16:58:04 navneet: You can set the config option to something other than auto and the screaming stops
16:58:08 DuncanT-: that would be multibackend upgrade scenarios where some backends think they're new installs
16:58:11 navneet: got ya
16:58:13 2 minute warning!
16:58:30 we could just drop NFS support if that's better?
16:58:39 Now, now...
16:58:42 right after we drop FC support.
16:58:42 * bswartz glares at jgriffith
16:58:49 I'm just going to inject my point here: Please see and review https://review.openstack.org/#/q/project:openstack/cinder+comment:code_cleanup_batching+-status:merged,n,z - it is cleanup merge week!
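(Editor's note: DuncanT-'s 'auto' heuristic from above, sketched: stat the files already on the share, enable the new 0660 behaviour only if nothing is world-accessible, otherwise fall back loudly. The function name and the exact option it would feed are hypothetical; this is an illustration of the idea, not the eventual patch.)

```python
import logging
import os
import stat

LOG = logging.getLogger(__name__)

def detect_secure_permissions(mount_point):
    """Return True if it looks safe to enable the new 0660 behaviour."""
    for name in os.listdir(mount_point):
        mode = os.stat(os.path.join(mount_point, name)).st_mode
        if mode & (stat.S_IROTH | stat.S_IWOTH):
            # The 10% case: existing world-accessible files mean an
            # upgrade would break; keep old behaviour and spam the logs.
            LOG.warning('world-accessible file %s found on %s; keeping the '
                        'old insecure NFS permissions. Set the secure-NFS '
                        'option explicitly to silence this warning.',
                        name, mount_point)
            return False
    return True
```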
16:58:51 bswartz: you can't reach me from there
16:59:27 so I'm hearing that adding a new required option with no default isn't an acceptable option?
16:59:36 bswartz: Having an 'auto' where you get the old, insecure behaviour (and log spamming) in 10% of cases is fine
16:59:41 bswartz: I'm not sure
16:59:52 bswartz: It's a better option than breaking everyone
16:59:58 bswartz: but not as good as a dynamic fix
17:00:07 bswartz: Anybody who really cares will see the logs and/or the docs and change the default
17:00:12 what about simply defaulting to the existing behavior?
17:00:19 I don't think the question is whether to go that route or not, but if there's any possible way to do something better
17:00:24 bswartz, that's what I suggested earlier.
17:00:29 that won't break anyone, but people who don't read the docs may have insecure systems
17:00:33 bswartz: probably a good first step, really.
17:00:39 bswartz: hemna the problem with that is we "never" fix the underlying issue that way
17:00:51 people use defaults, despite what we'd like to tell ourselves
17:00:56 hemna: for people who don't read docs we scream in logs
17:00:59 upgrade or new install
17:01:04 * joa nods, agreeing with jgriffith
17:01:04 so you have it yell into the logs for Juno and then have it break stuff in K
17:01:12 An auto guess that in a few cases goes with the old behaviour unnecessarily should be fine...
17:01:14 jgriffith: do you know a way to reliably determine a new install?
17:01:16 that would solve all of this
17:01:27 bswartz: wait.... isn't there info in the Cinder DB regarding shares at all?
17:01:35 you mean volumes?
17:01:37 well screaming in the log may help some folks
17:01:39 bswartz: In other words... if the DB is empty, use new settings
17:01:41 jgriffith: not permissions
17:01:43 bswartz: else use old
17:01:47 and for people that don't read and don't look at the logs.... do we even care?
17:01:49 navneet: no, not perms
17:01:50 jgriffith: can drivers simply read the DB like that?
17:01:54 don't care about that
17:02:06 It might require a cinder core change
17:02:09 bswartz: I'd make it part of *something* else
17:02:19 so the manager would have to tell the driver "this is a new install"
17:02:20 Time
17:02:21 back to the helper script idea
17:02:23 jgriffith: driver reading the DB is not secure somebody told me... not recommended
17:02:36 bswartz: I'm saying set that higher up
17:02:53 why not just puke a message in the log and say the old way is deprecated. provide a config option to allow the user to force the new perms.
17:03:05 then in a release or 2, we remove the old deprecated way.
17:03:29 hemna: you just kick the can
17:03:32 seems reasonable to me
17:03:35 hemna: you still break upgrades
17:03:36 jgriffith: I'd like to hear more
17:03:39 and in between if we can find a mechanism for doing the auto discovery, done.
17:03:41 you just told them you're going to break them
17:03:46 about your idea for "higher up"
17:03:52 could you guys switch to #openstack-cinder?
17:03:55 bswartz: yeah... let's chat in #cinder
17:03:57 if you have something in mind it might be perfect
17:03:58 better to tell them you are going to break them in the future than breaking them now.
17:04:05 arnaud: sorry for running over
17:04:14 dunno, just trying to find a reasonable compromise.
17:05:01 #endmeeting