16:01:35 #startmeeting cinder
16:01:36 Meeting started Wed Feb 19 16:01:35 2014 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:37 Hello
16:01:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:39 The meeting name has been set to 'cinder'
16:01:44 o/
16:01:55 hi
16:01:55 pheww... quite a week
16:02:00 and it's only Wed :)
16:02:13 Indeed.
16:02:18 Ok... we've got a number of things on the agenda so let's get on it
16:02:25 #topic I3 Status check/updates
16:02:46 IMO there's a ton of cruft in here
16:03:06 #link https://launchpad.net/cinder/+milestone/icehouse-3
16:03:35 I agree.
16:03:42 The BP and bug list should be frozen at this point
16:03:47 no new proposals
16:03:59 bugs we can slip to RC's of course
16:04:06 but feature proposals are done
16:04:17 so let's focus on those for now
16:04:30 The way I've been doing this is to sort by priority
16:04:30 with the number of reviews already in, I worry about the things that are just "started"
16:04:40 should any BP not in Needs Code Review be pushed to Juno?
16:04:41 thingee: understood
16:04:55 thingee: I think we may want to propose dumping some of these
16:05:01 DuncanT: let's start with yours
16:05:09 DuncanT: are you actually working on this?
16:05:23 #link https://blueprints.launchpad.net/cinder/+spec/filtering-weighing-with-driver-supplied-functions
16:05:24 Yes. I've got code that works but needs tidying up for submission
16:05:42 Realistically if it isn't in tomorrow it is not going to be in
16:05:51 DuncanT: Ok... fair enough
16:06:01 DuncanT: I'm going to hold you to that when I wake up in the AM :)
16:06:09 Fair enough
16:06:29 #link https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api
16:06:52 rohit404: is this you?
16:06:56 ^^
16:07:02 or dosaboy ?
16:07:13 or nobody
16:07:29 jgriffith: not me
16:07:41 rohit404: sorry.. Ronen Kat
16:07:50 anyway... it's been proposed since Jan
16:08:01 https://review.openstack.org/#/c/69351/
16:08:17 IMO this one is higher on the priority list for this week
16:08:49 dosaboy: I think we need to break the BP though between the metadata and export/import
16:08:49 Yeah, I have a list of reviews to take a look at. I should add this.
16:08:55 got some drafts on that one.
16:09:23 mostly the body key export-import confuses me
16:09:53 thingee: can you point us to the part you're thinking of?
16:10:04 https://review.openstack.org/#/c/69351/5/cinder/api/contrib/backups.py
16:10:07 l325
16:11:15 whoops sorry i'm late
16:11:22 dosaboy: doesn't seem to be around
16:11:26 jgriffith: implementing import/export without metadata would kind of be a regression since you would not be able to import/export e.g. bootable volumes
16:11:32 dosaboy: oh... there he be
16:11:36 aye aye
16:12:12 i've not had a chance to review that patch yet tbh
16:12:23 dosaboy: yeah
16:12:35 so ok let's review and see if we can get info from Ronen
16:12:51 dosaboy: my question is the missing parts to complete the BP
16:13:11 dosaboy: I agree we need to get the metadata import/export landed still
16:13:34 jgriffith: I will make sure Ronen is aware we have Qs.
16:13:44 dosaboy: It's just unclear what actually constitutes this bp being "implemented"
16:13:48 jgriffith: which bp?
16:13:59 dosaboy: https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api
16:14:48 I'm a newbie here :)
16:15:08 jgriffith: ok i'll see if I can get that clarified
16:15:17 dosaboy: thank you sir
16:15:19 ik__: welcome
16:15:38 dosaboy: I'd like to separate it out to what we're going to do in Icehouse and maybe reference what's still ongoing
16:15:52 ok sure
16:15:53 ik__: Welcome to the party!
16:15:55 we've got a number of bp's that we aren't very clear on "when is it done"
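[Editor's note: for readers following review 69351, the "body key" confusion above concerns the request/response body of the backup export/import extension. Below is a minimal Python sketch of the kind of round trip involved; the endpoint paths and the "backup-record" key are illustrative assumptions, not something confirmed in the meeting.]

    # Hypothetical sketch of the backup export/import round trip under
    # review in https://review.openstack.org/#/c/69351/ -- endpoint and
    # key names are assumptions for illustration only.
    import requests

    BASE = "http://cinder-api:8776/v2/TENANT_ID"
    HEADERS = {"X-Auth-Token": "AUTH_TOKEN"}

    # Export: ask Cinder for a portable record describing an existing backup.
    resp = requests.get(BASE + "/backups/BACKUP_ID/export_record",
                        headers=HEADERS)
    record = resp.json()["backup-record"]
    # The record would carry e.g. the backup service name and an opaque
    # location blob, enough for another deployment to find the data.

    # Import: hand that record to another deployment so it can re-create
    # the backup entry and restore from the shared backup storage.
    requests.post(BASE + "/backups/import_record",
                  headers=HEADERS, json={"backup-record": record})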
16:16:18 Next...
16:16:31 bswartz: you here?
16:16:52 #link https://blueprints.launchpad.net/cinder/+spec/multiple-capability-sets-per-backend
16:17:15 BP proposed and approved in Dec but no activity
16:17:24 considering this will not make it
16:17:30 nope, push to Juno, next
16:17:33 I'll get with bswartz when he's around
16:17:36 jgriffith: i think we spoke about this a few weeks ago and bswartz said it was more complicated than he thought, and it would be juno
16:17:52 He's said before that he's stuck trying to get a clean implementation
16:18:01 K... done
16:18:04 thanks
16:18:25 jgriffith: yes
16:18:35 bswartz: too late, we figured it out without you :)
16:18:40 bswartz: shout if we're wrong
16:18:45 Next...
16:18:46 hah yes thank you
16:18:51 #link https://blueprints.launchpad.net/cinder/+spec/per-project-user-quotas-support
16:18:58 I'm not happy with this one
16:19:02 two things...
16:19:06 1. It's ugly
16:19:10 2. Is it really needed?
16:19:40 https://review.openstack.org/#/c/66772/
16:19:50 Don't know if anybody else has any thoughts on this?
16:20:05 I have concerns about it for a number of reasons
16:20:26 It certainly is ugly, and the quota code has proven to be fragile in the past....
16:20:45 not the least of which being we have existing quota consistency issues, and piling user quotas (which I don't know how valuable that is anyway) on top of it makes things worse IMO
16:21:02 and I'm not sure about the implementation anyway
16:21:17 Anybody object to pushing it?
16:21:24 I mean, pushing it out
16:21:26 I can see the value of the feature but I think we should punt to J since the implementation is not ready
16:21:40 anybody else?
16:21:41 jgriffith: how can you object to its usefulness? nova has it! :)
16:21:42 DuncanT: thanks
16:21:51 avishay: very very poor argument
16:21:58 avishay: although it's getting used more and more lately
16:22:07 thingee has a great cartoon of that
16:22:07 jgriffith: agree. we should invest a bit of effort to clean up quotas first.
16:22:10 jgriffith: :)
16:22:12 what is the usefulness of it exactly? why don't project-level quotas suffice?
16:23:11 ameade: Allowing the tenant to do finer-grained quotas inside their tenant is something some users like, e.g. in a public cloud context - means one account can be shared more widely
16:23:52 I think push to Juno .. given that quotas are a bit broken, this will also be broken
16:24:00 Done
16:24:05 avishay: +2
16:24:29 Sorry.. I'm slow because i'm typing notes, updating reviews and bp's :)
16:24:42 :-)
16:24:45 jgriffith: your secretary took the day off? :)
16:25:05 hehe
16:25:10 * jungleboyj is slow because I am laid out with a stomach bug. Was so nice of my boys to share.
16:25:29 avishay: yeah... :)
16:25:37 * DuncanT is just slow.
16:25:40 hah
16:26:11 :-)
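[Editor's note: the per-user quota semantics debated above boil down to a user-level limit that overrides the project-level one when set. An illustrative-only Python sketch of that fallback; all names here are hypothetical and not taken from patch 66772.]

    # Illustrative sketch of per-project user quotas: a user-level limit,
    # when present, overrides the project-wide limit.
    PROJECT_QUOTAS = {"proj-a": {"volumes": 10}}
    USER_QUOTAS = {("proj-a", "alice"): {"volumes": 3}}

    def effective_limit(project_id, user_id, resource):
        """Return the user's limit if one is set, else the project limit."""
        user_limit = USER_QUOTAS.get((project_id, user_id), {}).get(resource)
        if user_limit is not None:
            return user_limit
        return PROJECT_QUOTAS.get(project_id, {}).get(resource)

    # alice is capped at 3 volumes even though proj-a allows 10; any other
    # user in proj-a falls back to the project-wide limit of 10.
    assert effective_limit("proj-a", "alice", "volumes") == 3
    assert effective_limit("proj-a", "bob", "volumes") == 10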
16:26:27 Ok, there are two more mediums that I think we need to talk about
16:26:35 #link https://blueprints.launchpad.net/cinder/+spec/local-storage-volume-scheduling
16:26:38 and
16:26:53 avishay: you think this is gonna make I-3 and could it include the meta support? (see comment in BP)
16:27:01 this being - https://review.openstack.org/#/c/73456/
16:27:11 #link https://blueprints.launchpad.net/cinder/+spec/cinder-force-host
16:27:22 both have code proposed
16:27:33 both are kinda fugly
16:27:35 dosaboy: TSM driver will not have metadata support in icehouse
16:28:16 jgriffith: can we start with the 2nd (force host)? i think that's easier because nobody liked it
16:28:18 I'm extremely concerned that the local scheduler is not solving a clearly defined problem and should be thought about more carefully
16:28:28 avishay: :) sure
16:28:35 avishay: okey
16:28:37 avishay: I think that impl gets kicked
16:28:42 jgriffith: i think we had a round of discussions for force-host
16:28:49 I like the idea of force host, but not if the implementation is anything other than clean, simple and unobtrusive
16:28:54 avishay: but we could look at doing an exception to get the feature for Icehouse still
16:29:04 DuncanT: ^^
16:29:27 I don't see why force host is needed when volume types can achieve the same effect?
16:29:29 and figure out who/when somebody rewrites it
16:29:31 The proposed implementation is fugly
16:29:49 bswartz: admin/testing/similar - nothing tenant facing
16:29:50 bswartz: yes, that's a debatable point
16:30:01 I think that winston put forth some very good objections in the review
16:30:06 So there are some "holes" here as well
16:30:10 bswartz: i agree
16:30:18 keep in mind we don't expose "host" anywhere really either
16:30:21 DuncanT makes an excellent point
16:30:23 at least not a mapping
16:30:46 * jgriffith only sees this as something for admin, and still limited
16:30:55 I do get the need for testing
16:30:56 I think we need to build some better admin tools
16:31:11 Certainly it isn't worth ugly code
16:31:16 so I guess this falls lower on the priority list
16:31:30 I could live without this ever being implemented
16:31:40 And if yes, clean and admin-only
16:31:45 DuncanT: bswartz avishay OK... I'm going to say we punt, but if somebody cares enough to write a clean admin interface into this we can look at it
16:31:55 and it would be this week or early next
16:32:02 jgriffith: how far can you punt it? :)
16:32:05 otherwise it's not something we seem to really "need"
16:32:13 avishay: depends on how long of a running start I get
16:32:15 avishay: +1 I could live w/o it
16:32:16 haha
16:32:20 If somebody really needs it, they've got to pony up good code...
16:32:26 ok..
16:32:30 I'm just going to defer it then
16:32:52 we need to remember to detail the bp better in Juno (I'll forget) :)
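[Editor's note: bswartz's volume-type alternative to force-host works because the scheduler's capabilities filter already matches a type's extra specs against what each backend reports. A rough Python sketch of that matching; the helper function is invented for illustration, though volume_backend_name is a real extra spec key.]

    # Sketch of the volume-type alternative to force-host: an admin-defined
    # type whose extra specs name exactly one backend effectively pins
    # placement. The scheduler's capabilities filter does roughly this.
    def backend_matches(type_extra_specs, backend_capabilities):
        """True if the backend reports every extra spec the type demands."""
        return all(backend_capabilities.get(k) == v
                   for k, v in type_extra_specs.items())

    extra_specs = {"volume_backend_name": "backend1"}

    assert backend_matches(extra_specs, {"volume_backend_name": "backend1"})
    assert not backend_matches(extra_specs, {"volume_backend_name": "backend2"})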
16:33:10 back to local-storage-volume-scheduling?
16:33:16 sure
16:33:19 i think thingee and DuncanT had comments here?
16:33:51 spoke to jgriffith about it. This seems aligned with what vishy was talking about, along with jgriffith talking about brick
16:34:09 if we want to help nova in this regard, this is something we have to move towards.
16:34:10 I'm concerned that the semantics just aren't defined anywhere... they seem to want ephemeral volumes from the commit message, but aren't implementing that
16:34:51 So it's not what we actually talked about in Portland and want
16:34:54 but it's a start
16:34:58 DuncanT: why ephemeral?
16:35:06 DuncanT: it's not ephemeral (even if it reads that way)
16:35:17 DuncanT: it's really about local attach for perf reasons
16:35:24 DuncanT: that's really it in a nutshell
16:35:37 The ability to schedule local disk resources on the compute node for an instance to use
16:35:45 does nova support booting a VM on the same host as a cinder volume?
16:35:45 But what happens when the instance dies? What are the rules for connecting the volume to a new instance?
16:35:46 instead of SAN attached
16:35:52 DuncanT: same as they are today
16:35:58 DuncanT: It's still a Cinder volume
16:36:16 avishay: there's no correlation
16:36:27 avishay: I mean... there's no shared knowledge
16:36:34 jgriffith: so you /can/ remote attach it afterwards, on any compute host? That's better... maybe just a docs problem then
16:36:35 avishay: all this patch does is provide that
16:36:46 DuncanT: Well... no :(
16:37:03 DuncanT: so remember we have a "block" driver now that's local disk only
16:37:08 jgriffith: i meant to ask what DuncanT asked - if you shut down the VM, can you bring another one up to attach to your volume?
16:37:08 no iscsi, no target etc
16:37:12 jgriffith: oh, well then perhaps I'm still not understanding :)
16:37:13 there was a plan to add the so-called "shared knowledge" to one or both schedulers though, wasn't there?
16:37:14 HOWEVER you make an interesting point
16:37:28 jgriffith: IMO that isn't a cinder volume...
16:37:33 it would be interesting to extend the abstraction
16:37:46 treat it more like a real cinder vol
16:38:00 jgriffith: Or at least we don't have a rich enough interface to express that
16:38:05 the difference is if it's "local" to the node, your provider_location and export is just the dev file
16:38:11 instead of a target
16:38:11 'Island' tried that, right?
16:38:30 DuncanT: I never really figured out what they were trying ;)
16:38:43 DuncanT: but yes, I think it was along the same lines
16:38:46 So anyway...
16:38:49 My problem is that there's nothing in the return of 'cinder list' that tells me which vms I can / can't connect to
16:38:50 My thoughts on this are:
16:39:01 Useful feature, needs a bit of thought and cleaning
16:39:10 I'm ok with letting it ride til the end of the week
16:39:25 if it's not cleaned up and made mo'betta then it gets deferred
16:39:49 DuncanT: Yeah... to your point
16:39:52 I'd really like to hear in detail what is supposed to happen after detach
16:39:59 DuncanT: the cinder list comment is good. I think you should raise that in the review
16:40:04 DuncanT: I'd say go back to my suggestion about how to abstract it so it "CAN" have a target assigned and work like any other cinder volume
16:40:17 thingee: DuncanT I don't want to do that :(
16:40:28 thingee: DuncanT I'd rather make it more "cinder'ish"
16:40:37 I agree - make it more cinderish
16:40:39 So the patch looks different this way
16:40:44 jgriffith: +1
16:41:14 It becomes more of a filter scheduling deal
16:41:24 My understanding of the proposal was to make it like a regular cinder volume with a hint that allowed you to bypass the iscsi layer when the target and initiator would be on the same box
16:41:26 So the hint applies, but in every other respect except performance, it is a normal cinder volume
16:41:28 and attach then determines "hey... can I just do a local attach or do I need an export"
16:41:29 i think nova also needs a similar way of saying "launch a VM on the same host as this cinder volume"
16:41:47 DuncanT: for the most part
16:41:50 The call out to the nova API in the API server still worries me too
16:41:57 avishay: yeah, it probably needs to go both ways
16:42:01 But that is an implementation detail
16:42:10 I don't want to go too deep on this
16:42:25 I've been going back and forth on the idea for about a year
16:42:34 this was what we were aiming for with brick
16:42:46 but that got completely sideways
16:43:21 whatever happened to brick
16:43:37 is it split out from cinder yet?
16:43:58 :)
16:44:23 I think a discussion about local volumes needs to start with answering the question how cindery do you want them?
16:44:33 and there's new stuff in the works for cross-project communication and scheduling
16:44:33 that solves a lot of this problem
16:44:33 so I hate to get carried away and invest a ton because I think that stuff is going to land in J
16:44:33 alright... I'll take a look at this later and update the BP and review
16:44:38 bswartz: not yet, WIP
16:44:38 if we get it great, if we don't we don't
16:44:38 agreed?
16:44:41 bswartz: no, I flat out haven't gotten around to it
16:45:03 jgriffith: sounds good
16:45:06 bswartz: and the LVM code kept changing so much this past cycle I didn't feel it was stable enough to break out
16:45:15 bswartz: It's J-1 now though :)
16:45:33 jgriffith: timecheck 15 minutes left
16:45:40 jgriffith: so when can we push the nova guys to use it instead of their crappy attach code?
16:45:57 Ok.. sorry I took all the time up here
16:46:12 bswartz: I think hemnafk ported most of the initiator/attach stuff already?
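[Editor's note: the "make it more cinderish" idea above amounts to letting attach decide between handing back a raw device path and creating a normal export. A loose Python sketch of that split, modeled on Cinder's driver interface; the host comparison and the iSCSI target values are illustrative, not the proposed patch.]

    # Sketch of the local-vs-exported attach split discussed above.
    def initialize_connection(volume, connector):
        """Return a raw device path when the volume lives on the same host
        as the instance; fall back to a normal iSCSI export otherwise."""
        if connector.get("host") == volume["host"]:
            # Local attach: no target, no export -- provider_location is
            # just the dev file, as mentioned in the discussion.
            return {
                "driver_volume_type": "local",
                "data": {"device_path": volume["provider_location"]},
            }
        # Remote attach: behave like any other Cinder volume.
        return {
            "driver_volume_type": "iscsi",
            "data": {
                "target_iqn": "iqn.2010-10.org.openstack:%s" % volume["name"],
                "target_portal": "192.0.2.10:3260",
                "target_lun": 1,
            },
        }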
16:46:43 one more
16:46:47 #link https://blueprints.launchpad.net/cinder/+spec/when-deleting-volume-dd-performance
16:46:52 decent enough idea
16:47:03 but it's been stagnant since October
16:47:09 defer
16:47:11 IMO
16:47:24 not to mention as eharney points out there are considerations here
16:47:32 sure
16:47:37 jgriffith: +1 defer
16:47:41 If there's no code and nobody offering it, defer
16:48:04 even though the BP seems to contain code, there's no patch :)
16:48:34 I think the patch is in the BP -- it's literally 2 lines
16:48:52 Needs a config option too
16:48:57 and unit test
16:48:58 even so eharney's alternative suggestion seems reasonable
16:49:07 DuncanT: +2
16:49:14 I'll look at it later and consider implementing it
16:49:19 but for now it's off the table
16:49:48 that works
16:49:58 I'll get with eharney on his stuff later
16:50:14 My stuff is on the way (need one good day of no crisis or not being sick)
16:50:18 do people use the cfq scheduler on their volume nodes?
16:50:24 jgriffith: need a helping hand there? I've not started here yet.
16:50:36 ik__: reviews would be fantastic :)
16:50:55 cfq scheduler?
16:51:02 for ionice on that blueprint
16:51:31 jgriffith: even if I'm a novice? :)
16:51:34 guitarzan: sorry... don't know what you're saying :)
16:51:38 guitarzan: i would assume so
16:51:42 ik__: best way to learn the code is review :)
16:51:49 ik__: We will help you learn!
16:51:51 avishay: I guess if it helped for that person with the blueprint
16:51:57 jgriffith: +2
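[Editor's note: the two-line idea in the blueprint is to wrap the volume-wiping dd in ionice's idle class so deletes don't starve other I/O; that only has an effect under the CFQ I/O scheduler, which is why guitarzan asks about it. A minimal Python sketch; the function name and sizing are illustrative, and as noted above a real patch would still need a config option and unit tests.]

    # Sketch of the blueprint's idea: run the zeroing dd under ionice's
    # idle class (-c3) so volume deletion yields to all other I/O.
    # Effective only when the block device uses the CFQ scheduler.
    import subprocess

    def clear_volume(dev_path, size_mb):
        subprocess.check_call([
            "ionice", "-c3",  # idle scheduling class
            "dd", "if=/dev/zero", "of=%s" % dev_path,
            "bs=1M", "count=%d" % size_mb, "oflag=direct",
        ])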
16:52:05 Ok, so we're about out of time and I hogged the entire meeting
16:52:14 thingee:
16:52:17 You had some items
16:52:20 ik__: -1 is the best review you can provide
16:52:45 ik__: be critical
16:52:46 yes
16:52:54 not typos etc but the code quality
16:53:10 we've been getting bad about writing ugly code lately IMO
16:53:12 ok...
16:53:16 thingee... all yours
16:53:34 driver maintainers: please review and update your cert results here: https://wiki.openstack.org/wiki/Cinder/certified-drivers#Most_Recent_Results_for_Icehouse
16:53:44 topic change?
16:53:50 milestone consideration for drivers
16:54:03 #topic milestone consideration for drivers
16:54:11 #link https://review.openstack.org/#/c/73745
16:54:47 I want to propose something written down on how we allow new drivers in.
16:55:09 to avoid a backlog in milestone 3 when we should be focusing on stability
16:55:15 and documentation
16:55:17 thingee: +1
16:55:33 thingee: +1
16:55:34 hear hear
16:55:35 +1
16:55:37 thingee: +1
16:56:03 Think we all agree, and stated this before but never wrote it in stone :)
16:56:04 This means being more strict with maintainers, but in return we should be better on reviews in milestone 2 at getting a driver through
16:56:43 thingee: no arguments here :)
16:56:59 thingee: Yeah, that means we have to be better about tackling the hard reviews.
16:57:10 What about requiring a cert run for new drivers?
16:57:12 * ameade is curious about the cinder hackathon
16:57:18 :-) Badges for the cores!
16:57:22 ameade: +1
16:57:32 DuncanT: different topic - see the wiki
16:58:00 DuncanT: https://wiki.openstack.org/wiki/Cinder/certified-drivers
16:58:07 DuncanT, avishay: you both asked about the cert tests. I have a review for that https://review.openstack.org/#/c/73691/
16:58:21 needs to be more helpful as pointed out by jgriffith, otherwise, good
16:58:37 thingee: cool
16:59:01 so please comment on those two. let me know what wording should be fixed up. I would like to have this settled before J
16:59:09 thingee: coolio?
16:59:10 and finally hackathon
16:59:11 thingee: 2 minutes - want to advertise your cinder 3-day super coding thing?
16:59:14 thingee: next topic
16:59:19 #topic hackathon
16:59:34 so a hangout will probably be it. unfortunately spots are limited.
16:59:44 if you are going to be dedicated, please join the hangout :)
16:59:55 I'll post a link to the room
16:59:59 ok
17:00:02 topic or whatever for people to join
17:00:25 * hartsocks waves
17:00:30 thingee: can you post before Monday? other time zones can start earlier
17:00:31 I would really like to see us get through reviews together and finish some stability bugs.
17:00:31 hartsocks: :)
17:00:37 we're going we're going
17:00:37 yes!
17:00:48 bye all!
17:00:49 avishay: I'll likely be up late to start
17:00:51 ok done!
17:00:54 :)
17:00:59 thanks everyone
17:01:03 thanks
17:01:04 clear out for hartsocks
17:01:07 #endmeeting