16:00:01 #startmeeting cinder
16:00:02 Meeting started Wed Oct 22 16:00:01 2014 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'cinder'
16:00:15 hello everyone
16:00:22 hello
16:00:22 .o/
16:00:23 hi
16:00:23 hi
16:00:24 hi
16:00:26 o/
16:00:26 mornin
16:00:28 o/
16:00:30 hello
16:00:35 again a full agenda https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:00:36 morning
16:00:43 hi
16:00:44 hi
16:00:48 o/
16:00:53 hi
16:01:03 Hi
16:01:06 o/
16:01:07 Hi
16:01:07 hi
16:01:11 Hi
16:01:21 #topic Kilo summit topic proposals
16:01:33 https://etherpad.openstack.org/p/kilo-cinder-summit-topics
16:01:39 #link https://etherpad.openstack.org/p/kilo-cinder-summit-topics
16:02:02 hi
16:02:05 +1 for state machine as a session
16:02:12 hope people reviewed some of these. we don't have a lot of time, so let's just keep in mind for the summit we have a couple of different kinds of sessions that can happen
16:02:26 thingee, how many slots do we have to fill ?
16:02:52 1) scheduled slots, we have 7 of them. these should really be for things where we value getting consensus from outside of cinder.
16:03:09 2) meetup, this will be for talking about processes within the team and how we can improve
16:03:15 hemna: did you add cinder-agent and brick? do you want to discuss those
16:03:35 ok so state machine
16:03:55 xyang1, I didn't add those. If you think I should, I can. :)
16:04:07 xyang1, I was hoping to get those ironed out before Paris fwiw
16:04:11 We have enhancements waiting for the state machine concept to be introduced in cinder.
16:04:24 from the midcycle meetup we talked about having a PoC ready
16:04:29 to be looked at
16:04:58 Does anyone *not* think this would be fine to have a slot for?
16:05:16 yes we should have one for state machine
16:05:18 crickets
16:05:31 I think it's worth the discussion.
16:05:31 +1 for state machine
16:05:40 +1
16:05:41 #agreed state machine will be a session
16:05:42 +1
16:05:50 (i'd also like an agent and brick one as xing noted)
16:05:58 next: use mock properly
16:06:09 +1 brick, -1 mock
16:06:09 ameade: ping
16:06:17 hey
16:06:19 +1 brick
16:06:22 +1 brick
16:06:29 +1 for agent/brick
16:06:45 +1 brick and cinder agent
16:06:47 ameade: can we make this a cross-project session so other projects can take advantage of it?
16:07:09 thingee: I think that makes the most sense, is there a meeting to discuss cross-project sessions?
16:07:17 eharney, xyang1 I added brick to the etherpad
16:07:18 and if not that, have it be a slot at the meetup on friday?
16:07:38 i put it on the cross-project etherpad
16:07:40 ameade: I'm not aware of how cross-project sessions are being decided
16:07:49 hemna: great
16:08:11 thingee: sure, i'd love to just put the nail in the coffin for mock confusion, it's fairly trivial but ridiculous how much confusion is caused and time is wasted
16:08:34 #agreed using mock properly will be in the meet up or cross project
16:08:49 async error reporting is next
16:08:53 ameade: this is you again
16:09:07 so I think this is also fine for cross project or meet up
16:09:21 markmcclain, annegentle and russellb are going over the cross-project etherpad and are going to have a schedule up next week for cross-project
16:09:23 yeah agreed, this is a pervasive issue in openstack
16:09:31 i've thought about this a bit myself...
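(On the "use mock properly" item agreed above: the confusion in question is typically about keeping mocks honest with the real code they replace. A minimal sketch, with names invented for illustration and not taken from the cinder test suite:)

    # test_sketch.py -- illustrative only; names are not from cinder.
    import mock  # the external 'mock' library used by OpenStack tests


    class Connection(object):
        def create_volume(self, size_gb):
            raise RuntimeError("talks to real hardware")


    def provision(conn, size_gb):
        return conn.create_volume(size_gb)


    @mock.patch.object(Connection, 'create_volume', autospec=True)
    def test_provision(mock_create):
        # autospec=True makes the mock enforce the real signature, so a
        # call with a wrong argument count fails loudly instead of the
        # test silently passing against an impossible call.
        mock_create.return_value = 'vol-1'
        assert provision(Connection(), 10) == 'vol-1'
        mock_create.assert_called_once_with(mock.ANY, 10)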
16:09:33 IMO this could be a useful one if we have some particular proposals to think about
16:09:33 it was discussed yesterday at the tc meeting
16:09:49 if there aren't concrete proposals then probably not
16:09:55 eharney: you mean having a specific scheduled slot for cinder sessions?
16:10:08 thingee: right
16:10:20 it may be something we want to solve in cinder and have spread throughout openstack
16:10:39 I have a few ideas, the tricky part is a solution that isn't drastic to the current API
16:10:52 ameade: keep me in the loop on this one regardless, i'd be interested in sharing thoughts
16:11:04 eharney: definitely
16:11:09 ok let's go through the rest and we can circle back
16:11:13 I think it needs some discussion, since it is an area where the requirements are drastically different between different cloud types
16:11:18 scheduler enhancement
16:11:20 xyang1: this is you
16:11:21 kk
16:11:51 thingee: ok
16:11:52 thingee: I submitted a cinder spec for it
16:12:13 is there a lot of disagreement and complexity that we think warrants a slot?
16:12:15 this is a nice capability; as far as a session goes, we may want to see whether people are mostly in agreement already on this
16:12:21 right :)
16:12:21 thingee: I'd like to discuss it because there seem to be lots of opinions and I'd like to get consensus
16:12:42 xyang1: please post a link to the spec for those of us who are lazy
16:12:44 xyang1: link to the spec?
16:12:51 heh
16:12:51 ok, a sec
16:13:08 https://review.openstack.org/#/c/129342/
16:13:31 I'll update based on eharney's comments soon
16:13:37 xyang1: by a lot of disagreements you mean jenkins and eharney ?
16:13:59 this topic has a long history
16:13:59 thingee: :)
16:14:12 bswartz: that's true
16:14:12 thingee: eharney. also Duncan and jgriffith had some comments previously
16:14:17 I can't remember a summit when the issue of "infinite capacity" hasn't come up
16:14:25 bswartz: true
16:14:36 bswartz: i think there's actually a nice chunk of work here that isn't about that question
16:14:43 I guess no one else has a strong opinion on this being discussed?
16:15:03 i think we need this for ThinLVM where the whole infinite thing is not a concern
16:15:12 thingee: I've got quite a few questions for that, actually.
16:15:17 The filter should probably maintain existing behaviour around 'infinite'
16:15:21 eg. how space for snapshots should be accounted, etc.
16:15:28 i.e. it passes but gets lowest weighting
16:15:31 xyang1: ok, for this to be useful given the long history, please have your proposal with alternatives. I'm sure they're already in the spec.
16:15:53 thingee: sure
16:15:56 flip214: Can you comment on the spec?
16:15:57 Sounds like this is appropriate for a session then.
16:15:57 #agreed Scheduler enhancement to support over subscription session at the summit
16:16:19 next session is Better error handling for creating snapshot/volume from source volume/volume from snapshot/replica
16:16:22 winston-d: ^
16:16:38 DuncanT-: will try to.
16:17:07 probably has overlap with both the state machine work and the other async error item?
16:17:12 can anyone speak on this for winston-d ?
16:17:18 eharney: +1
16:17:23 so we have been encountering a lot of errors with that
16:17:46 I mean cloning volumes bypassing the scheduler and ending up with not enough capacity
16:17:49 winston-d: what is the advantage of the scheduler reporting it over the driver reporting it?
16:17:53 seems like it's just something we should do, no?
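(To ground the over-subscription and fail-fast discussion: drivers already advertise capacity in the stats dict returned by get_volume_stats(), and a scheduler-side check could reject an oversized request before a driver spends a minute attempting a clone. A rough sketch; the thin-provisioning field names follow the general shape proposed in the spec and are illustrative, not settled:)

    # Sketch of a scheduler-side capacity check against driver stats.
    def has_capacity(stats, requested_gb):
        free = stats['free_capacity_gb']
        # 'infinite'/'unknown' backends pass today; the discussion above
        # suggests keeping that behaviour but giving them low weight.
        if free in ('infinite', 'unknown'):
            return True
        if stats.get('thin_provisioning_support'):
            # Allow apparent capacity to exceed physical capacity,
            # bounded by a configured over-subscription ratio.
            ratio = stats.get('max_over_subscription_ratio', 1.0)
            virtual_free = (stats['total_capacity_gb'] * ratio
                            - stats['provisioned_capacity_gb'])
            return virtual_free >= requested_gb
        reserved = (stats['total_capacity_gb']
                    * stats.get('reserved_percentage', 0) / 100.0)
        return free - reserved >= requested_gb

    example_stats = {
        'total_capacity_gb': 1000.0,
        'free_capacity_gb': 200.0,
        'provisioned_capacity_gb': 1500.0,
        'thin_provisioning_support': True,
        'max_over_subscription_ratio': 2.0,
        'reserved_percentage': 5,
    }
    assert has_capacity(example_stats, 400)      # 2000 - 1500 = 500 free
    assert not has_capacity(example_stats, 600)

(DuncanT-'s counterpoint below still applies: the driver sees strictly more than the scheduler, e.g. whether a thin clone would succeed despite these numbers.)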
16:17:58 \o
16:18:11 avishay: for one, the scheduler can fail faster than the driver
16:18:17 winston-d, seems like a bug
16:18:21 Cross-backend snaps are something we don't support yet
16:18:29 Cross-backend clones are, I believe, the same
16:18:41 DuncanT-: I don't mean to do cross-backend snapshots or cloning
16:18:47 there's no host field for snapshot
16:18:57 DuncanT-: the drbdmanage driver will ;)
16:19:03 winston-d: Can you explain then please?
16:19:09 oh sorry, not cross-backend.
16:19:13 what i want is for the scheduler to raise a proper error instead of the driver trying to do a clone for a minute and then failing
16:19:17 only consistency-groups, got that mixed up.
16:19:24 do we need to solve this here, or decide if it's a session topic?
16:19:39 winston-d: But that means you'll fail even when a thin clone would have succeeded fine
16:19:46 DuncanT-: we can keep the shortcut (i.e. no real scheduling) in the scheduler.
16:20:04 i don't know if there is much discussion here. if it's really a problem for the driver to do it, then let the scheduler do it.
16:20:09 winston-d: The scheduler has less info than the driver.... just make the driver do the check faster
16:20:13 avishay, +1
16:20:26 * DuncanT- is against the scheduler doing it
16:20:36 I can't see why it would ever be better
16:20:38 DuncanT-: no really, with drivers correctly reporting different types of capacity, a thin-provisioned snapshot would be allowed from the scheduler's point of view
16:20:46 The driver has all of the info the scheduler has and more
16:20:56 sounds like we have a session topic
16:21:17 thingee: Dunno, might be quickly solved in an online discussion
16:21:24 ok
16:21:30 anyone else?
16:21:32 thingee: i would say it's a debatable topic, but maybe not the most pressing issue
16:21:33 this is probably more useful when we have async error reporting
16:21:46 avishay: +1
16:22:00 I don't mind a session or not, I just want to solve the problem.
16:22:05 thingee: Meet-up topic?
16:22:10 ok, we'll circle back again
16:22:13 jungleboyj: yea maybe
16:22:16 Maybe winston-d and I should just go discuss it
16:22:23 DuncanT-: sure
16:22:26 source level debugging with pycharm
16:22:37 This is primarily an info-sharing thing, perhaps better covered in a wiki or meet-up.
16:22:48 cknight: I agree :)
16:22:49 cknight, +1
16:22:57 anyone disagree?
16:22:58 +1, it's pretty cool though
16:23:12 Looks like a 'just do it' to me
16:23:18 yes, cool info, needs a different venue
16:23:26 would be nice if we had a live demo or something
16:23:30 NEXT TOPIC: NFS based volume creation optimization from an image
16:23:36 screen recording would be fine too
16:23:42 lightning talk?
16:23:46 screen recording ++
16:23:50 +1 to winston-d about live demo
16:23:52 winston-d: +1
16:24:08 who from vmware proposed this?
16:24:11 i think this is a driver enhancement and not really a large design thing?
16:24:19 eharney: +1
16:24:38 eharney: +1
16:24:45 what was the pain that kept such an enhancement from making it into Juno?
16:25:16 ok no one is here to drive this, so I'm punting it
16:25:35 objectify cinder - thangp
16:25:37 spec is up https://review.openstack.org/#/c/130044/
16:25:49 comments from josh & boris
16:25:50 next topic ^
16:26:06 thangp: first of all thanks for getting this up
16:26:12 np
16:26:23 Josh has quite a lot of comments on the spec
16:26:27 i was rushing to get it in for today
16:26:31 I think it would be great to get some help from nova folks as well, so this would be a great session topic.
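(For readers unfamiliar with the "objectify cinder" proposal: broadly, it means replacing the raw dicts and DB rows passed around cinder with objects that track their own changes and handle their own persistence, along the lines of nova.objects. A minimal sketch of the shape; this is hypothetical and the real design is in the spec linked above:)

    # Hypothetical shape only -- see https://review.openstack.org/#/c/130044/
    class Volume(object):
        fields = ('id', 'status', 'size')

        def __init__(self, context, **kwargs):
            self._context = context
            self._changed = set()
            for f in self.fields:
                setattr(self, f, kwargs.get(f))
            self._changed.clear()   # loading is not a modification

        def __setattr__(self, name, value):
            if name in self.fields:
                self._changed.add(name)
            super(Volume, self).__setattr__(name, value)

        @classmethod
        def get_by_id(cls, context, volume_id):
            # The real thing would call the cinder db api here.
            row = {'id': volume_id, 'status': 'available', 'size': 1}
            return cls(context, **row)

        def save(self):
            # Persist only fields that changed, instead of scattering
            # db.volume_update() calls throughout the codebase.
            updates = dict((f, getattr(self, f)) for f in self._changed)
            print('db.volume_update(%s, %r)' % (self.id, updates))
            self._changed.clear()

    # usage: vol = Volume.get_by_id(ctxt, 'vol-1'); vol.status = 'in-use'; vol.save()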
16:26:37 ++
16:26:39 i'm inclined to think this needs a session if it has gotten far enough along to dive into it
16:26:40 +1
16:27:00 Sounds like something important to discuss.
16:27:01 I think this topic potentially ties into the state machine work
16:27:06 so +1
16:27:13 #agreed objectify cinder for kilo session
16:27:13 there were concerns about performance, but nothing we can't iron out
16:27:38 NEXT TOPIC: capacity headroom
16:27:41 o/
16:27:48 eglynn: hello!
16:28:09 so this is a topic the WtE WG raised with me, so I'm acting as the conduit here
16:28:21 seems like something that operators are interested in surfacing
16:28:33 aren't drivers already supposed to report this information in get_volume_stats ?
16:28:44 hemna: yupp
16:28:45 hemna: +1
16:28:47 hemna: +1
16:28:49 apparently not fully accurate in all cases tho'
16:28:54 it sounds like this has implications wrt thin prov
16:28:58 eglynn, so that sounds like driver bugs then
16:29:03 eglynn: +1 again :)
16:29:07 so may be something that can be worked on in that session?
16:29:08 and it's not going to get more accurate. this is something difficult for distributed systems to report on.
16:29:12 hemna: Cinder bugs
16:29:16 hemna: that and the addition of the notification from the scheduler
16:29:17 our drivers are guilty of this to a certain extent
16:29:18 So file bugs where it isn't accurate, and for the emitting events, just do it[tm]
16:29:21 hemna: nobody can agree on what to report
16:29:47 jgriffith: i have to agree with that
16:29:56 eglynn: this is a touchy topic, and really I don't think cinder driving it is going to make things more accurate from vendors honestly
16:29:56 DuncanT-: I think you mentioned before it would be problematic for some drivers to compute?
16:30:12 I agree that this won't get solved overall -- let's file bugs and fix it in specific cases where it's very wrong
16:30:13 thingee: I'd say that cinder needs to fix it
16:30:16 eglynn: Drivers that don't report it get treated like they don't do thin
16:30:20 bswartz, +1
16:30:29 and needs to fix it ASAP
16:30:53 how about meet up on this then?
16:30:56 eglynn: Get a 'mostly working' solution up that suits the majority, then work on the problem cases as bugs
16:31:13 thingee: meetup works for me
16:31:24 DuncanT-: k
16:31:24 we can probably talk about this as well in the thin provisioning session
16:31:43 xyang1: sounds good
16:31:53 cinder track on which day?
16:32:00 thursday
16:32:23 a-ha, some overlap with the ceilometer track then
16:32:24 #agreed capacity headroom to split time with scheduler enhancements for thin provisioning
16:32:41 NEXT TOPIC: LVM: Support a volume-group on shared storage
16:32:49 cool, thanks folks!
16:33:14 I'm wary of that... We've got too many people doing parallel access to shared storage, and then destroying their data that way..
16:33:19 haven't we visited this topic previously ?
16:33:29 what's meant by shared storage in this context?
16:33:29 hemna: yes
16:33:29 this session will be spent with people arguing as to whether or not this is a valid driver model
16:34:00 i think that was the main question before
16:34:12 yah I think we punted on this idea previously
16:34:19 mtanino: ^
16:34:46 i find it to be a reasonable idea
16:34:58 who is sharing storage with whom?
16:35:17 ok let me ask this, do we want to revisit this?
16:35:40 thingee: I don't :)
16:35:44 thingee: if we don't then i think we need to put some effort into explaining what kinds of drivers are and aren't ok
16:35:50 I'd quite like to, it is a model with some benefits
16:35:59 Though many issues too
16:36:21 I believe part of the idea is to get backend arrays to add their volumes into the lvm group and use lvm to export to VMs
16:36:25 it was odd
16:36:48 maybe I misunderstood though
16:36:57 ok, I'm going to punt this and agree that we should define what we expect from drivers
16:37:00 there are some gerrit links on the etherpad about this topic
16:37:01 We have several other options to discuss here. Seems like we may want to circle back on this one.
16:37:03 mtanino doesn't appear to be here anyway
16:37:17 So the basic idea is to use a single large lun connected to all the compute nodes, rather than have cinder re-export
16:37:31 We've circled a lot about how to do that
16:37:36 NEXT TOPIC: Automated discovery
16:38:11 thingee: +1
16:38:25 I have a counter proposal to this topic
16:38:34 it does seem like a real issue worth discussing though
16:38:37 there's a lot going on in this one, some of which at least i think we are interested in
16:38:41 bswartz: +1
16:38:49 bswartz, +1
16:38:56 Interesting.
16:39:04 I think it is a real problem, but don't like the sound of this solution
16:39:05 i am curious about this one so +1
16:39:06 I'd suggest having both proposals be given as a session
16:39:19 bswartz: can you add that to the etherpad?
16:39:21 +1
16:39:24 I think I'd like to see dynamic configuration as a prerequisite to this one
16:39:26 +1
16:39:30 We need details before the session, or it won't go anywhere
16:39:37 DuncanT-: +1
16:39:39 hemna: i agree
16:39:45 #agreed automated discovery as a kilo session
16:39:45 DuncanT-: +1
16:39:56 NEXT TOPIC: Enable support for capturing volume stats
16:40:23 * jungleboyj needs to drop. Catch you guys later.
16:40:36 thingee: I thought we agreed at the mid-cycle that sessions without specs were a waste of time?
16:40:39 again, same person is not present.. I worry about who will be driving these beyond just discussions
16:41:04 bswartz: can you come up with a spec for your automated discovery?
16:41:05 if they aren't here to discuss their own topics..........
16:41:29 bswartz: and do you have someone in mind to drive it after discussion?
16:41:42 never know, they may have a family emergency
16:41:44 I added a line to the etherpad
16:41:53 thingee: this topic seems like my previous one on volume statistics reporting.
16:42:18 winston-d, +1
16:42:25 winston-d: yea
16:42:25 winston: yes
16:42:34 ok we'll move on
16:42:53 next topic: Support for QoS specifications for volumes
16:42:59 again person is not present
16:43:22 I'd like to turn this into a generic 'per-volume tuning' discussion
16:43:26 jgriffith: any interest in discussing that?
16:43:35 jgriffith: yeah :)
16:43:41 :)
16:44:08 I'm not completely sure what the other proposal has going on here
16:44:22 what?
16:44:22 but
16:44:41 thingee: what what?
16:44:54 yes I'd like to discuss but don't know that there's much needed
16:44:57 just needs to be done
16:45:08 Yeah I agree
16:45:28 NEXT TOPIC: Automating data management using policies
16:45:40 so I think this is interesting to some degree.
16:45:49 this sounds like something for the orchestration area...
16:45:49 I think this can be done outside cinder
16:46:00 DuncanT-: yes
16:46:03 +1
16:46:24 anyone else?
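(On the QoS / per-volume-tuning item a few lines up: cinder already has a qos-specs mechanism that attaches tuning keys to a volume type. A short sketch of how that is driven today via python-cinderclient; the credentials and the 'maxIOPS' key are assumptions for illustration, since spec keys are backend-specific rather than cinder-defined:)

    from cinderclient.v2 import client

    # Illustrative credentials/endpoint.
    c = client.Client('admin', 'password', 'admin',
                      'http://keystone:5000/v2.0')

    # Define QoS specs and bind them to a volume type; volumes created
    # with that type then carry the spec for the backend to enforce.
    qos = c.qos_specs.create('gold-io', {'consumer': 'back-end',
                                         'maxIOPS': '1000'})
    vtype = c.volume_types.create('gold')
    c.qos_specs.associate(qos, vtype.id)

(The generic "per-volume tuning" discussion above is essentially about how far beyond this type-level mechanism cinder should go.)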
16:46:26 this smells like an openstack scheduler/cron project to me
16:46:28 before it gets punted
16:46:42 NEXT TOPIC: Data Services pluggable framework
16:47:13 Need some sort of PoC for this before we can have a discussion that is more than hot air IMO
16:47:16 this sounds like a pretty neat idea to me (VAAI/vStorage kinda stuff)
16:47:54 Like replication, there are so many things this could be that we could talk in circles forever
16:47:57 I didn't understand this one
16:47:59 DuncanT_: +1
16:48:03 DuncanT-: agreed, hard to know if this has enough behind it to get done in kilo
16:48:29 ok we'll punt it
16:48:29 we need to see some design details in a spec
16:48:29 eharney: Coming up with a (disposable) PoC can be done any time
16:48:31 this topic needs to be broken up into smaller concrete proposals in my opinion
16:48:36 bswartz, +1
16:48:42 NEXT TOPIC: Downloading volume data to cinder node
16:48:45 ameade: ^
16:48:54 this should be self explanatory
16:49:08 i haven't dug into the issue or know everywhere it is done
16:49:10 this is an optimization that should be implemented where possible... not sure if we need to debate design there
16:49:14 ameade, copy image <-----> volume ?
16:49:20 +1 but some poc/spec will be very useful
16:49:25 eharney: +1
16:49:37 yeah if we want a session on this i would totally do some POC next week
16:49:51 I don't think we need a session
16:49:51 What is to discuss that needs a session?
16:49:53 I think we can agree that this shouldn't be done, but we need a proposal on how to avoid it
16:49:56 thingee: sorry for being late.
16:49:59 I do think there's some confusion about goals here
16:50:18 caching converted images on the Cinder node might be an option
16:50:20 but I'm not a fan
16:50:33 using Cinder as a Glance backend I could be down with
16:50:48 else; internal caching by backend devices is the way to go IMHO
16:50:56 jgriffith: why aren't you a fan of it?
16:51:11 Caching converted images works, cinder as a glance backend I've not really looked at but makes great sense, not sure any of these need discussion?
16:51:13 nova already does some caching of its own for some things
16:51:25 if you want to cache locally, then shouldn't glance be running locally and doing it? not sure why cinder should be caching this
16:51:36 thingee: I'm not a fan of the concept of creating another image cache on the Cinder node
16:51:46 this sounds like discussion to me :P
16:51:53 ameade: exactly
16:51:57 jgriffith: Don't do it on the cinder node, do it on the storage backend....
16:52:08 DuncanT-: yeah... that was my point
16:52:11 jgriffith: cinder as a glance backend will actually require a lot of work -- we looked into this during juno and ran away screaming
16:52:22 that part I like (in fact I'm working on it as we speak :) )
16:52:28 ameade: But is it discussion better had in person? That I'm less sure of - face to face time is premium time
16:52:30 bswartz: no kidding ;)
16:52:44 pretty sure this topic covers more than just glance, right?
16:52:52 DuncanT-: ameade I'd say no
16:52:57 DuncanT-: true story
16:52:59 and the reason being that it's a rat-hole
16:53:08 jgriffith: I agree
16:53:08 cinder as glance backend depends on brick and multiattach too
16:53:11 there's not enough of a solid proposal here IMO
16:53:17 xyang1: disagree
16:53:18 jgriffith: I could agree with that, haven't done the due diligence
16:53:27 xyang1: it doesn't have to
16:53:36 xyang1: concept could be as simple as "glance owned volumes" in cinder
16:53:52 Ok, with the consensus here, I'm going to punt it.
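(To make the "downloading volume data to cinder node" optimization concrete: the slow path pulls image bytes down to the cinder node and writes them into the new volume, while the fast path lets the backend clone from an image it can already reach. A hedged sketch of the control flow -- clone_image mirrors the real optional driver hook, which returns (model_update, cloned); the download/write helpers here are hypothetical:)

    def create_volume_from_image(driver, volume, image_service, image_id):
        image_meta = image_service.show(image_id)
        # Fast path: backends that can clone from an image (or from a
        # cached, already-converted copy living on the backend) avoid
        # moving any data through the cinder node at all.
        model_update, cloned = driver.clone_image(
            volume, None, image_id, image_meta)
        if cloned:
            return model_update
        # Slow path: fetch bytes to the cinder node, convert, and write
        # them into the volume -- the data movement under discussion.
        data = image_service.download(image_id)       # hypothetical
        driver.write_volume_data(volume, data)        # hypothetical
        return None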
16:54:07 jgriffith, DuncanT-: ok, I was referring to the original proposal
16:54:08 NEXT TOPIC: brick / cinder agent
16:54:20 think I already know the answer
16:54:37 :)
16:54:46 so that's approved
16:54:48 +1
16:54:51 :P
16:54:55 and that's it for topics
16:55:00 we still have open slots
16:55:08 and only five mins left
16:55:10 when is the decision date for this list?
16:55:15 we do?
16:55:18 I need to know this week
16:55:37 Keep them as general discussion then, or spill over from the other sessions
16:55:47 That worked really well at the mid-cycle
16:55:49 thingee: async error reporting?
16:55:53 rather, I need to make a decision this week. so if people want to propose things in channel and you can get enough people behind your proposal... lazy votes in etherpad, ok!
16:55:57 thingee, How about a topic related to the volume type extra specs enhancements ? https://review.openstack.org/#/c/127646/
16:56:24 well, if anyone is interested in the DRBD 9 / DRBDmanage progress, I can fill a few minutes, too.
16:56:45 but I don't think it'll be more than 15 minutes - unless there are lots of questions.
16:56:56 #topic Discuss the volume type extra specs cinder-spec
16:57:00 hemna: ^
16:57:05 three mins
16:57:10 heh ok
16:57:26 #link https://review.openstack.org/#/c/127646/
16:57:28 hemna: is that a way to get additional config options that a driver has into the UI?
16:57:35 so this spec is all about getting better integration with creating volume types from horizon
16:57:58 okay, so only about volume _types_. ACK.
16:58:02 the idea being don't force the admin to go read documentation for what each driver supports for extra spec keys, when the drivers themselves know it
16:58:30 provide a way to ask the running drivers what their extra spec keys are and display them in the horizon volume type creation workflow
16:58:30 The problem being that the list of supported keys is complex, and some keys only work when other keys are present, etc
16:58:51 DuncanT-, true, but that exists today
16:58:56 hemna: the problem I had with this is how it will work
16:58:58 this doesn't make this better or worse
16:58:59 with or without this feature, I'd encourage admins to read the docs
16:59:11 winston-d: +1
16:59:11 winston-d: +1
16:59:14 winston-d, sure, reading docs is always good
16:59:15 And in some cases you need to know all the backends that a volume type could possibly end up on to get the list
16:59:22 but I also agree with hemna to some degree that horizon integration would be nice
16:59:28 but the user experience here is terrible. We should do better.
16:59:46 the keys each driver supports are in general deterministic
16:59:48 The problem space is really big and complex.
16:59:53 yup
16:59:57 -1 on displaying every vendor's extra-specs keys in horizon
17:00:01 sorry for the summit topics taking up time. I'll move topics we missed to next meeting. thanks everyone.
17:00:03 Pretending it isn't is a bad idea
17:00:05 #endmeeting
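(For a sense of what the extra-specs spec is after -- the exact API is still under review, so this shape is hypothetical: let each driver describe the extra-spec keys it understands, so horizon can render them instead of sending admins to vendor docs. Key names below are invented for illustration:)

    # Hypothetical shape only -- the real proposal is in
    # https://review.openstack.org/#/c/127646/
    class FooDriver(object):   # stand-in for a vendor driver
        def get_supported_extra_specs(self):
            return [
                {'key': 'foo:thin_provisioning',
                 'type': 'bool',
                 'description': 'Create volumes as thin-provisioned.'},
                {'key': 'foo:raid_level',
                 'type': 'enum', 'values': ['0', '1', '5', '10'],
                 'description': 'RAID level for new volumes.'},
            ]

    # A new API extension could aggregate this across enabled backends
    # for the horizon volume-type workflow. DuncanT-'s caveat stands:
    # inter-key dependencies and not knowing which backends a type may
    # land on make the full problem harder than listing keys.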