16:00:41 #startmeeting cinder
16:00:42 Meeting started Wed May 29 16:00:41 2013 UTC. The chair is DuncanT. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:46 The meeting name has been set to 'cinder'
16:00:49 hi
16:00:51 hi
16:01:05 Hi all. Happy Wednesday!
16:01:11 So no jgriffith this week. 3 items on the agenda so far
16:01:41 Update the wiki or PM or shout up for other items
16:01:51 #topic BUGSQUASH
16:02:32 hi all
16:02:41 So we've a bug squash day scheduled tomorrow... just after the H1 branch was cut, but never mind
16:02:41 hi
16:02:49 Anybody got any questions on that?
16:03:17 Yeah, we should have done it earlier, but we'll get a good start on H2 :)
16:03:49 avishay: got some last minute stuff through thanks to winston-d
16:04:06 thingee: cool cool
16:04:20 nothing like last minute being a motivator
16:04:25 :)
16:04:31 I'll assume there are no questions then and look forward to the pretty graph plunging tomorrow
16:04:37 :)
16:04:39 :)
16:04:41 :)
16:04:44 :-)
16:05:00 ( ͡ಠ ͜ʖ ͡ಠ)
16:05:01 #topic Blueprint dependencies and generic local attach
16:05:06 thingee: lol
16:05:19 i'm not sure what emotion that is...
16:05:25 now that's ascii art or utf-8 art
16:05:59 So zhiyan highlighted the fact that we have a bunch of blueprints that need some flavour of local attach, but that work hasn't been started yet
16:06:05 hi, please take a look at https://etherpad.openstack.org/linked-template-image , the changes need cinder to give some support; glance and nova need to attach the volume to the glance-api host or nova-compute host..
16:07:03 are you really after 'locally attaching', or is that just a means to do I/O to the volume?
16:07:19 jdurgin: i think the latter
16:07:36 avishay: +1
16:07:51 eharney: just to improve vm provisioning performance; for now we use a 'linked' volume to prepare the template image, and in future we can use that to prepare a vm's vdisk directly, such as clone to make a cow...
16:07:52 connectivity issues worry me - cinder nodes may not have HBAs
16:08:12 same for glance nodes
16:08:36 which is why i suggested VMs to do the copies but nobody liked that idea :)
16:08:42 zhiyan: avishay: the etherpad says for iscsi only
16:08:47 avishay: In this case the attach is happening on the compute node, just to the host OS rather than direct to a VM, if I understand the plan
16:09:09 kmartin: not just for iscsi, IMO, for all backends..
16:09:38 DuncanT: yes, that is
16:10:11 DuncanT: so this isn't that attach "service" refactor thing?
16:10:25 zhiyan: ok, then it would need fibre channel as well, I was reading line #84 of the etherpad
16:10:31 so i listed some things we need to have: 1-5 in the below part of https://etherpad.openstack.org/linked-template-image
16:10:46 The same attach would be done for a new 'cinder-ioworker' service in the case of backup, migration etc, which can be run on nodes with appropriate connectivity (including on compute nodes if desired)
16:11:37 kmartin: yes, John told me iscsi and AOE will be extracted, but not others...
16:12:17 zhiyan: glance can already use block storage as a back-end, Ceph is a good example.
16:12:38 I think brick on its own is not enough for this.... it will provide some useful parts but we need something more if we are going to do generic attach properly
16:12:55 DuncanT: agreed
16:13:08 winston-d: yes, glance has a Ceph store driver, but my plan is to give glance a Cinder store driver, then glance can use any type of volume from a Cinder backend as an image...
16:13:25 DuncanT: yes +1
16:13:47 DuncanT: if a deployer doesn't need to worry about connectivity from non-compute nodes, that's a win in my book
16:14:31 avishay: yes :)
16:15:09 avishay: In my model, the deployer has to run cinder-ioworker on some node(s) with connectivity. This need not be the same nodes that cinder-volume runs on, and can be a compute node if that works out well with performance and connectivity constraints, or could be a dedicated node
16:15:12 zhiyan: what you're suggesting in this bp: https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver is to use volumes created by Cinder as a Glance back-end?
16:15:18 I think the abstraction just needs to change slightly to focus on I/O rather than attach/detach - those are implementation details
16:15:38 DuncanT: yup, sounds good
16:16:03 jdurgin: Looks like they plan on mounting the volume then throwing a cow layer over the mounted volume...
16:16:14 DuncanT: what's the difference between 'cinder-ioworker' and 'cinder-agent'?
16:16:52 then backends like sheepdog and rbd can be supported (and as DuncanT mentioned, having a general I/O mechanism is useful for several other things as well)
16:17:29 Who's signing up to do this? Does it replace this: https://blueprints.launchpad.net/cinder/+spec/cinder-refactor-attach ?
16:17:31 zhiyan: cinder-ioworker is a dedicated machine that has enough connectivity to various cinder back-ends.
16:17:37 winston-d: for now, i'm just trying to get agreement with the cinder team....IMO, as i said in the etherpad, "extract attach/detach code from nova directly but NOT wait for the cinder project to do that; I will extract attach/detach code independently just for the cinder-store-driver for the glance project, and later (after i land the driver for glance) if cinder likes it, I'd also like to contribute it to cinder." what are your thoughts?
16:17:43 zhiyan: ioworker is a cinder service that offloads long io jobs from the volume service
16:18:06 * markwash showed up
16:18:12 avishay: vish and jgriffith gave that talk jointly at the summit
16:18:13 zhiyan: cinder agent is moving attach/detach code out of nova into some new cinder service or library
16:18:43 DuncanT: jdurgin: for now, i just plan to cover base/template image preparing, and in future to cover cow...
16:18:55 markwash: does zhiyan's bp target havana?
16:19:03 zhiyan: If it only works for some cinder backends, I'm likely to come along downvoting it... that sort of feature split is very harmful
16:19:08 winston-d: glance-cinder-driver? yes
16:19:21 zhiyan: how about your bp for nova?
16:19:50 zhiyan: Unless you have some fallback mechanism so that it goes back to the old way transparently, I guess...
16:20:36 bswartz: this is going beyond 'brick', right?
16:20:41 sorry I am late :-( has there been discussion of cinder supporting attaching volumes to glance api nodes?
16:20:41 Hi all, sorry i'm late
16:20:56 markwash: That's what we're discussing now
16:20:57 morning
16:21:11 avishay: yes I think so -- vish wanted to ultimately eliminate all the storage code from nova
16:21:32 markwash: yes, we are discussing...
16:21:36 bswartz: yes, i agree that this is working towards that goal
16:21:39 DuncanT: I admit I don't understand all the implications of attaching volumes to glance nodes, but I'm a bit scared of that approach. Do other folks here have alternative proposals?
16:21:43 avishay: cinder agent is going way beyond brick, yes, though it will use lots of brick code I expect
16:22:05 or am I just being chicken little?
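A minimal sketch of the cinder-ioworker idea described above, with made-up names (the service was only a proposal at this point): a separate process, run on whichever node actually has data-path connectivity, that takes long-running I/O jobs such as backup, migration and image copies off the volume service.

    import shutil


    class IOWorker(object):
        """Hypothetical cinder-ioworker: runs where the data path is reachable.

        cinder-volume would hand long I/O jobs (backup, migration, image
        copy) to this service instead of doing the byte shuffling itself,
        so only ioworker nodes need HBAs or other backend connectivity.
        """

        CHUNK = 4 * 1024 * 1024  # copy in 4 MiB chunks

        def copy_volume_to_file(self, src_dev_path, dst_path):
            # src_dev_path is the volume's device, attached locally on
            # this node (or reached however the backend allows).
            with open(src_dev_path, 'rb') as src:
                with open(dst_path, 'wb') as dst:
                    shutil.copyfileobj(src, dst, self.CHUNK)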
16:22:11 markwash: The volume only gets attached to the compute node I think, not the glance node
16:22:12 hemna: morning
16:22:24 markwash: Unless I'm failing to understand the design
16:22:32 avishay, I was going to do the refactor stuff into brick
16:22:52 DuncanT: there is some provision for reading the volume data directly from the glance http api, for backwards compatibility
16:23:13 hemna: OK, so you move stuff to brick which gets used by cinder-agent?
16:23:35 markwash: so glance does have to attach the volume to write data?
16:23:37 markwash: Ah, that causes some issues for e.g. fibre channel installations...
16:23:46 I'm not familiar with the term cinder-agent, but the idea was to eventually remove the attach/detach code from nova
16:23:46 winston-d: yes
16:23:57 and only use the code that I'll put into brick
16:24:04 winston-d: under the given plan, yes.. but as I said I'm hesitant
16:24:15 hemna: cinder-agent was the name put on an early suggestion for doing code removal
16:24:19 but for starters, just write the code in brick, get it working in cinder
16:24:28 then remove the code from nova
16:24:37 DuncanT, ok.
16:25:17 I have a 3par driver feature to get in, then I'll start working on the attach/detach code
16:26:03 hemna: Who cares about 3par? ;)
16:26:07 :P
16:26:28 lol
16:26:31 hemna: yes. but folks, for this 'common' attaching/detaching part, it seems the options are all under discussion, right?
16:26:49 Is there some concrete plan for this transition? brick, agent, etc.?
16:27:02 avishay, my discussions with john were basically
16:27:07 put the first pass into brick
16:27:11 get that working in cinder
16:27:21 then get brick into oslo
16:27:26 hemna: just iscsi + AOE?
16:27:27 and then use brick in nova
16:27:29 include others?
16:27:41 zhiyan: One option being discussed for things like backup, migration etc is to add a driver API that does reads & writes via the driver, rather than requiring an attach
16:27:53 zhiyan, this originated from the idea of getting FC attach/detach working, so the plan was to do iSCSI and FC at least
16:28:30 ok....if that's the case, can I just port attach/detach code to glance from nova?
16:28:31 DuncanT: I really like that idea, then glance can just proxy calls out to that reader
16:28:33 DuncanT, another option discussed was to create a utility VM and make the VM do the work...but that feels heavy
16:28:34 directly?
16:28:42 DuncanT: what do you mean by "read & write via the driver"?
16:28:55 I'm not sure I understand why some things are in brick and some things are in cinder agent. seems like it'll confuse newcomers. If cinder agent is to take stuff away from the volume manager, it should just have its implementation in brick
16:28:58 have the backend do the operation itself when applicable
16:29:06 DuncanT: for LVM/ceph/etc?
16:29:09 zhiyan, once brick is in oslo, you should just be able to use the brick attach/detach code
16:29:23 hemna: H-2?
16:29:41 avishay: have some kind of open/read/write/close api, where open/close means attach/detach for lvm, but use other methods for e.g. ceph and sheepdog
16:29:44 avishay: Yeah. Many/most drivers will probably just do an attach and read/write, but if they want to do it over http or some other voodoo then they are welcome to
16:29:45 hemna: I just want to implement something for H-2, such as glance-cinder-driver...
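A minimal sketch of the open/read/write/close idea from the exchange above, with hypothetical class names (no such API existed in cinder at the time): open/close map to attach/detach for LVM-style backends, while drivers such as rbd or sheepdog could satisfy the same four calls without any host attach, so callers never need local HBAs.

    class VolumeIO(object):
        """Hypothetical backend-agnostic volume I/O interface."""

        def open(self, volume):
            raise NotImplementedError()

        def read(self, offset, length):
            raise NotImplementedError()

        def write(self, offset, data):
            raise NotImplementedError()

        def close(self):
            raise NotImplementedError()


    class LVMVolumeIO(VolumeIO):
        """For LVM, open/close are a local attach/detach of the device."""

        def open(self, volume):
            # Assumes the LV is already active and visible on this node.
            self._dev = open('/dev/mapper/%s' % volume['name'], 'r+b')

        def read(self, offset, length):
            self._dev.seek(offset)
            return self._dev.read(length)

        def write(self, offset, data):
            self._dev.seek(offset)
            self._dev.write(data)

        def close(self):
            self._dev.close()

A Ceph-backed implementation could satisfy the same interface through librbd with no attach at all, which is the "other voodoo" escape hatch mentioned above.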
16:29:48 yah, I'll be shooting for H2, but most likely longer :(
16:29:54 :(
16:29:57 zhiyan: but to make FC work, having common code is not enough, you have to have hardware (HBAs) installed on the glance node
16:30:00 jdurgin is the mastermind for such designs
16:30:08 jdurgin: sounds good
16:30:10 DuncanT: OK
16:30:16 winston-d, correct, you have to have FC HBAs on the node
16:30:53 winston-d: it sounds like adding that kind of hardware requirement for glance nodes will not work out well
16:30:54 winston-d: yes, i c
16:31:02 Or else have a driver that checks if there is an HBA and if not does an http stream or something to a node that does...
16:31:17 v2
16:31:18 :P
16:31:33 markwash: exactly, that's why i fail to see the value of zhiyan's BP
16:31:58 winston-d, well cinder itself has the same problem right now
16:32:15 it requires a homogeneous deployment
16:32:28 hemna: that's why we came up with the ioworker node idea, right?
16:32:33 but I'm not sure why someone would deploy half their OS deployment with FC and half w/o
16:32:46 I suggest that someone (hemna?) do a first pass of some high level design of this whole deal on some wiki/etherpad and everyone can bring up objections/refinements?
16:33:10 I'm all for volunteering Hemna ;-)
16:33:10 avishay, didn't we do that in the design session?
16:33:17 jdurgin: Does this resemble what is in your mind? https://review.openstack.org/#/c/28931/
16:33:24 hemna: I don't know, I had to miss that one :P
16:33:59 hemna: the etherpad for that is pretty bare
16:34:14 yah, I guess no one took notes
16:34:19 The ioworker stuff got discussed, I think the glance wrinkle is new
16:34:22 vincent_hou: almost, I'll leave some comments
16:34:27 DuncanT, yah
16:34:42 avishay: this one? https://etherpad.openstack.org/havana-cinder-local-storage-library
16:34:46 do we have an ioworker BP?
16:35:13 hemna: i can't find that one
16:35:22 bswartz: i thought this one? https://etherpad.openstack.org/Cinder-Havana-Refactor-Attach-Code
16:35:37 avishay: that's the one for brick
16:35:41 bswartz, I think that was vishy's etherpad for the long term plans of moving all storage related work to cinder
16:35:56 hemna: for now, i'm just not sure whether i should wait for your changes or extract attach/detach code from nova to glance directly
16:36:13 since you said that needs H-2...
16:36:30 I have H2 as the target. We'll see if I get there.
16:36:53 I also have the state mgmt stuff on my plate as well
16:37:01 I need a clone of myself.
16:37:12 Anyway, again, I think it would be good to have a concrete design in hand that works for all protocols/back-ends/use-cases before someone runs off and writes code?
16:37:14 so, i don't think i should just wait for 'cinder/brick' and cinder-agent..
16:37:19 hemna: make it 2 or even more. :)
16:37:26 :)
16:37:42 zhiyan: Extracting the current code is not enough. You'll need a fallback path for backends that don't expose a device or else you won't be backend agnostic and I'll be along to complain....
16:38:00 zhiyan: I'm not sure how you can do fallback
16:38:10 DuncanT, fwiw, we don't really have a fallback mechanism in place now.
16:38:13 DuncanT: i agree.
16:38:42 hemna: Yeah, need to fix that too ;-)
16:38:46 the assumption is a homogeneous deployment on nova nodes now.
16:38:55 yes, we do...but not for H2
16:38:58 DuncanT: yes, so i talked about that with you; maybe i need a new api to drive the state machine properly for volume and snapshot.
16:39:38 zhiyan: Yeah, that bit should be easy enough to do... just copy the attach API and change instance to host and add a reason field
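As a rough illustration of "copy the attach API and change instance to host and add a reason field", a hypothetical client-side call might look like the following; the host_name/reason fields and URL shape are assumptions modelled on the existing os-attach volume action, not a published API.

    import json
    import urllib2  # Python 2 era, matching the codebase under discussion


    def attach_volume_to_host(endpoint, token, volume_id, host_name, reason):
        # Same shape as the instance attach action, but the attachment is
        # recorded against a host (e.g. a glance-api or ioworker node).
        body = {'os-attach': {'host_name': host_name,  # not instance_uuid
                              'mountpoint': '/dev/vdb',
                              'reason': reason}}       # hypothetical field
        req = urllib2.Request(
            '%s/volumes/%s/action' % (endpoint, volume_id),
            data=json.dumps(body),
            headers={'X-Auth-Token': token,
                     'Content-Type': 'application/json'})
        return urllib2.urlopen(req)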
16:40:10 DuncanT: yes, maybe need a new api...
16:40:19 but not change the 'attach' api directly...
16:40:31 zhiyan: Yes, a new API
16:40:41 state mgmt is another topic...and it's being worked on now
16:41:05 and that new api can be used for future requirements....it's general.
16:41:21 hemna: ongoing?? who is doing that?
16:41:44 zhiyan: Want to try to code up that API for review and then we can talk about the actual IO again next week when people have had time to think on it?
16:41:59 Looks like we're spinning our wheels a bit now
16:43:00 DuncanT, ok, i like it, but folks, i don't think that is the key....
16:43:15 the key is all about actual IO
16:43:42 zhiyan: the key is, even if you can extract attach code from nova to glance, it's not good enough, and i think you'd better wait.
16:43:54 zhiyan: Yes, but people need time to think about that, so we can bring it up again next week after people have had time to discuss it among themselves for a few days
16:44:06 for now, it seems the concrete design is just what hemna did
16:44:14 extract code to 'cinder/brick'
16:44:23 others are all ongoing...
16:44:24 we extracted the attach/detach code from nova for grizzly, but I'm sure it's already behind in bug fixes
16:44:27 brick is not generic attach
16:44:45 brick is a bunch of useful code for doing some driver stuff
16:45:17 well, hopefully the attach/detach code in brick will be used by nova
16:45:29 DuncanT: maybe we should work on that then with brick :)
16:45:34 so we don't have dupe code between the 2 projects and need to bugfix twice
16:45:41 DuncanT: thanks. so i'm really not sure, what's the concrete design for generic attach? :)
16:46:08 zhiyan: There is no concrete design. People will think about it and discuss it again next week
16:46:32 hemna: do you have a plan to add attach/detach code to brick?
16:46:33 zhiyan: Lots of people have thoughts / concerns, we aren't going to get a conclusion now
16:46:48 zhiyan, I thought I touched on that already?
16:47:08 DuncanT: yes, i totally understand. thanks.
16:47:24 ok, let's move on
16:47:29 but i just want to do something to get my bp moving forward...
16:47:48 #topic Rate limiting, types and the database
16:47:49 If everyone goes off and thinks about a design we will either have 20 designs, or more likely be in the same place we are now
16:48:05 avishay: We can take it to #cinder later
16:48:12 DuncanT, +1
16:48:32 So we've a review open for rate limiting
16:48:56 Let one person make a design, email it to the list, everyone can raise objections, and we can discuss next week productively?
16:49:23 avishay: +1
16:49:31 avishay, I'll try and throw something together at a high level. I don't think it has to be complicated IMO
16:49:39 hemna: thanks
16:49:53 avishay: +1, i'd like to take the 'new api' part, to drive the state machine properly.
16:50:17 #action hemna to rough draft a design for host direct I/O / direct attach for discussion
16:50:18 OK, somebody do it, email it, and we'll discuss.
16:50:20 Moving on
16:50:58 So there were some concerns about adding rate limiting stuff directly to the volume-types table in the database
16:51:25 This grew into a few discussions about what our API goals are now that volume types are getting somewhat overloaded
16:52:02 I threw my thoughts onto an etherpad at https://etherpad.openstack.org/cinder-volume-types-evolution but haven't had much feedback
16:52:29 I believe winston-d has a plan to rework the patch vaguely along these lines to see how it looks?
16:52:36 DuncanT: those other entities had better be identified by uuids instead of names :)
16:52:53 guitarzan: I hate UUIDs but you're probably right
16:53:05 well, no, I'm just as wrong about it as we are about volume types :)
16:53:10 yes, i'll rework the patch and submit soon.
16:53:55 guitarzan: Unique names work fine in my opinion and have the huge advantage of being readable, but if somebody really wants UUIDs I'm not going to push hard against it
16:54:08 Anybody see any serious holes in the design?
16:54:10 DuncanT: I think the same about vtype names :)
16:54:22 it seems a little odd to have those specific classes
16:54:32 it will work obviously
16:54:38 qos and encryption you mean?
16:54:41 yes
16:54:47 is there a line to be drawn somewhere?
16:54:57 or does every distinguishing feature become an entity?
16:55:37 anything else besides qos and encryption?
16:55:39 so the plan is to dump more core features into volume types?
16:55:42 I think we might end up with more classes over time... pushing them all under one abstraction is just going to lead to weird multiplexing. Only the admin sees the features, the external user still just sees volume types
16:56:28 hemna: I think volume types are our principal abstraction for differentiating volumes... they make a nice abstraction from an external view in my opinion
16:56:28 so now the capabilities are tied directly to these new classes?
16:56:30 I guess this helps with the whole "standardizing" effort
16:56:50 humm
16:56:56 kinda feels like a dumping ground
16:57:11 we killed 40 minutes with one topic
16:57:43 hemna: I'd rather not see lots of complexity exposed to the 'customer' if possible
16:57:49 guitarzan: "killed" is right :)
16:57:50 DuncanT, agreed
16:57:55 lol
16:58:13 I think the last example, type-create, feels weird
16:58:33 anyway, if you are interested in rate-limit, qos, you may want to take a look at this as well: https://etherpad.openstack.org/cinder-extend-volume-types
16:58:36 maybe extra-specs suck, but maybe that's just an interface problem
16:58:36 I'd like to maybe see the capability stuff moved to a new class or classes, and volume types just become a bunch of those classes tied together
16:58:49 hey guys… I have a meeting in this room in a few minutes.
16:58:55 hartsocks: we're aware of that
16:58:59 hartsocks: Ok
16:59:11 we can move this to #openstack-cinder
16:59:29 Direction seems right but I will have to look at that last etherpad
16:59:34 Hmmm, since we're just about out of time, can I ask people to look at https://review.openstack.org/#/c/30291/ for testr migration please?
16:59:59 sure
17:00:05 avishay: The details were made up off the top of my head :-)
17:00:20 Right, I'm afraid we need to move everything else over to #openstack-cinder
17:00:26 Thanks folks
17:00:29 #endmeeting
17:00:29 yah, we're out of time
17:00:33 thx DuncanT
17:00:34 DuncanT: some of the best and worst ideas come from that sort of thinking ;)
17:00:35 thanks
17:00:37 thanks
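For reference, a minimal sketch of the volume-type evolution discussed in the last topic, with made-up field names (the etherpad details were, by DuncanT's own admission, off the top of his head): qos and encryption become separate admin-visible entities that a volume type links to, instead of more keys crammed into extra-specs, while the external user still just sees the type name.

    # Illustrative data model only; not the schema that was in review.
    QOS_SPECS = {
        'gold-qos': {'total_iops_sec': 1000,
                     'total_bytes_sec': 100 * 1024 * 1024},
    }

    VOLUME_TYPES = {
        'gold': {
            'qos_spec_id': 'gold-qos',   # a link to a first-class entity
            'encryption_id': None,       # ditto for encryption
            'extra_specs': {'volume_backend_name': 'backend1'},
        },
    }


    def resolve_type(name):
        """Admin view: a type is a bundle of linked feature entities."""
        vtype = VOLUME_TYPES[name]
        return dict(vtype, qos=QOS_SPECS.get(vtype['qos_spec_id']))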