16:00:41 <DuncanT> #startmeeting cinder
16:00:42 <openstack> Meeting started Wed May 29 16:00:41 2013 UTC.  The chair is DuncanT. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:46 <openstack> The meeting name has been set to 'cinder'
16:00:49 <xyang_> hi
16:00:51 <vincent_hou> hi
16:01:05 <jsbryant> Hi all.  Happy Wednesday!
16:01:11 <DuncanT> So no jgriffith this week. 3 items on the agenda so far
16:01:41 <DuncanT> Update the wiki or PM or shout up for other items
16:01:51 <DuncanT> #topic BUGSQUASH
16:02:32 <avishay> hi all
16:02:41 <DuncanT> So we've a bug squash day scheduled tomorrow... just after the H1 branch was cut, but never mind
16:02:41 <bswartz> hi
16:02:49 <DuncanT> Anybody got any questions on that?
16:03:17 <avishay> Yea we should have done it earlier, but we'll get a good start on H2 :)
16:03:49 <thingee> avishay: got some last minute stuff through thanks to winston-d
16:04:06 <avishay> thingee: cool cool
16:04:20 <thingee> nothing like last minute being a motivator
16:04:25 <avishay> :)
16:04:31 <DuncanT> I'll assume there are no questions then and look forward to the pretty graph plunging tomorrow
16:04:37 <med_> :)
16:04:39 <avishay> :)
16:04:41 <winston-d> :)
16:04:44 <jungleboyj> :-)
16:05:00 <thingee> ( ͡ಠ ͜ʖ ͡ಠ)
16:05:01 <DuncanT> #topic Blueprint dependencies and generic local attach
16:05:06 <avishay> thingee: lol
16:05:19 <eharney> i'm not sure what emotion that is...
16:05:25 <med_> now that's ascii art or utf-8 art
16:05:59 <DuncanT> So zhiyan highlighted the fact that we have a bunch of blueprints that need some flavour of local attach, but that work hasn't been started yet
16:06:05 <zhiyan> hi, please take a look at https://etherpad.openstack.org/linked-template-image , the changes need cinder to give some support; glance and nova need to attach the volume to the glance-api host or nova-compute host..
16:07:03 <jdurgin> are you really after 'locally attaching', or is that just a means to do I/O to the volume?
16:07:19 <avishay> jdurgin: i think the latter
16:07:36 <winston-d> avishay: +1
16:07:51 <zhiyan> eharney: just to improve vm provisioning performance. for now we use a 'linked' volume to prepare the template image; in future, we can use that to prepare the vm's vdisk directly, such as cloning to make a cow...
16:07:52 <avishay> connectivity issues worry me - cinder nodes may not have HBAs
16:08:12 <avishay> same for glance nodes
16:08:36 <avishay> which is why i suggested VMs to do the copies but nobody liked that idea :)
16:08:42 <kmartin> zhiyan: avishay the etherpad says for iscsi only
16:08:47 <DuncanT> avishay: In this case the attach is happening on the compute node, just to the host OS rather than direct to a VM, if I understand the plan
16:09:09 <zhiyan> kmartin: not just for iscsi, IMO, for all backends..
16:09:38 <zhiyan> DuncanT: yes, that's it
16:10:11 <avishay> DuncanT: so this isn't that attach "service" refactor thing?
16:10:25 <kmartin> zhiyan: ok then it would need fibre channel as well, I was reading line #84 of the etherpad
16:10:31 <zhiyan> so i listed some things we need to have: 1-5 in the below part of https://etherpad.openstack.org/linked-template-image
16:10:46 <DuncanT> The same attach would be done for a new 'cinder-ioworker' service in the case of backup, migration etc, which can be run on nodes with appropriate connectivity (including on compute nodes if desired)
16:11:37 <zhiyan> kmartin: yes, John told me iscsi and AOE will be extracted, but not the others...
16:12:17 <winston-d> zhiyan: glance can already use block storage as a back-end; Ceph is a good example.
16:12:38 <DuncanT> I think brick on its own is not enough for this.... it will provide some useful parts but we need something more if we are going to do generic attach properly
16:12:55 <avishay> DuncanT: agreed
16:13:08 <zhiyan> winston-d: yes, glance has a Ceph store driver, but my plan is to give glance a Cinder store driver, so glance can use any type of volume from a Cinder backend as an image...
16:13:25 <zhiyan> DuncanT: yes +1
16:13:47 <avishay> DuncanT: if a deployer doesn't need to worry about connectivity from non-compute nodes, that's a win in my book
16:14:31 <zhiyan> avishay: yes :)
16:15:09 <DuncanT> avishay: In my model, the deployer has to run cinder-ioworker on some node(s) with connectivity. This need not be the same nodes that cinder-volume runs on, and can be a compute node if that works out well with performance and connectivity constraints, or could be a dedicated node
16:15:12 <winston-d> zhiyan: what you suggesting in this bp: https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver is to use volumes created by Cinder as Glance back-end?
16:15:18 <jdurgin> I think the abstraction just needs to change slightly to focus on I/O rather than attach/detach - those are implementation details
16:15:38 <avishay> DuncanT: yup, sounds good
16:16:03 <DuncanT> jdurgin: Looks like they plan on mounting the volume then throwing a cow layer over the mounted volume...
16:16:14 <zhiyan> DuncanT: what's the difference between 'cinder-ioworker' and 'cinder-agent'?
16:16:52 <jdurgin> then backends like sheepdog and rbd can be supported (and as DuncanT mentioned, having a general I/O mechanism is useful for several other things as well)
16:17:29 <avishay> Who's signing up to do this?  Does it replace this: https://blueprints.launchpad.net/cinder/+spec/cinder-refactor-attach  ?
16:17:31 <winston-d> zhiyan: cinder-ioworker is a dedicated machine that has enough connectivity to the various cinder back-ends.
16:17:37 <zhiyan> winston-d: for now, i'm just trying to get agreement with the cinder team....IMO, as i said in the etherpad, "extract attach/detach code from nova directly but NOT wait for the cinder project to do that; I will extract the attach/detach code independently, just for the cinder-store-driver for the glance project, and later (after i land the driver for glance), if cinder likes it, I'd like to contribute it to cinder as well."  what are your thoughts?
16:17:43 <DuncanT> zhiyan: ioworker is a cinder service that offloads long io jobs from the volume service
16:18:06 * markwash showed up
16:18:12 <bswartz> avishay: vish and jgriffith gave that talk jointly at the summit
16:18:13 <DuncanT> zhiyan: cinder agent is moving the attach/detach code out of nova into some new cinder service or library
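[editor's note: DuncanT's cinder-ioworker / cinder-agent split can be pictured with a short sketch. Nothing below is real cinder code; the queue and job shape are assumptions drawn only from this discussion: a worker, deployed on nodes that have storage connectivity, drains long-running I/O jobs (backup, migration, image copy) so cinder-volume does not have to run them itself.]

    import queue
    import shutil

    # Hypothetical job queue; in cinder this would be RPC over the
    # message bus, not an in-process queue.Queue.
    io_jobs = queue.Queue()

    def ioworker_loop(jobs):
        """Drain long-running I/O jobs on a node with connectivity."""
        while True:
            job = jobs.get()
            if job is None:  # shutdown sentinel
                break
            # Each job names a source and destination block device or
            # file reachable from this node; the bulk copy is the slow
            # part being offloaded from cinder-volume.
            with open(job['src'], 'rb') as src, \
                 open(job['dst'], 'wb') as dst:
                shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)
            jobs.task_done()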
16:18:43 <zhiyan> DuncanT: jdurgin: for now, i just plan to cover base/template image preparation; in future, to cover cow...
16:18:55 <winston-d> markwash: does zhiyan's bp target havana?
16:19:03 <DuncanT> zhiyan: If it only works for some cinder backends, I'm likely to come along downvoting it... that sort of feature split is very harmful
16:19:08 <markwash> winston-d: glance-cinder-driver? yes
16:19:21 <winston-d> zhiyan: how about your bp for nova?
16:19:50 <DuncanT> zhiyan: Unless you have some fallback mechanism so that it goes back to the old way transparently I guess...
16:20:36 <avishay> bswartz: this is going beyond 'brick', right?
16:20:41 <markwash> sorry I am late :-( has there been discussion of cinder supporting attaching volumes to glance api nodes?
16:20:41 <rushiagr> Hi all, sorry i'm late
16:20:56 <DuncanT> markwash: That's what we're discussing now
16:20:57 <hemna> morning
16:21:11 <bswartz> avishay: yes I think so -- vish wanted to ultimately eliminate all the storage code from nova
16:21:32 <zhiyan> markwash: yes, we are discussing...
16:21:36 <avishay> bswartz: yes, i agree that this is working towards that goal
16:21:39 <markwash> DuncanT: I admit I don't understand all the implications of attaching volumes to glance nodes, but I'm a bit scared of that approach. Do other folks here have alternative proposals?
16:21:43 <DuncanT> avishay: cinder agent is going way beyond brick, yes, though it will use lots of brick code I expect
16:22:05 <markwash> or am I just being chicken little?
16:22:11 <DuncanT> markwash: The volume only gets attached to the compute node I think, not the glance node
16:22:12 <winston-d> hemna: morning
16:22:24 <DuncanT> markwash: Unless I'm failing to understand the design
16:22:32 <hemna> avishay, I was going to do the refactor stuff into brick
16:22:52 <markwash> DuncanT: there is some provision for reading the volume data directly from the glance http api, for backwards compatibility
16:23:13 <avishay> hemna: OK so you move stuff to brick which gets used by cinder-agent?
16:23:35 <winston-d> markwash: so glance does have to attach the volume to write data?
16:23:37 <DuncanT> markwash: Ah, that causes some issue for e.g. fibrechannel installations...
16:23:46 <hemna> I'm not familiar with the term cinder-agent, but the idea was to eventually remove the attach/detach code from nova
16:23:46 <zhiyan> winston-d: yes
16:23:57 <hemna> and only use the code that I'll put into brick
16:24:04 <markwash> winston-d: under the given plan, yes. . but as I said I'm hesitant
16:24:15 <DuncanT> hemna: cinder-agent was the name put on an early suggestion for doing code removal
16:24:19 <hemna> but for starters, just write the code in brick, get it working in cinder
16:24:28 <hemna> then remove the code from nova
16:24:37 <hemna> DuncanT, ok.
16:25:17 <hemna> I have a 3par driver feature to get in, then I'll start working on the attach/detach code
16:26:03 <avishay> hemna: Who cares about 3par? ;)
16:26:07 <hemna> :P
16:26:28 <winston-d> lol
16:26:31 <zhiyan> hemna: yes. but folks, for this 'common' attaching/detaching part, it seems the options are all under discussion, right?
16:26:49 <avishay> Is there some concrete plan for this transition?  brick, agent, etc.?
16:27:02 <hemna> avishay, my discussions with john were basically
16:27:07 <hemna> put the first pass into brick
16:27:11 <hemna> get that working in cinder
16:27:21 <hemna> then get brick into oslo
16:27:26 <zhiyan> hemna: just iscsi +AOE?
16:27:27 <hemna> and then use brick in nova
16:27:29 <zhiyan> include others?
16:27:41 <DuncanT> zhiyan: One option being discussed for things like backup, migration etc is to add a driver API that does reads & writes via the driver, rather than requiring an attach
16:27:53 <hemna> zhiyan, this originated from the idea of getting FC attach/detach working, so the plan was to do iSCSI and FC at least
16:28:30 <zhiyan> ok....in that case can I just port the attach/detach code to glance from nova?
16:28:31 <markwash> DuncanT: I really like that idea, then glance can just proxy calls out to that reader
16:28:33 <hemna> DuncanT, another option discussed was to create a utility VM and make the VM do the work...but that feels heavy
16:28:34 <zhiyan> directly?
16:28:42 <avishay> DuncanT: what do you mean by "read & write via the driver"?
16:28:55 <thingee> I'm not sure I understand why some things are in brick and some things are in cinder agent. seems like it'll confuse newcomers. If cinder agent is to take stuff away from the volume manager, it should just have its implementation in brick
16:28:58 <guitarzan> have the backend do the operation itself when applicable
16:29:06 <avishay> DuncanT: for LVM/ceph/etc?
16:29:09 <hemna> zhiyan, once brick is in oslo, you should just be able to use the brick attach/detach code
16:29:23 <zhiyan> hemna: H-2?
16:29:41 <jdurgin> avishay: have some kind of open/read/write/close api, where open/close means attach/detach for lvm, but use other methods for e.g. ceph and sheepdog
16:29:44 <DuncanT> avishay: Yeah. Many/most drivers will probably just do an attach and read/write, but if they want to do it over http or some other voodoo then they are welcome to
16:29:45 <zhiyan> hemna: I just want to implement something for H-2, such as the glance-cinder-driver...
16:29:48 <hemna> yah, I'll be shooting for H2, but most likely longer :(
16:29:54 <zhiyan> :(
16:29:57 <winston-d> zhiyan: but to make FC work, having common code is not enough, you have to have hardware (HBAs) installed on glance node
16:30:00 <DuncanT> jdurgin is the mastermind for such designs
16:30:08 <avishay> jdurgin: sounds good
16:30:10 <avishay> DuncanT: OK
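[editor's note: a minimal sketch of jdurgin's open/read/write/close idea, with all names invented for illustration. For attach-style backends like LVM, open maps to attach and close to detach; a backend like ceph or sheepdog would implement the same handle on top of its own client library with no host attach at all. The connector API below is a stand-in for whatever brick eventually exposes, not a real signature.]

    import abc

    class VolumeIOHandle(abc.ABC):
        """Hypothetical per-volume I/O handle (names are assumptions)."""

        @abc.abstractmethod
        def read(self, offset, length): ...

        @abc.abstractmethod
        def write(self, offset, data): ...

        @abc.abstractmethod
        def close(self): ...

    class AttachedDeviceHandle(VolumeIOHandle):
        """open == attach to this host, I/O via the exposed block
        device, close == detach."""

        def __init__(self, connector, connection_info):
            self._connector = connector
            # connect_volume()/disconnect_volume() stand in for the
            # eventual brick attach code; assumed, not confirmed.
            self._device = connector.connect_volume(connection_info)
            self._file = open(self._device['path'], 'r+b', buffering=0)

        def read(self, offset, length):
            self._file.seek(offset)
            return self._file.read(length)

        def write(self, offset, data):
            self._file.seek(offset)
            self._file.write(data)

        def close(self):
            self._file.close()
            self._connector.disconnect_volume(self._device)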
16:30:16 <hemna> winston-d, correct, you have to have FC HBAs on the node
16:30:53 <markwash> winston-d: it sounds like adding that kind of hardware requirement for glance nodes will not work out well
16:30:54 <zhiyan> winston-d: yes, i c
16:31:02 <DuncanT> Or else have a driver that checks if there is an HBA and if not does an http stream or something to a node that does...
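[editor's note: DuncanT's fallback idea, continuing the sketch above; purely illustrative, and both has_fc_hba() and HttpProxyHandle are invented for this example.]

    import glob

    class HttpProxyHandle:
        """Imaginary stand-in: would proxy read/write over http to a
        node that does have connectivity."""
        def __init__(self, connection_info):
            raise NotImplementedError('sketch only')

    def has_fc_hba():
        # Linux exposes FC HBAs under /sys/class/fc_host; an empty
        # glob means no usable HBA on this node.
        return bool(glob.glob('/sys/class/fc_host/host*'))

    def open_volume(connector, connection_info):
        if (connection_info.get('driver_volume_type') == 'fibre_channel'
                and not has_fc_hba()):
            # No local HBA: stream the bytes over http instead.
            return HttpProxyHandle(connection_info)
        # AttachedDeviceHandle is from the earlier sketch.
        return AttachedDeviceHandle(connector, connection_info)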
16:31:17 <hemna> v2
16:31:18 <hemna> :P
16:31:33 <winston-d> markwash: exactly, that's why i fail to see the value of zhiyan's BP
16:31:58 <hemna> winston-d, well cinder itself has the same problem right now
16:32:15 <hemna> it requires a homogeneous deployment
16:32:28 <winston-d> hemna: that's why we come up with ioworker node idea, right?
16:32:33 <hemna> but I'm not sure why someone would deploy half their OpenStack deployment with FC and half without
16:32:46 <avishay> I suggest that someone (hemna?) do a first pass of some high level design of this whole deal on some wiki/etherpad and everyone can bring up objections/refinements?
16:33:10 <DuncanT> I'm all for volunteering Hemna ;-)
16:33:10 <hemna> avishay, didn't we do that in the design session?
16:33:17 <vincent_hou> jdurgin: Does this resemble what is in your mind? https://review.openstack.org/#/c/28931/
16:33:24 <avishay> hemna: I don't know, I had to miss that one :P
16:33:59 <avishay> hemna: the etherpad for that is pretty bare
16:34:14 <hemna> yah I guess no one took notes
16:34:19 <DuncanT> The ioworker stuff got discussed; I think the glance wrinkle is new
16:34:22 <jdurgin> vincent_hou: almost, I'll leave some comments
16:34:27 <hemna> DuncanT, yah
16:34:42 <bswartz> avishay: this one? https://etherpad.openstack.org/havana-cinder-local-storage-library
16:34:46 <hemna> do we have an ioworker BP ?
16:35:13 <winston-d> hemna: i can't find that one
16:35:22 <avishay> bswartz: i thought this one?  https://etherpad.openstack.org/Cinder-Havana-Refactor-Attach-Code
16:35:37 <bswartz> avishay: that's the one for brick
16:35:41 <hemna> bswartz, I think that was vishy's etherpad for the long term plans of moving all storage related work to cinder
16:35:56 <zhiyan> hemna: for now, i'm just not sure whether i should wait for your changes or extract the attach/detach code from nova to glance directly
16:36:13 <zhiyan> since you said that needs H-2...
16:36:30 <hemna> I have H2 as the target.   We'll see if I get there.
16:36:53 <hemna> I also have the state mgmt stuff on my plate as well
16:37:01 <hemna> I need a clone of myself.
16:37:12 <avishay> Anyway, again, I think it would be good to have a concrete design in hand that works for all protocols/back-ends/use-cases before someone runs off and writes code?
16:37:14 <zhiyan> so, i don't think i should just wait for 'cinder/brick' and cinder-agent..
16:37:19 <winston-d> hemna: make it 2 or even more. :)
16:37:26 <zhiyan> :)
16:37:42 <DuncanT> zhiyan: Extracting the current code is not enough. You'll need a fallback path for backends that don't expose a device or else you won't be backend agnostic and I'll be along to complain....
16:38:00 <DuncanT> zhiyan: I'm not sure how you can do fallback
16:38:10 <hemna> DuncanT, fwiw, we don't really have a fallback mechanism in place now.
16:38:13 <winston-d> DuncanT: i agree.
16:38:42 <DuncanT> hemna: Yeah, need to fix that too ;-)
16:38:46 <hemna> the assumption is a homogeneous deployment on nova nodes now.
16:38:55 <hemna> yes, we do...but not for H2
16:38:58 <zhiyan> DuncanT: yes, so i talked about that with you; maybe i need a new api to drive the state machine properly for volumes and snapshots.
16:39:38 <DuncanT> zhiyan: Yeah, that bit should be easy enough to do... just copy the attach API and change instance to host and add a reason field
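[editor's note: DuncanT's "copy the attach API, change instance to host, add a reason field" could look roughly like the following standalone sketch. The function name, field names, and in-memory "db" dict are all assumptions made for illustration, not cinder's real API.]

    # A minimal sketch of the suggested host-attach call.
    def attach_to_host(db, volume_id, host_name, mountpoint, reason):
        volume = db[volume_id]
        if volume['status'] != 'available':
            raise ValueError('volume %s is not available' % volume_id)
        volume.update({'status': 'in-use',
                       'attached_host': host_name,  # host, not instance
                       'mountpoint': mountpoint,
                       'attach_reason': reason})    # the new field
        return volume

    # Example: glance-api attaching a volume to itself for an upload.
    db = {'vol-1': {'status': 'available'}}
    attach_to_host(db, 'vol-1', 'glance-api-01', '/dev/vdb', 'image upload')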
16:40:10 <zhiyan> DuncanT: yes, maybe need a new api...
16:40:19 <zhiyan> but not change 'attach' api directly...
16:40:31 <DuncanT> zhiyan: Yes, a new API
16:40:41 <hemna> state mgmt is another topic...and it's being worked on now
16:41:05 <zhiyan> and that new api can be used for future requirements....it's general.
16:41:21 <zhiyan> hemna: ongoing?? who's doing that
16:41:44 <DuncanT> zhiyan: Want to try to code up that API for review and then we can talk about the actual IO again next week when people have had time to think on it?
16:41:59 <DuncanT> Looks like we're spinning our wheels a bit now
16:43:00 <zhiyan> DuncanT, ok, i like it, but folks, i don't think that is the key....
16:43:15 <zhiyan> the key is all about actual IO
16:43:42 <winston-d> zhiyan: the key is even if you can extract attach code from nova to glance, it's not good enough and i think you better wait.
16:43:54 <DuncanT> zhiyan: Yes, but people need time to think about that, so we can bring it up again next week after people have had time to discuss it among themselves for a few days
16:44:06 <zhiyan> for now, it seems the only concrete design is what hemna did
16:44:14 <zhiyan> extract code to 'cinder/brick'
16:44:23 <zhiyan> the others are all ongoing...
16:44:24 <hemna> we extracted the attach/detach code from nova for grizzly, but I'm sure it's already behind in bug fixes
16:44:27 <DuncanT> brick is not generic attach
16:44:45 <DuncanT> brick is a bunch of useful code for doing some driver stuff
16:45:17 <hemna> well hopefully the attach/detach code in brick will be used by nova
16:45:29 <thingee> DuncanT: maybe we should work on that then with brick :)
16:45:34 <hemna> so we don't have dupe code between the 2 projects and need to bugfix twice
16:45:41 <zhiyan> DuncanT: thanks. so i'm really not sure, what's the concrete design for generic attach ? :)
16:46:08 <DuncanT> zhiyan: There is no concrete design. People will think about it and discuss it again next week
16:46:32 <zhiyan> hemna: do you have plan to add attach/detach code to brick?
16:46:33 <DuncanT> zhiyan: Lots of people have thoughts / concerns, we aren't going to get a conclusion now
16:46:48 <hemna> zhiyan, I thought I touched on that already?
16:47:08 <zhiyan> Duncant: yes, i totally understand. thanks.
16:47:24 <hemna> ok lets move on
16:47:29 <zhiyan> but i just want to do some thing to get my bp move forward...
16:47:48 <DuncanT> #topic Rate limiting, types and the database
16:47:49 <avishay> If everyone goes off and thinks about a design we will either have 20 designs, or more likely be in the same place we are now
16:48:05 <DuncanT> avishay: We can take it to #cinder later
16:48:12 <hemna> DuncanT, +1
16:48:32 <DuncanT> So we've a review open for rate limiting
16:48:56 <avishay> Let one person make a design, email to the list, everyone can raise objections, and we can discuss next week productively?
16:49:23 <vincent_hou> avishay: +1
16:49:31 <hemna> avishay, I'll try and throw something together at a high level.   I don't think it has to be complicated IMO
16:49:39 <avishay> hemna: thanks
16:49:53 <zhiyan> avishay: +1, i'd like to take 'new api' part, to drive the state machine properly.
16:50:17 <DuncanT> #action hemna to rough draft a design for host direct I/O / direct attach for discussion
16:50:18 <avishay> OK, somebody do it, email it, and we'll discuss.
16:50:20 <avishay> Moving on
16:50:58 <DuncanT> So there were some concerns about adding rate limiting stuff directly to the volume-types table in the database
16:51:25 <DuncanT> This grew into a few discussions about what our API goals are now that volume types are getting somewhat overloaded
16:52:02 <DuncanT> I threw my thoughts onto an etherpad at https://etherpad.openstack.org/cinder-volume-types-evolution but haven't had much feedback
16:52:29 <DuncanT> I believe winston-d has a plan to rework the patch vaguely along these lines to see how it looks?
16:52:36 <guitarzan> DuncanT: those other entities had better be identified by uuids instead of names :)
16:52:53 <DuncanT> guitarzan: I hate UUIDs but you're probably right
16:53:05 <guitarzan> well, no, I'm just as wrong about it as we were about volume types :)
16:53:10 <winston-d> yes, i'll rework the patch and submit soon.
16:53:55 <DuncanT> guitarzan: Unique names work fine in my opinion and have the huge advantage of being readable, but if somebody really wants UUIDs I'm not going to push hard against it
16:54:08 <DuncanT> Anybody see any serious holes in the design?
16:54:10 <guitarzan> DuncanT: I think the same about vtype names :)
16:54:22 <guitarzan> it seems a little odd to have those specific classes
16:54:32 <guitarzan> it will work obviously
16:54:38 <DuncanT> qos and encyption you mean?
16:54:41 <guitarzan> yes
16:54:47 <guitarzan> is there a line to be drawn somewhere?
16:54:57 <guitarzan> or does every distinguishing feature become an entity?
16:55:37 <winston-d> anything else besides qos and encryption?
16:55:39 <hemna> so the plan is to dump more core features into volume types?
16:55:42 <DuncanT> I think we might end up with more classes over time... pushing them all under one abstraction is just going to lead to weird multiplexing. Only the admin sees the features; the external user still just sees volume types
16:56:28 <DuncanT> hemna: I think volume types are our principal abstraction for differentiating volumes... they make a nice abstraction from an external view in my opinion
16:56:28 <guitarzan> so now the capabilities are tied directly to these new classes?
16:56:30 <avishay> I guess this helps with the whole "standardizing" effort
16:56:50 <hemna> humm
16:56:56 <hemna> kinda feels like a dumping ground
16:57:11 <guitarzan> we killed 40 minutes with one topic
16:57:43 <DuncanT> hemna: I'd rather not see lots of complexity exposed to the 'customer' if possible
16:57:49 <avishay> guitarzan: "killed" is right :)
16:57:50 <hemna> DuncanT, agreed
16:57:55 <hemna> lol
16:58:13 <guitarzan> I think the last example, type-create feels weird
16:58:33 <winston-d> anyway, if you are interested in rate-limit, qos, you may want to take a look at this as well: https://etherpad.openstack.org/cinder-extend-volume-types
16:58:36 <guitarzan> maybe extra-specs suck, but maybe that's just an interface problem
16:58:36 <DuncanT> I'd like to maybe see the capability stuff moved to a new class or classes, and volume types just become a bunch of those classes tied together
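[editor's note: the direction DuncanT describes might look like the following illustrative data: qos and encryption live in their own entities, and a volume type just ties references together, so the external user still only sees the type name. Every key and value here is invented for the example, and per guitarzan's point above, real entities would likely be keyed by uuids rather than these readable names.]

    # Hypothetical separate entities:
    qos_classes = {
        'gold-qos': {'read_iops_sec': 10000, 'write_iops_sec': 5000},
    }
    encryption_classes = {
        'luks-aes-256': {'provider': 'luks', 'key_size': 256},
    }

    # A volume type becomes a bundle of references plus extra specs.
    volume_types = {
        'gold': {
            'qos_class': 'gold-qos',
            'encryption_class': 'luks-aes-256',
            'extra_specs': {'volume_backend_name': 'backend1'},
        },
    }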
16:58:49 <hartsocks> hey guys… I have a meeting in this room in a few minutes.
16:58:55 <guitarzan> hartsocks: we're aware of that
16:58:59 <DuncanT> hartsocks: Ok
16:59:11 <hemna> we can move this to #openstack-cinder
16:59:29 <avishay> Direction seems right but I will have to look at that last etherpad
16:59:34 <DuncanT> Hmmm, since we're just about out of time, can I ask people to look at https://review.openstack.org/#/c/30291/ for testr migration please?
16:59:59 <winston-d> sure
17:00:05 <DuncanT> avishay: The details were made up off the top of my head :-)
17:00:20 <DuncanT> Right, I'm afraid we need to move everything else over to #openstack-cinder
17:00:26 <DuncanT> Thanks folks
17:00:29 <DuncanT> #endmeetign
17:00:29 <hemna> yah we're out of time
17:00:33 <winston-d> thx DuncanT
17:00:34 <avishay> DuncanT: some of the best and worst ideas come from that sort of thinking ;)
17:00:35 <zhiyan> thanks
17:00:37 <avishay> thanks
17:00:38 <DuncanT> #endmeeting