16:00:15 #startmeeting cinder
16:00:16 Meeting started Wed Jun 12 16:00:15 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 The meeting name has been set to 'cinder'
16:00:23 Hey everyone!
16:00:23 o/
16:00:30 hi
16:00:41 hi
16:00:46 hi
16:00:48 agenda for today: https://wiki.openstack.org/wiki/CinderMeetings
16:00:51 hi
16:01:01 hello
16:01:26 One thing I'd like to ask, when folks add items to agenda do me a favor and put your name on there :)
16:01:33 that way we know whose topic it is :)
16:01:41 jgriffith: sure
16:01:46 and on that note...
16:01:55 #topic Ceph as option for backup
16:02:05 hi
16:02:26 I'm down with that, don't know whose proposal it is, but makes sense to me
16:02:27 Hello
16:02:33 I'm curious how much effort is involved
16:02:35 jgriffith: it was seiflotfy_
16:02:43 jgriffith its mine
16:02:48 ie Ceph/Swift compatibility should be pretty easy I would've thought
16:02:50 ahh..
16:02:55 Given ceph can pretend to be swift, I think you get that for free now?
16:02:57 so there are 2 ways to do it and i would like to discuss which one would fit better with upstream
16:02:59 seiflotfy_: anything specific you want to bring up?
16:03:02 seiflotfy_: I don't think anyone is opposed to the idea. Is there anything you need?
16:03:07 1) we use ceph swift api
16:03:14 Indeed
16:03:18 We just check how to do so
16:03:31 2) we actually add direct support for it in openstack
16:03:46 (which would require a decent amount of code)
16:03:50 We have to do some tests on it but in theory it should work easily
16:03:51 seiflotfy_: really that's your decision. :)
16:04:08 seiflotfy_: I don't care either way, as long as it works
16:04:15 thingee: +1 :)
16:04:25 seiflotfy_: just curious what option #2 buys you over #1?
16:04:28 I'd certainly be interested in hearing how you get on with trying to implement a backup driver, if you go that route...
16:04:33 thingee: well if we go with 1) then the coding might not even be needed, just more configuration
16:04:43 it needs to be tested
16:04:43 seiflotfy_: yup
16:05:01 seiflotfy_: I was under the impression since it's a compatible api, there shouldn't be a problem
16:05:03 in any case I think i will start with 1) then later head to 2)
16:05:11 since it will require some refactoring of the code
16:05:16 seiflotfy_: sounds like a good idea to me :)
16:05:22 seiflotfy_: I've been thinking adding an rbd or rados backup target that can do differential backups would be useful
16:05:22 yup sounds good
16:05:32 mkoderer: went through it and it looks like it will require some refactoring to not make swift the only hardcoded option
16:05:36 but trying 1) first makes sense to me
16:05:51 just flowing through the agenda
16:05:56 refactoring is needed for option 2)
16:06:02 1) should work out of the box
16:06:06 seiflotfy_: Should be a single config option to change the backup target... ping me if it looks harder
16:06:37 I think both options are good... I have no objections, don't think anybody else would either
16:06:51 So, unless there are any questions?
16:06:55 Hi all
16:07:22 Hey avishay.
16:07:23 avishay: yo
16:07:30 morning
16:07:32 Ok, next item
16:07:38 ok cool can i take this task then
16:07:39 ?
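
To make option 1 concrete, here is a minimal sketch (not from the meeting) of what "Ceph pretending to be Swift" looks like in practice: a stock python-swiftclient connection pointed at a radosgw endpoint. The URL, credentials, and the cinder option name in the comments are illustrative assumptions.

    # Illustrative only: talk to a Ceph radosgw endpoint through its
    # Swift-compatible API with an unmodified Swift client.
    from swiftclient import client

    conn = client.Connection(
        authurl='http://radosgw.example.com/auth/v1.0',  # placeholder radosgw auth URL
        user='backup:swift',                             # placeholder credentials
        key='secret',
        auth_version='1')

    # If this works, cinder-backup's existing Swift driver should only need its
    # endpoint configuration (e.g. a backup_swift_url-style option) pointed at
    # radosgw -- no new driver code, which is why option 1 is mostly configuration.
    conn.put_container('cinder-backups')
    print(conn.head_account())
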
16:07:43 hi avishay, hemna
16:07:45 me and mkoderer would do it
16:07:48 seiflotfy_: it's all yours :)
16:07:49 hi
16:07:52 ;)
16:08:11 seiflotfy_: You should link up with jdurgin1 when you get around to looking at option 2
16:08:38 avishay hi
16:08:40 hi hemna, could you pls share the progress about the brick implementation?
16:08:41 #topic brick status update
16:08:50 heh
16:08:50 winston-d: hi
16:08:54 :)
16:08:59 ok, well I have a WIP review up on gerrit
16:09:13 I believe I have the iSCSI code working now
16:09:24 https://review.openstack.org/#/c/32650/
16:09:36 I am just doing some more testing and waiting for my QA guy to give me the thumbs up
16:09:41 including attach and/or detach code?
16:09:54 * jgriffith wants his own QA person!
16:09:54 yes, this is the iSCSI attach/detach code
16:09:59 heh
16:10:00 cool
16:10:03 haha
16:10:15 I've modified the base ISCSIDriver in cinder to use the new brick code and it works
16:10:16 hemna: works for copy image to volume as well, right
16:10:17 (for me)
16:10:22 I could do some testing and QU and supporting seiflotfy_ and mkoderer
16:10:24 xyang_, haven't tried it yet
16:10:27 hemna: you mean on the attach
16:10:27 QA*
16:10:35 :-)
16:10:35 hemna: I moved the target stuff a while back :)
16:10:43 xyang_, I haven't modified the copy image to volume method yet to use brick...that's why it's a WIP still
16:10:45 hemna: there's an issue with nova that disconnecting from an iscsi target disconnects all LUNs...is that a problem here?
16:11:09 +1
16:11:11 avishay, if that's a bug in the current nova libvirt volume driver, then yes, it's a bug in this code
16:11:13 :D
16:11:13 hemna: ok thanks
16:11:45 hemna: no it's not - libvirt keeps track of which VMs are using what, so they disconnect only if nobody is using
16:11:52 avishay: there is a check in nova libvirt volume detaching code...
16:11:53 hemna: do we need similar tracking?
16:12:04 avishay, yes, that code is in this brick code as well
16:12:05 avishay: yes
16:12:11 but we aren't attaching to VMs
16:12:14 hemna: sweet
16:12:24 we are just attaching to the host and using the LUN and then detaching it
16:12:25 hemna: i know, but we still may have multiple LUNs, right?
16:12:53 yes we'll have multiple LUNs
16:12:57 hemna: since copy image to volumes is from cinder, we may still have that problem
16:13:07 but we should only be detaching the LUNs we are done with at the time
16:13:11 hemna: cinder doesn't know what luns are attached
16:13:44 the way nova looks at the attached LUNs is by querying the hypervisor
16:13:47 hemna: there's a log out call at the end if no luns are attached, that is one thing we don't know in cinder
16:14:06 we don't have a hypervisor in this case
16:14:23 so we probably need to track the connections ourselves
16:14:35 hemna: xyang_ but can't we add that through initiator queries?
16:14:50 well in our case it's always an attach, use, detach for a single LUN
16:15:02 we aren't attaching, then going away and then detaching at some later time.
16:15:05 sorry guys joining late here, if there is a moment at the end I have a few words on the ceph-backup bp
16:15:09 jgriffith: avishay and I discussed that. so driver can find it out but cinder has to make an additional call
16:15:11 but if cinder dies in that serial process....
16:15:27 hemna: states will fix that for us :)
16:15:31 cinder never dies!!
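
For readers following along, a rough sketch of the attach / use / detach cycle hemna describes, written against the connector interface that later shipped as the os-brick library; at the time of this meeting the code lived under cinder/brick, so names in the WIP patch may differ, and the connection properties and helper below are placeholders, not code from the review.

    from os_brick.initiator import connector

    conn = connector.InitiatorConnector.factory('iscsi', root_helper='sudo')

    # Normally returned by the volume driver's initialize_connection();
    # the values here are made up.
    connection_properties = {
        'target_portal': '192.168.0.10:3260',
        'target_iqn': 'iqn.2010-10.org.openstack:volume-0001',
        'target_lun': 1,
    }

    device_info = conn.connect_volume(connection_properties)   # attach to this host
    try:
        # "use the LUN": e.g. write an image onto it for copy-image-to-volume,
        # or read it back for copy-volume-to-image.
        use_the_lun(device_info['path'])                        # hypothetical helper
    finally:
        # The detach step the existing copy-volume-to-image path was missing.
        conn.disconnect_volume(connection_properties, device_info)
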
16:15:35 :)
16:15:40 so yes, we aren't currently tracking (storing in a DB) which LUNs we have attached
16:16:00 I hate to go down the path of BDM type stuff in Cinder
16:16:06 yah
16:16:13 I'd like to keep this simple for the first round
16:16:18 what if we get two calls that attach at the same time?
16:16:18 +1
16:16:20 it's already better than the code we copy/pasted from nova
16:16:28 that's existing in cinder now
16:16:44 avishay, lockutils
16:16:51 i'm fine with keeping it simple for the first pass, but we should keep these issues in mind
16:17:02 yup!
16:17:03 avishay: I hear ya
16:17:14 it's something we can mull over for H3
16:17:20 hemna: works for me
16:17:24 avishay: Check out the code and raise a bug if you can see a specific scenario that would break it...
16:17:33 DuncanT: yup
16:17:40 as it stands today there are issues with the existing copy volume to image code that doesn't work
16:17:44 that I discovered in the process
16:17:50 like....we never detach a volume.....
16:17:52 :(
16:18:16 this WIP patch already addresses that issue.
16:18:26 Ok, the only other thing there (I think) is the LVM driver migration
16:18:30 I saw another issue in the existing code that failed to issue iscsiadm logins as well
16:18:32 hemna: there was no detach precisely because of the issue i raised
16:18:37 I am hoping to have that done here shortly
16:18:55 jgriffith, hemna: separate commit for the disconnect and backport?
16:18:56 avishay, that leads to dangling luns and eventually kernel device exhaustion. :(
16:18:56 After that we've got the key components in brick and we've got something consuming all of them
16:19:08 thingee: hmmm?
16:19:35 jgriffith: errr copy volume to image code not detaching
16:19:45 thingee: ahh...
16:19:47 :)
16:19:52 just for Grizzly backport ?
16:19:56 hemna: if nova and cinder are running on the same host, cinder might logout of nova luns
16:20:12 hemna: yea
16:20:20 Oh I guess that was folsom too
16:20:23 hmm
16:20:26 avishay: I'm still unclear on how this got so convoluted
16:20:38 can you issue a copy volume to image when a volume is attached to a VM ?
16:20:39 avishay: We *know* what lun we're using when we attach for clone etc
16:20:47 there could be more than one lun on the same target, if we logout in copy image to volume, other luns can be affected
16:21:06 xyang_: understood, but since we know the lun why can't we log out "just" that lun
16:21:11 the problem is that when you logout, it disconnects ALL luns on the same target
16:21:27 you can't log out of just one AFAIK
16:21:32 well logout is a separate issue from removing the LUN from the kernel
16:21:32 * winston-d checking connectivity
16:21:45 right, but what I'm saying is I *believe* there's a way to do a logout on JUST the one session/lun
16:21:49 this is how iscsiadm works when it logs in to a target
16:21:50 hemna: only grizzly. folsom just gets security fixes now
16:21:50 you can remove a LUN from the kernel by issuing an scsi subsystem command
16:21:50 maybe there is a better way than what nova does
16:21:55 w/o doing an iscsi logout
16:22:09 avishay: that's what I'm wondering
16:22:28 avishay: xyang_ regardless... I'd propose we file a bug to track it (thought we already did though)
16:22:33 you don't need to do a logout to remove a lun
16:22:33 hemna: so remove from the kernel, then you can check if there are no more luns and logout?
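
To illustrate the point just made, dropping a single LUN from the kernel without logging out of the whole iSCSI session, here is a minimal sketch assuming root privileges; the device name, IQN, and the by-path matching are placeholders, and this is not code from any of the patches discussed.

    import glob
    import subprocess


    def remove_single_lun(device_name):
        """Delete one block device (e.g. 'sdb') from the SCSI subsystem."""
        with open('/sys/block/%s/device/delete' % device_name, 'w') as f:
            f.write('1')


    def logout_if_idle(target_iqn, portal):
        """Only log out of the iSCSI session once no devices from it remain."""
        if not glob.glob('/dev/disk/by-path/*%s*' % target_iqn):
            subprocess.check_call(['iscsiadm', '-m', 'node', '-T', target_iqn,
                                   '-p', portal, '--logout'])
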
16:22:40 and address it after we get hemna 's first version landed
16:22:42 jgriffith: I think avishay already logged a bug
16:22:43 you should only logout from an iscsi session when you are done with the host
16:22:44 jgriffith: there already is a bug
16:22:53 xyang_: avishay I thought so :)
16:23:06 avishay, yah there is a way I believe
16:23:14 OK, so there is a bug open, let's fix it in v2
16:23:23 requires some smart parsing of kernel devices in /dev/disk/by-path and knowing the target iqns, etc
16:23:29 I guess the *right* answer is actually the opposite of what I just said
16:23:36 in order to do the backport correctly
16:23:52 fix it in the existing code now and backport, then move forward with the new common code
16:24:02 ok so we have like 3 issues here :)
16:24:14 1) the detach in the existing cinder code
16:24:36 2) iscsi logout issues that can cause host logouts when LUNS are in use
16:24:44 3) detaches from the kernel
16:24:57 FC is so much easier ;)
16:25:00 the important one here for now I think is the issue that thingee raised
16:25:06 :P
16:25:07 avishay: ha! Now that's funny
16:25:16 avishay as long as you have an HBA installed?
16:25:23 jgriffith: winston-d :)
16:25:31 I haven't started the FC stuff yet
16:25:31 winston-d: avishay and you don't care about things like zoning
16:25:41 zoning shmoning
16:25:45 hemna: one thing at a time :)
16:25:46 I'll probably do another patch for the FC attach/detach migration into brick
16:25:47 avishay: :)
16:25:56 hemna: yes, please do them separately
16:26:03 yah that was the plan :)
16:26:06 ah forgot dosaboy is working on the ceph blueprint
16:26:11 anyway, looks like a good start - nice work
16:26:12 so i will be trying to assist him then
16:26:23 the brocade guys are supposed to be working on the zone manager BP
16:26:30 sorry, guys my network connectivity is very unstable today.
16:26:39 Ok... anything else on this topic? I think hemna has a good idea of the challenges and the point thingee brought up
16:26:59 #topic QoS and Volume Types
16:27:00 what should be the plan for the Grizzly detach issue ?
16:27:13 ok nm we can hash it out in #openstack-cinder
16:27:19 hemna: sounds good
16:27:25 hemna: I would have liked to have seen that addressed already TBH
16:27:45 yah, I didn't notice it until I started the brick work :(
16:27:48 hemna: but yes, we'll talk later between xyang_ hemna and whoever else is interested
16:27:55 +1
16:28:00 sure
16:28:04 and avishay
16:28:15 sorry avishay you can't go home yet ;)
16:28:27 So... QoS
16:28:29 jgriffith: i'm already home :)
16:28:36 avishay: ;)
16:28:41 well then we're all set :)
16:28:53 yes, QoS please. :)
16:29:02 winston-d: where did your patch go?
16:29:09 ahh fond it
16:29:10 found
16:29:24 it's here: https://review.openstack.org/#/c/29737/
16:29:26 https://launchpad.net/cinder/+milestone/havana-2
16:29:30 oops
16:29:32 sorry
16:29:43 yeah.. what winston-d said ^^ :)
16:29:53 I don't know how many of you have looked at this
16:30:04 but I had some thoughts I wanted to discuss
16:30:17 I think I commented them pretty well in the review but...
16:30:36 to summarize, I'm not crazy about introducing unused columns in the DB
16:30:45 I have as well :)
16:30:52 kmartin :)
16:31:02 and I'm not sure about fighting the battle of trying to cover every possible implementation/verbiage a vendor might use
16:31:11 I had two possible alternate suggestions:
16:31:23 1. Use metadata keys
16:31:36 This way the vendor can implement whatever they need here
16:31:58 It's like a "specific" extra-specs entry
16:32:15 jgriffith: +1, non-first class features should not be introducing changes to the model.
16:32:43 The other option:
16:32:44 jgriffith: +1 seems like a sane solution
16:33:14 2. Implement QoS - Rate Limiting and QoS - Iops setting
16:33:24 jgriffith i have concerns about having vendor-specific implementation keys stored in DB for volume types, that makes volume types not compatible with other back-ends.
16:33:32 while I was working on the wsme stuff for the api framework switch, it made me realize how complex the volume object is becoming =/
16:33:48 as jgriffith mentioned, we're half of what instances are in nova
16:33:54 winston-d: actually... my proposal
16:34:04 winston-d: would make it such that it's still compatible, just ignored
16:34:29 winston-d: in other words if the keys don't mean anything to the driver it just ignores them
16:34:49 winston-d: this creates some funky business with the filtering, but I think we can resolve that
16:35:04 winston-d: just leave filter scheduling as a function of the "type"
16:35:08 the only thing drivers should agree on is the capability keys.
16:35:09 not QoS setting
16:35:18 thingee: I would agree with that
16:35:29 but...
16:35:43 The problem is I see little chance of us all agreeing on what QoS is and how to specify it
16:35:50 thingee i agree as well, but i think QoS is among capabilities.
16:36:09 winston-d: you're correct, but I think it's a "True/False"
16:36:13 You can't call it QoS - that term is overloaded. This is rate limiting.
16:36:15 Though that doesn't mean we shouldn't try to get drivers to agree (i.e. point out inconsistencies at review time), just let the standards be de facto rather than prescribed...
16:36:29 winston-d: and TBH I'm still borderline on whether I count rate-limiting as QoS :)
16:36:34 * thingee thinks there should be a way to extend capabilities if it's not a first class feature.
16:36:35 I thought this is what we discussed at the summit :)
16:36:41 thingee, +1
16:36:51 thingee: _1
16:36:54 ooops
16:36:56 +1
16:37:11 DuncanT: so the problem is... there's already an issue
16:37:34 DuncanT: For example, I use "minIOPS, maxIOPS and burstIOPS"
16:37:38 on a volume per volume basis
16:37:44 that can be changed on the fly
16:38:11 Others use "limit max MB/s Read and limit max MB/s Write"
16:38:17 folks, the QoS bp/patch was at first for client rate-limiting (aka, doing rate-limit at Nova Compute). so we have to deal with back-ends, as well as hypervisors.
16:38:22 While yet others use "limit IOPs"
16:38:38 winston-d: indeed
16:38:52 winston-d: but what I'm saying is maybe that should be "rate-limiting" and not QoS
16:38:54 jgriffith: On-the-fly changes don't seem to fit within the framework we've discussed
16:39:10 jgriffith: Nor per-volume limits (rather than per-type limits)
16:39:12 DuncanT: updates
16:39:26 so you would change those settings on the fly after the volume is created?
16:39:41 probably out of scope for this I would presume
16:39:46 hemna: Yes, that's something I need to be able to do
16:40:02 well... it's not something I'm asking winston-d to put in his patch
16:40:07 ah ok
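
A sketch of the "metadata keys" idea as jgriffith describes it: vendor-scoped key/value pairs carried on the volume type, applied by the back end that understands them and ignored by everyone else. The key names follow the minIOPS/maxIOPS/burstIOPS example above; the scoping convention and helper are illustrative, not taken from winston-d's patch.

    # Example extra specs on a volume type; only a back end that understands
    # the qos:-scoped keys acts on them, any other driver ignores them.
    extra_specs = {
        'volume_backend_name': 'backend-a',   # placeholder, used for scheduling
        'qos:minIOPS': '100',
        'qos:maxIOPS': '1000',
        'qos:burstIOPS': '2000',
    }


    def qos_from_extra_specs(extra_specs):
        """Return just the qos-scoped keys, with the scope prefix stripped."""
        return {key.split(':', 1)[1]: value
                for key, value in extra_specs.items()
                if key.startswith('qos:')}

    print(qos_from_extra_specs(extra_specs))   # {'minIOPS': '100', ...}

A driver with no notion of these keys gets a type it can still serve; the settings are simply ignored, which is the compatibility argument made above.
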
16:40:10 I definitely feel that is not within the discussed framework, other than via retyping
16:40:15 but it's something I'm keeping in mind with the design
16:40:25 DuncanT: correct
16:40:26 that smells like v2 to me
16:40:33 hemna there's no reason why not if back-end/hypervisor supports run-time modification
16:40:35 winston-d: does libvirt support changing rate limit settings after the volume is attached?
16:40:41 like a volume type update or something like that
16:40:41 DuncanT: well... it's just like "update extra-specs"
16:41:06 hemna: I'd like to have it be the same volume-type
16:41:16 So the volume-type just tells what back-end to use
16:41:17 jgriffith, but in this case do you want to update the volume type here, or the specific volume instance's settings
16:41:19 jgriffith: I don't think that changes existing volumes?
16:41:55 like for volume X, update its IOPS settings now.
16:42:06 hemna: DuncanT so I don't want to kill the discussion on winston-d 's work here with my little problems :)
16:42:09 avishay last time we checked, it should be able to do so. but I didn't try that out
16:42:09 but...
16:42:15 winston-d: ok
16:42:25 hemna: but yes, that's what I intend to do
16:42:30 that'd be cool :)
16:43:03 DuncanT: to start it most likely would have to be an update to the volume-type
16:43:21 so for example: volume-type: A, with QoS: Z
16:43:32 Update volume-type: A to have QoS: X
16:43:32 jgriffith: That is entirely outside of any scope of QoS discussed so far... and is going to cause major issues in regards to even slightly trying to standardise behaviours between backends
16:43:43 DuncanT: why?
16:43:56 DuncanT: and BTW I've already submitted a patch for this back in Folsom
16:44:10 well I think it's a new feature that hasn't been discussed yet, but should be put in a new BP and scheduled.
16:44:13 jgriffith: Because the possibility matrix explodes, as far as what backends can do what features
16:44:27 DuncanT: that's why I'm saying you don't hard code that shit
16:44:36 DuncanT: That's the whole point of using metadata keys
16:44:39 if we prefer K/V pairs for QoS metadata, maybe we should have a set of fixed keys?
16:44:55 winston-d: can you expand on that?
16:45:12 that's just the key standardization discussion all over again :)
16:45:30 hemna: 2 sessions at the summit :)
16:45:36 :)
16:45:43 and no conclusions, obviously
16:45:54 xyang_: hemna the good thing is it's pared down in terms of scope
16:46:03 true
16:46:12 avishay: I think we tried to tackle too large of a problem in the summit sessions
16:46:28 winston-d: can you tell me more about what you're thinking with the standard keys?
16:46:33 jgriffith: agreed. i also think that we failed to agree on simpler use cases than this.
16:46:35 for example, KVM/libvirt only accepts total/read/write bytes/iops per sec.
16:47:08 we need to keep track of what we're discussing...
16:47:18 so for a QoS setting that requires the client to do the enforcement, these keys must be there, at least 0
16:47:21 we're confusing client side, backend, qos, capabilities
16:47:53 winston-d: I get that
16:48:16 winston-d: so that brings me back to thinking that we have two types of performance control
16:48:27 1. Hypervisor rate-limiting
16:48:34 2. Vendor/Backend implemented
16:48:36 I think the whole idea behind the bp/patch is we try to find a way to express QoS requirements for volume types in Cinder, which can be consumed either by Nova or cinder back-ends.
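
The fixed keys winston-d mentions map onto the limits libvirt accepts per disk in its <iotune> element. A small sketch of turning such keys into domain XML, assuming the "at least 0" convention means 0 = no limit; the key names match libvirt, everything else is illustrative rather than Nova code.

    # The six rate-limit settings libvirt accepts per disk.
    FRONT_END_KEYS = ('total_bytes_sec', 'read_bytes_sec', 'write_bytes_sec',
                      'total_iops_sec', 'read_iops_sec', 'write_iops_sec')


    def iotune_xml(limits):
        """Render the recognised, non-zero limits as an <iotune> element."""
        lines = ['<iotune>']
        for key in FRONT_END_KEYS:
            value = int(limits.get(key, 0))
            if value:
                lines.append('  <%s>%d</%s>' % (key, value, key))
        lines.append('</iotune>')
        return '\n'.join(lines)


    print(iotune_xml({'read_iops_sec': 400, 'write_iops_sec': 200}))
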
16:49:18 winston-d: and that I guess is mixing 1 and 2?
16:49:22 winston-d: indeed, but what I'm proposing is
16:49:36 rushiagr: haha.... I think we're thinking the same thing
16:49:44 s/QoS/rate limiting/g might make this issue easier to agree on
16:49:52 winston-d: rushiagr so what if we had set keys for hypervisor limiting
16:50:01 and arbitrary K/V's for vendors
16:50:13 avishay: now that's more what I'm thinking!!
16:50:19 avishay well, it was simply client-side rate limiting at first. :)
16:50:26 avishay: I don't think we should treat rate limiting as QoS
16:50:40 10 min warning
16:50:45 DOHHHH
16:50:49 surprise
16:51:10 I don't think we're going to agree on representation
16:51:20 But I do think we should be able to agree on:
16:51:30 1. Should QoS and Rate Limiting be separate concepts
16:51:40 2. Should QoS be abstract K/V pairs
16:51:51 thoughts on those two points?
16:52:11 +1 to both
16:52:25 winston-d: avishay thingee kmartin rushiagr ?
16:52:31 I'm not convinced QoS and rate limiting are different concepts
16:52:36 +1 for 2
16:52:36 DuncanT: they are
16:52:39 +1 on 2. though
16:52:43 +1 for 2
16:52:43 yes
16:52:45 but I can argue with you over a beer on that one :)
16:52:51 +1 for 2.
16:52:55 QoS means different things to practically everyone
16:53:02 yup
16:53:05 I'm ok with #2 but #1 is the same as far as HP is concerned
16:53:08 I can say that Flash vs. HDD is QoS
16:53:24 do the decisions on 1 & 2 affect the client vs backend question?
16:53:31 the first one needs some more discussion i guess. Need to think more on the idea of separating hypervisor/backend stuff
16:53:32 kmartin, well HP 3PAR that is.
16:53:50 guitarzan client side usually can only do rate-limiting, AFAIK
16:53:52 QoS is more about guaranteed minimums than it is about maximums
16:53:54 avishay: we call those two different products
16:53:55 jgriffith: Certainly it is a non-trivial argument space but ultimately the only sane conclusion is that they are the same class of thing :-)
16:53:56 guitarzan: that might help me win my argument :)
16:54:04 bswartz, +1
16:54:04 jgriffith: I'll buy the first round
16:54:05 guitarzan: I could go for that :)
16:54:10 DuncanT: :)
16:54:22 Ok.. one more minute on this
16:54:28 I think we all agree on #2 then
16:54:35 The only question is #1
16:54:40 +1 on both
16:54:41 I'm willing to compromise here I think
16:54:47 Yay!!! bswartz
16:54:53 I think #1 and the client/backend question are easily bigger than "what keys"
16:55:09 +2 for 1 if we shift QoS to 'I' release...
16:55:14 I think guitarzan makes a good point, what about separating client and backend
16:55:30 and here's my last off the wall idea
16:55:35 winston-d: hmmm... that could hurt
16:55:47 maybe the client side stuff should be stuck on an "attachment" instead of the volume itself
16:56:00 guitarzan: I actually like that idea
16:56:09 guitarzan: I think it's come up before actually
16:56:10 jgriffith never mind. i can do both for 1.
16:56:38 winston-d: https://launchpad.net/~openstack/+poll/i-release-naming
16:56:44 What do others think of the separation of client/backend implemented?
16:57:02 jgriffith +1
16:57:13 fine by me
16:57:34 cool!
16:57:38 kmartin: you good?
16:57:40 hemna: ?
16:57:44 I think things may get refined after someone actually implements something
16:57:44 bswartz: rushiagr ?
16:57:44 jgriffith: But the end result is the same, whether the rate limit is enforced on hypervisor or backend
16:57:46 DuncanT: ?
16:57:47 s/think/hope/
16:57:58 +1
16:58:04 +1
16:58:05 I don't understand what the client side implementation has to do with cinder
16:58:11 +1
16:58:17 guitarzan https://review.openstack.org/#/c/29737/
16:58:21 DuncanT: well.. backend becomes K/V's and client is "set" semantics that get in the DB
16:58:29 winston-d: touche :)
16:58:32 bswartz just like volume encryption on the client side.
16:58:36 slight difference - doing it in the hypervisor adds ratelimiting to the network connection as well
16:58:47 clients are welcome to limit themselves, but it's not our business
16:59:11 bswartz: fair point, but I like the idea of having that setting in Cinder via the attach
16:59:18 winston-d: okay well I can see it from that perspective
16:59:20 bswartz: and it allows us to keep from double implementing
16:59:22 shocker, our one minute is over :)
16:59:36 bswartz: in other words set it on the backend and on the hypervisor
16:59:46 Darn you time!!!
17:00:07 bswartz: Like encryption, cinder is the single place to store this kind of info... and I'd really rather most customers don't see things like rate limiting
17:00:16 yup, that's time
17:00:23 okay I take your points
17:00:25 see ya all in #openstack-cinder
17:00:27 Ok... suppose that will do it
17:00:30 Shall we move to the cinder channel? I know dosaboy still has a question...
17:00:31 cinder does need to understand rate limiting
17:00:49 it always makes me feel that I'm back to OSD when discussing standardizing things among back-ends.
17:00:53 which is good. :)
17:01:00 :)
17:01:05 who's here for the vmware driver meeting?
17:01:11 alright, I need to wrap and go to my next meeting :(
17:01:18 #end meeting cinder
17:01:23 jgriffith: ah, sorry, thought you were already done
17:01:26 #endmeeting cinder
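
A closing sketch of where the discussion seems to land: abstract key/value pairs plus an explicit split between hypervisor-enforced rate limiting and back-end-implemented QoS, expressed by recording who consumes the spec next to the keys. This mirrors the general direction of the review linked above but is only an illustration, not its actual schema; all names and values are placeholders.

    # One possible shape: abstract K/V specs plus a 'consumer' that says who
    # should enforce them.
    qos_spec = {
        'name': 'gold',
        'consumer': 'front-end',        # hypervisor-enforced; could be 'back-end' or 'both'
        'specs': {
            'read_iops_sec': '400',
            'write_iops_sec': '200',
        },
    }


    def specs_for(qos_spec, consumer):
        """Return the key/value pairs a given consumer should enforce."""
        if qos_spec['consumer'] in (consumer, 'both'):
            return dict(qos_spec['specs'])
        return {}


    print(specs_for(qos_spec, 'front-end'))   # the limits Nova/libvirt would apply
    print(specs_for(qos_spec, 'back-end'))    # {} -- nothing for the driver here
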