16:00:52 #startmeeting cinder
16:00:53 Meeting started Wed Dec 17 16:00:52 2014 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:58 The meeting name has been set to 'cinder'
16:00:59 o/
16:01:04 hi
16:01:07 o/
16:01:08 Hi
16:01:10 hi
16:01:10 hi
16:01:11 o/
16:01:11 hi
16:01:14 hi all
16:01:19 o/
16:01:21 hi
16:01:23 o/
16:01:24 hi
16:01:32 hi
16:01:33 meeting agenda https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting ;)
16:01:38 hi
16:01:38 first of all thanks to everyone who has been helping with last minute k-1 merges
16:01:44 here's the current priority list https://launchpad.net/cinder/+milestone/kilo-1
16:01:51 https://etherpad.openstack.org/p/cinder-kilo-priorities
16:02:00 o/
16:02:24 o/
16:02:29 We need the reviews in "ready" merged today.
16:02:45 thingee: aye aye captain!
16:02:50 jaypipes: here?
16:03:13 or jgriffith
16:03:18 thingee: I'm here
16:03:35 thingee: the DRBD driver is ready as of today.
16:03:37 thingee: yup
16:03:43 is that too late?
16:03:53 jgriffith: let's quickly talk about removing that CI wiki page or updating it
16:03:54 flip214: thanks, will take a look and retarget
16:03:57 hello all
16:04:02 I ask because it was in "abandoned"; I moved it up.
16:04:04 thingee: sure
16:04:13 thingee: so there are multiple wiki pages for CI right now :(
16:04:21 thingee: neither is fully up to date
16:04:46 o/
16:04:46 * smcginnis Highlander: There can be only one!
16:04:53 thingee: IMO we should pick one (or neither) and have one point of truth that's actually maintained
16:04:54 there's information here https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
16:05:06 thingee: whether that's market-place or other I don't care
16:05:07 and here https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
16:05:13 but we shouldn't have multiple copies
16:05:23 jgriffith: I agree
16:05:48 thingee: Quick question - had you reviewed the HA etherpad as last week's AR? I couldn't find anything on the ML about this.
16:05:52 I spent the time to move a lot of stuff out of the how-to-contribute-a-driver page to external wiki pages since it was getting out of date itself
16:05:57 dulek: no
16:06:03 end of milestone problem atm ;)
16:06:08 Okay, thanks!
16:06:12 everyone wants me
16:06:35 jgriffith: ok, I can do the same for this page and move stuff to point to the real third party wiki
16:06:46 and give some suggested forks of jaypipes's work that may or may not work
16:06:50 thingee: so are you stating which is the "real" page ?
16:06:51 :)
16:07:00 do you mind people referring to your basic ci repo?
16:07:20 well I don't think ours is the real one honestly. I don't think we should maintain one either.
16:07:28 thingee: agreed
16:07:31 it's bound to get out of date again
16:07:42 thingee: I think that was mostly set up for our own internal tracking early on
16:07:45 Are we talking about the 3rd party CI page?
16:07:54 jungleboyj: https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
16:08:13 I think infra's should be the one: https://wiki.openstack.org/wiki/ThirdPartySystems
16:08:26 ok, seems like no one is opposed. I'll start moving that out
16:08:30 ok, let's start today's agenda
16:08:39 thingee: Ok, because there was also the question about the accuracy of https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
16:08:49 #link https://wiki.openstack.org/wiki/CinderMeetings
16:08:50 jungleboyj: that's what started this
16:08:51 Should I make a call for people to get that updated with current status?
16:09:00 one more quick comment on 3rd party ci
16:09:05 #link Third-party self service account creation - http://ci.openstack.org/third_party.html#creating-a-service-account
16:09:33 for anyone who doesn't have an account yet ^^^
16:11:07 there is another one https://wiki.openstack.org/wiki/Cinder/third-party-ci-status
16:11:56 xyang1: yeah, that was my point earlier about there being two :)
16:12:03 xyang1: Yeah, that was the one I was asking about.
16:12:25 xyang1: there's another one under the infra wikis as well
16:12:44 Do we need anything more than the contact info on infra?
16:12:50 xyang1: which I can't seem to find again at the moment :(
16:12:55 https://wiki.openstack.org/wiki/ThirdPartySystems
16:13:09 jgriffith: ^^ linked via infra
16:13:09 smcginnis: thank you kind sir
16:13:13 np
16:14:21 so my point earlier was that the Cinder free-form wiki should probably go away
16:14:34 and this infra page should be the "source of truth"
16:14:51 albeit I'd love to add a change to separate based on project
16:15:16 jgriffith: how about all the other information that was in the first wiki?
16:15:18 #topic Volume deletion while snapshots exist
16:15:18 smatzek: here?
16:15:18 jungleboyj: I'll update the wiki and respond to the ML post
16:15:23 #topic Volume deletion while snapshots exist
16:15:35 thingee: here.
16:16:09 thingee: Ok, won't insert myself there then.
16:16:13 no offense but why are we talking about this again?
16:16:25 we lost thingee
16:16:27 The blueprint https://review.openstack.org/#/c/133822/ has -1s from some Cinder cores and a +1 from another.
16:16:55 smatzek: this topic comes up every few months, and every few months we end up saying no
16:17:03 I'm back
16:17:09 sorry folks...irc issues
16:17:10 smatzek: maybe this time will be different...
16:17:35 jgriffith: are we done with this topic? :P
16:17:42 Why is it a Bad Thing?
16:17:44 thingee: :) yes
16:17:46 :)
16:17:49 :)
16:17:50 Some volume drivers have implemented snapshots in their backend by calling common methods to do volume clones.
16:17:53 smcginnis: +1
16:17:59 smcginnis: because there are a number of drivers where the snapshots are linked and they can't do this
16:18:02 deleting a volume and leaving hanging snapshots doesn't really make sense in our model of how things work.
16:18:04 smatzek: sorry I'll have to review the logs after...just had irc issues
16:18:05 IIRC Ceph is one, LVM is another
16:18:06 I don't understand the resistance to this.
16:18:14 if this were about deleting a volume and removing the snapshots associated with it, it would make sense
16:18:24 We've intentionally designed a use model that avoids this
16:18:32 and other backends can't delete volumes that have snapshots.
16:18:37 eharney: +1
16:18:41 hemna: that's what I just said :)
16:18:44 thangp: We aren't actually done. You didn't miss much.
16:18:45 hemna: linked :)
16:18:52 imo, it should be only for admin
16:18:55 thingee: We aren't actually done. You didn't miss much.
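A sketch of the restriction being discussed, paraphrased from memory rather than quoted from Cinder's code: the delete path rejects a volume that still has dependent snapshots before any driver is involved, which is why behavior does not diverge per backend here. The names below are stand-ins for illustration only.

    # Stand-in for cinder.exception.InvalidVolume in this sketch.
    class InvalidVolume(Exception):
        pass

    def check_volume_deletable(volume_id, snapshots):
        """Refuse deletion while dependent snapshots exist.

        `snapshots` is whatever the DB layer returns for this volume,
        e.g. the result of a snapshot_get_all_for_volume() style call.
        """
        if snapshots:
            raise InvalidVolume(
                "Volume %s still has %d dependent snapshots"
                % (volume_id, len(snapshots)))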
16:18:56 I don't have a problem with allowing deletion of volumes with existing snapshots, but the restriction that you can't do that is buried deep in cinder's DNA, so changing that will require a really strong use case
16:18:58 ours for one cannot delete the volume and leave the snapshot.
16:18:58 EMC VMAX and SVC are two examples. In these Cinder volume drivers it would allow space savings by allowing the volume to be deleted in this Nova flow: snapshot the Nova instance, delete the Nova instance, but the user wants to keep the image of the instance.
16:19:01 we would break
16:19:07 jungleboyj: a proposal involving some kind of "promote this snapshot to a volume" might make sense, so you didn't have hanging leaves
16:19:10 smcginnis: smatzek so the problem is you end up with goofy mixed behaviors based on driver
16:19:15 which is what we strive to NOT have
16:19:25 jgriffith: +1
16:19:26 eharney: jungleboyj there already is!
16:19:30 imo different driver behaviors are not the problem
16:19:32 eharney: jungleboyj create volume from snapshot
16:19:34 we're not going back to a matrix
16:19:45 eharney: jungleboyj if you implement that as a fancy promote, that's great
16:20:00 was the use case already mentioned for this?
16:20:04 heh sorry
16:20:13 Instance snapshot is totally different from volume snapshot
16:20:30 yup
16:20:32 yeah. I remember we defined what a 'snapshot' means in Cinder terms, and it's totally valid
16:20:52 use case?
16:21:02 The use case is Nova boot from volume. Nova snapshot instance, which produces a Glance image which references Cinder snapshots. Nova delete instance. The Cinder volumes backing the instance are not deleted as they have related snapshots.
16:21:20 smatzek, you are comparing apples and oranges
16:21:21 smatzek: does promote snapshot to volume followed by delete of the original volume address your use case?
16:21:51 images are not snapshots, they are self-contained separate entities. volume snapshots are not, for some backends.
16:21:55 tbarron: it does not since the Glance image metadata contains block device mappings which point to the snapshot using a snapshot_id field.
16:22:08 smatzek: instance snapshot is a glance image, it can be completely unrelated to cinder.
16:22:47 winston-d: yeah, but Bootable Volumes or attached Volumes create a Cinder Volume snap under the covers
16:22:51 the BDMs contain references to cinder snapshots?
16:22:57 winston-d: honestly it's kind of a lame implementation IMO
16:22:58 winston-d: they are related. 1. Nova boot from volume. 2. Nova snapshot instance. In this case Nova API will call Cinder snapshot on the volumes behind the VM, producing Cinder snapshots, which are then pointed to by Glance metadata
16:22:59 does it make sense for a flag in delete volume to also delete snapshots? I think that's all smatzek wants here
16:23:20 thingee: that does, but I don't think that's what he wants
16:23:25 thingee: so that was how i originally read this, and i like that idea, but that wasn't what he wanted
16:23:27 thingee, I thought he wanted to keep the snapshot around and delete the volume.
16:23:33 thingee: so currently when you boot you can say "delete on term"
16:23:39 but that pukes if you make a snapshot
16:23:51 which is "ok" IMO
16:24:04 all the tools are there; the volume, the snap etc
16:24:05 hemna: I think that is what he wants.
16:24:07 if that's not what smatzek wants, the use case should be rephrased :)
16:24:11 just create volume from snapshot
16:24:12 done
16:24:15 cinder volume delete -f ?
16:24:15 jungleboyj, yah and that's bad IMO
16:24:16 moving on :)
16:24:18 "The use case is Nova boot from volume. Nova snapshot instance, which produces a Glance image which references Cinder snapshots. Nova delete instance. The Cinder volumes backing the instance are not deleted as they have related snapshots."
16:24:19 what I want is to allow an extra capability that can be set on a per-volume driver basis which will allow volume drivers to delete Cinder volumes in Cinder and in the backend while leaving the Cinder snapshot.
16:24:23 bswartz: nope
16:24:27 thingee: you'd need such a flag on "delete consistency group" then too...
16:24:28 jungleboyj, if all he wants is a cascade delete, then that's ok.
16:24:33 bswartz: you CAN'T delete volumes with snapshots
16:24:39 bswartz: period, full stop
16:24:40 :)
16:24:45 jgriffith, +1
16:24:48 cinder volume delete -rf ?
16:24:53 bswartz: LOL
16:25:01 there's still an alternate proposal to delete volumes and snapshots
16:25:04 the implication would be delete the snaps first and the volume last
16:25:06 bswartz: seems that's what might be proposed
16:25:15 smatzek: for your first use, sure. I'm not sure we want your second use case.
16:25:19 smatzek, yah that's what I was afraid you were proposing. That fundamentally won't work on many of Cinder's arrays.
16:25:22 reasons already expressed by others.
16:25:23 jgriffith: That's not what's proposed.
16:25:23 bswartz: you need root privilege to do that
16:25:41 * flip214 .oO( bikeshed painting ... )
16:25:44 smatzek: it's not in your proposal, but it was just suggested in the rolling delete option
16:25:55 What's being proposed won't work for many Cinder volume drivers. But it would work for some.
16:26:03 eharney, that would be ok. I think of it as a cascade delete. :) delete the volume and delete all its snapshots as well.
16:26:07 smatzek: which makes it unacceptable
16:26:08 hemna: right
16:26:18 smatzek, and that's exactly what we are trying to avoid in Cinder.
16:26:21 smatzek: ^^ see suggestion of cascading delete
16:26:23 forget the driver differences. The concept of "a snapshot that refers to no volume" doesn't make sense.
16:26:32 eharney: +1
16:26:32 eharney: +1
16:26:40 ok folks I'm moving on. There has been an expressed concern with this feature due to it having different behavior across drivers, which we don't want
16:26:43 smatzek: eharney that's why we introduced clone :)
16:26:47 maybe Glance should be using clones instead of snaps for this..
16:27:01 eharney: +1
16:27:10 eharney: yeah... but on a side; that's not so hot when using LVM
16:27:11 next song.
16:27:12 next
16:27:13 eharney: +1
16:27:15 hemna: +1
16:27:20 eharney: you mean Nova when creating instance snapshot for BFV?
16:27:40 So just to be clear....
16:27:53 snapshots of instances running BFV is kinda ridiculous anyway
16:28:00 you're using a persistent store
16:28:01 winston-d: dunno, but it feels like it's doing something with a snapshot id that is questionable. I don't really know, so, we should probably sort out that design elsewhere...
16:28:05 eharney: I would like that Nova implementation better as well, but Nova's BDM specification doesn't currently allow it.
16:28:06 you already have an image
16:28:08 per se
16:28:22 if you want to upload it to glance as a regular image... good enough
16:28:30 convert to image
16:28:31 done
16:28:33 moving on
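A sketch of the two workflows the discussion above points to instead of deleting a volume out from under its snapshots: "promote" a snapshot by creating a new volume from it, or cascade by removing the snapshots first and the volume last. This uses python-cinderclient; `cc` is assumed to be an already-authenticated client, the function names are made up for illustration, and status polling and error handling are omitted.

    def promote_snapshot(cc, snapshot_id, size_gb):
        # "Create volume from snapshot" is the existing feature referenced
        # above; the new volume stands on its own, so the original volume
        # can go away once its snapshots have been cleaned up.
        return cc.volumes.create(size_gb, snapshot_id=snapshot_id)

    def cascade_delete(cc, volume_id):
        # The "delete the snaps first and the volume last" variant:
        # remove every snapshot of the volume, then the volume itself.
        for snap in cc.volume_snapshots.list():
            if snap.volume_id == volume_id:
                cc.volume_snapshots.delete(snap)
        # A real implementation would wait for the snapshot deletions to
        # finish before issuing the volume delete.
        cc.volumes.delete(volume_id)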
16:28:45 #topic Return request ID to caller
16:28:54 abhijeetm: here?
16:29:00 yes
16:29:12 #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/052822.html
16:29:41 which is a good solution to return the request id back to the caller?
16:30:03 abhijeetm: +1 to solution 1 ....
16:30:27 abhijeetm: to be clear... by caller you mean "internal code"
16:30:32 if there's no split made now, it might be necessary in the future. better do it now while there are fewer users.
16:30:45 abhijeetm: not like a response to "nova volume-attach xxxx" on the cmd line
16:30:48 client
16:30:50 caller of a method in the client lib
16:30:59 i think?
16:31:10 eharney: me too, but that's why I'm asking
16:31:23 eharney: because depending on the answer my opinion changes :)
16:31:47 But solution # 1 is not compatible with services which are using the old cinder client
16:32:14 abhijeetm: but it doesn't break them
16:32:23 and solution # 2 is already merged in glance : https://review.openstack.org/#/c/68524/7
16:32:28 I think I already decided this on a patch a while ago and it was solution #1
16:32:34 not sure why this is being brought up again
16:33:06 abhijeetm: is there anything else?
16:33:34 #topic Reopen discussion of "RemoteFS configuration improvements"
16:33:39 no, I will submit a spec for sol # 1
16:33:43 erlon: here?
16:33:46 hi
16:33:48 thanks
16:33:58 #link https://review.openstack.org/#/c/133173/
16:34:06 spec^
16:34:11 summary of problem:
16:34:14 #link https://etherpad.openstack.org/p/remotefs-improvments
16:34:33 so, basically the discussion about this is
16:34:45 erlon: how about just moving all of these to Manila :)
16:34:52 gah!
16:34:59 the main question this arrived at, i think, is whether NFS/GlusterFS/RemoteFS drivers should support pools
16:35:04 please don't do the manila thing
16:35:09 lol
16:35:10 the spec is that we could use pool-aware scheduling and have the same benefits proposed
16:35:19 * jgriffith shuts up
16:35:30 why not move this to Manila ?
16:35:36 i was planning to eliminate the pseudo-scheduler code in the drivers in favor of pushing toward multi-backend in Kilo
16:35:40 hemna: it's not relevant
16:35:44 because Manila is not for serving volumes to instances.
16:35:54 blocks over NFS belongs in cinder
16:35:59 what? isn't the pool support strongly requested by NetApp's NFS driver?
16:36:05 blocks over any filesystem is a cinder thing
16:36:10 hemna: they're right... just the architecture is broken
16:36:14 using the pool-aware scheduler we eliminate the pseudo-scheduler
16:36:16 we support pools on our NFS driver already
16:36:18 This proposal pushes more toward pool support instead
16:36:20 s/requested/demanded/
16:36:30 this is not relevant for hardware NFS platforms/drivers for the most part
16:36:38 afaict
16:36:47 winston-d: +1 Certainly was "DEMANDED"
16:37:05 and I'm puzzled after being called "unfair" to Netapp for initially not liking it
16:37:19 ok so what's the problem with enabling the NFS drivers w/ pools ?
16:37:29 I'm not sure what the problem is either
16:37:32 the spec looked good to me
16:37:37 bswartz, +1
16:37:44 which spec?
16:37:46 it solves an existing problem
16:37:58 the initial NFS driver was implemented in a bad way, and this tries to fix it
16:38:01 there may not be a big problem, this proposal just popped up and conflicts somewhat with the spec we already reviewed
16:38:12 and i don't know how they mesh or whether they should yet
16:38:15 bswartz: but it conflicts with the pool-aware design that most drivers are moving toward
16:38:20 eharney: https://review.openstack.org/#/c/133173/
16:38:30 eharney, ok, so it sounds like we need another spec to introduce pools to the NFS drivers, and remove the merged spec ?
16:38:39 bswartz: read line 11 of the etherpad
16:38:41 erlon: I disagree, it just requires drivers to handle the pools case themselves if they want
16:38:44 it's not clear that the NFS drivers should support pools
16:38:50 or what bswartz said as well
16:39:15 sigh... now I'm utterly unsure what to make of this
16:39:29 Pools were proposed for NFS to begin with, weren't they?
16:39:32 if an NFS-based driver wants to export multiple pools, it needs to handle that internally -- not by relying on the parent class's broken approach
16:39:33 if a driver wants to support pools, great. if not, great. next.
16:39:35 let me rephrase
16:39:41 bswartz: it makes it impossible for NFS drivers to use pools
16:39:42 eharney: and now you're pointing out they shouldn't be there?
16:39:57 everyone is mixing up two kinds of NFS drivers here.
16:39:59 it's not impossible to support pools on NFS, NetApp does that today
16:39:59 this proposal fixes the parent class without harming all the drivers that inherit from it, I believe
16:40:09 bswartz: +1
16:40:13 hemna: the proposed changes to Shares might make that "not possible"
16:40:18 there is a basic NFS driver where you have a list of shares that it mounts and it throws volumes across them, the basic software NFS driver
16:40:31 there are other NFS drivers that are attached to hardware that has pools from the hardware side
16:40:40 eharney: yes, and that's the broken thing, which should be fixed
16:40:42 this proposal is, we should use pools on the software basic NFS driver as well
16:41:04 this proposal = the one erlon just posted
16:41:12 I wouldn't block enhancements to the generic NFS driver as long as they can be overridden by child classes
16:41:14 https://review.openstack.org/#/c/141850/
16:41:23 akerr: it is possible once we make a small change on the RemoteFS driver
16:41:33 so the question of blocking is: we are redesigning the configuration for the remotefs/nfs drivers
16:41:47 and we need to decide if we are redesigning it the way i proposed, or in a different way
16:41:49 eharney: thank you :)
16:42:24 +1 for the way eharney proposed
16:42:31 having just looked at this proposal <24h ago, i haven't thought through the design enough to know all the details
16:43:39 at the very least, we need to do what eharney proposes to fix an existing annoying config issue
16:43:49 eharney: we don't need to redesign the configuration, only the scheduling support part
16:44:04 if someone wants to subclass the generic NFS driver with a pool-supporting-NFS-driver that's cool, and they can configure that however they want
16:44:20 bswartz: I only disagree with the 1 pool limitation
16:44:33 the configuration problem is totally ok
16:44:33 bswartz: makes sense
16:44:34 erlon: fix that limitation in a subclass
16:44:47 why not just make the share config option a list instead of a single share, and support pools in the generic nfs driver
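To make the suggestion just above concrete, here is a sketch (not the proposed patch, and not any vendor's implementation) of the idea of one backend reporting each configured NFS share as a pool. The dict is the kind of thing a driver's get_volume_stats() would return; the field names follow the pool-aware scheduling convention as generally documented, and the capacity callback is assumed for illustration.

    def build_pool_stats(shares, get_capacity_gb, backend_name):
        # One scheduler pool per configured share, so a single backend can
        # carry several shares instead of the driver's old pseudo-scheduler
        # picking a share itself.
        pools = []
        for share in shares:  # e.g. ['srv1:/export_a', 'srv1:/export_b']
            total_gb, free_gb = get_capacity_gb(share)
            pools.append({
                'pool_name': share,
                'total_capacity_gb': total_gb,
                'free_capacity_gb': free_gb,
                'reserved_percentage': 0,
                'QoS_support': False,
            })
        return {
            'volume_backend_name': backend_name,
            'vendor_name': 'Open Source',
            'driver_version': '1.0',
            'storage_protocol': 'nfs',
            'pools': pools,
        }

The scheduler would then address each share roughly as host@backend#pool, which is the alternative to configuring every share as its own backend via multi-backend.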
16:45:05 akerr: that'd be possible, but i thought that's what multi-backend was for
16:45:09 don't force everyone who inherits from remotefs to have the added complexity
16:45:12 so why add the complexity
16:45:21 +1
16:45:28 eharney: +1
16:45:33 bswartz: it does not make sense to write a whole subclass for a driver if I can change only a few lines on the base class
16:45:39 if I have multiple exports on a single share server it makes more sense to allow that to be set up as a single backend with multiple pools
16:45:46 eharney: +1
16:45:59 akerr: +1
16:46:06 akerr: +1
16:46:09 erlon: if those few lines are going to be inherited by a bunch of classes that don't want them, then it does
16:46:21 erlon: seems like folks are leaning more towards not even having it in the base class.
16:46:31 akerr: +1
16:47:31 do we need a vote here?
16:47:41 I think people already voted
16:47:50 it seems like people don't want this in the base class
16:47:59 honestly i'm still kind of undecided based on evaluating the actual impact
16:48:02 thingee: did you count?
16:48:25 because i'm not sure i saw in the patch the same thing we were discussing here
16:48:51 akerr got quite a few votes too, IIUC, he'd prefer having this in the base class?
16:49:17 ok, so we're split there.
16:49:29 eharney: can you finish reviewing and weigh in once you're done?
16:49:39 i'm not sure this is the whole thing, though?
16:49:50 the patch covers passing more info into the driver but not the configuration implications to meet the stated goal
16:50:20 so i'm having trouble seeing the big picture, will need to review more i suppose
16:50:22 thingee: eharney the stated goal for configuration doesn't need to change at all
16:50:39 well... it has to not change, when we were planning to change it
16:51:04 imo <24h of thought and review is not enough to reach the end of this
16:51:31 ok, I'll let eharney continue to weigh in and defer to that. Otherwise I'll go with the rest of the votes here.
16:51:42 we could defer this for more time to review?
16:51:46 * flip214 time flies
16:51:54 #topic Cinder unit test coverage improvement
16:51:59 winston-d: you're up
16:52:10 really quick
16:52:46 got 2 go
16:52:52 I did some UT coverage tests against Cinder and other openstack projects (Nova, Neutron, Swift) as well
16:53:17 and Cinder has the lowest coverage, which is ~74% on the master branch.
16:53:37 winston-d: I guess that's partly because there are many drivers that never get used?
16:53:44 winston-d: how much of it was core + LVM code?
16:53:48 flip214: unit tests should cover drivers as well
16:54:00 I'd like to encourage everybody to do a better job writing UT
16:54:11 jgriffith: agree
16:54:17 winston-d: +1
16:54:19 flip214: some drivers from big vendors also have low coverage
16:54:23 jgriffith: yes, but not count code for *all* drivers if only a single one is run.
16:54:24 winston-d: I'd second that, but I'd also add that I think our architecture of the unit test code needs some work
16:54:36 flip214: it doesn't work that way...
16:54:36 jgriffith: agreed
16:54:48 winston-d: no disagreements there. I think the general response from people when asking for more coverage, and I've experienced this myself when I've asked in reviews, is that we don't need to cover everything.
16:54:51 jgriffith: so, will we force moving to mock?
16:55:10 e0ne: that's not even my point, but I think we're slowly doing that
16:55:12 jgriffith: if it only counts driver code toward the total when the driver is in use, then okay. I thought the LOC of all drivers were summed up.
16:55:28 e0ne: my point is more around how unit tests are written and what they're checking
16:55:37 I'll start contributing some UT to the cinder framework, but driver developers, please take a look at the coverage rate for your driver.
16:55:39 flip214: this isn't the time to debate this, we can talk in #cinder
16:55:42 winston-d: how did you check the coverage?
16:55:45 jgriffith: got it
16:56:10 rhe00_: ./run_tests.sh -c and open the generated html
16:56:23 ok, thanks
16:56:24 can we also start moving the files to mimic the code's directory structure? The flat test directory is getting a git large and unwieldy
16:56:24 ok, thanks winston-d
16:56:35 #topic open topic
16:56:59 I've seen many new drivers just testing the 'default' case, which gives roughly 3/4ths of coverage easily
16:57:05 so I'm going to cancel the meeting next week
16:57:05 akerr: a git large huh ? ;-)
16:57:10 bit*
16:57:14 holidays for some folks
16:57:19 thingee: will there be meetings next week/the following week?
16:57:26 oh nm
16:57:47 31st...are people going to be back?
16:57:56 thingee: it makes sense
16:57:56 ummm new years?
16:57:58 not me :)
16:57:58 thingee: No.
16:58:00 no
16:58:01 no
16:58:02 akerr: +1
16:58:11 no
16:58:19 ok, so the 7th we'll meet again!
16:58:32 https://review.openstack.org/#/c/136980/ any strong exceptions on this patch?
16:58:34 I'll update the wiki accordingly
16:58:46 merry xmas and a happy new year. :)
16:58:50 (sorry for abruptly adding it)
16:58:51 ok... the 7th is Christmas for some of us...
16:58:54 reminder to folks, especially core... today is the last day for us to help drivers get in
16:58:57 https://etherpad.openstack.org/p/cinder-kilo-priorities
16:58:58 s/ok/oh
16:58:59 is the deadline for blueprints not related to drivers tomorrow?
16:59:03 Oh, one thing from me.
16:59:11 thingee: +1 ;)
16:59:27 review time! :-)
16:59:30 I have hotel information in the mid-cycle meet-up wiki: https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup
16:59:36 rushil: ? deadline for non-driver related bp's?
16:59:42 I encourage core to sign up for what they want to help with so we don't spend time on the same patches/burn out etc.
16:59:42 Please update the list if you are planning to come!
16:59:43 thingee: can you clarify that?
16:59:52 jgriffith: Yes
16:59:58 jgriffith: we haven't discussed it
17:00:01 thingee: seems problematic to me
17:00:06 thingee: ok... I didn't think so
17:00:18 jgriffith: I
17:00:27 Seems like there's a lot of FUD going around about deadlines, features etc
17:00:36 jgriffith: +1
17:00:39 maybe we should write something up that's *clear*
17:00:47 jgriffith: +1
17:00:47 jgriffith: not sure what you mean
17:00:51 we'll talk in the cinder room
17:00:53 so backup driver BP spec already submitted but not yet approved isn't excluded b/c of this deadline, right?
17:00:53 #endmeeting