15:59:59 #startmeeting cinder
16:00:00 Meeting started Wed Dec 5 15:59:59 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:04 The meeting name has been set to 'cinder'
16:00:31 kmartin: let's start with you today :)
16:00:37 #topic FC update
16:00:44 sure, we have good news
16:00:53 * jgriffith likes good news
16:00:57 We have a proof of concept working for the Fibre Channel support; working on a few issues with detach.
16:01:13 kmartin: awesome!
16:01:16 I updated the FC spec attached to the cinder blueprint and entered a new blueprint in nova for the required changes
16:01:46 kmartin: very cool
16:02:01 kmartin: do you think you guys are going to get the patches in within the next week or so?
16:02:08 jgriffith: hey~
16:02:18 winston-d: morning/evening :)
16:02:23 hello all
16:02:30 jgriffith: morning
16:02:33 avishay: hi~
16:02:39 hi everyone
16:02:53 hi eharney
16:02:54 * jgriffith thinks he started a touch early today :)
16:02:58 still need to get legal approval for sharing any code with a wider group, but we could set something up to show you
16:03:10 kmartin: so what do you think as far as when some patches will hit?
16:03:12 we could do a demo for you at some point
16:03:27 hemna_: That would be cool
16:03:29 we are still waiting for legal
16:03:37 bahhh!!!
16:03:39 we put the 3par driver through legal a week ago.
16:03:42 still waiting for that
16:03:42 * jgriffith dislikes lawyers
16:04:05 jgriffith: likewise
16:04:12 I'm glad it's not only IBM that's like that :P
16:04:16 Ya know, considering the investment and backing HP has in OpenStack this should be a no-brainer for them
16:04:17 There are still some underlying scsi subsystem issues I'm working out with FC, but it should be solvable
16:04:29 yah
16:04:37 hemna_: Ok...
so one recommendation
16:04:51 hemna_: kmartin Gigantic patches are not fun for anybody
16:04:57 I don't think they are hung up in legal... it just takes time for them to dot the I's, cross the T's and such
16:05:02 jgriffith: It is, but they just want to make sure; it will happen, it's just a slooooow process
16:05:18 hemna_: kmartin keep in mind if there's a way to break it into digestible chunks it'll help us move on them when you submit
16:05:35 jgriffith, I made clones of the nova, devstack, cinder repos internally and we are tracking against that and have our code checked into those clones
16:05:44 jgriffith, hemna_: +1
16:06:03 if we didn't have legal, then I'd make those public
16:06:14 hemna_: That's cool, but what I'm getting at is don't just dump a single multi-K-line patch
16:06:21 yah
16:06:23 agreed
16:06:29 hemna_: Try to break it into logical chunks as much as possible
16:06:29 the cinder patch is small right now
16:06:34 almost all the work is in nova
16:06:39 and it's fairly small as well
16:06:40 hemna_: Ok... cool, just wanted to point that out
16:06:54 jgriffith: check out the spec to see the changes, not very big at all
16:07:04 awesome... so we'll just wait for legal and hope for something in the next week or so :)
16:07:30 we could give you a demo later this week on the POC
16:07:36 and I could walk you through the code if you like
16:07:48 hemna_: I'd be up for that, but probably not this week
16:07:54 I'd rather get a review up front than wait until we submit
16:07:59 hemna_: maybe we could sync up later and try for next week?
16:08:02 ok that's fine then as well
16:08:03 sure
16:08:08 there may be other folks here interested as well
16:08:33 do we have a mechanism for desktop sharing and such?
16:08:44 hemna_: personally I use Google+
16:08:47 :)
16:08:48 kmartin: Are you in touch with Dietmar from IBM on the FC stuff?
16:09:06 Google+ does desktop sharing? (linux?)
16:09:12 jgriffith: we're meeting with the Brocade group and we'll update them as well; we could probably run it by that group too
16:09:28 just a thought, isn't showing code to external people before legal approval still a possible legal issue?
16:09:39 avishay: yes, he is part of our weekly meeting
16:09:47 kmartin: great
16:10:13 last time when Samsung tried to do that with RedHat guys, RH people said, no, please don't do that before you've done the legal process.
16:10:32 winston-d, only if the osrb denies our project, which they shouldn't
16:10:42 winston-d: we would not post the code, just a demo of what we have working
16:11:24 demo should be ok, but you mentioned a code walk-through. so...
16:11:24 Ok, we can sort through details on a demo and who's interested offline
16:11:30 ok
16:11:33 sure
16:11:39 I'd be interested and I'm sure others would be
16:11:48 I'm interested as well
16:11:52 Not required, but if you guys want to take the time and effort that would be cool
16:11:53 i'd be interested to see a demo as well!
16:11:59 do we have a page for the approximate ship date for Grizzly?
16:12:07 hemna_: Yeah
16:12:13 * jgriffith opening a browser
16:12:31 https://launchpad.net/openstack/+milestones
16:12:33 next April, 17th maybe?
16:12:39 thnx
16:12:43 hemna_: the page bswartz referenced
16:12:52 jgriffith: did you see the agenda for today? :)
16:12:53 hemna_: and also you should all keep an eye on https://launchpad.net/cinder
16:13:15 avishay: :)
16:13:34 that page says april 1 ?
16:13:45 hemna_: say huh?
16:13:58 hemna_: Ohh... Grizzly
16:14:10 hemna_: thought you were talking about avishay and the meeting wiki
16:14:15 oh :P
16:14:15 Ok...
16:14:23 #topic G2
16:14:33 Speaking of Grizzly and release dates
16:14:48 G2 is scheduled for Jan, HOWEVER
16:15:01 as I mentioned before we lose some time for the holidays
16:15:16 and we lose some time due to the code freeze the week of the milestone cut
16:15:19 HP is out for several weeks
16:15:42 I just want to stress again...
We need to have the G2 work that's slated done by the end of this month
16:15:59 https://launchpad.net/cinder/+milestone/grizzly-2
16:16:14 I'm particularly worried about a couple
16:16:19 Volume Backups...
16:16:35 I've not heard anything from Francis?
16:17:10 does anybody know his irc nick?
16:17:19 (Francis Moorehead)?
16:17:25 HP
16:17:34 anyone... bueller, bueller....
16:17:38 no idea
16:17:39 I've just pinged him
16:17:47 ollie1: :) thanks
16:17:56 I can look up his email address at work, if he's at HP
16:17:58 so he's part of the cloud services group I'm assuming?
16:18:12 hemna_: his email is on launchpad
16:18:15 ok
16:18:34 anyway... that's one I'm concerned about and would like some updates
16:18:43 The other is the Island work
16:18:48 If you can't get ahold of him, I can ping him on the internal instant messenger network
16:19:04 Francis is in the HP cloud services group
16:19:15 Hi
16:19:22 frankm: :)
16:19:36 have you had a chance to look at your blueprint for volume backups at all?
16:20:20 we're starting to look at it now
16:20:23 and ollie1 I'm also wondering about your BP as well :}
16:20:29 i.e. this week
16:20:53 frankm: so do I need to remove the target for G2?
16:20:54 The glance metadata blueprint is done, code is merged
16:21:08 ollie1: sorry... wrong line :(
16:21:36 frankm: do you think this is still going to be something you can get done by Christmas?
16:22:48 chirp, chirp, chirp.... seems to be a cricket in my office
16:23:05 :)
16:23:19 alright, I'll harass ollie1 and others offline :)
16:23:26 jgriffith: I have a couple questions that I wrote down in the agenda concerning volume backups - may I, while we're on the topic?
16:23:27 avishay: here we goo....
16:23:39 :) I'm gettin to it
16:23:42 :)
16:23:45 #topic volume backups
16:23:52 maybe not by Christmas, but early in the new year
16:24:02 frankm: hmmmm
16:24:09 frankm: ok, we'll sync up later
16:24:22 avishay: maybe you have a better solution anyway :)
16:24:40 avishay: care to explain a bit on "volume backups pluggable"
16:24:44 i just have questions so far :)
16:24:58 * jgriffith doesn't need more questions :(
16:25:02 just kidding
16:25:21 Sure. Copying to Swift is a great use case, but it seems useful to allow for more back-ends other than Swift
16:25:21 avishay: so if these are questions, here's some answers...
16:25:49 avishay: well yes, but it's not high on my list for a number of reasons
16:25:53 For example, compressing and storing on some file system, backup software, tape, dedup ...
16:26:13 avishay: primarily, if an end-user is backing up a volume they don't want to back it up to other higher-perf and higher-priced storage
16:26:26 the ideal is to swift, which is cheaper/deeper storage
16:27:01 or dedup + tape, or some backup software that will manage all the backups plus store them somewhere cheap
16:27:16 heading off to work...l8rs
16:27:21 jgriffith: i guess tape falls into that category
16:27:34 winston-d: avishay I'm NOT doing a tape driver!
16:27:43 * jgriffith left the tape world and isn't going back
16:27:44 jgriffith: also, higher durability due to multiple copies
16:27:54 jgriffith: but IBM guys may. :)
16:28:09 I'm just saying, there are lots of backup solutions out there, so why limit the solution?
16:28:09 smulcahy: winston-d hemnafk so I don't disagree with the *idea*
16:28:28 avishay: because we're a small team and can only do so much
16:28:44 I think we need to prioritize and move forward
16:28:48 Would making it pluggable and adding back-ends over time be a lot more work?
16:28:55 I don't think there's any argument that we should NOT have backups to swift
16:29:14 avishay: i think if we can have a pluggable framework, it's ok to have the first working version only support (have) a swift plugin.
16:29:27 winston-d: agreed
16:29:28 winston-d: +1
16:29:34 I totally agree that the first version can be swift-only
16:29:45 But it would be great if it was pluggable for later
16:29:52 avishay: I agree with that
16:29:53 how will pluggable work with regard to authentication?
16:30:08 will all pluggable backends be expected to auth with keystone?
16:30:18 avishay: I'm just saying I don't want to jeopardize useful/needed cases for theory and what-ifs
16:30:25 smulcahy: authentication with keystone or backup back-ends?
16:30:58 Maybe I'm not clear on how "pluggable" you guys are talking
16:31:12 if you're talking independent services with their own auth model etc
16:31:18 I say hell nooo
16:31:27 No, I meant something along the lines of volume drivers
16:31:35 if you're talking pluggable modules that's fine
16:31:41 avishay: Ok... phewww
16:31:42 jgriffith: agree.
16:31:51 jgriffith: I'm not crazy... :)
16:32:10 avishay: yeah, I'm fine with that but it's a harder problem than just saying *make it pluggable*
16:32:14 jgriffith: agreed, it will dramatically increase the complexity
16:32:33 I'm not clear on how they will be pluggable if they don't share an auth mechanism
16:32:53 So I'd envision something like a backup manager/layer that can sit between the volume drivers and act as a conduit
16:32:57 smulcahy: they can just share an auth API?
16:32:59 or go to swift
16:33:41 Ok, so I think the answer here is *yes* we should try to keep the design somewhat modular to allow expansion in the future
16:33:45 smulcahy: perhaps the same way various volume drivers do their own auth?
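[Editor's note] The "pluggable, along the lines of volume drivers" idea discussed above can be sketched as an abstract driver interface that a backup manager talks to. All names here (`BackupDriver`, `backup`/`restore`, the in-memory example back-end) are hypothetical illustrations of the design being debated, not the API that eventually landed in Cinder:

```python
from abc import ABC, abstractmethod

class BackupDriver(ABC):
    """Hypothetical pluggable backup back-end, analogous to volume drivers."""

    @abstractmethod
    def backup(self, backup_id, volume_data):
        """Copy volume data to the backup store."""

    @abstractmethod
    def restore(self, backup_id):
        """Return the stored volume data."""

class InMemoryBackupDriver(BackupDriver):
    """Toy back-end standing in for a Swift/tape/filesystem plugin."""

    def __init__(self):
        self._store = {}

    def backup(self, backup_id, volume_data):
        self._store[backup_id] = bytes(volume_data)

    def restore(self, backup_id):
        return self._store[backup_id]

def run_backup(driver: BackupDriver, backup_id, volume_data):
    """Manager-side logic only sees the abstract interface, so swapping in a
    different back-end is a config change, not a code change."""
    driver.backup(backup_id, volume_data)
    return driver.restore(backup_id) == volume_data
```

Because the manager depends only on the interface, a swift-only first version (as agreed above) would not block adding other back-ends later.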
16:34:07 jdurgin1: +1, but we'll need to look at changes to conf files
16:34:17 So I don't want to get carried away on this right now
16:34:20 jdurgin: +1
16:34:34 The bottom line is I'm worried we're not even going to get backups to swift in Grizzly at the rate we're going
16:34:46 I don't have a clear design here - I just know that almost every customer that has data today also has a backup solution, and they may like to use it for OpenStack too
16:34:46 let alone add all this cool back-end-to-back-end stuff to it
16:35:03 If you want to leave it out for now and come back to it later, that's fine
16:35:05 avishay: understood and agreed
16:35:24 avishay: I think it's something to keep in mind with the work being done now
16:35:27 jgriffith: we have working code at the moment, just need to work on porting it to grizzly so we should have something
16:35:37 I think you're right for bringing it up
16:35:48 smulcahy: for which case?
16:35:57 smulcahy: for the backup to swift?
16:36:14 yes, for the backup to swift
16:36:41 smulcahy: are you working with frankm on this?
16:36:49 smulcahy: same work?
16:36:59 yes, same work
16:37:05 Ok.. thanks :)
16:37:22 I'm still getting all the nicks together :)
16:37:37 me too - wasn't sure who frankm was there for a second ;-)
16:37:46 Ok... cool, so frankm smulcahy see what you can do about pluggable design thoughts on this
16:38:00 but don't let it jeopardize getting the code in
16:38:03 IMO
16:38:08 Agreed
16:38:13 Thank you
16:38:16 everybody can hate on me for that if they want :)
16:38:21 that's my initial thought - we can rework the backend part in a future iteration - but will give it some thought
16:38:31 smulcahy: sounds good
16:38:34 jgriffith: whoever wants to hate on you will find reasons :P
16:38:45 #topic backup snapshots rather than volumes
16:38:50 avishay: indeed :)
16:39:05 So here's the problem with snapshots....
16:39:06 nova are talking about compute cells now, which are kinda like zones/az's as far as I can tell - does cinder have any similar concept?
16:39:08 They SUCK
16:39:24 smulcahy: we have AZ's
16:39:41 jgriffith: won't volumes be changing while copying?
16:40:06 avishay: so you can say that to do backups it has to be offline/detached
16:40:11 avishay: it's not ideal
16:40:12 jgriffith: care to elaborate?
16:40:21 quick question re: snapshots - are there any quota limits on them?
16:40:21 bswartz: on snapshots?
16:40:30 jgriffith: on suckage
16:40:39 dtynan: they count against your volume quotas IIRC
16:41:01 dtynan: I'd have to go back and refresh my memory though
16:41:10 bswartz: so... yeah, suckage
16:41:33 The reality is that most of us here are associated with vendors for back-end storage
16:41:44 We all have killer products with specific things we excel at
16:41:46 BUT!!!
16:41:58 the base/reference case for OpenStack is still LVM
16:42:14 so that needs to be a key focus in things that we do
16:42:34 once you create an LVM snapshot you've KILLED your volume performance
16:42:43 it's about 1/8 on average
16:42:54 I've got a patch coming to address this
16:43:08 jgriffith: if you delete the snapshot afterward does performance return?
16:43:14 avishay: yes
16:43:31 avishay: it's a penalty you pay based on how LVM snaps work
16:43:47 so maybe whoever uses LVM can take a snapshot, back it up, and then delete it?
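[Editor's note] The "snapshot, back it up, then delete the snapshot" pattern suggested above keeps the LVM copy-on-write penalty short-lived. A rough sketch of the command sequence follows; the volume-group/volume names and sizes are made-up examples, and the function only assembles the commands rather than executing them:

```python
def lvm_backup_plan(vg, lv, snap_size="1G", backup_target="/backups/vol.img.gz"):
    """Return the shell commands for a snapshot-based LVM backup.

    Taking the snapshot freezes a point-in-time view of the volume;
    removing it right after the copy restores normal write performance.
    """
    snap = f"{lv}-backup-snap"
    return [
        # 1. Point-in-time snapshot (the CoW performance penalty starts here).
        f"lvcreate --snapshot --size {snap_size} --name {snap} /dev/{vg}/{lv}",
        # 2. Stream the frozen view out to the (cheaper) backup store.
        f"dd if=/dev/{vg}/{snap} bs=4M | gzip > {backup_target}",
        # 3. Drop the snapshot so volume performance returns to normal.
        f"lvremove -f /dev/{vg}/{snap}",
    ]
```

A real backup manager would run each step and handle failures; this only illustrates why the penalty window can be limited to the duration of the copy.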
16:44:06 avishay: if they're smart they will :)
16:44:34 avishay: But what I'm saying here is that I don't think we should modify the base code behavior and usage model for something that doesn't work well with LVM
16:44:57 extensions, extra features etc. are fine
16:45:07 jgriffith: so you're not complaining about the snapshot concept, you're complaining about the snapshot implementation in the reference driver
16:45:24 bswartz: Yeah, I think that's fair
16:45:38 bswartz: like I said I have a solution but it's not supported in precise yet
16:45:42 at least not officially
16:45:43 are we generally happy with the snapshot abstraction as it exists today?
16:45:46 If it didn't work at all, that's one thing, but I think this backup idea is cool, and limiting it to offline volumes because LVM snapshot performance sucks might be holding us back, no?
16:46:00 bswartz: haha... that's a whole nother can o'worms
16:46:18 avishay: fair
16:46:34 avishay: but I wasn't finished.... :)
16:46:47 The reality is, snapshots pretty much are "backups"
16:46:48 if changing the abstraction allows us to solve some problems I'd be interested in discussing that
16:46:52 that's really the point IMO
16:47:16 jgriffith: my view of snapshots has always been "things you can clone from"
16:47:54 I think the terminology is pretty important to set straight here - we should be clear going forward on what we mean by snapshots and backups and avoid using them interchangeably I think.
16:48:10 snapshots are backups, but you can't put them on swift, can't attach them (yet?), can't restore (yet), ... frustrating :(
16:48:13 smulcahy: and therein lies the challenge
16:48:26 avishay: I feel your pain
16:48:37 avishay: I plan to have the restore as I've mentioned
16:48:47 avishay: backup to swift is ideal IMO
16:49:00 personally I think snapshots, like bswartz said, are things you can clone from and also things you can create backups from.
16:49:01 avishay: but there are problems with backup
16:49:37 avishay: dtynan bswartz the problem is, depending on how the snapshot is implemented, it's actually nothing useful once it's copied out
16:50:01 yeah, it's a point-in-time reference that you can use to make a backup or a clone...?
16:50:12 if it's just delta blocks it doesn't do you much good on its own
16:50:33 jgriffith: you can always make a full copy, even if on the controller it's CoW or similar
16:50:49 avishay: yes
16:51:12 Ok... so this sort of falls into the same problem/challenge I mentioned earlier
16:51:16 but that's not what snapshots are at the minute, are they?
16:51:23 we have a lot of great ideas/conversation
16:51:30 but the reality is we need to implement the code :)
16:52:04 I would still like to focus a bit
16:52:17 I'd rather get the blueprints that are on the table and go from there:
16:52:22 So what I'm saying is:
16:52:45 1. get backups of volumes to swift (TBH I don't care if it's from snap, volume or both)
16:52:59 2. Get snapshot/restore and clone implemented
16:53:13 I thought https://lists.launchpad.net/openstack/msg03298.html clarified the difference between both reasonably well
16:53:15 Then worry about all these other edge cases like tape backups etc
16:53:33 jgriffith: agreed, that sounds like a workable plan
16:53:59 Sounds good to me
16:54:03 smulcahy: thanks for the link, yes agreed
16:54:19 anybody disagree/object?
16:54:34 So you all have probably noticed a couple of things
16:54:51 1. I prefer to get base implementations in and build on them (start simple and expand)
16:55:14 2. We don't have a TON of submissions in the code (we're light on developers)
16:55:59 make sense?
16:56:11 Agreed
16:56:13 yes
16:56:24 jgriffith: I agree in this case, but in general it's dangerous to implement something without considering how you'll be locked into that implementation forever
16:56:41 bswartz: Yeah, I'm not saying you do it blindly
16:56:41 it's worthwhile to have these discussions
16:56:51 bswartz: I think the api definition is the most critical
16:56:54 bswartz: I'm just saying you don't get stuck in analysis paralysis
16:56:54 Just to clarify - the issues I'm bringing up aren't for going into the code today - just things to keep in mind so we don't have to toss the code later
16:57:07 jgriffith: agree
16:57:08 avishay: good point, and I totally agree with you
16:57:19 can people give feedback on the api's referenced in https://blueprints.launchpad.net/cinder/+spec/volume-backups ?
16:57:23 smulcahy: agreed
16:57:24 bswartz: it's definitely worthwhile.. but
16:57:51 I also want to point out there are a number of bugs and blueprints that need work and are not assigned, or not making progress
16:57:55 that's no good :(
16:58:11 jgriffith: I will see if I can help
16:58:16 You can plan and discuss til your project withers and dies
16:58:42 So that's not a knock or an insult to anybody... I'm just trying to make a point
16:58:52 I'm happy with how Cinder has grown and the participation
16:59:02 I'm also happy with the discussions we have in these weekly meetings
16:59:15 I'm just saying we need to make sure we deliver as well
16:59:45 Ok... surely you've all had enough of me for one day :)
16:59:48 jgriffith: I don't think you need to convince anyone of that :)
17:00:08 avishay: Ok.. cool
17:00:22 So let's knock out these items avishay posted real quick
17:00:28 #topic volume-types
17:00:51 avishay: so you'd like to see some sort of batch create on types?
17:01:26 let's take the example you posted for various options for the solidfire driver - do i need a volume type for every permutation?
17:01:57 avishay: if I remember what you're referencing correctly, yes
17:02:02 i can easily script creating as many as i need, the question is if that's the way it's meant to be used, or if I'm missing something
17:02:03 avishay: i think that really depends on the admin, not the back-end provider
17:02:10 winston-d: +1
17:02:24 winston-d: agreed
17:02:32 avishay: Ahhh
17:02:35 avishay: you can always put those useful combinations into your back-end manual to educate admins on how to fully utilize your back-end
17:02:49 avishay: the exact usage is really going to be dependent on the provider/admin
17:03:06 but yes, if they want/have a bunch of types, they can script it exactly as you describe
17:03:24 so if the back-end supports RAID-5, RAID-6 and also HDD/SSD, that's 4 volume types, right?
17:03:48 avishay: that's the way I would do it
17:03:53 OK cool
17:04:08 avishay: so they're all different types, correct?
17:04:26 I was just thinking if volume types could be used for affinity between volumes (or anti-affinity)...that would require lots of types
17:05:09 avishay: hmmm, so that leads to your next item
17:05:14 avishay: correct?
17:05:34 not really, but I guess I did understand the volume type usage correctly, so we can move on :)
17:05:43 avishay: :)
17:05:48 #topic filter driver
17:06:09 So I think you're right on the money here, types is the first implementation of a filter
17:06:18 there are definitely others we'll want/need
17:06:55 Doh! We're over already
17:07:06 Ok, let's wrap this topic, then I have one more thing to bring up
17:07:17 avishay: do you want to expand on this topic at all?
17:07:27 jgriffith: nevermind, we have two meeting channels now.
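[Editor's note] The "script it exactly as you describe" approach from the volume-types discussion above — one volume type per RAID-level/media permutation — might look like the following. In a real deployment each generated type would be fed to `cinder type-create` and `cinder type-key`; here the function just builds the name-to-extra-specs mapping, and the spec keys (`raid_level`, `media_type`) are made-up examples rather than keys any particular driver defines:

```python
import itertools

def generate_volume_types(raid_levels, media_types):
    """Build one volume type per (RAID level, media) permutation.

    Returns a dict of type name -> extra specs that an admin script
    could pass to `cinder type-create` / `cinder type-key`.
    """
    types = {}
    for raid, media in itertools.product(raid_levels, media_types):
        name = f"{media.lower()}-{raid.lower()}"
        types[name] = {"raid_level": raid, "media_type": media}
    return types
```

For the RAID-5/RAID-6 plus HDD/SSD example mentioned above, this yields exactly the four types discussed.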
:)
17:07:50 jgriffith: No, it's just a thought on future directions
17:07:51 winston-d: Oh that's right :)
17:08:15 avishay: Yeah, that's kinda the point of the filter scheduler
17:08:46 avishay: The way it's designed we'll be able to add "different" filters as time goes on
17:08:53 OK cool
17:08:54 just starting with type filters
17:09:15 I was really talking more about the API between the scheduler and back-end
17:09:16 winston-d: slap me upside the head if I'm telling lies :)
17:09:39 avishay: so you mean calls to get that info?
17:09:52 If there should be one function for getting capabilities, another for getting status info, another for getting per-volume info, etc.
17:09:58 avishay: perf, capacity etc
17:10:00 jgriffith: well, i prefer a capabilities filter, rather than a type filter. :) but we can have a type filter.
17:10:21 winston-d: fair... you can call it whatever you like :)
17:11:04 avishay: Yes, I think those are all things that are needed in the volume api's
17:11:29 avishay: back-end reports capabilities, status (of the back-end, rather than each volume) to the scheduler.
17:11:48 jgriffith: OK, just another future topic to keep in mind :)
17:11:58 the scheduler is also able to request that info
17:12:13 winston-d: I thought per-volume would be useful in the future, but not needed now
17:12:26 avishay: I agree with that
17:12:28 Maybe migrate volumes based on workload, etc. - not in the near future :)
17:12:41 avishay: +1 for migration!!!
17:12:53 jgriffith: working on a design :)
17:12:56 avishay: per-volume status should be taken care of by ceilometer, no?
17:12:58 avishay: I've been thinking/hoping for that in the H release
17:13:18 I will also see if I can get some more time to allocate to existing code work
17:13:46 cool...
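[Editor's note] The capabilities-filter idea sketched in the discussion above — each back-end periodically reports a capabilities/status dict, and the scheduler filters hosts against a request's requirements — can be illustrated roughly as follows. The function names and dict keys are hypothetical, not the actual filter scheduler API:

```python
def host_passes(backend_capabilities, requested_specs):
    """Hypothetical capabilities filter: keep a back-end only if it
    advertises every capability the volume type's extra specs require."""
    return all(
        backend_capabilities.get(key) == value
        for key, value in requested_specs.items()
    )

def filter_backends(backends, requested_specs):
    """Scheduler-side pass: `backends` maps host -> the capabilities
    dict that back-end last reported to the scheduler."""
    return [
        host for host, caps in backends.items()
        if host_passes(caps, requested_specs)
    ]
```

New filters (capacity, affinity, per-volume load) would slot in alongside `host_passes` as time goes on, which matches the "add different filters later" design mentioned above.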
speaking of which
17:13:52 #topic bp's and bugs
17:13:59 one last item
17:14:33 I really need help with folks to keep up on reviews
17:15:20 all I'm asking is that maybe once a day go to:
17:15:22 https://review.openstack.org/#/q/status:open+cinder,n,z
17:15:42 jgriffith: i would make sure i spend time on that from now on
17:15:42 just pick one even :)
17:15:49 rushiagr1: cool
17:16:04 rushiagr1: speaking of which, have you been watching the bug reports?
17:16:46 jgriffith: not much in the last week, but yes..
17:17:40 https://bugs.launchpad.net/cinder/+bugs?field.status=NEW&field.importance=UNDECIDED
17:18:26 thingee: thanks... I got kicked off my vpn
17:18:50 So that's another one for folks to check frequently
17:19:14 also notice here: https://launchpad.net/cinder
17:19:25 There's a recent activity feed for questions, bugs etc
17:19:49 anybody that wants to help me out just drop in there once in a while and see what they can do
17:20:05 alright... I'm off my soapbox for the week
17:20:14 #topic open discussion
17:20:21 thingee: thanks for the link
17:20:32 jgriffith: as a starter, i many a times require a little help to start with a bugfix or a code review, but unfortunately for me, i find very few people available in work hours for my timezone
17:20:57 rushiagr1: understood
17:21:08 I need to go - bye everyone. Thanks for all the time with my questions!
17:21:09 jgriffith: one item, is it okay if we exempt the NetApp drivers from being split into multiple .py files in the drivers directory?
17:21:16 rushiagr1: so *most* of the time there are a few of us on #openstack-cinder
17:21:27 * rushiagr1 thinks its time to change my sleep schedule :)
17:21:33 I haven't been around at night as much lately, but will be again
17:21:38 also winston-d is there
17:21:43 * winston-d already changed a lot
17:21:43 and thingee never sleeps!
17:21:49 winston-d: india +5:30
17:22:09 bswartz: You mean revert the changes already made?
17:22:11 rushiagr1: I'm on throughout the day PST and the only time I'm able to work on stuff is at night here, so I'm usually on all day O_O
17:22:14 errr
17:22:24 winston-d: ah, i'm in china, that's GMT+8, should overlap a lot
17:22:42 I didn't think the netapp drivers had been split as of yet
17:22:55 bswartz: nope, so you don't have to worry
17:23:12 bswartz: I don't think anybody has any plans to do more with that at this time
17:23:15 okay, I'd like to maintain the status quo
17:23:19 that's cool, thank you
17:23:21 thingee: winston-d i usually find almost no activity during my office hours on the cinder channel, so assumed everyone there was inactive... shouldn't have assumed
17:23:31 bswartz: if it comes up we'll try to remember and you can -1 the review :)
17:24:00 rushiagr1: ah yeah, just ping us. I'm lurking most of the time and just talking when I need input
17:24:06 rushiagr1: you can just ask questions, i'll try to answer if i'm in.
17:24:24 jgriffith: haha
17:24:45 thingee: winston-d thanks, will surely bother you starting tomorrow :)
17:25:03 rushiagr1: sure, happy to help
17:25:11 Ok... cool, anything else from folks?
17:25:33 rushiagr1: I recommend at the very least, pick something up, drop a question in the channel, and worst case you get an answer the next day to proceed. email is acceptable too
17:25:47 thingee: rushiagr1 good point
17:26:01 rushiagr1: I log/highlight anything with my name even when I'm not online
17:26:11 then get back to folks when I arrive
17:26:12 ditto
17:26:20 jgriffith: thingee agree, will take note of it
17:26:26 * jgriffith is a big fan of leaving irc up and running 24/7
17:27:17 * bswartz is too, when internet cooperates
17:27:24 alrighty... thanks everyone
17:27:35 #endmeeting