16:02:00 <jgriffit1> #startmeeting cinder
16:02:00 <openstack> Meeting started Wed Jan  9 16:02:00 2013 UTC.  The chair is jgriffit1. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:04 <openstack> The meeting name has been set to 'cinder'
16:02:22 <jgriffit1> morning everyone
16:02:34 <bswartz> good morning!
16:02:36 <avishay> good evening :)
16:02:45 <jgriffit1> avishay: :)
16:02:48 <xyang_> morning
16:02:51 <KurtMartin> good morning
16:02:53 <jgriffit1> first topic....
16:03:01 <jgriffit1> #topic G2
16:03:16 <jgriffit1> First off thanks everyone for all the hard work to get G2 out
16:03:38 <jgriffit1> All targeted items made it except my metadata patch :(
16:04:12 <jgriffit1> There's something funky in the Nose runner that we can't figure out, and rather than keep beating our heads against the wall we're just moving forward
16:04:26 <jgriffit1> congratulations to winston-d !!!
16:04:37 <jgriffit1> We now have a filter scheduler in Cinder!!!
16:04:47 <winston-d> :)
16:04:49 <avishay> woohoo!
16:04:51 <bswartz> yay
16:04:52 <xyang_> wonderful
16:04:55 <avishay> winston-d: congrats - great job!
16:04:56 <DuncanT> Woo!
16:04:57 <thingee> winston-d: yes congrats
16:05:05 <KurtMartin> nice we can use that
16:05:06 <rushiagr> that's one big achievement! great!
16:05:37 <jgriffit1> So just a recap of Grizzly so far, we now have a V2 API (thanks to thingee) and we now have the filter scheduler
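(For context on what the filter scheduler does: each backend reports its capabilities and free capacity, filters drop the backends that cannot satisfy a request, and a weigher picks the best of what remains. Below is a minimal sketch of that idea; the class and field names are invented for illustration and are not the actual Cinder scheduler interfaces.)

# Simplified sketch of filter-scheduler placement: filter out backends that
# cannot satisfy the request, then weigh the survivors. Names here are
# illustrative only.

class HostState(object):
    def __init__(self, host, free_capacity_gb, volume_backend_name):
        self.host = host
        self.free_capacity_gb = free_capacity_gb
        self.volume_backend_name = volume_backend_name


def capacity_filter(host, request):
    # Drop hosts that cannot hold the requested volume size.
    return host.free_capacity_gb >= request['size']


def backend_filter(host, request):
    # Honour a volume-type extra spec asking for a specific backend, if any.
    wanted = request.get('volume_backend_name')
    return wanted is None or host.volume_backend_name == wanted


def schedule(hosts, request):
    candidates = [h for h in hosts
                  if capacity_filter(h, request) and backend_filter(h, request)]
    if not candidates:
        raise RuntimeError('No valid host found')
    # Weigh the survivors; here we simply prefer the most free capacity.
    return max(candidates, key=lambda h: h.free_capacity_gb)


if __name__ == '__main__':
    hosts = [HostState('cinder-vol-1', 500, 'lvm'),
             HostState('cinder-vol-2', 50, 'netapp')]
    print(schedule(hosts, {'size': 100}).host)  # -> cinder-vol-1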
16:05:43 <winston-d> thx guys, couldn't have done that without your support
16:06:02 <jgriffit1> On top of that we've added (I lost track of how many) new drivers, with more to come
16:06:34 <jgriffit1> anyway... Grizzly is going great so far thanks to everyone!
16:06:54 <jgriffit1> I don't really have anything else to say on G2... now it's on to G3
16:07:32 <jgriffit1> anybody have anything they want to hit on G2 wrap up ?
16:08:20 <jgriffit1> Ok
16:08:39 <jgriffit1> bswartz, wanna share your customer feedback regarding Openstack drivers?
16:08:46 <bswartz> sure
16:08:51 <jgriffit1> #topic feedback from bswartz
16:09:03 <bswartz> so many of you may have noticed that netapp has submitted a bunch more driver changes
16:10:01 <bswartz> we started our original driver design in the Diablo timeframe, and our vision was a single instance of cinder (actually nova-volume) talking to NetApp management software which managed hundreds of storage controllers
16:10:40 <bswartz> since NetApp had already sunk hundreds of man years into management software it seemed dumb not to take advantage of it
16:11:10 <bswartz> but the feedback we've been getting is that customers don't like middleware and they don't like a single instance of cinder
16:11:20 <bswartz> this probably won't surprise many of you
16:11:46 <bswartz> since (nearly?) all of the existing drivers manage a single storage controller per instance of cinder
16:12:05 <creiht> bswartz: but that's what we did for lunr
16:12:13 <guitarzan> creiht: beat me to it
16:12:19 <winston-d> for what reason don't they like a single instance of cinder? HA?
16:12:28 <creiht> single HA instance of cinder talking to our lunr backend
16:12:35 <creiht> guitarzan: sorry to steal your thunder :)
16:12:46 <bswartz> HA is one reason
16:13:01 <bswartz> scalability is another
16:13:22 <creiht> Those really should be orthogonal
16:13:26 <bswartz> a single instance of cinder will always have limits
16:13:31 <jgriffit1> hmm... I've always struggled with this, especially since the cinder node is really just a proxy anyway
16:13:56 <jgriffit1> and we don't have any HA across nodes *yet* :)
16:13:59 <jgriffit1> anyway...
16:14:10 <bswartz> the limits may be high, but it's still desirable to be able to overcome those limits with more hardware
16:14:13 <guitarzan> there's a big difference between single instance of cinder and cinder running on every storage node
16:14:33 <creiht> bswartz: you can get ha with cinder by running however many cinder api nodes you want, all talking to your backend
16:14:40 <bswartz> well, not HA so much as "no single point of failure"
16:15:00 <winston-d> yeah, agree with john. are there any numbers for where the limits are?
16:15:15 <bswartz> if you have a single cinder instance and it goes up in smoke, then you're dead -- multiple instances addresses that
16:15:32 <DuncanT> Facts and customers' views are not always related ;-)
16:15:39 <jgriffit1> bswartz: yep, we need a mirrored cinder / db option :)
16:15:44 <bswartz> DuncanT: agree
16:15:46 <jgriffit1> DuncanT: amen brother
16:15:58 <jgriffit1> bswartz: so this is good info though
16:16:17 <bswartz> so anyway, we're getting on the bandwagon of one cinder instance per storage controller
16:16:22 <creiht> bswartz: but that's the point: if you have a driver then you can run load-balanced cinder instances that talk to the same backend
16:16:28 <creiht> that's what we do with lunr
16:16:30 <bswartz> and the new scheduler in grizzly will make that option a lot cooler
16:16:47 <DuncanT> That's also what we do
16:16:50 <bswartz> creiht: we are also pursuing that approach
16:17:19 <bswartz> creiht: however the new drivers that talk directly to the hardware are lower-hanging fruit
16:17:26 <DuncanT> I can't find our
16:17:31 <DuncanT> Sorry, ignore that
16:17:43 <bswartz> there are other reasons customers take issue with our management software -- and we're working on addressing those
16:18:22 <bswartz> anyways, I just wanted to give some background on what's going on with our drivers, and spur discussion
16:18:26 <jgriffit1> bswartz: so bottom line, most of the changes that are in the queue are to address which aspect?
16:18:29 <bswartz> I didn't understand the comments about lunr
16:18:53 <creiht> bswartz: so lunr is a storage system we developed at rackspace that has its own api front end
16:19:03 <bswartz> jgriffit1: the submitted changes add new driver classes that allow cinder to talk directly with our hardware with no middleware installed
16:19:38 <creiht> cinder sits in front, and our driver passes the calls on to the lunr apis
16:19:39 <bswartz> jgriffit1: our existing drivers require management software to be installed to work at all
16:19:41 <jgriffit1> bswartz: ahhh... got ya
16:20:28 <bswartz> creiht: how do you handle elimination of single points of failure and scaling limitations?
16:20:42 <creiht> traditional methods
16:20:55 * creiht looks for the diagram
16:21:29 <DuncanT> We solve SPoF via HA database, HA rabbit and multiple instances of cinder-api & cinder-volume
16:21:43 <guitarzan> we do the same, except we aren't using rabbit at all
16:21:49 <DuncanT> (All talking to a backend via apis in a similar manner to lunr)
16:21:50 <jgriffit1> so the good thing is I don't think bswartz is necessarily disagreeing with creiht or anybody else on how to achieve this
16:22:02 <bswartz> DuncanT: so multiple drivers [can] talk to the same hardware?
16:22:07 <guitarzan> absolutely
16:22:13 <DuncanT> bswartz: yup
16:22:19 <creiht> bswartz: http://devops.rackspace.com/cbs-api.html#.UO2ZJeAVUSg
16:22:30 <creiht> that has a diagram
16:22:40 * jgriffit1 shuts up now as it seems he may be wrong
16:22:59 <creiht> where the volume api box is basically several instances of cinder each with the lunr driver that talks to the lunr api
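(To illustrate the pattern creiht and DuncanT describe, here is a rough sketch of a driver that simply forwards volume operations to an external storage API, so any number of identical cinder-volume instances can front the same backend. The endpoint, payloads, and class name are made up for the example; the real Lunr driver and API differ.)

# Sketch of a thin, stateless cinder-style driver that proxies volume
# operations to an external storage service over HTTP. URL and payload
# shapes are hypothetical.
import requests


class ProxyVolumeDriver(object):
    def __init__(self, api_url='http://storage-backend.example.com/v1'):
        self.api_url = api_url

    def create_volume(self, volume):
        # Hand the request to the backend service; it owns placement and
        # failure handling, so multiple cinder-volume instances can run this
        # driver concurrently against the same service.
        resp = requests.put('%s/volumes/%s' % (self.api_url, volume['id']),
                            json={'size': volume['size']})
        resp.raise_for_status()
        return {'provider_location': resp.json().get('target')}

    def delete_volume(self, volume):
        resp = requests.delete('%s/volumes/%s' % (self.api_url, volume['id']))
        if resp.status_code not in (200, 204, 404):
            resp.raise_for_status()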
16:23:09 <bswartz> creiht: thanks
16:23:36 <bswartz> jgriffit1: no I don't think there is any disagreement, just a lot of different ideas for solving these problems
16:23:46 <jgriffit1> :)
16:23:46 <DuncanT> There are a couple of places (snapshots for one) where cinder annoyingly insists on only talking to one specific cinder-volume instance, but they are few and fixable
16:24:04 <jgriffit1> DuncanT: avishay is working on it :)
16:24:16 <DuncanT> jgriffit1: Yup
16:24:23 <creiht> well and our driver also isn't a traditional driver
16:24:31 <avishay> jgriffit1: I am? :/
16:24:38 <bswartz> lol
16:24:38 <jgriffit1> avishay: :)
16:24:52 <jgriffit1> avishay: I didn't tell you yet?
16:25:08 <avishay> jgriffit1: ...what am I missing?
16:25:15 <DuncanT> lol
16:25:19 <winston-d> lol
16:25:27 <DuncanT> It'll get done anyway... several people interested
16:25:32 <jgriffit1> avishay: so your LVM work and the stuff we talked about last night regarding clones etc will be usable for this
16:25:35 <jgriffit1> anyway..
16:25:57 <jgriffit1> yeah... sorry to derail
16:25:59 <avishay> jgriffit1: yes, it's a start, but not tackling the whole issue :)
16:26:25 <avishay> mutiny?
16:26:35 <jgriffith> haha
16:26:48 <avishay> jgriffith: sorry, thought somebody offed you ;)
16:26:55 <jgriffith> Ok... so bswartz basically your changes are to behave more like the *rest of us* :)
16:27:00 <jgriffith> bswartz: You have been assimilated :)
16:27:09 <jgriffith> bswartz: just kidding
16:27:12 <bswartz> jgriffith: yes, it's been a learning process for us
16:27:20 <jgriffith> but in a nutshell, these changes cut the middleware out
16:27:25 <creiht> joined the darkside
16:27:26 <creiht> :)
16:27:34 <jgriffith> Ok... awesome
16:27:38 <creiht> or maybe we are the darkside :)
16:27:39 <bswartz> we're not giving up on our loftier ideas, but in the meantime we're conforming
16:27:45 <jgriffith> creiht: hehe
16:27:52 * jgriffith cries
16:28:00 <guitarzan> which ideas are the lofty ones? I'm curious what seems more ideal to folks
16:28:01 <jgriffith> bswartz: make you a deal, pick one or the other :)
16:28:06 <jgriffith> guitarzan: NFS
16:28:14 <jgriffith> CIFS to be more specific in Cinder
16:28:28 <jgriffith> bswartz: can provide more detail
16:28:37 <guitarzan> I mean in regards to this HA cinder to external backend question
16:29:00 <bswartz> jgriffith: I'm not sure what you're asking
16:29:14 <bswartz> the NAS extensions are completely separate from this driver discussion
16:29:38 <jgriffith> I assumed that's what you meant by "loftier" goals
16:29:49 <jgriffith> so what "lofty" goal are you talking about then?
16:30:00 <jgriffith> Please share now rather than later with a 5K line patch :)
16:30:07 <bswartz> no, loftier means that we're leaving the original drivers in there and we have plans to enhance those so customers hate them less
16:30:38 * jgriffith is now really confused
16:30:41 <bswartz> so the netapp.py and netapp_nfs.py files are getting large
16:31:17 <bswartz> jgriffith: we talked about reworking the NAS enhancements so that the code would be in cinder, but would run as a separate service
16:31:25 <bswartz> that rework is being done
16:31:26 <avishay> jgriffith: the new patch doesn't replace the old drivers that access the middleware, it just adds the option of direct access. the lofty goal is to improve the middleware so that customers won't hate it. bswartz - right?
16:31:45 <jgriffith> Ok.. I got it
16:31:49 <bswartz> avishay: yes
16:31:50 <jgriffith> sorry
16:32:03 <jgriffith> Why do you need both?
16:32:15 <bswartz> addresses 2 different customer requirements
16:32:25 <jgriffith> really?
16:32:31 <bswartz> one is for blocks, the other is for CIFS/NFS storage
16:32:51 <avishay> the direct access vs. the middleware access?
16:32:52 <bswartz> we have lots of different drivers for supporting blocks
16:32:53 <jgriffith> alright, I'm out
16:33:00 <jgriffith> avishay: yes :)
16:33:10 <bswartz> sorry this has gotten confusing and out of hand
16:33:17 <DuncanT> Yup
16:33:22 <jgriffith> LOL.. yes, and unfortunately it's likely my fault
16:33:38 * bswartz remembers not to volunteer to speak at these things
16:33:40 <avishay> bswartz: the question is why you need two options (direct access vs. middleware access) - not related to the NFS driver
16:33:53 <jgriffith> avishay: thank you!
16:33:59 <jgriffith> avishay: from now on you just speak for me please
16:34:07 <avishay> jgriffith: done.
16:34:14 <jgriffith> :)
16:34:15 <avishay> ;)
16:34:21 <xyang_> I agree we should give customers more options, with direct and middleware access
16:34:31 <bswartz> avishay: regarding our blocks drivers, we're leaving the old ones in, and we're adding the direct drivers
16:34:37 <jgriffith> I disagree, but that's your business I suppose
16:34:41 <winston-d> xyang_: do you plan to do similar thing for EMC driver?
16:34:56 <bswartz> avishay: long term we will deprecate one or the other, depending on which works better in practice
16:35:04 <jgriffith> cool
16:35:25 <avishay> OK.  I guess all this doesn't affect the "rest of us" anyway.
16:35:49 <bswartz> avishay: the thing that affects the rest of you is the NAS enhancements, and jgriffith made his opinions clear on that topic
16:36:00 <DuncanT> Other than monster reviews landing
16:36:07 <jgriffith> DuncanT: +1
16:36:10 <bswartz> avishay: so our agreement is to resubmit those changes as a separate service inside cinder, to minimize impact on existing code
16:36:39 <avishay> bswartz: yes i know, that's fine
16:36:47 <bswartz> avishay: the changes will be large, but the overlap with existing code will be small
16:36:54 <bswartz> that is targeted for G3
16:37:26 <DuncanT> Ah ha, that makes sense
16:37:31 <bswartz> you've already seen the essence of those changes with our previous submission, the difference for G3 is that we're refactoring it
16:38:16 <bswartz> okay I'm done
16:38:22 <bswartz> sorry to take up half the meeting
16:38:37 <jgriffith> bswartz: no problem
16:38:42 <DuncanT> Things are now reasonably clear, thanks for that
16:38:48 <winston-d> bswartz: thx for sharing.
16:39:18 <avishay> yup, thank you
16:39:28 <winston-d> i've decided to add a stress test for a single cinder-volume instance to see what the limit is in our lab. :)
16:40:02 <jgriffith> bswartz: yeah, appreciate the explanation
16:40:20 <jgriffith> winston-d: cool
16:42:12 <jgriffith> alright, anybody else have anything?
16:42:28 <xyang_> How is FC development going?
16:42:46 <xyang_> Will it be submitted soon?
16:43:16 <KurtMartin> xyang_, the plan is to get the nova-side changes submitted next week for review
16:43:40 <DuncanT> Volume backup is stuck in corporate legal hell, submission any day now[tm]
16:44:17 <KurtMartin> we resolved the one issue we had with detach and have HP's blessing :)
16:45:33 <winston-d> creiht: where's clayg?  haven't seen him for very long time
16:46:18 <winston-d> clayg was also doing something on volume backup, if my memory serves me right
16:48:31 <jgriffith> :)
16:48:55 <jgriffith> ok, DuncanT beat up lawyers
16:49:22 <DuncanT> I wonder if 'My PTL made me do it!' will stand up in court?
16:49:32 <jgriffith> I'm going to try and figure out why tempest is randomly failing to delete volumes in its testing
16:49:48 <jgriffith> anybody else looking for something to do today? that would be a great thing to work on :)
16:49:55 <jgriffith> Sure.. why not!
16:49:59 <creiht> winston-d: he has abandoned us :(
16:50:21 <creiht> winston-d: he went to work for swiftstack
16:50:26 <creiht> on swift stuff
16:50:37 <winston-d> creiht: oh, ouch.
16:51:29 <creiht> winston-d: sorry, I was being a little silly when I said abandoned :)
16:51:55 <creiht> and I don't have much room to talk, as I'm back working on swift stuff as well
16:52:17 <avishay> Things we're working on for Grizzly are generic iSCSI copy volume<->image (the LVM refactoring is part of that), and driver updates of course
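(The generic copy avishay mentions essentially comes down to attaching the volume on the cinder node and streaming bytes between the block device and the image. A rough sketch of that idea follows, with placeholder paths; this is not the actual Cinder implementation.)

# Rough sketch of the generic volume<->image copy idea: once the driver has
# attached the volume locally (e.g. over iSCSI), the copy itself is just a
# block-level stream. Paths are placeholders.
import shutil


def copy_volume_to_image(device_path, image_path):
    # Stream the raw contents of the attached block device into an image file.
    with open(device_path, 'rb') as src, open(image_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)


def copy_image_to_volume(image_path, device_path):
    # The reverse direction: write the image bytes onto the block device.
    with open(image_path, 'rb') as src, open(device_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)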
16:52:34 <winston-d> creiht: :) so who's the new guy at rackspace for cinder/lunr now?
16:52:40 <creiht> guitarzan:
16:52:45 <creiht> winston-d: -^
16:52:59 <winston-d> k. good to know.
16:53:57 <guitarzan> winston-d: creiht has also abandoned us
16:54:13 <creiht> well, I haven't left the channel yet :)
16:54:22 <jgriffith> :)
16:54:50 <winston-d> i thought block storage is more challenging than object storage. :) i still think so.
16:55:00 <creiht> different challenges
16:55:05 <jgriffith> Ok folks... kind of an all over meeting today, sorry for that
16:55:13 <jgriffith> but we're about out of time...  anything pressing?
16:55:19 <creiht> jgriffith: doh... sorry :(
16:55:24 <jgriffith> we can all still chat in openstack-cinder :)
16:55:34 <jgriffith> creiht: No... I feel bad cutting short
16:55:50 <creiht> I didn't realize I was in the meeting channel :)
16:55:57 <jgriffith> creiht: I've been doing 4 things at once and I have been trying to be polite for john and the xen meeting that follows
16:56:04 <jgriffith> creiht: HA!!!
16:56:14 <jgriffith> creiht: Awesome!
16:56:18 <winston-d> jgriffith: xyang_ DuncanT bswartz please remember to update your drivers to provide capabilities/status for the scheduler
16:56:19 * creiht is in too many channels
16:56:30 <jgriffith> winston-d: yeppers
16:56:33 <DuncanT> winston-d: Yup
16:56:40 <bswartz> winston-d: got it
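(For driver authors acting on winston-d's reminder, here is a hedged sketch of the sort of stats/capabilities dict a volume driver reports for the filter scheduler. The key names follow the convention used by Grizzly-era drivers as far as I can tell; treat the exact set as an assumption and check it against the scheduler code.)

# Sketch of the periodic stats report a volume driver exposes so the filter
# scheduler can place volumes. Verify the exact keys against the Cinder
# scheduler; the values here are placeholders.
class ExampleVolumeDriver(object):
    def get_volume_stats(self, refresh=False):
        if refresh or not getattr(self, '_stats', None):
            self._stats = {
                'volume_backend_name': 'example_backend',
                'vendor_name': 'ExampleVendor',
                'driver_version': '1.0',
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 1024,
                'free_capacity_gb': 512,
                'reserved_percentage': 0,
                'QoS_support': False,
            }
        return self._stats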
16:56:47 <jgriffith> speaking of which... please review my driver patches, and any other patches in the queue
16:56:58 <jgriffith> catch ya'll later
16:57:02 <jgriffith> #endmeeting
16:57:07 <avishay> bye all!
16:57:10 <xyang_> bye
16:57:12 <jgriffith> # end meeting
16:57:16 <jgriffith> grrrr
16:57:20 <jgriffith> #end meeting
16:58:27 <jgriffith> hrmm???
16:58:27 <rushiagr> jgriffith: you started the meeting with the nick jgriffit1. Is that the reason for this glitch?
16:58:42 <jgriffith1> #endmeeting
16:58:45 <winston-d> that's quick
16:58:45 <avishay> without the h
16:58:52 <rushiagr> nope, it was without a 'h'
16:59:05 <jgriffit1> #endmeeting