16:04:04 <jgriffith> #startmeeting cinder
16:04:05 <openstack> Meeting started Wed Jun 19 16:04:04 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:08 <openstack> The meeting name has been set to 'cinder'
16:04:16 <seiflotfy_> ok cool
16:04:17 <jgriffith> Happy Wednesday everyone!
16:04:53 <jgriffith> Let's roll... I wanna save some time for zhiyan
16:05:09 <seiflotfy_> ok cool
16:05:11 <jgriffith> #topic refactoring SwiftBackupService methods
16:05:23 <seiflotfy_> ok this is my item
16:05:26 <jgriffith> :)
16:05:45 <seiflotfy_> basically i looked into refactoring the backup method
16:05:45 <zhiyan> thanks ~
16:05:57 <seiflotfy_> there is no api change for this method
16:06:11 <seiflotfy_> but instead of doing everything in one big call
16:06:29 <jgriffith> seiflotfy_: so that works?
16:06:33 <seiflotfy_> backup now looks as follows: http://fpaste.org/19661/65649113/
16:06:35 <seiflotfy_> yes it works
16:06:45 <seiflotfy_> it did not break the tests
16:06:54 <seiflotfy_> i need to clean up the patch though
16:06:55 <jgriffith> ollie1: around?
16:07:15 <seiflotfy_> and based on that i implemented the next item (rbddriver) which also works without having to map the ceph volumes or export them
16:07:19 <ollie1> yep
16:07:31 <jgriffith> I'd be curious to get input from the HP folks that initially wrote this code
16:07:46 <jgriffith> seiflotfy_: if it's functionally equivalent and works I say submit it
16:08:04 <seiflotfy_> jgriffith: basically i did not change the code at all but just moved pieces of it around
16:08:07 <dosaboy> seiflotfy_: you do realise I am working on the rbd side of things as part of A PUT without content length needs to be chunked. If the missing
16:08:10 <seiflotfy_> into new functions
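A minimal sketch of the shape of the refactor being described here, assuming hypothetical helper names (the actual split is in the fpaste link above):

```python
# Hypothetical shape of the refactor: the monolithic backup() body is
# split into private helpers with no change to the public API. The
# helper names below are illustrative, not taken from the real patch.
class SwiftBackupService(object):
    def backup(self, backup, volume_file):
        """Public entry point; signature and behavior are unchanged."""
        container = self._create_container(backup)
        prefix = self._object_name_prefix(backup)
        self._backup_chunks(backup, volume_file, container, prefix)
        return self._write_metadata(backup, container, prefix)

    def _create_container(self, backup):
        pass  # code moved verbatim out of the old backup()

    def _object_name_prefix(self, backup):
        pass  # ditto

    def _backup_chunks(self, backup, volume_file, container, prefix):
        pass  # ditto: read volume_file in chunks and PUT each to Swift

    def _write_metadata(self, backup, container, prefix):
        pass  # ditto: store the backup metadata object last
</pre>
```

Splitting like this is what lets the rbd driver in the second patch reuse the read/write loop without mapping or exporting the ceph volumes.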
16:08:16 <DuncanT-meeting> I looked at it briefly...  Looks good
16:08:17 <dosaboy> whoops
16:08:26 <ollie1> jgriffith: I think smul has been talking to seiflotfy about the backup code
16:08:33 <dosaboy> https://blueprints.launchpad.net/cinder/+spec/cinder-backup-to-ceph
16:08:35 <ollie1> that what you're referring to?
16:08:37 <jgriffith> ollie1: DuncanT-meeting seiflotfy_ awesome!
16:08:47 <jgriffith> seiflotfy_: I say finish it up and submit the patch
16:08:55 <jgriffith> at first glance it looks pretty good to me
16:08:56 <seiflotfy_> it's 2 patches
16:08:59 <seiflotfy_> one of them for the refactor
16:09:06 <seiflotfy_> and based on it, another for the rbddriver
16:09:07 <jgriffith> seiflotfy_: fair enough
16:09:27 <jgriffith> seiflotfy_: have you looked at Ceph ObjectStore as a target?
16:09:32 <seiflotfy_> dosaboy: i did not understand what you were telling me
16:09:52 <seiflotfy_> jgriffith: i am going from ceph 2 ceph
16:09:55 <dosaboy> so I intend to implement the backup to Ceph object store stuff as part of https://blueprints.launchpad.net/cinder/+spec/cinder-backup-to-ceph
16:09:58 <mkoderer> Ceph ObjectStore is working as target
16:10:00 <dosaboy> which I am working on right now
16:10:01 <seiflotfy_> i based my work on top of the work of mkoderer
16:10:02 <jgriffith> seiflotfy_: haha... I think I was asking the same thing dosaboy was pointing to
16:10:26 <mkoderer> but this patch must be applied https://review.openstack.org/#/c/33639/
16:10:27 <mkoderer> ;)
16:10:31 <dosaboy> just want to be sure we don't overlap/conflict ;)
16:10:36 <seiflotfy_> so yes for me here it backs up to ceph based on https://review.openstack.org/#/c/33639/
16:11:08 <jgriffith> cool!
16:11:18 * jgriffith admits he's been behind on reviews
16:11:24 <dosaboy> ah you are talking about Ceph + RGW
16:11:35 <seiflotfy_> dosaboy: yes sir
16:11:56 <seiflotfy_> basically i am reading in and saving the stuff just like lvm did so it stays generic
16:11:57 <dosaboy> I am talking about backup to Ceph object without RGW ;)
16:11:59 <seiflotfy_> thus i needed to refactor
16:12:05 <seiflotfy_> dosaboy: that would be great
16:12:11 <seiflotfy_> we did not touch that at all
16:12:20 <dosaboy> gotcha
16:12:27 <mkoderer> no, we just used radosgw
16:12:31 <mkoderer> so based on swift api
16:12:35 <seiflotfy_> yes
16:12:44 <dosaboy> just quickly, I pinged the Ceph guys re your PUT issue
16:12:52 <seiflotfy_> and?
16:12:59 <dosaboy> I will paste their response in the mailing list
16:13:12 <dosaboy> they say that the existing backup service is not using the api correctly
16:13:23 <dosaboy> if content-length is not specified,
16:13:33 <dosaboy> you need to set 'chunked' in metadata
16:13:38 <dosaboy> I will post in a bit
16:13:44 <seiflotfy_> dosaboy: great
16:13:47 <mkoderer> ok, interesting
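For reference, a minimal sketch of the PUT behavior being discussed, using python-swiftclient; the endpoint, credentials, container, and object names are all placeholders:

```python
# Sketch: uploading without a Content-Length. When content_length is
# omitted and a file-like object is passed, python-swiftclient falls
# back to Transfer-Encoding: chunked, reading chunk_size bytes at a
# time. All connection parameters below are placeholders.
import swiftclient

conn = swiftclient.client.Connection(
    authurl='http://keystone.example.com:5000/v2.0',
    user='tenant:backup', key='secret',
    auth_version='2')

with open('/dev/vg0/volume-0001', 'rb') as volume_file:
    conn.put_object('volumebackups', 'volume-0001/chunk-00001',
                    contents=volume_file, chunk_size=65536)
```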
16:14:16 <seiflotfy_> dosaboy: do you mind a refactor of the backup function though?
16:14:29 <dosaboy> should be fine
16:14:35 <seiflotfy_> dosaboy: great
16:14:42 <dosaboy> I intend to use it, as much as possible, as it is
16:14:47 <dosaboy> so go ahead
16:14:51 <seiflotfy_> ok cool
16:15:02 <seiflotfy_> will post a patch then
16:15:21 <seiflotfy_> any questions regarding this issue or should we move on
16:15:22 <seiflotfy_> ?
16:15:36 <mkoderer> move on.. ;)
16:15:38 <rushiagr> oops, late, Hi all!
16:15:54 <jgriffith> awesome... thanks guys
16:16:19 <avishay> rushiagr: yo
16:16:40 <zhiyan> next item for 'brick' status update?
16:16:44 <jgriffith> seiflotfy_: so that seems to take care of items 1 and 2 on the agenda no?
16:16:50 <seiflotfy_> yes
16:16:57 <seiflotfy_> will post both patches first thing in the morning then
16:16:59 <seiflotfy_> :D
16:17:05 <jgriffith> Ok
16:17:15 <jgriffith> #topic brick status update
16:17:24 <jgriffith> zhiyan: you're up
16:17:33 <zhiyan> hemna, you there?
16:17:36 <jgriffith> I think hemnafk is not here yet
16:17:43 <kmartin> hemna is on his way to work
16:18:01 <kmartin> might want to bring this up at the end
16:18:08 <zhiyan> ok, i have reviewed the https://review.openstack.org/#/c/32650/
16:18:12 <jgriffith> zhiyan: his changes are coming along nicely and I think they'll land this week
16:18:15 <zhiyan> and left some comments there
16:18:15 <kmartin> of the meeting, he should be here by then
16:18:29 <avishay> looks like his patch is almost ready, but xyang_ found an issue in testing
16:19:11 <jgriffith> unfortunately I think all of this is why we ended up with BDM tables :(
16:19:13 <zhiyan> yes, seems hooking host_driver in brick has some issues..
16:19:37 <xyang_> avishay: I didn't test this particular patch.  My concern is based on my observations of /dev/disk/by-path
16:19:48 <avishay> jgriffith: BDM?
16:19:52 <zhiyan> adding a table to track those devices would solve that
16:19:57 <jgriffith> avishay: Block Device Mapping table
16:20:04 <jgriffith> zhiyan: :)
16:20:04 <avishay> ahh ok, yep
16:20:20 <jgriffith> I'd really like to avoid it if we can... but it may be inevitable
16:20:27 <zhiyan> don't like that either...
16:20:50 <jgriffith> zhiyan: FYI nova already has it and will continue to use it... but we'd certainly like to not have to duplicate it in Glance etc
16:20:53 <jgriffith> or Cinder for that matter
16:20:58 <zhiyan> adding a file to save them?
16:21:01 <jgriffith> worst case however we could put it in Cinder
16:21:09 <jgriffith> and make the info available via API
16:21:11 <zhiyan> jgriffith: yes, yes
16:21:23 <jgriffith> zhiyan: not a fan of flat files to store data
16:21:45 <avishay> jgriffith: that's what i thought to do months ago...cinder should track that for everyone IMO
16:21:51 <jgriffith> we'll let hemnafk take a look and give us his ideas here
16:21:57 <zhiyan> that would tie brick to a table/database...
16:21:58 <jgriffith> avishay: perhaps
16:22:16 <avishay> any other topics that we can discuss until hemnafk gets here?
16:22:21 <jgriffith> avishay: I think I threw up in my mouth when you first suggested it
16:22:22 <zhiyan> i have one
16:22:29 <jgriffith> avishay: but I think it may be the right thing to do
16:22:32 <avishay> jgriffith: haha
16:22:43 <jgriffith> zhiyan: ?
16:22:45 <zhiyan> do we have a long term plan to add other type volume attaching/detaching support to brick?
16:23:01 <jgriffith> zhiyan: sorry... not sure I follow?
16:23:04 <avishay> zhiyan: other type == other protocols?
16:23:06 <zhiyan> such as sheepdog ( + fuse)..
16:23:07 <zhiyan> yes
16:23:08 <xyang_> an alternative that Avishay and I talked about a while ago is to have a method that every driver has to implement, which returns whether there are still LUNs on a target.
16:23:12 <jgriffith> zhiyan: :)
16:23:26 <zhiyan> jgriffith: we do?
16:23:27 <jgriffith> xyang_: I might like that better
16:23:36 <jgriffith> zhiyan: it's come up
16:23:53 <winston-d> zhiyan: why sheepdog? any special reason?
16:24:07 <xyang_> jgriffith: it's just every driver has to implement it.
16:24:09 <zhiyan> maybe it will be addressed in the I release?
16:24:21 <avishay> xyang_: oh yes, i forgot about that option!
16:24:27 <jgriffith> xyang_: yeah, could be troublesome
16:24:43 <zhiyan> winston-d: no, just an example; we should set an order...
16:24:44 <jgriffith> xyang_: I like the idea on the surface... I don't like the idea of implementing it everywhere :)
16:24:55 <xyang_> jgriffith: ya, it has problems.
16:25:10 <jgriffith> xyang_: it's easy for us to force it by having an unimplemented method in the base driver; it's a different thing to get everybody to implement it and make it work
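A sketch of the base-driver approach being weighed here, with a hypothetical method name; an unimplemented method in the base class forces every driver to override it or fail loudly:

```python
# Hypothetical hook in the base volume driver. The name and signature
# are illustrative; the point is that the base class declares it and
# raises, so drivers that don't implement it fail visibly.
class VolumeDriver(object):
    def target_has_luns(self, target_iqn):
        """Return True if any LUNs are still exported on the target.

        Brick would call this before tearing down a shared target,
        instead of inferring state from /dev/disk/by-path.
        """
        raise NotImplementedError()
```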
16:25:24 <avishay> it seems that there's no easy fix
16:25:34 <zhiyan> avishay: yes :)
16:25:46 <jgriffith> avishay: that's a good thing :)
16:26:15 <zhiyan> seems the iscsi tools are missing some control-plane APIs...
16:26:57 <xyang_> jgriffith: no good solution
16:27:19 <zhiyan> jgriffith: sorry, about the question of more protocol support for brick: will we address it in the I release?
16:27:53 <zhiyan> or is there no draft plan for that yet..
16:29:03 <zhiyan> jgriffith: need some coffee? :)
16:29:20 <guitarzan> haha
16:29:24 <jgriffith> zhiyan: haha... sorry
16:29:36 <jgriffith> zhiyan: have people in my cube looking at a problem :)
16:29:53 <jgriffith> zhiyan: I think we may be able to address them sooner
16:29:54 <zhiyan> oh, never mind
16:30:03 <jgriffith> zhiyan: but I want to get the basic implementation committed first
16:30:15 <jgriffith> zhiyan: and build off of that
16:30:16 <zhiyan> yes, yes
16:30:20 <jgriffith> zhiyan: if we can solve iSCSI and FC first that would be ideal
16:30:32 <zhiyan> so, the device bookkeeping issue is a big blocker
16:30:58 <jgriffith> zhiyan: let's talk to hemnafk later this morning...
16:31:13 <zhiyan> sure, of course, we are on the same page
16:31:15 <jgriffith> zhiyan: then we can decide to either add a table or consider the driver approach
16:32:04 <jgriffith> other thoughts/concerns?
16:32:11 <zhiyan> yes, IMO adding a table is fine for nova and cinder: nova already has it, so it makes sense for cinder to add one, but for glance there are some challenges
16:32:29 <jgriffith> zhiyan: what I mean by that is add the table in Cinder
16:32:39 <jgriffith> zhiyan: and create an API call to update/access it
16:32:48 <zhiyan> ok, got it
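A hypothetical sketch of that table as a SQLAlchemy model; the column names are illustrative, not a proposed schema:

```python
# Sketch of a Cinder-side attachments table, exposed via an API call so
# Nova/Glance don't each keep their own copy. Columns are illustrative.
from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class VolumeAttachment(Base):
    __tablename__ = 'volume_attachments'

    id = Column(Integer, primary_key=True)
    volume_id = Column(String(36), nullable=False)
    attached_host = Column(String(255))   # host that performed the attach
    device_path = Column(String(255))     # e.g. /dev/disk/by-path/...
    connector = Column(String(1024))      # serialized connector info
    attach_time = Column(DateTime)
```

As avishay notes next, the hard part is keeping rows like these in sync with reality when a connection drops without the API ever being called.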
16:33:01 <avishay> need to think about the table approach...make sure the table stays sync'ed with reality...for example, what if a connection is dropped for some reason without calling the cinder method?
16:33:05 <zhiyan> not great, but it can work well..
16:33:20 <zhiyan> avishay: +1
16:33:25 <jgriffith> avishay: yup, that's one of the big problems with that approach
16:33:36 <zhiyan> we'd need to sync them manually
16:33:57 <zhiyan> ok, next item?
16:33:59 <jgriffith> zhiyan: if it were that easy we wouldn't need the table to begin with :)
16:34:13 <xyang_> jgriffith: are we still keeping that table in nova if we add one in cinder?  need to sync them too
16:34:27 <zhiyan> haha
16:34:30 <jgriffith> xyang_: if we add a table in Cinder I would propose no
16:34:43 <jgriffith> xyang_: but it's still unknown if we'll go that route or not
16:34:51 <jgriffith> xyang_: and the Nova folks may not be keen on the idea
16:34:59 <xyang_> jgriffith: ok
16:35:06 <jgriffith> xyang_: initially maybe there would be a duplicate to make sure they can trust us :)
16:35:21 <xyang_> jgriffith: :)
16:35:22 <zhiyan> saving one piece of data in multiple places is a bad idea...
16:35:35 <jgriffith> xyang_: the other challenge is there are cases where they have items in there that aren't Cinder
16:35:52 <jgriffith> xyang_: so going back to what started brick: local devices for Nova
16:36:14 <xyang_> jgriffith: ok
16:36:51 <jgriffith> #topic H2 status
16:37:06 <jgriffith> Just a reminder that H2 is scheduled to land 7/18
16:37:13 <zhiyan> yep
16:37:24 <zhiyan> i need to speed up :)
16:37:31 <jgriffith> Freeze will be a few days before that
16:37:44 <jgriffith> and we'll probably have a Cinder freeze imposed before that as well
16:38:06 <avishay> i have a dependency on the brick attach/detach - would it be awful to merge it with this table issue outstanding?
16:38:08 <jgriffith> after G2 I said I wouldn't stay up all night doing reviews and baby-sitting Jenkins any more :)
16:38:31 <jgriffith> avishay: I'd prefer you wait if you don't mind
16:38:38 <avishay> jgriffith: ok
16:38:41 <jgriffith> avishay: I think we'll sort something out in the next couple of days
16:38:50 <avishay> sure
16:39:04 <jgriffith> So back to H2 :)
16:39:11 <jgriffith> well.. that kinda is H2
16:39:14 <jgriffith> but...
16:39:36 <jgriffith> If you have items assigned please please please keep me updated on your progress
16:39:50 <jgriffith> if you think you're not going to finish them let me know earlier rather than later
16:40:00 <kmartin> jgriffith: Could winston-d provide an update on the QoS support patch, if time allows?
16:40:18 <jgriffith> There's nothing wrong with removing the target and adding in later if you end up making it
16:40:24 <jgriffith> kmartin: Yep, I was saving the last 15 minutes :)
16:40:36 <zhiyan> kmartin: sorry :)
16:40:46 <kmartin> ok, go on for another 5 minutes then? :)
16:40:54 <jgriffith> but if nobody has anything else on this topic... :)
16:41:10 <jgriffith> anybody have questions, new things to add for H2?
16:41:12 <avishay> so i put up a WIP of volume migration
16:41:33 <avishay> still a stub there waiting for attach/detach, but if anyone wants to take a look i'd appreciate it
16:41:44 <jgriffith> avishay: will do
16:41:50 * jgriffith said that last week too
16:41:54 <avishay> :)
16:41:58 <jgriffith> alright...
16:42:01 <avishay> it wasn't ready last week, it is now
16:42:11 <jgriffith> #topic QoS
16:42:15 <winston-d> kmartin: sure
16:42:19 <jgriffith> winston-d: how goes the battle
16:43:12 <winston-d> so we talked about how we should store QoS/Rate-limit info in the Cinder DB after the meeting last week.
16:43:50 <winston-d> and I'm not sure who won the argument between DuncanT-meeting and jgriffith...
16:44:06 * jgriffith should win just because :)
16:44:22 <winston-d> :)
16:44:25 <jgriffith> winston-d: actually I thought DuncanT-meeting and I ended up agreeing for the most part
16:44:54 <kmartin> yeah, jgriffith wins since DuncanT-meeting isn't here
16:45:03 <jgriffith> winston-d: I conceded that modifying QoS in an existing type could be skipped
16:45:13 <jgriffith> kmartin: haah!  I like that
16:45:17 <jgriffith> Until I'm not here
16:45:43 <jgriffith> winston-d: I think that was the big sticking point... retype versus modify type
16:45:58 <jgriffith> winston-d: retype is the one that's important to me :)
16:46:00 <winston-d> k. so i'll submit another patch for rate-limit first.
16:46:17 <winston-d> jgriffith: that's right.
16:46:23 <jgriffith> kmartin: so were you synched up on that discussion?
16:46:34 <jgriffith> rate-limiting versus QoS
16:46:38 <kmartin> winston-d: I believe so
16:46:42 <avishay> hemna: woohoo!  we're all waiting for you! :P
16:46:52 <jgriffith> kmartin: winston-d cool cool
16:46:56 <hemna> :P
16:46:58 <hemna> doh
16:47:05 <hemna> sorry guys...traffic was bad today
16:47:13 <jgriffith> hemna: that's what you get for living in Cali
16:47:21 <zhiyan> oh, hi hemna~
16:47:22 <hemna> :P
16:47:24 <jgriffith> Isn't traffic always bad?
16:47:41 <hemna> yah usually...today was logging trucks
16:47:42 <hemna> bleh
16:47:46 <jgriffith> winston-d: ok... so did we have any outstadnign questions?
16:48:20 <winston-d> i will be in cinder channel after meeting if there's anything to follow up
16:48:31 * jgriffith 's typing stinks today
16:48:37 <jgriffith> cool...
16:48:45 <winston-d> i think hemna should have some time for brick
16:48:46 <jgriffith> then I guess we'll actually wrap it up early today :)
16:48:56 * jgriffith will save controversial stuff for next time :)
16:49:12 <jgriffith> #topic open discussion
16:49:19 <jgriffith> last chance if anybody has anything?
16:49:25 <hemna> brick ?
16:49:26 <zhiyan> jgriffith: ready for the 'volume-host-attaching' bp design discussion?
16:49:58 <jgriffith> zhiyan: oh... yah
16:50:03 <jgriffith> zhiyan: #openstack-cinder
16:50:14 <jgriffith> alrighty folks.. thanks!
16:50:15 <zhiyan> i have posted the design/questions on https://etherpad.openstack.org/volume-host-attaching , can you check it?
16:50:32 <winston-d> oh, btw, there's some interest in more filters/weighers on the operator mailing list. i'll try to address that after QoS/ratelimit settles
16:50:42 <jgriffith> guitarzan: hemna kmartin we should all take a look
16:51:01 <jgriffith> winston-d: ooooo... I'm not on that list anymore I don't think :(
16:51:23 <jgriffith> winston-d: if you have time I'd like your input on zhiyan etherpad as well
16:51:27 <jgriffith> alright everyone
16:51:32 <jgriffith> thanks!
16:51:34 <hemna> ok
16:51:37 <jgriffith> #end meeting cinder
16:51:41 <winston-d> jgriffith: because the capacity weigher sucks when every back-end reports infinite...
16:51:42 <jgriffith> #endmeeting cinder