16:04:04 #startmeeting cinder
16:04:05 Meeting started Wed Jun 19 16:04:04 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:08 The meeting name has been set to 'cinder'
16:04:16 ok cool
16:04:17 Happy Wednesday everyone!
16:04:53 Let's roll.. I wanna save some time for zhiyan
16:05:09 ok cool
16:05:11 #topic refactoring SwiftBackupService methods
16:05:23 ok this is my item
16:05:26 :)
16:05:45 basically i looked into refactoring the backup method
16:05:45 thanks ~
16:05:57 there is no API change for this method
16:06:11 but instead of doing everything in one big call
16:06:29 seiflotfy_: so that works?
16:06:33 backup now looks as follows: http://fpaste.org/19661/65649113/
16:06:35 yes it works
16:06:45 it did not break the tests
16:06:54 i need to clean up the patch though
16:06:55 ollie1: around?
16:07:15 and based on that i implemented the next item (rbddriver) which also works without having to map the ceph volumes or export them
16:07:19 yep
16:07:31 I'd be curious to get input from the HP folks that initially wrote this code
16:07:46 seiflotfy_: if it's functionally equivalent and works I say submit it
16:08:04 jgriffith: basically i did not change the code at all but just moved pieces of it around
16:08:07 seiflotfy_: you do realise I am working on the rbd side of things as part of A PUT without content length needs to be chunked. If the missing
16:08:10 into new functions
16:08:16 I looked at it briefly... Looks good
16:08:17 whoops
16:08:26 jgriffith: I think smul has been talking to seiflotfy about the backup code
16:08:33 https://blueprints.launchpad.net/cinder/+spec/cinder-backup-to-ceph
16:08:35 that what you're referring to?
16:08:37 ollie1: DuncanT-meeting seiflotfy_ awesome!
16:08:47 seiflotfy_: I say finish it up and submit the patch
16:08:55 first glance it looks pretty good to me
16:08:56 its 2 patches
16:08:59 one of them to refactor
16:09:06 and based on it another for the rbddriver
16:09:07 seiflotfy_: fair enough
16:09:27 seiflotfy_: have you looked at Ceph ObjectStore as a target?
16:09:32 dosaboy: i did not understand what you were telling me
16:09:52 jgriffith: i am going from ceph 2 ceph
16:09:55 so I intend to implement the backup to Ceph object store stuff as part of https://blueprints.launchpad.net/cinder/+spec/cinder-backup-to-ceph
16:09:58 Ceph ObjectStore is working as target
16:10:00 which I am working on right now
16:10:01 i based my work on top of the work of mkoderer
16:10:02 seiflotfy_: haha... I think I was asking the same thing dosaboy was pointing to
16:10:26 but this patch must be applied https://review.openstack.org/#/c/33639/
16:10:27 ;)
16:10:31 just want to be sure we don't overlap/conflict ;)
16:10:36 so yes for me here it backs up to ceph based on https://review.openstack.org/#/c/33639/
16:11:08 cool!
16:11:18 * jgriffith admits he's been behind on reviews
16:11:24 ah you are talking about Ceph + RGW
16:11:35 dosaboy: yes sir
16:11:56 basically i am reading in and saving the stuff just like lvm did so it stays generic
16:11:57 I am talking about backup to Ceph object store without RGW ;)
16:11:59 thus i needed to refactor
16:12:05 dosaboy: that would be great
16:12:11 we did not touch that at all
16:12:20 gotcha
16:12:27 no, we just used radosgw
16:12:31 so based on the swift api
16:12:35 yes
16:12:44 just quickly, I pinged the Ceph guys re your PUT issue
16:12:52 and?
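For readers skimming the log, a rough illustration of the refactor seiflotfy_ describes above: breaking the monolithic SwiftBackupService.backup() into smaller helpers that a subclass (for example an RBD/Ceph-backed service) could reuse without mapping or exporting volumes. This is a hedged sketch only; the helper names (_prepare_backup, _backup_chunk, _finalize_backup) and the chunk size are hypothetical and are not taken from the actual patch.

    # Hypothetical sketch of the discussed refactor; names are illustrative.
    class ChunkedBackupServiceSketch(object):

        chunk_size_bytes = 1998 * 1024 * 1024  # example chunk size only

        def backup(self, backup, volume_file):
            """Public entry point; same API as before, now composed of helpers."""
            container, object_prefix = self._prepare_backup(backup)
            object_meta = {'list': [], 'prefix': object_prefix}
            while True:
                data = volume_file.read(self.chunk_size_bytes)
                if not data:
                    break
                self._backup_chunk(backup, container, data, object_meta)
            self._finalize_backup(backup, container, object_meta)

        def _prepare_backup(self, backup):
            """Create the target container and compute the object prefix."""
            raise NotImplementedError()

        def _backup_chunk(self, backup, container, data, object_meta):
            """Write one chunk to the backend (Swift, radosgw, ...)."""
            raise NotImplementedError()

        def _finalize_backup(self, backup, container, object_meta):
            """Write the metadata object listing all uploaded chunks."""
            raise NotImplementedError()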
16:12:59 I will paste their response on the mailing list
16:13:12 they say that the existing backup service is not using the api correctly
16:13:23 if content-length is not specified,
16:13:33 you need to set 'chunked' in metadata
16:13:38 I will post in a bit
16:13:44 dosaboy: great
16:13:47 ok interesting
16:14:16 dosaboy: do you however mind a refactor to the backup function
16:14:29 should be fine
16:14:35 dosaboy: great
16:14:42 I intend to use it, as much as possible, as it is
16:14:47 so go ahead
16:14:51 ok cool
16:15:02 will post a patch then
16:15:21 any questions regarding this issue or should we move on
16:15:22 ?
16:15:36 move on.. ;)
16:15:38 oops, late, Hi all!
16:15:54 awesome... thanks guys
16:16:19 rushiagr: yo
16:16:40 next item for 'brick' status update?
16:16:44 seiflotfy_: so that seems to take care of items 1 and 2 on the agenda no?
16:16:50 yes
16:16:57 will post both patches first thing in the morning then
16:16:59 :D
16:17:05 Ok
16:17:15 #topic brick status update
16:17:24 zhiyan: you're up
16:17:33 hemna, you there?
16:17:36 I think hemnafk is not here yet
16:17:43 hemna is on his way to work
16:18:01 might want to bring this up at the end
16:18:08 ok, i have reviewed https://review.openstack.org/#/c/32650/
16:18:12 zhiyan: his changes are coming along nicely and I think they'll land this week
16:18:15 and gave some comments there
16:18:15 of the meeting, he should be here by then
16:18:29 looks like his patch is almost ready, but xyang_ found an issue in testing
16:19:11 unfortunately I think all of this is why we ended up with BDM tables :(
16:19:13 yes, seems hooking host_driver in brick has some issues..
16:19:37 avishay: I didn't test this particular patch. My concern is based on my observations of /dev/disk/by-path
16:19:48 jgriffith: BDM?
16:19:52 adding a table to book-keep those devices will solve that
16:19:57 avishay: Block Device Mapping table
16:20:04 zhiyan: :)
16:20:04 ahh ok, yep
16:20:20 I'd really like to avoid it if we can... but it may be inevitable
16:20:27 don't like that either...
16:20:50 zhiyan: FYI nova already has it and will continue to use it... but we'd certainly like to not have to duplicate it in Glance etc
16:20:53 or Cinder for that matter
16:20:58 adding a file to save them?
16:21:01 worst case however we could put it in Cinder
16:21:09 and make the info available via API
16:21:11 jgriffith: yes, yes
16:21:23 zhiyan: not a fan of flat files to store data
16:21:45 jgriffith: that's what i thought to do months ago...cinder should track that for everyone IMO
16:21:51 we'll let hemnafk take a look and give us his ideas here
16:21:57 that will push brick to bind with a table/database...
16:21:58 avishay: perhaps
16:22:16 any other topics that we can discuss until hemnafk gets here?
16:22:21 avishay: I think I threw up in my mouth when you first suggested it
16:22:22 i have one
16:22:29 avishay: but I think it may be the right thing to do
16:22:32 jgriffith: haha
16:22:43 zhiyan: ?
16:22:45 do we have a long term plan to add other types of volume attaching/detaching support to brick?
16:23:01 zhiyan: sorry... not sure I follow?
16:23:04 zhiyan: other type == other protocols?
16:23:06 such as sheepdog ( + fuse)..
16:23:07 yes
16:23:08 an alternative that Avishay and I talked about a while ago is to have a method that every driver has to implement which returns whether there are still luns on a target.
16:23:12 zhiyan: :)
16:23:26 jgriffith: we have?
16:23:27 xyang_: I might like that better
16:23:36 zhiyan: it's come up
16:23:53 zhiyan: why sheepdog? any special reason?
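The content-length point dosaboy relays above roughly corresponds to how uploads without a known size have to be made with chunked transfer encoding. A minimal sketch, assuming python-swiftclient's Connection.put_object interface (which radosgw's Swift-compatible API accepts); the endpoint, credentials, container, and object names below are placeholders, not values from the backup service.

    # Assumed usage of python-swiftclient; placeholders throughout.
    from swiftclient import client as swift

    conn = swift.Connection(authurl='http://keystone.example:5000/v2.0',
                            user='tenant:user', key='secret',
                            auth_version='2')

    with open('/tmp/volume_chunk', 'rb') as reader:
        # With a file-like object and no content_length, the client falls
        # back to a chunked PUT (Transfer-Encoding: chunked), which is what
        # the gateway expects when no Content-Length header is sent.
        conn.put_object('volumebackups',            # container (placeholder)
                        'backup_0001/chunk_00001',  # object name (placeholder)
                        contents=reader,
                        content_length=None,
                        chunk_size=65536)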
16:24:07 jgriffith: it's just that every driver has to implement it.
16:24:09 maybe it will be addressed in the I release?
16:24:21 xyang_: oh yes, i forgot about that option!
16:24:27 xyang_: yeah, could be troublesome
16:24:43 winston-d: no, just an example, we should set an order...
16:24:44 xyang_: I like the idea on the surface... I don't like the idea of implementing it everywhere :)
16:24:55 jgriffith: ya, it has problems.
16:25:10 xyang_: it's easy for us to force it by having an unimplemented method in the base driver, it's different to get everybody to implement it and make it work
16:25:24 it seems that there's no easy fix
16:25:34 avishay: yes :)
16:25:46 avishay: that's a good thing :)
16:26:15 seems the iscsi tool is missing some control plane api...
16:26:57 jgriffith: no good solution
16:27:19 jgriffith: sorry, about the question of more protocol support for brick, will we address that in the I release?
16:27:53 or is there no draft plan for that yet..
16:29:03 jgriffith: need some coffee? :)
16:29:20 haha
16:29:24 zhiyan: haha... sorry
16:29:36 zhiyan: have people in my cube looking at a problem :)
16:29:53 zhiyan: I think we may be able to address them sooner
16:29:54 oh, never mind
16:30:03 zhiyan: but I want to get the basic implementation committed first
16:30:15 zhiyan: and build off of that
16:30:16 yes, yes
16:30:20 zhiyan: if we can solve iSCSI and FC first that would be ideal
16:30:32 so, the device book-keeping issue is a big blocker
16:30:58 zhiyan: let's talk to hemnafk later this morning...
16:31:13 sure, of course, we are on the same page
16:31:15 zhiyan: then we can decide to either add a table or consider the driver approach
16:32:04 other thoughts/concerns?
16:32:11 yes, IMO, adding a table is fine for nova and cinder, since nova already has it, cinder adding it makes sense, but for glance there are some challenges
16:32:29 zhiyan: what I mean by that is add the table in Cinder
16:32:39 zhiyan: and create an API call to update/access it
16:32:48 ok, got it
16:33:01 need to think about the table approach...make sure the table stays sync'ed with reality...for example, what if a connection is dropped for some reason without calling the cinder method?
16:33:05 not great, but it can work..
16:33:20 avishay: +1
16:33:25 avishay: yup, that's one of the big problems with that approach
16:33:36 we need to SYNC them manually
16:33:57 ok, next item?
16:33:59 zhiyan: if it were that easy we wouldn't need the table to begin with :)
16:34:13 jgriffith: are we still keeping that table in nova if we add one in cinder? need to sync them too
16:34:27 haha
16:34:30 xyang_: if we add a table in Cinder I would propose no
16:34:43 xyang_: but it's still unknown if we'll go that route or not
16:34:51 xyang_: and the Nova folks may not be keen on the idea
16:34:59 jgriffith: ok
16:35:06 xyang_: initially maybe there would be a duplicate to make sure they can trust us :)
16:35:21 jgriffith: :)
16:35:22 saving one piece of data in multiple places, bad idea...
16:35:35 xyang_: the other challenge is there are cases where they have items in there that aren't Cinder
16:35:52 xyang_: so going back to what started brick being local devices for Nova
16:36:14 jgriffith: ok
16:36:51 #topic H2 status
16:37:06 Just a reminder that H2 is scheduled to land 7/18
16:37:13 yep
16:37:24 i need to speed up :)
16:37:31 Freeze will be a few days before that
16:37:44 and we'll probably have a Cinder freeze imposed before that as well
16:38:06 i have a dependency on the brick attach/detach - would it be awful to merge it with this table issue outstanding?
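To make the two options in the brick discussion above concrete, here is a hedged sketch of the "driver method" alternative xyang_ raises (each driver reports whether a target still has LUNs in use, so the connector knows whether it may tear the session down), as opposed to tracking devices in a BDM-style table. The class and method names are hypothetical and not an agreed API.

    # Hypothetical sketch of the driver-reports-LUN-usage approach.
    class VolumeDriverSketch(object):

        def target_has_active_luns(self, connector, target_iqn):
            """Return True if other LUNs are still exported to this host on
            the given target, meaning the iSCSI session must be kept."""
            raise NotImplementedError()


    class ISCSIConnectorSketch(object):

        def __init__(self, driver):
            self.driver = driver

        def disconnect_volume(self, connection_properties, device_info):
            target_iqn = connection_properties['target_iqn']
            # Only log out of the target when the driver says nothing else
            # on this host still depends on the session.
            if not self.driver.target_has_active_luns(
                    connection_properties, target_iqn):
                self._logout_target(target_iqn)

        def _logout_target(self, target_iqn):
            # placeholder for the iscsiadm logout handling
            pass

The trade-off discussed in the meeting applies directly: this avoids a shared table that can drift out of sync with reality, but it pushes an extra requirement onto every driver author.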
16:38:08 after G2 I said I wouldn't stay up all night doing reviews and baby-sitting Jenkins any more :)
16:38:31 avishay: I'd prefer you wait if you don't mind
16:38:38 jgriffith: ok
16:38:41 avishay: I think we'll sort something out in the next couple of days
16:38:50 sure
16:39:04 So back to H2 :)
16:39:11 well.. that kinda is H2
16:39:14 but...
16:39:36 If you have items assigned please please please keep me updated on your progress
16:39:50 if you think you're not going to finish them let me know earlier rather than later
16:40:00 jgriffith: Could winston-d provide an update on the QoS support patch, if time allows?
16:40:18 There's nothing wrong with removing the target and adding it in later if you end up making it
16:40:24 kmartin: Yep, I was saving the last 15 minutes :)
16:40:36 kmartin: sorry :)
16:40:46 ok, go on for another 5 minutes then? :)
16:40:54 but if nobody has anything else on this topic... :)
16:41:10 anybody have questions, new things to add for H2?
16:41:12 so i put up a WIP of volume migration
16:41:33 still a stub there waiting for attach/detach, but if anyone wants to take a look i'd appreciate it
16:41:44 avishay: will do
16:41:50 * jgriffith said that last week too
16:41:54 :)
16:41:58 alright...
16:42:01 it wasn't ready last week, it is now
16:42:11 #topic QoS
16:42:15 kmartin: sure
16:42:19 winston-d: how goes the battle
16:43:12 so we talked about how we should store QoS/Rate-limit info in the Cinder DB after the meeting last week.
16:43:50 and I'm not sure whether DuncanT-meeting or jgriffith wins the argument...
16:44:06 * jgriffith should win just because :)
16:44:22 :)
16:44:25 winston-d: actually I thought DuncanT-meeting and I ended up agreeing for the most part
16:44:54 yeah, jgriffith wins since DuncanT-meeting isn't here
16:45:03 winston-d: I conceded that modifying QoS in an existing type could be skipped
16:45:13 kmartin: haha! I like that
16:45:17 Until I'm not here
16:45:43 winston-d: I think that was the big sticking point... retype versus modify type
16:45:58 winston-d: retype is the one that's important to me :)
16:46:00 k. so i'll submit another patch for rate-limit first.
16:46:17 jgriffith: that's right.
16:46:23 kmartin: so were you synched up on that discussion?
16:46:34 rate-limiting versus QoS
16:46:38 winston-d: I believe so
16:46:42 hemna: woohoo! we're all waiting for you! :P
16:46:52 kmartin: winston-d cool cool
16:46:56 :P
16:46:58 doh
16:47:05 sorry guys...traffic was bad today
16:47:13 hemna: that's what you get for living in Cali
16:47:21 oh, hi hemna~
16:47:22 :P
16:47:24 Isn't traffic always bad?
16:47:41 yah usually...today was logging trucks
16:47:42 bleh
16:47:46 winston-d: ok... so did we have any outstanding questions?
16:48:20 i will be in the cinder channel after the meeting if there's anything to follow up
16:48:31 * jgriffith 's typing stinks today
16:48:37 cool...
16:48:45 i think hemna should have some time for brick
16:48:46 then I guess we'll actually wrap it up early today :)
16:48:56 * jgriffith will save controversial stuff for next time :)
16:49:12 #topic open discussion
16:49:19 last chance if anybody has anything?
16:49:25 brick ?
16:49:26 jgriffith: ready for the 'volume-host-attaching' bp design discussion?
16:49:58 zhiyan: oh... yah
16:50:03 zhiyan: #openstack-cinder
16:50:14 alrighty folks.. thanks!
16:50:15 i have posted the design/questions at https://etherpad.openstack.org/volume-host-attaching , can you check it?
16:50:32 oh, btw, there's some interest in more filters/weighers on the operator mailing list.
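For context on the QoS/rate-limit thread above, here is a hedged illustration of the general direction being debated: QoS and rate-limit settings stored in the Cinder DB, associated with a volume type, and handed to whichever side enforces them (the hypervisor for front-end rate limiting, the array driver for back-end QoS). The keys, the 'consumer' field, and the helper below are illustrative only, not the schema that was eventually agreed in this meeting.

    # Illustrative only; not the final Cinder QoS schema.
    example_qos_specs = {
        'name': 'gold-tier',
        'consumer': 'front-end',   # hypervisor-enforced; 'back-end' would mean the array
        'specs': {
            'total_bytes_sec': '104857600',  # e.g. a 100 MB/s cap
            'total_iops_sec': '1000',
        },
    }

    def resolve_qos_for_volume(volume_type, qos_table):
        """Hypothetical helper: look up the QoS entry referenced by a volume
        type so it can be attached to the connection info at attach time."""
        qos_id = volume_type.get('qos_specs_id')
        return qos_table.get(qos_id) if qos_id else None

This also makes the retype-versus-modify distinction in the discussion clearer: retyping a volume moves it to a type that points at different QoS specs, whereas modifying QoS inside an existing type would silently change every volume already using that type.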
i'll try to address that after QoS/ratelimit settles
16:50:42 guitarzan: hemna kmartin we should all take a look
16:51:01 winston-d: ooooo... I'm not on that list anymore I don't think :(
16:51:23 winston-d: if you have time I'd like your input on zhiyan's etherpad as well
16:51:27 alright everyone
16:51:32 thanks!
16:51:34 ok
16:51:37 #end meeting cinder
16:51:41 jgriffith: because the capacity weigher sucks when every back-end reports infinite...
16:51:42 #endmeeting cinder
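On winston-d's closing point about the capacity weigher and operators wanting more filters/weighers: a minimal sketch of a custom scheduler weigher that weighs on allocated (provisioned) capacity instead of free capacity, which sidesteps the problem of every back-end reporting 'infinite' free space. The base-class import path has moved between releases, so treat the import and attribute names as assumptions rather than a drop-in implementation.

    # Assumed base class location; verify against the release in use.
    from cinder.scheduler import weights


    class AllocatedCapacityWeigherSketch(weights.BaseHostWeigher):
        """Prefer back-ends with less provisioned capacity."""

        def _weigh_object(self, host_state, weight_properties):
            # Lower allocated capacity -> higher weight (hence the negation).
            allocated = getattr(host_state, 'allocated_capacity_gb', 0) or 0
            return -float(allocated)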