16:13:55 #startmeeting cinder
16:13:56 Meeting started Wed Sep 25 16:13:55 2013 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:13:57 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:13:59 The meeting name has been set to 'cinder'
16:14:05 Go Mike :-)
16:14:06 hi!
16:14:16 hey all
16:14:25 hi
16:14:31 aloha
16:14:37 hi
16:14:39 agenda items for today https://wiki.openstack.org/wiki/CinderMeetings
16:14:40 hi
16:14:41 hi
16:14:50 LOL
16:15:15 we'll skip the what's broken in Havana part in hopes jgriffith shows up. I'm familiar with some, but not all.
16:15:22 thingee: o/
16:15:22 and circle back
16:15:27 ha
16:15:48 #topic What's broken in Havana
16:15:49 Getting some coffee... took longer than expected
16:16:02 So there's the list :)
16:16:12 pretty sad actually IMO
16:16:39 I'd like to get a few of those fixed for RC
16:16:45 if not all of them
16:16:53 agreed. where's our bug fix day?
16:17:12 o/
16:17:13 thingee: if folks think that will help I'll propose Friday
16:17:18 jgriffith: actually i think r/o-attach should also be on the list
16:17:38 I think it'll help, just so people can speak with their employer ahead of time about hopefully getting one day set aside to help
16:17:45 zhiyan: well... these are existing features that don't work, or break existing functionality
16:17:55 I'd prioritize them way higher than R/O for now
16:18:01 I can't do Friday, but some of the HP cinder team will be able to
16:18:21 i have collated backup-related issues into bp/cinder-backup-improvements so that we can thrash out what fixes can get into H
16:18:34 https://blueprints.launchpad.net/cinder/+spec/cinder-backup-improvements
16:18:35 jgriffith: r/o-attach doesn't work without the nova side change, from an end2end view... yes, agree
16:18:43 dosaboy: just saw that, thanks!
16:19:21 dosaboy: I looked at independent backup services for each backend and got it *kinda* working
16:19:30 dosaboy: but there seems to be some other issues with that
16:19:31 i was not aware of the multi backend issue but having discussed with DT it looks like an H solution may be doable
16:19:34 ah cool
16:19:39 dosaboy: startup sequence and scaling
16:19:48 I'm not sure it's an answer
16:19:50 jgriffith: is there a bug for multi-backend not working with backups?
16:19:58 well any ideas please add to the bp
16:20:01 thingee: there is... I'd have to find it
16:20:04 thingee: I think so, one sec
16:20:07 thingee: it's all in the bp
16:20:10 https://bugs.launchpad.net/cinder/+bug/1228223
16:20:13 Launchpad bug 1228223 in cinder "cinder-backup does not work with multi backend enabled" [Undecided,Confirmed]
16:20:14 i ref'd all the bugs
16:20:31 The bigger one that bumped up my list is the CONF flags in brick
16:21:04 I'm working on some of that this morning, and cburgess was going to have a go at the shares portion
16:21:27 It would be helpful if we split that work up as it's not trivial/small
16:21:40 do folks understand the issue there?
16:21:56 I thought I did but maybe some review would help
16:22:05 bswartz: review never hurts
16:22:11 https://launchpad.net/cinder/+milestone/havana-rc1
16:22:14 jgriffith: This might be a bit more complicated than we thought.
16:22:23 cburgess: I know :(
16:22:46 cburgess: it means feeding everything back into "cinder" and creating a caller/wrapper where needed
16:22:47 #link https://bugs.launchpad.net/cinder/+bug/1230066
16:22:49 I found a lot of code in the brick initiator stuff last night that has no notion or method for passing options around. Fixing the volume drivers is the easy part really.
16:22:50 Launchpad bug 1230066 in cinder "Should not be using CONF settings in brick" [High,Triaged]
16:23:02 initiator and iscsi are the worst
16:23:07 jgriffith: is the backup improvements bp going to be targeted for h-rc1?
16:23:28 thingee: I haven't targeted it as I'm not sure it's going to be feasible
16:23:52 thingee: and it fell off my plate so if nobody else can grab it I don't see how it can make H to be honest
16:23:55 The nfs_mount_options flag is actually designed to work okay in a multi-backend scenario
16:24:02 bswartz: Oh?
16:24:11 bswartz: how...?
16:24:32 ok, let's target the backup multi-backend bug at least for rc1
16:24:33 bswartz: that's good news, but I couldn't see how it would work if you wanted different options for each backend?
16:24:52 thingee: Ok by me
16:25:01 done
16:25:01 well actually nfs_mount_point_base was designed to work in a multibackend scenario -- you're right that nfs_mount_options could potentially have issues
16:25:31 bswartz: yeah, mount point seemed ok as it creates sub-dirs off the parent
16:25:35 bswartz: Looks like both are in remotefs now and referenced directly from CONF.
16:25:39 bswartz: options I don't think can work
16:25:51 well it can work as long as everyone uses the same options
16:25:59 but clearly that's not going to be true in all cases
16:26:09 jgriffith: does this have the right target? https://bugs.launchpad.net/cinder/+bug/1202896
16:26:11 Launchpad bug 1202896 in nova "quota_usage data constantly out of sync" [High,Confirmed]
16:26:15 cburgess: bswartz: my other issue though, that I pointed out to hemna_, is I don't think we should require duplicate conf entries in projects that use brick
16:26:37 in other words they should define their conf options etc and we should enforce needed settings etc via __init__
16:27:03 it's up to each project to figure out how they want to deal with things in terms of those options, or if they even want to provide options
16:27:16 thingee: the idea was to at least safeguard if we cannot solve the cinder backup issues in H
16:27:34 thingee: yes, I removed the H target
16:27:36 jgriffith: you're proposing a significant refactor
16:27:43 bswartz: yep
16:27:52 bswartz: it's either that or existing functionality is broken
16:28:01 >_<
16:28:04 bswartz: which is what I have been kinda pushing on all along
16:28:16 bswartz: but I was outvoted :)
16:28:26 unless folks see another option?
16:28:37 Personally the whole brick thing is a train wreck the way it is now
16:28:58 in the case of NFS, the "brokenness" is very minor -- I bet people could survive
16:29:12 bswartz: I think cburgess and others might disagree
16:29:13 I can't speak about the other connectors
16:29:20 bswartz: and broken is broken IMO
16:29:29 morning
16:29:42 so... lots of love for brick I see. what's up?
16:29:46 bswartz: sadly or happily I think NFS is the easiest to fix
16:29:57 jgriffith: It's manageable for us, not ideal, but manageable. That being said, I do agree that we need some way of passing what amounts to highly variable options, like mount options, through brick.
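A minimal sketch of the approach jgriffith outlines above (the discussion continues below): the consuming project keeps its own config options and hands per-backend values to brick objects at __init__, with defaults matching what the global CONF defaults would have been. Class, option, and method names here are illustrative assumptions, not the actual brick code.

    class RemoteFsClient(object):
        """Stand-in for a brick object that currently reads a global CONF."""

        def __init__(self, mount_point_base='/var/lib/cinder/mnt',
                     mount_options=None):
            # Defaults mirror the old CONF defaults, so callers that pass
            # nothing keep today's behavior.
            self._mount_point_base = mount_point_base
            self._mount_options = mount_options

        def mount_cmd(self, share, mount_path):
            cmd = ['mount', '-t', 'nfs']
            if self._mount_options:
                cmd += ['-o', self._mount_options]
            return cmd + [share, mount_path]

    class NfsDriverWrapper(object):
        """The consumer (Cinder, Nova, ...) owns its options and feeds
        per-backend values in, which is what allows different mount
        options for each backend."""

        def __init__(self, backend_conf):
            # backend_conf is any per-backend mapping of option name -> value.
            self._client = RemoteFsClient(
                mount_point_base=backend_conf.get('nfs_mount_point_base',
                                                  '/var/lib/cinder/mnt'),
                mount_options=backend_conf.get('nfs_mount_options'))

In a sketch like this, each backend section would produce its own wrapper instance, so the multi-backend case falls out naturally rather than fighting a single global CONF value.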
16:30:03 yeah I'm happy to fix the NFS stuff if we can decide on a better approach
16:30:15 jgriffith: Easy, except for the brick initiator stuff.
16:30:30 bswartz: read CONF from the wrapper/driver and pass it in to brick objects on __init__
16:30:46 cburgess: yes, I'm separating that :)
16:30:49 ?
16:30:54 cburgess: initiator and iscsi are hosed!
16:31:22 hemna: scroll back... but the issue I raised regarding global CONF in brick
16:31:34 jgriffith: Yeah, ok. If you are fine breaking initiator, or having a fallback to use CONF if you didn't pass it in, then the nfs driver is trivial to fix.
16:31:38 bswartz: if you look at LVM it shows what I'm talking about
16:31:59 jgriffith: +1
16:32:04 cburgess: so I would do a default on init that would be the same as what we're setting the default CONF to
16:32:33 jgriffith: Yeah, something like that is easy.
16:32:53 cburgess: it's reliable and at least it keeps things *working*
16:33:03 jgriffith: yes
16:33:05 cburgess: room for improvement/cleanup later
16:33:28 but really anybody consuming brick should be using a wrapper of some sort IMO, including Cinder
16:33:31 jgriffith: Also prevents the need for the doc bug for those of us with backend-specific mount options and mount dirs wondering what happened.
16:33:32 that's the whole point
16:33:41 jgriffith: what about option defaults which themselves rely on other options?
16:33:41 cburgess: Yes!! Added bonus
16:33:46 who else is using brick?
16:33:48 abstraction is our friend :)
16:34:02 the default for 'nfs_mount_point_base' is '$state_path/mnt'
16:34:16 hemna: no one yet, and the way it is now no one ever will
16:34:23 ok, seems like we're in agreement on a first approach... can we take any additional discussion to #openstack-cinder after the meeting? Got a few more agenda items and we're running out of time as usual. :)
16:34:27 hemna: so why shuffle everything for nobody to use it
16:34:37 Nexenta still doesn't understand brick, so we're immune.
16:34:37 :)
16:34:38 *sigh*
16:34:47 bswartz: We can probably actually just keep those nfs options in the remotefs driver. The backend-aware code can then pass them in from the nfs and gluster volume drivers if need be.
16:34:47 thingee: take it away
16:34:51 #topic Cinderclient release plans/status?
16:34:54 eharney: you're up
16:35:15 just wanted to know what the general idea was around what has to happen for the next cinderclient release and when we're aiming for that
16:35:25 i think there is still some Havana code needing review there..
16:35:35 eharney: I do them typically when we cut the milestone
16:35:41 eharney: +1
16:35:50 eharney: so I'd push to pypi when rc1 cuts
16:35:55 eharney: then again at release time
16:36:05 eharney: but I have no problem doing it sooner
16:36:18 eharney: I'd just as soon get everything in the queue completed first though
16:36:29 just keep in mind that we need to push a requirement update through openstack/reqs and to nova
16:36:29 eharney: queue == gerrit
16:36:48 eharney: not following?
16:37:16 Oh
16:37:19 we don't want to cut the next cinderclient release so late that Nova doesn't want us to update their reqs for the new features
16:37:20 yes
16:37:34 not sure how that usually shakes out
16:37:34 eharney: alright, I'll cut this week for sure
16:37:58 eharney: TBH with the changes we added it probably should have been done already :)
16:38:09 eharney: ie the Nova side
16:38:10 i would agree :)
16:38:17 #action contributors need to review cinderclient changes https://review.openstack.org/#/q/status:open+project:openstack/python-cinderclient,n,z
16:38:22 alright, I'll get an interim push out
16:38:30 eharney: anything else?
16:38:34 nope
16:38:45 #topic OSLO imports
16:38:48 DuncanT-: you're up
16:39:02 OSLO imports
16:39:03 -2
16:39:14 yeah i dunno what's going on here
16:39:16 -2
16:39:25 we shouldn't be pulling those in at this late stage
16:39:31 haha.. sorry DuncanT-, speak your piece
16:39:31 So we're getting these massive code drops from OSLO that are totally impossible to review
16:39:47 I think we all agree here
16:39:55 If it's not an existing bug that affects cinder, -2
16:40:03 I've been hitting -2, but if anybody sees any specific fixes we need can they push them through in as small a unit as possible
16:40:13 Cool, looks like there's no argument
16:40:16 so i just posted a requirements update that kind of fits in this same category..
16:40:17 DuncanT-: the only one I saw was processutils for the windows bug
16:40:27 Consider me happy
16:40:29 excellent
16:40:38 #topic bp/cinder-backup-improvements
16:40:49 dosaboy: anything else you wanted to add that wasn't already discussed?
16:40:53 DuncanT-, I -2'd one this morning saying it should go in Icehouse
16:40:59 ok so we've kind of discussed
16:41:05 couple more things,
16:41:17 i think we should at least get https://bugs.launchpad.net/cinder/+bug/1137908
16:41:17 * jgriffith drank his coffee too quickly
16:41:18 Launchpad bug 1137908 in cinder "volume glance metadata not included in backups." [Undecided,Confirmed]
16:41:23 and https://bugs.launchpad.net/cinder/+bug/1228223
16:41:27 fixed up for H
16:41:27 Launchpad bug 1228223 in cinder "cinder-backup does not work with multi backend enabled" [Undecided,Confirmed]
16:41:39 i can take on the metadata one
16:41:52 * jgriffith cries
16:41:53 since it sounds like people are already working the mb issues
16:42:08 I won't get back to the mb one guaranteed
16:42:11 dosaboy: who is working on the multi-backend issue?
16:42:20 i thought jdg was ;)
16:42:29 ok well I can take that on then
16:42:41 * thingee waits for bug update before he believes that
16:42:48 thingee: ha!
16:42:50 touché
16:42:55 :)
16:43:00 I'll look at the brick changes
16:43:06 well the metadata issue *should* be easy
16:43:07 and see if I can get something done today
16:43:07 hemna: which ones
16:43:18 right now we cannot restore a bootable vol
16:43:26 hemna: divide and conquer
16:43:27 dosaboy: both of those are set for rc1 now
16:43:29 the CONF issues raised here, even though it's the first I've heard of it.
16:43:38 hemna: update the bug with what you're looking at, just take one section at a time
16:43:41 was thinking of just shoving the metadata into the backend store
16:43:45 hemna: ie "initiator"
16:43:52 I'm a bit concerned about it at this late stage
16:43:58 hemna: no shit!
16:44:00 :)
16:44:02 one related point, afaik the encryption support is aiming to put keys in the db for backup
16:44:18 why not put them into the backend store like the metadata?
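A rough, hypothetical sketch of dosaboy's suggestion above for bug 1137908: store the volume's glance metadata alongside the backup in the backend store, so a bootable volume can be reconstructed on restore. Function and field names below are made up to show the shape of the idea, not the real cinder-backup code.

    import json

    def build_backup_record(volume, glance_metadata):
        # Serialize backup bookkeeping plus the glance metadata (e.g.
        # disk_format, container_format) that made the source volume bootable.
        return json.dumps({
            'volume_id': volume['id'],
            'size': volume['size'],
            'glance_metadata': dict(glance_metadata),
        })

    def restore_backup_record(serialized, set_glance_metadata):
        # set_glance_metadata stands in for whatever persistence call the
        # real fix would use (e.g. re-creating volume_glance_metadata rows).
        record = json.loads(serialized)
        for key, value in record.get('glance_metadata', {}).items():
            set_glance_metadata(record['volume_id'], key, value)
        return record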
16:44:19 hemna: but my option is fix it or revert all of your brick changes at this point
16:44:27 hemna: since it breaks existing functionality
16:44:40 hemna: don't think you want that :)
16:44:44 or anybody else
16:45:09 thingee: I am not gonna have time to do both those issues :)
16:45:10 dosaboy: I thought we squashed that
16:45:13 https://review.openstack.org/#/c/39573/
16:45:19 yeah sorry I missed those cons
16:45:21 Still open jgriffith
16:45:22 convs
16:45:43 just came up in a chat i was having today
16:46:04 Yeah, we've pushed back on similar things already and I think we should do so again
16:46:43 #action hemna is going to look into dup confs in brick https://bugs.launchpad.net/cinder/+bug/1229894
16:46:46 Launchpad bug 1229894 in cinder "brick has duplicate conf entries in iser and iscsi" [High,In progress]
16:46:48 anything else dosaboy?
16:46:54 guess not
16:46:58 offhand, automatically backing up encryption keys without a description of how you are doing that securely is kind of missing the point.
16:47:16 #topic bp/multi-attach
16:47:22 zhiyan: you're up
16:47:49 yes, i'm preparing the basic model change, and have three questions here I'd like to get your input on.
16:48:35 1. i plan to separate volume attachment out to a dedicated table, called volume_attachment.
16:48:50 http://paste.openstack.org/show/47508/ do you think it is ok?
16:49:22 2. how about the 'status' column of the 'volumes' table? it has three statuses, 'in-use', 'attaching', and 'detaching', that are hard to handle, since they conflict with 'attach_status' under a multi-attach situation. a volume may have two attachments, one in 'in-use' status and the other in 'attaching', so how should we give a general 'status' to the volume if we want to keep backward-compatibility?
16:49:23 zhiyan: so each attachment is its own db record?
16:49:31 Caitlin56: yes
16:49:40 3. currently i save 'attached_mode' in the volume's admin_metadata (the r/o-attach change did this). under multi-attach an attaching mode should be related to an attachment, not the volume, and because metadata is a flat key-value pair, i prepared to save it to volume metadata as a json string in the 'value' field like this: http://paste.openstack.org/show/47471/
16:50:03 zhiyan: I have some comments on this. I think we can move that to #openstack-cinder though
16:50:15 For back compatibility, 'attached' = one or more attachments, I think. 'detached' = no attachments
16:50:19 zhiyan: your 'in use' state is actually derived from the existence of attachment records.
16:50:40 DuncanT-: +1
16:50:47 thingee: agree, i think it probably needs more discussion..
16:50:51 I need to think about it a bit more.. but that sounds good so far
16:51:04 DuncanT-: how about 'attaching'?
16:51:37 zhiyan: Doesn't matter that much. Maybe 'attaching' for the first attaching state; once there is an attached one then that supersedes?
16:51:46 Multi-attach needs to end the overly stateful use of the volume status. Things like 'backing up' or 'attaching'.
16:52:02 zhiyan: what's the questin with #3?
16:52:06 question*
16:53:05 thingee: humm, you know in the r/o-attach change, i added 'attached_mode' to the attached volume to represent its access mode for the connection.
16:53:18 thingee: i store it in admin_metadata.
16:53:27 yes..
16:54:04 thingee: it's a flat key-value structure. but in a multi-attach situation, an 'attached_mode' should not belong only to the volume, but to a particular attaching session
16:54:21 so there are three things that need to be put into one k-v record within the admin_metadata table
16:54:57 #action thingee and whoever else will discuss with zhiyan about storing volume attachment information
16:54:59 those are 'attached_mode', 'attachment_id' (value), and the volume (key)
16:55:16 thingee: cool.
16:55:36 #topic PTL nominations
16:55:36 5 minute warning
16:55:40 jgriffith: you're up
16:55:43 thingee: this is a basic model change question for multi-attach
16:55:45 zhiyan: where's the other half of the key?
16:55:51 jgriffith: you got five mins :)
16:56:00 zhiyan: where's the other half of the key?
16:56:15 So I wanted to make sure that everybody knew they have until tomorrow to submit their nomination if they're interested in being Cinder PTL
16:56:41 and if anybody had any questions about the job, process etc
16:57:07 anyone have anything on that?
16:57:12 dress code?
16:57:21 jeans and t-shirts
16:57:25 :)
16:57:25 I thought we agreed avishay would be the new PTL
16:57:28 dosaboy: hawaiian shirts
16:57:34 :)
16:57:36 bswartz: Oh?
16:57:43 (just poking him if he's here)
16:57:45 avishay 2013 folks
16:57:51 I know he hates that
16:58:02 #action someone might run against jgriffith
16:58:03 * jgriffith feels bad everybody keeps recommending somebody else be PTL
16:58:06 but oh well
16:58:17 jgriffith: we all love you
16:58:22 bswartz: lies
16:58:24 :)
16:58:34 jgriffith: I think the fact that you have to ask for someone to run against you is a good sign ;)
16:58:47 I'll take that
16:58:52 anyway...
16:59:00 I just wanted to make it clear to everyone
16:59:07 competition for the position is healthy
16:59:26 if you're interested
16:59:34 I will say it's not *easy* though
16:59:48 that's about all I had
16:59:50 * hemna watches other people raise their hands
17:00:02 I thought being a PTL was a punishment -- I didn't realize people actually ran for the job
17:00:10 #endmeeting
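For reference, a minimal sketch of the model discussed in the multi-attach topic above: one record per attachment, with the legacy per-volume 'status' derived from the attachment records along the lines DuncanT- suggested. Column and status names are illustrative assumptions, not the schema from zhiyan's pastes.

    class VolumeAttachment(object):
        """One record per attachment, so a volume can be attached to
        several instances at once."""

        def __init__(self, volume_id, instance_uuid, mountpoint,
                     attach_status='attaching', attach_mode='rw'):
            self.volume_id = volume_id
            self.instance_uuid = instance_uuid
            self.mountpoint = mountpoint
            self.attach_status = attach_status  # 'attaching' / 'attached' / 'detaching'
            self.attach_mode = attach_mode      # per-attachment, instead of a
                                                # per-volume 'attached_mode' key

    def legacy_volume_status(attachments):
        # Derive the old single 'status' value for back-compatibility:
        # no attachments -> 'available'; any fully attached record
        # supersedes an in-flight 'attaching' one.
        if not attachments:
            return 'available'
        if any(a.attach_status == 'attached' for a in attachments):
            return 'in-use'
        return 'attaching'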