16:01:37 #startmeeting cinder
16:01:38 Meeting started Wed Oct 31 16:01:37 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:40 The meeting name has been set to 'cinder'
16:01:52 o/
16:02:05 e'lo, e'lo
16:02:25 hello
16:02:27 hey
16:02:27 hi~
16:02:35 There's everybody :)
16:02:39 cool..
16:02:45 Agenda: http://wiki.openstack.org/NovaVolumeMeetings
16:03:05 #topic removal of nova-volume code
16:03:07 hehe
16:03:11 :)
16:03:13 hello
16:03:24 I'll be here the first 30 mins
16:03:32 made it
16:03:35 So... I think there was some confusion (on my part included) as to how the deprecation of nova-vol was going to happen
16:03:40 rnirmal: excellent
16:04:00 For those that haven't seen, patches have been going into nova master to remove nova-vol
16:04:18 A few folks were surprised by this, so I thought I should make sure we brought it up today
16:04:31 So it's deprecated in Folsom, out in Grizzly
16:04:42 yah that makes sense
16:04:48 it's confusing that both are in now
16:04:49 I know this might cause some trouble for some folks (rnirmal)
16:04:54 for those of us that want to run off of trunk
16:05:08 any concerns/issues from other folks that might be using nova-vol/master in production?
16:05:22 well it didn't seem to be really deprecated in Folsom, right, since all patches were backported
16:05:48 rnirmal: Yep, agreed
16:05:50 We were expecting nova-volume to be bugfix-only for G, not pulled
16:06:17 So DuncanT, were you all planning to *use* nova-vol in Grizzly?
16:06:24 This change is definitely causing us to have to change plans for realignment with trunk
16:06:36 jgriffith: Possibly for a short period, yes
16:06:37 jgriffith: I don't think it was effectively communicated to the community
16:06:58 I thought n-vol was out in Grizzly
16:06:59 the process as such.
16:07:01 creiht: You are correct, it was not
16:07:06 though I feared that might happen, and was one of the reasons we switched to cinder asap
16:07:16 hemna: Yes, some thought it was, some thought it wasn't
16:07:20 heh ok
16:07:20 i thought n-vol would be there for 2, 3 more releases
16:07:22 creiht: :)
16:07:31 jgriffith: It isn't that we can't do things differently, just that I specifically asked, got an answer, and the answer was wrong
16:07:33 winston-d: No, but 1 maybe
16:07:51 1 more release, with no new features, was what we expected
16:07:58 DuncanT: Yeah, and I think that was more my fault than anybody else's
16:08:09 DuncanT: TBH, that's what I expected as well
16:08:36 So, here's my point
16:09:00 If this is going to be a huge deal for folks we can always go to the Nova team and state a case for leaving it in as deprecated
16:09:17 TBH I'm happy to see it gone, but I understand the *real* world works differently
16:09:47 jgriffith, TBH i'm happy too, but i'm not an operations guy.
16:10:00 So my biggest concern is how to avoid this happening again with other nova <-> cinder integration issues in future. Seeing it back in and deprecated according to the communicated schedule would be nice, but is a secondary concern
16:10:05 winston-d: yes, you and I are lucky
16:10:30 Seeing it stay in until after G-1 would be a good compromise, maybe?
16:10:37 DuncanT: Well, as far as Cinder... I'll learn from my mistakes :)
16:10:47 DuncanT: I can't promise what other folks/projects will do
16:11:02 yeah, agree with DuncanT on at least till G-1
16:11:15 DuncanT: Although TBH as I mentioned I was just as surprised to see it removed
16:11:28 DuncanT: rnirmal: Ok, I'll talk to vish about it
16:12:04 so if n-vol stays in G, will new features be added to it? no?
16:12:13 winston-d: no
16:12:20 'No' I believe is the right answer there
16:12:29 winston-d: it would be deprecated, just there for compat issues
16:12:38 yeah I don't think new features should be added
16:12:39 good then.
16:12:44 Keep in mind this introduces a significant issue for cinder migrations though
16:12:59 but "critical" bug fixes and security fixes are likely candidates to backport
16:13:14 at least in my opinion
16:13:22 creiht, agree
16:13:25 So one of the things that worked in our favor for migrating folsom-nova-vol --> cinder was everything was the *same*
16:13:36 That was part of the logic behind the sync up between the two
16:14:03 If we did the deprecated/leave-it-in Grizzly and moved forward with cinder
16:14:17 That is why I think post-G1 is a good compromise - they aren't carrying cruft for too long and certainly nowhere near a release, and those who were organised have already migrated
16:14:22 migrating from folsom/nova-vol to grizzly/cinder could be problematic
16:14:36 it already is problematic :)
16:14:58 creiht: yes, it is... but I suspect it could get significantly worse
16:15:09 oh certainly
16:15:15 well if it's deprecated.. I don't think we need to support a grizzly migration path
16:15:24 rnirmal: I wish that were true
16:15:28 rnirmal: but I don't think it is
16:15:36 hmm
16:15:52 perhaps I'm wrong on that
16:16:05 But regardless it doesn't seem like it would be the *right* thing to do
16:16:17 and I'd REALLY like to put the issues associated with migrations behind us
16:16:28 yeah you still have to have a migration path
16:16:29 the reality is that if/when bugs came in for it we'd have to deal with them
16:16:43 jgriffith: that's why I said only "critical" bugs
16:16:49 So... it sounds like folks would like to see nova-vol in until G1 at least?
16:16:51 like attach doesn't work any more
16:17:03 creiht: I have a patch in to fix that :)
16:17:12 * jgriffith hints to folks to check reviews :)
16:17:19 oh I was giving that as just an example :)
16:17:28 didn't realize it was actually broken
16:17:32 creiht: I know, but it was a perfect segue :)
16:17:39 creiht: OHHH
16:17:45 creiht: never mind then, nothing to see here
16:17:55 jgriffith: perhaps if you could find someone to volunteer to be the steward of nova-volume?
16:17:56 creiht: It was actually non-cinder related
16:18:10 creiht: interesting idea
16:18:21 creiht: So there would be two sides to it
16:18:27 1. finding a volunteer
16:18:36 2. getting together with the nova team to agree
16:18:40 jgriffith: because I do agree that it would be a lot on your plate to try to manage both cinder and nova-volume
16:18:54 Is the Nova team wanting to nuke it now?
16:18:59 It's not even that so much as it just *sucks* :)
16:19:09 hemna: They've already started
16:19:23 jgriffith: well my thought is that if someone really needs it still, they will want to make sure it still works :)
16:19:37 creiht: agreed
16:19:38 if nobody stands up, then maybe that means that they can remove it
16:19:50 creiht: true.. whether that needs to happen in the open or just have them manage it internally
16:19:59 Wondering if we should take a vote?
16:20:13 ok
16:20:14 I need to have a wider discussion here to find out how concerned we are... can I get 24 hours or so to do that please?
16:20:22 DuncanT: absolutely
16:20:34 same here
16:20:39 So we can sit on this for the moment and wait for some feedback?
16:20:43 That's cool by me
16:20:50 +1
16:20:56 Do what you need to and hit me up with your cases
16:21:07 Cool, cheers
16:21:10 Ok... shall we move on?
16:21:20 i can collect some feedback from local cloud vendors
16:21:38 I'm not making any promises by the way
16:21:53 Just saying if it's a major issue we can look at fixing it
16:21:54 jgriffith: :)
16:22:52 Ok.. next topic
16:23:07 #topic blueprint updates
16:23:23 jgriffith: yeah I still need to get those in... :)
16:23:36 for the api stuff
16:23:52 creiht: :)
16:24:19 I was hoping to have the G1 stuff at minimum all set up by end of week or Monday at latest
16:24:24 No progress from us either
16:24:35 DuncanT: creiht: Does end of this week seem doable?
16:25:03 jgriffith: yeah I think I can try to focus on that Thurs/Fri
16:25:14 I don't think it's the end of the world, but it would just be nice to have a reasonable road-map going sooner rather than later
16:25:27 yeah I agree
16:25:29 agree
16:25:32 jgriffith, we have entered the FC Channel support blueprint and are meeting again tomorrow with the list of interested parties (Brocade, IBM, ...)
16:25:36 stupid question, is G1 the first sprint of Grizzly dev (month from now)?
16:25:38 I'll see what we can do. Seems like a reasonable aim
16:25:41 gotta run, sorry... will check the backlog later
16:25:46 hemna: yes
16:25:49 thnx
16:25:51 creiht: No worries... catch ya later
16:26:21 grizzly release schedule: http://wiki.openstack.org/GrizzlyReleaseSchedule
16:26:32 thank you
16:27:07 Ok, so just wanted to throw that out as a reminder. If you have blueprints you know you're going to want, try to get them in and we should try and get them targeted by Monday
16:27:21 jgriffith: how are we prioritizing items for G1?
16:27:27 End of the week for blueprints looks far more reasonable with that on my screen :-)
16:27:32 just based on if someone is going to get it done by g1
16:27:41 DuncanT: :)
16:28:04 rnirmal: So far it's been completely subjective... I look at the BP and mark it
16:28:08 :)
16:28:13 ah ok
16:28:20 rnirmal: Right now that's fine
16:28:33 rnirmal: If folks start getting stuff posted that will change
16:29:07 hemna: I have a meeting this afternoon to talk about your FC blueprint
16:29:13 cool
16:29:21 hemna: That's going to need some work/detail added
16:29:23 cool... would be better to get a clearer picture of what's being worked on looking at the blueprints
16:29:29 the 3Par driver blueprint should be moved to G2
16:29:31 rnirmal: agreed
16:29:46 hemna: ok
16:30:04 It's basically done, but there are external deps I'm still working on ironing out.
16:30:13 anybody else have bp's they're active on that they'd like to see targeted to a specific release?
16:31:17 Will let you know by the end of the week
16:31:20 hemna: I've updated the BP and assigned you. Make sure you keep things updated
16:31:22 jgriffith: we can talk about schedule for the FC this afternoon, thinking G2 or G3
16:31:27 thanks, will do
16:31:28 DuncanT: thanks
16:31:33 I'll add more details to the BP
16:31:41 kmartin: Sure
16:31:57 speaking of BP's... no bswartz?
16:32:03 anybody else from NetApp around?
16:32:05 i'd like to see if vol-type-scheduler can make it in G1. and volume RPC API in G1 too.
16:32:18 winston-d: Yes! I agree on both
16:32:32 winston-d: Just need to get the reviews in
16:32:44 jgriffith, :)
16:32:49 Ok, so we don't need to talk NFS today?
16:33:07 Or do folks have some input on it?
16:33:19 I still haven't seen anything on the ML either
16:33:25 NFS part of cinder???
16:33:38 hemna: Yes, that's the proposal
16:34:08 alright, I'm going to move off of that subject :)
16:34:28 I didn't get the volume-type wiki done but should have it in the next day or two
16:34:33 .hiub #openstack-ci
16:34:40 I'll let folks know when it's up
16:34:57 One other thing I wanted to ask....
16:34:58 ok thanks, I'm interested in that
16:35:24 jdurgin: or thingee or anybody that's worked on the write_image to volume code
16:35:45 I've worked with a couple of folks trying to get this working and there was a lot of confusion
16:36:10 I'm wondering if it would be possible for somebody that has gotten it to work to write up a short wiki on the steps on how to do it
16:36:13 ?
16:36:53 * jgriffith hears crickets chirping...
16:36:57 I think we've probably got somebody who can make some input on that, sure
16:37:23 DuncanT: that would be great, anything would be useful
16:37:24 I think we've had to add patches to cope with QCOW images too, which we should push out some time soon
16:37:29 sorry, write_image? clone image or copy image to volume?
16:37:40 DuncanT: ahhh.. that may be why it wouldn't work for me
16:37:54 thingee: my bad.. copy image to volume
16:37:59 gotcha
16:38:01 thingee: and how to boot it
16:38:02 jgriffith: Yeah, with the default code you need a 'raw' image
16:38:16 DuncanT: Well that's why mine kept bombing out most likely
16:38:21 :)
16:38:26 Got some instructions for converting a standard qcow image to raw somewhere, will send them on
16:38:33 Ok, if we could get a wiki on this it would be immensely helpful for folks
16:38:41 jgriffith: ok
16:39:02 marking it
16:39:43 I propose adding a section here: http://wiki.openstack.org/Cinder
16:39:59 Maybe a Features/How-To section
16:40:13 Then we can port them to docs.openstack.org at some point
16:40:19 s/we/I/
16:40:21 :)
16:40:26 sounds good
16:40:32 awesome!
16:40:53 good point, definitely helpful for docs.
16:41:05 yeah, we really need to fix that problem at some poing :)
16:41:22 point!
16:41:32 #topic bug management
16:41:56 Just a reminder, especially for folks that want to get involved but aren't sure how/where
16:42:05 check out the bugs list and grab one :)
16:42:29 Most of the valid ones are assigned/in-progress so that's good
16:42:48 Let's just make sure we don't get behind the curve
16:43:05 rnirmal: I'm waiting on a couple for the driver layout refactor
16:43:26 rnirmal: Not sure if you saw my comments on that last patch or not
16:43:37 jgriffith: yeah did see that
16:43:57 everybody else: https://review.openstack.org/#/c/15038/
16:44:05 rnirmal: Does my proposal suck?
16:44:06 I'll update that today... so good with cinder/volume/driver/*
16:44:14 rnirmal: cool!
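[The qcow-to-raw conversion DuncanT mentions above (the default copy-image-to-volume path only handles raw images) is typically done with `qemu-img` from the qemu-utils package. A minimal sketch, using hypothetical filenames:]

```shell
# Hypothetical filenames; Folsom-era copy-image-to-volume expects a raw
# image, so convert qcow2 -> raw before uploading the image to Glance.
SRC=my-image.qcow2
DST="${SRC%.qcow2}.raw"

# Only attempt the conversion if the tool and the input file actually exist.
if command -v qemu-img >/dev/null 2>&1 && [ -f "$SRC" ]; then
    qemu-img convert -f qcow2 -O raw "$SRC" "$DST"
    qemu-img info "$DST"   # should report 'file format: raw'
fi
```

[The resulting raw file can then be uploaded to Glance with `disk_format=raw` before being copied to a volume.]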
16:44:29 yeah that just breaks backward compatibility in config for the san drivers
16:44:42 actually for all the other ones too
16:44:45 rnirmal: Unless we go volume/driver/san right?
16:44:46 if that's ok
16:44:59 except for the ones in driver.py already
16:44:59 Hmmm?
16:45:24 jgriffith: talking about moving all the other drivers under cinder/volume/driver/ too right?
16:45:44 this leads to something jdurgin mentioned glance was doing
16:45:57 rnirmal: yes
16:46:05 not depending on the python module structure to configure the drivers
16:46:14 rnirmal: they all inherit from the base VolumeDriver don't they?
16:46:21 jgriffith: yes
16:46:51 So I was thinking this layout would work by replacing driver.py with an __init__ in volume/driver/
16:46:56 but maybe I'm wrong
16:47:05 Also interested in what you and jdurgin talked about
16:47:06 jgriffith: yes it works for the ones already in driver.py
16:47:27 ahh.. ok, just the ones that aren't there. Hmm
16:47:48 Well, IMO it's important enough to require a change at some point
16:47:59 take for example solidfire... currently it is volume_driver=cinder.volume.solidfire.SolidFire
16:48:16 it will become volume_driver=cinder.volume.driver.solidfire.SolidFire
16:48:40 Yeah, and I have NO issue with that changing. Unless the Glance model is something we could adopt
16:48:50 yeah that's what I was thinking
16:49:02 rnirmal: alright, I'll stop bugging you and let you sort it out :)
16:49:03 i guess that's the pain we have to endure sooner or later?
16:49:04 if we are changing it.. might as well move to a better model than module names
16:49:15 rnirmal: +1
16:49:27 * jgriffith gets out of the way
16:49:36 winston-d: yes.
16:50:07 winston-d: Yup, so we should do it *right* when we do it for sure
16:50:27 totally agree.
16:50:31 cool
16:50:49 alright, I have a hard stop coming up here so I'm going to move to the next topic
16:50:56 #topic status updates
16:51:05 docs?
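[The backward-compatibility break discussed above is the configured import path changing. Using the solidfire example exactly as given in the discussion, an operator's cinder.conf would change roughly like this (a sketch; the surrounding option names assume the standard cinder.conf format):]

```ini
# cinder.conf -- before the driver layout refactor:
volume_driver = cinder.volume.solidfire.SolidFire

# after drivers move under cinder/volume/driver/:
volume_driver = cinder.volume.driver.solidfire.SolidFire
```

[This is also why the "Glance model" came up: configuring drivers by short name instead of full Python module path would decouple operator config from the source tree layout.]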
16:51:15 rnirmal: yes, docs :)
16:51:37 jgriffith: I suppose that's a status update :P
16:51:45 haha
16:51:56 sadly the status is we haven't really made any progress there and we need to
16:52:37 any volunteers?
16:52:43 chirp, chirp, chirp....
16:53:09 I'll make it easy, even if you just want to write up a short google doc on your features etc
16:53:19 I'll put it into the official docs repo
16:53:44 i'll do that for scheduler
16:53:53 winston-d FTW!!!!!
16:54:03 volume types, extra specs
16:54:11 I have already foolishly volunteered to do a howto for BfV
16:54:29 winston-d: I'll get the volume_types/extra-specs via the wiki I promised last week
16:54:37 winston-d: Your plate is getting full I noticed :)
16:54:51 DuncanT: Yes, you're now officially on the hook :)
16:55:02 What is left to doc?
16:55:05 * jgriffith is happier now
16:55:09 hemna: EVERYTHING
16:55:12 :(
16:55:12 lol
16:55:40 hemna: If we can get the items we mentioned here though it will go a long way
16:55:47 jgriffith: should this be coordinated with annegentle to get an official cinder doc?
16:55:48 hemna: One other thing would be install/config
16:55:51 so do we have a doc structure, like how many chapters, and what are they?
16:56:02 winston-d: that's what I was getting at
16:56:11 rnirmal: yes, but I'd like to just get something rough and then get her help with making it official
16:56:23 she's already been very helpful and offered to work with us multiple times
16:56:30 I just haven't had any content to provide :(
16:56:33 I'd volunteer, but I haven't been plugged in long enough yet
16:56:42 * hemna is still getting up to speed
16:56:42 jgriffith: understood, cool.
16:56:56 Greetings all!
16:57:14 So everybody take a look at docs.openstack.org
16:57:27 That will give you an idea of what we *should* have, versus what we do have
16:57:56 specific link on that site?
16:58:05 The Developer docs are in the cinder source... cinder/doc
16:58:20 CaptTofu: That is the link: http://docs.openstack.org/
16:58:32 CaptTofu, the cinder meeting is still going on
16:58:39 i can provide some installation docs based on Folsom.
16:58:41 oh, so very sorry.
16:58:45 haha
16:59:14 winston-d: great, the biggest thing folks miss right now is /etc/tgt/conf.d/
16:59:30 winston-d: and set_path was broken in ubuntu packaging
16:59:39 and on that note....
16:59:44 we should give up the room
16:59:48 Thanks everybody!
16:59:56 #endmeeting cinder
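[The /etc/tgt/conf.d/ step jgriffith calls out near the end is usually a one-line include so that tgtd picks up the per-volume target definitions cinder-volume writes out. A sketch, assuming default packaging paths, which vary by distro:]

```
# /etc/tgt/conf.d/cinder.conf  (filename and volumes_dir vary by packaging)
# Make tgtd load the iSCSI target definitions that cinder-volume generates:
include /var/lib/cinder/volumes/*
```

[Without this include (or with a wrong `volumes_dir`/`iscsi_helper` setting in cinder.conf), volume creation appears to work but attach fails because tgtd never exports the target.]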