16:01:41 #startmeeting Cinder
16:01:42 Meeting started Wed Feb 6 16:01:41 2013 UTC. The chair is DuncanT. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:45 The meeting name has been set to 'cinder'
16:01:48 Lo all
16:01:53 hi
16:01:53 hi
16:01:54 :)
16:01:54 hi
16:01:56 hi
16:01:59 hi
16:02:06 o/
16:02:13 hi
16:02:55 JGriffith is away, so he's asked me to chair. There's the bare bones of an agenda at http://wiki.openstack.org/CinderMeetings as usual, but PM me or shout up if there's something you want to discuss
16:03:37 So I saw avishay's comments on the issues he's having with FC
16:03:53 I'm going to try and see if I can dig up a QLogic HBA today at work and try to reproduce his issues.
16:04:13 FC still isn't merged, correct?
16:04:18 correct
16:04:26 (sorry, been away for a bit, still playing catchup)
16:04:39 it's been getting a good amount of reviews from the nova guys lately though
16:04:51 Sounds like testing is progressing anyway, which is great
16:04:56 yup
16:05:04 Anything else you need?
16:05:13 don't think so
16:05:34 Good stuff
16:06:44 The only blueprint I'm involved with is volume backup... we're fixing uuids and iscsi attach, and a new review will be up shortly. I don't understand the comments from ronenkat, but I'm hoping they'll get back to me with more detail
16:07:52 need more eyes for the review?
16:08:15 DuncanT: is volume backup working completely now?
16:08:41 bswartz: Multi-node is broken in some cases until the iscsi attach change turns up
16:08:43 DuncanT: some of the questions I've been asking about coverage with the backup manager seemed to get missed last time. Francis was saying it has 100%, but I just can't see that with the coverage report.
16:09:08 DuncanT: cool
16:09:14 thingee: He's going to mail you about that... definitely not seeing what you're seeing
16:09:33 thingee: We've re-run the coverage report on fresh devstacks with the branch and we're getting very different coverage reports
16:09:43 thingee: So not sure why the discrepancy
16:10:10 hemna: More eyes are always good, particularly as we think the next patch will be pretty much done, except possibly some testing issues
16:10:11 it must be something weird in my env. I'll try a fresh repo this time instead of just a new venv
16:10:19 smulcahy, DuncanT ^
16:10:39 thingee: Thanks for that
16:11:07 coolio
16:11:28 Any other blueprints to comment on? Multi-backend scheduler?
16:11:49 thingee: thanks - yeah, maybe try from scratch, because we're not seeing your coverage results.
16:11:50 DuncanT: o/
16:11:50 yup
16:12:06 thingee: Yup
16:12:21 DuncanT: cinderclient v2 is up for review.
16:12:45 DuncanT: jgriffith mentioned he wanted args to be consistent. I'll make a comment on the review and switch back to wip
16:12:45 Ooo, hadn't spotted that
16:12:46 i have some comments on the multi back-end volume service patch, but haven't gone through the whole patch yet. i'll talk to hub_cap offline
16:12:58 and so is the 'NAS as a separate service' code
16:13:08 DuncanT: I'm going to deprecate the other arg style for a release so everyone is happy :)
16:13:31 thingee: People are never happy
16:13:38 DuncanT: I'm happy
16:14:04 We've a general plea for reviews... quite a few open, and quite a few review comments with no response
16:14:05 DuncanT: docs are coming along. not too worried about "feature freeze" deadline with 'em ;)
16:14:30 https://review.openstack.org/#/q/status:open+project:openstack/cinder,n,z
16:14:53 DuncanT: v1 doc is just about done. will be doing that more and maybe starting v2 over the weekend
16:15:01 https://review.openstack.org/#/q/status:open+project:openstack/python-cinderclient,n,z
16:15:21 thingee: Good stuff. Will take a look at the v2 client stuff asap
16:16:05 DuncanT: is there any way we, as reviewers, can prioritize what to review?
16:16:31 bswartz: yea hang on
16:16:45 bswartz: https://launchpad.net/cinder/+milestone/grizzly-3
16:16:47 bswartz: i think those reviews that are targeted to G3 should come first
16:16:52 bswartz: whatever is in code review
16:17:00 bswartz: I tend to go with stuff I've previously commented on that has been updated, followed by G3 stuff
16:17:17 okay, all good suggestions
16:17:26 bswartz: I also encourage people to shout up when they've something they feel is being ignored
16:17:42 it would be ideal if there was a way to minimize overlapping review work so everything gets equal coverage, but perhaps that's not possible
16:17:58 DuncanT: +1 that works too
16:18:18 what about reviews for bug fixes? they are not targeted, but have to go in, right
16:18:45 xyang_: technically, bugfixes could go in after G-3
16:19:10 bswartz: ok
16:19:19 xyang_: you can ping core devs available in #openstack-cinder too
16:19:26 or #openstack-dev
16:19:32 that's not a reason to ignore them, but they feel lower priority to me than new features
16:19:54 DuncanT: what else?
16:20:02 thingee: ok
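For anyone who'd rather script the triage than click through the dashboard URLs above, something like the following should work. This is a minimal sketch, assuming this Gerrit instance exposes the standard REST /changes/ endpoint; the query string matches the dashboard links DuncanT posted.

    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def open_changes(project, limit=50):
        """Fetch open changes for a project via Gerrit's REST API."""
        resp = requests.get(GERRIT + "/changes/",
                            params={"q": "status:open project:" + project,
                                    "n": str(limit)})
        resp.raise_for_status()
        body = resp.text
        # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; strip it.
        if body.startswith(")]}'"):
            body = body.split("\n", 1)[1]
        return json.loads(body)

    # Sort by last update so reviews you've already commented on float up
    # once they get a new patch set, per DuncanT's prioritization suggestion.
    for change in sorted(open_changes("openstack/cinder"),
                         key=lambda c: c["updated"], reverse=True):
        print("%s  %s  (%s)" % (change["_number"], change["subject"],
                                change["updated"]))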
16:20:10 #topic Policy on what's required for a new driver
16:20:10 what about bugs which impact BP devs (volume_create issues)?
16:20:44 Yada: If you think something needs bumping up the priority, your best bet is to poke people in #openstack-cinder
16:21:20 Yada: Most of the core team hang about in there, and are more likely to be responsive to people being keen
16:21:21 We will, because it may block our cinder BP approval: currently not working any more (was ok days ago) ;-)
16:22:08 Working on it to dig in and provide as much info as possible
16:22:34 Yada: Will follow up with you in the cinder room after the meeting if you want
16:22:46 no worries
16:23:12 So John posted something to the openstack mailing list a week and a bit ago about minimum features in new drivers
16:23:54 I can't find it right now, but the gist was that new drivers should be at least as functional as the LVM one is at time of merging, unless they explain why they can't be
16:24:12 DuncanT: I haven't been able to find it either. Maybe that's why he feels he got no reply :P
16:24:21 Since nobody replied, this is likely to become policy unless somebody complains sharpish
16:24:33 thingee: I did find it earlier... it was a reply down a thread
16:25:09 Ah ha, in the thread "[Openstack] List of Cinder compatible devices"
16:25:21 "Having to go through and determine what feature is or is not supported per driver is EXACTLY what I want to avoid. If we go down the path of building a matrix and allowing partial integration it's going to create a huge mess and IMO the user experience is going to suffer greatly. Of course a driver can do more than what's on the list, but I think this is the minimum requirement and I've been pushing back on submissions based o…"
16:25:24 the problem is partly that his sentences come at the very bottom of the mail
16:25:39 Yeah, I missed it as well... maybe he could add it to http://wiki.openstack.org/Cinder
16:25:46 Yep, he replied to an email from Xiazhihui (Hashui, IT)
16:25:49 DuncanT: so that policy implies that as new features are added to the LVM driver, all of the other drivers have to catch up eventually -- can we say something about how quickly that needs to happen?
16:26:16 bswartz: I'd like a statement about that too, but not sure how to word it
16:26:39 it would also be useful to list said features
16:26:43 bswartz: Certainly I'd like a policy where we can threaten to drop unmaintained drivers
16:26:46 JM1: +1
16:26:58 JM1: +1
16:27:52 JM1: Any such list becomes stale if it is external to the code, but certainly a list of 'as of xxx date, the minimum feature list is...'
16:28:04 Anybody got a problem with the concept?
16:28:19 clearly list each feature that needs to be implemented and add it to http://wiki.openstack.org/Cinder, since all new developers tend to start there
16:28:19 nope, sounds good to me
16:28:31 kmartin: +1
16:28:40 +1
16:28:43 kmartin: +1, good point
16:28:53 Based on my understanding and a chat with John, it is: Volume create | delete | attach | detach + Snapshot create | delete + Create Volume from Snapshot
16:28:58 also, if a driver needs to be updated to comply with a new feature, does that update count as a bugfix, or does it need a blueprint, and milestone, etc
16:29:11 bswartz: bugfix in general I think
16:29:14 how about volume to/from image?
16:29:36 there's a generic function now in driver.py
16:29:42 JM1: That and clone are there now in LVM, so I guess they are needed for a new driver
16:29:43 Each new release, the list of features should be revisited and updated
16:29:54 I'm testing that function, but have to override something to get it to work
16:30:54 JM1: some of the new features should have a little lag time for the drivers to be updated, like the next release.
16:31:35 Next milestone release or next full release?
16:32:13 Next full release; some features are not completed until the last sprint and it's hard for all the drivers to get updated that quickly
16:32:32 kmartin: +1
16:32:50 And what about new BPs? It would be "fair" if the same rules applied to all and didn't block BP validation IMHO
16:33:07 Fair enough, though I think we should strongly encourage quicker updates where we can
16:33:42 Yada: I don't understand the question, sorry
16:34:08 DuncanT: I agree, strongly encouraged but not required
16:34:09 I know, speaking for NetApp, we have development schedules, and it's not always easy to make time for stuff that comes up at the last minute. However, if driver changes to comply with new features count as bugfixes, then that relaxes the deadline to get them done.
16:34:44 I mean: if all agree on the minimum cinder features supported, then can we apply the same to new BPs instead of asking new BPs to commit to all the features I listed above
16:34:58 any new feature needs legal approval too; that could take very long
16:35:30 Have to remember some of these features may require legal approval from the bigger companies... and we all know how fast that happens
16:35:47 kmartin: I work for HP too, I know your pain ;-)
16:35:54 xyang_: :) beat me to it
16:36:17 kmartin: :)
16:36:26 Yada: That makes sense, though sometimes it is a matter of taste... we can discuss exceptions at these meetings
16:36:47 Right, it sounds like we have general agreement. Any volunteers to draft it on the wiki?
16:37:44 Anybody at all?
16:38:09 Hell, I'll do it
16:38:13 :-)
16:38:33 #action kmartin to draft new driver policy for the wiki
16:38:34 phew
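Pulling the thread above together, the proposed minimum roughly maps onto the driver interface as follows. This is a rough sketch only: class and method names are modeled on cinder/volume/driver.py as it stood at the time, the exact signatures are an assumption, and the authoritative list is whatever kmartin drafts on the wiki.

    from cinder.volume import driver


    class MinimalDriver(driver.VolumeDriver):
        """Skeleton of the proposed minimum feature set for new drivers."""

        # Volume create / delete
        def create_volume(self, volume):
            raise NotImplementedError()

        def delete_volume(self, volume):
            raise NotImplementedError()

        # Attach / detach: export handling plus the connection handshake
        def create_export(self, context, volume):
            raise NotImplementedError()

        def ensure_export(self, context, volume):
            raise NotImplementedError()

        def remove_export(self, context, volume):
            raise NotImplementedError()

        def initialize_connection(self, volume, connector):
            raise NotImplementedError()

        def terminate_connection(self, volume, connector, **kwargs):
            raise NotImplementedError()

        # Snapshot create / delete, and create volume from snapshot
        def create_snapshot(self, snapshot):
            raise NotImplementedError()

        def delete_snapshot(self, snapshot):
            raise NotImplementedError()

        def create_volume_from_snapshot(self, volume, snapshot):
            raise NotImplementedError()

        # Clone and image copy: per the exchange above, these are in the
        # LVM driver now, so presumably on the minimum list as well.
        def create_cloned_volume(self, volume, src_vref):
            raise NotImplementedError()

        def copy_image_to_volume(self, context, volume, image_service,
                                 image_id):
            raise NotImplementedError()

        def copy_volume_to_image(self, context, volume, image_service,
                                 image_meta):
            raise NotImplementedError()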
16:38:57 So the last item on our agenda is...
16:39:03 #topic AZs (again)
16:39:11 sorry... but legal ain't my problem :)
16:39:21 he lives
16:39:26 :)
16:39:36 DuncanT: are we educated in this topic now?
16:39:45 thingee: I don't think so, no
16:40:19 make that an action item :P... someone should take lead and get that figured out
16:40:54 * jgriffith pretends he's not back yet :)
16:41:06 We have our own ideas, but it comes down to 'There is an AZ field in several parts of our API. What do we want it to mean?'
16:41:13 I'll look at getting something documented
16:41:17 DuncanT: not that simple
16:41:26 DuncanT: It actually has a distinct meaning
16:41:34 DuncanT: Particularly in the context of EC2
16:41:47 DuncanT: You can only attach volumes to instances in the same AZ
16:41:53 so who raised this issue?
16:41:59 winston-d_: Me
16:42:17 jgriffith: we assigned all the action items to you while you were gone
16:42:23 haha :)
16:42:30 jgriffith: I'm not sure of the details of the EC2 API
16:42:30 jk
16:42:32 then you should educate us, at least with a problem statement
16:42:50 winston-d_: who, me?
16:43:02 winston-d_: I'm not the one who asked what they were :)
16:43:10 Hi all, sorry I'm (very) late
16:43:13 DuncanT: ^^
16:43:19 winston-d_: Ohh... :)
16:43:40 winston-d_: The problem is that the fields in the API currently don't do much in relation to the same fields in the nova api
16:43:57 They have no clear meaning, and inconsistent behaviour
16:44:04 avishay: hi. can I talk to you after the meeting? I'm merging with your changes but have issues
16:44:15 xyang_: of course
16:44:17 DuncanT: hmmm... interesting
16:44:46 So I'll look at getting this documented and cleared up a bit
16:44:49 what kind of consistency are you looking for?
16:44:56 Nova seems to treat them as a specific specialisation of aggregates that the scheduler treats specially
16:45:08 DuncanT: That's new :(
16:45:17 DuncanT: so we have to play some catch-up
16:45:44 The disparity you see currently is because they've moved forward with aggregates and such
16:45:51 winston-d_: A definition of what they mean, and what the limitations are (e.g. can an instance in az-xyz mount a volume in az-abc?)
16:46:48 (I hope the answer to that ends up being 'no', but currently it isn't enforced (a draft patch from a colleague to fix that is in the queue) and no two people seem to entirely agree)
16:46:58 the 2nd part of the question is controlled by the Nova API or the Cinder API?
16:47:11 winston-d_: Both
16:47:27 winston-d_: Can you clone a volume between AZs? (pure cinder)
16:47:41 winston-d_: Attach is a decision for both
16:47:47 winston-d_: There are other questions
16:48:27 #action jgriffith and DuncanT to look at documenting availability zones
16:48:46 :)
16:48:49 how do OPS people think about it? do they think nova/cinder should allow such an action?
16:49:03 since AZ is defined by them
16:49:04 I have no idea which providers are using availability zones. We are; I'm pretty sure Rackspace aren't
16:49:20 So the best advice I can provide for a quick overview is look at AWS
16:49:21 AWS did
16:49:35 winston-d_: There is no definition of an AZ, so different people have totally different models in mind
16:49:35 That's what it was initially modeled after
16:49:53 DuncanT: that is the real problem i guess
16:49:56 winston-d_: Cells have removed some of the confusion I think
16:50:10 DuncanT: I'd argue that cells introduced more confusion, but anyway :)
16:50:21 DuncanT: but cells are transparent to the end user (aka the API)
16:50:28 winston-d_: We (HP) don't want cross-AZ mounting
16:50:36 OK... time out
16:50:44 No sense beating on this right now
16:50:49 winston-d_: Indeed, so pan-cell mounting is an 'it should just work'
16:50:56 DuncanT: and jgriffith will flesh this out and doc it for folks
16:51:28 AWS has cells too? or is it a new concept by us folks?
16:51:45 rushiagr: Unknown, since they aren't user visible
16:51:50 rushiagr: no, we don't know, since it's transparent to end users
16:51:50 Well never mind then... carry on :)
16:52:10 jgriffith: :)
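To make the open question concrete: the EC2-style rule jgriffith describes would amount to a check like this on attach. This is purely hypothetical illustration, not the colleague's draft patch mentioned above; the names are invented for the example.

    class InvalidAvailabilityZone(Exception):
        pass


    def check_attach_az(instance_az, volume_az):
        # EC2 semantics, per the discussion: a volume can only be attached
        # to an instance in the same availability zone.
        if instance_az != volume_az:
            raise InvalidAvailabilityZone(
                "volume in %s cannot attach to instance in %s"
                % (volume_az, instance_az))


    check_attach_az("az-abc", "az-abc")   # fine
    check_attach_az("az-xyz", "az-abc")   # raises, if the EC2 rule is adopted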
16:52:22 So, any other business? We've ten minutes left
16:52:32 #topic any other business
16:52:50 jgriffith?
16:53:10 I don't have much, but I haven't gone through everything you guys covered yet
16:53:21 My main thing is the usual plea for reviews :)
16:53:30 We're getting a pretty good backlog again
16:53:40 Just a note on stable/folsom
16:53:57 Those patches need to be reviewed/approved by the OSLO core team
16:54:51 jgriffith: Status of blueprint: NAS as a separate service. WIP submitted.
16:55:09 rushiagr: Saw that... thanks!
16:55:13 With core team discussions coming up, I expect people will be extra keen on reviews ;-)
16:55:23 rushiagr: It helps a TON to have something in progress for folks to work on
16:55:43 rushiagr: Got a link to that?
16:56:03 DuncanT: https://review.openstack.org/#/c/21290/
16:56:20 Cheers
16:56:37 I'd like to bring up a topic to start thinking about - a framework for certifying hardware
16:57:07 jgriffith: I know, it's better than having a multi-thousand-line code drop at the last moment
16:57:14 wow, big topic
16:57:41 avishay: rackspace is working on something like that
16:57:45 The Nova FC code doesn't happen to work with my HBA. We'll try to fix that. But there should be a way to certify hardware (HBAs, controllers, etc.).
16:57:52 are you in touch with them?
16:58:09 bswartz: No. I'd appreciate any pointers.
16:58:21 avishay: I will get some and get back to you
16:58:28 bswartz: thanks a lot
16:58:51 It's called Alamo
16:58:53 Would be good to hear about those plans too
16:59:00 I'd rather focus first on black-box driver qualification
16:59:25 But I agree... if we're going down the paths folks seem to be taking us these days, hardware may start to become an issue
16:59:25 Alamo has a driver+hardware qualification suite
16:59:28 jgriffith: That too
17:00:27 bswartz: Alamo doesn't cover unreleased code though. I think avishay is asking about that
17:00:30 jgriffith: sounds like something to discuss at the summit
17:00:39 xyang_: not necessarily
17:00:49 I think it's reasonable to say that the cinder core team will NOT worry about hardware qualification, and we will leave that to distros and vendors who support this stuff?
17:01:04 avishay: +1
17:01:10 xyang_: but there should be some "official" test suite that vendors can run to make sure their HW works with OpenStack
17:01:10 +1million
17:01:24 bswartz: I would agree up to a point
17:01:44 bswartz: since we're going to introduce things like FC we have to be slightly more pro-active I think
17:01:49 The trouble with 'official tests' is they turn into 'it passes the test suite, cinder must be broken'
17:02:05 TBH to me that just means... "supported HBA/driver list"
17:02:17 bswartz: but I would agree, that should fall to the vendors who want/use FC
17:02:20 avishay: good idea
17:02:25 Supported by whom?
17:02:29 bswartz: else from my perspective, take FC out
17:02:38 DuncanT: Supported is a bad choice of words
17:02:50 :-)
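As a sketch of what the black-box driver qualification jgriffith mentions could look like: walk a configured backend through the minimum feature list from earlier and record a per-operation result. All names here are illustrative assumptions; the FakeDriver stands in for real hardware so the sketch runs on its own.

    class FakeDriver(object):
        """Toy in-memory backend standing in for real hardware."""
        def __init__(self):
            self.volumes = set()
            self.snapshots = set()

        def create_volume(self, volume):
            self.volumes.add(volume["name"])

        def delete_volume(self, volume):
            self.volumes.remove(volume["name"])

        def create_snapshot(self, snapshot):
            self.snapshots.add(snapshot["name"])

        def delete_snapshot(self, snapshot):
            self.snapshots.remove(snapshot["name"])


    def qualify(driver, volume, snapshot):
        """Drive the backend through the minimum feature list, black-box style."""
        steps = [
            ("create_volume", lambda: driver.create_volume(volume)),
            ("create_snapshot", lambda: driver.create_snapshot(snapshot)),
            ("delete_snapshot", lambda: driver.delete_snapshot(snapshot)),
            ("delete_volume", lambda: driver.delete_volume(volume)),
        ]
        results = {}
        for name, step in steps:
            try:
                step()
                results[name] = "PASS"
            except Exception as exc:  # black-box: any failure is just a FAIL
                results[name] = "FAIL: %s" % exc
        return results


    print(qualify(FakeDriver(), {"name": "vol1"}, {"name": "snap1"}))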
17:03:06 time check, we're about to get booted
17:03:06 I've been spending too much time around lawyers ;-)
17:03:12 haha
17:03:17 Any final words?
17:03:29 "rosebug"
17:03:38 #endmeeting