16:00:42 #startmeeting
16:00:43 Meeting started Wed Aug 15 16:00:42 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:05 o/
16:01:08 bswartz: thingee rnirmal ...around?
16:01:21 I'm here
16:01:34 anybody else?
16:01:42 me
16:01:51 Hi vincent_hou
16:02:01 hi
16:02:17 don't physically see durgin =/
16:02:36 We'll get started anyway, should be a short meeting
16:02:50 DuncanT: ?
16:03:07 #topic F3 status
16:03:13 #link https://launchpad.net/cinder/+milestone/folsom-3
16:03:42 So most everything here is under review or done
16:03:58 I'm here
16:04:03 The two exceptions: https://blueprints.launchpad.net/cinder/+spec/cinder-notifications
16:04:24 and https://bugs.launchpad.net/bugs/1023311
16:04:25 Launchpad bug 1023311 in cinder "Quotas management is broken" [High,Triaged]
16:04:47 I think cinder-notifications is going to slip, unless I catch up with Craig and we get it in today
16:05:07 It's actually probably pretty close, just bit rotted a bit with all of the changes over the past month or so
16:05:24 jgriffith: I can update on notifications
16:05:31 rnirmal: cool
16:05:38 cp16net: has it mostly done...having issues with the tests
16:05:46 #action rnirmal take a look at finishing cinder-notifications
16:05:52 after his update... tox doesn't seem to run any tests
16:06:01 rnirmal: yeah, I think it was mostly just moving to openstack.common
16:06:08 yup
16:06:20 probably rebase off of master and should be ok
16:06:22 cool
16:06:37 I'm moving Quota management to RC1
16:06:41 jgriffith: will ask cp16net to ping offline for any help on that
16:06:42 Sorry, just back from the dentist
16:06:48 here, too
16:06:57 rnirmal: Sounds good.. and if you need something from me shout
16:07:06 DuncanT: fun fun
16:07:28 So other than those two, we have code for everything else
16:07:43 Just a matter of getting it reviewed, making any fixes and submitting before end of day
16:08:10 I've a patch that is stubbornly refusing to work to make size optional when creating volumes from snapshots
16:08:13 ttx will cut F3 late tonight and after that new features are pretty much shut down
16:08:27 DuncanT: haven't seen it?
16:08:55 DuncanT: Throw it out, maybe some of us can help figure out the issue?
16:08:56 I have https://review.openstack.org/#/c/11141/
16:09:22 dricco: That's nova, this is cinder ;)
16:09:30 jgriffith: Will put it up in a moment
16:10:04 sorry, thought some of ye guys had nova core status
16:10:21 dricco: No problem, kinda just giving you a hard time
16:10:22 I'll wait for Russell Bryant to get back on it
16:10:26 lol
16:10:29 :-)
16:10:34 dricco: Yes, it's good to bring it to everybody's attention
16:10:36 hm?
16:10:42 russellb: ??
16:11:48 Ok, so it looks like everybody has their drivers in
16:12:07 We should all focus on clearing out the reviews today
16:12:26 I'm happy to review code if anyone needs it
16:13:02 bswartz: (and all) https://review.openstack.org/#/q/status:open+cinder,n,z
16:13:20 I just monitor this page throughout the day, easier than trying to catch email notifications etc
16:13:35 of course if you have bandwidth help out on the Nova side too
16:13:52 Just a reminder...
16:14:07 After F3 is cut it's bug fixes only unless there's an FFE
16:14:28 So if you have a feature it's going to get increasingly difficult to introduce it after today
16:14:46 #topic RC
16:14:52 Speaking of RC...
16:15:03 The other thing that was decided at the PPB yesterday...
16:15:23 Due to the screaming and yelling on the ML regarding nova-vol and Cinder
16:15:40 After RC1 we'll backport all Cinder changes/additions to Nova-Volume
16:16:04 The idea is having a feature to feature match between nova-vol and Cinder
16:16:14 I'm not crazy about it, but I see the reasoning behind it
16:16:21 does that include drivers?
16:16:28 Then hopefully we can truly deprecate nova-vol in Grizzly
16:16:31 bswartz: yes
16:16:43 bswartz: You should be covered already though no?
16:17:18 jgriffith: we've submitted 4 different drivers, and only one of them is in nova-vol
16:17:34 I will need to port the other 3 back
16:17:38 bswartz: I thought all of them were there... sorry
16:17:57 bswartz: Don't worry about it right now, just keep it in mind that you'll want to do it in the coming weeks
16:18:02 jgriffith: what is the deadline for backporting driers from cinder to nova-vol?
16:18:10 drivers*
16:18:35 bswartz: So there's going to be a massive effort to dump/backport everything after RC1
16:19:20 This was just decided yesterday so it's not going to be unrealistic in terms of timeline
16:19:21 jgriffith: do you have a link for the schedule for the rest of the release?
16:19:42 bswartz: http://wiki.openstack.org/FolsomReleaseSchedule
16:20:06 thanks
16:20:25 :q
16:20:52 :O $#%$#%#
16:20:57 That's me blowing chunks
16:21:03 :)
16:21:21 Ok, any questions on F3 or Folsom in general?
16:21:38 I was hoping to catch up with winstond regarding scheduler, but no luck
16:21:51 #topic open discussion
16:21:54 when does trunk get branched for folsom
16:22:07 Sorry rnirmal I just cut you off :)
16:22:15 np... I was a tad late
16:22:23 is it F3 or RC1
16:22:25 That's a good question...
16:22:26 DuncanT: making size optional when creating from an image would be good as well
16:22:29 I had that date
16:22:42 I believe ttx will do that when he cuts F3 but might be later
16:22:53 I plan on fixing the regression around scheduler and volume nodes being down at some point... but I see that as a bug fix :-)
16:22:54 I would expect no later than the end of this month
16:23:04 DuncanT: exactly
16:23:10 DuncanT: and a critical bug fix no less
16:23:28 Just remember, soooner is better at this stage
16:23:40 Each day past F3 things will get more difficult
16:24:02 I also wanted to clarify some things about volume_type :)
16:24:25 jgriffith: oh yes, I was going to ask about that
16:24:31 bswartz: :)
16:24:50 The idea behind volume_type was to give a way to tell the scheduler to select different back-ends
16:25:15 We've had discussions about other uses (such as QOS) but haven't implemented anything yet
16:25:28 jgriffith: but volume_type was added in diablo, long before we supported multiple backends
16:25:48 jgriffith: isn't volume_types also a user facing feature ?
16:25:52 bswartz: added, implemented and used are all different things
16:26:04 rnirmal: yes it is (user facing)
16:26:06 bswartz: You could always run different backends on different volume nodes
16:26:14 jgriffith: in the diablo timeframe, I thought the Zadara driver used it for qos
16:26:27 bswartz: yes *but*
16:26:47 bswartz: Zadara's definition of qos is actually disk/backend type
16:27:08 bswartz: sata, scsi, ssd etc
16:27:44 jgriffith: back to my question... if volume_types is being thought of as sata, scsi, ssd etc
16:28:01 then it differs slightly from the volume backend for the scheduler to choose
16:28:08 supposing multiple backends support ssd
16:28:21 rnirmal: Well...... it doesn't have to be limited to that either
16:28:30 volume_types of "gold, silver, budget" were also suggested
16:28:33 the volume_type was added specifically so you could support multiple backends
16:28:36 jgriffith: I would argue that it doesn't matter how the driver interprets the volume_type, only that it's processed inside the driver rather than outside the driver
16:28:42 You can say something like: netapp = type-1, SF=type2, rbd=type3
16:28:42 or multiple options within one backend
16:28:43 true but what I'm getting at is
16:28:54 volume_type doesn't necessarily translate to a single backend
16:29:03 correct
16:29:14 it is up to the interpretation of the scheduler
16:29:27 rnirmal: Yes, I understand your point
16:29:37 I'm not sure if the default scheduler ever got updated so that you could map volume_types to the backends
16:29:42 what I'm getting at is the scheduler needs something more than just volume_type to schedule to the correct volume backend
16:29:50 creiht: So there's the *PROBLEM*
16:29:53 creiht: nope I don't think it has it
16:30:02 creiht: The scheduler doesn't support it anyway
16:30:07 There was a method added to the driver so that it could provide key/value pairs to the scheduler
16:30:20 I think only one driver implemented it and the scheduler was never written
16:30:25 Ok, before we rat hole....
16:30:34 I think we need to clearly separate data meant to be consumed by the scheduler from data meant to be consumed by the drivers
16:30:38 our driver uses it :)
16:30:43 DuncanT: Yes, that's the problem, nothing is implemented in the scheduler yet anyway
16:30:58 bswartz: I have no problem with that
16:31:09 driver.get_volume_stats is what reports back to the scheduler
16:31:23 bswartz: But in the case of drivers that do require/use extra information, where would you propose that comes from other than metadata?
16:31:50 jgriffith: I have to admit that I don't know how volume metadata works
16:31:58 I need to look into that
16:32:00 bswartz: I don't think anyone does :)
16:32:08 bswartz: It's just metadata you get to add to a volume when you create it
16:32:16 In a nutshell
16:32:20 jgriffith: is it surfaced at the API/CLI?
16:32:32 bswartz: yes
16:32:44 In fact you'll notice creiht submitted a bug on this very topic
16:33:04 we allow you to set it, but then don't return it in the response info
16:33:12 jgriffith: Then I support improving the documentation for volume metadata and encouraging everyone to use that instead
16:33:15 This was a bug, because it should be surfaced via the API
16:33:37 bswartz: HA HA... I support improving *ALL* documentation
16:33:51 bswartz: our documentation isn't so good for a newcomer IMHO
16:34:02 hah... yeah the volume documentation is a bit lacking
16:34:07 bswartz: This is something we really need to try and improve
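The volume_type-to-backend mapping ("netapp = type-1, SF=type2, rbd=type3") and the driver.get_volume_stats reporting mentioned above were, per the discussion, only partly implemented at this point. The following is a minimal sketch of the idea only; the class, function, and key names (ToyVolumeTypeScheduler, backend_map, get_volume_stats_example) are hypothetical and not taken from the Cinder code of the time.

```python
# Illustrative only: a toy scheduler that maps a requested volume_type to a
# backend host, in the spirit of "netapp = type-1, SF = type-2, rbd = type-3".
# None of these names come from the actual Cinder scheduler.

class ToyVolumeTypeScheduler(object):
    """Pick a volume host based on an operator-defined type -> host map."""

    def __init__(self, backend_map):
        # e.g. {"type-1": "netapp-node", "type-2": "sf-node", "type-3": "rbd-node"}
        self.backend_map = backend_map

    def schedule_create_volume(self, volume_type, default_host):
        # Fall back to a default host when the type is unknown, rather than fail.
        return self.backend_map.get(volume_type, default_host)


# The driver side of the idea: report key/value capabilities back to the
# scheduler (the transcript names driver.get_volume_stats as that hook).
def get_volume_stats_example():
    # Hypothetical capability report; real drivers define their own keys.
    return {
        "driver_name": "ExampleISCSIDriver",
        "storage_protocol": "iSCSI",
        "total_capacity_gb": 1024,
        "free_capacity_gb": 512,
    }


if __name__ == "__main__":
    sched = ToyVolumeTypeScheduler({"type-1": "netapp-node",
                                    "type-2": "sf-node",
                                    "type-3": "rbd-node"})
    print(sched.schedule_create_volume("type-2", default_host="default-node"))
    print(get_volume_stats_example())
```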
16:34:22 I think volume_types can provide an easier to understand interface than metadata if it is fully implemented
16:34:37 DuncanT: yes, but they have *different* uses!
16:34:40 it depends on what you want to do with metadata
16:35:01 think of metadata on a per volume basis
16:35:12 We also have a requirement (and have expressed for a while) that volume_type gets all the way to the driver
16:35:13 rnirmal: exactly!!!
16:35:13 I will see what NetApp can do about documentation -- we are hiring more people on my team. No promises though
16:35:13 rather than from the backend provider basis
16:35:30 Since we do multiple types from one backend
16:35:43 I need to see as well what we can do to cross-pollinate some of our doc efforts
16:35:57 DuncanT: It does already
16:35:59 DuncanT: indeed, same boat here
16:35:59 that would be awesome, both of you
16:36:08 annegentle: :P
16:36:10 :)
16:36:19 it'll pay back in spades :)
16:36:20 annegentle: I was going to pass that buck to you :)
16:36:37 DuncanT: It's in the volume db object that's passed into the driver on create, or do you mean something different?
16:36:41 lol I really am working on it behind the scenes, believe me.
16:37:03 annegentle: The problem lies on our side IMO
16:37:05 annegentle: yeah I know, just need to make sure we have things set up correctly so that what david does on our docs can also help you guys
16:37:12 Shall we try to get some use cases of volume_types vs. metadata written up and see if we're on the same page? It's a bit fluffy at the moment
16:37:18 annegentle: We all throw our code in but *never* document it :(
16:37:22 creiht: that's perfect, thanks
16:37:38 DuncanT: So one use case for metadata is my patch submission :)
16:37:40 DuncanT: yes that would be really helpful
16:37:54 jgriffith: we have sysadmins who wrote the volumes stuff that already exists who would LOVE more info to write more docs. So it's really a matter of matching up people
16:37:55 gah... and I gotta run
16:38:05 The only use I've got for metadata is affinity / anti-affinity
16:38:07 jgriffith: if there are any areas that I can help with this stuff, please email me
16:38:14 and I'll check back on the backlog later
16:38:19 creiht: will do... thanks!
16:38:21 I think I understand jgriffith's case as well
16:38:42 So let's ignore *what* the metadata contains for a second...
16:39:11 is metadata here == volume_type extra specs?
16:39:34 rnirmal: gaaaa.... I didn't even want to talk about that one yet :)
16:39:43 ok :)
16:39:46 So this is exactly the problem IMO
16:40:04 We have metadata, volume_type and the extra specs
16:40:12 cos I think we are going in circles confused between the two... without a clear separation
16:40:24 But we don't have a clear agreement/understanding of what their intended use is
16:40:43 rnirmal: yes, I think you are precisely correct
16:41:39 So volume_types as I understand it was intended to be used to make scheduler decisions
16:41:54 does anybody disagree with that?
16:42:06 can we clear it up a little more
16:42:07 Ish
16:42:13 jgriffith: I can't speak to that, but if it's true, then NetApp is definitely doing the wrong thing
16:42:48 rnirmal: I was intentionally avoiding specific use cases
16:42:53 bswartz: I don't think that's true
16:43:12 jgriffith: I don't want to avoid specific cases right now
16:43:17 bswartz: I think it works extremely well for your cases where you've used it
16:43:19 If you take a broad definition of 'scheduler'
16:43:19 if we are to implement the rest of it correctly
16:43:22 rnirmal: :) fair enough
16:43:52 The NetApp driver assumes that it's the only backend running
16:44:17 We need to do some testing of the multi-backend scenarios to see if anything evil happens
16:44:20 bswartz: as it should
16:44:44 bswartz: That shouldn't be your problem, it should be up to the scheduler and APIs to sort that out
16:45:07 bswartz: The whole point of the abstraction is it shouldn't matter to the driver
16:45:15 yeah the driver need not understand beyond its presence.
16:45:17 rnirmal: Ok... I'm not ignoring you I promise
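The distinction being debated above (volume_type as a class of service for scheduling decisions vs. per-volume metadata such as DuncanT's affinity/anti-affinity case) roughly corresponds to two separate fields on volume create. A small sketch, assuming the volume API's JSON request shape of the era; the specific values below ("gold", "anti_affinity_group") are illustrative and not from the meeting.

```python
import json

# Illustrative request body for a volume create call. "volume_type" is a
# named class of service the scheduler can act on; "metadata" is free-form
# per-volume key/value data (e.g. an affinity / anti-affinity hint).
create_request = {
    "volume": {
        "size": 10,
        "display_name": "db-volume-1",
        "volume_type": "gold",                      # scheduler-facing classification
        "metadata": {"anti_affinity_group": "db"},  # per-volume data for drivers/tools
    }
}

print(json.dumps(create_request, indent=2))
```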
16:45:23 so if there are multiple backends, don't they all share the same cinder.volumes DB table?
16:45:51 bswartz: yes
16:46:11 jgriffith: how does the scheduler know which backends created which volumes?
16:47:06 bswartz: ?
16:47:23 bswartz: You mean the host column?
16:47:30 right now it's just the host column
16:47:38 I don't think anything else is being used
16:47:39 :)
16:47:39 well, once a volume is created, when an attach call comes in, it needs to get sent to the right backend
16:47:49 so the host column is it?
16:47:58 perhaps that's all that's needed
16:47:58 bswartz: Ahhh... that's different, that's the driver
16:48:27 So here's the thing...
16:49:10 okay maybe I'm not understanding this
16:49:16 can different backends have different drivers?
16:49:38 yes if you run them on different hosts currently
16:49:57 if they can, then it matters which backend the attach call goes to, so the right driver can handle it
16:50:04 I thought somebody did some work to allow different drivers on one host?
16:50:13 DuncanT: I'm working on it
16:50:29 DuncanT: rnirmal did that, yes but it doesn't look like it's going to make it for Folsom
16:51:11 Ah, ok, got it
16:51:43 bswartz: The volume/api will do an rpc cast to the appropriate volume node
16:51:57 bswartz: That volume node will *only* support a single backend/driver
16:52:18 jgriffith: until rnirmal's change
16:52:26 bswartz: It figures out what volume node to use via the scheduler
16:52:44 bswartz: yes, but that's not in so let's leave it out of your question for now
16:52:49 okay
16:53:02 bswartz: I'm just trying to explain why the backend doesn't need to *know* or care and how it works
16:53:35 bswartz: So the current solution for multiple back-ends is multiple volume nodes
16:53:41 I'm willing to believe that everything just somehow works, but I plan to do some testing to see exactly how it works
16:53:53 bswartz: :) that's what I had to do
16:54:06 bswartz: I used pdb and traced a bunch of crap to figure it out
16:54:07 bswartz: :)
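A rough sketch of the routing just described: each row in the volumes table records the host (volume node) that created it, and follow-up calls such as attach are cast to that same node, which runs exactly one driver. All names here (VOLUMES, cast_to_volume_node, attach_volume) are made up for illustration; the real logic lives in Cinder's volume API and RPC layers.

```python
# Illustrative only: how a per-volume "host" column lets the API layer route
# follow-up operations (attach, delete, ...) to the node whose single driver
# created the volume. Function and variable names are hypothetical.

# Stand-in for the cinder.volumes table: volume id -> row.
VOLUMES = {
    "vol-1": {"host": "netapp-node", "status": "available"},
    "vol-2": {"host": "rbd-node", "status": "available"},
}


def cast_to_volume_node(host, method, **kwargs):
    # In the real system this would be an async RPC cast to the volume
    # service on `host`; here we just show what would be sent where.
    print("cast to %s: %s(%s)" % (host, method, kwargs))


def attach_volume(volume_id, instance_id, mountpoint):
    # The API layer looks up the host that owns the volume and casts to it;
    # that node's one-and-only driver handles the backend-specific work.
    volume = VOLUMES[volume_id]
    cast_to_volume_node(volume["host"], "attach_volume",
                        volume_id=volume_id,
                        instance_id=instance_id,
                        mountpoint=mountpoint)


if __name__ == "__main__":
    attach_volume("vol-1", "instance-42", "/dev/vdb")
    attach_volume("vol-2", "instance-42", "/dev/vdc")
```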
16:54:39 I think we still need to tackle the user interface for selecting qos stuff vs driver type stuff
16:54:40 is there anything about CHAP?
16:54:57 having one argument that's overloaded for both purposes seems like a recipe for trouble
16:55:13 vincent_hou: don't think we'll get to it, do you have any updates?
16:55:20 bswartz: ?
16:55:22 Might be worth enhancing the dummy driver so that it publishes enough to allow the scheduler to tell it apart from a real backend?
16:55:40 i wrote some specs
16:55:42 http://wiki.openstack.org/IscsiChapSupport
16:56:03 i hope people can help to look at it
16:56:09 bswartz: The whole point I'm trying to make is in my case qos stuff *IS* driver stuff (as you put it)
16:56:20 vincent_hou: Yes, definitely!
16:56:47 vincent_hou: I meant to talk to you the other night... single way chap seems fine to me
16:56:52 bswartz: +1 for not overloading
16:57:06 rnirmal: bswartz: overloading what???
16:57:13 ok
16:57:19 rnirmal: bswartz: I still don't know what's being overloaded?
16:57:42 jgriffith: n/m we can talk abt it later... overloading a single construct for deciding user specified type and which backend to choose
16:57:45 jgriffith: I don't like the idea of volume_types being consumed by both the scheduler and the drivers
16:57:59 I'm happy to table that discussion though
16:58:04 I don't think volume_types is purely about backend selection... indeed backend selection should be invisible to the user
16:58:13 bswartz: Ahhh.. I see what you're saying now
16:58:16 bswartz: hmmm
16:58:26 I think they are about classes of service
16:58:37 bswartz: I don't know that I see a problem with that, but I'm open minded
16:58:43 jgriffith: that goes back to why we didn't use volume_types and chose volume_backends instead
16:58:57 I don't want the user to have to know or care about backends at all
16:59:18 rnirmal: Yes! That's correct
16:59:32 I think it's possible to make things work with the current design, but I also think that it can lead to trouble, and we'd be better off changing the design to avoid future problems
17:00:05 bswartz: I don't necessarily see why allowing the backend to read volume_type is *dangerous*
17:00:06 If there was one argument for the scheduler, and then some other argument for the driver-specific stuff, that would be better IMO
17:00:29 bswartz: From the user facing API? Yuck yuck yuck
17:00:45 jgriffith: it's not dangerous just a ton more confusing.
17:01:00 Ok... so here's what I propose
17:01:09 maybe just add a QOS user parameter?
17:01:13 and potentially relay to the user what the backends are maybe
17:01:18 keep in mind that this is going to be Grizzly work and not Folsom
17:01:27 bswartz: That won't work I don't think even though I'd like it :)
17:01:50 bswartz: Or, I should say it definitely won't work for Folsom, but we can pitch it for Grizzly
17:02:35 So I propose that we flesh out the meaning/purpose of volume_type including some use cases
17:02:41 yeah all of this should just be grizzly... but getting it early on in grizzly is going to be tremendously helpful.. since it's a lot of moving parts
17:02:49 In addition we do the same thing for metadata
17:03:14 I'll also agree that a blueprint for exposing QOS is the best thing for Grizzly
17:03:16 jgriffith: use cases would be good, so we can have a concrete discussion
17:03:42 bswartz: I'm glad you said that because I'm going to ask everybody involved in this conversation to present some :)
17:04:16 #action bswartz DuncanT rnirmal jgriffith Work on use cases/definition for volume_type and metadata for next week
17:04:27 jgriffith: I def have a few
17:04:31 And on that note we're out of time :)
17:04:38 Don't get too bogged down on this right now
17:04:43 We need to focus on Folsom
17:05:08 But it's good that this came up, we definitely won't to get it ironed out for Grizzly
17:05:27 errr... "won't == want to"
17:05:36 Anything else real quick?
17:05:49 everyone do some reviews!
17:05:54 Yes!!!
17:06:12 And don't get too wrapped up in whether somebody uses metadata versus volume types :)
17:06:17 Thanks everyone!
17:06:22 #endmeeting