15:59:49 #startmeeting cinder
15:59:50 Meeting started Wed Mar 20 15:59:49 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:54 The meeting name has been set to 'cinder'
16:00:03 hello
16:00:05 Seen a number folks lurking..
16:00:06 hey
16:00:26 hello
16:00:31 No winston?
16:00:36 vincent?
16:00:45 kmartin:
16:00:46 hi
16:00:47 hemnafk:
16:00:49 eharney:
16:00:49 hi!
16:00:51 cool
16:00:57 Let's get started
16:01:04 hello
16:01:05 I failed to update the wiki but that's ok
16:01:11 easy meeting today
16:01:16 #topic RC2
16:01:35 So since everybody decided to wait for RC1 for all of the driver bugs...
16:01:44 and there were a number of new things discovered in core
16:01:49 we're going to do an RC2
16:01:58 I plan to cut this tomorrow
16:02:19 I need some updates from folks on a few things though
16:02:40 DuncanT: Do you hve any updates on the oslo sync Ollie started?
16:03:39 hmmm.... guess not
16:03:47 No news I'm afraid. Will see if there is any by the end of the meeting
16:03:55 DuncanT: k, thanks
16:04:05 DuncanT: otherwise I'll see if I can finish it
16:04:18 ahh.. vincent_hou
16:04:20 :)
16:04:27 hey
16:04:36 how are u
16:04:43 vincent_hou: good thanks.. you?
16:04:54 vincent_hou: I have some questions on a few of your bugs :)
16:05:06 jgriffith: i am fine
16:05:10 yes
16:05:20 vincent_hou: https://bugs.launchpad.net/cinder/+bug/1157042
16:05:22 Launchpad bug 1157042 in nova "VMs and volumes can be accessed in a different tenant by a different user" [Undecided,Triaged]
16:05:58 The DB api filters seems to filter out the context appropriately when I tested this
16:06:06 morning
16:06:10 hemna: :)
16:06:17 i do this test yersterday
16:06:26 vincent_hou: yeah, very odd
16:06:51 vincent_hou: That's a very serious issue if you can reproduce it
16:06:55 i thought the vm and volumes are not isolated among users
16:07:06 One question: Is the second user an admin user?
16:07:14 no
16:07:29 it can be any user
16:07:35 I can't reproduce this as normal users
16:07:46 vincent_hou: it seems very odd
16:07:56 vincent_hou: especially when list doesn't show it but delete works
16:07:57 right
16:08:08 delete uses the same mechanism to get the volume as list
16:08:39 unfortunately the nova side was marked as triaged but I don't think anybody actually tried it yet
16:09:01 vincent_hou: I guess the only thing to do at this point...
16:09:10 jgriffith: here is what i did
16:09:18 vincent_hou: k... go on
16:09:59 i opened two terminals on one machine. set different users and tenants to two of these terminals.
16:10:25 do u think it is a correct way?
16:10:34 sure... that should be fine
16:11:07 vincent_hou: I know this is an awful question to ask, but is it possible you got mixed up on which terminal had which settings?
16:11:08 ok. that was how i did the tests
16:11:41 vincent_hou: alright, how about you try and reproduce it again on a fresh devstack install
16:12:00 one terminal username=admin and tenant=admin; the other username=cinder and tenant=service
16:12:01 If you can reproduce it, ping me and we'll detail exactly how you did it
16:12:15 vincent_hou: ohhh....
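[Editor's note: as the discussion below concludes, the behaviour vincent_hou saw traces back to using the admin and service credentials, which carry elevated privileges in devstack. For anyone who wants to repeat the isolation check with ordinary tenants, here is a minimal sketch using the grizzly-era python-cinderclient v1 API. The demo/alt_demo user names, the password, and the auth URL are devstack-default assumptions, not details from the meeting.]

```python
# Sketch: verify that a volume created by one tenant is invisible to another.
# Assumes a devstack setup with the default 'demo' and 'alt_demo' accounts;
# adjust user names, password, and auth URL for your environment.
from cinderclient.v1 import client

AUTH_URL = 'http://127.0.0.1:5000/v2.0'

demo = client.Client('demo', 'secrete', 'demo', AUTH_URL)
alt = client.Client('alt_demo', 'secrete', 'alt_demo', AUTH_URL)

# Create a volume as the first tenant.
vol = demo.volumes.create(size=1, display_name='isolation-test')

# The second tenant should not see it in a listing...
visible = [v.id for v in alt.volumes.list()]
assert vol.id not in visible, 'tenant isolation broken: volume listed'

# ...and should not be able to delete it either (expect a 404 Not Found).
try:
    alt.volumes.delete(vol.id)
except Exception as exc:  # cinderclient raises NotFound here
    print('delete correctly refused: %s' % exc)
else:
    raise AssertionError('tenant isolation broken: delete succeeded')

# Clean up as the owner.
demo.volumes.delete(vol.id)
```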
ummmm
16:12:36 vincent_hou: admin and service tenants have some elevated privelleges
16:12:42 it is from a fresh install
16:12:46 that *could* have something to do with it
16:13:44 oh i expected user and tenant can both separate resources
16:14:59 vincent_hou: Normal tenants/users do, but service and admin are special... they shouldn't be used for normal operations
16:15:35 vincent_hou: I believe what you saw is expected
16:15:55 vincent_hou: the other thing to keep in mind is that devstack sets a number of special permissions for service and admin accounts
16:16:12 You can view these via the dashboard or from keystoneclient if you want
16:16:17 ok.
16:16:18 but I think that explains it
16:16:24 phewww
16:16:40 I was very worried last night, that's obviously a HUGE security issue
16:16:46 alright... moving on
16:16:56 ollie: I saw you drop in :)
16:17:00 ollie: welcome
16:17:04 hi
16:17:10 ollie: any thoughts on the OSLO update patch?
16:17:10 i need to check more about the permission in keystone
16:17:19 Duncan poked me,
16:17:23 :)
16:17:31 I won;'t get to that patch until next week I think
16:17:31 DuncanT: is tired of me poking him :)
16:17:37 :)
16:17:39 ollie: ok, next week will be too late
16:17:48 ollie: mind if I try to finish it out?
16:17:51 what needs to be updated from oslo?
16:18:03 hemna: well at the very least lockutils
16:18:09 sorry about that, wrestling with some billing issues at the moment
16:18:10 hemna: rpcnotifier
16:18:18 want me to give it a go?
16:18:26 sure,
16:18:26 I can try working on that today
16:18:31 hemna: sure if you have the bandwidth
16:18:32 theres a bug open
16:18:42 url?
16:18:47 moment
16:18:49 folks, how about this one https://bugs.launchpad.net/cinder/+bug/1155512
16:18:50 Launchpad bug 1155512 in nova "Issues with booting from the volume" [Undecided,New]
16:19:02 yes
16:19:15 uvirtbot: u have me
16:19:16 vincent_hou: Error: "u" is not a valid command.
16:19:33 vincent_hou: so a couple things;
16:19:46 1. did you set the security rules on the firewall to allow ping/ssh
16:19:46 hemna: https://bugs.launchpad.net/cinder/+bug/1157126
16:19:49 Launchpad bug 1157126 in cinder "rpc notifier should be copied into openstack common" [Undecided,In progress]
16:19:57 ollie, thanks
16:20:12 2. what image did you use? Cirros as I recall has some issues sometimes
16:20:25 yes
16:20:44 vincent_hou: regardless, failure to ping typically for me points to nova-net/quantum
16:20:51 the strange thing is it worked for booting from image , but not volume
16:21:08 vincent_hou: k... I'll take another look at that one too then
16:21:14 hmm.
16:21:14 vincent_hou: and you did use cirros?
16:21:17 What did the console log from the boot show?
16:21:20 yes
16:21:35 vincent_hou: I only test BFV with *real* images
16:21:46 nothing against cirros... it's great
16:21:52 there is no error showing in the log
16:22:12 vincent_hou: so you can't ping the private or floating IP from the compute node?
16:22:14 hi all, sorry i'm late
16:22:20 private
16:22:23 avishay: evening
16:22:31 seems very strange
16:22:36 ok.. I'll have a look at it
16:22:46 vincent_hou: You going to be online for a bit?
16:23:03 back to our regularly scheduled program....
16:23:08 after the meeting i will go to bed
16:23:18 hemna: so you got what you need to take a look at the OSLO stuff?
16:23:29 hemna: You should be able to just pull Ollies patch
16:23:33 It looks like we just need to pull in rpcnotifier
16:23:41 hemna: Well...
16:23:46 not really
16:23:52 ok
16:24:04 I see his patch failed
16:24:05 hemna: https://review.openstack.org/#/c/24774/
16:24:09 I'll have to look into that
16:24:24 Yeah, so my thought was... try to fix all the crap that broke
16:24:26 :)
16:24:32 sure you want this one still?
16:24:42 hehe I'll see what I can do
16:24:46 if I get stuck I'll ping you
16:24:53 k... keep me posted
16:25:00 ok will do
16:25:08 I'll let you know either way throughout the day today
16:25:08 it is huge patch
16:25:12 So I have a question for everybody too....
16:25:29 Have all of you submitted your driver changes?
16:25:32 are we done with that now?
16:25:48 we are done for G afaik.
16:25:58 jgriffith: as far as i know, i am. hopefully no more bugs pop up.
16:25:59 We really need to be moving on to the bugs in the core project and docs
16:26:05 avishay: I hear that :)
16:26:29 I haven't gone back to my driver but I've been focusing on all the other project stuff so mine will be late
16:26:54 but, I think we're at a point where we need to put a line in the sand and get this thing out the door
16:27:26 bswartz: how about from your end?
16:27:27 the NetApp driver has one bug I'd like to fix, only if the fix is a small change. if it's a big change I'll wait
16:27:38 bswartz: k
16:27:58 #bugs
16:28:14 #topic rc2 targets
16:28:18 https://launchpad.net/cinder/+milestone/grizzly-rc2
16:28:26 So this is what I have *officially*
16:28:42 I could use some help triaging the bug list
16:29:27 7 on the list for RC2
16:29:35 hemna: for now, correct
16:29:38 jgriffith: will keep working on the bug list
16:29:55 avishay: thanks, would like to see some other folks take a look as well
16:30:03 jgriffith: except for the driver specific bug, can help there
16:30:15 rushiagr: excellent
16:30:38 anybody know of anything that's NOT already listed and is NOT a driver bug?
16:30:49 by listed I mean, no bug filed yet?
16:31:07 nope
16:31:11 not I
16:31:20 DuncanT: ?
16:31:26 jgriffith: can I ask about the snapshot quota stuff?
16:31:32 guitarzan: sure
16:31:43 I'm not sure it's a bug, but definitely a leaked abstraction :)
16:31:59 Do we have a pub yet that has Pliny on tap for the summit? Should I file that as a feature request?
16:32:00 guitarzan: english man.. english! :)
16:32:07 haha!
16:32:16 guitarzan: soo....
16:32:16 :P
16:32:18 snapshots taking up volume gig quota
16:32:22 I had planned to bring this up
16:32:25 I'm not aware of any
16:32:38 guitarzan: doesn't like using the same quota for snaps and volumes gigabytes
16:32:56 I thought this was nice and clean....
16:33:08 but I'm fine with changing it depending on what other folks thing
16:33:10 think
16:33:18 well, by "doesn't like" it's just going to prevent rackspace from switching to grizzly for a while
16:33:44 guitarzan: which none of us like :)
16:34:02 Any objection to me just making a seperate snapshot-gb quota?
16:34:23 that would work for us
16:34:23 Or would you want to see Flag that says independent versus shared?
16:34:46 I think the flag idea would be more complicated
16:34:51 DuncanT: you're the other big SP in the room
16:35:02 guitarzan: certainly would
16:35:41 crickets... crickets everywhere
16:35:47 snapshots and volumes sharing quota suits us,
16:35:56 I'd have to ask around... the current system works fine for us but I can't comment on a split quota without checking
16:36:07 here's the real issue for us
16:36:15 but I can't think of a reason why we'd object to a change
16:36:16 snapshot quotas are being introduced at the same time that backups are
16:36:38 so we're cool with moving to backups
16:36:55 Not having snapshot quotas *was* a big issue for us... trivial DoS
16:36:56 but doing both (grizzly & backups) at the same time is going to be difficult
16:37:03 DuncanT: agreed
16:37:11 Question: is there an assumption anywhere that the two are separate? i.e quota for snapshot vs volume?
16:37:25 lakhindr_: At the moment, no
16:37:31 lakhindr_: it didn't even exist for snapshots
16:37:39 until last week
16:38:01 guitarzan: Would a flag to turn off snapshot quota entirely be enough for you?
16:38:06 DuncanT: absolutely
16:38:17 guitarzan: or what about just commenting out the line of code in the check :)
16:38:36 jgriffith: yeah, that's my other option
16:38:42 guitarzan: alright, well if a flag to disable it works for you...
16:39:08 I'm more than comfortable with that, but I also don't want to come back in a month and add seperate quota counts for snaps
16:39:27 jgriffith: nah, the only reason that was a suggestion is because our snapshot quotas would be -1 :)
16:40:10 we'd be really happy with optional snapshot quotas
16:40:24 then we'll move to backups and you won't have to hear me talk about snapshots ever again
16:40:34 guitarzan: k... both count and Gigabytes as options?
16:40:50 jgriffith: sure, we want neither one
16:40:53 guitarzan: actually, since this is just or Rax, maybe you should write the patch :)
16:41:00 hah
16:41:09 maybe
16:41:30 Ok... we'll figure that out later
16:41:35 we should move on
16:41:40 #topic summit-sessions
16:41:54 So we're pretty full on summit proposals
16:42:07 cut off is tomorrow, and we're already OVER our alloted time
16:42:24 We are probably going to be able to get 10 sessions total
16:42:55 each 40 minutes?
16:43:08 how many do we have now
16:43:39 vincent_hou: http://summit.openstack.org/cfp/topic/11
16:43:50 kmartin: yes, 40 mins
16:44:09 So we're at 15
16:44:23 which means we'll be cutting a few things obviously
16:44:31 jgriffith: how do we decide?
16:44:47 avishay: So I get to decide :)
16:44:52 avishay: but seriously
16:44:54 :)
16:44:58 the benevolent dictator decides
16:45:00 So I'll work on trying to consolidate some of them
16:45:18 and working with the individuals who suggested them to see if we can compromise
16:45:33 avishay: this has never been a problem in the past and I don't expect it be this time around
16:45:43 can smaller ones be combined into one slot?
16:45:49 last conference we made excellent use of unconference sessions
16:45:51 jgriffith: if yes, start sharpening your ax :)
16:45:58 kmartin: yeah, that's exactly point
16:46:09 bswartz: and yes, that's our other ace up the sleeve
16:46:32 I'll start working on it and probably pinging folks as I do
16:48:44 I'm confused by two topics. "Cinder plugin interface" - that already works. "Independant scheduler service" - That already works
16:49:00 jgriffith: http://summit.openstack.org/cfp/details/130 this one is similar to one i submitted
16:49:16 can be combined
16:49:33 DuncanT: recarding the scheduler service -- I understand it's tied to the API service atm
16:50:20 bswartz: I don't understand what the perceived tie is?
16:50:22 yeah, the external driver thing is already a gimme
16:50:40 bswartz: Can discuss it after the meeting if you like?
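[Editor's note: circling back to the snapshot-quota thread above, the flag-based approach jgriffith and guitarzan converge on would amount to something like the sketch below: a boolean option that, when disabled, makes snapshot creation skip the quota reservation. The option name, the helper function, and its placement are illustrative only and are not taken from the patch that was eventually written.]

```python
# Sketch of an "optional snapshot quota" switch, roughly in the style of
# grizzly-era Cinder.  The option name and the surrounding helper are
# illustrative; a real change would live in cinder/volume/api.py.
from oslo.config import cfg

from cinder import quota

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('use_snapshot_quota',
                default=True,
                help='Count snapshots against the snapshot count and '
                     'gigabyte quotas; set False to leave snapshots '
                     'unmetered.'),
])

QUOTAS = quota.QUOTAS


def reserve_snapshot(context, volume_size_gb):
    """Reserve quota for a new snapshot, or skip it when disabled."""
    if not CONF.use_snapshot_quota:
        # Deployer opted out entirely (the "-1 quota" case mentioned above).
        return None
    # Charge one snapshot plus the snapshot's size in gigabytes.
    return QUOTAS.reserve(context,
                          snapshots=1,
                          gigabytes=volume_size_gb)
```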
16:50:47 DuncanT: yes
16:51:50 does a topic like "read only volumes" need a full topic? i think there are some other small topics that didn't get proposals (like volume import, for example)
16:52:10 read only volumes, aka multi attach may get pretty interesting
16:52:36 there are some subtleties to read only volume and multi attach
16:52:44 sorry...
16:52:45 we could talk about it for 2 whole sessions I'm sure
16:52:47 read-only volumes are also a way of implementing the snapshot semantics...
16:52:47 avishay: jgriffith and I talked and I had a little to add here regarding clusterd host support in the drivers
16:53:20 ok, i take it back :)
16:53:29 hahaha....slooowwww down folks
16:53:32 i didn't think about it too much
16:53:51 Ok... so sorry I got pulled away for a minute and missed the excitement
16:53:59 plugins is going to get axed
16:54:10 R/O is going to be combined with multi-attach
16:54:26 :)
16:54:29 +1
16:54:32 the plugin/external driver idea is interesting...
16:54:40 it's also easy... :)
16:54:44 The idea is that the drivers won't actually be in the OpenStack repo
16:54:53 it'll be an external plug in module
16:55:03 can someone give me link to plugins proposal?
16:55:12 http://summit.openstack.org/cfp/details/28
16:55:24 plugins just works, it is how we do our driver now...
16:55:26 I'm not really sure what it's about though...our driver isn't in openstack
16:55:28 This sounds great in theor
16:55:29 guitarzan: k thanks
16:55:37 theory
16:55:56 you can do that today, nothing is stopping someone distributing a cinder driver from there own repo today
16:56:02 but I know that at least 90% of you would probably no longer be here if we did it
16:56:18 jgriffith: I don't think that's true
16:56:21 kmartin: you can but it's not as easy as it could be
16:56:24 we already are writing our own drivers
16:56:26 and it is easy
16:56:54 oh, we'll still be here
16:57:00 sorry... what I mean is, to develop an architecture where things are just plugged in easily via configs
16:57:04 and testing etc etc etc
16:57:25 Hey... if folks want to talk about it, by all means I'm game
16:57:49 "volume_driver = external.python.package.mydriver" in cinder.conf... is that not easily plugged in?
16:58:04 DuncanT: Yes,
16:58:14 but testing, keeping up with changes etc etc
16:58:31 Loook, I'm not arguing against it, I just didn't think there would be much interest
16:58:39 apparantly there is so forgive me
16:58:44 we'll talk about it
16:58:51 I think if you aren't going to merge, then keeping up is your own problem... I don't see that there is much to talk about
16:58:53 It'll make my job easier
16:59:05 jgriffith: I think you're getting the sides mixed up :)
16:59:23 guitarzan: oh... wouldn't be the first time :)
16:59:39 we're saying, it's done, but I'm guessing we don't have someone on the other side of the argument present
16:59:50 also, our hour is gone
16:59:56 dang
17:00:24 so real quick on that... there's another level it could be taken but anyway, another time
17:00:25 * bswartz points to the #openstack-cinder channel
17:00:32 bswartz: indeed
17:00:34 no reason we can't continue discussion
17:00:42 ok... everybody run across the hall!
17:00:46 #endmeeting
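[Editor's note: on the external-driver point DuncanT raises near the end ("volume_driver = external.python.package.mydriver" in cinder.conf), the mechanism being described is plain Python import plus configuration. A minimal sketch follows; the package and class names are made up for illustration, and the method list is only a rough subset of the grizzly-era cinder.volume.driver.VolumeDriver contract.]

```python
# mydriverpkg/driver.py -- skeleton of an out-of-tree Cinder volume driver.
# The only requirement is that the class be importable on the cinder-volume
# host and implement the driver interface the volume manager calls.
from cinder.volume import driver


class MyExternalDriver(driver.VolumeDriver):
    """Example third-party driver distributed outside the Cinder tree."""

    def do_setup(self, context):
        # Connect to the backend, validate credentials, etc.
        pass

    def check_for_setup_error(self):
        pass

    def create_volume(self, volume):
        # Allocate volume['size'] GB on the backend.
        pass

    def delete_volume(self, volume):
        pass

    def create_export(self, context, volume):
        pass

    def ensure_export(self, context, volume):
        pass

    def remove_export(self, context, volume):
        pass

    def initialize_connection(self, volume, connector):
        # Return the connection info the compute side needs to attach.
        return {'driver_volume_type': 'iscsi', 'data': {}}

    def terminate_connection(self, volume, connector, **kwargs):
        pass

# Enabled by pointing cinder.conf at the class, as in DuncanT's example:
#   volume_driver = mydriverpkg.driver.MyExternalDriver
```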