15:59:49 <jgriffith> #startmeeting cinder
15:59:50 <openstack> Meeting started Wed Mar 20 15:59:49 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:54 <openstack> The meeting name has been set to 'cinder'
16:00:03 <bswartz> hello
16:00:05 <jgriffith> Seen a number of folks lurking..
16:00:06 <DuncanT> hey
16:00:26 <jgallard> hello
16:00:31 <jgriffith> No winston?
16:00:36 <jgriffith> vincent?
16:00:45 <jgriffith> kmartin:
16:00:46 <eharney> hi
16:00:47 <jgriffith> hemnafk:
16:00:49 <jgriffith> eharney:
16:00:49 <rushiagr> hi!
16:00:51 <jgriffith> cool
16:00:57 <jgriffith> Let's get started
16:01:04 <kmartin> hello
16:01:05 <jgriffith> I failed to update the wiki but that's ok
16:01:11 <jgriffith> easy meeting today
16:01:16 <jgriffith> #topic RC2
16:01:35 <jgriffith> So since everybody decided to wait for RC1 for all of the driver bugs...
16:01:44 <jgriffith> and there were a number of new things discovered in core
16:01:49 <jgriffith> we're going to do an RC2
16:01:58 <jgriffith> I plan to cut this tomorrow
16:02:19 <jgriffith> I need some updates from folks on a few things though
16:02:40 <jgriffith> DuncanT: Do you have any updates on the oslo sync Ollie started?
16:03:39 <jgriffith> hmmm....  guess not
16:03:47 <DuncanT> No news I'm afraid. Will see if there is any by the end of the meeting
16:03:55 <jgriffith> DuncanT: k, thanks
16:04:05 <jgriffith> DuncanT: otherwise I'll see if I can finish it
16:04:18 <jgriffith> ahh.. vincent_hou
16:04:20 <jgriffith> :)
16:04:27 <vincent_hou> hey
16:04:36 <vincent_hou> how are u
16:04:43 <jgriffith> vincent_hou: good thanks.. you?
16:04:54 <jgriffith> vincent_hou: I have some questions on a few of your bugs :)
16:05:06 <vincent_hou> jgriffith: i am fine
16:05:10 <vincent_hou> yes
16:05:20 <jgriffith> vincent_hou: https://bugs.launchpad.net/cinder/+bug/1157042
16:05:22 <uvirtbot> Launchpad bug 1157042 in nova "VMs and volumes can be accessed in a different tenant by a different user" [Undecided,Triaged]
16:05:58 <jgriffith> The DB api filters seem to filter by the context appropriately when I tested this
16:06:06 <hemna> morning
16:06:10 <jgriffith> hemna: :)
16:06:17 <vincent_hou> i did this test yesterday
16:06:26 <jgriffith> vincent_hou: yeah, very odd
16:06:51 <jgriffith> vincent_hou: That's a very serious issue if you can reproduce it
16:06:55 <vincent_hou> i thought the vms and volumes were not isolated among users
16:07:06 <DuncanT> One question: Is the second user an admin user?
16:07:14 <vincent_hou> no
16:07:29 <vincent_hou> it can be any user
16:07:35 <DuncanT> I can't reproduce this as normal users
16:07:46 <jgriffith> vincent_hou: it seems very odd
16:07:56 <jgriffith> vincent_hou: especially when list doesn't show it but delete works
16:07:57 <vincent_hou> right
16:08:08 <jgriffith> delete uses the same mechanism to get the volume as list
16:08:39 <jgriffith> unfortunately the nova side was marked as triaged but I don't think anybody actually tried it yet
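To make the mechanism under discussion concrete, here is a toy Python model of per-tenant filtering (illustrative only, not Cinder's actual DB code; names like get_volumes are hypothetical). It also shows why an admin context would see other tenants' volumes, which is where this conversation ends up:

    # Toy model of context-based filtering; not Cinder's real DB layer.
    class Context(object):
        def __init__(self, project_id, is_admin=False):
            self.project_id = project_id
            self.is_admin = is_admin

    def get_volumes(context, volumes):
        """Both 'list' and 'delete' resolve volumes through one lookup."""
        if context.is_admin:
            return list(volumes)  # admin contexts see every tenant's volumes
        return [v for v in volumes if v["project_id"] == context.project_id]

    volumes = [{"id": "v1", "project_id": "tenant-a"},
               {"id": "v2", "project_id": "tenant-b"}]
    assert [v["id"] for v in get_volumes(Context("tenant-a"), volumes)] == ["v1"]
    assert len(get_volumes(Context("tenant-a", is_admin=True), volumes)) == 2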
16:09:01 <jgriffith> vincent_hou: I guess the only thing to do at this point...
16:09:10 <vincent_hou> jgriffith: here is what i did
16:09:18 <jgriffith> vincent_hou: k... go on
16:09:59 <vincent_hou> i opened two terminals on one machine. set different users and tenants in the two terminals.
16:10:25 <vincent_hou> do u think it is a correct way?
16:10:34 <jgriffith> sure... that should be fine
16:11:07 <jgriffith> vincent_hou: I know this is an awful question to ask, but is it possible you got mixed up on which terminal had which settings?
16:11:08 <vincent_hou> ok. that was how i did the tests
16:11:41 <jgriffith> vincent_hou: alright, how about you try and reproduce it again on a fresh devstack install
16:12:00 <vincent_hou> one terminal username=admin and tenant=admin; the other username=cinder and tenant=service
16:12:01 <jgriffith> If you can reproduce it, ping me and we'll detail exactly how you did it
16:12:15 <jgriffith> vincent_hou: ohhh.... ummmm
16:12:36 <jgriffith> vincent_hou: admin and service tenants have some elevated privileges
16:12:42 <vincent_hou> it is from a fresh install
16:12:46 <jgriffith> that *could* have something to do with it
16:13:44 <vincent_hou> oh, i expected both user and tenant to separate resources
16:14:59 <DuncanT> vincent_hou: Normal tenants/users do, but service and admin are special... they shouldn't be used for normal operations
16:15:35 <jgriffith> vincent_hou: I believe what you saw is expected
16:15:55 <jgriffith> vincent_hou: the other thing to keep in mind is that devstack sets a number of special permissions for service and admin accounts
16:16:12 <jgriffith> You can view these via the dashboard or from keystoneclient if you want
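A hedged sketch of checking those role assignments with the Grizzly-era python-keystoneclient v2.0 API (endpoint and credentials are placeholders); an 'admin' role granted on a tenant is the kind of thing that makes cross-tenant access look like a leak:

    # Hedged example; assumes python-keystoneclient v2.0 of this era.
    from keystoneclient.v2_0 import client

    keystone = client.Client(username='admin', password='secret',
                             tenant_name='admin',
                             auth_url='http://127.0.0.1:5000/v2.0')

    user = keystone.users.find(name='cinder')
    tenant = keystone.tenants.find(name='service')
    for role in keystone.roles.roles_for_user(user, tenant):
        print(role.name)  # e.g. an 'admin' role explains elevated visibility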
16:16:17 <vincent_hou> ok.
16:16:18 <jgriffith> but I think that explains it
16:16:24 <jgriffith> phewww
16:16:40 <jgriffith> I was very worried last night, that's obviously a HUGE security issue
16:16:46 <jgriffith> alright... moving on
16:16:56 <jgriffith> ollie: I saw you drop in :)
16:17:00 <jgriffith> ollie: welcome
16:17:04 <ollie> hi
16:17:10 <jgriffith> ollie: any thoughts on the OSLO update patch?
16:17:10 <vincent_hou> i need to check more about the permissions in keystone
16:17:19 <ollie> Duncan poked me,
16:17:23 <jgriffith> :)
16:17:31 <ollie> I won't get to that patch until next week I think
16:17:31 <jgriffith> DuncanT: is tired of me poking him :)
16:17:37 <ollie> :)
16:17:39 <jgriffith> ollie: ok, next week will be too late
16:17:48 <jgriffith> ollie: mind if I try to finish it out?
16:17:51 <hemna> what needs to be updated from oslo?
16:18:03 <jgriffith> hemna: well at the very least lockutils
16:18:09 <ollie> sorry about that, wrestling with some billing issues at the moment
16:18:10 <DuncanT> hemna: rpcnotifier
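For context, lockutils is the oslo-incubator module providing named inter-process locks; a hedged usage sketch follows (the decorator arguments are illustrative, so check the synced module for the exact signature):

    # Hedged sketch of the lockutils API being synced from oslo-incubator.
    from cinder.openstack.common import lockutils

    @lockutils.synchronized('volume-ops', 'cinder-', external=True)
    def do_exclusive_volume_work():
        # Only one holder of the named external lock runs this at a time.
        pass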
16:18:18 <hemna> want me to give it a go?
16:18:26 <ollie> sure,
16:18:26 <hemna> I can try working on that today
16:18:31 <jgriffith> hemna: sure if you have the bandwidth
16:18:32 <ollie> there's a bug open
16:18:42 <hemna> url?
16:18:47 <ollie> moment
16:18:49 <vincent_hou> folks, how about this one https://bugs.launchpad.net/cinder/+bug/1155512
16:18:50 <uvirtbot> Launchpad bug 1155512 in nova "Issues with booting from the volume" [Undecided,New]
16:19:02 <vincent_hou> yes
16:19:15 <vincent_hou> uvirtbot: u have me
16:19:16 <uvirtbot> vincent_hou: Error: "u" is not a valid command.
16:19:33 <jgriffith> vincent_hou: so a couple things;
16:19:46 <jgriffith> 1. did you set the security rules on the firewall to allow ping/ssh
16:19:46 <ollie> hemna: https://bugs.launchpad.net/cinder/+bug/1157126
16:19:49 <uvirtbot> Launchpad bug 1157126 in cinder "rpc notifier should be copied into openstack common" [Undecided,In progress]
16:19:57 <hemna> ollie, thanks
16:20:12 <jgriffith> 2. what image did you use?  Cirros as I recall has some issues sometimes
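On point 1, a hedged example of opening ICMP and SSH with the Grizzly-era python-novaclient v1.1 API (credentials are placeholders); without these rules an otherwise healthy instance won't answer pings:

    # Hedged example; assumes python-novaclient v1.1 of this era.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0')

    default = next(g for g in nova.security_groups.list()
                   if g.name == 'default')
    nova.security_group_rules.create(default.id, ip_protocol='icmp',
                                     from_port=-1, to_port=-1,
                                     cidr='0.0.0.0/0')
    nova.security_group_rules.create(default.id, ip_protocol='tcp',
                                     from_port=22, to_port=22,
                                     cidr='0.0.0.0/0')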
16:20:25 <vincent_hou> yes
16:20:44 <jgriffith> vincent_hou: regardless, failure to ping typically for me points to nova-net/quantum
16:20:51 <vincent_hou> the strange thing is it worked for booting from image, but not volume
16:21:08 <jgriffith> vincent_hou: k... I'll take another look at that one too then
16:21:14 <vincent_hou> hmm.
16:21:14 <jgriffith> vincent_hou: and you did use cirros?
16:21:17 <DuncanT> What did the console log from the boot show?
16:21:20 <vincent_hou> yes
16:21:35 <jgriffith> vincent_hou: I only test BFV with *real* images
16:21:46 <jgriffith> nothing against cirros... it's great
16:21:52 <vincent_hou> there is no error showing in the log
16:22:12 <jgriffith> vincent_hou: so you can't ping the private or floating IP from the compute node?
16:22:14 <avishay> hi all, sorry i'm late
16:22:20 <vincent_hou> private
16:22:23 <jgriffith> avishay: evening
16:22:31 <jgriffith> seems very strange
16:22:36 <jgriffith> ok.. I'll have a look at it
16:22:46 <jgriffith> vincent_hou: You going to be online for a bit?
16:23:03 <jgriffith> back to our regularly scheduled program....
16:23:08 <vincent_hou> after the meeting i will go to bed
16:23:18 <jgriffith> hemna: so you got what you need to take a look at the OSLO stuff?
16:23:29 <jgriffith> hemna: You should be able to just pull Ollies patch
16:23:33 <hemna> It looks like we just need to pull in rpcnotifier
16:23:41 <jgriffith> hemna: Well...
16:23:46 <jgriffith> not really
16:23:52 <hemna> ok
16:24:04 <hemna> I see his patch failed
16:24:05 <jgriffith> hemna: https://review.openstack.org/#/c/24774/
16:24:09 <hemna> I'll have to look into that
16:24:24 <jgriffith> Yeah, so my thought was... try to fix all the crap that broke
16:24:26 <jgriffith> :)
16:24:32 <jgriffith> sure you want this one still?
16:24:42 <hemna> hehe I'll see what I can do
16:24:46 <hemna> if I get stuck I'll ping you
16:24:53 <jgriffith> k... keep me posted
16:25:00 <hemna> ok will do
16:25:08 <hemna> I'll let you know either way throughout the day today
16:25:08 <vincent_hou> it is a huge patch
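For reference, oslo-incubator syncs of this era were driven by each project's openstack-common.conf, consumed by the incubator's update.py (run as python update.py ../cinder from an oslo-incubator checkout); the module list below is illustrative:

    # openstack-common.conf (illustrative module list)
    [DEFAULT]
    # Modules to copy from oslo-incubator into cinder/openstack/common
    modules=lockutils,notifier,rpc
    # The base module that holds the copied code
    base=cinder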
16:25:12 <jgriffith> So I have a question for everybody too....
16:25:29 <jgriffith> Have all of you submitted your driver changes?
16:25:32 <jgriffith> are we done with that now?
16:25:48 <hemna> we are done for G afaik.
16:25:58 <avishay> jgriffith: as far as i know, i am.  hopefully no more bugs pop up.
16:25:59 <jgriffith> We really need to be moving on to the bugs in the core project and docs
16:26:05 <jgriffith> avishay: I hear that :)
16:26:29 <jgriffith> I haven't gone back to my driver but I've been focusing on all the other project stuff so mine will be late
16:26:54 <jgriffith> but, I think we're at a point where we need to draw a line in the sand and get this thing out the door
16:27:26 <jgriffith> bswartz: how about from your end?
16:27:27 <bswartz> the NetApp driver has one bug I'd like to fix, but only if the fix is a small change. if it's a big change I'll wait
16:27:38 <jgriffith> bswartz: k
16:27:58 <jgriffith> #bugs
16:28:14 <jgriffith> #topic rc2 targets
16:28:18 <jgriffith> https://launchpad.net/cinder/+milestone/grizzly-rc2
16:28:26 <jgriffith> So this is what I have *officially*
16:28:42 <jgriffith> I could use some help triaging the bug list
16:29:27 <hemna> 7 on the list for RC2
16:29:35 <jgriffith> hemna: for now, correct
16:29:38 <avishay> jgriffith: will keep working on the bug list
16:29:55 <jgriffith> avishay: thanks, would like to see some other folks take a look as well
16:30:03 <rushiagr> jgriffith: except for the driver specific bug, can help there
16:30:15 <jgriffith> rushiagr: excellent
16:30:38 <jgriffith> anybody know of anything that's NOT already listed and is NOT a driver bug?
16:30:49 <jgriffith> by listed I mean, no bug filed yet?
16:31:07 <avishay> nope
16:31:11 <hemna> not I
16:31:20 <jgriffith> DuncanT: ?
16:31:26 <guitarzan> jgriffith: can I ask about the snapshot quota stuff?
16:31:32 <jgriffith> guitarzan: sure
16:31:43 <guitarzan> I'm not sure it's a bug, but definitely a leaked abstraction :)
16:31:59 <hemna> Do we have a pub yet that has Pliny on tap for the summit?   Should I file that as a feature request?
16:32:00 <jgriffith> guitarzan: english man.. english!  :)
16:32:07 <jgriffith> haha!
16:32:16 <jgriffith> guitarzan: soo....
16:32:16 <hemna> :P
16:32:18 <guitarzan> snapshots taking up volume gig quota
16:32:22 <jgriffith> I had planned to bring this up
16:32:25 <DuncanT> I'm not aware of any
16:32:38 <jgriffith> guitarzan: doesn't like using the same gigabytes quota for snaps and volumes
16:32:56 <jgriffith> I thought this was nice and clean....
16:33:08 <jgriffith> but I'm fine with changing it depending on what other folks think
16:33:18 <guitarzan> well, by "doesn't like" it's just going to prevent rackspace from switching to grizzly for a while
16:33:44 <jgriffith> guitarzan: which none of us like :)
16:34:02 <jgriffith> Any objection to me just making a separate snapshot-gb quota?
16:34:23 <guitarzan> that would work for us
16:34:23 <jgriffith> Or would you want to see a flag that says independent versus shared?
16:34:46 <guitarzan> I think the flag idea would be more complicated
16:34:51 <jgriffith> DuncanT: you're the other big SP in the room
16:35:02 <jgriffith> guitarzan: certainly would
16:35:41 <jgriffith> crickets... crickets everywhere
16:35:47 <ollie> snapshots and volumes sharing quota suits us,
16:35:56 <DuncanT> I'd have to ask around... the current system works fine for us but I can't comment on a split quota without checking
16:36:07 <guitarzan> here's the real issue for us
16:36:15 <ollie> but I can't think of a reason why we'd object to a change
16:36:16 <guitarzan> snapshot quotas are being introduced at the same time that backups are
16:36:38 <guitarzan> so we're cool with moving to backups
16:36:55 <DuncanT> Not having snapshot quotas *was* a big issue for us... trivial DoS
16:36:56 <guitarzan> but doing both (grizzly & backups) at the same time is going to be difficult
16:37:03 <guitarzan> DuncanT: agreed
16:37:11 <lakhindr_> Question: is there an assumption  anywhere that the two are separate? i.e quota for snapshot vs volume?
16:37:25 <DuncanT> lakhindr_: At the moment, no
16:37:31 <jgriffith> lakhindr_: it didn't even exist for snapshots
16:37:39 <jgriffith> until last week
16:38:01 <DuncanT> guitarzan: Would a flag to turn off snapshot quota entirely be enough for you?
16:38:06 <guitarzan> DuncanT: absolutely
16:38:17 <jgriffith> guitarzan: or what about just commenting out the line of code in the check :)
16:38:36 <guitarzan> jgriffith: yeah, that's my other option
16:38:42 <jgriffith> guitarzan: alright, well if a flag to disable it works for you...
16:39:08 <jgriffith> I'm more than comfortable with that, but I also don't want to come back in a month and add separate quota counts for snaps
16:39:27 <guitarzan> jgriffith: nah, the only reason that was a suggestion is because our snapshot quotas would be -1 :)
16:40:10 <guitarzan> we'd be really happy with optional snapshot quotas
16:40:24 <guitarzan> then we'll move to backups and you won't have to hear me talk about snapshots ever again
16:40:34 <jgriffith> guitarzan: k... both count and Gigabytes as options?
16:40:50 <guitarzan> jgriffith: sure, we want neither one
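The shape of the options being weighed, as a hedged cinder.conf sketch (quota_volumes, quota_snapshots, and quota_gigabytes match Grizzly-era defaults as best understood; a flag to disable snapshot quotas entirely is the proposal here, not an existing option):

    # cinder.conf quota knobs (illustrative values; -1 means unlimited)
    [DEFAULT]
    quota_volumes=10
    quota_snapshots=-1       # per-project snapshot count; -1 disables the check
    quota_gigabytes=1000     # currently shared by volumes *and* snapshots;
                             # a separate snapshot GB quota would split this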
16:40:53 <jgriffith> guitarzan: actually, since this is just for Rax, maybe you should write the patch :)
16:41:00 <guitarzan> hah
16:41:09 <guitarzan> maybe
16:41:30 <jgriffith> Ok... we'll figure that out later
16:41:35 <jgriffith> we should move on
16:41:40 <jgriffith> #topic summit-sessions
16:41:54 <jgriffith> So we're pretty full on summit proposals
16:42:07 <jgriffith> cut off is tomorrow, and we're already OVER our allotted time
16:42:24 <jgriffith> We are probably going to be able to get 10 sessions total
16:42:55 <kmartin> each 40 minutes?
16:43:08 <vincent_hou> how many do we have now
16:43:39 <jgriffith> vincent_hou: http://summit.openstack.org/cfp/topic/11
16:43:50 <jgriffith> kmartin: yes, 40 mins
16:44:09 <jgriffith> So we're at 15
16:44:23 <jgriffith> which means we'll be cutting a few things obviously
16:44:31 <avishay> jgriffith: how do we decide?
16:44:47 <jgriffith> avishay: So I get to decide :)
16:44:52 <jgriffith> avishay: but seriously
16:44:54 <hemna> :)
16:44:58 <bswartz> the benevolent dictator decides
16:45:00 <jgriffith> So I'll work on trying to consolidate some of them
16:45:18 <jgriffith> and working with the individuals who suggested them to see if we can compromise
16:45:33 <jgriffith> avishay: this has never been a problem in the past and I don't expect it to be this time around
16:45:43 <kmartin> can smaller ones be combined into one slot?
16:45:49 <bswartz> last conference we made excellent use of unconference sessions
16:45:51 <avishay> jgriffith: if yes, start sharpening your ax :)
16:45:58 <jgriffith> kmartin: yeah, that's exactly the point
16:46:09 <jgriffith> bswartz: and yes, that's our other ace up the sleeve
16:46:32 <jgriffith> I'll start working on it and probably pinging folks as I do
16:48:44 <DuncanT> I'm confused by two topics. "Cinder plugin interface" - that already works. "Independent scheduler service" - that already works
16:49:00 <vincent_hou> jgriffith: http://summit.openstack.org/cfp/details/130 this one is similar to one i submitted
16:49:16 <vincent_hou> can be combined
16:49:33 <bswartz> DuncanT: regarding the scheduler service -- I understand it's tied to the API service atm
16:50:20 <DuncanT> bswartz: I don't understand what the perceived tie is?
16:50:22 <guitarzan> yeah, the external driver thing is already a gimme
16:50:40 <DuncanT> bswartz: Can discuss it after the meeting if you like?
16:50:47 <bswartz> DuncanT: yes
16:51:50 <avishay> does a topic like "read only volumes" need a full session?  i think there are some other small topics that didn't get proposals (like volume import, for example)
16:52:10 <guitarzan> read only volumes, aka multi attach, may get pretty interesting
16:52:36 <bswartz> there are some subtleties to read only volume and multi attach
16:52:44 <jgriffith> sorry...
16:52:45 <bswartz> we could talk about it for 2 whole sessions I'm sure
16:52:47 <DuncanT> read-only volumes are also a way of implementing the snapshot semantics...
16:52:47 <kmartin> avishay: jgriffith and I talked and I had a little to add here regarding clustered host support in the drivers
16:53:20 <avishay> ok, i take it back :)
16:53:29 <jgriffith> hahaha....slooowwww down folks
16:53:32 <avishay> i didn't think about it too much
16:53:51 <jgriffith> Ok... so sorry I got pulled away for a minute and missed the excitement
16:53:59 <jgriffith> plugins is going to get axed
16:54:10 <jgriffith> R/O is going to be combined with multi-attach
16:54:26 <hemna> :)
16:54:29 <hemna> +1
16:54:32 <jgriffith> the plugin/external driver idea is interesting...
16:54:40 <guitarzan> it's also easy... :)
16:54:44 <jgriffith> The idea is that the drivers won't actually be in the OpenStack repo
16:54:53 <jgriffith> it'll be an external plug in module
16:55:03 <rushiagr> can someone give me link to plugins proposal?
16:55:12 <guitarzan> http://summit.openstack.org/cfp/details/28
16:55:24 <DuncanT> plugins just works, it is how we do our driver now...
16:55:26 <guitarzan> I'm not really sure what it's about though...our driver isn't in openstack
16:55:28 <jgriffith> This sounds great in theory
16:55:29 <rushiagr> guitarzan: k thanks
16:55:56 <kmartin> you can do that today, nothing is stopping someone distributing a cinder driver from their own repo today
16:56:02 <jgriffith> but I know that at least 90% of you would probably no longer be here if we did it
16:56:18 <guitarzan> jgriffith: I don't think that's true
16:56:21 <jgriffith> kmartin: you can but it's not as easy as it could be
16:56:24 <guitarzan> we already are writing our own drivers
16:56:26 <guitarzan> and it is easy
16:56:54 <kmartin> oh, we'll still be here
16:57:00 <jgriffith> sorry... what I mean is, to develop an architecture where things are just plugged in easily via configs
16:57:04 <jgriffith> and testing etc etc etc
16:57:25 <jgriffith> Hey... if folks want to talk about it, by all means I'm game
16:57:49 <DuncanT> "volume_driver = external.python.package.mydriver" in cinder.conf... is that not easily plugged in?
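A minimal sketch of the out-of-tree driver DuncanT describes (package and class names are hypothetical; assumes the Grizzly cinder.volume.driver.VolumeDriver base class):

    # my_company/cinder_driver.py -- hypothetical out-of-tree driver module.
    from cinder.volume import driver

    class MyExternalDriver(driver.VolumeDriver):
        """Skeleton for a driver shipped outside the OpenStack repo."""

        def create_volume(self, volume):
            # Call the backend's own management API here.
            raise NotImplementedError()

        def delete_volume(self, volume):
            raise NotImplementedError()

With the package on the Python path, volume_driver=my_company.cinder_driver.MyExternalDriver in cinder.conf loads it exactly as DuncanT says.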
16:58:04 <jgriffith> DuncanT: Yes,
16:58:14 <jgriffith> but testing, keeping up with changes etc etc
16:58:31 <jgriffith> Look, I'm not arguing against it, I just didn't think there would be much interest
16:58:39 <jgriffith> apparently there is so forgive me
16:58:44 <jgriffith> we'll talk about it
16:58:51 <DuncanT> I think if you aren't going to merge, then keeping up is your own problem... I don't see that there is much to talk about
16:58:53 <jgriffith> It'll make my job easier
16:59:05 <guitarzan> jgriffith: I think you're getting the sides mixed up :)
16:59:23 <jgriffith> guitarzan: oh... wouldn't be the first time :)
16:59:39 <guitarzan> we're saying, it's done, but I'm guessing we don't have someone on the other side of the argument present
16:59:50 <guitarzan> also, our hour is gone
16:59:56 <jgriffith> dang
17:00:24 <jgriffith> so real quick on that... there's another level it could be taken to, but anyway, another time
17:00:25 * bswartz points to the #openstack-cinder channel
17:00:32 <jgriffith> bswartz: indeed
17:00:34 <bswartz> no reason we can't continue discussion
17:00:42 <jgriffith> ok... everybody run across the hall!
17:00:46 <jgriffith> #endmeeting