16:00:18 <jgriffith_> #startmeeting
16:00:19 <openstack> Meeting started Wed Jun 27 16:00:18 2012 UTC.  The chair is jgriffith_. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:33 <jgriffith_> #topic status update
16:00:46 <jgriffith_> So for those that don't know... we're really close
16:00:49 <liamk> hi
16:00:57 <clayg> looks like one of the big two landed?
16:01:04 <jgriffith_> clayg: Yep
16:01:21 <jgriffith_> So the only one that's really holding things up at this point is mine :(
16:01:28 <clayg> heh
16:01:36 <jgriffith_> https://review.openstack.org/#/c/8073/
16:02:39 <sleepsonthefloor> hello
16:02:57 <clayg> hrmmm... ec2 tests?
16:03:21 <jgriffith_> clayg: yes
16:03:39 <jgriffith_> I'm stuck on terminate instances when a volume is attached
16:03:50 <jgriffith_> There's a failing rpc call
16:03:52 <clayg> do you know what needs to be fixed, or do you need help tracking it down?
16:04:26 <jgriffith_> clayg: I could use help tracking it down
16:04:44 <rnirmal> jgriffith_: so is the detach not happening on terminate?
16:04:58 <jgriffith_> clayg: I have it narrowed down to compute/api and I think it's calling the volume manager somewhere, but it's like spaghetti
16:05:17 <jgriffith_> rnirmal: I suspect it's actually the detach call
16:05:54 <sleepsonthefloor> jgriffith_: steps to reproduce?
16:06:43 <jgriffith_> sleepsonthefloor: run_tests.sh nova.tests.api.ec2.test_cinder_cloud:CinderCloudTestCase.test_stop_start_with_volume
16:07:20 <jgriffith_> or .....:CinderCloudTestCase.test_stop_with_attached_volume
16:08:27 <jgriffith_> So some background on this test....
16:08:59 <jgriffith_> In the setup it stubs out all of the API calls to volume/fake.API
16:09:27 <jgriffith_> I think we need both tests still since folks will be using nova-volume as is for a while
16:10:45 <jgriffith_> Ok... so if anybody has time to look at that and help me figure out what's up that would be great
16:11:01 <jgriffith_> Once we get this change set pushed we're pretty much there for F2 I believe
16:11:26 <jgriffith_> #topic cinder hack day
16:11:49 <jgriffith_> So Piston Cloud has been kind enough to host a Cinder Hack Day in SF next week
16:12:07 <jgriffith_> It will be at the Piston Cloud offices in San Francisco on Tuesday July 3rd
16:12:29 <clayg> that's cool
16:12:32 <jgriffith_> Anybody who wants to write some Cinder code is welcome and encouraged to attend
16:12:44 <jgriffith_> clayg: Yeah, it should be really cool
16:13:06 <jgriffith_> The idea is to generate some interest, get some new recruits and build a bit of momentum
16:13:32 <jgriffith_> There are some folks from Rising Tide, Rackspace and others who will be there
16:14:07 <jgriffith_> Depending on how this goes we may want to think about similar events in the future
16:14:22 <jgriffith_> Even doing via IRC remotely maybe?
16:15:01 <rturk> it'd be a lot easier to get some Ceph devs there if it were on IRC, I'd like to see that
16:15:17 <jgriffith_> rturk: Understood
16:15:23 <rturk> (since we're in LA mostly)
16:15:45 <DuncanT> There are timezone considerations for us, but it sounds like a good idea
16:16:01 <liamk> yeah
16:16:13 <jgriffith_> So would there be interest in having an IRC channel up for the event next week?
16:16:21 <rturk> absolutely!
16:16:29 <liamk> yup
16:16:34 <jgriffith_> Ok, I'll see what I can do and keep folks updated
16:16:38 <rturk> cool
16:16:51 <jgriffith_> So this assumes that you'll be setting the day aside to work on Cinder code
16:17:18 <rturk> agreed - just saves on the travel time
16:17:36 <jgriffith_> rturk: yep, sounds good
16:17:51 <jgriffith_> #action jgriffith to setup IRC channel for Cinder Hack Day next week
16:18:22 <jgriffith_> I'll send an email out with info on this
16:18:51 <jgriffith_> #topic plans for existing nova volume code
16:19:14 <jgriffith_> Ok, so this is something that maybe isn't as clear to folks as it should be
16:19:32 <jgriffith_> I just want to go through it as a team here to make sure we're all singing the same song
16:20:04 <jgriffith_> So the goal should be to have Cinder 'ready' for F2, and in OpenStack as the default volume service for the Folsom release
16:20:14 <jgriffith_> Existing nova volume code will NOT go away yet
16:20:23 <jgriffith_> It will still be there for folks that rely on it
16:20:40 <jgriffith_> But I would like to see it go away by G release depending on how things go
16:20:57 <jgriffith_> Also, I do have SERIOUS concerns about maintaining parity between both services
16:21:16 <jgriffith_> My thought is Cinder moves forward, nova-v stays where it is
16:21:28 <jgriffith_> Anybody have different ideas/thoughts in mind?
16:21:38 <jgriffith_> Or concerns, objections...
16:22:09 <liamk> Makes sense
16:22:29 <rturk> sounds reasonable
16:23:00 <jgriffith_> Ok... there was some misconception I think that we were going to completely rip out nova-v for Folsom
16:23:14 <jgriffith_> This caused some stress I think, and rightfully so
16:23:54 <jgriffith_> #topic priorities for Cinder
16:24:15 <jgriffith_> I wanted to get some input from folks on what to attack first for Cinder after F2
16:24:32 <jgriffith_> In particular this might be good for laying out the hack day
16:24:52 <jgriffith_> I have some ideas of my own (boot from volume)
16:25:04 <jgriffith_> But wanted to open up for other ideas/input here
16:25:10 <jgriffith_> BTW...
16:25:27 <jgriffith_> When I say BFV I'm referring to the Ceph document as the model
16:25:40 <jgriffith_> If you haven't seen this I'll dig up a copy and send it out
16:26:16 <rnirmal> I'd like to see all the status stuff cleaned up
16:26:34 <jgriffith_> rnirmal: details?
16:26:39 <rnirmal> things like when is a delete possible, an attach/detach etc... there's been several bugs around that
16:26:52 <jgriffith_> rnirmal: Oh yes... EXCELLENT point
16:27:01 <rnirmal> just status of the volumes in general.. it's a mess right now
16:27:15 <clayg> jgriffith_: sorry had a walk up, I think nova should adopt a "no new features in volumes" at f-2, and have _major_ deprecation warnings pending removal in g-1 or g-2
16:27:33 <clayg> ^ or close there by
16:27:33 <uvirtbot> clayg: Error: "or" is not a valid command.
16:27:49 <clayg> ubirtbot: I hate you
16:27:51 <jgriffith_> clayg: I would agree with that
16:27:56 <rnirmal> clayg: I think that's what vishy had agreed on early on... only bug fixes go back in the nova-volumes
16:28:09 <jgriffith_> clayg: You and virtbot have quite a relationship I've noticed!
16:28:14 <clayg> meh, "security" fixes - bugs are "expected behavior" ;)
16:28:22 <jgriffith_> haha
16:29:07 <jgriffith_> rnirmal: So back to your point
16:29:09 <clayg> ok, sorry that's it from the peanut gallery, I'll try to look at that cloudcontroller test
16:29:19 <jgriffith_> clayg: NP
16:29:35 <jgriffith_> I agree, the bugs will be the priority
16:29:59 <rnirmal> yeah, simple cleanup like adding a status class or something... we refer to statuses everywhere as just string literals
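
A minimal sketch of the kind of status class rnirmal is suggesting here; the constant names below are illustrative rather than a survey of the literals actually used in the tree:

    # Hedged sketch only: collect the volume status strings that are currently
    # sprinkled around as string literals into one place. Names are illustrative.
    class VolumeStatus(object):
        CREATING = 'creating'
        AVAILABLE = 'available'
        ATTACHING = 'attaching'
        IN_USE = 'in-use'
        DETACHING = 'detaching'
        DELETING = 'deleting'
        ERROR = 'error'

        # e.g. the statuses from which a delete should be allowed
        DELETABLE = (AVAILABLE, ERROR)

    # usage: if volume['status'] not in VolumeStatus.DELETABLE: raise ...
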
16:30:09 <jgriffith_> I think the only "important" one missing from the list right now is the snapshot one (I'll have to look it up again)
16:30:42 <jgriffith_> rnirmal: Agreed
16:30:46 <jgriffith_> and this one:
16:30:49 <jgriffith_> #link https://bugs.launchpad.net/nova/+bug/1008866
16:30:51 <uvirtbot> Launchpad bug 1008866 in nova "Creating volume from snapshot on real/production/multicluster installation of OpenStack is broken" [Medium,Triaged]
16:30:56 <clayg> urm... which "list"
16:31:16 <jgriffith_> clayg: Sorry...
16:31:20 <jgriffith_> #linkk https://bugs.launchpad.net/cinder/+bugs
16:31:40 <jgriffith_> Hmmm... I thought virtbot would yell at me for typing poorly
16:31:44 <rnirmal> jgriffith_: yup that one's a much bigger one for the default driver.... I don't think that's a problem for the other drivers
16:32:56 <jgriffith_> rnirmal: I definitely place it high on the priority list
16:33:17 <jgriffith_> Ok... so I think there's more than enough work laid out just in the defects/enhancements
16:33:41 <jgriffith_> delete issues are going to be the biggest win IMO
16:34:02 <jgriffith_> anything folks would like to add at this point?
16:34:18 <jgriffith_> Or need some clarification on something?
16:34:33 <rnirmal> one more thing... there's going to be folks around to review and approve changes for the hack day right ?
16:34:36 <rturk> jgriffith_: everything I care about is in that doc you mentioned
16:35:15 <jgriffith_> rturk: :)  You and me both...
16:35:21 <jgriffith_> rturk: To a point
16:35:42 <jgriffith_> rturk: We'll never get the stuff in the doc to work well without fixing the items rnirmal mentioned
16:36:03 <rturk> nod
16:36:17 <jgriffith_> Ok... I guess this is a good time for open discussion?
16:36:20 <rnirmal> jgriffith_: one more.. I seem to have a huge list today... are there any plans to work on the docs?
16:36:33 <jgriffith_> rnirmal: Thanks for reminding me!!!
16:36:46 <jgriffith_> I chatted with annegentle yesterday about that very topic
16:37:05 <jgriffith_> I have a list of todo's regarding the wiki pages, and doc updates
16:37:32 <jgriffith_> I've targeted F3 and will be opening up entries in the docs project for them
16:38:00 <jgriffith_> #topic docs
16:38:09 <jgriffith_> Might as well have a docs topic eh...?
16:38:14 <rnirmal> sure
16:38:24 <rnirmal> so what you mentioned is dev docs + api docs right?
16:38:30 <jgriffith_> rnirmal: correct
16:38:54 <jgriffith_> rnirmal: with an emphasis on api, config and maintenance docs as the priority
16:39:09 <jgriffith_> I would LOVE to get some good dev docs out there
16:39:26 <jgriffith_> but right now I'd prioritize people being able to use/run Cinder
16:39:30 <rnirmal> I think we are at a place to generate the sphinx docs
16:39:40 <rnirmal> not sure if it has any content :)
16:40:29 <jgriffith_> rnirmal: Sounds like a good thing for you to volunteer for ;)
16:41:11 <rnirmal> yeah I'll do some
16:41:23 <jgriffith_> :)
16:41:27 <jgriffith_> Sorry, I couldn't resist
16:41:47 <jgriffith_> So here's what I have as a list of official docs:
16:41:53 <jgriffith_> 1. migration (Essex to Folsom)
16:41:53 <jgriffith_> 2. Initial install/config
16:41:53 <jgriffith_> 3. API usage
16:41:58 <jgriffith_> 4. Cinderclient
16:42:00 <jgriffith_> 5. some sort of design description/review
16:42:32 <jgriffith_> Anybody see anything blatantly missing?
16:43:06 <rnirmal> nope looks good
16:43:18 <jgriffith_> excellent....
16:43:31 <jgriffith_> Ok... now I think that's all I had
16:43:36 <jgriffith_> #topic open discussion
16:44:00 <jgriffith_> anybody have anything?
16:44:07 <rnirmal> so looks like all the devstack changes got merged in
16:44:08 <nijaba> dhellmann: and I would like to talk about metering (ceilometer)
16:44:29 <jgriffith_> nijaba: Yes!!  Sorry, I almost forgot somehow
16:44:42 <jgriffith_> Just for an update for everybody....
16:45:01 <jgriffith_> I talked with the ceilometer team a bit last week about metering in Cinder
16:45:26 <jgriffith_> Particularly usage/create/delete stats on a per tenant basis
16:45:39 <jgriffith_> nijaba: I'll let you and dhellmann elaborate...
16:45:54 <nijaba> so, what we need to fetch is two fold
16:46:11 <nijaba> 1/ we need to collect usage per tenant on a regular basis
16:46:21 <nijaba> it seems that the best approach for this is to use the api
16:46:36 <nijaba> but that leaves the question of how to authenticate ceilometer
16:47:19 <nijaba> since our architecture offers local agents, we are proposing to deploy one on the same hosts where cinder is deployed
16:47:24 <nijaba> thus making local calls
16:47:53 <rnirmal> nijaba: so that's going to be polling the cinder api
16:47:58 <nijaba> 2/ we need to capture create/modify/delete volume event
16:48:07 <nijaba> rnirmal: yes, that would be the idea
16:48:33 <nijaba> for 2/ it seems that you are not currently generating any events on any queue
16:48:49 <nijaba> so we would like to test the idea of us patching cinder to do so
16:49:03 <rnirmal> yeah we had some events added to nova-volumes... but the review got abandoned in cinder
16:49:11 <rnirmal> that's something we should be able to get back in
16:49:36 <rnirmal> nijaba: this is similar to notification events in nova
16:49:41 <rnirmal> or essentially the same
16:49:56 <nijaba> rnirmal: yes, very much
16:50:26 <nijaba> we would just be proposing some patches to do so, so that we can capture the events from our agent and send them to our collector
16:50:55 <nijaba> rnirmal: but if there are patches already in the works, we would gladly have a look at them :)
16:51:25 <rnirmal> nijaba: #link https://review.openstack.org/#/c/7517/
16:51:48 <nijaba> rnirmal: great!
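
For reference, a rough sketch of the kind of notification rnirmal and nijaba are describing, assuming the openstack-common notifier module gets synced into the cinder tree the way nova uses it; the payload fields are illustrative and not taken from the review linked above:

    # Hedged sketch only: emit a volume usage event onto the notification
    # queue so a ceilometer agent can pick it up. Assumes the openstack-common
    # notifier is available in the tree; payload fields are illustrative.
    import socket

    from cinder.openstack.common.notifier import api as notifier_api


    def _notify_volume_event(context, volume, event_suffix):
        # event_type ends up as e.g. 'volume.create.end' on the queue
        payload = {
            'volume_id': volume['id'],
            'tenant_id': volume['project_id'],
            'size': volume['size'],
            'status': volume['status'],
        }
        notifier_api.notify(context,
                            'volume.%s' % socket.gethostname(),
                            'volume.%s' % event_suffix,
                            notifier_api.INFO,
                            payload)
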
16:52:08 <nijaba> so I take it you guys view 1 and 2 favorably?
16:52:26 <jgriffith_> nijaba: +1
16:52:29 <rnirmal> I'm ok with 2... I'm a little concerned about the polling in 1
16:52:41 <nijaba> rnirmal: we are very open to other suggestions
16:52:54 <jgriffith_> nijaba: I'm assuming the frequency of the polling is configurable....
16:52:58 <rnirmal> I haven't followed the ceilometer discussions
16:53:02 <nijaba> jgriffith_: it is, yes
16:53:15 <jgriffith_> nijaba: Then I'm ok with it, as long as it's user configurable
16:53:17 <nijaba> on a per agent basis
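
As a hedged illustration of what "user configurable, per agent" could look like, using an oslo-style config option; the option name and default are made up for this example, not the real ceilometer flag:

    # Hedged sketch only: a per-agent polling interval option. The name and
    # default below are hypothetical, not taken from ceilometer.
    from oslo.config import cfg

    polling_opts = [
        cfg.IntOpt('volume_usage_poll_interval',
                   default=600,
                   help='Seconds between polls of the volume API for '
                        'per-tenant usage data'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(polling_opts)
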
16:53:24 <rnirmal> so I don't think I can provide any valuable input without understanding that
16:53:48 <nijaba> rnirmal: what would you need?
16:54:02 <rnirmal> nijaba: so that would be getting usage for all the tenants right?
16:54:11 <nijaba> rnirmal: yes
16:54:31 <rnirmal> so this would have to be some sort of admin-only API call
16:54:35 <nijaba> the idea is to be able to collect all the information so that a billing engine can be fed from a single source
16:54:44 <nijaba> https://launchpad.net/ceilometer
16:55:16 <nijaba> rnirmal: yes, admin type call, so that we are not exposing other user's information
16:55:26 <rnirmal> ok that sounds good
16:56:03 <nijaba> authentication would be done by the fact that our agent is local, if that's good enough
16:56:36 <DuncanT> Personally I'd prefer a second, admin-only endpoint
16:56:50 <rnirmal> DuncanT: +100 on that
16:56:57 <DuncanT> Can do whatever auth people want then (by port, wsgi plugin, firewall, whatever)
16:57:08 <jgriffith_> DuncanT: +1
16:57:16 <nijaba> DuncanT: sounds indeed much better
16:57:33 <nijaba> DuncanT: so that would be a second patch for us to propose, I guess?
16:57:39 <rnirmal> DuncanT: it's easy to do now.. just duplicate the service and run it on a different endpoint with only the required extensions
16:57:45 <rnirmal> but not clean
16:57:51 * dhellmann is sorry he's late
16:58:04 <dhellmann> a second admin API does make more sense
16:58:31 <DuncanT> rnirmal: Sounds like we can get something stood up for nijaba et al to start using, and change the internals ourselves later then?
16:58:44 <rnirmal> DuncanT: yeah that's what I was thinking
16:58:47 <dhellmann> are the openstack projects moving away from communicating internally via the RPC mechanism, or did cinder just decide not to implement that? I'm not sure about the history.
16:59:01 <rnirmal> for now we can just have an extension with admin_only policies for them to use
16:59:09 <jgriffith_> dhellmann: cinder just didn't do it (yet)
16:59:21 <dhellmann> jgriffith_, ok
17:00:18 <nijaba> btw, we are hanging out in #openstack-metering (as well as #openstack) if you guys have more questions
17:00:35 <rnirmal> great thanks nijaba
17:00:42 <nijaba> thank you!
17:00:49 <dhellmann> thanks for your support, everyone!
17:00:57 <nijaba> +1
17:01:22 <jgriffith_> Thanks dhellmann and nijaba
17:01:38 <jgriffith_> By the way... for folks that decide to try the tests I mentioned earlier....
17:01:54 <jgriffith_> There's a merge problem you'll need to fix in nova/tests/api/ec2/test_cinder_cloud.py
17:02:54 <jgriffith_> s/from nova import rpc/from nova.openstack.common import rpc/
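
In other words, the one-line change at the top of that test file looks like this:

    # before (breaks after the merge):
    # from nova import rpc
    # after:
    from nova.openstack.common import rpc
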
17:03:04 <jgriffith_> Sorry about that....
17:03:26 <jgriffith_> Alright... anybody have anything else?
17:03:54 <jgriffith_> if anything comes up, or you have questions about hack day next week hit me on IRC
17:04:11 <jgriffith_> Somebody "stole" my nick so you can find me as jgriffith_ or jdg
17:04:22 <rturk> jgriffith_: will do, thx
17:04:37 <jgriffith_> Ok... thanks everyone!!!!
17:04:41 <jgriffith_> #endmeeting