16:01:08 <jd___> #startmeeting
16:01:09 <jd___> #meetingname ceilometer
16:01:09 <openstack> Meeting started Thu May 24 16:01:08 2012 UTC.  The chair is jd___. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:09 <jd___> #link https://lists.launchpad.net/openstack/msg12156.html
16:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:11 <openstack> The meeting name has been set to 'ceilometer'
16:01:19 <jd___> #topic actions from previous meetings
16:01:50 <jd___> flacoste: something to add for your action item from last meeting?
16:01:57 <jd___> dhellmann: also ? :)
16:02:12 <dhellmann> sorry I'm late
16:02:17 <jd___> no problem
16:02:29 <flacoste> jd___: nope, comments sent to the list
16:02:30 <dhellmann> if we have time at the end can we talk about a way to share experimental code?
16:02:35 <flacoste> jd___: and i think we reached agreement
16:02:43 <clayg> o/
16:02:53 <jd___> dhellmann: yep
16:02:59 <dhellmann> thanks
16:03:22 <jd___> #info flacoste comments sent to the list
16:03:41 <jd___> #topic messaging queue system to use
16:03:49 <jd___> now let's discuss the real stuff :)
16:03:52 <dhellmann> :-)
16:04:20 <dhellmann> as I think I mentioned on the list, we should try to avoid dictating a specific implementation and limit ourselves to basic requirements for a message bus
16:04:26 <jd___> #link https://lists.launchpad.net/openstack/msg11937.html
16:04:40 <jd___> dhellmann: is nova.rpc mechanism enough?
16:04:48 <jd___> I like the idea of using it, personally
16:04:54 <dhellmann> if I can get this "worker" change approved it should do exactly what we need
16:05:16 <dhellmann> I posted a new patch today with qpid tests but I need to push for more reviews
16:05:23 <jd___> well your change seems to be on a good path
16:05:59 <jd___> and if I understood correctly we'll get qpid/zmq/rabbit for free by using nova.rpc, so everybody is likely to be happy?
16:06:31 <dhellmann> that is also my understanding
16:07:08 <flacoste> sounds good
16:07:15 <jd___> ok so is everybody happy to say that we agree to use the nova.rpc mechanism and let the user choose the messaging bus he wants?
16:07:47 <dhellmann> with the stipulation that when nova.rpc moves into the common library we should actually use *that* version
16:07:59 <dhellmann> depending on the schedule, of course
16:08:02 <jd___> yeah, obviously :)
16:08:21 <dhellmann> ok, I'm happy with that decision
16:08:43 <jd___> #agreed use nova.rpc as a messaging bus and let the user choose which backend he wants (qpid, rabbit, zmq…)
16:08:57 <jd___> #agreed use nova.rpc from openstack.common when it moves there
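As a rough illustration of the decision just agreed, here is a minimal sketch of sending a metering message over nova.rpc, assuming the rpc.cast() interface nova exposed at the time; the 'metering' topic and the record_metering_data method name are illustrative, not agreed names.

    # Hedged sketch only: topic and method names are illustrative.
    from nova import context
    from nova import rpc

    def send_metering_message(counter_name, volume, resource_id):
        ctxt = context.get_admin_context()
        # rpc.cast() is fire-and-forget; the configured backend
        # (rabbit, qpid, or zmq) is selected via nova configuration.
        rpc.cast(ctxt, 'metering', {
            'method': 'record_metering_data',  # hypothetical collector method
            'args': {'counter': counter_name,
                     'volume': volume,
                     'resource_id': resource_id},
        })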
16:09:07 <jd___> anything else?
16:09:43 <dhellmann> do we need to make sure the requirements nick posted to the list are met by the systems supported by nova.rpc?
16:09:46 <dhellmann> I think they are, but...
16:10:05 <flacoste> well, not all of them are
16:10:11 <flacoste> by all queues
16:10:22 <dhellmann> true. if the user has persistent queues disabled in rabbit then we can't guarantee delivery.
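To make dhellmann's point concrete: with a kombu-based rabbit backend, delivery guarantees need both a durable queue and persistent messages. A minimal sketch, with illustrative exchange/queue names and broker URL:

    # Sketch only; names and the broker URL are examples.
    from kombu import Connection, Exchange, Queue

    metering_exchange = Exchange('metering', type='topic', durable=True)
    metering_queue = Queue('metering', exchange=metering_exchange,
                           routing_key='metering.#',
                           durable=True)  # queue survives a broker restart

    with Connection('amqp://guest:guest@localhost//') as conn:
        # Declare the durable queue, then publish a persistent message;
        # without both, delivery can be lost on a broker restart.
        metering_queue(conn.channel()).declare()
        producer = conn.Producer(exchange=metering_exchange)
        producer.publish({'counter': 'instance', 'volume': 1},
                         routing_key='metering.instance',
                         delivery_mode=2)  # persistent message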
16:10:36 <dhellmann> flacoste, did you have another example?
16:10:37 <flacoste> and the ha story of rabbit isn't that great, last i heard
16:10:46 <jd___> well, if the user shoots himself in the foot… :)
16:10:53 <jd___> what about zmq?
16:10:53 <flacoste> it required shared storage
16:10:56 <dhellmann> I keep hearing that, but no one has any details. I'm not an expert, so I don't know one way or the other.
16:11:10 <jd___> hm I don't think zmq has persistence
16:11:12 <flacoste> my understanding is that it's easier to build a ha message queue system with zmq
16:11:21 <flacoste> but yeah, i think it lacks some other requirements
16:11:35 <dhellmann> so maybe that delivery requirement shouldn't be *our* requirement, but it may be a *user* requirement that we should support
16:11:35 <clayg> well zmq doesn't really have a "queue" (independent service that "stores" the messages)
16:11:48 <dhellmann> right, clayg, that's my understanding
16:11:50 <jd___> as nijaba wrote "Not sure this list is exhaustive or viable" so it's likely that you can't satisfy *every* point with only one message queue
16:11:57 <jd___> you'll have to make tradeoffs
16:11:57 <dhellmann> someone could build a message queue server with zmq but it isn't one by itself
16:12:04 <dhellmann> exactly
16:12:30 <jd___> but I don't think we should decide which tradeoffs to make for the user
16:12:35 <jd___> so nova.rpc is a good call :)
16:12:36 <dhellmann> +1
16:13:17 <jd___> anything else?
16:13:30 <clayg> jd___: where are you at on your nova branch to support an independent volume service?  Do the volume tables get ripped out?  Will nova-compute be the only component to talk to volumes?  What's the attach workflow?  Who keeps track of what's attached where?
16:13:45 <clayg> ^ any of those
16:13:45 <uvirtbot> clayg: Error: "any" is not a valid command.
16:13:50 <clayg> whoa...
16:14:32 <dhellmann> is that work related to ceilometer?
16:14:34 <jd___> clayg: I don't follow you here?
16:14:59 <clayg> am I in the wrong meeting?
16:15:12 <dhellmann> maybe. :-) this is the metering group meeting
16:15:12 <clayg> when will nova be ready to consume an independent volume service (e.g. cinder)?
16:15:24 <clayg> I guess I am in the wrong meeting then :D
16:15:31 <jd___> I think you're in the wrong meeting :D
16:16:02 <jd___> ok, so, except volumes :D anything else? :)
16:16:18 <dhellmann> experimental branches?
16:16:27 <jd___> ok let's change topic then
16:16:35 <jd___> #topic message bus usage described in architecture proposal V1
16:16:50 <jd___> I don't think dhellmann has anything to add about it :)
16:16:55 <dhellmann> heh
16:17:33 <dhellmann> I haven't worked out the implementation details of sending the actual metering messages, yet. Should those be cast() calls?
16:17:54 <jd___> dhellmann: good question
16:18:06 <dhellmann> or should we use a topic publisher like the notifications do?
16:18:19 <jd___> I'm more in favor of copying notifications
16:18:20 <dhellmann> it seems like metering messages are another case where we might want multiple subscribers to see all of the messages
16:18:28 <dhellmann> yeah, I'm leaning that way, too
16:18:47 <jd___> it's just messaging so…
16:18:55 <mnaser> I understand that metering messages will be sent over a topic, but will they also be exposed/stored in a database?
16:19:11 <dhellmann> mnaser, yes
16:19:12 <jd___> mnaser: that's the job of the collector, yes
16:19:29 <mnaser> Fantastic. I've been working with the Nova notification system, and being able to request information is much better.
16:19:50 <jd___> dhellmann: we use only one topic?
16:19:53 <dhellmann> the collector will write the info to a database and the api service will allow for some basic queries
16:20:26 <dhellmann> we could do it like notifications do and publish multiple times to "metering" and "metering + counter_id"
16:20:32 <dhellmann> so metering and metering.instance for example
16:20:52 <jd___> makes sense
16:20:55 <dhellmann> that doubles the traffic, but the exchange should just discard the message if no one is listening
16:21:07 <jd___> #info we could do it like notifications do and publish multiple times to "metering" and "metering + counter_id"
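A sketch of that double-publish scheme, reusing a topic producer like the one in the earlier rabbit example; the topic names mirror the notification convention and are not final:

    # Sketch only: one copy on the catch-all topic, one on the
    # counter-specific topic; the exchange drops copies no one is bound to.
    def publish_sample(producer, counter_id, payload):
        for routing_key in ('metering', 'metering.%s' % counter_id):
            producer.publish(payload, routing_key=routing_key)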
16:22:26 <jd___> anything else on this?
16:22:45 <jd___> I find the current architecture good enough and clear so… :)
16:22:52 <jd___> good work from dhellmann \o/ :)
16:23:00 <dhellmann> thanks!
16:23:08 <mnaser> For the backend storage, are there plans to use RRD, or how exactly are you guys thinking of doing it?
16:23:32 <dhellmann> we haven't worked that out yet, I think that's the topic we need to discuss this week if I remember the meeting schedule correctly
16:23:40 <jd___> mnaser: we don't know yet, there's a meeting for this later in June (see http://wiki.openstack.org/Meetings/MeteringAgenda)
16:23:49 <jd___> #topic Open discussion
16:24:11 <jd___> free fight time
16:24:22 <jd___> dhellmann: experimental branches then?
16:24:28 <dhellmann> we've had some trouble with sharing experimental code through gerrit because of the rules about test coverage
16:24:35 <mnaser> I see. I can try and figure that out, because we currently have a full collector service that uses nova (only works with the XenServer driver) and also exposes data using a REST API + keystone authentication. I'll try to see how we can bring the Java code to Python and maybe help
16:24:43 <mnaser> We are using RRD to store the data however
16:24:51 <dhellmann> personally I think it's too early to be so strict with testing, but I'm OK with it if we can work out another way to share code
16:24:55 <dhellmann> maybe just github branches?
16:25:42 <dhellmann> I'm worried that if we use separate branches we might end up with messy merges later, esp. with rebasing
16:25:59 <jd___> mnaser: if you can share with us your experience we'd be glad indeed
16:26:17 <dhellmann> +1, it would be great to hear about your experiences with that
16:26:20 <jd___> dhellmann: I'm on your side on this but heh…
16:26:37 <jd___> I've already lost a lot of time because of merges and rebases I had to do
16:26:39 <mnaser> jd___: I'll be looking forward to future meetings, I'll be adding them to my calendar
16:26:45 <dhellmann> like I said, I'm OK with keeping the gerrit code "clean" and tested, but -- right
16:26:50 <jd___> mnaser: great, thanks!
16:26:59 <dhellmann> mnaser: good!
16:27:43 <dhellmann> jd___, maybe when we agree we like an experimental branch we rebase it and submit it, then delete the branch on github?
16:28:08 <dhellmann> I guess we can just leave all of the experimental branches out there
16:28:26 <jd___> the problem is that we may end up basing code on something experimental
16:28:27 <dhellmann> I don't know the best way to handle it
16:28:33 <dhellmann> right
16:28:42 <jd___> I mean rebasing on something that has been rebased is a nightmare, even with git
16:28:52 <dhellmann> well, maybe we're reaching a point where this problem isn't going to be so severe
16:29:01 <jd___> dhellmann: this is what I think
16:29:08 <jd___> it has been a problem for one or two big commits
16:30:05 <dhellmann> yeah, true
16:30:05 <jd___> but the current architecture seems solid enough for now
16:30:16 <jd___> I don't think we'll encounter the problem again
16:30:28 <dhellmann> and if we think it might come up, we can have more discussions on the mailing list
16:30:35 <jd___> yep
16:30:38 <dhellmann> I wasn't looking for something formal, just some ideas
16:30:48 <dhellmann> ok, I'm happy with that
16:30:53 <dhellmann> what else do we have?
16:30:54 <jd___> :)
16:31:06 <mnaser> Does the project currently share the main OpenStack mailing list or does it have a specific one?  If it's the main list, are there any tags in the subjects so I can set up a filter?
16:31:19 <flacoste> mnaser: [metering] on the main list
16:31:21 <dhellmann> we use the main mailing list and messages are tagged with [metering] in the subject
16:31:22 <jd___> dhellmann: do you have an idea of what you'll work on in the next few days, if you work on this at all?
16:31:28 <mnaser> Perfect, thank you.
16:31:53 <dhellmann> I am going to work on more notification converters
16:32:09 <dhellmann> I have the branch with plugins for the polling agent, too
16:32:20 <jd___> ok
16:32:30 <jd___> I've pushed a plugin for floating IP today
16:32:48 <dhellmann> our alpha 1 period ends next week and I need to try to get at least a branch going that logs events to the console, if not sending metering messages in some format
16:32:56 <dhellmann> excellent, I'll have a look after lunch
16:33:10 <jd___> ok
16:33:26 <dhellmann> the topic for next week is "API message format" but I'm going to just do something simple for now and plan to change it later if we don't like what I put together :-)
16:33:48 <jd___> good :)
16:34:22 <dhellmann> how do we want to track to-do items? bugs in launchpad?
16:34:32 <dhellmann> I feel like we're at the point where we could open some tickets for things like these plugins
16:34:33 <jd___> sounds like a good idea
16:34:39 <jd___> yep
16:34:53 <jd___> #action jd___ open tickets in launchpad for plugins, etc…
16:35:05 <jd___> I'll do it so we can assign tickets and not duplicate work
16:35:10 <dhellmann> good plan
16:35:25 <mnaser> Just as a suggestion, would it be good to propose a nova API extension that provides metrics Ceilometer can consume (and each driver can write its own code to provide those metrics)?
16:35:56 <dhellmann> that's more or less what the plugins are for
16:36:16 <mnaser> I see
16:36:19 <dhellmann> you can add code to the agent that polls, and the results are published so the collector can see them
16:36:35 <dhellmann> and you can add listeners in the collector for events like notifications
16:36:55 <dhellmann> one goal was to reduce the number of changes needed within the other projects
16:37:14 <dhellmann> we may need some, eventually, but we want as few as possible
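For flavor, a hypothetical shape for such an agent plugin; the Pollster class name, the get_counters() signature, and the Counter fields are assumptions for illustration, not a committed interface:

    # Hypothetical plugin shape; names and fields are illustrative.
    from collections import namedtuple

    Counter = namedtuple('Counter', 'name type volume resource_id')

    class FloatingIPPollster(object):
        """Emit one gauge counter per floating IP known to this host."""

        def get_counters(self, manager, context):
            # 'manager' is assumed to expose the nova db API and host name.
            ips = manager.db.floating_ip_get_all_by_host(context,
                                                         manager.host)
            for ip in ips:
                yield Counter(name='floating_ip', type='gauge',
                              volume=1, resource_id=ip['address'])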
16:37:36 <dhellmann> jd___, we need tickets to have the collector listen for notifications from the services other than nova
16:37:39 <mnaser> I see, so depending on changes going through another project is not something that is preferred in this case?
16:37:59 <dhellmann> right, we only want to push changes into the other projects if there is no way to avoid it
16:38:17 <jd___> #action jd___ open ticket to have the collector listen for notifications from the services other than nova
16:38:27 <dhellmann> and we want those things to be as general as possible (so, having them send general notifications is OK but they should not send metering data directly)
16:38:28 <jd___> dhellmann: you mean like glance I guess?
16:38:35 <dhellmann> glance and quantum at least
16:38:43 <dhellmann> maybe swift?
16:38:52 <dhellmann> we might need to poll swift, I'm not sure
16:39:05 <mnaser> I see, so for example, if we have some nova-compute nodes that are running XenServer, would we have to add them all individually to Ceilometer? (forgive my silly questions)
16:39:17 <dhellmann> basically, for each counter we identified figure out which service has the data and make sure we are listening to its notification stream
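In code terms, that bookkeeping might look like a simple mapping from counters to (service, event_type) pairs that the collector subscribes to; every name below is illustrative, not a spec:

    # Illustrative mapping only; event types are examples.
    COUNTER_SOURCES = {
        'instance': ('nova', 'compute.instance.exists'),
        'image': ('glance', 'image.upload'),
        'floating_ip': ('nova', 'network.floating_ip.allocate'),
    }

    def wanted_event_types():
        # The collector would subscribe to each service's notification
        # topic and register handlers for exactly these event types.
        return set(evt for _service, evt in COUNTER_SOURCES.values())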
16:39:20 <jd___> I don't know quantum
16:39:27 <jd___> swift does not use a RPC
16:39:42 <dhellmann> mnaser, no need to apologize for asking questions!
16:39:58 <dhellmann> quantum is the new networking stuff
16:40:05 <mnaser> I feel like they're basic questions about the infrastructure of it so sorta "go read the arch docs" :)
16:40:16 <jd___> dhellmann: I know what it is but not how it works ;)
16:40:27 <dhellmann> mnaser, you might want to read over http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 but we can still discuss here
16:40:32 <dhellmann> ah, ok, jd___
16:40:37 <dhellmann> I don't either :-_
16:40:39 <dhellmann> :-)
16:40:40 <jd___> lol
16:40:51 <jd___> anyway quantum is out of scope for now
16:41:01 <dhellmann> ok. I'm going to need it, but I can track that myself.
16:41:10 <jd___> your call ;)
16:41:35 <jd___> if you do more than what is planned it's good too I guess! :)
16:41:47 <dhellmann> mnaser a ceilometer agent runs on the compute node and polls the hypervisor for details through libvirt (or the other drivers) and we also catch notifications for events like creating or deleting instances
16:42:01 <mnaser> Gotcha, that's what I figured when reading.
16:42:18 <dhellmann> jd___ we also need tickets for adding polling for hypervisor drivers that do not use libvirt
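For reference, polling the hypervisor through libvirt for, say, cumulative CPU time can be as small as the sketch below; the connection URI depends on the deployment, and error handling is omitted:

    # Minimal libvirt polling sketch; the qemu URI is an example.
    import libvirt

    def poll_cpu_times(uri='qemu:///system'):
        conn = libvirt.openReadOnly(uri)
        for dom_id in conn.listDomainsID():
            dom = conn.lookupByID(dom_id)
            # info() -> (state, max_mem, mem, vcpus, cpu_time_ns)
            state, max_mem, mem, vcpus, cpu_time = dom.info()
            yield dom.UUIDString(), cpu_time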
16:42:29 <mnaser> In that case, I can help write up the XenServer plugins/poller, as I already have most of the work done for it to be honest.
16:42:37 <jd___> yeah, maybe mnaser will be able to help on that :)
16:42:38 <dhellmann> excellent!
16:42:48 <jd___> #action jd___ we also need tickets for adding polling for hypervisor drivers that do not use libvirt
16:42:49 <dhellmann> that worked out nicely
16:43:15 <jd___> mnaser: once the ticket is opened, feel free to at least describe what technique you used to poll info from XenServer
16:43:23 <jd___> it'd be awesome
16:43:36 <mnaser> jd___: Will do, there are numerous ways as well so I'll bring up the options and then a decision can be taken
16:43:44 <jd___> mnaser: perfect
16:43:48 <dhellmann> sounds like a good approach
16:43:59 <dhellmann> that's all I have for this week. is there anything else to discuss?
16:44:13 <jd___> nothing for me
16:44:31 <jd___> I'll close the meeting, mnaser feel free to join #openstack-metering if you want to discuss more with us
16:44:39 <mnaser> In there :)
16:44:47 <dhellmann> good meeting jd___
16:44:51 <dhellmann> and mnaser
16:44:57 <jd___> thanks guys!
16:45:00 <jd___> #endmeeting