14:00:41 <markmc> #startmeeting oslo
14:00:42 <openstack> Meeting started Fri Jul 19 14:00:41 2013 UTC.  The chair is markmc. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:46 <openstack> The meeting name has been set to 'oslo'
14:00:48 * whenry is on
14:00:51 <markmc> #link https://etherpad.openstack.org/HavanaOsloMessaging
14:00:53 <flaper87> \o/
14:01:10 <markmc> doug sent his apologies
14:01:17 <markmc> russellb, ?
14:01:22 <markmc> who else?
14:01:45 <flaper87> whenry:
14:01:56 <whenry> yep
14:02:01 <markmc> simo, ?
14:02:09 <markmc> let's get going
14:02:10 <whenry> Tross,  you here?
14:02:15 <markmc> #topic oslo.messaging update
14:02:26 <whenry> oops tedross you here too?
14:02:26 <markmc> I dumped a bunch of updates into the etherpad there
14:02:33 <markmc> since the last meeting ...
14:02:35 <tedross> whenry: I'm here
14:02:46 <markmc> we've got an oslo.messaging repo rather than a fork of oslo-incubator
14:02:51 <markmc> API docs published
14:02:54 <markmc> good test coverage
14:03:03 <markmc> I think the API design is solid at this point
14:03:16 <markmc> pity it doesn't actually do anything yet :)
14:03:20 <whenry> :)
14:03:23 * markmc started hacking on the kombu driver yesterday
14:03:46 <markmc> my plan is to pull the kombu/amqp code into oslo.messaging with the minimum amount of modifications
14:03:57 * whenry started looking at API with respect to qpid/proton
14:04:08 <markmc> and just get it working
14:04:13 <flaper87> markmc: I'm going down that road with the qpid driver
14:04:15 <markmc> then later refactor as needed
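For readers skimming the log, here is a minimal sketch of what calling code against the published oslo.messaging API looks like at this stage. The class names follow the API docs linked above; the import path, transport URL, topic and method names are illustrative assumptions, and none of the drivers behind this actually work yet.

```python
from oslo.config import cfg
from oslo import messaging  # import path per the oslo.messaging repo (assumption)

# build a transport from a URL; URL-based configuration is discussed further below
transport = messaging.get_transport(cfg.CONF,
                                    url='rabbit://guest:guest@localhost:5672//')

# address the 'scheduler' topic and make a blocking RPC call
target = messaging.Target(topic='scheduler')
client = messaging.RPCClient(transport, target)

# 'select_host' and its argument are made-up names, not any real service API
result = client.call({}, 'select_host', instance_id='some-uuid')
```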
14:04:24 <simo> I'm here
14:04:26 <markmc> flaper87, cool
14:04:28 <flaper87> I had to take care of other things in the last few weeks but I got back to it today
14:04:34 <markmc> whenry, cool
14:04:37 <flaper87> and it's looking definitely better
14:04:47 <whenry> fyi I think there is an impl_qpid and an impl_proton.
14:05:04 <whenry> impl_proton is just impl_qpid without the assumption of a qpidd
14:05:19 <markmc> whenry, how about we talk about proton support as a separate topic?
14:05:35 <whenry> impl_proton is pure AMQP 1.0 based on the qpid/proton
14:05:37 <markmc> whenry, want to dig into it, but the first task of oslo.messaging is just parity with the existing code
14:05:45 <whenry> markmc, yep
14:05:46 <whenry> +1
14:06:06 <whenry> I will support flaper87 on that
14:06:08 <markmc> has anyone any hope at this point we can get e.g. nova switched over to oslo.messaging for havana?
14:06:20 <markmc> I'm still not ruling it out
14:06:29 <markmc> but maybe I'm being a bit crazy
14:06:38 <flaper87> markmc: I think we can make it
14:06:48 <markmc> nice, I like it :)
14:06:57 <flaper87> I'm planning to work on this qpid stuff next week 'til I finish it
14:07:02 <whenry> +1
14:07:04 <markmc> ok, great
14:07:24 <markmc> well ... at some point coming up to havana-3, there'll be a cut off point
14:07:30 <whenry> most of my questions have been either answered by the docs or will be figured out with flaper87
14:07:34 <markmc> i.e. unless nova is using it at this point, we delay to icehouse
14:07:42 <markmc> because this will really need a settling in period
14:07:52 <markmc> #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
14:08:02 <tedross> markmc: is there a possibility for tech-preview in Havana?
14:08:05 <flaper87> Agreed
14:08:08 <whenry> i.e. all the namespace stuff makes total sense and I don't see any issues
14:08:33 <markmc> tedross, we can't do "tech preview of Nova using oslo.messaging"
14:08:50 <markmc> tedross, we can certainly do "tech preview of oslo.messaging but nothing in Havana is using it"
14:09:00 <markmc> tedross, i.e. Nova using it is all-or-nothing
14:09:29 <markmc> we can't even do something like "let's use oslo.messaging for nova-api talking to nova-scheduler and everything else uses the existing rpc stuff"
14:09:40 <whenry> ack
14:09:42 <tedross> so a tech-preview of proton in Havana would have to be under Nova
14:09:44 <markmc> that would be cool for testing interop, but I can't see how we'd do it
14:09:48 <flaper87> Nova's migration means we migrate the oslo-notification as well, right?
14:09:56 <flaper87> erm, notifier*
14:09:56 <markmc> tedross, we're talking about proton separately in a few minutes :)
14:10:03 <tedross> np
14:10:11 <markmc> flaper87, yep
14:10:14 <flaper87> markmc: cool
14:10:35 <whenry> I want to add collapsing the # of exchanges to the conversation on impl_qpid too
14:10:55 <whenry> there are two reasons for it. let me know when we can discuss that
14:10:59 <markmc> ok, I'm gonna say we'll have a meeting August 16th
14:11:08 <markmc> and that'll be the go/no-go point
14:11:14 <whenry> ack
14:11:16 <flaper87> markmc: sounds good
14:11:35 <markmc> #info go/no-go on oslo.messaging in Havana on August 16th
14:11:47 <markmc> whenry, so, collapsing # of exchanges
14:11:58 <markmc> first thing is we need interoperability
14:12:09 <whenry> collapsing the # of exchanges != oslo messaging exchanges
14:12:17 <whenry> it means qpidd exchange artifacts
14:12:32 <whenry> interoperability?
14:12:37 <markmc> for upgrades, we do need to allow e.g. havana nova-compute talk to grizzly nova-scheduler
14:12:53 <whenry> ah
14:13:01 <markmc> so, even e.g. renaming exchanges or topics would need to be opt-in
14:13:10 <markmc> and a deprecation period of the old setup
14:13:33 <whenry> ok so we can figure out how that would work
14:13:37 <markmc> so, yeah - we need interop
14:13:46 <markmc> and secondly, the priority here is getting parity working
14:13:53 <whenry> i.e. would updating the nova rpc version mean interop?
14:13:58 <markmc> so I don't really see this as a good time for doing any cleanups
14:14:11 <whenry> i.e. the new nova/rpc version would use the same addressing as the oslo.messaging versions
14:14:14 <markmc> whenry, no, we actually need to be able to talk the old version
14:14:25 <markmc> we can add a new version
14:14:35 <markmc> but can't lose support for the old version
14:14:50 <markmc> so, even fixing something like this: https://bugs.launchpad.net/oslo/+bug/1173552
14:14:53 <whenry> ok. so two versions ... because really there are two issues with the old way:
14:14:53 <uvirtbot> Launchpad bug 1173552 in oslo "rpc-api-review: topic names can conflict across exchanges" [High,Triaged]
14:15:17 <markmc> i.e. the fanout exchanges are called $topic_fanout_...
14:15:31 <markmc> which means you can't just change the control_exchange config
14:15:42 <markmc> to have two novas on the same broker without conflicting
14:15:43 <whenry> 1. there are still some (not as many) leaked exchanges due to qpidd not supporting auto-delete on exchanges (which I'm not sure is in the spec anyway)
14:15:43 <whenry> 2. Using federation.
14:16:07 <markmc> ok
14:16:16 <markmc> but again - whatever issues there are with the current implementation
14:16:25 <markmc> oslo.messaging will need to have those issues too
14:16:30 <markmc> interop and parity first priority
14:16:36 <whenry> ok.
14:16:39 <markmc> cleanups would really be for icehouse IMHO
14:16:59 <markmc> ok, I think we've beaten this to death
14:17:06 <markmc> anything else on oslo.messaging?
14:17:16 <flaper87> markmc: what about configuration parameters? For example, we have qpid_sasl_mechanisms in oslo.rpc
14:17:18 <markmc> (apart from secure messaging, proton, etc. - separate topics)
14:17:27 <flaper87> and I removed the qpid_ part
14:17:31 <markmc> flaper87, yes, we need to continue supporting them
14:17:32 <flaper87> since those are tied to the URL
14:17:50 <markmc> existing config must stay working
14:17:57 <flaper87> ok
14:18:17 <flaper87> so, support for existing oslo configs and new configs based on urls
14:18:30 <flaper87> then we can deprecate the ones based on oslo.config in future versions
14:18:32 <markmc> yeah
14:18:35 <flaper87> cool
14:18:46 <markmc> config options in the URL would take priority
14:19:09 <markmc> so, even control_exchange versus rabbit:///$exchange
14:19:23 <markmc> if you set the exchange in the URL, that takes precedence over control_exchange
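A hypothetical sketch of the precedence rule just described: anything carried in the transport URL wins over the legacy oslo.config options, which stay supported for existing deployments. The helper name and exact parsing below are made up for illustration.

```python
from urlparse import urlparse  # Python 2 stdlib, as used by OpenStack at the time


def resolve_exchange(conf, transport_url):
    """Pick the exchange, preferring the URL path over control_exchange."""
    if transport_url:
        path = urlparse(transport_url).path.strip('/')
        if path:
            return path            # e.g. rabbit:///nova -> 'nova'
    return conf.control_exchange   # legacy option keeps working
```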
14:19:36 <markmc> ok, moving on
14:19:37 <markmc> #topic secure messaging
14:19:41 <markmc> simo, hey hey hey
14:19:49 <markmc> simo, could you give us an update?
14:19:52 <simo> yes
14:19:56 * markmc digs up the URLs of all the reviews
14:20:04 <simo> the patches I have work just fine
14:20:15 <simo> I am working with ayoung now to get the server side into keystone
14:20:26 <simo> (reviews welcome)
14:20:44 <markmc> #link https://blueprints.launchpad.net/oslo/+spec/trusted-messaging
14:20:46 <simo> the patches currently up for review are WIP as adam has another patch inline we may want to depend on
14:20:57 <ayoung> big issue with the review is the SQL repo.  We want to keep it separate, but current focus on Alembic is undermining that effort
14:21:00 <simo> (splits DB per keystone component to avoid migration issues)
14:21:11 <markmc> #link https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
14:21:22 <ayoung> #link https://review.openstack.org/#/c/36731/
14:21:23 <simo> ayoung: my plan is to go ahead w/o splitting soon
14:21:26 <markmc> ayoung, that keystone blueprint isn't triaged/approved yet
14:21:56 <simo> markmc: so from oslo-incubator pov we are just waiting on keystone side
14:22:16 <simo> from oslo.messaging pov I .. don't know how we want to proceed
14:22:21 <markmc> simo, yeah, I think now is a good time for oslo-core to review the oslo-incubator patches
14:22:29 <markmc> and look over the keystone side of it too
14:22:32 <ayoung> we have a chicken/egg problem with it that the mechanism needs extensions to test it
14:23:12 <markmc> simo, don't worry too much about oslo.messaging just yet ... either you or I can add it at some point, but getting it into oslo-incubator is the first step
14:23:30 <simo> markmc: ok
14:23:43 <markmc> I'm happy for the oslo-incubator stuff to land if keystone-core are generally nodding at the idea of the key distribution server in keystone
14:23:51 <markmc> hence my question about the blueprint approval
14:23:54 <simo> markmc: is it ok to abandon old reviews and start a new one from scratch ?
14:24:25 <markmc> simo, it's better not to - it's preferable to keep the conversations tied together
14:24:49 <markmc> simo, even if you split up the patches a bit more, it's nice to keep the original Change-Id for one of the new patches
14:25:18 <simo> markmc: ok, then I will just update the old review with the latest code today
14:25:26 <markmc> ayoung, is keystone-core generally nodding at the idea of the key distribution server in keystone?
14:25:30 <markmc> simo, super
14:25:46 <ayoung> markmc, yes, since it is only going in as an extension
14:26:04 <markmc> ayoung, can we get the bp approved, then?
14:26:09 <ayoung> markmc, will do
14:26:34 <markmc> e.g. ttx looks at the oslo blueprint and assumes it's all blocked because the keystone bp isn't approved
14:27:17 <markmc> simo, I hope I get to review really soon
14:27:23 <markmc> simo, still super excited about getting this in
14:27:38 <simo> markmc: me too
14:27:49 <simo> markmc: now one more comment on implementation
14:27:51 <markmc> <openstackgerrit> Simo Sorce proposed a change to openstack/oslo-incubator: RPC: Add MessageSecurity implementation  https://review.openstack.org/37912
14:27:51 <markmc> <openstackgerrit> Simo Sorce proposed a change to openstack/oslo-incubator: RPC: Add support for optional message signing  https://review.openstack.org/37913
14:27:51 <markmc> <openstackgerrit> Simo Sorce proposed a change to openstack/oslo-incubator: Add support for retrieving group keys  https://review.openstack.org/37914
14:28:08 <simo> markmc: setup is all pretty manual still, is there a place I should put docs on how to do it ?
14:28:21 <markmc> simo, a wiki page would be a great start
14:28:31 <simo> markmc: where should I land it ?
14:28:39 <markmc> simo, including DocImpact in the commit message helps alert docs folks
14:28:45 <markmc> simo, where on the wiki?
14:28:49 <simo> nods
14:29:11 <markmc> https://wiki.openstack.org/wiki/MessageSecurityHowto ?
14:29:23 <simo> I'll do
14:29:26 <markmc> cool
14:29:44 <markmc> ok, excellent
14:29:45 <simo> so what I have now secures 1-1 communication if you specify an explicit topic/host
14:30:01 <simo> it can also protect 1-many if the target is anything but 'compute'
14:30:14 <markmc> interesting, why is that?
14:30:20 <simo> well you could add a 'compute' group too but then you will not protect one compute from another
14:30:21 <markmc> the !compute bit, I mean
14:30:27 <markmc> ok, right
14:30:36 <markmc> not technical limitation so much as not recommended
14:30:38 <simo> because I assume you can trust scheduler1 and scheduler2 to be at the same trust level
14:30:46 <markmc> yeah
14:31:04 <simo> as scheduler1 can grab the group key for 'scheduler' and forge messages as if they were from another
14:31:06 * dhellmann arrives late
14:31:08 <markmc> that covers a lot of cases, though
14:31:18 <markmc> e.g. scheduler mostly talks to specific compute hosts
14:31:22 <simo> markmc: indeed it does
14:31:29 <markmc> welcome dhellmann
14:31:32 <simo> fanout messages to compute.* are rare
14:31:36 <markmc> yeah
14:31:37 <dhellmann> hi, sorry I'm late!
14:31:44 <simo> and usually they ask compute.* to do something
14:32:06 <simo> so even if they are not protected, when compute.host then sends something to the target, that message will be protected
14:32:31 <simo> markmc: once we get this code in I am still planning to take a hard look at supporting public/private signing too
14:32:41 <simo> markmc: to try to cover also the compute.* target case
14:32:52 <simo> the API should already be flexible enough for that
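A heavily simplified illustration of the shared-key signing idea being discussed: both peers obtain a key (per host pair or per group) from the key distribution server and the sender attaches an HMAC over the serialized message. This is only a conceptual sketch, not simo's actual patches, which also cover encryption, key derivation and ticket handling.

```python
import hashlib
import hmac
import json


def sign_message(key, metadata, message):
    """Compute an HMAC-SHA256 signature over the message envelope."""
    payload = json.dumps([metadata, message], sort_keys=True)
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_message(key, metadata, message, signature):
    """Recompute and compare; a constant-time compare would be used in practice."""
    return sign_message(key, metadata, message) == signature
```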
14:34:21 <markmc> great
14:35:15 <markmc> ok, great
14:35:20 * markmc tried to summarize in the etherpad
14:35:30 <markmc> simo, feel free to fill in anything I misrepresented
14:35:39 <markmc> #topic qpid/proton support
14:35:44 <markmc> ok, whenry - you're up :)
14:35:55 <whenry> ok
14:35:59 <markmc> could you give everyone a brief summary of what this is all about?
14:36:07 * markmc digs up some links of threads on openstack-dev
14:36:19 <whenry> so impl_proton would be a pure amqp 1.0 that uses the proton library
14:36:55 <whenry> it would not care about qpidd artifacts
14:37:08 <whenry> it would support exchanges in the namespace
14:37:32 <whenry> as oslo.messaging would support - i.e. exchanges are just an HLQ on the namespace
14:38:05 <whenry> qpidd or other intermediaries like qpid Dispatch could be used
14:38:06 <markmc> #link http://lists.openstack.org/pipermail/openstack-dev/2013-July/thread.html#11451
14:38:19 <tedross> removes the dependency on brokers - and would allow for HA without broker clustering
14:38:38 <whenry> also would allow for the move over to a more federated setup
14:38:46 <tedross> and removes the scale limitations that brokers cause
14:38:50 <markmc> yeah, that's probably the key message for most people
14:39:14 <markmc> this would be an amqp driver which would allow peer-to-peer style messaging architecture similar to what you can do with the zmq driver
14:39:35 <whenry> so people could still use qpidd as their broker if they wish (qpidd now supports AMQP 1.0 using the proton libraries)
14:39:37 <markmc> what's "a more federated setup" ?
14:39:41 <markmc> in the context of openstack?
14:39:45 <markmc> say, nova for example
14:39:54 <tedross> network-topology vs clusters
14:40:02 <whenry> just fyi for others: qpid/proton is a library that provides pure AMQP 1.0 support
14:40:15 <markmc> what would the topology look like for a large scale nova deployment?
14:40:23 <whenry> anyone can use it to make their app/broker/etc. amqp 1.0 compliant
14:40:28 <dhellmann> would it use the matchmaker to help the peers find each other like the zmq driver? or is that something built into amqp 1.0?
14:41:03 <tedross> markmc: you would use a topology similar to a network topology in a datacenter - dual everything with crossed links
14:41:26 <tedross> with regional pairs and backbones
14:41:34 <whenry> topology can have many intermediaries (like qpid/dispatch) that merely route messages on. alternate routes can be found. geo areas can be defined to limit wide-area chattiness
14:41:57 <markmc> so ... e.g. a pair of dispatch thingies per rack?
14:42:28 <whenry> or more
14:42:30 <tedross> markmc: yes
14:42:36 <tedross> or more, if needed
14:42:44 <whenry> it's very light weight
14:42:59 <whenry> merely looks at addresses and moves message forward
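For the curious, a rough sketch of what a pure AMQP 1.0 send looks like with proton's Messenger API of this era; an intermediary such as qpid Dispatch forwards solely on the message address. The address scheme and payload are made up, and how a future impl_proton would map oslo topics onto such addresses is still an open design question.

```python
from proton import Message, Messenger

messenger = Messenger()
messenger.start()

msg = Message()
# the intermediary routes purely on this address; no broker queues involved
msg.address = "amqp://dispatch-host/nova/scheduler"
msg.body = u"hello scheduler"

messenger.put(msg)
messenger.send()
messenger.stop()
```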
14:43:19 <markmc> I'd love to see an opinionated explanation of e.g. "here's what it should look like for a nova install of 20 racks, 30 compute nodes per rack"
14:43:33 * markmc isn't loving "you can do it whatever way you like" type approach
14:43:49 <whenry> in the topology you could support brokers too if there was some need ... but there probably wouldn't be with the right amount of redundancy in the network
14:44:01 <markmc> a concrete example would really help people understand why you think this would be useful for openstack
14:44:20 <tedross> markmc: +1  - I think the messaging topology would want to match the underlying networking topology as closely as possible
14:44:24 <dhellmann> "less could, more should"
14:44:58 <tedross> i.e. if there are wan trunks in use, that would be a good place to put a messaging interconnect
14:45:16 <whenry> should because as we scale up to massive clouds over many geos then we need to have a more scalable solution
14:45:21 <markmc> I'd keep it relatively simple
14:45:30 <markmc> i.e. a single geo
14:45:37 <markmc> just relatively large scale
14:45:43 <markmc> ignore nova cells for now
14:46:00 <tedross> then three in a full-mesh would give you good availability
14:46:28 <markmc> tedross, could you or whenry write that up in some more detail?
14:46:36 * markmc would find it helpful
14:46:37 <tedross> no need for clustering or quorum management
14:46:48 <tedross> markmc: yes, I'd be happy to do that
14:47:07 <markmc> i.e. a writeup for people who don't spend much time thinking about messaging
14:47:14 <markmc> ok, cool
14:47:22 <markmc> also, a blueprint for this driver would be good too
14:47:31 <whenry> +1
14:47:31 <tedross> markmc: ack +1
14:47:32 <markmc> I'm assuming icehouse is the target at this point?
14:47:45 <tedross> it looks like it, yes
14:47:51 <whenry> but also another point
14:48:00 <whenry> impl_proton will be pure AMQP 1.0
14:48:13 <whenry> not just qpidd-specific, though qpidd will understand proton messages
14:48:20 <markmc> yes
14:48:30 <whenry> i.e. any AMQP 1.0 artifact can use this.
14:48:39 <markmc> I still can't imagine that's going to matter for us though
14:48:53 <whenry> so if some other broker tech came along that spoke amqp 1.0 then it would stil work
14:48:59 <markmc> e.g. having nova-compute use impl_proton and nova-scheduler using impl_some_other_amqp10
14:49:27 <markmc> ok, you're just saying the oslo.messaging driver isn't tying you to a specific broker
14:49:34 <whenry> right
14:49:45 <markmc> unlike impl_rabbit and impl_qpid
14:49:47 <markmc> ok, got it
14:49:50 * markmc has to move on
14:50:00 <markmc> #topic eventlet dependency in oslo logging
14:50:04 <markmc> lbragstad, you're on :)
14:50:05 <lbragstad> #link https://review.openstack.org/#/c/34834/ for getting a unified logging solution from Oslo into Keystone.
14:50:16 <lbragstad> just wanting to get some feedback on that there.
14:50:23 <markmc> #link https://review.openstack.org/34834
14:50:35 <markmc> I looked very briefly and was like, "uggh, this is getting messy"
14:50:45 <markmc> but I'm not opposed to the idea in principle
14:50:56 <markmc> but to back up here for a second
14:51:06 <dhellmann> I wonder if that class couldn't be replaced with a single function that just returns the 2 stores?
14:51:10 <markmc> oslo logging depends on the local store for the messaging context?
14:51:19 <lbragstad> ContextAdapter
14:51:24 <lbragstad> in log.py
14:51:33 <markmc> i.e. the rpc code sets the context of the currently processing message?
14:51:44 <markmc> I haven't carried that over to oslo.messaging yet
14:51:51 <markmc> which is another ball of wax
14:52:10 <markmc> is there any better way of implementing this
14:52:15 <markmc> which doesn't depend on eventlet?
14:52:34 <flaper87> FWIW, I haven't looked at that patch but I'd love to remove local's dependency on eventlet.
14:52:35 <lbragstad> I guess the way I did it was just to use python's standard threading library
14:52:52 <lbragstad> in the case we don't have eventlet installed on the system, like running keystone under Apache
14:53:09 <markmc> does eventlet monkey patch the standard threading library's thread-local stuff?
14:53:14 <lbragstad> yeah
14:53:20 <markmc> i.e. so you get a thread local context per green thread?
14:53:31 <markmc> so local.py could just use the standard thread-local stuff?
14:53:36 <markmc> not use eventlet at all?
14:53:42 <markmc> and eventlet will monkey patch it?
14:53:46 * markmc waves hands
14:53:46 <mrodden> it would be patched yea
14:54:02 <lbragstad> correct
14:54:26 <dhellmann> markmc: the problem with just relying on that approach is we can't have unit tests for both implementations
14:54:38 <lbragstad> given the fact that we needed to detect that case, we also needed to address the test cases for it in a way that we could test the python threading implementation with Eventlet installed
14:54:45 <dhellmann> although we could do what we did in ceilometer with the nova notifier plugin, and run testr twice with different settings
14:55:11 <markmc> dhellmann, yeah, I'm fine with that ... mostly because we're going to have this problem in oslo.messaging too
14:55:24 * dhellmann nods
14:55:47 <markmc> lbragstad, ok, so I'd be happy with a patch to local.py which switches it to standard threading stuff
14:56:00 <markmc> it would be monkey patched in our unit tests
14:56:09 <markmc> and we can deal with adding tests for the non-eventlet case later
14:56:23 <markmc> that would be a relatively simple patch, right?
14:56:31 * flaper87 would be happy with that as well
14:56:42 <lbragstad> markmc: ok so just switch everything to use python threading instead of the implementation where we have an object to build the store dynamically?
14:56:59 <markmc> yeah
14:57:00 <lbragstad> simpler than what I have now I think ;)
14:57:21 <markmc> I much prefer the simplicity of that
14:57:22 <dhellmann> you could probably go back to the original implementation and just replace the eventlet module with threading
14:57:33 <markmc> right
14:57:43 <lbragstad> dhellmann: markmc ok, sounds good... simple is good
14:57:59 <dhellmann> markmc: as far as the tests go, if the tests for the local module don't import anything "bad" we should be able to reuse the same test cases, no?
14:58:21 <dhellmann> lbragstad: sorry for steering you down the wrong path initially :-/
14:58:36 <markmc> dhellmann, same test cases, but just different testr invocation?
14:58:39 <lbragstad> dhellmann: no worries, I learned a lot :)
14:58:42 <dhellmann> markmc: right
14:58:45 <markmc> dhellmann, yep
14:58:58 <dhellmann> we just need to take care with the imports
14:59:12 <dhellmann> so we don't end up bringing eventlet in unexpectedly
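Putting the agreement above into code, here is a sketch of what openstack/common/local.py could look like built only on the stdlib: threading.local becomes green-thread-local whenever eventlet monkey patching is in effect, and degrades to plain thread-local storage under Apache/mod_wsgi. The details are a sketch of the direction discussed, not a merged patch.

```python
import threading
import weakref


class WeakLocal(threading.local):
    """Thread-local store that only keeps weak references to its values."""

    def __getattribute__(self, attr):
        rval = super(WeakLocal, self).__getattribute__(attr)
        if rval:
            rval = rval()  # dereference the weakref
        return rval

    def __setattr__(self, attr, value):
        return super(WeakLocal, self).__setattr__(attr, weakref.ref(value))


# when eventlet.monkey_patch(thread=True) has run, these become
# green-thread-local automatically; no direct eventlet import needed
store = WeakLocal()
strong_store = threading.local()
```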
14:59:22 <markmc> what am I going to do for oslo.messaging?
14:59:27 <markmc> add a get_current_context() method?
14:59:46 <markmc> I'm not going to expose the store in the API
14:59:58 <dhellmann> the context we're talking about is the incoming rpc call context?
15:00:04 <markmc> yeah
15:00:29 <dhellmann> yeah, I guess we need to provide a function somewhere for other modules to call to get it
15:00:34 <dhellmann> we could pass it to the callback we're given
15:00:42 <dhellmann> but that just moves the burden somewhere else
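One possible shape for the helper markmc is floating here; get_current_context() is not part of any API yet, and the import path below is an assumption.

```python
from openstack.common import local  # assumed incubator module path


def get_current_context():
    """Return the context of the RPC message currently being processed, if any."""
    return getattr(local.store, 'context', None)
```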
15:01:15 <dhellmann> is the oslo.messaging code still in your github repo, or is that on stackforge somewhere?
15:01:34 <markmc> dhellmann,  https://github.com/openstack/oslo.messaging
15:01:39 <markmc> dhellmann, see https://etherpad.openstack.org/HavanaOsloMessaging
15:01:49 * markmc is really proud of http://docs.openstack.org/developer/oslo.messaging/ :)
15:02:03 <dhellmann> I saw you'd done that, but haven't had a chance to catch up yet
15:02:06 <markmc> it only took me a year or so to publish the oslo.config docs
15:02:10 <mrodden> dhellmann, markmc: we're out of time but I was hoping to ask about https://review.openstack.org/#/c/34253/ in open discussion. it was +2'd already but dhellmann had some comments that I think are addressed now. it's blocking a blueprint i have in nova :(
15:02:12 <markmc> dhellmann, yep, cool
15:02:17 <flaper87> markmc: +1 for those docs, I love them
15:02:30 <dhellmann> mrodden: I'll take a look
15:02:34 <mrodden> dhellmann: thanks
15:02:55 <markmc> how are we doing on reviews anyway?
15:02:55 <markmc> http://russellbryant.net/openstack-stats/oslo-openreviews.txt
15:03:04 <markmc> --> Total Open Reviews: 55
15:03:16 <markmc> that's a lot
15:03:30 * markmc caught up a week or two ago, behind again
15:03:38 <dhellmann> ditto
15:04:09 <markmc> ok, I think we're done
15:04:11 * flaper87 reviewed the hell out of oslo just yesterday
15:04:12 <markmc> 5 minutes over
15:04:14 <flaper87> :D
15:04:17 <lbragstad> thanks guys!
15:04:19 <markmc> flaper87, awesome
15:04:19 <mrodden> thanks all
15:04:21 <markmc> thanks everyone
15:04:26 <markmc> #endmeeting