14:00:41 #startmeeting oslo
14:00:42 Meeting started Fri Jul 19 14:00:41 2013 UTC. The chair is markmc. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:46 The meeting name has been set to 'oslo'
14:00:48 * whenry is on
14:00:51 #link https://etherpad.openstack.org/HavanaOsloMessaging
14:00:53 \o/
14:01:10 doug sent his apologies
14:01:17 russellb, ?
14:01:22 who else?
14:01:45 whenry:
14:01:56 yep
14:02:01 simo, ?
14:02:09 let's get going
14:02:10 Tross, you here?
14:02:15 #topic oslo.messaging update
14:02:26 oops tedross you here too?
14:02:26 I dumped a bunch of updates into the etherpad there
14:02:33 since the last meeting ...
14:02:35 whenry: I'm here
14:02:46 we've got an oslo.messaging repo rather than a fork of oslo-incubator
14:02:51 API docs published
14:02:54 good test coverage
14:03:03 I think the API design is solid at this point
14:03:16 pity it doesn't actually do anything yet :)
14:03:20 :)
14:03:23 * markmc started hacking on the kombu driver yesterday
14:03:46 my plan is to pull the kombu/amqp code into oslo.messaging with the minimum amount of modifications
14:03:57 * whenry started looking at API with respect to qpid/proton
14:04:08 and just get it working
14:04:13 markmc: I'm going down that road with qpid's
14:04:15 then later refactor as needed
14:04:24 I'm here
14:04:26 flaper87, cool
14:04:28 I had to take care of other things in the last few weeks but I got back to it today
14:04:34 whenry, cool
14:04:37 and it's looking definitely better
14:04:47 fyi I think there is an impl_qpid and an impl_proton.
14:05:04 impl_proton is just impl_qpid without the assumption of a qpidd
14:05:19 whenry, how about we talk about proton support as a separate topic?
14:05:35 impl_proton is pure AMQP 1.0 based on the qpid/proton library
14:05:37 whenry, want to dig into it, but the first task of oslo.messaging is just parity with the existing code
14:05:45 markmc, yep
14:05:46 +1
14:06:06 I will support flaper87 on that
14:06:08 has anyone any hope at this point we can get e.g. nova switched over to oslo.messaging for havana?
14:06:20 I'm still not ruling it out
14:06:29 but maybe I'm being a bit crazy
14:06:38 markmc: I think we can make it
14:06:48 nice, I like it :)
14:06:57 I'm planning to work on this qpid stuff next week 'til I finish it
14:07:02 +1
14:07:04 ok, great
14:07:24 well ... at some point coming up to havana-3, there'll be a cut-off point
14:07:30 most of my questions have been either answered by the docs or will be figured out with flaper87
14:07:34 i.e. unless nova is using it at this point, we delay to icehouse
14:07:42 because this will really need a settling-in period
14:07:52 #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
14:08:02 markmc: is there a possibility for tech-preview in Havana?
14:08:05 Agreed
14:08:08 i.e. all the namespace stuff makes total sense and I don't see any issues
14:08:33 tedross, we can't do "tech preview of Nova using oslo.messaging"
14:08:50 tedross, we can certainly do "tech preview of oslo.messaging but nothing in Havana is using it"
14:09:00 tedross, i.e. Nova using it is all-or-nothing
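[For reference, a minimal sketch of the new RPC API referred to above ("API docs published", "the API design is solid"), written against the published oslo.messaging interface. The topic name, endpoint class, and broker URL are invented, and import paths are shown as in current releases rather than the 2013 namespace-package layout.]

    # Minimal sketch of the oslo.messaging RPC API under discussion; the topic,
    # endpoint class and broker URL below are made up for illustration.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')

    class TestEndpoint(object):
        def echo(self, ctxt, message):
            # Server-side method invoked via RPC.
            return message

    # Server side: dispatch incoming calls on the 'test' topic to the endpoints.
    target = oslo_messaging.Target(topic='test', server='server1')
    server = oslo_messaging.get_rpc_server(transport, target, [TestEndpoint()])
    # server.start() would begin consuming messages.

    # Client side: call() blocks for a return value, cast() does not.
    client = oslo_messaging.RPCClient(transport,
                                      oslo_messaging.Target(topic='test'))
    # result = client.call({}, 'echo', message='hello')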
14:09:29 we can't even do something like "let's use oslo.messaging for nova-api talking to nova-scheduler and everything else uses the existing rpc stuff"
14:09:40 ack
14:09:42 so a tech-preview of proton in Havana would have to be under Nova
14:09:44 that would be cool for testing interop, but I can't see how we'd do it
14:09:48 Nova's migration means we migrate the oslo-notification as well, right?
14:09:56 erm, notifier*
14:09:56 tedross, we're talking about proton separately in a few minutes :)
14:10:03 np
14:10:11 flaper87, yep
14:10:14 markmc: cool
14:10:35 I want to add collapsing the # of exchanges into the conversation on impl_qpid too
14:10:55 there are two reasons for it. let me know when we can discuss that
14:10:59 ok, I'm gonna say we'll have a meeting August 16th
14:11:08 and that'll be the go/no-go point
14:11:14 ack
14:11:16 markmc: sounds good
14:11:35 #info go/no-go on oslo.messaging in Havana on August 16th
14:11:47 whenry, so, collapsing # of exchanges
14:11:58 first thing is we need interoperability
14:12:09 collapsing the # of exchanges != oslo messaging exchanges
14:12:17 it means qpidd exchange artifacts
14:12:32 interoperability?
14:12:37 for upgrades, we do need to allow e.g. havana nova-compute to talk to grizzly nova-scheduler
14:12:53 ah
14:13:01 so, even e.g. renaming exchanges or topics would need to be opt-in
14:13:10 and a deprecation period of the old setup
14:13:33 ok so we can figure out how that would work
14:13:37 so, yeah - we need interop
14:13:46 and secondly, the priority here is getting parity working
14:13:53 i.e. would updating the nova rpc version mean interop?
14:13:58 so I don't really see this as a good time for doing any cleanups
14:14:11 i.e. the new nova/rpc version would use the same addressing as the oslo.messaging versions
14:14:14 whenry, no, we actually need to be able to talk the old version
14:14:25 we can add a new version
14:14:35 but can't lose support for the old version
14:14:50 so, even fixing something like this: https://bugs.launchpad.net/oslo/+bug/1173552
14:14:53 ok. so two versions ... because really there are two issues with the old way:
14:14:53 Launchpad bug 1173552 in oslo "rpc-api-review: topic names can conflict across exchanges" [High,Triaged]
14:15:17 i.e. the fanout exchanges are called $topic_fanout_...
14:15:31 which means you can't just change the control_exchange config
14:15:42 to have two Novas on the same broker without conflicting
14:15:43 1. there are still some (not as many) leaked exchanges due to qpidd not supporting auto-delete on exchanges (which I'm not sure is in the spec anyway)
14:15:43 2. Using federation.
14:16:07 ok
14:16:16 but again - whatever issues there are with the current implementation
14:16:25 oslo.messaging will need to have those issues too
14:16:30 interop and parity first priority
14:16:36 ok.
14:16:39 cleanups would really be for icehouse IMHO
14:16:59 ok, I think we've beaten this to death
14:17:06 anything else on oslo.messaging?
14:17:16 markmc: what about configuration parameters? For example, we have qpid_sasl_mechanisms in oslo.rpc
14:17:18 (apart from secure messaging, proton, etc. - separate topics)
14:17:27 and I removed the qpid_ part
14:17:31 flaper87, yes, we need to continue supporting them
14:17:32 since those are tied to the URL
14:17:50 existing config must stay working
14:17:57 ok
14:18:17 so, support for existing oslo configs and new configs based on urls
14:18:30 then we can deprecate the ones based on oslo.config in future versions
14:18:32 yeah
14:18:35 cool
14:18:46 config options in the URL would take priority
14:19:09 so, even control_exchange versus rabbit:///$exchange
14:19:23 if you set the exchange in the URL, that takes precedence over control_exchange
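[A sketch of the configuration scheme just discussed: existing oslo.config options keep working, the same settings can be carried in a transport URL, and URL values take precedence. The broker host and exchange names are invented, and the snippet follows the scheme described here (URL path names the exchange), which is not necessarily what the library eventually shipped.]

    # Illustration of the precedence agreed above: settings carried in the
    # transport URL are meant to win over the equivalent oslo.config options.
    from oslo_config import cfg
    import oslo_messaging

    # Legacy style: the exchange comes from the control_exchange option.
    oslo_messaging.set_transport_defaults(control_exchange='nova')

    # New style: the same setting expressed in the transport URL; per the
    # discussion, 'engineering-nova' here would take precedence over the
    # control_exchange value set above.
    url = 'rabbit://guest:guest@broker.example.org:5672/engineering-nova'
    transport = oslo_messaging.get_transport(cfg.CONF, url=url)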
14:19:36 ok, moving on
14:19:37 #topic secure messaging
14:19:41 simo, hey hey hey
14:19:49 simo, could you give us an update?
14:19:52 yes
14:19:56 * markmc digs up the URLs of all the reviews
14:20:04 the patches I have work just fine
14:20:15 I am working with ayoung now to get the server side into keystone
14:20:26 (reviews welcome)
14:20:44 #link https://blueprints.launchpad.net/oslo/+spec/trusted-messaging
14:20:46 the patches currently up for review are WIP as adam has another patch inline we may want to depend on
14:20:57 big issue with the review is the SQL repo. We want to keep it separate, but current focus on Alembic is undermining that effort
14:21:00 (splits DB per keystone component to avoid migration issues)
14:21:11 #link https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
14:21:22 #link https://review.openstack.org/#/c/36731/
14:21:23 ayoung: my plan is to go ahead w/o splitting soon
14:21:26 ayoung, that keystone blueprint isn't triaged/approved yet
14:21:56 markmc: so from oslo-incubator pov we are just waiting on keystone side
14:22:16 from oslo.messaging pov I ... don't know how we want to proceed
14:22:21 simo, yeah, I think now is a good time for oslo-core to review the oslo-incubator patches
14:22:29 and look over the keystone side of it too
14:22:32 we have a chicken/egg problem with it that the mechanism needs extensions to test it
14:23:12 simo, don't worry too much about oslo.messaging just yet ... either you or I can add it at some point, but getting it into oslo-incubator is the first step
14:23:30 markmc: ok
14:23:43 I'm happy for the oslo-incubator stuff to land if keystone-core are generally nodding at the idea of the key distribution server in keystone
14:23:51 hence my question about the blueprint approval
14:23:54 markmc: is it ok to abandon old reviews and start a new one from scratch ?
14:24:25 simo, it's better not to - it's preferable to keep the conversations tied together
14:24:49 simo, even if you split up the patches a bit more, it's nice to keep the original Change-Id for one of the new patches
14:25:18 markmc: ok, then I will just update the old review with the latest code today
14:25:26 ayoung, is keystone-core generally nodding at the idea of the key distribution server in keystone?
14:25:30 simo, super
14:25:46 markmc, yes, since it is only going in as an extension
14:26:04 ayoung, can we get the bp approved, then?
14:26:09 markmc, will do
14:26:34 e.g. ttx looks at the oslo blueprint and assumes it's all blocked because the keystone bp isn't approved
14:27:17 simo, I hope I get to review really soon
14:27:23 simo, still super excited about getting this in
14:27:38 markmc: me too
14:27:49 markmc: now one more comment on implementation
14:27:51 Simo Sorce proposed a change to openstack/oslo-incubator: RPC: Add MessageSecurity implementation https://review.openstack.org/37912
14:27:51 Simo Sorce proposed a change to openstack/oslo-incubator: RPC: Add support for optional message signing https://review.openstack.org/37913
14:27:51 Simo Sorce proposed a change to openstack/oslo-incubator: Add support for retrieving group keys https://review.openstack.org/37914
14:28:08 markmc: setup is all pretty manual still, is there a place I should put docs on how to do it ?
14:28:21 simo, a wiki page would be a great start
14:28:31 markmc: where should I land it ?
14:28:39 simo, including DocImpact in the commit message helps alert docs folks
14:28:45 simo, where on the wiki?
14:28:49 nods
14:29:11 https://wiki.openstack.org/wiki/MessageSecurityHowto ?
14:29:23 I'll do
14:29:26 cool
14:29:44 ok, excellent
14:29:45 so what I have now secures 1-1 communication if you specify an explicit topic/host
14:30:01 it can also protect 1-many if the target is anything but 'compute'
14:30:14 interesting, why is that?
14:30:20 well you could add a 'compute' group too but then you will not protect one compute from another
14:30:21 the !compute bit, I mean
14:30:27 ok, right
14:30:36 not a technical limitation so much as not recommended
14:30:38 because I assume you can trust scheduler1 and scheduler2 to be at the same trust level
14:30:46 yeah
14:31:04 as scheduler1 can grab the group key for 'scheduler' and forge messages as if they were from another
14:31:06 * dhellmann arrives late
14:31:08 that covers a lot of cases, though
14:31:18 e.g. scheduler mostly talks to specific compute hosts
14:31:22 markmc: indeed it does
14:31:29 welcome dhellmann
14:31:32 fanout messages to compute.* are rare
14:31:36 yeah
14:31:37 hi, sorry I'm late!
14:31:44 and usually they ask compute.* to do something
14:32:06 so even if they are not protected when compute.host then sends something to the target, that message will be protected
14:32:31 markmc: once we get this code in I am still planning to take a hard look at supporting public/private signing too
14:32:41 markmc: to try to cover also the compute.* target case
14:32:52 the API should already be flexible enough for that
14:34:21 great
14:35:15 ok, great
14:35:20 * markmc tried to summarize in the etherpad
14:35:30 simo, feel free to fill in anything I misrepresented
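[As background for the trusted-messaging discussion above, a purely illustrative sketch of shared-key message signing; this is not the code in the reviews linked earlier, just the general idea of signing a serialized message with a key both ends obtained out of band, e.g. from a key distribution service.]

    # Illustration only, NOT the oslo-incubator trusted-messaging code:
    # the sender appends an HMAC computed with a shared key, and the
    # receiver recomputes it before trusting the message.
    import hashlib
    import hmac
    import json

    def sign_message(key, metadata, message):
        payload = json.dumps([metadata, message], sort_keys=True)
        signature = hmac.new(key, payload.encode('utf-8'),
                             hashlib.sha256).hexdigest()
        return {'metadata': metadata, 'message': message,
                'signature': signature}

    def verify_message(key, envelope):
        payload = json.dumps([envelope['metadata'], envelope['message']],
                             sort_keys=True)
        expected = hmac.new(key, payload.encode('utf-8'),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, envelope['signature']):
            raise ValueError('message signature does not match')
        return envelope['message']

[The group-key caveat discussed above applies whatever the mechanics: any holder of the 'scheduler' group key can forge messages from any scheduler, which is why a shared 'compute' group key is not recommended.]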
14:35:39 #topic qpid/proton support
14:35:44 ok, whenry - you're up :)
14:35:55 ok
14:35:59 could you give everyone a brief summary of what this is all about?
14:36:07 * markmc digs up some links of threads on openstack-dev
14:36:19 so impl_proton would be a pure AMQP 1.0 driver that uses the proton library
14:36:55 it would not care about qpidd artifacts
14:37:08 it would support exchanges in the namespace
14:37:32 as the oslo.messaging would support - i.e. exchanges are just an HLQ (high-level qualifier) on the namespace
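[To make the "exchange as a high-level qualifier on the namespace" remark concrete, one hypothetical way exchange/topic/server tuples could flatten into AMQP 1.0 addresses; nothing below comes from impl_proton, it is illustration only.]

    # Hypothetical flattening of oslo.messaging targets into AMQP 1.0
    # addresses; illustration only, not impl_proton code.
    def target_to_address(exchange, topic, server=None, fanout=False):
        if fanout:
            # Every listener on the topic receives a copy.
            return '%s/fanout/%s' % (exchange, topic)      # e.g. 'nova/fanout/compute'
        if server:
            # Directed at one specific server on the topic.
            return '%s/%s/%s' % (exchange, topic, server)  # e.g. 'nova/compute/host-17'
        # Shared-queue semantics: one of the topic's listeners handles it.
        return '%s/%s' % (exchange, topic)                 # e.g. 'nova/scheduler'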
14:38:05 qpidd or other intermediaries like qpid Dispatch could be used
14:38:06 #link http://lists.openstack.org/pipermail/openstack-dev/2013-July/thread.html#11451
14:38:19 removes the dependency on brokers - and would allow for HA without broker clustering
14:38:38 also would allow for the move over to a more federated setup
14:38:46 and removes the scale limitations that brokers cause
14:38:50 yeah, that's probably the key message for most people
14:39:14 this would be an AMQP driver which would allow peer-to-peer style messaging architecture similar to what you can do with the zmq driver
14:39:35 so people could still use qpidd as their broker if they wish (qpidd now supports AMQP 1.0 using the proton libraries)
14:39:37 what's "a more federated setup" ?
14:39:41 in the context of openstack?
14:39:45 say, nova for example
14:39:54 network-topology vs clusters
14:40:02 just fyi for others: qpid/proton is a library that provides pure AMQP 1.0 support
14:40:15 what would the topology look like for a large scale nova deployment?
14:40:23 anyone can use it to make their app/broker/etc. AMQP 1.0 compliant
14:40:28 would it use the matchmaker to help the peers find each other like the zmq driver? or is that something built into amqp 1.0?
14:41:03 markmc: you would use a topology similar to a network topology in a datacenter - dual everything with crossed links
14:41:26 with regional pairs and backbones
14:41:34 topology can have many intermediaries (like qpid/dispatch) that merely route messages on. alternate routes can be found. geo areas can be defined to limit wide-area chattiness
14:41:57 so ... e.g. a pair of dispatch thingies per rack?
14:42:28 or more
14:42:30 markmc: yes
14:42:36 or more, if needed
14:42:44 it's very lightweight
14:42:59 merely looks at addresses and moves messages forward
14:43:19 I'd love to see an opinionated explanation of e.g. "here's what it should look like for a nova install of 20 racks, 30 compute nodes per rack"
14:43:33 * markmc isn't loving "you can do it whatever way you like" type approach
14:43:49 in the topology you could support brokers too if there was some need ... but there probably wouldn't be with the right amount of redundancy in the network
14:44:01 a concrete example would really help people understand why you think this would be useful for openstack
14:44:20 markmc: +1 - I think the messaging topology would want to match the underlying networking topology as closely as possible
14:44:24 "less could, more should"
14:44:58 i.e. if there are WAN trunks in use, that would be a good place to put a messaging interconnect
14:45:16 should because as we scale up to massive clouds over many geos then we need to have a more scalable solution
14:45:21 I'd keep it relatively simple
14:45:30 i.e. a single geo
14:45:37 just relatively large scale
14:45:43 ignore nova cells for now
14:46:00 then three in a full-mesh would give you good availability
14:46:28 tedross, could you or whenry write that up in some more detail?
14:46:36 * markmc would find it helpful
14:46:37 no need for clustering or quorum management
14:46:48 markmc: yes, I'd be happy to do that
14:47:07 i.e. a writeup for people who don't spend much time thinking about messaging
14:47:14 ok, cool
14:47:22 also, a blueprint for this driver would be good too
14:47:31 +1
14:47:31 markmc: ack +1
14:47:32 I'm assuming icehouse is the target at this point?
14:47:45 it looks like it, yes
14:47:51 but also another point
14:48:00 impl_proton will be pure AMQP 1.0
14:48:13 not just qpidd specific, though qpidd will understand proton messages
14:48:20 yes
14:48:30 i.e. any AMQP 1.0 artifact can use this.
14:48:39 I still can't imagine that's going to matter for us though
14:48:53 so if some other broker tech came along that spoke AMQP 1.0 then it would still work
14:48:59 e.g. having nova-compute use impl_proton and nova-scheduler using impl_some_other_amqp10
14:49:27 ok, you're just saying the oslo.messaging driver isn't tying you to a specific broker
14:49:34 right
14:49:45 unlike impl_rabbit and impl_qpid
14:49:47 ok, got it
14:49:50 * markmc has to move on
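[For anyone wondering how a new driver such as impl_proton would plug in: oslo.messaging selects its driver from the transport URL scheme via setuptools entry points, so registering a driver is roughly a one-line addition. The rabbit and qpid entries below mirror how the existing drivers are registered; the proton module path and class name are hypothetical.]

    # Sketch of how a new transport driver would be advertised so that a URL
    # like 'proton://host:5672/' selects it.  The entry-point group matches
    # the one used by the existing drivers; the proton line is hypothetical.
    from setuptools import setup

    setup(
        name='oslo.messaging',
        # ... other arguments elided ...
        entry_points={
            'oslo.messaging.drivers': [
                'rabbit = oslo.messaging._drivers.impl_rabbit:RabbitDriver',
                'qpid = oslo.messaging._drivers.impl_qpid:QpidDriver',
                # hypothetical new driver:
                'proton = oslo.messaging._drivers.impl_proton:ProtonDriver',
            ],
        },
    )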
14:50:00 #topic eventlet dependency in oslo logging
14:50:04 lbragstad, you're on :)
14:50:05 #link https://review.openstack.org/#/c/34834/ for getting a unified logging solution from Oslo into Keystone.
14:50:16 just wanting to get some feedback on that there.
14:50:23 #link https://review.openstack.org/34834
14:50:35 I looked very briefly and was like, "uggh, this is getting messy"
14:50:45 but I'm not opposed to the idea in principle
14:50:56 but to back up here for a second
14:51:06 I wonder if that class couldn't be replaced with a single function that just returns the 2 stores?
14:51:10 oslo logging depends on the local store for the messaging context?
14:51:19 ContextAdapter
14:51:24 in log.py
14:51:33 i.e. the rpc code sets the context of the currently processing message?
14:51:44 I haven't carried that over to oslo.messaging yet
14:51:51 which is another ball of wax
14:52:10 is there any better way of implementing this
14:52:15 which doesn't depend on eventlet?
14:52:34 FWIW, I haven't looked at that patch but I'd love to remove local's dependency on eventlet.
14:52:35 I guess the way I did it, was just to use Python's standard threading library
14:52:52 in the case we don't have eventlet installed on the system, like running keystone under Apache
14:53:09 does eventlet monkey patch the standard threading library's thread-local stuff?
14:53:14 yeah
14:53:20 i.e. so you get a thread local context per green thread?
14:53:31 so local.py could just use the standard thread-local stuff?
14:53:36 not use eventlet at all?
14:53:42 and eventlet will monkey patch it?
14:53:46 * markmc waves hands
14:53:46 it would be patched, yeah
14:54:02 correct
14:54:26 markmc: the problem with just relying on that approach is we can't have unit tests for both implementations
14:54:38 given the fact that we needed to detect that case, we also needed to address the test cases for it in a way that we could test the python threading implementation with Eventlet installed
14:54:45 although we could do what we did in ceilometer with the nova notifier plugin, and run testr twice with different settings
14:55:11 dhellmann, yeah, I'm fine with that ... mostly because we're going to have this problem in oslo.messaging too
14:55:24 * dhellmann nods
14:55:47 lbragstad, ok, so I'd be happy with a patch to local.py which switches it to standard threading stuff
14:56:00 it would be monkey patched in our unit tests
14:56:09 and we can deal with adding tests for the non-eventlet case later
14:56:23 that would be a relatively simple patch, right?
14:56:31 * flaper87 would be happy with that as well
14:56:42 markmc: ok so just switch everything to use python threading instead of the implementation where we have an object to build the store dynamically?
14:56:59 yeah
14:57:00 simpler than what I have now I think ;)
14:57:21 I much prefer the simplicity of that
14:57:22 you could probably go back to the original implementation and just replace the eventlet module with threading
14:57:33 right
14:57:43 dhellmann: markmc ok, sounds good... simple is good
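[A minimal sketch of the direction agreed here for openstack/common/local.py: build the stores on the standard library's threading.local and rely on eventlet's monkey patching to make them green-thread-local when eventlet is in use. This is a sketch of the idea, not the patch that eventually merged.]

    # Sketch of a local.py with no eventlet import: the stores are plain
    # threading.local objects, and under eventlet monkey patching they
    # become green-thread-local automatically.
    import threading
    import weakref

    class WeakLocal(threading.local):
        """Thread-local store that only keeps weak references to its values."""

        def __getattribute__(self, attr):
            rval = super(WeakLocal, self).__getattribute__(attr)
            if rval:
                # What is stored is a weakref; dereference it (may be None
                # if the original object has been garbage collected).
                rval = rval()
            return rval

        def __setattr__(self, attr, value):
            return super(WeakLocal, self).__setattr__(attr, weakref.ref(value))

    # 'store' holds e.g. the context of the RPC message currently being
    # processed without keeping it alive; 'strong_store' holds normal refs.
    store = WeakLocal()
    strong_store = threading.local()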
14:57:59 markmc: as far as the tests go, if the tests for the local module don't import anything "bad" we should be able to reuse the same test cases, no?
14:58:21 lbragstad: sorry for steering you down the wrong path initially :-/
14:58:36 dhellmann, same test cases, but just different testr invocation?
14:58:39 dhellmann: no worries, I learned a lot :)
14:58:42 markmc: right
14:58:45 dhellmann, yep
14:58:58 we just need to take care with the imports
14:59:12 so we don't end up bringing eventlet in unexpectedly
14:59:22 what am I going to do for oslo.messaging?
14:59:27 add a get_current_context() method?
14:59:46 I'm not going to expose the store in the API
14:59:58 the context we're talking about is the incoming rpc call context?
15:00:04 yeah
15:00:29 yeah, I guess we need to provide a function somewhere for other modules to call to get it
15:00:34 we could pass it to the callback we're given
15:00:42 but that just moves the burden somewhere else
15:01:15 is the oslo.messaging code still in your github repo, or is that on stackforge somewhere?
15:01:34 dhellmann, https://github.com/openstack/oslo.messaging
15:01:39 dhellmann, see https://etherpad.openstack.org/HavanaOsloMessaging
15:01:49 * markmc is really proud of http://docs.openstack.org/developer/oslo.messaging/ :)
15:02:03 I saw you'd done that, but haven't had a chance to catch up yet
15:02:06 it only took me a year or so to publish the oslo.config docs
15:02:10 dhellmann, markmc: we're out of time but I was hoping to ask about https://review.openstack.org/#/c/34253/ in open discussion. it was +2'd already but dhellmann had some comments that I think are addressed now. it's blocking a blueprint I have in nova :(
15:02:12 dhellmann, yep, cool
15:02:17 markmc: +1 for those docs, I love them
15:02:30 mrodden: I'll take a look
15:02:34 dhellmann: thanks
15:02:55 how are we doing on reviews anyway?
15:02:55 http://russellbryant.net/openstack-stats/oslo-openreviews.txt
15:03:04 --> Total Open Reviews: 55
15:03:16 that's a lot
15:03:30 * markmc caught up a week or two ago, behind again
15:03:38 ditto
15:04:09 ok, I think we're done
15:04:11 * flaper87 reviewed the hell out of oslo just yesterday
15:04:12 5 minutes over
15:04:14 :D
15:04:17 thanks guys!
15:04:19 flaper87, awesome
15:04:19 thanks all
15:04:21 thanks everyone
15:04:26 #endmeeting