14:01:35 <markmc> #startmeeting oslo
14:01:36 <openstack> Meeting started Fri May  3 14:01:35 2013 UTC.  The chair is markmc. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:40 <openstack> The meeting name has been set to 'oslo'
14:01:43 <beagles> o/
14:01:47 <markmc> #link https://wiki.openstack.org/wiki/Oslo/Messaging
14:01:55 <markmc> no, that's not what I meant :)
14:01:57 <GheRivero> o/
14:02:16 <markmc> #link https://wiki.openstack.org/wiki/Meetings/Oslo
14:02:28 <markmc> ok, so the agenda today is to talk about messaging in havana
14:02:44 <markmc> I'm figuring there are two major topics - the proposed new API and the message security work
14:02:44 <simo> hey
14:02:58 <markmc> #link https://etherpad.openstack.org/HavanaOsloMessaging
14:03:08 * markmc just started gathering info in the etherpad above
14:03:20 <markmc> anyone want to cover anything other than messaging today?
14:04:28 <markmc> ok then
14:04:31 <markmc> messaging it is
14:04:40 <markmc> let's start with the new API design
14:04:51 <markmc> #topic new messaging API design
14:04:58 <markmc> #link https://wiki.openstack.org/wiki/Oslo/Messaging
14:05:14 <markmc> I think we got through a tonne of stuff in the thread
14:05:19 <dhellmann> I haven't had a chance to update that page with the results of the email thread. I'm going to try to get that done today.
14:05:27 <markmc> ok, great
14:05:32 <markmc> I got back into it today
14:05:34 <markmc> and pushed this:
14:05:48 <markmc> https://github.com/markmc/oslo-incubator/commit/1f2f7fab
14:06:09 <markmc> I think there's a few things in there that'll make you happy dhellmann
14:06:18 <markmc> no reference to cfg.CONF
14:06:25 * dhellmann dances!
14:06:26 <markmc> transport URLs
14:06:32 <markmc> dispatcher stuff should be clearer
14:06:39 <markmc> there's something else
14:06:43 <markmc> oh, yes
14:06:50 <dhellmann> cool, dispatching was the part I was supposed to put in the wiki
14:06:54 <markmc> messaging.transport, messaging.target
14:07:06 <markmc> kinda forms a base messaging layer
14:07:10 <markmc> independent of RPC details
14:07:21 <markmc> now, there's no user-facing API for dealing with raw messages
14:07:28 <markmc> but at least the code is split that way
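A rough sketch of the transport/target split being described (Transport, Target and the URL idea come from the branch; the rest of the names here are illustrative stand-ins, not the actual code):

    from collections import namedtuple

    # Base messaging layer: a Transport is constructed from a URL (no
    # cfg.CONF involved) and only moves messages; a Target names where
    # they go. The RPC client/server code is layered on top of these.
    Target = namedtuple('Target', 'exchange topic namespace version server')

    class Transport(object):
        def __init__(self, url):
            self.url = url  # e.g. 'rabbit://user:secret@broker:5672/'

    def get_transport(url):
        # In the real design the URL scheme would select a driver
        # (rabbit, qpid, zmq); here we just wrap it.
        return Transport(url)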
14:07:42 <dhellmann> that's good enough for me, for now
14:07:48 <dhellmann> should we go ahead and post comments on github?
14:07:52 * jd__ eyebrow raising
14:07:56 <simo> markmc: oh now I see why my stuff stopped working (no CONF etc..) :-)
14:08:12 <markmc> I need to update the wiki page with how the design has changed a bit
14:08:16 <markmc> jd__, whyso ?
14:08:31 <markmc> simo, don't follow?
14:08:45 <simo> markmc: with russellb we were discussing having better initialization of rpc so that we can set things like 'source' name for signing
14:08:48 <markmc> so, I could walk through the patch
14:08:49 <jd__> markmc: good surprise you worked on this so quickly :)
14:08:50 <ewindisch> simo: I believe this is mark's private repository,
14:08:53 <ewindisch> this isn't upstream yet
14:08:56 <markmc> talk to the changes etc.
14:08:59 <simo> I currently have some not very nice globals as there is no way to pass down info
14:09:06 <markmc> or would people just prefer to go and poke at it in their own time?
14:09:13 <markmc> jd__, ah, ok :)
14:09:15 <simo> ewindisch: yeah I realized too late after I typed the above :)
14:09:20 <markmc> jd__, it's all just stubbed out for now
14:09:35 <markmc> simo, the new design should definitely help
14:09:56 <markmc> simo, but I'd be interested in looking in detail later
14:10:09 <ewindisch> markmc: I still think the idea of a server is effectively broken as it pertains to a target
14:10:28 <markmc> ewindisch, I don't understand the statement, so elaborate
14:11:12 <markmc> dhellmann, comments in github, on the mailing list, wherever - all good
14:11:31 <dhellmann> ok
14:11:32 <markmc> dhellmann, pull requests happily received too :)
14:11:32 <ewindisch> markmc: we don't have a "server" to send things through with ZeroMQ, but we could institute a different messaging domain of some sort.
14:11:43 <ewindisch> for instance, we can do messaging over a different port
14:11:54 <markmc> ewindisch, the "server" concept is abstracted away from the underlying transport
14:11:56 * rustlebee appears late
14:12:04 <markmc> when you invoke a method, you're invoking it on a server
14:12:07 <markmc> or multiple servers
14:12:17 <markmc> it is fundamentally a client/server API
14:12:35 <markmc> it's abstract enough that the transport can handle that any way it makes sense
14:12:53 <markmc> the client doesn't address a server
14:13:07 <ewindisch> markmc: oh, Target(server=) is specific to sending to a host?
14:13:16 <markmc> it addresses an (exchange, topic, namespace, topic) tuple
14:13:22 <markmc> where there are a pool of servers listening
14:13:31 <markmc> it can send direct to one of them by narrowing to
14:13:38 <markmc> (exchange, topic, namespace, topic, server)
14:13:49 <ewindisch> I suppose that is confusing to me because RPCClient refers to host= on prepare()
14:14:18 <simo> markmc: can you give me an example of what that tuple would actually look like ? (why 2 topics for example?)
14:14:41 <markmc> ewindisch, thanks, that host reference is a bug - will fix
14:14:55 <markmc> oh, there's an important caveat - no python interpreter has read these files yet :)
14:15:02 <markmc> simo, ok
14:15:05 <markwash> :-)
14:15:07 <ewindisch> markmc: I still think peer is clearer, but I have no real complaint as long as we're consistent
14:15:26 <flaper87> :P
14:15:26 <markmc> (nova, scheduler, <null>, '1.0', 'server1')
14:15:28 <markmc> versus
14:15:39 <markmc> (nova, compute, base, '1.5', 'server2')
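Spelled out as code, assuming the field layout markmc lists, those two examples look roughly like this (dropping the server field broadens the address back to the whole pool):

    from collections import namedtuple

    Target = namedtuple('Target', 'exchange topic namespace version server')

    # (nova, scheduler, <null>, '1.0', 'server1'):
    t1 = Target('nova', 'scheduler', None, '1.0', 'server1')

    # (nova, compute, base, '1.5', 'server2'):
    t2 = Target('nova', 'compute', 'base', '1.5', 'server2')

    # Without a server, any one of the pool of listeners handles it:
    pool = t1._replace(server=None)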
14:15:50 <dhellmann> why have both topic and namespace?
14:15:51 <markmc> ewindisch, they're fundamentally not peers
14:16:03 <markmc> dhellmann, russell added it very recently
14:16:12 <dhellmann> so that's in the existing api?
14:16:17 <markmc> dhellmann, allows you to have multiple interfaces hanging off a topic
14:16:27 <simo> markmc: '1.0' IS A TOPIC NAME ?
14:16:29 <markmc> dhellmann, his use case was having a "base" interface that all topics expose
14:16:30 <dhellmann> yeah, I wasn't sure why that was a good idea
14:16:30 <simo> oops, caps :/
14:16:35 <markmc> simo, sorry, no - it's a version
14:16:44 <ewindisch> markmc: I disagree? In the abstract, they are. In AMQP, they're peers communicating through a broker. In ZeroMQ they're directly communicating peers.
14:16:50 <dhellmann> like a ping()?
14:16:51 <markmc> simo, (exchange, topic, namespace, version)
14:16:55 <simo> ahhh
14:17:02 <simo> ok now it makes more sense
14:17:17 <simo> markmc: so you envision the ability for the client to choose the message version too ?
14:17:18 <markmc> ewindisch, you're wrong - they can be peers at the transport level, not from the perspective of e.g. Nova calling a method on a compute node
14:17:26 <rustlebee> dhellmann: https://github.com/openstack/nova/blob/master/nova/baserpc.py
14:17:30 <markmc> ewindisch, in that case, the scheduler is a client and compute is a server
14:17:31 <simo> is this the envelope version or rpc version ?
14:17:39 <markmc> ewindisch, the roles can be reversed later
14:17:49 <markmc> ewindisch, but in terms of that call operation, there is a client and server
14:18:01 <dhellmann> rustlebee: thanks
14:18:09 <markmc> simo, rpc version, envelope version isn't exposed in the API
14:18:17 <markmc> simo, it's a transport implementation detail
14:18:23 <rustlebee> dhellmann: that's how it's used so far ... every service has that served up, as well as its service specific api
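A sketch of the base-interface use case rustlebee describes, with invented endpoint classes (not nova's actual baserpc code): two interfaces served off the same topic, told apart by namespace:

    # Two endpoints on one topic; the namespace field of an incoming
    # message selects which one the dispatcher routes to.
    class BaseRPCAPI(object):
        namespace = 'base'  # the interface every service exposes

        def ping(self, context, arg):
            return arg  # trivial liveness check

    class ComputeAPI(object):
        namespace = None    # default, service-specific interface

        def start_instance(self, context, instance_id):
            print('starting %s' % instance_id)

    # A compute server would register both: [BaseRPCAPI(), ComputeAPI()]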
14:18:23 <simo> ok, sorry still making my way through some of this stuff
14:18:34 <markmc> simo, np
14:18:38 <ewindisch> again, no complaint as long as we're consistent with the terminology we choose
14:19:06 <simo> markmc: I assume server is actually something like: compute.my.little.nodeout.there ?
14:19:11 <dhellmann> rustlebee: ok, I think I saw that review go past, but haven't had a chance to look at it in detail. Makes some sense, although I think we're bleeding dispatch abstractions into the caller.
14:19:16 <markmc> simo, server is a hostname
14:19:18 <markwash> markmc: FWIW I'm thrilled at the new interface
14:19:26 <simo> markmc: without the type ?
14:19:27 <markmc> simo, doesn't have to be DNS resolvable though
14:19:33 <ewindisch> I need to look through this to see how cells are handled. I suppose via the transport uri?
14:19:38 <markmc> simo, type? as in str?
14:19:40 <markwash> markmc: and the code groks a lot better for me as well
14:19:44 <simo> service type
14:19:48 <markmc> ewindisch, yes, transport url
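For cells, the idea here is roughly that each cell record carries its own transport URL, so messages can cross brokers without consulting global config; the URLs and helper below are made up:

    # Hypothetical per-cell transport URLs; the scheme picks the driver
    # and the remainder is driver-specific addressing.
    CELLS = {
        'cell-a': 'rabbit://nova:secret@broker-a:5672/',
        'cell-b': 'qpid://nova:secret@broker-b:5672/',
    }

    def url_for_cell(name):
        # A real implementation would hand this to get_transport();
        # the point is that no global config is needed per cell.
        return CELLS[name]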
14:19:48 <rustlebee> dhellmann: yeah, would be happy to hear your feedback.  i'm not tied to how it works ... if we change it, the sooner the better
14:19:49 <simo> compute/scheduler/whatever
14:19:53 <markmc> simo, that's the topic
14:20:06 <markmc> markwash, thanks
14:20:08 <simo> markmc: so you keep them separate ok,
14:20:17 * ttx appears later
14:20:57 <dhellmann> rustlebee: well, I still think we should just define message types and have RPC callers indicate where to send a response, but I think I've lost that argument a few times so I'm no longer pushing it. :-)
14:21:26 <markwash> dhellmann: oh interesting
14:21:49 <flaper87> dhellmann: I don't know what that argument was about
14:21:50 <dhellmann> markmc: have you had a chance to look at tulip, and how this may or may not play well there?
14:21:53 <flaper87> but sounds interesting
14:22:13 <markmc> dhellmann, nope, would be happy to get a pointer and some thoughts on relevance
14:22:22 <dhellmann> flaper87: clients shouldn't know which method in a server they're invoking. They should send a message asking for data or action, and the server should be able to dispatch that however it wants.
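A sketch of the distinction dhellmann is drawing, with invented message shapes: the RPC style names a server-side method, while the message style only states intent and a reply address:

    # RPC style: the caller knows the server-side method name.
    rpc_msg = {'method': 'start_instance',
               'args': {'instance_id': 'i-123'},
               'version': '1.5'}

    # Message style: the caller states what it wants and where to
    # reply; the server dispatches that however it likes.
    typed_msg = {'type': 'instance.start.requested',
                 'data': {'instance_id': 'i-123'},
                 'reply_to': 'api-server-7'}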
14:22:36 <rustlebee> markmc: thanks a lot for driving this new API ... long needed, and looking really nice
14:22:42 <markmc> rustlebee, thanks
14:23:00 <dhellmann> markmc: I need to research more myself, but was hoping you had more time. I wonder, for example, if Listener should be iterable, though?
14:23:16 <dhellmann> #link http://www.python.org/dev/peps/pep-3156/
14:23:44 <dhellmann> that may be a driver-level thing, rather than something the API needs to worry about
14:24:01 <dhellmann> #link http://code.google.com/p/tulip/
14:24:18 * dhellmann hopes to get rid of eventlet with the move to python 3
14:24:21 <ewindisch> I do think we shouldn't continue to be tied to eventlet.
14:24:27 <markmc> dhellmann, yeah, Listener could well be improved - I think the concept of the dispatcher saying "give me a message", then dispatching it and then saying "thanks I'm done with that message" is nice
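A minimal sketch of that get/dispatch/done cycle (method names are assumptions, not the branch's actual Listener API); the explicit "done" also gives drivers a natural acknowledgement point:

    class Listener(object):
        """Hypothetical driver-side object a server polls."""
        def poll(self):
            """Block until a message is available and return it."""
            raise NotImplementedError

        def done(self, message):
            """Tell the driver the message is fully processed (ack)."""
            raise NotImplementedError

    def serve(listener, dispatcher):
        while True:
            msg = listener.poll()      # "give me a message"
            dispatcher.dispatch(msg)   # route to the right endpoint
            listener.done(msg)         # "thanks, I'm done with it"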
14:24:56 <ewindisch> EventletRPCDispatcher seems the opposite of that, tying our API intimately with Eventlet
14:24:59 <jd__> any move to designing the API to be asynchronous?
14:25:16 <dhellmann> ewindisch: the idea would be we could switch dispatchers without rewriting everything else
14:25:19 <rustlebee> dhellmann: sounds like a pretty reasonable time to finally get that done ... especially if it's going to be baked into Python
14:25:22 <markwash> ewindisch: does it? it seems like the api is saying "you can replace this dispatcher with something equivalent"
14:25:25 <dhellmann> and every driver wouldn't have eventlet stuff in it
14:25:33 <rustlebee> dhellmann: kind of makes the appeal of looking into an alternative in the meantime much smaller ...
14:25:50 <dhellmann> rustlebee: right on both counts, but trying to plan ahead to avoid major issues
14:25:58 <rustlebee> dhellmann: sure
14:26:02 <markmc> dhellmann, re: eventlet, that's totally why I've made it an explicit part of the API
14:26:11 <dhellmann> markmc: yep
14:26:12 <markmc> dhellmann, i.e. services can move off eventlet by adding a new dispatcher
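A sketch of why the explicit dispatcher helps: the concurrency model lives in one pluggable class, so a blocking (or, later, tulip-based) variant can replace the eventlet one without touching the drivers. The class below is illustrative, not the branch's code:

    class BlockingRPCDispatcher(object):
        """Runs each method call inline; no green threads needed."""
        def __init__(self, endpoints):
            self.endpoints = endpoints

        def dispatch(self, msg):
            for ep in self.endpoints:
                method = getattr(ep, msg['method'], None)
                if method:
                    return method(**msg.get('args', {}))
            raise RuntimeError('no endpoint handles %s' % msg['method'])

    # The EventletRPCDispatcher mentioned above would expose the same
    # interface but spawn a greenthread per message; nothing else in
    # the stack would need to change.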
14:26:33 <flaper87> FWIW, makes totally sense to me
14:26:51 <dhellmann> is that Listener class used in the client and the server?
14:26:55 <flaper87> s/totally/total/
14:26:57 <markmc> dhellmann, nope, just the server
14:26:58 <ewindisch> markmc: drivers themselves do tricks with eventlet. ZeroMQ for sure. In fact, the driver would probably be rewritten quite a bit to work with tulip.
14:27:04 <dhellmann> ok
14:27:27 <markmc> ewindisch, driver eventlet tricks will need to be similarly abstracted
14:27:37 <rustlebee> yar
14:27:41 <ewindisch> so that introduces a challenge as well… some of that might be solvable by recognizing where we're spawning threads and abstracting that out, preferably into common
14:27:41 <markmc> ok, that's 25 minutes of details ... got to move on
14:27:57 <dhellmann> yeah, maybe the dispatcher needs to know about a class in the driver, either through convention or a plugin loader
14:27:59 <markmc> the thing I wanted to do a straw poll on was in the etherpad
14:28:06 <markmc> https://etherpad.openstack.org/HavanaOsloMessaging
14:28:14 <markmc> "Two approaches to getting this merged when we're happy with the current design:"
14:28:34 <markmc> the first, incremental approach would be the way I'd usually prefer to do it
14:28:54 <markmc> but (and this is a big but) if we thought we could parallelize and get this done early enough in havana
14:29:04 <markmc> I could live with doing this in parallel with the old code base
14:29:09 <simo> markmc: this is effectively a parallel API
14:29:12 <simo> right ?
14:29:15 <markmc> so long as we do actually delete the old code base soon
14:29:24 <markmc> simo, parallel API is option (2) yeah
14:29:42 <flaper87> I'm more for option 2
14:29:46 <markmc> simo, and really this choice impacts the message security work more than anything else
14:29:47 <simo> markmc: is there any chance you can change the old API to use the new one underneath ?
14:29:51 <simo> or is that too hard ?
14:29:57 <ewindisch> markmc: in option #2, you'd fork the drivers?
14:29:59 <markmc> simo, that's option (1) I think
14:30:04 <markmc> ewindisch, yes
14:30:13 <flaper87> seems like cleaner and with less risks
14:30:28 <ewindisch> for me, that seems like twice the amount of work in Havana.
14:30:31 <markmc> simo, oh, I see what you mean - no that's another option
14:30:37 <flaper87> at the end, the old rpc would be removed anyway
14:30:41 <markmc> ewindisch, give an example of twice the amount of work?
14:30:58 <simo> markmc: that might be a way to get stuff in initially, in parallel - test a few services with the new api
14:31:04 <simo> and subvert all others with the wrapper
14:31:11 <simo> and kill the original rpc de facto
14:31:13 <dhellmann> is the goal to get all of the other projects using this by the end of H, or just to get it working by then?
14:31:16 <jd__> didn't get any answer, so retrying: any move to designing the API to be asynchronous?
14:31:18 <simo> then slowly convert services to the new api
14:31:26 <markmc> simo, I'm not too concerned about converting the services to the new API, it's more about implementing the drivers under the new API and such
14:31:33 <ewindisch> markmc: I arguably have a full release's worth of work to do on rpc-related stuff. I'll need to get those things into the old rpc driver. I'll also need them in the new driver
14:31:37 <markmc> simo, we couldn't support the old API with the new API
14:31:52 <simo> markmc: I am for whatever reduces the amount of time 2 implementations live in parallel
14:32:10 <dhellmann> jd__: I think that's related to the discussion of dispatcher above. We're trying to abstract that out of the drivers. Do you want the public API to be async?
14:32:15 <ewindisch> but I could move the matchmaker out into its own top-level oslo-incubator project and not duplicate that code
14:32:15 <markmc> simo, if we parallelized, I'd say the goal would be to get all the servers moved over by havana-2
14:32:27 <jd__> dhellmann: yes
14:32:31 <ewindisch> which would make this all much easier for me
14:32:35 <simo> markmc: what are the risks here ?
14:32:40 <markmc> ewindisch, well, first - getting this done asap helps zmq more than most anything else
14:32:53 <markmc> ewindisch, so a bit of pain in the short term for zmq should help you in the long run
14:32:58 <dhellmann> jd__: I'd be interested to see what that looks like
14:33:11 <markmc> ewindisch, and if you have new zmq stuff to do in havana, and we do the parallel approach, just do it in the new API
14:33:21 <markmc> ewindisch, since the old one would be dead and gone by havana-2
14:33:31 <simo> jd__: truly asynchronous APIs are really hard, what is the driver ?
14:33:48 <jd__> dhellmann: I've some experience with this related to my years working on XCB, so I'd be happy to help, I think it may be interesting :)
14:33:52 <ewindisch> markmc: if the old one will be gone in havana-2 and we get the projects to consume it, then I'm okay with parallelizing it.
14:34:09 <markmc> jd__, I'd be interested in ideas for sure
14:34:15 <dhellmann> +1
14:34:16 <markmc> ewindisch, cool
14:34:19 <ewindisch> I do see a lot more risk to that, though
14:34:22 <rustlebee> does it need to be in tree?  a feature branch hacking the current API until it's ready?
14:34:31 * mordred also raises eyebrows
14:34:34 <ewindisch> (if we don't get it done by havana-2, or if the projects don't consume it)
14:34:37 <markmc> well, getting it done for havana-2 assumes we do actually parallelize the work
14:34:47 <mordred> gah. sorry - responding to scrollback - forget I was scrolled back :)
14:34:49 <markmc> so, that's what the list of tasks are for
14:34:58 <ewindisch> it sounds less like parallelizing the work and abandoning the old stuff
14:35:15 <markmc> ewindisch, saw what?
14:35:26 <markmc> say what, I mean?
14:35:27 <ewindisch> *and more like abandoning
14:35:45 <markmc> so you don't think this parallelizes the work of getting to the new API?
14:36:12 <simo> markmc: I am thinking that if you cannot make a wrapper to use the old code with the new api, then putting the two in, in parallel, quickly converting all services, and finally ripping out the old interface would be the way to go
14:36:12 <markmc> what I mean is, we can have multiple people sign up for the the tasks in the etherpad
14:36:13 <ewindisch> markmc: if we're removing the old rpc code in havana-2, I don't think we'll be doing much work on the old rpc code at all.
14:36:29 <markmc> if we have enough people willing to help, then havana-2 is probably achievable
14:36:34 <markwash> re new ideas for the interface, all the ones mentioned have sounded cool, but if we want to make the switch in this release, we need to timebox the process of discussing/implementing other interface ideas to ASAP in havana
14:36:43 <markmc> but e.g. if I need to port each driver myself, then port each project, etc.
14:36:43 <simo> markmc: otherwise you need a flag day to change everything in one go and that may be painful
14:36:47 <markmc> havana-2 is less likely
14:36:55 <ewindisch> markmc: parallelizing the task list is another matter, perhaps I misunderstood your intention there.
14:36:56 <markmc> and the parallel approach isn't useful anyway
14:37:47 <simo> markmc: Is it too harsh to tell people: port your driver within day X or it will be ripped out ? :-)
14:37:48 <markmc> simo, well, the "beauty" of oslo-incubator's "managed copy and paste" approach is that incompatible changes don't need a flag day
14:37:59 <markmc> simo, heh
14:38:13 <markmc> ok, anyone up for signing up to any of those tasks ?
14:38:23 <markmc> e.g. who'll port the rabbit, qpid and zmq drivers?
14:38:29 <simo> ah "beauty"  ... first time I hear ti in the context of copy&paste :)
14:38:38 <flaper87> qpid +1
14:39:01 <markmc> simo, "less ugly" - that's what the "managed" bit implies :)
14:39:02 <simo> markmc: so the message format is totally unchanged here ?
14:39:05 <markmc> flaper87, cool
14:39:08 <markmc> simo, yes
14:39:10 <ewindisch> markmc: I'm obviously gonna lead porting zmq.
14:39:15 <markmc> ewindisch, great
14:39:29 <simo> markmc: oh well in that case I am ok either way given no flag day is necessary
14:39:53 <simo> markmc: ah except any new feature will be blocked
14:40:12 <simo> because you cannot import oslo-incubator into a service until it is converted
14:40:25 <markmc> markwash, re: tests - what I'd love if someone could take that github branch and start writing basic tests for the framework
14:40:29 <ewindisch> markmc: special interests for me will be ceilometer, quantum, and nova-cells.
14:40:31 <markmc> markwash, maybe a fake driver would help
14:40:44 <markwash> markmc: cloning now
14:40:47 <markmc> simo, yes
14:40:52 <markmc> markwash, awesome
14:40:57 <flaper87> markmc: forked
14:40:59 <rustlebee> there is a fake driver now
14:41:03 <markmc> ewindisch, that's a lot
14:41:09 <seagulls> (flaper87 - you can count on me for expedited review and sounding board for qpid driver port)
14:41:10 <simo> markmc: gives incentive to port stuff though
14:41:13 <markmc> I'd really like someone more familiar with cells to take on that
14:41:14 <ewindisch> markmc: I'm not saying I'd port them.
14:41:15 <simo> so I am for it
14:41:23 <markmc> ewindisch, ok, that's what I'm asking about
14:41:37 <markwash> markmc: which branch is it on your repo?
14:41:38 <flaper87> seagulls: awesome, thanks
14:41:39 <ewindisch> markmc: I'm saying I'm going to be keeping an eye on them -- so I'll be a resource on them, but as you said, I'll be too busy to lead.
14:42:10 <markmc> rustlebee, yeah, porting the fake driver could be the way to start
14:42:26 <markmc> markwash, messaging
14:43:03 <markmc> ok, great
14:43:13 <markmc> let's assume we're going for option (2)
14:43:18 * markwash has to catch a bus in 5 minutes, will see what he can get done on the train this morning
14:43:19 <ewindisch> should we split out developer versus deployer docs?
14:43:28 <flaper87> markmc: FWIW, I prefer option 2
14:43:34 <markmc> I won't block new features in openstack.common.rpc until we're more confident about getting this done for havana-2
14:43:39 <markmc> markwash, thanks
14:43:47 <markwash> yes, I take a bus AND a train
14:44:01 <markmc> switching topics
14:44:06 <markmc> #topic message security
14:44:11 <seagulls> markmc: I'm leaning to 2.. frankly the argument that the drivers are going to go away anyways was the key winning point
14:44:18 <markmc> #link https://wiki.openstack.org/wiki/MessageSecurity
14:44:20 <markmc> seagulls, great
14:44:26 <markmc> simo, want to give us an update?
14:45:04 * rustlebee is pumped to see this moving
14:46:08 * markmc notes the reviews simo just pushed
14:46:17 <simo> markmc: sure
14:46:25 <markmc> #link https://review.openstack.org/28154
14:46:30 <simo> so I have code that implements all the crypto described in the wiki page
14:46:32 <markmc> #link https://review.openstack.org/28153
14:46:40 <simo> and I am cleaning up the 2 patches after gerrit complained a bit :)
14:46:51 <ewindisch> I've looked at the reviews. Considering how much discussion on this is still in progress, I'd like to see this abstracted a bit more.
14:46:59 <simo> the patches have tests to verify all is in good working order
14:47:22 <simo> and the code is structured so that messaging is completely optional at this point
14:47:35 <simo> ewindisch: abstracted how/where ?
14:48:37 <ewindisch> simo: such that a secure-messaging driver can be selected. There is a good chance we'll also push PKI.
14:49:12 <markmc> ewindisch, can be done later - it'll be hidden behind the public facing API
14:49:12 <simo> ewindisch: it should be easy to subclass SecureMessage
14:49:36 <simo> as you can see it only offers 'encode' and 'decode'
14:49:49 <simo> so the upper layer has no knowledge of what is being used
14:50:13 <ewindisch> markmc: true, but some of the config options that might be specific to the shared-key approach seem to be in __init__.py
14:50:17 <simo> and the data needed is passed in through 'metadata' which is just a dict
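A sketch of the surface simo describes: only encode/decode are exposed, extra inputs ride in a plain metadata dict, and a PKI variant would just be a subclass. Signatures here are assumptions, not the actual patch:

    class SecureMessageException(Exception):
        pass

    class SecureMessage(object):
        """Shared-key signing/encryption, per the wiki design."""
        def __init__(self, source, metadata):
            self.source = source      # e.g. 'compute.host1', for signing
            self.metadata = metadata  # plain dict: keys, ticket info, ...

        def encode(self, version, target, json_msg):
            # sign (and optionally encrypt) before handing to the driver
            raise NotImplementedError

        def decode(self, version, source, json_msg):
            # verify; raise SecureMessageException on any failure
            raise NotImplementedError

    class PKISecureMessage(SecureMessage):
        """What a later PKI driver could look like: same two methods."""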
14:50:47 <simo> ewindisch: 2 options
14:50:55 <markmc> ewindisch, yes, we'd introduce a config option later to choose a different security driver - it's not an issue
14:50:56 <simo> 1. whether secure messaging is optional or required
14:50:58 <ewindisch> and errors note stuff like HMAC, etc..
14:51:00 <simo> that is universal
14:51:06 <markmc> ewindisch, we have transport driver specific config options similarly
14:51:14 <simo> 2. keys and adding pki: type will be banal
14:51:24 <ewindisch> I'm not saying this needs a rewrite or anything. We might need to just sanitize it slightly
14:51:35 <markmc> let's take it as a given we can abstract this further and add alternative drivers later
14:51:37 <rustlebee> all sounds like easy enough stuff to add if you actually did another driver at some point
14:51:41 <simo> ewindisch: there is a SecureMessageException class
14:51:46 <simo> you just need to catch on that
14:51:46 <ewindisch> especially since if we *do* change algorithms later (even for this implementation) we don't want to have to update all our docstrings, etc
14:52:03 <simo> and you'll extend it with PKI specific tests for PKI exceptions
14:52:03 <markmc> ewindisch, I want to move on - your point is taken
14:52:04 <ewindisch> when you think about translations, etc
14:52:22 <markmc> simo, how much work is required on the keystone side?
14:52:25 <dhellmann> how does this fit in with the new API?
14:52:35 <simo> markmc: well technically it is just 2 REST calls
14:52:43 <simo> markmc: but a database is also needed
14:52:44 <markmc> dhellmann, good question - I don't think it's visible through the API
14:52:51 <simo> the crypto is all in crypto already
14:52:56 <simo> so I will probably just reuse it
14:53:08 <markmc> simo, ok
14:53:08 <simo> *all in crypto.py*
14:53:24 <dhellmann> markmc: the set_service_name() and get_service_name() functions that operate on globals may need to be updated to take into account multiple listeners
14:53:27 <markmc> simo, are keystone folks on board with adding those?
14:53:29 <dhellmann> either that, or they go away
14:53:36 <simo> markmc: I will work on it starting today, but I have some travel in less than 10 days
14:53:46 <markmc> simo, would they be in the v3 API? so, you'd have to have that API enabled for this?
14:53:46 <simo> so I might have it ready and tested by the end of may I think
14:53:55 <markmc> simo, ok
14:54:08 <simo> markmc: I am thinking we might want a separate endpoint for keyserver
14:54:09 <markmc> dhellmann, ok, will look at that
14:54:15 <markmc> simo, ah!
14:54:19 <simo> keystone is just the project where it lives
14:54:24 <rustlebee> i figured they would become arguments to client/server classes
14:54:28 <simo> but we may want to be able to move it elsewhere
14:54:29 <rustlebee> and localized there
14:54:39 <simo> markmc: not set in stone yet though
14:54:54 <dhellmann> rustlebee: the service name stuff?
14:54:58 <rustlebee> dhellmann: yeah
14:55:01 <simo> markmc: adam said he needs to think about database upgrade procedures for example
14:55:06 <markmc> rustlebee, or Target properties?
14:55:17 <dhellmann> rustlebee: yeah, that makes sense -- I just wanted to point it out, since it's another global
14:55:21 <simo> markmc: but the code is well self-contained so I think by the end of the month we'll have that settled too
14:55:23 <rustlebee> dhellmann: indeed
14:55:51 <markmc> simo, excellent
14:55:54 <rustlebee> re: keyserver specific endpoint ... makes sense, but also need to keep in mind that others in the community are working on a keyserver service
14:56:06 <rustlebee> and i would expect them to want to expose that endpoint
14:56:19 <rustlebee> but maybe it starts in keystone and moves there if/when that ever becomes a real thing
14:56:41 <seagulls> simo: the different endpoint thing makes a whole lot of sense... especially if we consider keystone's main domain different than the secure messaging domain. Allows for different deployment, implementation, etc. scenarios
14:56:42 <simo> rustlebee: key manager != key server
14:56:51 <seagulls> simo: +1 on that
14:57:07 <simo> yes that's the thinking
14:57:58 <markmc> ok, well that sounds like serious progress
14:58:00 <markmc> nice work simo
14:58:05 <markmc> one last topic
14:58:07 <simo> maybe I'll call it KDC just to avoid confusion with key manager
14:58:18 <markmc> #topic project meetings
14:58:23 <markmc> opinions?
14:58:25 <mordred> wow. that sounds like a kerberos thing
14:58:28 <markmc> just schedule it for every week?
14:58:36 <markmc> or on demand if people have topics?
14:58:38 <simo> mordred: this design is very similar to kerberos ;-)
14:58:43 <dhellmann> every other week?
14:58:43 <mordred> simo: yes. :)
14:58:43 <markmc> is this time really ok with everyone?
14:58:51 <dhellmann> works for me
14:58:55 <seagulls> this time is awesome for me
14:59:09 <mordred> ++ (even though I was just lurking today)
14:59:14 <ttx> haven't participated much in this one but I liked the time
14:59:16 <rustlebee> works for me, i need to put it on my calendar
14:59:18 <ewindisch> what time is this in EST? :)
14:59:21 <simo> markmc: it's ok although on fri I try to not do meetings :)
14:59:24 <mordred> simo: I may have wondered why we don't just use kerberos and add a rest api...
14:59:32 <markmc> simo, same here :)
14:59:37 <ewindisch> this worked really well for me in CEST, but this isn't my typical timezone...
14:59:39 <markmc> mordred, hehe
14:59:42 <simo> mordred: I want to be able to get there, but I want something that works for Havana
14:59:52 <dhellmann> ewindisch: 10:00 AM
14:59:53 <simo> mordred: and we do not have the time to make that be real kerberos
14:59:54 <markmc> ok, great
15:00:01 <markmc> let's wrap this up on time :)
15:00:04 <markmc> #endmeeting