19:05:14 <kgriffs> #startmeeting marconi
19:05:15 <openstack> Meeting started Thu Apr  4 19:05:14 2013 UTC.  The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:05:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:05:19 <openstack> The meeting name has been set to 'marconi'
19:05:40 <kgriffs> #topic State of the project
19:06:39 <kgriffs> So, we are close to having our demo ready for Portland. At that point, Marconi will be feature-complete, but will still have a lot of error handling and optimizations to complete.
19:07:11 <kgriffs> Sometime in the next few days we will also have a public sandbox ready that everyone can play with.
19:07:23 <flaper87> w000000t
19:07:41 <kgriffs> :D
19:08:18 <kgriffs> flaper87 has been making excellent progress on the mongodb storage driver, and we also have a reference driver based on sqlite.
19:09:11 <kgriffs> jdprax has been coding away on the client lib
19:10:10 <kgriffs> jdprax: can you comment?
19:10:12 <jdprax> We're still coding on the client library, but our gerrit config was rejected because essentially they want us to set up pypi for it now or never.
19:10:15 <jdprax> ...
19:10:43 <jdprax> So I'm leaning toward "never", and we'll just push it ourselves.
19:10:44 <flaper87> jdprax: so, basically we have to release a first version before getting it into gerrit ?
19:10:58 <jdprax> That's my understanding.
19:11:03 <jdprax> :-/
19:11:09 <jdprax> But hey, not a big deal.
19:11:23 <jdprax> Honestly we've just been swamped so I haven't followed up as closely on it as I should have.
19:11:29 <flaper87> jdprax: what about pushing some dummy code there as placeholder ?
19:11:43 <flaper87> I mean, on pypi
19:12:13 <jdprax> Ah, pushing dummy code to pypi?
19:13:08 <flaper87> jdprax: yeah, some package with version 0.0.0.0.0.0.0.0.0.0
19:13:11 <flaper87> .0.0
19:13:13 <flaper87> and .0
19:13:24 <bryansd> .1
19:13:48 <flaper87> bryansd: .1/2 ?
19:14:23 <flaper87> jdprax: seriously, if that's the problem then I'd say let's lock that slot on pypi with a dummy package and get the client on stackforge
19:14:54 <kgriffs> sounds like a plan
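For reference, reserving a name on PyPI only takes a trivial package. Below is a minimal sketch of such a placeholder; the package name is an assumption, since the meeting never settles on one.

```python
# setup.py -- minimal placeholder to reserve the name on PyPI.
# "python-marconiclient" is an assumed name, not confirmed in the log.
from setuptools import setup

setup(
    name="python-marconiclient",
    version="0.0.1",
    description="Placeholder for the Marconi client library",
    py_modules=[],  # no code yet; this release only locks the name
)
```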
19:15:07 <flaper87> fucking pypi, today is not working at all
19:15:25 <kgriffs> that should be a 7xx error
19:15:28 <jdprax> Hahaha
19:15:59 <kgriffs> ok, moving on… :p
19:16:01 <flaper87> kgriffs: I'd say 666
19:16:41 <jdprax> :-)
19:16:44 <kgriffs> #topic Finish reviewing the draft API
19:17:10 <kgriffs> ok, so over the past couple weeks there've been a bunch of changes to the API, hopefully for the better.
19:17:45 <kgriffs> so, first, is there anything in general you guys want to discuss based on the latest draft? If not, I've got a few specific areas I'd like to focus on.
19:18:20 <kgriffs> https://wiki.openstack.org/wiki/Marconi/specs/api/v1
19:19:33 <wirehead_> So, not to be too annoyingly bikesheddy, kgriffs… (we talked in person — I'm Ken) I love that the user-agent isn't overloaded like before, but maybe X-Client-Token instead of Client-ID?
19:19:52 <kgriffs> hey Ken
19:20:38 <flaper87> wirehead_: I'm afraid that can be a bit confusing for users since there may be other tokens (like keystone's)
19:20:45 <wirehead_> K
19:20:56 <wirehead_> Maybe true.  Still, would be more HTTP-ish to keep the X- as a prefix.
19:21:08 <wirehead_> I know I'm bikeshedding and I apologize for it. :)
19:21:09 <kgriffs> actually, I'm not sure I'd agree
19:21:29 <kgriffs> x-headers were never supposed to be used like that.
19:21:38 * kgriffs is looking for the RFC
19:21:57 <flaper87> kgriffs <- always has an RFC for everything
19:22:17 <kgriffs> http://tools.ietf.org/html/rfc6648
19:22:38 <jdprax> For the curious http://en.wikipedia.org/wiki/Bikeshedding
19:23:36 <kgriffs> I'm actually trying to figure out the process for registering new headers
19:23:55 <kgriffs> (seems like Client-ID is generic enough to be useful elsewhere)
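RFC 6648 deprecates the X- prefix for new application headers, which is why the draft spec uses a plain Client-ID. A minimal sketch of what a request might look like under the draft API; the endpoint URL is made up:

```python
import uuid

import requests

# One stable UUID per client instance, per the draft spec's Client-ID header.
CLIENT_ID = str(uuid.uuid4())

resp = requests.get(
    "http://marconi.example.com/v1/queues",  # hypothetical endpoint
    headers={
        "Client-ID": CLIENT_ID,              # no X- prefix, per RFC 6648
        "X-Auth-Token": "<keystone token>",  # Keystone's header predates the RFC
    },
)
print(resp.status_code)
```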
19:24:19 <wirehead_> well, if we want to rabbit hole, you could always silently implement it as a cookie.
19:24:21 <kgriffs> (or maybe mnot and his posse will have a better suggestion…TBD)
19:24:26 <kgriffs> oh boy
19:24:35 <russell_h> I'm curious about authorization
19:24:37 <wirehead_> note that I didn't say "we should implement it as a cookie"
19:24:50 <kgriffs> I'm embarrassed to say the thought did cross my mind...
19:25:00 <kgriffs> russel-h: shoot
19:25:09 <kgriffs> russell_h
19:25:12 <russell_h> kgriffs: any plans to support queue-level permissions?
19:25:18 <russell_h> the spec is a little vague about this
19:25:47 <russell_h> but if you wanted to do this, you would presumably need to track them as a property of the queue
19:25:51 <kgriffs> we have thought about it, and I think it would be great to have, but that would best be implemented in auth middleware
19:26:29 <russell_h> right, but would you have to tell the middleware about the permissions of each queue, or where would that information actually go?
19:26:30 <kgriffs> it would be great if we could expand the Keystone wsgi middleware to support resource-level ACLS
19:26:52 <kgriffs> good question, we honestly haven't talked about it a lot
19:27:25 <wirehead_> Well, also some sort of "append only user"
19:27:32 <russell_h> does swift have anything like this?
19:27:35 <kgriffs> makes sense
19:28:27 <flaper87> we haven't talked that much about it but I guess that info will live in the queue
19:28:34 <russell_h> "You can implement access control for objects either for users or accounts using X-Container-Read: accountname and X-Container-Write: accountname:username, which allows any user from the accountname account to read but only allows the username user from the accountname account to write."
19:28:48 <russell_h> not a fan of that
19:29:28 <kgriffs> russell_h: what would you like to see instead?
19:29:35 <flaper87> or we could also have a PermissionsController per storage driver, to let it manage per-resource permissions
19:29:56 <flaper87> actually, that sounds like a good idea to my brain
19:30:15 <kgriffs> flaper87: I'm thinking we could add an _acl field to queue metadata
19:30:28 <russell_h> kgriffs: I was hoping someone else had a clever idea
19:30:45 <russell_h> I can't think of anything that doesn't involve describing groups of users
19:31:04 <kgriffs> then, just call out to the controller or whatever as a Falcon hook/decorator
19:31:05 <wirehead_> I have a repeat of my clever-but-bad-idea: Create anonymous webhooks
19:31:09 <flaper87> kgriffs: we could but having security data mixed with other things worries me a bit, TBH
19:31:24 <wirehead_> to push to a queue, hit a URL with a long token
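To make the _acl-in-metadata idea concrete, here is a hedged sketch of a Falcon before-hook along the lines kgriffs describes. The _acl layout, the load_queue_metadata helper, and the X-User-Id header are all assumptions, not part of the draft spec:

```python
import falcon

def load_queue_metadata(queue_name):
    # Hypothetical lookup; a real implementation would ask the storage driver.
    return {"_acl": {"read": ["alice"], "write": ["alice", "bob"]}}

def check_queue_acl(req, resp, resource, params):
    # Reject callers not listed in the queue's (assumed) _acl metadata field.
    acl = load_queue_metadata(params["queue_name"]).get("_acl", {})
    user = req.get_header("X-User-Id")  # assumed to be set by auth middleware
    if user not in acl.get("read", []):
        raise falcon.HTTPForbidden(
            title="Forbidden", description="No read access to this queue")

class MessagesResource:
    @falcon.before(check_queue_acl)
    def on_get(self, req, resp, queue_name):
        resp.media = []  # would fetch and return the queue's messages
```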
19:31:51 <russell_h> at any rate, I don't think the permissions issue needs to block a v1 API
19:31:55 <wirehead_> naw
19:32:32 <flaper87> russell_h: agreed
19:32:42 <flaper87> sounds like something for v2 and / or Ith release cycle
19:32:58 <kgriffs> russell_h, wirehead_: would you mind submitting a blueprint for that?
19:33:11 <kgriffs> https://blueprints.launchpad.net/marconi
19:33:27 <russell_h> sure, sounds fun
19:33:38 <flaper87> russell_h: thanks!!!!!!!!!!!!!!!!!!!!
19:35:21 <kgriffs> #action russell_h and wirehead_ to kickstart an auth blueprint
19:36:29 <kgriffs> #topic API - Claiming messages
19:36:49 <kgriffs> https://wiki.openstack.org/wiki/Marconi/specs/api/v1#Claim_Messages
19:37:19 <kgriffs> so, any questions/concerns about this section? We haven't had a chance to fully vet this with folks outside the core Marconi team.
19:38:32 <kgriffs> oops - just noticed that the id needs to be removed from the Query Claim response (just use the value of the Content-Location header)
19:39:47 * kgriffs fixes that real quick
19:40:32 <russell_h> so something I'm curious about
19:40:44 <russell_h> hmm, how to phrase this
19:41:20 <wirehead_> scud missile time, russell_h.
19:41:28 <russell_h> basically, can a message be claimed twice?
19:41:48 <russell_h> that is poorly phrased
19:41:48 <flaper87> russell_h: yes if the previous claim expired
19:41:57 <flaper87> not at the same time
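For the record, a hedged sketch of the claim lifecycle being described, using field names from the draft spec (which may still change); the endpoint URL and queue name are made up:

```python
import uuid
from urllib.parse import urljoin

import requests

BASE = "http://marconi.example.com"  # hypothetical endpoint
HEADERS = {"Client-ID": str(uuid.uuid4()), "X-Auth-Token": "<token>"}

# Claim up to 5 messages. While the claim is alive, no other worker can
# claim them; they only become claimable again after the claim expires.
resp = requests.post(
    urljoin(BASE, "/v1/queues/demo/claims?limit=5"),
    json={"ttl": 120, "grace": 60},  # field names per the draft spec
    headers=HEADERS,
)
messages = resp.json()

# Release the claim early once processing is done, instead of letting
# it time out.
requests.delete(urljoin(BASE, resp.headers["Location"]), headers=HEADERS)
```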
19:42:10 <russell_h> what if I want a message to be processed exactly once by each of 2 types of services
19:43:36 <russell_h> for example if I have a queue, and I want it to be processed both by an archival service and streaming query interface
19:43:58 <russell_h> I'd basically like to be able to specify some sort of token associated with my claim
19:44:16 <russell_h> "make sure no one else with token <russells-query-interface> claims this message"
19:44:22 <flaper87> russell_h: right now, that's not possible because we don't have routing for either queues or claims
19:44:48 <russell_h> so the eventual intention is that the message would be routed to 2 queues, and claimed there?
19:44:55 <wirehead_> That seems conceptually simpler to me
19:45:15 <kgriffs> yeah, seems like you could have something that pulls off the firehose and duplicates to two other queues
19:45:38 <kgriffs> alternatively, if they must be done in sequence, worker 1 would post to the second queue the next job to be done by worker 2
19:45:41 <wirehead_> Or a submit-to-multiple-queues
19:45:50 <flaper87> AFAIK, that's something AWS handles in the notification service
19:45:57 * flaper87 never mentioned AWS
19:46:06 <kgriffs> heh
19:46:09 <wirehead_> Just call it "That Seattle Queue"
19:46:19 <flaper87> so, that's something we'd add not because AWS does it, but because it's useful
19:46:30 <russell_h> I don't like the submit-to-multiple-queues idea, I think the point of queueing is to separate the concerns of publishers and consumers
19:47:01 <wirehead_> Or merely an internal tee
19:47:14 <flaper87> russell_h: actually, the concept behind queues is just the queue itself. It's the protocol that adds more functionality; AMQP, for example, adds exchanges, queues, routing_keys, and so on
19:47:15 <russell_h> right, I could probably get onboard with that
19:47:26 <flaper87> I don't like the idea of posting to 2 queues either
19:47:38 <flaper87> so, what you mentioned is really fair
19:47:57 <russell_h> yeah, I'd really like for this to be something that is up to the consumer
19:48:02 <russell_h> basically "who they are willing to share with"
19:48:53 <kgriffs> #agreed leave routing up to the consumer/app
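Since routing is being left to the consumer, here is one sketch of the "pull off the firehose and duplicate to two other queues" pattern kgriffs mentions, under the same draft-spec assumptions as above (endpoint and queue names are illustrative):

```python
import uuid
from urllib.parse import urljoin

import requests

BASE = "http://marconi.example.com"  # hypothetical endpoint
HEADERS = {"Client-ID": str(uuid.uuid4()), "X-Auth-Token": "<token>"}

def post_message(queue, body):
    # Message body format follows the draft spec: a list of {ttl, body} dicts.
    requests.post(urljoin(BASE, f"/v1/queues/{queue}/messages"),
                  json=[{"ttl": 300, "body": body}], headers=HEADERS)

# Claim a batch from the firehose and copy each message to both
# downstream queues before acking it.
claimed = requests.post(urljoin(BASE, "/v1/queues/firehose/claims?limit=10"),
                        json={"ttl": 60, "grace": 60}, headers=HEADERS)

for msg in claimed.json():
    post_message("archive", msg["body"])
    post_message("stream", msg["body"])
    # Deleting the claimed message acks it on the firehose; per the draft
    # spec, the message href already carries the claim id.
    requests.delete(urljoin(BASE, msg["href"]), headers=HEADERS)
```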
19:49:03 <flaper87> just want to add something more
19:49:42 <kgriffs> There's nothing saying we couldn't offer, as part of a public cloud, an add-on "workflow/routing-as-a-service"
19:49:46 <flaper87> consider that we've added another level that other queuing systems may lack: we also have tenants, which add a higher grouping level for messages, queues, and permissions
19:49:51 <kgriffs> but I like the idea of keeping Marconi lean and mean
19:50:39 <kgriffs> right, and another grouping is tags, which we're considering adding at some point (limited to a sane number to avoid killing query performance)
19:50:48 <flaper87> a solution might be to create more tenants and just use queues as routing spaghetti
19:51:15 <kgriffs> so, the nice thing about Marconi, is queues are very light-weight, so it's no problem to create zillions of them
19:51:47 <kgriffs> …as opposed to That Seattle Notifications Service™
19:51:54 <flaper87> concept, consistency and simplicity. Those are some things Marconi would like to keep
19:52:07 <flaper87> (Marconi told me that earlier today, during lunch)
19:52:30 <kgriffs> wow, he's still alive? That's one ooooold dude!
19:52:59 <flaper87> kgriffs: was he dead? OMG, I won't sleep tonight
19:53:35 <flaper87> gazillions > zillions
19:53:38 * kgriffs Zombie Radio Genius Eats OpenStack Contributor's Brain While He Sleeps
19:54:06 <flaper87> and that message was sent through and unknown radio signal
19:54:16 <flaper87> s/and/an/
19:54:26 <flaper87> moving on
19:54:26 <kgriffs> so, you guys can always catch us in #openstack-marconi to discuss claims and routing and stuff further.
19:54:27 <russell_h> the problem with more tenants is that it doesn't map well to how people actually use tenants
19:54:38 <russell_h> that can be overcome
19:54:50 <kgriffs> sure.
19:54:55 <kgriffs> let's keep the discussion going
19:55:03 <flaper87> russell_h: agreed, that was just a crazy idea that might work for 2 or 3 types of deployments
19:55:16 <russell_h> flaper87: yeah, I have that idea about every third day for monitoring :)
19:55:32 <russell_h> flaper87: it really doesn't work well for monitoring, because people want the monitoring for their server on the same tenant as the server itself
19:55:38 <russell_h> and they don't do that for servers for some reason
19:56:04 <russell_h> (because their server already exists, and my suggestion that they rebuild it on a different tenant doesn't go over well)
19:56:09 <russell_h> anyway, yeah, joined the other channel
19:56:11 <russell_h> thanks guys
19:56:18 <russell_h> I like the look of this so far
19:56:28 <russell_h> my heart fluttered a little when I saw you using json home ;)
19:56:32 <russell_h> in a good way
19:56:33 <flaper87> russell_h: thank you. Would love to talk more about that in the other channel
19:57:13 <kgriffs> yeah, we'll have the home doc up soon. We want to use uri templates pervasively, but we're waiting for the ecosystem around that to mature, so we'll probably do that in v2 of the api
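For context, json-home (draft-nottingham-json-home) is a machine-readable index of an API's resources. A sketch of what Marconi's home document might contain, expressed as the Python dict a WSGI app might serialize; the rel names and templates are illustrative, not the real document:

```python
# Illustrative json-home document; rel names and hrefs are assumptions.
HOME_DOC = {
    "resources": {
        "rel/queues": {
            "href": "/v1/queues",
        },
        "rel/messages": {
            # URI template plus variable mapping, per draft-nottingham-json-home
            "href-template": "/v1/queues/{queue_name}/messages",
            "href-vars": {"queue_name": "param/queue_name"},
        },
    }
}
```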
19:57:17 <kgriffs> ok
19:57:22 <kgriffs> we are just about out of time
19:57:29 <kgriffs> any last-minute items?
19:57:41 <kgriffs> oh, one quick thing
19:58:14 <kgriffs> Any objections to postponing the diagnostics (actions resource) to later this year after our first release?
19:58:43 <flaper87> not from me! I think we have other things with higher priority
19:59:25 <kgriffs> #agreed postpone diagnostics
19:59:42 <kgriffs> I really think it will be a hugely helpful feature, but we've got bigger fish to fry first. :D
20:00:06 <flaper87> I would say a bigger zombie
20:00:09 <kgriffs> ok guys, it's been cool. We'll have a sandbox up soon you can try out. Tell us what sux so we can fix it.
20:00:10 <flaper87> :P
20:00:39 <flaper87> awesome! Way to go guys! russell_h wirehead_ thanks for joining
20:00:43 <kgriffs> FYI, looks like we may be getting celery/kombu support in the near future as well
20:00:54 <kgriffs> thanks guys!
20:00:54 <wirehead_> thanks for having us, folks :)
20:00:59 <kgriffs> #endmeeting