Thursday, 2013-01-24

*** rnirmal has quit IRC00:15
*** vipul is now known as vipul|away00:20
*** robertmyers has joined #openstack-meeting-alt00:25
*** vipul|away is now known as vipul00:28
*** juice has quit IRC00:38
*** kaganos has quit IRC00:38
*** juice has joined #openstack-meeting-alt00:54
*** esp has joined #openstack-meeting-alt01:00
*** esp has left #openstack-meeting-alt01:07
*** bdpayne has quit IRC01:28
*** amyt has quit IRC01:47
*** kaganos has joined #openstack-meeting-alt02:54
*** jog0 has joined #openstack-meeting-alt02:58
*** grapex has joined #openstack-meeting-alt03:03
*** jog0 has quit IRC03:36
*** kaganos has quit IRC04:09
*** robertmy_ has joined #openstack-meeting-alt04:18
*** robertmyers has quit IRC04:20
*** juice has quit IRC04:29
*** amyt has joined #openstack-meeting-alt04:32
*** grapex has quit IRC04:55
*** juice has joined #openstack-meeting-alt05:01
*** kaganos has joined #openstack-meeting-alt05:36
*** kaganos has quit IRC05:56
*** juice has quit IRC06:35
*** juice has joined #openstack-meeting-alt06:38
*** juice has quit IRC07:29
*** juice has joined #openstack-meeting-alt07:35
*** juice has quit IRC07:57
*** juice has joined #openstack-meeting-alt07:58
*** juice has quit IRC08:36
*** amyt has quit IRC13:38
*** robertmy_ has quit IRC13:51
*** jog0 has joined #openstack-meeting-alt14:33
*** grapex has joined #openstack-meeting-alt14:34
*** robertmyers has joined #openstack-meeting-alt14:48
*** amyt has joined #openstack-meeting-alt14:54
*** rnirmal has joined #openstack-meeting-alt14:56
*** djohnstone has joined #openstack-meeting-alt15:06
*** jog0 has left #openstack-meeting-alt15:08
*** jcru has joined #openstack-meeting-alt15:09
*** cp16net is now known as cp16net|away15:19
*** juice has joined #openstack-meeting-alt15:21
*** jcru is now known as jcru|away15:26
*** juice has quit IRC15:33
*** esp1 has joined #openstack-meeting-alt15:59
*** esp1 has left #openstack-meeting-alt15:59
*** juice has joined #openstack-meeting-alt16:04
*** cp16net|away is now known as cp16net16:16
*** jcru|away is now known as jcru16:16
*** juice has quit IRC16:19
*** kmansel has quit IRC17:05
*** kaganos has joined #openstack-meeting-alt17:06
*** juice has joined #openstack-meeting-alt17:20
*** bdpayne has joined #openstack-meeting-alt17:28
*** esp1 has joined #openstack-meeting-alt17:52
*** esp1 has left #openstack-meeting-alt17:52
*** bdpayne has quit IRC18:29
*** bdpayne has joined #openstack-meeting-alt18:30
*** edsrzf has joined #openstack-meeting-alt18:50
*** kgriffs has joined #openstack-meeting-alt18:52
*** treeder has joined #openstack-meeting-alt18:57
*** chad has joined #openstack-meeting-alt18:58
*** chad is now known as carimura18:58
*** carimura has quit IRC18:59
*** chad has joined #openstack-meeting-alt18:59
*** chad is now known as carimura19:00
*** kgriffs1 has joined #openstack-meeting-alt19:04
kgriffs1#startmeeting19:04
openstackkgriffs1: Error: A meeting name is required, e.g., '#startmeeting Marketing Committee'19:04
kgriffs1sorry guys. My IRC client is being lame.19:05
kgriffs1#startmeeting Marconi19:05
openstackMeeting started Thu Jan 24 19:05:11 2013 UTC.  The chair is kgriffs1. Information about MeetBot at http://wiki.debian.org/MeetBot.19:05
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:05
*** openstack changes topic to " (Meeting topic: Marconi)"19:05
openstackThe meeting name has been set to 'marconi'19:05
kgriffs1#topic Review last week's actions19:05
*** openstack changes topic to "Review last week's actions (Meeting topic: Marconi)"19:05
*** robertmyers has left #openstack-meeting-alt19:05
kgriffs1#info kgriffs to kick the tires on Pecan19:05
kgriffs1So, I did play around a bit with Pecan.19:06
*** kgriffs has quit IRC19:06
kgriffs1It has a lot of cool stuff, but is a bit slow for my tastes.19:06
kgriffs1We are doing a hack day today in my office and some people are playing with various web frameworks, including Falcon.19:07
kgriffs1I'm hoping to get some good feedback from everyone to see what they like and don't like about each framework, to feed into the discussion around what we should use for Marconi.19:08
treederwith a tagline like that, i'm surprised. ;)19:08
kgriffs1At this point, Falcon is still a bit of a science project, so the jury's still out.19:09
kgriffs1heh19:09
treederreferring to Pecan's tagline19:09
kgriffs1exactly.19:10
kgriffs1I did a fairly simple benchmark where I set a body, read a single query parameter, and also have a param as part of the URI path. Pecan and Flask were about the same speed, while Bottle was several times faster per request, and Falcon was about twice as fast as Bottle.19:11
kgriffs1Granted, Falcon does less than the other frameworks, but at least you don't have to pay for what you don't use. Or something like that.19:11
kgriffs1We'll have to see how things go with more realistic tests.19:11
kgriffs1Anyway, that's something simmering on the back burner while we figure out the API.19:12
treederCan you post a link to Falcon?19:12
treederand what was yours again?19:12
kgriffs1http://github.com/racker/falcon19:12
kgriffs1BTW, the benchmark was hitting the WSGI callable directly, not going through any webserver.19:13
kgriffs1We're talking about a performance range of 2 thousand req/sec vs. 30 thousand for Falcon on my MBP.19:14
kgriffs1Kinda crazy. I just don't want the framework to be the bottleneck in terms of latency.19:15
treederya, that's pretty significant19:15
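For context, a rough sketch of the sort of direct-to-WSGI micro-benchmark described above: no web server in the loop, just the WSGI callable and a hand-built environ. It assumes the early (circa 2013) Falcon responder API; the route, resource, and numbers are purely illustrative.

```python
import io
import sys
import time

import falcon


class MessagesResource(object):
    """Reads one query param and one URI-template field, sets a body."""

    def on_get(self, req, resp, queue):
        limit = req.get_param('limit') or '10'
        resp.body = '{"queue": "%s", "limit": %s}' % (queue, limit)
        resp.status = falcon.HTTP_200


app = falcon.API()
app.add_route('/v1/{queue}/messages', MessagesResource())


def bench(wsgi_app, iterations=10000):
    """Time calls made directly against the WSGI callable."""

    def start_response(status, headers, exc_info=None):
        pass

    environ = {
        'REQUEST_METHOD': 'GET',
        'PATH_INFO': '/v1/demo/messages',
        'QUERY_STRING': 'limit=5',
        'SERVER_NAME': 'localhost',
        'SERVER_PORT': '80',
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'http',
        'wsgi.input': io.BytesIO(b''),
        'wsgi.errors': sys.stderr,
        'wsgi.multithread': False,
        'wsgi.multiprocess': False,
        'wsgi.run_once': False,
    }

    start = time.time()
    for _ in range(iterations):
        list(wsgi_app(dict(environ), start_response))
    elapsed = time.time() - start
    print('%d requests in %.2fs (%.0f req/sec)'
          % (iterations, elapsed, iterations / elapsed))


if __name__ == '__main__':
    bench(app)
```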
kgriffs1#topic Review the g1 milestone19:17
*** openstack changes topic to "Review the g1 milestone (Meeting topic: Marconi)"19:17
kgriffs1So, I just wanted to take a minute and review what our first milestone looks like. It has "g" in the name for "Grizzly" - I'm hoping to get at least a few of these done before the next summit.19:18
kgriffs1https://launchpad.net/marconi/+milestone/grizzly-119:18
*** SlickNik has left #openstack-meeting-alt19:18
kgriffs1#info Set up Marconi wiki, repo, Launchpad, etc.19:18
kgriffs1That's pretty much all done at this point. The Wiki could use some refactoring, but it's there.19:19
kgriffs1#info Define v1.0 functional and non-functional requirements19:19
kgriffs1I've got a rough draft of this that is mostly based on the meeting we had last summit to kick off the project.19:20
kgriffs1http://wiki.openstack.org/marconi/specs/grizzly19:20
kgriffs1In a future meeting we should refine it, but it's probably best to get some code out there rather than spend too much time writing specs.19:20
*** cp16net is now known as cp16net|away19:21
treeder+119:21
treeder1) define MVP API 2) code19:21
kgriffs1#info Define v1.0 API19:21
kgriffs1+119:21
kgriffs1So, the API draft is out there but has a bunch of TBD on it that I'm hoping we can work through pretty quickly. We started last week, and will keep going today.19:23
kgriffs1#info Create a skeleton implementation and test suite19:23
kgriffs1So, this will probably go into the next milestone. I'm thinking we will just close out g1 once we have the API (mostly) figured out.19:24
kgriffs1(also I've got some interviews coming up in the next couple weeks to hire some full-time devs)19:25
kgriffs1#action kgriffs1 to move coding to g219:25
kgriffs1Any questions/comments before we move on?19:26
kgriffs1#topic Decide whether to support tags or some other fanout mechanism, or just stick with N-queue approach (for v1.0 at least)19:27
*** openstack changes topic to "Decide whether to support tags or some other fanout mechanism, or just stick with N-queue approach (for v1.0 at least) (Meeting topic: Marconi)"19:27
kgriffs1So, what do you guys think is the way to go for v1.0?19:27
treederwhat's N-queue approach?19:28
edsrzfI think that's your preferred approach19:28
kgriffs1What I mean by that is to, say, use two queues rather than nested namespaces.19:29
treederahh, ya19:29
kgriffs1Also, not having tags means all filtering happens either by using multiple queues or doing it in the client firehose-style.19:29
treederN-queue approach is definitely the simplest19:29
kgriffs1+119:30
treederI also think we should have concrete queues regardless of whether tags are added on19:30
kgriffs1#agreed Have concrete queues19:30
treederSo as a base point, I think the N-queues approach is best19:30
treederfanout and tags can be added on top19:30
kgriffs1Any objections?19:31
*** rnirmal has quit IRC19:33
kgriffs1#agreed No tags or fanout for v119:34
treederbrb19:34
kgriffs1OK19:34
kgriffs1#topic Consider splitting the API into two namespaces, one for work queuing and one for eventing19:36
*** openstack changes topic to "Consider splitting the API into two namespaces, one for work queuing and one for eventing (Meeting topic: Marconi)"19:36
kgriffs1A few Pros off the top of my head:19:37
kgriffs1#info Pro: Allows for different scaling architectures, storage backends, etc.19:37
kgriffs1#info Pro: Simplifies semantics for the two major use cases19:37
kgriffs1#info Con: Remove affordances - clients won't be able to mix/match semantics (we would be effectively removing features)19:38
kgriffs1Thoughts?19:38
kgriffs1By "eventing", I mean notifications and RPC style communication19:39
kgriffs1By work queuing, I mean semantics like those provided by IronMQ and SQS.19:40
edsrzfSo are we talking about separate semantics, or separate namespaces?19:42
kgriffs1The API spec as currently written lets you do all of the above under a single namespace. The way the client interacts with a given queue determines the contract, essentially.19:43
kgriffs1http://wiki.openstack.org/marconi/specs/api/v119:43
kgriffs1So, I could have a queue with work items or metering data documents in it, and have consumers pulling those off in a way that hides those documents from other consumers (similar to the way SQS works).19:44
kgriffs1But an observer could peek at the messages on the queue if they wanted to - it would be up to the developer to ensure the observer is passive.19:45
*** vipul is now known as vipul|away19:45
*** vipul|away is now known as vipul19:45
*** vipul is now known as vipul|away19:46
kgriffs1The alternative would be to basically have two different queue types and clients aren't able to mix-and-match semantics.19:46
treederwhat's the alternative way?19:47
kgriffs1I guess you could even have two completely different services for each type.19:47
treedera consumer could take a message and it wouldn't be hidden?19:47
kgriffs1They couldn't take a message per se, but they could peek and promise to only be a passive observer.19:48
treederright, so that's essentially just a different function/endpoint19:50
*** cp16net|away is now known as cp16net19:50
treederGET vs PEEK19:50
treederpeek doesn't reserve the message19:50
kgriffs1as written, the spec would allow PEEKing at messages that are reserved AKA hidden/locked.19:51
treederwhere is that in the spec?19:51
treedercan't find it19:51
kgriffs1sorry, it's a ways down. Search for "Lock Messages"19:52
kgriffs1http://wiki.openstack.org/marconi/specs/api/v119:52
treederoh, so GET messages is like peek19:52
treederand lock is more like a get?19:53
kgriffs1yeah. I was attempting to keep state on the client as much as possible.19:53
treederin terms of SQS/beanstalk/IronMQ terms19:53
kgriffs1right. But GET isn't really supposed to change state.19:53
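To make that distinction concrete, here is a minimal client-side sketch: a passive observer GETs messages without changing any server-side state, while a worker claims them by POSTing a lock. The endpoints only loosely follow the style of the draft spec and are hypothetical.

```python
import requests

BASE = 'http://localhost:8888/v1'  # illustrative base URL


def peek(queue, limit=10):
    """Passive observer: a plain GET hides or reserves nothing."""
    r = requests.get('%s/%s/messages' % (BASE, queue),
                     params={'limit': limit})
    r.raise_for_status()
    return r.json()


def claim(queue, limit=10):
    """Worker: POST creates a lock, hiding the messages from other
    workers until the lock expires or is released."""
    r = requests.post('%s/%s/locks' % (BASE, queue),
                      params={'limit': limit})
    r.raise_for_status()
    return r.json()  # e.g. a lock ID plus the locked messages
```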
*** bdpayne has quit IRC19:53
*** juice has quit IRC19:53
kgriffs1I just had a thought. The body of the post could be a list of queues to pull from. That reduces load on the server.19:55
kgriffs1Although that means we would probably want a batch GET across multiple queues, and that could get weird.19:56
*** jcru has quit IRC19:59
kgriffs1SQS gives you back some kind of transaction ID or something for each batch of messages, right?19:59
kgriffs1(it's been a little while since I looked at it)19:59
kgriffs1(looking at docs)19:59
*** djohnstone_ has joined #openstack-meeting-alt20:00
kgriffs1oic.20:00
kgriffs1ReceiptHandle20:00
kgriffs1It's per message20:00
treederdon't think there's a transaction20:00
treederya20:00
kgriffs1What does IronMQ do?20:00
treederin terms of?20:02
treederlock vs peek?20:02
treedervery similar to SQS20:02
treederGET messages locks/reserves messages for the timeout period20:02
treederwe also have a /peek endpoint20:02
kgriffs1no, I mean, is there some kind of handle for, e.g., renewing the timeout?20:02
*** djohnstone has quit IRC20:02
treedereach message has an id20:02
kgriffs1# IronMQ has a /peek endpoint20:03
treederwe have touch/release for renewing the lock20:03
treederor releasing the lock20:03
kgriffs1oic20:03
*** djohnstone has joined #openstack-meeting-alt20:03
kgriffs1Do you think releasing a lock manually is something Marconi should have?20:03
treederprobably20:03
kgriffs1#IronMQ has touch/release for renewing the lock20:04
treederuse case is: "i got a message that I can't deal with right now so I'll put it back on the queue for someone else"20:04
kgriffs1oic20:04
treederand you can release with a delay too meaning it won't be available for a while20:04
treederjust as you can post a message with a delay20:04
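A hypothetical consumer sketch of those touch/release semantics (the client methods here are made up for illustration; they are not actual IronMQ or Marconi calls):

```python
def drain(client, queue, handle, batch_size=10, retry_delay=60):
    """Claim a batch, process each message, and put back the ones we
    can't deal with right now, released with a delay so they aren't
    retried immediately (mirroring post-with-a-delay)."""
    messages = client.claim(queue, limit=batch_size)
    for msg in messages:
        try:
            # A long-running handler would also call
            # client.touch(queue, msg['id']) periodically to renew
            # the lock so the message doesn't reappear mid-flight.
            handle(msg)
        except Exception:
            client.release(queue, msg['id'], delay=retry_delay)
        else:
            client.delete(queue, msg['id'])
```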
*** djohnstone_ has quit IRC20:05
kgriffs1So it seems like doing the locks per-message is better than as a batch?20:05
treedertouch resets the timeout20:05
treederhmmmm20:05
kgriffs1or does it matter?20:05
treederi don't know if there's any sane way to batch them?20:05
kgriffs1I guess you just assign the same lock ID to a group of messages.20:06
kgriffs1(not normalized)20:06
treederok20:06
treederso if I take 100 messages, I have to deal with them all, no matter what?20:06
treederlike I can't fail on one message20:07
treederwithout affecting the whole batch20:07
kgriffs1Well, if you fail on one and keep going, that one will eventually time out and go back into the pot20:07
treederjust thinking out loud20:07
kgriffs1or you could manually release it20:07
treederif the lock is on a batch though20:07
treederwouldn't the whole batch have to go back on the pot20:07
treeder?20:07
kgriffs1oic. You would keep renewing until you are done with the entire batch; in the meantime, another worker could have been picking up the ones you failed/skipped.20:08
treedereg:20:09
treederserver X grabs 100 messages20:09
treederserver crashes after processing 50 messages20:09
treederlock expires and all 100 messages would have to go back on?20:09
kgriffs1no, the server would presumably have deleted the 50 messages before crashing20:10
treederahh, right20:10
kgriffs1(by ID, not lock)20:10
treedergot it, so it deletes by message id?20:10
kgriffs1yeah20:10
treederok20:11
kgriffs1maybe also have to include the lock id for safety20:11
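A toy in-memory model of the crash scenario just walked through: the worker claims 100 messages under one lock, deletes each by message ID as it finishes, then dies halfway. Once the lock expires, only the 50 unfinished messages become visible again. (Illustrative only, not Marconi code.)

```python
class ToyQueue(object):
    def __init__(self, n):
        self.messages = {i: 'msg-%d' % i for i in range(n)}
        self.locked = set()

    def claim(self, limit):
        ids = sorted(self.messages)[:limit]
        self.locked.update(ids)
        return ids

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)
        self.locked.discard(msg_id)

    def expire_lock(self):
        self.locked.clear()

    def visible(self):
        return [i for i in self.messages if i not in self.locked]


q = ToyQueue(100)
claimed = q.claim(100)

for msg_id in claimed[:50]:   # worker processes half the batch...
    q.delete(msg_id)
# ...then crashes before finishing; eventually the lock times out.
q.expire_lock()

print(len(q.visible()))       # -> 50: only the unfinished messages return
```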
treederyou say tags in the spec20:11
kgriffs1yeah20:11
treederis a tag a message id too?20:11
kgriffs1no, that's a holdover from the idea to not use concrete queues20:11
kgriffs1I need to update it20:11
kgriffs1so, would be...20:11
kgriffs1POST {base_url}/messages/{queue}/locks{?limit}20:12
kgriffs1or something like that20:12
*** djohnstone has left #openstack-meeting-alt20:12
kgriffs1oops20:12
kgriffs1POST {base_url}/{queue}/locks{?tags,limit}20:13
kgriffs1anyway, you get the idea - it will take some refinement20:13
kgriffs1dang. Sorry - left tags in again. /me blushes20:13
*** esp1 has joined #openstack-meeting-alt20:13
kgriffs1So, I'm thinking that per-message receipt handles or whatever you call them are a little more flexible.20:15
treederya, I think so20:16
treedermessage id's to keep it simple. ;)20:16
kgriffs1#agreed use per-message lock/receipt IDs or just message IDs directly20:16
kgriffs1I guess that's an argument for having separate queue types or namespaces or whatever.20:17
*** esp1 has quit IRC20:18
kgriffs1On the other hand, having to provide a handle of some sort when deleting a message (in addition to its ID) does mitigate a race condition between a lock expiring and a message being deleted.20:18
treederhandle to a lock?20:19
kgriffs1Another option would be to require clients put a UUID string in the request (possibly User-Agent header), then when a client gets a batch, the messages are tagged with that UUID so only they can delete them if they are still locked.20:20
kgriffs1yeah20:20
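A rough server-side sketch of that idea: tag claimed messages with a client-supplied UUID so only the claiming client can delete them while the lock is still live, closing the expiry-vs-delete race. The field names, error type, and store interface are hypothetical.

```python
import time


class Forbidden(Exception):
    pass


def delete_message(store, queue, msg_id, client_uuid):
    msg = store.get(queue, msg_id)
    if msg is None:
        return  # already gone; deletes are idempotent

    lock_live = msg.get('locked_until', 0) > time.time()
    if lock_live and msg.get('owner') != client_uuid:
        # Someone else holds a live lock on this message: refuse,
        # rather than letting a stale client delete it out from under them.
        raise Forbidden('message is locked by another client')

    store.delete(queue, msg_id)
```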
kgriffs1#action kgriffs1 to update API spec to remove tags, add concrete queues20:21
treederneed to think about that batch lock thing20:22
kgriffs1OK20:22
kgriffs1Well, the meeting is running pretty long20:22
kgriffs1maybe we should sleep on the locking as well as the question of whether to have a unified queue type/namespace.20:22
treederya, well we're making progress so that's good.20:22
kgriffs1definitely.20:23
treederalright, I have to run to another meeting, talk to you next week.20:23
kgriffs1sounds like a plan20:23
treedermaybe we should get a marconi mailing list going or something?20:24
kgriffs1Well, the way OpenStack seems to like to do it is put everything in openstack-dev with a tag in the subject line, i.e., [marconi].20:25
kgriffs1#agreed Start some discussion threads on the mailing list20:25
kgriffs1appreciate everyone's help.20:25
kgriffs1#action everyone think about locking semantics20:26
kgriffs1#action everyone think about / discuss the implications of a unified queue type/namespace and come up with a recommendation for next mtg.20:27
kgriffs1#endmeeting20:27
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack"20:27
openstackMeeting ended Thu Jan 24 20:27:32 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:27
openstackMinutes:        http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-01-24-19.05.html20:27
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-01-24-19.05.txt20:27
openstackLog:            http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-01-24-19.05.log.html20:27
*** treeder has quit IRC20:29
*** kgriffs1 has quit IRC20:35
*** kgriffs has joined #openstack-meeting-alt20:35
*** vipul|away is now known as vipul20:35
*** carimura has quit IRC20:37
*** esp1 has joined #openstack-meeting-alt20:38
*** juice has joined #openstack-meeting-alt20:38
*** chad has joined #openstack-meeting-alt20:41
*** chad is now known as Guest1334820:41
*** carimura has joined #openstack-meeting-alt20:41
*** esp1 has left #openstack-meeting-alt20:43
*** cp16net is now known as cp16net|away20:44
*** cp16net|away is now known as cp16net20:50
*** grapex has quit IRC20:59
*** bdpayne has joined #openstack-meeting-alt21:09
*** jcru has joined #openstack-meeting-alt21:16
*** edsrzf has left #openstack-meeting-alt21:28
*** carimura has quit IRC21:34
*** chad has joined #openstack-meeting-alt21:45
*** chad is now known as Guest9282821:45
*** treeder has joined #openstack-meeting-alt21:46
*** kgriffs has quit IRC22:16
*** Guest92828 has quit IRC22:33
*** treeder has quit IRC22:33
*** kaganos has quit IRC22:45
*** jcru is now known as jcru|away22:51
*** juice has quit IRC22:55
*** juice has joined #openstack-meeting-alt22:58
*** juice has quit IRC23:01
*** juice has joined #openstack-meeting-alt23:05
*** jcru|away is now known as jcru23:08
*** jcru has quit IRC23:20
*** juice has quit IRC23:24
*** juice has joined #openstack-meeting-alt23:26
*** juice has quit IRC23:26
*** juice has joined #openstack-meeting-alt23:27
