Thursday, 2013-04-04

*** via has joined #openstack-meeting-alt01:29
*** yidclare has quit IRC01:36
*** rmohan has quit IRC01:41
*** rmohan has joined #openstack-meeting-alt01:52
*** rmohan has quit IRC02:22
*** rmohan has joined #openstack-meeting-alt02:23
*** vipul is now known as vipul|away03:15
*** esp has joined #openstack-meeting-alt03:28
*** esp has left #openstack-meeting-alt03:43
*** SergeyLukjanov has joined #openstack-meeting-alt03:59
*** cp16net_ has joined #openstack-meeting-alt04:16
*** cp16net has quit IRC04:16
*** cp16net_ is now known as cp16net04:16
*** dmitryme has joined #openstack-meeting-alt04:31
*** dmitryme has quit IRC04:37
*** dmitryme2 has joined #openstack-meeting-alt04:38
*** sacharya has joined #openstack-meeting-alt04:50
*** dmitryme2 has quit IRC04:50
*** SergeyLukjanov has quit IRC04:56
*** via_ has joined #openstack-meeting-alt05:32
*** chmouel_ has joined #openstack-meeting-alt05:34
*** via has quit IRC05:38
*** chmouel has quit IRC05:38
*** sacharya has quit IRC05:51
*** grapex has quit IRC05:54
*** yidclare has joined #openstack-meeting-alt06:04
*** vipul|away is now known as vipul06:47
*** vipul is now known as vipul|away07:21
*** SergeyLukjanov has joined #openstack-meeting-alt07:47
*** zykes- has quit IRC08:00
*** zykes- has joined #openstack-meeting-alt08:01
*** SergeyLukjanov has quit IRC08:07
*** SergeyLukjanov has joined #openstack-meeting-alt08:14
*** SergeyLukjanov has quit IRC08:54
*** SergeyLukjanov has joined #openstack-meeting-alt08:56
*** SergeyLukjanov has quit IRC09:47
*** SergeyLukjanov has joined #openstack-meeting-alt10:23
*** grapex has joined #openstack-meeting-alt11:44
*** grapex has quit IRC11:44
*** grapex has joined #openstack-meeting-alt11:44
*** grapex has quit IRC12:14
*** rnirmal has joined #openstack-meeting-alt12:26
*** rnirmal_ has joined #openstack-meeting-alt12:45
*** rnirmal_ has quit IRC12:45
*** rnirmal has quit IRC12:49
*** jcru has joined #openstack-meeting-alt12:52
*** amyt has joined #openstack-meeting-alt12:53
*** rnirmal has joined #openstack-meeting-alt13:12
*** via_ is now known as via13:26
*** amyt has quit IRC13:31
*** sacharya has joined #openstack-meeting-alt13:49
*** cloudchimp has joined #openstack-meeting-alt13:53
*** cp16net is now known as cp16net|away14:13
*** djohnstone has joined #openstack-meeting-alt14:13
*** amyt has joined #openstack-meeting-alt14:31
*** jcru is now known as jcru|away14:48
*** chmouel_ is now known as chmouel14:50
*** sacharya has quit IRC14:54
*** cp16net|away is now known as cp16net15:04
*** jcru|away is now known as jcru15:07
*** nikhil has joined #openstack-meeting-alt15:08
*** iccha_ has joined #openstack-meeting-alt15:23
*** ameade has joined #openstack-meeting-alt15:23
*** grapex has joined #openstack-meeting-alt15:24
*** jcru is now known as jcru|away15:29
*** jcru|away is now known as jcru15:32
*** vipul|away is now known as vipul15:35
*** sacharya has joined #openstack-meeting-alt15:35
*** grapex has left #openstack-meeting-alt15:36
*** HenryG has joined #openstack-meeting-alt16:01
*** amyt has quit IRC16:03
*** amyt has joined #openstack-meeting-alt16:03
*** SergeyLukjanov has quit IRC16:18
*** yidclare has quit IRC16:19
*** bdpayne has quit IRC16:46
*** bdpayne has joined #openstack-meeting-alt16:47
*** esp has joined #openstack-meeting-alt16:51
*** esp has left #openstack-meeting-alt16:51
*** EmilienM has quit IRC17:01
*** wirehead_ has joined #openstack-meeting-alt17:10
*** flaper87 has joined #openstack-meeting-alt17:17
*** nkonovalov has joined #openstack-meeting-alt17:21
*** cp16net is now known as cp16net|away17:25
*** bdpayne has quit IRC17:26
*** yidclare has joined #openstack-meeting-alt17:29
*** MarkAtwood has joined #openstack-meeting-alt17:44
*** sacharya has quit IRC17:46
*** dmitryme has joined #openstack-meeting-alt17:50
*** SergeyLukjanov has joined #openstack-meeting-alt17:50
*** aignatov3 has joined #openstack-meeting-alt17:54
*** bdpayne has joined #openstack-meeting-alt17:54
SergeyLukjanovHey everybody18:00
ogelbukho/18:00
SergeyLukjanovwe will start Savanna project meeting in five minutes18:01
*** EmilienM has joined #openstack-meeting-alt18:01
SergeyLukjanov#startmeeting savanna18:04
openstackMeeting started Thu Apr  4 18:04:27 2013 UTC.  The chair is SergeyLukjanov. Information about MeetBot at http://wiki.debian.org/MeetBot.18:04
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.18:04
*** openstack changes topic to " (Meeting topic: savanna)"18:04
openstackThe meeting name has been set to 'savanna'18:04
SergeyLukjanovOk, let's start18:04
aignatov3Hi there. Let's start with agenda18:04
aignatov31. Savanna release 0.1a1 is ready.18:05
aignatov32. Several documents were updated18:05
ogelbukhbtw, think about a page with agenda on openstack wiki18:06
aignatov33. Improvements in code: config provisioning, updated validation, several bug fixes18:06
aignatov34. We published several blueprints18:06
aignatov3on the Launchpad18:06
SergeyLukjanovogelbukh, I think it's good to have such page, but it's not bad to copy agenda to chat ;)18:06
ogelbukhsure )18:07
aignatov35. Plan for the nearest future18:07
SergeyLukjanov#info Today we have released our first alpha version of Savanna - 0.1a118:08
aignatov3And you can download it from http://tarballs.openstack.org/savanna/18:09
SergeyLukjanov#link http://tarballs.openstack.org/savanna/savanna-0.1a1.tar.gz18:09
aignatov3Its installation is now very easy because18:10
aignatov3it uses only "pip install" from tarball18:10
SergeyLukjanov#info We updated the following docs - Quickstart, HowToParticipate18:12
ogelbukhdo you plan to use setuptools-git?18:12
ogelbukh(or may be you're already using it)18:12
*** vipul is now known as vipul|away18:13
SergeyLukjanovogelbukh, it looks like obsolete dependency18:13
SergeyLukjanovogelbukh, we will check it18:13
SergeyLukjanov#info Improvements in code: config provisioning, updated validation, several bug fixes18:14
aignatov3As usual, the code has been improved: Savanna now has user config provisioning18:14
aignatov3so, a user is able to define the needed Hadoop config parameters for task tracker, data node, and name node processes18:15
aignatov3job tracker as well18:16
aignatov3also we added some improvements in validation logic18:16
aignatov3Savanna is able to define Nova's available resources18:17
aignatov3sorry, I meant check resources18:17
aignatov3not define18:17
ogelbukhwhich resources you mean?18:18
aignatov3so, Savanna will save you from creating new clusters with insufficient resources18:18
ogelbukhnumber of cores, free ram or?18:18
aignatov3ram, vcpus, instances18:19
*** yidclare has quit IRC18:19
aignatov3we will add resource checking for disks in the future18:19
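An illustration of the kind of check described here: compare what a requested cluster would consume against what the tenant still has available in Nova (RAM, VCPUs, instance count). This is a hypothetical sketch, not Savanna code; the class and field names are invented for the example.

```python
# Illustrative sketch only, not Savanna's implementation.
from dataclasses import dataclass


@dataclass
class AvailableResources:
    ram_mb: int
    vcpus: int
    instances: int


@dataclass
class ClusterRequest:
    ram_mb: int
    vcpus: int
    instances: int


def validate_cluster_request(req: ClusterRequest, avail: AvailableResources) -> None:
    """Raise ValueError if the requested cluster cannot fit into the
    tenant's remaining Nova quota (RAM, VCPUs, instance count)."""
    shortages = []
    if req.ram_mb > avail.ram_mb:
        shortages.append("ram: need %d MB, have %d MB" % (req.ram_mb, avail.ram_mb))
    if req.vcpus > avail.vcpus:
        shortages.append("vcpus: need %d, have %d" % (req.vcpus, avail.vcpus))
    if req.instances > avail.instances:
        shortages.append("instances: need %d, have %d" % (req.instances, avail.instances))
    if shortages:
        raise ValueError("insufficient resources: " + "; ".join(shortages))


if __name__ == "__main__":
    validate_cluster_request(
        ClusterRequest(ram_mb=16384, vcpus=8, instances=4),
        AvailableResources(ram_mb=32768, vcpus=16, instances=10),
    )
```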
aignatov3we have fixed several bugs as well18:20
aignatov3you can find them in the main Savanna's launchpad page18:20
SergeyLukjanov#info We published several blueprints for the feature tasks18:20
SergeyLukjanov#link https://blueprints.launchpad.net/savanna18:20
SergeyLukjanovthere are several important things that should be implemented asap18:21
*** yidclare has joined #openstack-meeting-alt18:21
SergeyLukjanovfirst of all, it's python-savannaclient :)18:21
SergeyLukjanovthe next one is to improve cluster security by using separated keypairs18:22
SergeyLukjanovfor different Hadoop clusters18:22
SergeyLukjanovadditionally, we want to start to support i18n from the first version of Savanna18:24
SergeyLukjanov#info Our plans for the nearest future18:26
SergeyLukjanov#info we are going to finish instructions how to create custom images18:26
SergeyLukjanovfor different Hadoop versions or OS version, etc.18:27
*** ruhe has joined #openstack-meeting-alt18:28
*** SlickNik has left #openstack-meeting-alt18:28
*** SlickNik has joined #openstack-meeting-alt18:28
aignatov3also we are working on creating Hadoop images on Centos distros to interop with Savanna18:28
SergeyLukjanov#info we are planning to publish Savanna packages for Ubuntu and Centos18:29
SergeyLukjanov#info custom Horizon will be published in a few days18:30
*** rnirmal has quit IRC18:32
SergeyLukjanovand the final item is18:32
SergeyLukjanov#info we are doing the final preparations to make devstack working with Savanna18:32
SergeyLukjanovI think that's all from our side18:33
aignatov3guys, if you have questions please ask us18:34
ogelbukhwhich version of Ubuntu are you targeting?18:35
*** ruhe has left #openstack-meeting-alt18:36
dmitrymeWe are using Ubuntu 12.10 cloud image for Hadoop image18:36
dmitrymeOur Savanna is deployed with OpenStack on Ubuntu 12.0418:37
SergeyLukjanovwe will build packages for Ubuntu 12.04 and Centos 6.318:38
dmitrymebasically I don't see a reason for our code not to work on other versions18:38
SergeyLukjanovadditionally, I think that we will build packages for Ubuntu 12.10 too18:38
SergeyLukjanovbut with the lower priority18:38
dmitrymeOh, and by the way we are going to discuss Savanna at the summit!18:42
dmitrymeIt will take place in Unconference room18:43
dmitrymeOn monday at 11:5018:43
*** amyt has quit IRC18:43
*** amyt has joined #openstack-meeting-alt18:43
SergeyLukjanovFolks, do you have more questions?18:44
SergeyLukjanovIf not, I think it's about time to end our meeting18:45
*** markwash has joined #openstack-meeting-alt18:45
aignatov3As always, you can mail us in savanna-all@lists.launchpad.net mailing lists and find us in #savanna irc channel18:45
SergeyLukjanov#info JFYI you can always use savanna-all@lists.launchpad.net mailing lists and #savanna irc channel to find us and ask your questions18:45
SergeyLukjanov#endmeeting18:45
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack"18:45
openstackMeeting ended Thu Apr  4 18:45:53 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)18:45
openstackMinutes:        http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-04-04-18.04.html18:45
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-04-04-18.04.txt18:45
openstackLog:            http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-04-04-18.04.log.html18:45
*** edsrzf has joined #openstack-meeting-alt18:52
*** yidclare has quit IRC18:52
*** russell_h has joined #openstack-meeting-alt18:55
*** brianr-g1ne has joined #openstack-meeting-alt18:57
*** yidclare has joined #openstack-meeting-alt18:58
*** jcru has quit IRC18:58
*** djohnstone1 has joined #openstack-meeting-alt18:59
*** vipul|away is now known as vipul18:59
*** djohnstone has quit IRC18:59
*** kgriffs has joined #openstack-meeting-alt19:00
kgriffshttps://wiki.openstack.org/wiki/Meetings/Marconi19:01
*** jcru has joined #openstack-meeting-alt19:02
kgriffsso, before we get started, just wanted to shout out to everyone who contributed to Grizzly.19:02
*** cp16net|away is now known as cp16net19:03
flaper87o/19:03
kgriffsyo19:03
flaper87just in time19:03
*** malini has joined #openstack-meeting-alt19:03
*** bryansd has joined #openstack-meeting-alt19:03
kgriffsLet's give it another minute before we start19:04
kgriffshttps://wiki.openstack.org/wiki/Meetings/Marconi19:04
*** dhellmann has quit IRC19:04
kgriffs#startmeeting marconi19:05
openstackMeeting started Thu Apr  4 19:05:14 2013 UTC.  The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.19:05
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:05
*** openstack changes topic to " (Meeting topic: marconi)"19:05
openstackThe meeting name has been set to 'marconi'19:05
*** dhellmann has joined #openstack-meeting-alt19:05
kgriffs#topic State of the project19:05
*** openstack changes topic to "State of the project (Meeting topic: marconi)"19:05
kgriffsSo, we are close to having our demo ready for Portland. At that point, Marconi will be feature-complete, but will still have a lot of error handling and optimizations to complete.19:06
kgriffsSometime in the next few days we will also have a public sandbox ready that everyone can play with.19:07
flaper87w000000t19:07
kgriffs:D19:07
kgriffsflaper87 has been making excellent progress on the mongodb storage driver, and we also have a reference driver based on sqlite.19:08
*** jdprax has joined #openstack-meeting-alt19:08
kgriffsjdprax has been coding away on the client lib19:09
kgriffsjdprax: can you comment?19:10
jdpraxWe're still coding on the client library, but our gerrit config was rejected because essentially they want us to set up pypi now or never for it.19:10
jdprax...19:10
jdpraxSo I'm leaning toward "never", and we'll just push it ourselves.19:10
flaper87jdprax: so, basically we have to release a first version before getting it into gerrit ?19:10
jdpraxThat's my understanding.19:10
jdprax:-/19:11
jdpraxBut hey, not a big deal.19:11
jdpraxHonestly we've just been swamped so I haven't followed up as closely on it as I should have.19:11
flaper87jdprax: what about pushing some dummy code there as placeholder ?19:11
flaper87I mean, on pypi19:11
jdpraxAh, pushing dummy code to pypi?19:12
flaper87jdprax: yeah, some package with version 0.0.0.0.0.0.0.0.0.019:13
flaper87.0.019:13
flaper87and .019:13
bryansd.119:13
flaper87bryansd: .1/2 ?19:13
*** dieseldoug has joined #openstack-meeting-alt19:13
flaper87jdprax: seriously, if that's the problem then I'd say, let's lock that slot on pypi with some dummy package and get the client on stackforge19:14
kgriffssounds like a plan19:14
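For reference, reserving a name on PyPI with a placeholder release needs little more than a stub setup.py. A generic sketch only; the package name and metadata below are assumptions, not what the team actually published:

```python
# setup.py -- minimal placeholder sketch for reserving a package name on PyPI.
# Name and metadata are illustrative only.
from setuptools import setup

setup(
    name="python-marconiclient",   # assumed name; use whatever slot is being reserved
    version="0.0.0",
    description="Placeholder release reserving the package name",
    author="OpenStack",
    packages=[],                   # no code yet; the real client comes later
)
```

In the 2013-era workflow the upload itself would then be a `python setup.py register sdist upload` run against PyPI (today one would build and push with twine instead).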
flaper87fucking pypi, today is not working at all19:15
kgriffsthat should be a 7xx error19:15
*** oz_akan has joined #openstack-meeting-alt19:15
jdpraxHahaha19:15
kgriffsok, moving on… :p19:15
flaper87kgriffs: I'd say 66619:16
jdprax:-)19:16
*** aignatov3 has quit IRC19:16
kgriffs#topic Finish reviewing the draft API19:16
*** openstack changes topic to "Finish reviewing the draft API (Meeting topic: marconi)"19:16
kgriffsok, so over the past couple weeks there've been a bunch of changes to the API, hopefully for the better.19:17
kgriffsso, first, is there anything in general you guys want to discuss based on the latest draft? If not, I've got a few specific areas I'd like to focus on.19:17
kgriffshttps://wiki.openstack.org/wiki/Marconi/specs/api/v119:18
*** aignatov3 has joined #openstack-meeting-alt19:19
wirehead_So, not to be too annoyingly bikesheddy, kgriffs….. (we talked in person — I'm Ken) I loves that the user-agent isn't overloaded like before, but maybe X-Client-Token instead of Client-ID?19:19
kgriffshey Ken19:19
flaper87wirehead_: I'm afraid that can be a bit confusing for users since there may be other tokes (like keystone's)19:20
wirehead_K19:20
wirehead_Maybe true.  Still, would be more HTTP-ish to keep the X- as a prefix.19:20
wirehead_I know I'm bikeshedding and I apologize for it. :)19:21
kgriffsactually, I'm not sure I'd agree19:21
*** vipul is now known as vipul|away19:21
*** cyli has joined #openstack-meeting-alt19:21
*** fsargent has joined #openstack-meeting-alt19:21
kgriffsx-headers were never supposed to be used like that.19:21
* kgriffs is looking for the RFC19:21
flaper87kgriffs <- always has an RFC for everything19:21
kgriffshttp://tools.ietf.org/html/rfc664819:22
jdpraxFor the curious http://en.wikipedia.org/wiki/Bikeshedding19:22
kgriffsI'm actually trying to figure out the process for registering new headers19:23
kgriffs(seems like Client-ID is generic enough to be useful elsewhere)19:23
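To make the header question concrete: the identifier travels as a plain, un-prefixed header on each request. A hedged sketch using python-requests; the endpoint path, queue name, and payload shape are assumptions for illustration rather than a normative Marconi example:

```python
# Illustrative only: endpoint and payload shape are assumed, not normative.
# The point is the un-prefixed Client-ID header (per the RFC 6648 guidance above).
import uuid
import requests

MARCONI = "http://localhost:8888"      # hypothetical endpoint
CLIENT_ID = str(uuid.uuid4())          # stable per-client identifier

resp = requests.post(
    MARCONI + "/v1/queues/demo/messages",
    headers={
        "Client-ID": CLIENT_ID,        # plain header, no "X-" prefix
        "X-Auth-Token": "<keystone token>",
    },
    json=[{"ttl": 300, "body": {"event": "example"}}],
)
print(resp.status_code)
```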
*** woodbon has joined #openstack-meeting-alt19:24
wirehead_well, if we want to rabbit hole, you could always silently implement it as a cookie.19:24
kgriffs(or maybe mnot and his posse will have a better suggestion…TBD)19:24
kgriffsoh boy19:24
russell_hI'm curious about authorization19:24
wirehead_note that I didn't say "we should implement it as a cookie"19:24
kgriffsI'm embarrassed to say the thought did cross my mind...19:24
kgriffsrussel-h: shoot19:25
kgriffsrussell_h19:25
russell_hkgriffs: any plans to support queue-level permissions?19:25
russell_hthe spec is a little vague about this19:25
russell_hbut if you wanted to do this, you would presumably need to track them as a property of the queue19:25
kgriffswe have thought about it, and I think it would be great to have, but that would best be implemented in auth middleware19:25
russell_hright, but would you have to tell the middleware about the permissions of each queue, or where would that information actually go?19:26
kgriffsit would be great if we could expand the Keystone wsgi middleware to support resource-level ACLS19:26
kgriffsgood question, we honestly haven't talked about it a lot19:26
wirehead_Well, also some sort of "append only user"19:27
russell_hdoes swift have anything like this?19:27
kgriffsmakes sense19:27
flaper87we haven't talked that much about it but I guess that info will live in the queue19:28
russell_h"You can implement access control for objects either for users or accounts using X-Container-Read: accountname and X-Container-Write: accountname:username, which allows any user from the accountname account to read but only allows the username user from the accountname account to write."19:28
russell_hnot a fan of that19:28
kgriffsrussell_h: what would you like to see instead?19:29
flaper87or we could also have a PErmissionsController per storage to let it manage per resource permissions19:29
flaper87actually, that sounds like a good idea to my brain19:29
kgriffsflaper87: I'm thinking we could add an _acl field to queue metadata19:30
russell_hkgriffs: I was hoping someone else had a clever idea19:30
russell_hI can't think of anything that doesn't involve describing groups of users19:30
kgriffsthen, just call out to the controller or whatever as a Falcon hook/decorator19:31
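A rough sketch of the idea being floated here, an `_acl` entry in queue metadata checked before an operation proceeds. Everything below (metadata layout, helper names) is hypothetical and not Marconi code; wiring it into a Falcon hook or middleware is deliberately left out.

```python
# Hypothetical illustration of queue-level ACLs stored in queue metadata.

class Forbidden(Exception):
    pass


def check_queue_acl(queue_metadata, project_id, user_id, action):
    """Allow the request if the queue has no _acl, or if the ACL grants
    `action` ("read"/"write") to this project or project:user."""
    acl = queue_metadata.get("_acl")
    if acl is None:
        return  # no ACL configured -> default policy applies
    allowed = acl.get(action, [])
    if project_id in allowed or "%s:%s" % (project_id, user_id) in allowed:
        return
    raise Forbidden("queue ACL denies %s for %s" % (action, user_id))


# Example metadata, again purely illustrative:
metadata = {"_acl": {"read": ["tenant-a"], "write": ["tenant-a:worker-1"]}}
check_queue_acl(metadata, "tenant-a", "worker-1", "write")  # passes
```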
wirehead_I have a repeat of my clever-but-bad-idea: Create anonymous webhooks19:31
flaper87kgriffs: we could but having security data mixed with other things worries me a bit, TBH19:31
wirehead_to push to a queue, hit a URL with a long token19:31
russell_hat any rate, I don't think the permissions issue needs to block a v1 API19:31
wirehead_naw19:31
flaper87russell_h: agreed19:32
flaper87sounds like something for v2 and / or Ith release cycle19:32
kgriffsrussell_h, wirehead_: would you mind submitting a blueprint for that?19:32
kgriffshttps://blueprints.launchpad.net/marconi19:33
russell_hsure, sounds fun19:33
*** SergeyLukjanov has quit IRC19:33
flaper87russell_h: thanks!!!!!!!!!!!!!!!!!!!!19:33
*** SergeyLukjanov has joined #openstack-meeting-alt19:34
kgriffs#action russell_h and wirehead_ to kickstart an auth blueprint19:35
kgriffs#topic API - Claiming messages19:36
*** openstack changes topic to "API - Claiming messages (Meeting topic: marconi)"19:36
kgriffshttps://wiki.openstack.org/wiki/Marconi/specs/api/v1#Claim_Messages19:36
kgriffsso, any questions/concerns about this section? We haven't had a chance to fully vet this with folks outside the core Marconi team.19:37
*** DandyPandy has left #openstack-meeting-alt19:37
kgriffsoops - just noticed that id needs to be removed from Query Claim response (just use value of Content-Location header)19:38
*** aignatov3 has quit IRC19:39
* kgriffs fixes that real quick19:39
russell_hso something I'm curious about19:40
russell_hhmm, how to phrase this19:40
*** malini has quit IRC19:41
wirehead_scud missile time, russell_h.19:41
russell_hbasically, can a message be claimed twice?19:41
russell_hthat is poorly phrased19:41
flaper87russell_h: yes if the previous claim expired19:41
flaper87not at the same time19:41
russell_hwhat if I want a message to be processed exactly once by each of 2 types of services19:42
russell_hfor example if I have a queue, and I want it to be processed both by an archival service and streaming query interface19:43
russell_hI'd basically like to be able to specify some sort of token associated with my claim19:43
russell_h"make sure no one else with token <russells-query-interface> claims this message"19:44
flaper87russell_h: right now, that's not possible because we don't have routing for either queues or claims19:44
russell_hso the eventual intention is that the message would be routed to 2 queues, and claimed there?19:44
wirehead_That seems conceptually simpler to me19:44
kgriffsyeah, seems like you could have something that pulls off the firehose and duplicates to two other queues19:45
kgriffsalternatively, if they must be done in sequence, worker 1 would post to the second queue the next job to be done by worker 219:45
wirehead_Or a submit-to-multiple-queues19:45
flaper87AFAIK, that's something AWS handles in the notification service19:45
*** oz_akan has quit IRC19:45
* flaper87 never mentioned AWS19:45
kgriffsheh19:46
wirehead_Just call it "That Seattle Queue"19:46
flaper87so, that's something we'll add not because AWS does it but because it's useful19:46
russell_hI don't like the submit-to-multiple-queues idea, I think the point of queueing is to separate the concerns of publishers and consumers19:46
wirehead_Or merely an internal tee19:47
flaper87russell_h: actually the concept behind queues is just a queue. It is the protocol itself that adds more functionality, as for amqp, it adds exchanges, queues, routing_keys and so on19:47
russell_hright, I could probably get onboard with that19:47
flaper87I don't like the idea of posting to 2 queues either19:47
flaper87so, what you mentioned is really fair19:47
russell_hyeah, I'd really like for this to be something that is up to the consumer19:47
russell_hbasically "who they are willing to share with"19:48
kgriffs#agreed leave routing up to the consumer/app19:48
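Since routing is left to the consumer, the "pull off the firehose and duplicate to two other queues" pattern mentioned above amounts to a small worker loop. A sketch under the assumption of a generic queue client; `claim_messages`, `post_message`, and `delete_message` are invented names, not a real client API:

```python
# Consumer-side fan-out sketch: claim messages from one source queue and
# re-post each to two downstream queues. The `client` object and its methods
# are hypothetical stand-ins for whatever queue client is in use.

def fan_out(client, source_queue, dest_queues, batch=10, claim_ttl=60):
    while True:
        claimed = client.claim_messages(source_queue, limit=batch, ttl=claim_ttl)
        if not claimed:
            break  # nothing left to route right now
        for msg in claimed:
            for dest in dest_queues:
                client.post_message(dest, msg["body"], ttl=msg.get("ttl", 300))
            # only drop the original once every copy has been posted
            client.delete_message(source_queue, msg["id"], claim_id=msg["claim_id"])


# e.g. fan_out(client, "firehose", ["archival", "streaming-query"])
```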
flaper87just want to add something more19:49
kgriffsThere's nothing saying we couldn't offer, as part of a public cloud, an add-on "workflow/routing-as-a-service"19:49
flaper87consider that we've added another level that other queuing systems may lack. We also have tenants which adds a higher grouping level for messages, queues and permissions19:49
kgriffsbut I like the idea of keeping Marconi lean and mean19:49
kgriffsright, and another grouping is tags which we are considering adding at some point (limiting to a sane number to avoid killing query performance)19:50
flaper87a solution might be to create more tenants and just use queues as routing spaghettis19:50
kgriffsso, the nice thing about Marconi, is queues are very light-weight, so it's no problem to create zillions of them19:51
kgriffs…as opposed to That Seattle Notifications Service™19:51
flaper87concept, consistency and simplicity. Those are some things Marconi would like to keep19:51
flaper87(Marconi told me that earlier today, during lunch)19:52
kgriffswow, he's still alive? That's one ooooold dude!19:52
flaper87kgriffs: was he dead? OMG, I wont sleep tonight19:52
flaper87gazillions > zillions19:53
* kgriffs Zombie Radio Genius Eats OpenStack Contributor's Brain While He Sleeps19:53
flaper87and that message was sent through and unknown radio signal19:54
flaper87s/and/an/19:54
flaper87moving on19:54
kgriffsso, you guys can always catch us in #openstack-marconi to discuss claims and routing and stuff further.19:54
russell_hthe problem with more tenants is that it doesn't map well to how people actually use tenants19:54
russell_hthat can be overcome19:54
*** nkonovalov has quit IRC19:54
kgriffssure.19:54
kgriffslet's keep the discussion going19:54
flaper87russell_h: agreed, that was just a crazy idea that might work for 2 or 3 types of deployments19:55
russell_hflaper87: yeah, I have that idea about every third day for monitoring :)19:55
russell_hflaper87: it really doesn't work well for monitoring, because people want the monitoring on their server on the same tenant as the server itself19:55
russell_hand they don't do that for servers for some reason19:55
russell_h(because their server already exists, and my suggestion that they rebuild it on a different tenant doesn't go over well)19:56
russell_hanyway, yeah, joined the other channel19:56
russell_hthanks guys19:56
russell_hI like the look of this so far19:56
russell_hmy heart fluttered a little when I saw you using json home ;)19:56
russell_hin a good way19:56
flaper87russell_h: thank you. Would love to talk more about that in the other channel19:56
kgriffsyeah, we will have the home doc up soon. We want to use uri templates pervasively, but are waiting for the ecosystem around that to mature, so probably do that in v2 of the api19:57
kgriffsok19:57
kgriffswe are just about out of time19:57
kgriffsany last-minute items?19:57
kgriffsoh, one quick thing19:57
kgriffsAny objections to postponing the diagnostics (actions resource) to later this year after our first release?19:58
flaper87not from me! I think we have other things with higher priority19:58
*** jbresnah has joined #openstack-meeting-alt19:59
kgriffs#agreed postpone diagnostics19:59
kgriffsI really think it will be a hugely helpful feature, but we've got bigger fish to fry first. :D19:59
flaper87I would say a bigger zombie20:00
kgriffsok guys, it's been cool. We'll have a sandbox up soon you can try out. Tell us what sux so we can fix it.20:00
flaper87:P20:00
*** bcwaldon has joined #openstack-meeting-alt20:00
flaper87awesome! Way to go guys! russell_h wirehead_ thanks for joining20:00
kgriffsFYI, looks like we may be getting celery/kombu support in the near future as well20:00
kgriffsthanks guys!20:00
wirehead_thanks for having us, folks :)20:00
kgriffs#endmeeting20:00
flaper87w0000t20:00
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack"20:01
openstackMeeting ended Thu Apr  4 20:00:59 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-04-04-19.05.html20:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-04-04-19.05.txt20:01
openstackLog:            http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-04-04-19.05.log.html20:01
markwashglance meeting folks around?20:01
flaper87o/20:01
bcwaldonhello!20:01
*** wirehead_ has left #openstack-meeting-alt20:01
bcwaldonman, its like nobody uses this project20:01
markwashdo we have jbresnah ?20:01
flaper87hahaha20:01
jbresnahi am here20:01
markwashcool20:01
markwash#startmeeting glance20:01
openstackMeeting started Thu Apr  4 20:01:59 2013 UTC.  The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.20:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:02
*** openstack changes topic to " (Meeting topic: glance)"20:02
bcwaldonany rackers around?20:02
openstackThe meeting name has been set to 'glance'20:02
markwashunfortunately I scheduled this over top of a racksburg product meeting20:02
markwashso nobody from rackspace can make it20:02
kgriffsbcwaldon: here20:02
bcwaldonok - next week20:02
markwashwell, racksburg, that is :-)20:02
kgriffs(for the moment - all hands mtg starting soon)20:02
bcwaldonkgriffs: I should have been more specific (looking for iccha , brianr, and ameade)20:02
markwashso this is our first meeting20:03
markwashexciting I know20:03
bcwaldonkgriffs: but please stick around :)20:03
kgriffsno worries. I'm just eavesdropping :D20:03
markwash#topic Glance Blueprints https://blueprints.launchpad.net/glance20:03
*** openstack changes topic to "Glance Blueprints https://blueprints.launchpad.net/glance (Meeting topic: glance)"20:03
markwashI've been working on cleaning up the blueprint list lately20:03
*** mtreinish has joined #openstack-meeting-alt20:03
markwashand I'd love to make this meeting the place where we keep up to speed with that20:04
jbresnahThat would be great IMO20:04
flaper87+120:04
markwashI've got a list of items to discuss, I'll just start going 1 by 120:04
markwashwith blueprint public-glance20:04
markwashhttps://blueprints.launchpad.net/glance/+spec/public-glance20:05
markwashfeels like we maybe already support that feature?20:05
bcwaldonhow so?20:05
bcwaldonthe use case came from canonical wanting to publish cloud images through a public glance service20:06
bcwaldona readonly service20:06
markwashisn't there anonymous auth?20:06
bcwaldonthe catch is how do you use it through nova20:06
bcwaldonand yes, there is anonymous access20:06
bcwaldonI will admit we are most of the way there, but the last mile is the catch20:07
markwashso, is the feature then that we need nova to support talking anonymously to an arbitrary glance server?20:07
bcwaldonI think so20:07
bcwaldonnow that we have readonly access to glance20:07
bcwaldonbut this BP was written before that - really it feels like a nova feature20:07
markwashso, I'd like for us to investigate to ensure that's the only step left20:07
bcwaldonOR20:07
markwashand then move this bp to nova20:08
*** kgriffs has quit IRC20:08
bcwaldonsure - the alternative would be to create a glance driver for glance that you can point to that public glance service with20:08
markwashmaybe we can touch base with smoser and see what he still wants20:08
bcwaldonbut maybe you could just use the http backend...20:08
markwashanybody wanna take that action?20:08
bcwaldonyes, thats probably good20:08
bcwaldonwe could just summon him here20:08
markwashnah, lots to go through, there is riper fruit20:09
bcwaldonok20:09
bcwaldon#action bcwaldon to ping smoser on public-glance bp20:09
markwash#action markwash investigate remaining steps on bp public-glance and touch base with smoser20:09
markwashdarnit20:09
bcwaldon!20:09
markwashteam work20:09
bcwaldonok, lets move on20:09
markwashglance-basic-quotas, transfer-rate-limiting20:09
markwashhttps://blueprints.launchpad.net/glance/+spec/glance-basic-quotas https://blueprints.launchpad.net/glance/+spec/transfer-rate-limiting20:09
markwashThese interact closely with a proposed summit session: http://summit.openstack.org/cfp/details/22520:10
flaper87Good point, quotas. I think we should get that going for havana.20:10
markwashI'd like for iccha_ to put together a blueprint for that umbrella topic20:11
markwashand, mark those bps as deps20:11
markwashand then put everything in discussion for the summit20:11
*** kgriffs has joined #openstack-meeting-alt20:11
* flaper87 wont be at the summit :(20:11
markwashah, hmm20:12
markwashflaper87: any notes for us now?20:12
jbresnahI think that the transfer rate limiting might be a special case20:12
jbresnahit needs much lower latency than connection throttling20:12
markwashhmm, interesting, how do you mean?20:13
jbresnahie: it cannot call out to a separate service to determine the current rate20:13
flaper87markwash: yep. I've put some thoughts there and I think it would be good to get the quota code used for nova (same code for cinder) and make it generic enough to live in oslo and use that as a base for the implementation20:13
jbresnahwell... say a tenant has a limit of 1Gb/s total20:13
markwashjbresnah: ah I see20:13
markwashjbresnah: one thought I had is that most people run glance api workers in a cluster20:13
jbresnahso the limit can be taken up front from a 3rd party service20:13
markwashso we'd need a way to share usages across workers20:13
jbresnahbut enforcement has to be done locally20:14
jbresnahif that makes sense20:14
jbresnahthat too20:14
jbresnahtho that is harder20:14
jbresnahso, in the case i was pointing out20:14
jbresnahif they have 1 gb/s20:14
jbresnahand they do 1 transfer and it is going at 900mb/s20:14
jbresnahthey have 100 left20:14
jbresnahbut if the first drops down to 500...20:14
jbresnahthen they have 50020:14
*** jcru is now known as jcru|away20:14
jbresnahthat is all nice, but how do you efficiently do it20:14
markwashsounds complicated20:15
flaper87markwash: nova's implementation uses either a local cache or a remote cache (say memcached)20:15
jbresnahyou cannot reasonably make calls to a third party service every time you wish to send a buffer20:15
jbresnahif you do, all transfers will be quite slow20:15
jbresnaha solution to that problem would be complicated20:15
jbresnahi propose that the limit be set at the begining of the transfer20:15
markwashmy take is, it sounds complicated and non-glancy, but if it were solved it would be useful for lots of openstack projects20:15
jbresnahso the bandwidth is 'checked out' from the quota service20:15
jbresnahenforced locally20:16
*** cp16net is now known as cp16net|away20:16
jbresnahand then checked back in when done20:16
jbresnahthat approach is pretty simple i think20:16
jbresnahand in the short term, it can just be a global conf setting for all users20:16
jbresnahso an admin can say 'no user may transfer faster than 500mb/s', or some such20:16
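The "check the limit out up front, enforce it locally" idea reduces to pacing the chunks of a single transfer against a bytes-per-second cap. A minimal sketch, assuming the cap has already been obtained from whatever service or configuration owns quotas; this is not Glance code, and it does not address the harder problem of sharing usage across api workers raised above.

```python
# Local enforcement sketch for a bandwidth cap "checked out" at the start of
# a transfer. The cap is in bytes per second.
import time


def throttled_chunks(chunks, max_bytes_per_sec):
    """Yield chunks, sleeping as needed so the running rate stays at or
    below max_bytes_per_sec."""
    start = time.monotonic()
    sent = 0
    for chunk in chunks:
        yield chunk
        sent += len(chunk)
        elapsed = time.monotonic() - start
        expected = sent / float(max_bytes_per_sec)  # time this many bytes should take
        if expected > elapsed:
            time.sleep(expected - elapsed)


# e.g. stream an image at ~500 MB/s max:
# for chunk in throttled_chunks(image_iter, 500 * 1024 * 1024):
#     connection.write(chunk)
```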
markwashin any case, its something that folks have seen as relevant to making glance a top-level service20:17
markwashand we only have 5 slots, so I'd like to frame transfer-rate-limiting as part of that discussion20:17
markwashso does that framing sound acceptable?20:17
jbresnahi am parsing the meaning of:  making glance a top-level service20:18
markwash:-)20:18
jbresnahbut i am ok with making it a subtoipic for sure.20:18
markwashRackspace currently doesn't expose glance directly, only through nova20:18
jbresnahah20:19
markwashthere are some features they want in order to expose it directly20:19
* markwash stands in for rackspace since they aren't here atm20:19
jbresnahcool, that makes sense20:19
flaper87makes sense20:19
markwash#agreed discuss quotas and rate-limiting as part of making glance a public service at the design summit20:19
markwashWe have a number of blueprints related to image caching20:20
markwashglance-cache-path, glance-cache-service, refactoring-move-caching-out-of-middleware20:20
markwashI have some questions about these20:20
*** MarkAtwood has quit IRC20:21
*** dmitryme has quit IRC20:21
markwash1) re glance-cache-service, do we need another service? or do we want to expose cache management just at another endpoint in the glance-api process20:21
markwashor do we want to expose cache management in terms of locations, as with glance-cache-path?20:21
jbresnahi vote for the latter20:21
markwashflaper87: thoughts?20:22
flaper87I'm not sure. If it'll listen in another port I would prefer to keep them isolated and as such more "controllable" by the user20:22
jbresnahi made some comments on that somewhere but i cannot find them...20:22
flaper87what If I would like to stop the cache service?20:22
flaper87I bet most deployments have HA on top of N glance-api's20:23
bcwaldondo you think those using the cache depend on the remote management aspect?20:23
flaper87so, stopping 1 at a time won't be an issue but, it doesn't feel right to run 2 services under the same process / name20:23
markwashI could see it going either way. . I like the management flexibility of having a separate process, but I think it could cause problems to add more moving parts and more network latency20:24
markwashbcwaldon: I'm not sure actually20:24
jbresnahhow is the multiple locations feature supposed to work?20:24
jbresnahto me a cached image is just another location20:24
markwashbcwaldon: is there a way to manage the cache without using the v1 api? i.e. does glance-cache-manage talk directly to the local cache?20:24
jbresnahit would be nice if when cached it could be registered via the same mechanism as any other location20:25
jbresnahand when cleaned up, removed the same way20:25
jbresnahthe cache management aspect would then be outside of this scope20:25
flaper87would it? I mean, I don't see that much latency, TBH. I'm thinking about clients pointing directly to the cache holding the cached image20:25
flaper87or something like that20:25
bcwaldonmarkwash: cache management always talks to the cache local to each glance-api node, and it is only accessible using the /v1 namespace20:25
markwashbcwaldon: so even if the api is down, I can manage the cache, right?20:26
bcwaldonwhat does that mean?20:26
markwashlike, I can ssh to the box and run shell commands to manage the cache. . .20:26
bcwaldonno - it uses the public API20:26
markwashgotcha, okay20:27
*** dmitryme has joined #openstack-meeting-alt20:27
bcwaldon...someone check me on that20:27
flaper87it uses the public api20:27
flaper87AFAIK!20:27
markwashSo, its probably easy to move from a separate port on the glance-api process, to a separate process20:27
flaper87yeah, the registry client20:27
flaper87i guess, or something like that20:27
bcwaldonits not even a separate port, markwash20:28
markwashright, I'm proposing that it would be exposed on a separate port20:28
bcwaldonah - current vs proposed20:28
bcwaldonom20:28
bcwaldonok20:28
markwashits quite a change to, from the api's perspective, treat the cache as a nearby service, rather than a local file resource20:29
flaper87I think it would be cleaner to have a separate project! Easier to debug, easier to maintain and easier to distribute20:29
flaper87s/project/service20:29
flaper87sorry20:29
bcwaldonyou scared me20:29
flaper87hahahaha20:29
markwash:-)20:29
bcwaldonI agree20:29
jbresnahI do not yet understand the need for a separate interface.20:29
jbresnahwhy not register it as another location20:30
flaper87we already have a separate file glance-cache.conf20:30
jbresnahand use the interfaces in place for multiple locations?20:30
flaper87jbresnah: It would be treated like that, (as I imagine it)20:30
markwashjbresnah: I think you're right. . its just, to me, there is no place for cache management in image api v2 unless its a special case of locations20:30
jbresnahflaper87: can you explain that a bit?20:30
flaper87yep20:31
flaper87so, I imagine that service like this:20:31
jbresnahmarkwash: i would think that when a user went to download an image, the service could check all registered locations, if there is a local one, it could send that.  if not it could cache that and then add that location to its list20:32
flaper871) it caches a specific image 2) When a request gets to glance-api it checks if that image is cached in some of the cache services. 3) if it is then it points the client to that server for downloading the iamge20:32
flaper87iamge20:32
jbresnahany outside admin calls could be done around existing API and access to that store (the filesystem)20:32
flaper87image20:32
flaper87that's one scenario20:32
flaper87what I wanted to say is that it would be, somehow, another location for an image20:33
jbresnahflaper87: of course.  but in that case, why not give the client a list of locations and let it pick what it can handle?20:33
jbresnahflaper87: swift:// file:/// http:// etc20:33
flaper87jbresnah: sure but the client will have to query glance-api anyway in order to obtain that info20:33
jbresnahthe special case aspect + another process makes me concerned that this is an unneeded complication20:33
jbresnahbut i can back down20:34
markwashflaper87: I'm not sure in that scenario how we populate an image to the cache. . right now you can do it manually, but you can also use it as an MRU cache that autopopulates as we stream data through the server20:34
flaper87so, the client doesn't know if it's cached or not20:34
jbresnahflaper87: ?.  in that case i do not understand your original scenerio20:34
flaper87mmh, sorry. So, The cache service would serve cached images, right ?20:35
markwashwe might need to postpone this discussion for a while20:35
jbresnahmy last point is this: to me this part of glance is a registry replica service.  it ideally should be able to handle transient/short term replica registrations without it being a special case20:35
jbresnahand it seems that it is close to that20:35
*** vipul|away is now known as vipul20:36
jbresnahbut i do not want to derail work at hand either20:36
markwashI don't know that we have enough consensus at this point to really move forward on this front20:36
jbresnahcode in the hand is worth 2 in the ...bush?20:36
markwashtrue, so I think if folks have patches they want to submit for directional review that would be great20:37
jbresnahcool20:37
markwashbut I'm not comfortable enough to just say "lets +2 the first solution that passes pep8" either :-)20:37
flaper87hahaha20:37
* markwash is struggling for an action item out of this cache stuff20:38
*** vipul is now known as vipul|away20:38
*** vipul|away is now known as vipul20:38
bcwaldonmarkwash: let's get a better overview of the options - I think I can weigh more effectively if I see that20:38
jbresnahi could better document up my thoughts and submit them for review?20:39
bcwaldonand we can have a more directed discussion20:39
flaper87I'd say we should discuss this a bit further. I mean, I'd agree either with a separate service or with everything embedded in glance-api somehow. What I don't like that much is for this service to listen in another port within glance-api process20:39
*** SergeyLukjanov has quit IRC20:39
jbresnahinformally submit i mean, like email them20:39
markwashflaper87: okay, good to know20:39
markwashjbresnah: sounds good20:39
flaper87jbresnah: we could create a pad and review both scenarios together20:39
flaper87and see which one makes more sense20:40
*** SergeyLukjanov has joined #openstack-meeting-alt20:40
flaper87and then review that either in the summit or the next meeting20:40
markwash#action jbresnah, flaper87 to offer more detail proposals for futuer cache management20:40
markwashtypos :-(20:40
markwashone more big issue to deal with that I know of20:40
markwashiscsi-backend-store, glance-cinder-driver, image-transfer-service20:40
bcwaldonI really don't want to open that can of worms on IRC - this is a big discussion to have at the summit20:41
markwashall of these feel oriented towards leveraging more efficient image transfers20:41
markwashmy proposal is more limited20:41
markwashI would like to roll these together for the summit, in as much as they are all oriented towards bandwidth efficienty20:41
jbresnahin that case, i'll keep my worms in the can until the summit ;-)20:41
markwashs/efficienty/efficiency/20:41
*** dmitryme has quit IRC20:42
lifelessoooh bandwidth efficient transfers.20:42
* lifeless has toys in that department20:42
flaper87jbresnah: I'll email mine so you can throw them all together20:42
markwashI also think the goals of the image-transfer service border on some of the goals of exposing image locations directly20:43
jbresnahmarkwash: i think they may expand into areas beyond BW efficiency, but i am good with putting them all into a topic limited to that20:43
jbresnahmarkwash: i agree20:43
markwashcool20:43
markwashI'll double check with john griffith and zhi yan liu about what their goals are exactly20:44
markwashbecause it is still possible the conversations are divergent20:44
markwashdoes anybody have any other blueprints they would like to discuss?20:44
markwashI have a few more items, but lower importance and less risk of worm-cans20:45
jbresnahI have one, but I feel like i am dominating too much of this time already so it can wait20:46
bcwaldonopen 'em up20:46
markwashjbresnah: go for it, not a ton to discuss besides blueprints. . we've been hitting summit sessions on the way I think20:46
*** sdague has joined #openstack-meeting-alt20:47
jbresnahdirect-url-meta-data20:48
jbresnahit actually has more to do with multiple-locations i think20:48
jbresnahand cross cuts a little to the caching thing...20:48
markwashyeah, I think so too20:48
bcwaldonman, someone should finish that multiple-locations bp20:48
markwashthough probably we need to update the image locations spec before we could mark it as superseded20:48
bcwaldon#action bcwaldon to expose multiple image locations in v2 API20:49
flaper87hahahaha20:49
jbresnahbasically the thought is that if you are exposing information from a driver you may also need to expose more information than a url for it to be useful20:49
jbresnah(i <3 the multiple image locations BP)20:49
markwashagreed20:49
jbresnahfor example, a file url20:49
markwashI think locations should become json objects, rather than strings20:49
jbresnahthat is basically useless20:49
jbresnahyeah20:49
bcwaldonyep - that's the plan20:50
jbresnahwith a definition defined by the url scheme20:50
jbresnahok excellent20:50
flaper87+120:50
bcwaldonwe'll establish the API then we can figure out how to internally add that metadata and bubble it out20:50
jbresnahthen i suppose that blueprint can go away, or be part of the multiple-locations bp20:50
bcwaldonno, it should stay - just make it dependent on multiple-locations20:50
jbresnahcool20:51
markwashhmm, I'd rather mark it as superseded and add some more detail to the multi-locations bp20:51
markwashjust to reduce the number of bps20:51
flaper87markwash: agreed20:51
markwashbcwaldon: would you be okay with me doing that?20:51
bcwaldonmarkwash: that's a silly thing to strive for20:51
* markwash strives for many silly things20:51
bcwaldonlet's make one blueprint called 'features'20:51
jbresnahheh20:51
flaper87hahaha20:52
markwash:-)20:52
bcwaldonwe can chat about it - it's a small detail20:52
bcwaldonit's a logically different thing to me20:52
markwashyeah, in this case, its just that I sort of want object-like urls to appear in the api fully formed like athena from zeus' head20:52
bcwaldonand I want to be able to call multiple locations done once we are exposing multiple locations20:52
flaper87bcwaldon: but isn't direct-url-meta-data covered by what will be exposed in m-image-l ?20:52
markwashand not have two api revisions for the whole ordeal, just one20:53
bcwaldonfrom my point of view, multiple-image-locations has value completely disregarding backend metadata20:53
bcwaldonso we're defining a very specific feature20:53
markwashI see20:53
flaper87ok, sounds good20:53
markwashI think that's true. . lets still make sure the multi-locations bp has the details we need to be forward compatible with useful metadata20:53
flaper87should we define an action to keep markwash hands off of those blueprints ?20:53
bcwaldononly if he actions himself20:54
bcwaldonand yes, markwash, we definitely should20:54
markwashand then we can rally around some usecases to motivate the metadata aspect20:54
markwash#action markwash keep messing with blueprints20:54
bcwaldonyeah - I'm interested in the other usecases of that metadata20:54
bcwaldonjbresnah: any easy examples in mind?20:54
jbresnahthe file url20:54
bcwaldonwell - one could argue that you shouldn't use that store20:55
jbresnahso in nova there is a feature that will do a system cp if the direct_url to an image is a file url20:55
markwashthis could be useful for glance and nova compute workers that share a filesystem like gluster20:55
bcwaldon...oh20:55
bcwaldonwell I didnt think that through20:55
jbresnahbut this is sort of a useless feature20:55
bcwaldonI get the distributed FS use case20:56
jbresnahbecause you have to assume that nova-compute and glance mount the same fs in the  same way20:56
bcwaldontrue20:56
jbresnahso info like NFS exported host, or some generic namespace token would be good20:56
markwashjbresnah: that does make me wonder if we need some new fs driver that is a "shared fs" driver20:56
jbresnahso that the services could be preconfigured with some meaning20:56
jbresnahmarkwash: maybe, i haven't really thought about how that would help yet20:56
markwashor maybe its just optional metadata that goes straight from local configuration of the fs store to the metadata location20:56
*** yidclare has quit IRC20:57
markwashs/metadata location/location metadata/20:57
flaper87markwash: I would say the later20:57
flaper87sounds like something up to the way the whole thing is configured20:57
markwashjbresnah: were you thinking along the lines of the latter as well?20:57
flaper87and not the implementation itself20:57
jbresnahnod20:57
markwashokay cool, I like that20:57
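To make "locations as JSON objects" concrete, a location entry could carry scheme-specific metadata supplied by the store's own configuration; for the shared-filesystem case discussed above it might look roughly like the Python literal below. Field names are illustrative, not an agreed Glance schema.

```python
# Illustrative shape only. The idea is that a bare "file://" URL is useless to
# nova-compute unless metadata says which shared filesystem it refers to.
location = {
    "url": "file:///var/lib/glance/images/5ae2c1f0",
    "metadata": {
        "share_id": "gluster-images",        # token both services are configured with
        "mountpoint": "/var/lib/glance/images",
    },
}
```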
markwashso that's a great use case, we should find some more b/c I think they may be out there20:58
markwashwe're about out of time, any last minute items?20:58
flaper87yep20:58
bcwaldononly to say thank you for organizing this, markwash20:58
flaper87what about doing bug squashing days from time to time ?20:58
markwashflaper87: could be great20:58
flaper87markwash: indeed, thanks! It's really useful to have these meetings20:58
bcwaldonmy only reservation would be our low volume of bugs20:58
bcwaldonrelative to larger projects that have BS days20:59
markwashDo we want to have another meeting next week before the summit?20:59
flaper87yep, that's why I was thinking of it like something we do 1 per month20:59
flaper87or something like that20:59
jbresnahyeah this was great, thanks!20:59
bcwaldonok20:59
bcwaldonmarkwash: yes please20:59
flaper87markwash: +120:59
markwash#action markwash to look at 1/month bugsquash days20:59
*** jcru|away is now known as jcru20:59
markwash#action markwash to schedule an extra glance meeting before the summit21:00
markwashthanks guys, we're out of time21:00
markwash#endmeeting21:00
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack"21:00
openstackMeeting ended Thu Apr  4 21:00:15 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-04-04-20.01.html21:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-04-04-20.01.txt21:00
openstackLog:            http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-04-04-20.01.log.html21:00
flaper87\o/ bye guys!21:00
bcwaldonseeya21:00
jbresnahwave21:01
*** jbresnah has left #openstack-meeting-alt21:01
*** bryansd has left #openstack-meeting-alt21:01
*** flaper87 has left #openstack-meeting-alt21:01
*** djohnstone has joined #openstack-meeting-alt21:01
*** yidclare has joined #openstack-meeting-alt21:02
*** djohnstone1 has quit IRC21:05
*** amyt has quit IRC21:09
*** amyt has joined #openstack-meeting-alt21:09
*** cp16net|away is now known as cp16net21:10
*** cloudchimp has quit IRC21:14
*** MarkAtwood has joined #openstack-meeting-alt21:17
*** rmohan has quit IRC21:23
*** rmohan has joined #openstack-meeting-alt21:25
*** yidclare has quit IRC21:29
*** yidclare has joined #openstack-meeting-alt21:31
*** mtreinish has quit IRC21:34
*** dieseldoug has quit IRC21:41
*** malini has joined #openstack-meeting-alt21:49
*** amyt_ has joined #openstack-meeting-alt21:52
*** amyt_ has quit IRC21:52
*** amyt_ has joined #openstack-meeting-alt21:53
*** amyt has quit IRC21:53
*** amyt_ is now known as amyt21:53
*** jdprax has quit IRC21:55
*** malini has left #openstack-meeting-alt21:57
*** djohnstone has quit IRC22:00
*** sacharya has joined #openstack-meeting-alt22:01
*** yidclare has quit IRC22:07
*** yidclare has joined #openstack-meeting-alt22:09
*** amyt_ has joined #openstack-meeting-alt22:24
*** amyt has quit IRC22:24
*** amyt_ is now known as amyt22:24
*** ogelbukh has quit IRC22:25
*** amyt has quit IRC22:32
*** amyt has joined #openstack-meeting-alt22:32
*** kgriffs has quit IRC22:40
*** woodbon has quit IRC22:48
*** sdake_ has quit IRC23:03
*** sdake_ has joined #openstack-meeting-alt23:03
*** jcru has quit IRC23:07
*** amyt has quit IRC23:30
*** markwash has quit IRC23:47
*** rmohan has quit IRC23:50
*** rmohan has joined #openstack-meeting-alt23:51
*** MarkAtwood has quit IRC23:53
*** HenryG_ has joined #openstack-meeting-alt23:53
*** HenryG has quit IRC23:56
*** rmohan has quit IRC23:57
*** rmohan has joined #openstack-meeting-alt23:57
