Thursday, 2012-03-01

*** milner has quit IRC00:00
*** jakedahn has quit IRC00:01
*** milner has joined #openstack-meeting00:01
*** jastr has joined #openstack-meeting00:04
*** joearnold has quit IRC00:12
*** martine has joined #openstack-meeting00:12
*** jakedahn__ is now known as jakedahn00:15
*** jakedahn has joined #openstack-meeting00:15
*** dendrobates is now known as dendro-afk00:24
*** nyov has quit IRC00:45
*** nyov has joined #openstack-meeting00:56
*** mdomsch has joined #openstack-meeting00:58
*** jakedahn_ has joined #openstack-meeting01:00
*** nati_ueno has quit IRC01:00
*** nati_ueno has joined #openstack-meeting01:00
*** anotherjesse2 has quit IRC01:01
*** Yak-n-Yeti has joined #openstack-meeting01:03
*** patelna has quit IRC01:04
*** jakedahn has quit IRC01:04
*** _adjohn has joined #openstack-meeting01:04
*** adjohn has quit IRC01:08
*** _adjohn is now known as adjohn01:08
*** markvoelker has joined #openstack-meeting01:09
*** littleidea has quit IRC01:09
*** _adjohn has joined #openstack-meeting01:14
*** adjohn has quit IRC01:17
*** _adjohn is now known as adjohn01:17
*** bengrue has quit IRC01:19
*** randomubuntuguy has joined #openstack-meeting01:21
*** littleidea has joined #openstack-meeting01:29
*** anotherjesse1 has joined #openstack-meeting01:29
*** donaldngo_hp has quit IRC01:33
*** randomubuntuguy has quit IRC01:36
*** anotherjesse1 has quit IRC01:38
*** js42_ has joined #openstack-meeting01:39
*** js42 has quit IRC01:42
*** js42_ is now known as js4201:42
*** ravi has quit IRC01:42
*** adjohn has quit IRC01:43
*** troytoman is now known as troytoman-away01:49
*** reed has quit IRC01:55
*** nati_ueno has quit IRC02:05
*** novas0x2a|laptop has quit IRC02:08
*** markvoelker has quit IRC02:09
*** jog0 has left #openstack-meeting02:19
*** ravi has joined #openstack-meeting02:19
*** ravi_ has joined #openstack-meeting02:21
*** ravi has quit IRC02:24
*** ravi_ is now known as ravi02:24
*** jog0_ has joined #openstack-meeting02:24
*** mattray has joined #openstack-meeting02:25
*** jakedahn has joined #openstack-meeting02:29
*** mestery is now known as mestery_02:36
*** adjohn has joined #openstack-meeting02:37
*** jdurgin has quit IRC02:41
*** mestery_ is now known as mestery02:43
*** js42_ has joined #openstack-meeting02:45
*** js42 has quit IRC02:48
*** js42_ is now known as js4202:48
*** deshantm has quit IRC02:51
*** jakedahn_ has joined #openstack-meeting02:52
*** jakedahn has quit IRC02:54
*** jakedahn_ is now known as jakedahn02:54
*** mattray has quit IRC02:56
*** zigo has joined #openstack-meeting03:00
*** mdomsch has quit IRC03:14
*** ravi_ has joined #openstack-meeting03:25
*** ravi has quit IRC03:29
*** ravi_ is now known as ravi03:29
*** ravi has quit IRC03:32
*** anotherjesse1 has joined #openstack-meeting03:34
*** nati_ueno has joined #openstack-meeting03:37
*** ravi has joined #openstack-meeting03:42
*** nati_ueno has quit IRC03:56
*** nati_ueno has joined #openstack-meeting03:56
*** danwent has quit IRC04:04
*** Yak-n-Yeti has quit IRC04:56
*** bengrue has joined #openstack-meeting04:57
*** dtroyer has quit IRC05:04
*** heckj has quit IRC05:09
*** ravi has quit IRC05:15
*** adjohn has quit IRC05:18
*** ravi has joined #openstack-meeting05:23
*** ravi has quit IRC05:25
*** martine has quit IRC05:45
*** joearnold has joined #openstack-meeting05:55
*** jastr has quit IRC05:56
*** adjohn has joined #openstack-meeting06:00
*** jastr has joined #openstack-meeting06:02
*** gkotton has quit IRC06:19
*** donaldngo_hp has joined #openstack-meeting06:23
*** gkotton has joined #openstack-meeting06:36
*** ravi has joined #openstack-meeting06:39
*** ravi has joined #openstack-meeting06:40
*** joearnold has quit IRC06:40
*** ravi has quit IRC06:46
*** ravi has joined #openstack-meeting06:46
*** zigo has quit IRC06:47
*** nati_ueno has quit IRC06:54
*** ravi has quit IRC07:20
*** jastr has quit IRC07:26
*** ravi has joined #openstack-meeting07:40
*** jastr has joined #openstack-meeting07:41
*** bencherian has quit IRC07:46
*** dolphm has quit IRC07:59
*** ravi has quit IRC08:08
*** anotherjesse1 has quit IRC08:12
*** littleidea has quit IRC08:26
*** darraghb has joined #openstack-meeting09:02
*** bengrue has quit IRC09:13
*** derekh has joined #openstack-meeting09:18
*** adjohn has quit IRC09:20
*** DuncanT has left #openstack-meeting11:11
*** DuncanT has quit IRC11:11
*** jastr has quit IRC11:33
*** dayou has quit IRC11:38
*** dayou has joined #openstack-meeting11:39
*** mikal has quit IRC11:56
*** mikal has joined #openstack-meeting11:59
*** jastr has joined #openstack-meeting12:17
*** dprince has joined #openstack-meeting12:22
*** dprince has quit IRC12:31
*** markvoelker has joined #openstack-meeting12:43
*** dtroyer has joined #openstack-meeting13:03
*** dprince has joined #openstack-meeting13:24
*** mancdaz1203 has quit IRC13:31
*** dtroyer has quit IRC13:32
*** sandywalsh has quit IRC13:32
*** mancdaz1203 has joined #openstack-meeting13:35
*** mattray has joined #openstack-meeting13:44
*** sandywalsh has joined #openstack-meeting13:44
*** deshantm has joined #openstack-meeting13:48
*** martine has joined #openstack-meeting13:55
*** dhellmann has quit IRC14:03
*** dtroyer has joined #openstack-meeting14:06
*** dendro-afk is now known as dendrobates14:07
*** littleidea has joined #openstack-meeting14:07
*** joesavak has joined #openstack-meeting14:23
*** littleidea has quit IRC14:29
*** mdomsch has joined #openstack-meeting14:55
*** bencherian has joined #openstack-meeting15:07
*** mancdaz1203 has quit IRC15:10
*** mancdaz1203 has joined #openstack-meeting15:10
*** AlanClark has joined #openstack-meeting15:13
*** mancdaz1203 has quit IRC15:17
*** dendrobates is now known as dendro-afk15:19
*** dendro-afk is now known as dendrobates15:21
*** mancdaz1203 has joined #openstack-meeting15:26
*** danwent has joined #openstack-meeting15:29
*** randomubuntuguy has joined #openstack-meeting15:32
*** bencherian has quit IRC15:36
*** bencherian_ has joined #openstack-meeting15:36
*** randomubuntuguy has quit IRC15:54
*** GheRivero has joined #openstack-meeting15:54
*** dolphm has joined #openstack-meeting16:06
*** gyee has joined #openstack-meeting16:09
*** martine has quit IRC16:20
*** ryanpetrello has joined #openstack-meeting16:25
*** heckj has joined #openstack-meeting16:26
*** Ravikumar_hp has joined #openstack-meeting16:28
*** dhellmann has joined #openstack-meeting16:30
*** andrewsben has joined #openstack-meeting16:30
*** nati_ueno has joined #openstack-meeting16:33
jaypipesQA meeting in 19 minutes!16:42
*** joearnold has joined #openstack-meeting16:46
*** jsavak has joined #openstack-meeting16:46
*** hggdh has quit IRC16:48
*** joesavak has quit IRC16:49
*** hggdh has joined #openstack-meeting16:57
*** dwalleck has joined #openstack-meeting16:59
jaypipesgood morning QAers :)17:00
jaypipes#startmeeting17:00
openstackMeeting started Thu Mar  1 17:00:37 2012 UTC.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.17:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic.17:00
Ravikumar_hpgood morning Jay17:00
*** JoseSwiftQA has joined #openstack-meeting17:00
jaypipesRavikumar_hp: morning :)17:00
dwalleckmorning17:01
JoseSwiftQA:)17:01
jaypipesso... a quick status report from David Kranz and myself...17:01
jaypipes#topic status report on stable/diablo Tempest branch17:01
*** openstack changes topic to "status report on stable/diablo Tempest branch"17:01
jaypipesSo, the stable/diablo branch of Tempest was created and is now under Gerrit's control17:01
*** johngarbutt has joined #openstack-meeting17:02
donaldngo_hpnice!17:02
jaypipesDavid and I have been working on fixes that allow Tempest to run against a Diablo environment smoothly.17:02
Ravikumar_hpgreat . it helps17:02
jaypipesTwo fixes are already proposed against the branch:17:02
jaypipes#link https://review.openstack.org/#q,status:open+project:openstack/tempest,n,z17:02
jaypipesif you look at the above link, you will see the branch listed in the Gerrit output17:03
jaypipesyou can see there the two changes proposed for stable/diablo17:03
jaypipesreviews very welcome.17:03
jaypipesThe big thing to point out is the following:17:03
jaypipesa) Tempest's test cases will definitely differ between the stable/diablo branch and the development branch17:04
jaypipesb) We had previously accepted a patch to development that added a "release" variable to tests that could be used to skip certain tests that failed on a particular release17:04
jaypipesc) That skipping strategy should no longer be used. Instead, simply apply your patch to a specific branch of Tempest that is built to run against a specific release of OpenStack17:05
dwallecksounds like a much easier solution to me17:05
jaypipesAny questions on the above? It's an important distinction and a big step towards aligning with other core projects17:05
Ravikumar_hpjaypipes: question: where do we check-in common tests applicable for both ?17:05
jaypipesRavikumar_hp: excellent question! :) glad you asked!17:06
donaldngo_hpso what this means is that if a bug is found in Essex Tempest then we need to see if it needs to go into stable/Diablo17:06
jaypipesdonaldngo_hp: one sec, lemme answer ravi first... then I get to that one.17:06
jaypipesOK, so now that we're set up correctly in Gerrit, here is the process for adding a test that is common to multiple releases:17:07
jaypipes(and this is the same process for all core projects, so good to keep this in mind)17:07
*** littleidea has joined #openstack-meeting17:07
*** ravi has joined #openstack-meeting17:07
*** ravi has left #openstack-meeting17:07
jaypipes1) Always first propose a patch with your new test case to the development trunk (master)17:07
jaypipes2) Once that patch is approved, then propose the same patch against stable/diablo17:08
jaypipes2a) if any merge conflicts occur, of course, resolve those before proposing (or Gerrit will yell at you anyway :)17:08
jaypipesSo, the flow is always master -> stable branches17:08
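A throwaway local repo can illustrate the mechanics of that master-first flow. This is only a sketch: the real proposals go through git-review and Gerrit, and the file and commit names here are made up.

```shell
# Demonstrate "master first, then stable" with a disposable repo.
# In the real workflow each commit is proposed via `git review` and
# merged by Gerrit; here we only show the cherry-pick step.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email qa@example.com
git config user.name "QA Demo"

echo "base test" > tests.txt
git add tests.txt
git commit -qm "Initial tests"
git branch stable/diablo          # stable branch cut from this point

# 1) Land the new test case on the development branch first.
echo "new test case" >> tests.txt
git commit -qam "Add new test case"
fix_sha=$(git rev-parse HEAD)

# 2) Then propose the same patch against stable/diablo.
git checkout -q stable/diablo
git cherry-pick -x "$fix_sha"     # resolve any conflicts here, then re-propose
```

The `-x` flag records the original commit SHA in the backport's message, which makes it easy to trace a stable/diablo change back to its master counterpart.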
Ravikumar_hpthanks17:08
donaldngo_hpjay there is a likelihood that the patch will be for the same test but different assertions17:09
jaypipesdonaldngo_hp: precisely correct.17:09
donaldngo_hpfor example test_images.py minRam(Essex) and min_ram(Diablo)17:09
jaypipesdonaldngo_hp: correct. and that is exactly where you would change the test in the patch to stable/diablo to use min_ram instead of minRam17:10
donaldngo_hpi think what that means is 1) run your proposed patch on DevStack(Essex) and then run it on DevStack(Diablo)17:10
donaldngo_hpok got it17:11
jaypipesdonaldngo_hp: this prevents all the if/else blocks on release variables... just apply the patch to correct release branch that makes sense for the API calls that release uses..17:11
donaldngo_hpyep totally agree17:11
dwalleckIs there a way to force Devstack to create a Diablo instance? My concern is that since I'm not primarily testing against diablo, it's much easier for me to propose tests that may fail17:11
jaypipesdwalleck: another great question! :)17:11
*** ayoung-afk is now known as ayoung17:11
jaypipesdwalleck: indeed there is :)17:11
jaypipesdwalleck: let me demonstrate:17:11
jaypipeshere are the steps I do when testing diablo locally17:12
jaypipes$> rm -rf /opt/stack &&  cd $devstack_dir && git checkout gerrit/stable/diablo && ./stack.sh17:12
jaypipestakes a while (since it needs to re-pull the branches), but it's guaranteed to be a clean diablo install17:13
dwalleckexcellent! I can certainly do that17:13
*** Yak-n-Yeti has joined #openstack-meeting17:13
donaldngo_hpjaypipes: how do we shut down DevStack? I usually reboot my machine17:13
jaypipesOK, so the end goal here is, of course, to have diablo clusters tested automatically through jenkins against a diablo Tempest (and of course, same for essex)17:13
jaypipesdonaldngo_hp: sudo killall screen17:13
donaldngo_hpsweet17:14
dwalleckand now I don't feel so bad for doing the same thing :)17:14
jaypipesdonaldngo_hp: there's actually a patch coming (to master devstack) that has a restart.sh script... keep an eye out for that17:14
jaypipesdwalleck: lol, hey, a reboot also works! :)17:14
donaldngo_hpdwalleck++17:14
jaypipesbut sudo killall screen is much, much faster ;)17:14
Ravikumar_hpwe also need a shutdown.sh17:14
jaypipesRavikumar_hp: echo "sudo killall screen" > shutdown.sh ;)17:15
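Expanding that one-liner into a file, a hypothetical shutdown.sh might look like this. Assumption: devstack of this era ran its services inside a screen session conventionally named "stack"; verify the session name on your own host.

```shell
# Generate a small shutdown.sh, expanding on the one-liner above.
# The "stack" screen session name is an assumption based on devstack's
# convention of the time; check `screen -ls` on your own setup.
cat > shutdown.sh <<'EOF'
#!/bin/sh
# Stop a running devstack by quitting its screen session.
if screen -ls 2>/dev/null | grep -q '\.stack'; then
    screen -S stack -X quit
else
    sudo killall screen 2>/dev/null
fi
EOF
chmod +x shutdown.sh
```

Quitting the named session is a little more surgical than `killall screen`, which would also take down any unrelated screen sessions on the box.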
*** Gordonz has joined #openstack-meeting17:15
Ravikumar_hpok17:15
*** andrewsben has quit IRC17:16
jaypipesOK, let's change topic to discussion of individual merge proposals and bugs. anyone object?17:16
Ravikumar_hpjaypipes: we also need directories to group tests by service17:16
jaypipesRavikumar_hp: could you elaborate?17:16
Ravikumar_hplike nova , keystone, swift17:16
Ravikumar_hptempest/tempest/tests/nova ...17:17
jaypipesRavikumar_hp: talking about tempest or devstack?17:17
jaypipesgotcha...17:17
Ravikumar_hpwe are planning to add keystone tests17:17
jaypipesRavikumar_hp: well, when I added the Glance tests, I put them all under tempest/tests/images/17:17
dwalleckI agree. I think I have a merge prop with a compute test subdirectory17:17
jaypipesRavikumar_hp: so, when you add keystone tests, I would advise: tempest/tests/identity/17:18
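The layout being converged on could be sketched as follows. "images" and "identity" come from the discussion, "compute" from dwalleck's subdirectory merge prop; the `__init__.py` files are just the usual way to keep the directories importable as Python packages, and everything else is illustrative.

```shell
# Sketch of the generic, API-oriented test layout discussed above.
# Directory names beyond images/identity/compute are illustrative only.
for d in compute images identity; do
    mkdir -p "tempest/tests/$d"
    touch "tempest/tests/$d/__init__.py"   # keep each directory importable
done
touch tempest/tests/__init__.py
find tempest -name '__init__.py' | sort
```

The generic names (identity rather than keystone, images rather than glance) follow jaypipes's point below: the tests target the *API*, not a particular implementation.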
Ravikumar_hpsounds good17:18
dwalleckI just didn't want to move everything since it didn't have to do with that bug17:18
jaypipesanyone disagree with using the generic names in the tests directory?17:18
dwalleckVara is about to submit some keystone tests as well; need to check in with him17:18
jaypipesI did that to emphasize the tests are against the *API*, not the implementation...17:18
*** Gordonz has quit IRC17:18
dwalleckno, that sounds like it would be for the best17:18
jaypipesdwalleck, Ravikumar_hp: OK, please coordinate with each other on keystone tests...17:19
*** Gordonz has joined #openstack-meeting17:19
Ravikumar_hpsure . vara is ...?17:19
jaypipesdwalleck: ^^17:19
dwalleckVara is another automation lead with Rack. He was going to be here but I don't see him...17:19
donaldngo_hpi have a question about: https://bugs.launchpad.net/tempest/+bug/94309217:20
uvirtbot`Launchpad bug 943092 in tempest "test_servers_negative.py: Name error, release not defined" [Undecided,Fix committed]17:20
jaypipesdonaldngo_hp: that will be going away...17:20
jaypipesdonaldngo_hp: the release variable...17:20
donaldngo_hpthis was found by Sarad on my team and fixed17:20
donaldngo_hpquestion is how did it ever pass the Jenkins build17:20
donaldngo_hpshould have failed?17:20
jaypipesdonaldngo_hp: because the jenkins build doesn't run tempest :(17:20
donaldngo_hpwhoa17:21
donaldngo_hpwhat does it do?17:21
*** GheRivero has quit IRC17:21
jaypipesdonaldngo_hp: yeah... this is why we've been trying to get to a place where jenkins can consistently run tempest against the deployment cluster that currently runs devstack's exercises against it17:21
jaypipesdonaldngo_hp: but tempest has not been stable enough to date17:21
donaldngo_hphttps://jenkins.openstack.org/view/Tempest/17:22
jaypipesdonaldngo_hp: and since there aren't any unit tests in tempest... there's not much to run in jenkins other than a pep8 checker and a merge conflict check :(17:22
donaldngo_hpi see17:22
jaypipesdonaldngo_hp: believe me, I'm ashamed about it...17:22
donaldngo_hpwas scratching my head wondering why its been green17:22
donaldngo_hpmakes sense now17:23
jaypipesdonaldngo_hp: and the breakout of stable/diablo was a step towards being able to run Tempest in a stable manner17:23
jaypipesdonaldngo_hp: that has always been our end-goal... to replace devstack's exercises with tempest.17:23
jaypipesdonaldngo_hp: and we're much closer today than we were a month ago at least :)17:23
jaypipesbut, still lots to do!17:23
donaldngo_hpawesome17:24
jaypipesOK, shall we go over the merge proposals individually?17:24
jaypipes#topic merge proposal status17:24
*** openstack changes topic to "merge proposal status"17:24
jaypipes#link https://review.openstack.org/#q,status:open+project:openstack/tempest,n,z17:24
jaypipesLet's go bottom to top...17:24
jaypipesthe stress tests are going to be reviewed today by me. dwalleck and others, would be great to get a review from you!17:25
dwalleckjaypipes: I'd like to. I just need to look closely to understand how to run them17:25
jaypipesRavikumar_hp: it would be great to get those three Volume patches in. Could you focus on reviewing those? And ping Sapan about the comments on his review?17:26
Ravikumar_hpwe have volume & volume attachement - should we combine and make it as one?17:26
*** DuncanT has joined #openstack-meeting17:26
jaypipesRavikumar_hp: no... just saying to focus on those reviews..17:27
Ravikumar_hpI will review and run it today17:27
jaypipesRavikumar_hp: they are different bugs AFAICT17:27
jaypipesRavikumar_hp: rock on, thx17:27
jaypipesdwalleck: I should have the authorization tests merge prop reviewed within an hour.17:27
dwalleckawesome, thanks17:27
jaypipesRavikumar_hp, donaldngo_hp: care to review the security groups merge proposals from rajalakshmi and sapan?17:28
donaldngo_hpsure17:28
jaypipesall: the top merge proposal is from Eoghan Glynn. It adds retries to the rest client to deal with ratelimit middleware, if it is enabled in the environment17:29
jaypipesI've already reviewed it. looks simple enough and potentially very useful (though personally, I destroy ratelimit middleware ASAP on my envs ;)17:29
dwalleckI just saw that. His patch should work, but the one he'll hit first is posts, which is x per hour...17:30
dwalleckNot sure if people want their tests to pause for an hour. And the 50 per day would be even worse17:30
jaypipes:) indeed...17:30
jaypipesanyway, please add your comments on that review.17:30
dwalleckBut it does seem to work. Worth a try17:30
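For illustration, the retry idea being discussed amounts to something like the loop below. This is a hedged curl sketch, not Eoghan's actual rest-client patch; the URL is a placeholder and the fixed sleep stands in for honoring the Retry-After header.

```shell
# Retry a request when the ratelimit middleware answers 413 (over limit).
# The real change lives in Tempest's rest client; this sketch only shows
# the control flow. Endpoint and retry count are assumptions.
request_with_retry() {
    url=$1
    attempts=0
    while [ "$attempts" -lt 3 ]; do
        status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
        if [ "$status" != "413" ]; then
            echo "$status"
            return 0
        fi
        sleep 1                   # real code would sleep for Retry-After
        attempts=$((attempts + 1))
    done
    echo "$status"
    return 1
}
```

As dwalleck notes, this only helps with short per-minute windows; a per-hour or per-day limit would stall the run far longer than any sane retry loop should wait.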
jaypipeswhich leads to the last one ... https://review.openstack.org/#change,473917:31
jaypipeswhich is dwalleck's improvements to the config in tempest17:31
jaypipesit is a large change and will likely cause a lot of merge conflicts for any branch unlucky enough to try merging after it :)17:31
dwalleckYes...this is my start to getting the configurations organized in a more logical manner17:31
jaypipesso... dwalleck, if you don't mind, I'll put the merge hell in your court and attempt to clear that patch last?17:32
dwalleckjaypipes: Fair enough17:32
*** jsavak has quit IRC17:32
jaypipesdwalleck: but by the end of the week. that means if folks don't get their reviews and review comments fixed, they'll be dealing with those conflicts themselves :)17:32
*** jeremydei has quit IRC17:33
jaypipesOK, let's switch topics to Swift, eh? :)17:33
dwalleckalso fair :) I realize it's a huge change, but I saw very clearly this week the pains people are having17:33
jaypipes#topic JoseSwiftQA and dwalleck to give status report on swift tests17:33
*** openstack changes topic to "JoseSwiftQA and dwalleck to give status report on swift tests"17:33
dwalleckJoseSwiftQA: We're close, right? Both for swift and CBS?17:34
JoseSwiftQAcorrect.17:34
jaypipesdwalleck: let's focus on swift right now :)17:34
dwalleckFair enough17:34
jaypipesJoseSwiftQA: you guys have been adding to tempest, right?17:34
JoseSwiftQAService is figured out, just have to work out a few kinks and clean it up.  Haven't committed anything yet.17:35
jaypipesk... looking forward to it!17:35
JoseSwiftQA:)17:35
dwalleckThey weren't doing it in Tempest at first, but they're merging it all in17:35
dwalleckI've seen it, it'll be a huge help17:35
jaypipesRavikumar_hp, donaldngo_hp: what about Swift at HP? any tests been worked on that could be aligned with tempest?17:36
Ravikumar_hpwe have tests , but we need to refactor little bit17:36
Ravikumar_hpto suit tempest17:36
jaypipesRavikumar_hp: actually, better question might be who is the lead QA person at HP for Swift stuff?17:36
jaypipesI can reach out to them to coordinate/collaborate with Jose and Daryl's teams17:37
Ravikumar_hpJohn Lenihan in Ireland17:37
jaypipesgotcha. OK, I'll reach out... let him know what's happening in the community17:37
Ravikumar_hpok17:37
jaypipesdwalleck, JoseSwiftQA: any ETA on swift stuff?17:37
jaypipesthey are hedging their bets :)17:38
jaypipesok, I won't push.17:38
dwalleckJoseSwiftQA: how do you feel? I don't want you to rush17:38
dwalleckI'm all about stability this week :)17:39
JoseSwiftQAit's almost ready17:39
dwalleckWe don't have to have everything done, just base service and a few tests even17:39
JoseSwiftQAJust need to find time to finish baking it17:39
JoseSwiftQA:D17:39
jaypipesno problem guys17:39
dwalleckOnce we get that in, everything else can follow, plus others can join in17:39
*** nati_ueno has quit IRC17:40
jaypipesin the meantime, I'll get with jeblair and mtaylor about the needs for tempest in the CI infrastructure17:40
jaypipes#topic Open Discussion17:40
*** openstack changes topic to "Open Discussion"17:40
jaypipesAnybody got stuff to bring up? Any concerns? Questions? Comments about the design summit coming up?17:41
jaypipesDoes everyone have a registration code for the design summit that needs one?17:41
dwalleckI think most of my team will be at the summit, which should be fun :)17:41
jaypipesIf not, please email me.17:41
jaypipesdwalleck: awesome!17:41
jaypipesdonaldngo_hp and Ravikumar_hp are pretty close, so I hope they make it there :)17:41
Ravikumar_hpwe have registered . (Ravi, Nayna , Donald)17:42
jaypipesexcellent17:42
Ravikumar_hpjaypipes: You will be in SF this week?17:42
JoseSwiftQAWe've mostly all registered.17:42
dwalleckI also pinged Thierry about having at least one Tempest specific session, which sounds possible17:43
jaypipesRavikumar_hp: yep, next Tuesday in SFO and then Wed through Friday in Santa Clara at PyCon17:43
Ravikumar_hpmay be you should stop by in Cupertino17:43
jaypipesdwalleck: we will have >1 session I believe. at a bare minimum I think we should have a Tempest Install-n-Run fest!17:43
jaypipesRavikumar_hp: I'd love to! :)17:43
jaypipesOK, folks, any other questions/concerns? I'll wrap things up if not...17:44
dwalleckI'm hoping for even more than one. I'd really like to roadmap out the future17:44
dwallecknope, I'm good17:44
jaypipes++17:44
jaypipesOK all, have a great day. See you on the reviews :)17:45
jaypipes#endmeeting17:45
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"17:45
openstackMeeting ended Thu Mar  1 17:45:14 2012 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:45
openstackMinutes:        http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-17.00.html17:45
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-17.00.txt17:45
openstackLog:            http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-17.00.log.html17:45
mtaylorjaypipes: tempest++17:45
*** dwalleck has quit IRC17:45
*** johngarbutt has quit IRC17:45
jaypipesmtaylor: gonna grab some lunch. after that, chat about tempest needs and the HP CI and QA clusters?17:46
*** zigo has joined #openstack-meeting17:47
*** JoseSwiftQA has quit IRC17:48
*** AlanClark has quit IRC17:49
*** jeremydei has joined #openstack-meeting17:50
mtaylorjaypipes: yes to tempest. still working on clusters - I think I'm having a communications breakdown with the guys17:50
*** AlanClark has joined #openstack-meeting17:52
jaypipesmtaylor: ok, let's chat in about 45 mins17:53
*** jdg has joined #openstack-meeting17:53
*** kishanov has joined #openstack-meeting17:54
*** jdurgin has joined #openstack-meeting17:57
*** bencherian_ has quit IRC17:59
*** bencherian has joined #openstack-meeting17:59
jdgAny folks here for the nova-volume meeting?17:59
DuncanTYup18:01
jdggreat, was hoping you'd be here18:01
jdg#startmeeting18:01
openstackMeeting started Thu Mar  1 18:01:21 2012 UTC.  The chair is jdg. Information about MeetBot at http://wiki.debian.org/MeetBot.18:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic.18:01
*** YorikSar has joined #openstack-meeting18:01
jdg#link http://wiki.openstack.org/NovaVolumeMeetings18:01
DuncanTAnybody else?18:02
jdgNot much was added to the agenda other than DuncanT's request to talk about boot from volume18:02
YorikSaro/18:02
jdgThis might be a short meeting... :)  Maybe we should give folks another minute?18:03
jdurginI'm here too18:03
kishanovogelbukh wanted to join, but he might be unavailable right now18:03
jdgalright, if there's no objections let's get started18:03
jdg#topic boot from volume18:03
*** openstack changes topic to "boot from volume"18:03
*** dricco has joined #openstack-meeting18:04
jdgDuncanT... you had some things you wanted to talk about here?18:04
DuncanTYes please18:04
jdgGo for it18:04
DuncanTBasically I want to get some sort of consensus as to where people think boot-from-volume is heading18:05
*** dolphm has quit IRC18:05
jdgAnything specific?18:05
DuncanTI'm not 100% sure what works at the moment, but I'd like some idea of what people think should work...18:05
DuncanTBoot from iso18:06
DuncanTBoot from a volume that I've arranged to have a boot loader on it already18:06
jdgRather than using the intermediate instance etc?18:06
DuncanTYes18:06
jdgPersonally I agree that this is something that we "need"; how to go about it is another story18:07
jdgDuncanT: Do you have any thoughts on how to implement?18:08
*** derekh has quit IRC18:08
*** renuka has joined #openstack-meeting18:08
*** jeremydei has quit IRC18:08
DuncanTI don't /think/ it is possible to run an instance at the moment that doesn't have a glance reference?18:08
DuncanTI'm only just getting familiar with how nova starts instances at the moment18:08
*** jog0 has joined #openstack-meeting18:09
DuncanTSorry, I'm not as well prepared here as I'd hoped to be18:10
*** dolphm has joined #openstack-meeting18:10
jdgNo worries...  Does anybody have any thoughts on this?  Or do we not have the right people today?18:10
jdurginlast I checked that was the case, and I agree the imageref shouldn't be necessary18:10
jdurgindid you guys see https://blueprints.launchpad.net/nova/+spec/auto-create-boot-volumes?18:10
DuncanTjdurgin: Somebody here pointed me at that a few minutes ago18:11
DuncanTWe'd like to be able to create the volumes from (volume) snapshots too18:12
YorikSarYes, it seemed to be a good idea even back in early Diablo days18:12
jdurginit's started to be implemented now though: https://review.openstack.org/#change,457618:12
*** rnirmal has joined #openstack-meeting18:13
DuncanTSo it looks like this makes it a system-wide change to always use persistent volumes?18:14
YorikSarI ran through this change, looks pretty good18:15
YorikSarBut shouldn't there be some cleanup after instance shutdown?18:15
DuncanTI don't see how you start an instance using the same volumes again18:16
DuncanTi.e. terminate the instance, keep the volumes then boot a new instance using those volumes18:16
*** jeremydei has joined #openstack-meeting18:16
DuncanTThe same as you might shutdown a server and have it come back exactly as it was18:16
*** Gordonz has quit IRC18:17
DuncanTI'm not entirely sure of the use case for the AutoCreateVolumes feature without this facility18:17
DuncanTMaybe I'm missing something?18:17
jdurginDuncanT: maybe the way to accomplish that is to make creating the volume from an image a parameter of the api request, instead of a global flag18:18
YorikSarWe can minimize usage of local disks on compute node, it can be necessary sometimes18:19
DuncanTjdurgin: I think so, yes. I think you can get the behaviour that this new feature gives you using block_device_mapper flags on every instance creation18:19
YorikSarFor example, to minimize VM downtime on compute host failure18:19
DuncanTYorikSar: Ok, I can see that18:20
YorikSarBut if we are going to use/reuse such volumes, it looks like we should not put this logic into compute18:20
DuncanTYorikSar: I agree18:20
YorikSarMaybe we should let nova-volume summon a new volume from an image and then start an instance on it?18:21
DuncanTCan we find a way to reuse the code in nova-compute that currently creates the ephemeral (local) volumes here, since we know it is good?18:22
jdurginYorikSar: I don't think nova-volume should start the instance itself, but adding a VolumeDriver method to create a volume from an image sounds good to me18:22
YorikSarjdurgin: Of course, instance creation should be a separate API call handled by compute18:23
*** jeremydei has quit IRC18:24
*** joearnold has quit IRC18:24
jdgSo the create API call could take an argument to specify the instance should reside on a volume... create the volume, and launch the instance18:25
YorikSarDuncanT: I don't see how can it help here18:25
DuncanTI've no strong feelings on where it should live, but using different code to populate local vs. persistent volumes from glance seems odd18:25
DuncanTThe task is essentially the same, isn't it?18:25
jdurginnot quite the same - local disks are just files downloaded to the host from glance18:26
ogelbukhafaik, local volumes are not actually volumes18:26
ogelbukhjakedahn: +118:26
ogelbukhjdurgin: +118:26
jdurgincurrently nova-volume has no way to actually write to the volumes18:26
renukaDuncanT: quick update...I am sorry I joined in late, so I may not have all the context. The way I have created test bfv volumes so far is by attaching a new volume to an existing instance and dd-ing over the contents of /boot18:27
YorikSarrenuka: Exactly this logic should be separated into "create_volume_from_image" API call18:27
renukaDuncanT: by new volume, I mean one that nova-volume knows about18:28
DuncanTOk, I thought it could inject files to them and stuff, but I haven't looked at the code in detail. Doesn't it do some magic to expand the filesystem to fill whatever size volume your flavour provides?18:28
DuncanTOk, if we need an API call to do it, I'm fine with that18:29
YorikSarI think we can delegate this call to some place where both Glance and nova-volume are accessible, along with this resizefs functionality18:29
*** clayg has joined #openstack-meeting18:29
renukaDuncanT: not sure about the details of that.. but do we care about filesystem size when we are explicitly saying to boot from *this* volume18:29
*** hggdh_ has joined #openstack-meeting18:29
claygsry, late - what'd I miss :)18:30
DuncanTrenuka: Only if/when initially populating *this* volume with an image from glace18:30
DuncanTs/glace/glance/18:30
DuncanTrenuka: If the volume gets populated any other way, I agree we don't care18:30
*** hggdh_ has quit IRC18:30
YorikSarMaybe we should move it to nova.virt.disk and run on nova-api node?18:30
*** hggdh has quit IRC18:31
*** Gordonz has joined #openstack-meeting18:31
renukaDuncanT: my impression was, when people use boot from volume, they will have the exact volume they want to boot from. So you are talking about when we create *this* volume, correct?18:31
*** mdomsch has quit IRC18:31
DuncanTYorikSar: Would that mean nova-api nodes then need to be able to connect to / mount volumes?18:31
claygnm, found the log http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-18.01.log.txt18:31
renukawhy should this be in the nova-volume api? versus like a utility command18:32
YorikSarDuncanT: yeah, this is odd. It definitely should be done on nova-volume node.18:32
DuncanTrenuka: I think there are two stages. You're quite right, the second stage is to say 'boot from this already created volume'. There's also the case in https://blueprints.launchpad.net/nova/+spec/auto-create-boot-volumes of initially creating that volume from a glance image18:32
renukaDuncanT: why should this be in the nova-volume api? versus like a utility command18:33
*** hggdh has joined #openstack-meeting18:33
DuncanTrenuka: How would it be driven by a user, if it isn't in an api somewhere?18:34
*** nati_ueno has joined #openstack-meeting18:35
YorikSarrenuka: In the case of the iSCSI driver, we can cache frequently used images on the nova-volume node and propagate them locally, with a performance gain18:35
*** creiht has joined #openstack-meeting18:35
*** mdomsch has joined #openstack-meeting18:36
renukaDuncanT: I guess what I am more uncomfortable about is having nova-volume be aware of glance all of a sudden18:36
YorikSarrenuka: It can be used too frequent to be an utility.18:36
DuncanTrenuka: An example use-case might be: Create me a server using new persistent volumes for all storage, using the ubuntu glace image.... later, terminate that instance... later still boot a new instance using the volumes I created earlier, exactly as if I have powered off a physical server then powered it back on again18:36
DuncanTrenuka: If we can get nova-compute to use nova-volume volumes in place of local disk images then nova-compute existing code can do the rest18:37
renukaDuncanT: We need to be careful that all this while, compute has been the glance-aware component18:37
YorikSarrenuka: It will connect to Glance anyway to backup volumes18:37
*** jeremydei has joined #openstack-meeting18:37
jdgDuncanT: I guess I don't see why you couldn't use the existing glance and compute relations to do that?18:37
DuncanTYorikSar: We do backups (or what the euca commands call snapshots) without glance usign copy-on-write18:37
jdgeg: keep volume unaware, and just "use" it18:37
renukaYorikSar: at this point, nova-volume does not connect to glance AFAIK... back ups and snapshots are taken on the existing backend18:38
YorikSarDuncanT: I'm talking about backup to cold storage, e.g. Glance.18:38
jdurginthere are also possible optimizations if glance and nova-volume are using the same backend storage - new instances could be created that are copy-on-write18:38
DuncanTjgd: If the API is gotten correct, I think you can keep volume unaware, yes18:38
*** rnirmal has quit IRC18:38
renukajdurgin: that cannot be a requirement18:39
DuncanTjdurgin: COW instances for fast instance creation is definitely on our road-map18:39
*** js42_ has joined #openstack-meeting18:39
jdurginrenuka: not a requirement, certainly, but an optimization18:39
*** dolphm has quit IRC18:39
jdgIt seems like adding the functionality to compute api when creating an instance to "use" a volume gets what everybody wants without causing a bunch of tangles in volume code18:39
DuncanTjdg: Agreed18:40
renuka+118:40
dricco+118:40
jdurgin+118:40
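The approach being agreed on here — let the compute API accept an existing volume at instance-create time and use it as the root disk, keeping nova-volume itself unaware of how the volume is used — can be sketched roughly as follows. The `build_boot_request` helper and the exact payload shape are illustrative assumptions; nova did carry an EC2-style `block_device_mapping` at the time, but this is a simplification, not the real API.

```python
# Hypothetical sketch: a server-create request that boots from an existing
# volume instead of a glance image. The volume is assumed to already contain
# a bootable filesystem, so no imageRef is needed.

def build_boot_request(name, flavor_id, volume_id, device="/dev/vda"):
    """Build a (simplified, illustrative) boot-from-volume request body."""
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_id,
            # Map the pre-existing volume in as the root block device.
            "block_device_mapping": [{
                "volume_id": volume_id,
                "device_name": device,
                # The volume persists after terminate, so a later boot can
                # reuse it "exactly as if I had powered off a physical server".
                "delete_on_termination": False,
            }],
        }
    }

req = build_boot_request("db-server", "m1.small", "vol-0001")
```

Note nova-volume only has to *create* and *attach* the volume; all the boot logic stays on the compute side, which matches the "+1"s above.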
18:40 <YorikSar> jdg: I think this method (create volume from image) should be useful for nova-volume as a separate service.
18:41 <jdg> YorikSar: Maybe, but I like the idea of keeping volumes limited to just being "volumes"
18:41 <jdg> They should not know or care how they are being used, should they?
18:41 *** darraghb has left #openstack-meeting
18:41 <renuka> Having said that, I am not entirely thrilled with the idea of compute suddenly having control of a command that doesn't "feel" like a compute command
18:41 <YorikSar> Of course, no. But what can stop them from using Glance to store and resurrect long-term backups?
18:42 *** js42 has quit IRC
18:42 *** js42_ is now known as js42
18:42 <creiht> backups should be a function of the backend storage system
18:42 <creiht> whether that is to glance, directly to swift, or local snaps, it shouldn't matter
18:42 <renuka> YorikSar: the individual volume drivers should not have to be modified for this functionality.. we need volume only to *create* the new volume...
18:43 <creiht> as long as we make sure we have a consistent interface for the users to interact with that
18:43 <DuncanT> creiht: we need an API that can support many semantics though
18:43 <YorikSar> Mmm.. I think I should formulate this as a blueprint.
18:43 <creiht> There should be a base amount of functionality for backups
18:43 <creiht> create backup, create volume based on backup, etc.
18:43 <creiht> any extra can be added with extensions
18:45 <DuncanT> creiht: The problem there is that we consider 'snapshots' and 'backups' to be two separate things, both of which users might want to do
18:45 <creiht> because that extra functionality is going to be different for every implementation
18:45 <DuncanT> creiht: Which do you map to the standard 'backup'?
18:46 <YorikSar> renuka: Still, the volume node can be the closest node to the new volume, so we don't lose performance on network IO
18:46 <jdurgin> renuka: how would you know how to write to a volume without an additional volume driver method?
18:46 <YorikSar> jdurgin: We can mount the volume on the nova-volume node and write to it
18:46 <clayg> So there's a lot of code referencing block_device_mapping, which as I understand the EC2 feature allows you to accomplish boot-from-volume (i.e. the root fs of this instance is an ebs volume) - has anyone used the block_device_mapping feature as currently implemented?
18:46 <renuka> well but that would mean the volume needs to be mounted somewhere, right
18:47 <jdurgin> YorikSar: not all volumes are mountable on the host
18:47 <creiht> DuncanT: that's part of the problem: the terminology makes it difficult to define all of this
18:47 <creiht> I tried at one point to clean it up by using the term backups, but I may have just further confused the situation
18:48 <creiht> All I'm saying is that there should be a simple base functionality (what is currently implemented in the api as snapshots)
18:48 <DuncanT> creiht: I wrote a blueprint that attempted to define some terminology. One problem is that the ec2 API already owns some of the terms
18:48 <jdg> jdurgin: can you clarify volumes that aren't mountable?
18:48 <creiht> whether we call it backups or snapshots at this point, I've come to not care
18:48 <creiht> but anything on top of that should be an extension
18:48 <jdurgin> jdg: sheepdog and rbd are written to directly by qemu
18:49 <DuncanT> clayg: Regarding block_device_mapping, I can't see how to create a new instance that doesn't reference a glance image using it
18:49 <clayg> creiht: DuncanT: the terms can be overloaded, but it seems ok if they mean different things to different volume types/storage backends as long as the user can keep it straight.
18:49 <YorikSar> There are snapshots, which are fast to create, use less space, and can easily be a source for a new volume. And there are backups, which take a lot of time to create, are stored in a very reliable place (like Swift), and take a lot of resources to be restored.
18:49 *** joearnold has joined #openstack-meeting
18:49 <jdg> jdurgin: So what about saying sheepdog and rbd don't support this bfv method?
18:50 <jdg> Or "this method of bfv is not supported"
18:50 <YorikSar> I thought this terminology was common
18:50 <DuncanT> clayg: block_device_mapping otherwise gives a lot of the needed functionality, I think, though it is ugly
18:50 <creiht> YorikSar: you can support that, but not every storage system is going to support that
18:50 <creiht> that's why I am arguing for a simple base concept
18:50 <jdurgin> jdg: they can support it, just not with dding to a block device on the host. they can both be written to with qemu-img
18:50 <creiht> it is reasonable to expect every storage system to implement some backup/snapshot system
18:51 <jdg> jdurgin: ahh, ok
18:51 <renuka> creiht: why does this API have to be backend dependent
18:51 <YorikSar> creiht: the worst case is to attach the volume to the nova-volume host (just as it can be attached to nova-compute) and dd an image to/from it
18:51 <renuka> creiht: we should not have to rely on additional backend functionality, when all we need is the ability to create/attach a volume
18:52 <creiht> renuka: all I'm arguing for is a simple base functionality that all systems can implement
18:52 <creiht> then where systems want to vary/add their own value they can in extensions
18:52 <creiht> just like the rest of nova
18:52 <creiht> renuka: and I agree totally with that
18:53 <renuka> YorikSar: that sounds so wrong. the volume host is a control plane; we should not have random volumes whose contents we have no idea about being attached to a privileged host/vm
18:53 <DuncanT> renuka: volume host function separation ++
18:53 <renuka> ok here's a suggestion.. off the top of my head... can we expect to boot the image we want, attach a new volume to it, and dd (like how i said i was creating volumes)...
18:54 <renuka> boot the image if required, of course, not if it is running already
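The two copy mechanisms discussed above — dd onto an attached block device, versus qemu-img for backends like rbd and sheepdog that qemu writes to directly — look roughly like this. These helpers only *construct* the command lines (actually running them needs root and a real device/target), and all paths and function names are illustrative, not from nova:

```python
# Sketch of the image-to-volume copy options mentioned in the discussion.

def dd_command(image_path, device, block_size="1M"):
    """Raw copy of an image file onto an attached block device."""
    return ["dd", "if=%s" % image_path, "of=%s" % device,
            "bs=%s" % block_size, "oflag=direct"]

def qemu_img_command(image_path, target, out_format="raw"):
    """Write an image to any target qemu-img understands (a file,
    'rbd:pool/vol', 'sheepdog:vol', ...), without mounting it on the host."""
    return ["qemu-img", "convert", "-O", out_format, image_path, target]

# e.g. dd_command("/tmp/ubuntu.img", "/dev/vdb")
# e.g. qemu_img_command("/tmp/ubuntu.img", "rbd:pool/vol")
```

The dd path requires the volume to be attachable somewhere (renuka's concern about attaching unknown volumes to a privileged host applies); the qemu-img path avoids mounting entirely, which is why jdurgin raises it for rbd/sheepdog.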
18:54 <YorikSar> renuka: ok, then we should have a separate utility host to do it.
18:54 <renuka> YorikSar: why?
18:55 *** adjohn has joined #openstack-meeting
18:55 <renuka> YorikSar: i know this is quite hacky... but it shouldn't matter where we booted this image
18:55 <YorikSar> renuka: to keep Compute unaware of all the backup handling. And to not lose performance on virtualization and networking (if volumes are local)
18:56 *** dendrobates is now known as dendro-afk
18:56 <jdurgin> renuka: that's essentially what libguestfs does (and there's a plugin for it in nova.virt.disks for file injection)
18:57 <creiht> YorikSar: so are you saying that the backup functionality should be common across all storage systems?
18:57 <YorikSar> jdurgin: file injection can be done by the compute host later
18:58 <YorikSar> creiht: Yes. And if a backend cannot do something faster (like stream an image directly to storage), we should do it for it, e.g. on a utility host
18:59 <creiht> YorikSar: I would argue that is not possible (at least in the near term)
18:59 <jdurgin> YorikSar: yeah, file injection is a separate issue
18:59 <creiht> every storage system is going to do backups differently
18:59 *** adjohn has quit IRC
19:00 <creiht> for example, I imagine netapp will store backups internally as snaps
19:00 *** adjohn has joined #openstack-meeting
19:00 <YorikSar> creiht: But it should be kept in nova-volume, not spread across both volume and compute
19:00 <creiht> lunr is going to back up directly to swift
19:01 <YorikSar> snapshots should not be considered long-term storage
19:01 <jdg> So what's so wrong with a default backup in the volume driver that does something along those lines, and then folks override it in their drivers where possible? In both cases it's the same volume-api call.
19:01 <creiht> YorikSar: again that depends on the backend storage system
19:01 <creiht> for your storage system that may be the case
19:02 <creiht> for another it may not
19:02 <YorikSar> creiht: Well, it should be up to the driver
19:02 <creiht> and that is exactly what I am arguing for
19:02 <YorikSar> creiht: It can alias backup to snapshot
19:02 <creiht> we shouldn't make those decisions for them :)
19:02 <jdg> YorikSar: +1
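jdg's suggestion — one volume-api call backed by a generic default in the base driver, which individual backends override where they can do better (netapp keeping backups as internal snaps, lunr streaming straight to swift) — might look like this. All class and method names here are hypothetical, not nova's actual driver API:

```python
# Sketch: a single backup_volume() entry point with a generic default and a
# backend-specific override, so the decision stays "up to the driver".

class DictStore:
    """Stand-in for a backup target (swift, glance, ...)."""
    def __init__(self):
        self.data = {}
    def put(self, key, blob):
        self.data[key] = blob

class BaseVolumeDriver:
    def backup_volume(self, volume, store):
        # Generic fallback: stream the raw bytes to the backup store.
        store.put(volume["id"], self._read_raw(volume))
    def _read_raw(self, volume):
        # Placeholder for reading the volume's contents.
        return ("raw-bytes-of-%s" % volume["id"]).encode()

class SnapshotBackedDriver(BaseVolumeDriver):
    """A netapp-style backend that stores backups as internal snapshots:
    same API call, cheaper implementation."""
    def backup_volume(self, volume, store):
        store.put(volume["id"], b"internal-snap-ref")
```

Either way the user-facing call is identical, which is the "simple base functionality, extensions on top" position creiht argues for.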
19:03 *** nati_ueno has quit IRC
19:03 <creiht> and I have to run to another meeting
19:03 <YorikSar> creiht: But we should pass the create_backup request to the driver anyway, so there should be an API call that triggers it
19:03 <jdg> Ok, we're out of time.  Sounds like we can pick up on this again next week for sure.
19:04 <DuncanT> It sounds like we have some consensus on how boot-from-volume could work, even if backup/snapshot is a bit up in the air?
19:04 <jdg> DuncanT: Yes, I think folks agree on the top level.  Backup/snapshot details still need some discussion.
19:04 <DuncanT> Maybe I'll try to summarise my understanding of boot from volume, with notes on the code that seems to be missing, for next week?
19:04 <renuka> DuncanT: can you summarize?
19:05 <YorikSar> DuncanT: If we support a future with backups in nova-volume, this logic should be moved to nova-volume.
19:05 <jdg> Another question I have is how this impacts the existing blueprint and work that's been done by Samsung
19:05 <clayg> jdg: any progress on uuids?
19:05 <jdg> clayg: :)
19:05 <DuncanT> jdg: Is http://wiki.openstack.org/AutoCreateBootVolumes the Samsung one?
19:05 <jdg> Working on it.  My first approach trashed ec2 calls.
19:05 <DuncanT> Sorry, I don't know who's who
19:05 <renuka> DuncanT: could you put down what we have agreed on... i am still confused about this
19:05 <jdg> DuncanT: Yes
19:06 <YorikSar> And I've been beaten up for unifying the volume API and extension.
19:06 <DuncanT> jdg: Ok, I'll make sure any interaction with that is documented
19:06 <jdg> clayg: I'm going to take another look today and maybe send out an email to the volume list about what I'm trying to do.
19:07 <jdg> My thought now is to just modify existing DB/API methods to check if they're receiving a UUID versus an int id and behave accordingly.
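The "check whether we received a UUID or an int id" idea jdg describes can be done with a small predicate; nova later grew a `utils.is_uuid_like` helper for exactly this. The sketch below is a standalone approximation, and `get_volume` is a hypothetical illustration of the branching, not a real nova method:

```python
import uuid

def is_uuid_like(value):
    """Return True if value parses as a canonical UUID string,
    False for legacy integer ids (or anything else)."""
    try:
        return str(uuid.UUID(str(value))) == str(value).lower()
    except (TypeError, ValueError, AttributeError):
        return False

def get_volume(identifier):
    # Hypothetical DB/API shim: branch on the identifier format so both
    # old int-id callers and new UUID callers keep working.
    if is_uuid_like(identifier):
        return ("by_uuid", str(identifier))
    return ("by_int_id", int(identifier))
```

This keeps the EC2 API (which assumed int ids) working during the transition, which is where jdg's first approach ran into trouble.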
19:07 <ogelbukh> vishy mentioned some plan for separating nova-volume into a project of its own
19:07 <jdg> But this creates some confusion higher up
19:07 <jdg> ogelbukh: Yes
19:07 <ogelbukh> did anyone see updates on that?
19:07 <DuncanT> renuka: I'll email out the summary ASAP, then you can comment on that... to be honest it feels like what you were doing is basically what I'm thinking of, other than using something slightly smarter than dd
19:08 <ogelbukh> or he's going to do it at the summit
19:08 <jdg> ogelbukh: I think that's going to be relegated more towards the summit
19:08 <YorikSar> We need to use #action to pin down what should be done by next week
19:08 <ogelbukh> jdg: oh, fine
19:08 <renuka> DuncanT: i think this could be a different service altogether, or the closest would be to have it become part of nova-compute
19:08 <jdg> #action DuncanT to send out summary of where we're at on BFV
19:09 <DuncanT> renuka: It is going to involve nova-compute work, definitely
19:09 <jdg> Anything else real quick?
19:10 <jdg> Ok, thanks everyone.
19:10 <jdg> #endmeeting
19:10 *** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"
19:10 <openstack> Meeting ended Thu Mar  1 19:10:06 2012 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
19:10 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-18.01.html
19:10 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-18.01.txt
19:10 <openstack> Log:            http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-03-01-18.01.log.html
19:10 <DuncanT> Thanks all
19:11 *** andrewsben has joined #openstack-meeting
19:11 *** renuka has quit IRC
19:11 *** blamar is now known as markwash_
19:11 *** markwash_ is now known as blamar
19:14 *** jdg has left #openstack-meeting
19:14 *** donaldngo_hp has quit IRC
19:22 *** nati_ueno has joined #openstack-meeting
19:25 <vishy> DuncanT: i missed this, but is there something with boot from volume that doesn't work?
19:27 <vishy> DuncanT: you can currently boot from a volume directly.  There is some funky interaction to get an image onto the volume but it works more or less.
19:29 *** donaldngo_hp has joined #openstack-meeting
19:29 *** kishanov has quit IRC
19:33 *** dwalleck has joined #openstack-meeting
19:34 *** mikeyp has joined #openstack-meeting
19:34 *** mikeyp has left #openstack-meeting
19:41 *** novas0x2a|laptop has joined #openstack-meeting
19:45 *** mattray has quit IRC
19:47 *** jog0 has left #openstack-meeting
19:47 *** dwalleck has quit IRC
19:48 *** jog0_ has joined #openstack-meeting
19:48 *** dwalleck has joined #openstack-meeting
19:49 *** adjohn has quit IRC
19:50 *** mattray has joined #openstack-meeting
19:52 *** mikeyp has joined #openstack-meeting
19:52 *** jog0_ has quit IRC
19:53 *** novas0x2a|laptop has quit IRC
19:53 *** novas0x2a|laptop has joined #openstack-meeting
19:54 *** dolphm has joined #openstack-meeting
19:55 *** deshantm has quit IRC
19:56 * mikeyp lurking during the hack-in
19:58 *** n0ano has joined #openstack-meeting
19:59 *** jog0 has joined #openstack-meeting
19:59 *** bengrue has joined #openstack-meeting
19:59 *** deshantm has joined #openstack-meeting
20:01 *** kishanov has joined #openstack-meeting
20:01 *** dwalleck has quit IRC
20:01 *** jog0 has quit IRC
20:01 *** jog0 has joined #openstack-meeting
20:03 <n0ano> anyone here for the orchestration meeting?
20:04 *** creiht has left #openstack-meeting
20:08 *** kishanov has quit IRC
20:09 *** anotherjesse1 has joined #openstack-meeting
20:13 *** n0ano has left #openstack-meeting
20:15 *** mdomsch_ has joined #openstack-meeting
20:15 *** dwalleck has joined #openstack-meeting
20:18 *** mdomsch has quit IRC
20:19 *** novas0x2a|laptop has quit IRC
20:21 *** nati_ueno has quit IRC
20:21 *** asdfasdf has joined #openstack-meeting
20:21 *** novas0x2a|laptop has joined #openstack-meeting
20:27 *** bencherian has quit IRC
20:29 *** dwalleck has quit IRC
20:30 *** bencherian has joined #openstack-meeting
20:30 *** kishanov has joined #openstack-meeting
20:34 *** dwalleck has joined #openstack-meeting
20:39 *** dwalleck has quit IRC
20:41 <anotherjesse1> westmaas: bcwaldon just pushed an update to https://review.openstack.org/#change,4675 - he says "still need to add more logging"
20:41 <anotherjesse1> westmaas: getting pub cloud feedback would be <3
20:41 <bcwaldon> why are you talking about this in openstack-meeting
20:41 <westmaas> bcwaldon: why not
20:41 <bcwaldon> westmaas: quiet down
20:41 <anotherjesse1> haha - because I mis-typed
20:41 <bcwaldon> did you want some privacy?
20:42 <anotherjesse1> moving to #dev
20:44 *** adjohn has joined #openstack-meeting
20:46 *** clayg has left #openstack-meeting
20:49 *** dprince has quit IRC
20:51 *** joearnold has quit IRC
20:52 *** kishanov has quit IRC
21:01 *** dendro-afk is now known as dendrobates
21:02 *** troytoman-away has quit IRC
21:02 *** troytoman-away has joined #openstack-meeting
21:07 *** sandywalsh has quit IRC
21:09 *** AlanClark has quit IRC
21:13 *** joearnold has joined #openstack-meeting
21:27 *** mikeyp has left #openstack-meeting
21:28 *** Yak-n-Yeti has quit IRC
21:33 *** Yak-n-Yeti has joined #openstack-meeting
21:37 *** zigo has quit IRC
21:38 *** kishanov has joined #openstack-meeting
21:40 *** jmckenty has joined #openstack-meeting
21:43 *** kishanov has quit IRC
21:50 *** deshantm has quit IRC
21:54 *** kishanov has joined #openstack-meeting
21:56 *** sandywalsh has joined #openstack-meeting
21:57 *** AlanClark has joined #openstack-meeting
22:01 *** AlanClark has quit IRC
22:01 *** mdomsch_ has quit IRC
22:06 *** kishanov has quit IRC
22:12 *** dtroyer has quit IRC
22:14 *** ryanpetrello has quit IRC
22:14 *** dhellmann has quit IRC
22:16 *** joearnold has quit IRC
22:22 *** asdfasdf has quit IRC
22:25 *** andrewsben has quit IRC
22:33 *** kishanov has joined #openstack-meeting
22:35 *** ryanpetrello has joined #openstack-meeting
22:40 *** kishanov has quit IRC
22:41 *** Yak-n-Yeti has quit IRC
22:47 *** dolphm has quit IRC
22:49 *** Yak-n-Yeti has joined #openstack-meeting
22:54 *** markvoelker has quit IRC
22:55 *** Yak-n-Yeti has quit IRC
23:06 *** kishanov has joined #openstack-meeting
23:10 *** anotherjesse1 has quit IRC
23:13 *** dolphm has joined #openstack-meeting
23:15 *** danwent has quit IRC
23:16 *** kishanov has quit IRC
23:19 *** danwent has joined #openstack-meeting
23:24 *** Gordonz has quit IRC
23:26 *** ryanpetrello has quit IRC
23:26 *** dolphm has quit IRC
23:35 *** littleidea has quit IRC
23:37 *** littleidea has joined #openstack-meeting
23:38 *** tsobral has joined #openstack-meeting
23:40 *** jastr has quit IRC
23:40 *** madhav-puri has joined #openstack-meeting
23:41 *** madhav-puri has left #openstack-meeting
23:46 *** dtroyer has joined #openstack-meeting

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!