Monday, 2018-09-24

*** fried_rolls1 has joined #openstack-meeting-alt00:48
*** fried_rolls has quit IRC00:50
*** fried_rolls1 is now known as fried_rolls00:50
*** fried_rolls is now known as efried01:07
*** rcernin has quit IRC01:40
*** rcernin has joined #openstack-meeting-alt01:40
*** hongbin has joined #openstack-meeting-alt01:49
*** cloudrancher has joined #openstack-meeting-alt02:22
*** yamamoto has quit IRC02:42
*** ijw has joined #openstack-meeting-alt03:28
*** hongbin has quit IRC03:39
*** yamamoto has joined #openstack-meeting-alt04:08
*** cloudrancher has quit IRC04:20
*** dtrainor has quit IRC04:51
*** cloudrancher has joined #openstack-meeting-alt05:06
*** cloudrancher has quit IRC05:21
*** yamamoto has quit IRC05:30
*** dave-mccowan has joined #openstack-meeting-alt05:36
*** jtomasek has joined #openstack-meeting-alt05:37
*** yamamoto has joined #openstack-meeting-alt05:39
*** cloudrancher has joined #openstack-meeting-alt06:01
*** dpawlik has joined #openstack-meeting-alt06:03
*** dtrainor has joined #openstack-meeting-alt06:05
*** apetrich has joined #openstack-meeting-alt06:05
*** e0ne has joined #openstack-meeting-alt06:23
*** cloudrancher has quit IRC06:26
*** ijw has quit IRC06:36
*** belmoreira has joined #openstack-meeting-alt06:36
*** e0ne_ has joined #openstack-meeting-alt06:41
*** e0ne has quit IRC06:44
*** belmoreira has quit IRC06:45
*** belmoreira has joined #openstack-meeting-alt06:47
*** giblet is now known as gibi06:53
*** rdopiera has joined #openstack-meeting-alt07:04
*** ijw has joined #openstack-meeting-alt07:06
*** rcernin has quit IRC07:06
*** ijw has quit IRC07:10
*** ircuser-1 has joined #openstack-meeting-alt07:20
*** alexchadin has joined #openstack-meeting-alt07:51
*** jcoufal has joined #openstack-meeting-alt08:07
*** alexchadin has quit IRC08:14
*** janki has joined #openstack-meeting-alt08:33
*** janki has quit IRC08:34
*** janki has joined #openstack-meeting-alt08:34
*** finucannot is now known as stephenfin08:38
*** rossella_s has quit IRC08:41
*** alexchadin has joined #openstack-meeting-alt08:42
*** derekh has joined #openstack-meeting-alt08:44
*** yamamoto has quit IRC08:45
*** jcoufal has quit IRC09:07
*** pbourke has joined #openstack-meeting-alt09:20
*** ianychoi_ has joined #openstack-meeting-alt09:30
*** ianychoi has quit IRC09:34
*** e0ne_ has quit IRC09:38
*** e0ne has joined #openstack-meeting-alt09:38
*** yamamoto has joined #openstack-meeting-alt09:40
*** e0ne has quit IRC10:08
*** ganso has joined #openstack-meeting-alt10:08
*** janki is now known as janki|lunch10:14
*** e0ne has joined #openstack-meeting-alt10:27
*** e0ne has quit IRC10:40
*** e0ne has joined #openstack-meeting-alt10:49
*** erlon has joined #openstack-meeting-alt10:50
*** jcoufal has joined #openstack-meeting-alt10:56
*** e0ne has quit IRC10:57
*** janki|lunch is now known as janki11:10
*** alexchadin has quit IRC11:20
*** alexchadin has joined #openstack-meeting-alt11:59
*** alexchadin has quit IRC12:05
*** alexchadin has joined #openstack-meeting-alt12:06
*** e0ne has joined #openstack-meeting-alt12:10
*** tpsilva has joined #openstack-meeting-alt12:16
*** leakypipes is now known as jaypipes12:24
*** sambetts_ is now known as sambetts12:35
*** jcoufal has quit IRC12:43
*** alexchadin has quit IRC12:43
*** alexchadin has joined #openstack-meeting-alt12:53
*** yamamoto has quit IRC13:07
*** yamamoto has joined #openstack-meeting-alt13:08
*** lbragstad has joined #openstack-meeting-alt13:17
*** belmorei_ has joined #openstack-meeting-alt13:30
*** belmoreira has quit IRC13:32
*** SteelyDan is now known as dansmith13:34
*** e0ne has quit IRC13:38
*** e0ne has joined #openstack-meeting-alt13:47
*** alex_xu has joined #openstack-meeting-alt13:48
*** helenafm has joined #openstack-meeting-alt13:51
*** yamamoto has quit IRC13:54
*** yamamoto has joined #openstack-meeting-alt13:55
*** yamamoto has quit IRC13:55
*** yamamoto has joined #openstack-meeting-alt13:55
*** yamamoto has quit IRC13:55
*** mriedem has joined #openstack-meeting-alt13:55
*** dustins has joined #openstack-meeting-alt13:55
*** beekneemech is now known as bnemec13:57
efried#startmeeting nova_scheduler14:00
openstackMeeting started Mon Sep 24 14:00:13 2018 UTC and is due to finish in 60 minutes.  The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: nova_scheduler)"14:00
openstackThe meeting name has been set to 'nova_scheduler'14:00
mriedemo/14:00
alex_xuo/14:00
gibio/14:00
* bauzas waves14:00
edleafe\o14:01
efriedalrightythen14:01
efried#link agenda https://wiki.openstack.org/wiki/Meetings/NovaScheduler#Agenda_for_next_meeting14:01
*** raildo has joined #openstack-meeting-alt14:02
efried#topic last meeting14:02
efried#link last minutes: http://eavesdrop.openstack.org/meetings/scheduler/2018/scheduler.2018-09-17-13.59.html14:02
efriedThanks jaypipes for covering ^ while I was having Day From Hell.14:02
*** openstack changes topic to "last meeting (Meeting topic: nova_scheduler)"14:02
efriedAny old business to discuss?14:02
mriedemgrenade changes are ongoing14:02
mriedemhttps://review.openstack.org/#/c/604454/14:03
*** cloudrancher has joined #openstack-meeting-alt14:03
mriedemit's a mess as most can probably imagine14:03
efriedexcellent.14:03
mriedem^ currently depends on dansmith's db migration script,14:03
mriedemand a change to the neutron-grenade job so that we can clone openstack/placement into the CI run14:03
mriedemi'm not sure yet if grenade will actually have to do the placement install/config stuff that devstack normally does14:04
mriedemb/c we've got a weird chicken/egg situation where cdent's devstack change that does the new style placement setup depends on the grenade change14:04
mriedemotherwise ideally new devstack would have placement cloned and setup via the separate repo before the upgrade runs14:04
*** hongbin has joined #openstack-meeting-alt14:05
mriedemtl;dr i'm whacking the moles as i hit them14:05
mriedemalso, dan is out the rest of the week after today so if we need updates to his script we'll have to make them14:05
* efried gets ready to swap in getopt14:06
*** munimeha1 has joined #openstack-meeting-alt14:06
efriedI'd like to see the pg script commonized in some sane way, since it's 90% duplicated.14:06
efriedI commented accordingly.14:07
dansmithmy script is easily broken out into common bits and the mysql bits.14:07
efriedyup, that'd be one way, lib-ify the common bits14:08
dansmithit would be trivial I think to just take --dbtype on one script14:08
dansmithand just call the right function I would think14:08
efriedyup, that was what I was thinking14:08
dansmithor commonize it, either one14:08
dansmithI just wish we wouldn't, but..14:08
efriedwhyzat?14:08
dansmithbecause we hardly even test pg and have said we're not making an effort to make sure it works14:09
dansmithall this script does is dump a set of tables but with some safeguards.. if you're running pg against community advice, I'm sure you're capable of doing that part yourself14:09
efriedFair enough, but there's clearly a want for the pg version since someone went through the trouble to propose it; and if we're going to have it at all, it might as well not be duplicated code.14:09
dansmithI don't disagree that if we're going to have it it should be commonized14:10
efriedIf we want to take a harder line on not supporting pg, we should downvote that patch I guess.14:10
mriedemwe've already merged a grenade-postgresql job14:11
mriedemso we can test it14:11
efriedthat's nice14:11
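The single-script-with-`--dbtype` idea dansmith describes above could be sketched roughly like this (the function names and dump strings are hypothetical stand-ins, not the real migration script's logic — only the dispatch pattern is the point):

```python
import argparse

def dump_mysql_tables(db_url):
    # Placeholder for the MySQL-specific dump logic.
    return "mysql-dump of %s" % db_url

def dump_postgresql_tables(db_url):
    # Placeholder for the PostgreSQL-specific dump logic.
    return "pg-dump of %s" % db_url

# Common entry point: parse --dbtype and call the right backend function,
# keeping everything else (table list, safeguards) shared between backends.
DISPATCH = {
    "mysql": dump_mysql_tables,
    "postgresql": dump_postgresql_tables,
}

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--dbtype", choices=sorted(DISPATCH), default="mysql")
    parser.add_argument("db_url")
    args = parser.parse_args(argv)
    return DISPATCH[args.dbtype](args.db_url)
```

This keeps the ~90% shared code in one place while isolating the per-database bits, which is the commonization both efried and dansmith agree on if the pg variant is kept at all.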
efriedMoving on?14:11
efried#topic specs and review14:12
*** openstack changes topic to "specs and review (Meeting topic: nova_scheduler)"14:12
efried#link Consumer generation & nrp use in nova: Series starting at https://review.openstack.org/#/c/59159714:12
efriedIn a runway (ends Oct 4)14:12
efriedAny discussion on this?14:12
gibiyep14:12
bauzasI'll sponsor it14:13
gibiefried: you have some concerns about the first patch14:13
bauzass/sponsor/shepherd14:13
mriedemi had questions in it as well14:13
gibiefried: I tried to answer it inline14:13
efriedYeah, I haven't processed your response yet. Does what I'm saying at least make sense?14:14
gibimriedem: yours are more like comments I have to fix (and I'm fixing right now)14:14
gibiefried: I'm not sure what you eventually suggest about the patch? shall we not do the delete patch at all?14:14
efriedThat was my thought, yes.14:14
efriedSwitching from DELETE to PUT {} doesn't buy us anything, and it makes it *look* like we're covering races when we're really really not.14:15
gibiefried: for delete allocation we have DELETE /allocations/<uuid> to ignore generation14:15
efriedright14:15
gibiefried: for the rest of the allocation manipulate we don't have such a workaround14:15
efriedRight14:15
efriedBut14:15
gibiefried: I really don't like ignoring generation in delete but handling it in every other case14:16
efriedI think the scenarios are different14:16
efriedlike for initial allocs in spawn14:16
efriedwe ought to be able to count on the generation being None14:16
efriedIf it ain't, we should blow up righteously14:17
gibiright14:17
efriedThere's no corollary to that in the delete case.14:17
gibithere are places where nova needs to read the allocation from placement and then manipulate it and then PUT it back14:18
efriedFor other paths like migrate or resize, I agree we have a similar issue, in that we're basically retrieving the allocations *right* before we mess with them. So we're not closing much of a gap race-wise.14:19
*** belmorei_ has quit IRC14:19
gibihttps://review.openstack.org/#/c/583667/24/nova/scheduler/utils.py14:19
gibiyeah14:19
* jaypipes here now, sorry for lateness14:19
gibiefried: what to do with those? there we cannot ignore the generation14:20
efriedThough again for migration, if we attempt to claim for the migration UUID and the generation isn't None, that's the same as spawn, which is definitely a thing to guard against.14:20
efriedidk, I just think the cure is worse than the disease for deletion.14:20
gibiefried: so when we read - manipulate - and put (like in force live migrate and in force evacuate) then we can still have to prepare for conflict14:21
efriedyes14:22
gibieven if this is not something that happens frequently14:22
gibiI can drop the delete patch if others like jaypipes and mriedem agrees14:22
mriedemno comment14:23
gibibut in the rest of the cases we will have conflicts to prepare for14:23
efriedyes14:23
efriedI'm not suggesting getting rid of any of the others14:23
gibiefried: OK14:23
efriedjaypipes: thoughts?14:23
efriedor dansmith?14:23
jaypipesreading back still14:24
gibijaypipes: how do you feel about still using DELETE /allocations/{uuid} to delete instance allocations instead of PUTting allocations: {} with generation14:24
dansmithif we don't use PUT,14:24
dansmiththen we don't know if they have changed since we examined them right?14:24
gibidansmith: right14:25
jaypipesgibi: I don't get why efried thinks your patch will not avoid races.14:25
dansmithlike, maybe we shouldn't be deleting them if we thought they should be gone but they changed?14:25
efrieddansmith: Point is that we only know something changed in the teeny window *within* this one report client method between the GET and the PUT.14:25
dansmithI thought we discussed in dublin even that it should be a put because delete with a body is weird, for exactly that reason14:25
efriedwhich is not useful14:25
jaypipesgibi: if someone changes the allocations in between the initial read and the call to PUT {} then we will fail, which will prevent the race.14:25
dansmithit's not?14:25
efriedbecause the window-of-change we care about starts when the delete is initiated14:26
efriedand the vast majority of that window is *before* we hit this method.14:26
efriedSo switching from DELETE to PUT {} in this method makes it *look* like we're closing a race when we're really really not.14:26
gibijaypipes: yes, this is what the patch does, but the window is small as efried pointed out14:26
dansmithisn't efried arguing that the window is so small that we shouldn't close it?14:27
gibiefried: we close one really small race. But if nova does not store the allocation then there is no way to close the others14:27
dansmithbecause that does not resonate with me14:27
efrieddansmith: We're not closing it.14:27
dansmithit's a small window now, but in the future it might be larger if we change the way the workflow is arranged14:27
efriedWe're narrowing it a fraction. It's still wide open.14:27
dansmithefried: you say that because why.. because we're able to detect it and we'll just re-do it right?14:28
efriedwell, if we do that (retry) then the change is truly pointless. But we agreed we would make 409 a hard fail here.14:28
gibidansmith: we are not even retrying the delete in this case14:28
dansmithI totally don't understand14:28
gibidansmith: the end user can retry the delete14:28
dansmithgibi: ack, just trying to understand what efried is talking about14:29
efriedThe race window starts when the conductor receives the delete request.14:29
efriedLots of stuff happens, lots of stuff happens, then finally down in the compute service we hit the report client method.14:29
jaypipesdansmith: efried is saying that because the window the race condition exists in is super-tiny, he doesn't feel this patch is important.14:29
efriedThen *within* that method, we do a GET followed by an immediate PUT.14:29
dansmithjaypipes: I thought he was arguing that it doesn't solve the race14:30
efriedRight. It doesn't.14:30
*** belmoreira has joined #openstack-meeting-alt14:30
efriedIf allocations are changed anywhere in that "lots of stuff happens" timeframe, we'll miss it, ignore it, delete anyway.14:30
dansmithefried: I totally don't understand what the "conductor does lots of stuff" part has to do it14:30
dansmithokay, but in the grand scheme of things,14:31
efriedSo I'm saying this patch gives us a false sense of security that we've actually done something14:31
efriedwhen we really haven't.14:31
dansmiththe get followed by the put should include some "does the data I got in the GET make sense? Yes, okay, delete if it hasn't changed"14:31
dansmithwhich is the pattern we should be following,14:31
dansmitheven if right now we just PUT $(GET ..)14:32
dansmithIMHO14:32
dansmithjaypipes: agree?14:32
gibiefried: without nova storing the instance allocation there is nothing to compare to so we cannot detect the race in the big window14:32
efriedYou mean e.g. comparing the GET data with some understanding of what we think the allocation should look like?14:32
efriedgibi: Precisely.14:32
dansmithefried: correct14:32
dansmithjust because the code plays fast and loose on the client side right now doesn't mean it will be that way forever14:32
jaypipesdansmith: agree. but this particular code path doesn't use the reportclient's cached provider tree and so cannot check a known consumer generation.14:32
efriedOkay, but the only thing we have that would allow us to do that is... the allocation in placement, isn't it?14:32
dansmithjaypipes: that's a transient and unfortunate state of the code at this moment though right?14:33
jaypipesdansmith: not sure?14:33
efriedWe've discussed and dismissed the idea of caching allocations in the past, I thought, but I suppose we could revisit that.14:33
jaypipesefried, gibi: where is delete_allocation_for_instance() called from at this point?14:33
jaypipesefried: it's not allocations we need/want to cache. it's consumer generation.14:34
jaypipesjust another reason why *we should have a separate consumer endpoint*.14:34
* jaypipes goes back into hole.14:34
efriedjaypipes: To answer your question, it's called from all over the place.14:34
gibijaypipes: normal instance delete, local delete, some rollback cases like failing unshelve14:34
efriedyeah, what gibi said14:35
efriedfilter scheduler too14:35
jaypipeswell, then my view is that this patch doesn't make our existing situation any *worse* and solves a micro-race that might happen (very unlikely). I don't think it's a reason to not merge the patch as-is14:35
gibideleting the allocation held by the migration uuid after move14:36
jaypipesbut I do acknowledge the problem that efried has outlined.14:36
jaypipeswe could solve this long-term by caching consumer generations.14:36
jaypipeswhich would be made easier if we had a GET /consumers endpoint.14:37
jaypipesbut I digress.14:37
efriedOkay. I'll buy the change if we can put a prominent NOTE acking the issue. gibi cool?14:37
*** dpawlik has quit IRC14:37
gibiefried: super cool14:37
jaypipesI'm fine with that.14:37
efriedThanks for the discuss y'all.14:37
gibithank you14:37
efried#agreed to keep https://review.openstack.org/#/c/591597/ in play with a NOTE acking the race issue14:37
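The pattern agreed above can be sketched as follows (the client calls are simplified stand-ins, not the real nova report client API or response objects): read the allocations to learn the consumer generation, then PUT an empty allocation set with that generation, and fail hard on conflict instead of deleting blindly.

```python
class Conflict(Exception):
    """Raised when the consumer generation no longer matches (HTTP 409)."""

def delete_allocations(client, consumer_uuid):
    # GET the current allocations to learn the consumer generation.
    body = client.get("/allocations/%s" % consumer_uuid)
    generation = body["consumer_generation"]
    # PUT an empty allocation set with that generation. If the allocations
    # changed between the GET and the PUT, placement answers 409 and we
    # surface it instead of deleting anyway.
    # NOTE: this only closes the tiny GET->PUT window inside this method;
    # changes that happened before the GET are still missed (the wider
    # race efried describes).
    status = client.put(
        "/allocations/%s" % consumer_uuid,
        {"allocations": {}, "consumer_generation": generation},
    )
    if status == 409:
        raise Conflict("allocations for %s changed underneath us"
                       % consumer_uuid)
    return status
```

The in-code NOTE mirrors the one the meeting agreed to add to the patch: the conflict check is honest about which window it actually covers.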
efriedmoving on14:38
efried#link latest pupdate: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134977.html14:38
efriedIf you wouldn't mind clicking through ^ and scrolling down to the specs section14:38
efriedGive the titles a quick skim and see if there's anything you'd like to talk about.14:38
mriedemi haven't been reviewing specs14:39
gibiefried: about my specs open on placement, I'm totally OK deferring those while placement is in a frozen state14:40
jaypipesefried: not that I'd like to talk about. just need to do the reviews. :(14:40
*** dpawlik has joined #openstack-meeting-alt14:40
jaypipesgibi: does min bandwidth sched depend on any-traits?14:40
*** dpawlik has quit IRC14:40
gibijaypipes: the multisegment use case depends on any-traits but I think that can wait14:41
jaypipesack14:41
gibijaypipes: if a network only maps to a single physnet then we are good14:41
efriedI think alex_xu merged gibi's specs, not sure what the bp tracking process does at this point. mriedem?14:41
jaypipesright.14:41
gibiI think mriedem approved the bp after the spec was merged14:41
gibiand mriedem also noted that placement is frozen in the bp14:42
efriedbut now we're saying we want to defer to Train? Or just let it ride and *probably* defer to Train at the end of the cycle?14:42
gibiefried: I prefer the latter14:43
alex_xuwe freeze to Train?14:43
efriedWe've got a handful of placement bps in play for this cycle. I was under the impression we were assuming placement would un-freeze at some point, early enough for us to get work done.14:43
mriedemit won't unfreeze until grenade is done14:43
mriedemso that's my focus14:43
efriedWe're not truly frozen at the moment, fwiw. The openstack/placement repo is open for patches; we've been merging stuff that's not purely related to extraction.14:44
mriedemwhat kinds of stuff?14:44
efriedI agree we shouldn't merge features until the upgrade stuff is sorted.14:44
gibiefried: still I'd not start stealing review time from the extraction with any-traits and similar features14:44
bauzascan we call Anna for unfreezing ?14:44
* bauzas makes sad jokes14:44
efriedmriedem: Nothing big, mainly refactoring. E.g. for reducing complexity.14:45
efriedactually, looking at it, nothing significant has merged, but there's open patches along those lines.14:46
efriedanyway, my point is, the code could get proposed and reviewed while upgrade stuff is being worked.14:46
efriedWe don't have to keep hands off the repo entirely.14:47
efriedokay, any other specs or reviews to bring up specifically?14:48
*** Leo_m has joined #openstack-meeting-alt14:48
efried#topic Extraction14:48
efriedcdent is on PTO. Anyone have any updates beyond what mriedem talked about earlier?14:48
*** openstack changes topic to "Extraction (Meeting topic: nova_scheduler)"14:48
bauzasnothing but the fact I just uploaded a new revision for the libvirt reshape14:49
bauzasand then I'll try to look whether it works14:49
bauzaswith my machine14:49
jaypipesbauzas: link?14:49
efried#link vgpu reshape https://review.openstack.org/#/c/599208/14:50
*** dave-mccowan has quit IRC14:50
jaypipesdanke14:50
bauzasjaypipes: https://review.openstack.org/#/c/599208/514:50
bauzasshit too late14:50
* efried holsters six-shooter14:50
efried#topic bugs14:50
efried#link Placement bugs https://bugs.launchpad.net/nova/+bugs?field.tag=placement14:50
*** openstack changes topic to "bugs (Meeting topic: nova_scheduler)"14:50
bauzaswe should also discuss about reshapes14:50
efriedokay14:51
bauzaseg. should we have some specific module for reshapes ?14:51
efried#topic reshapes14:51
*** openstack changes topic to "reshapes (Meeting topic: nova_scheduler)"14:51
efriedbauzas: module to do what?14:51
bauzasefried: for example, say I need a reshape for vGPU14:51
bauzasefried: then, once we agree on NUMA, we could have yet another reshape14:51
bauzasthen, say PCPU will need a new reshape14:51
bauzasetc.14:51
efriedYou think there's some portion of the reshaping algorithm in update_provider_tree that could be common for all reshapes in all virt drivers?14:51
bauzasso, I was thinking of having a specific module that upgraders would use (like FFU folks)14:52
efriedor are you saying we should ask virt drivers to keep those things somewhere separate for ease of review/maintenance?14:52
bauzasefried: maybe having a pattern, just that14:52
bauzasso we could see all the reshapes14:53
bauzasthen, knowing for example when they were created, and for which cycle14:53
bauzasI mean, I dunno14:53
efriedI think the idea has merit, if only for the sake of not having to keep adding and removing virt driver code every cycle. Keep 'em all together, kind of like we do for db upgrades?14:53
bauzasyup14:54
bauzasif so, I'll provide a new revision for https://review.openstack.org/#/c/599208/14:54
bauzasand people will discuss in it ^14:54
efriedbauzas: Maybe you can write up something that demonstrates what you're talking about?14:54
bauzasyeah, the above ^14:54
efriedokay.14:55
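A registry along the lines bauzas floats above — reshapes kept together and tracked per cycle, like db migrations — might look like this (entirely illustrative; no such module exists in nova, and the cycle/step names are made up):

```python
# Hypothetical reshape registry: each virt driver contributes named,
# cycle-tagged reshape steps applied in registration order, instead of
# carrying ad-hoc reshape code in update_provider_tree every cycle.
RESHAPES = []

def reshape(cycle, name):
    """Register a reshape step for a given release cycle."""
    def decorator(func):
        RESHAPES.append((cycle, name, func))
        return func
    return decorator

@reshape("stein", "vgpu_to_child_providers")
def move_vgpu_inventory(tree):
    # Stand-in for moving VGPU inventory from the root provider to
    # child providers; here we just record that the step ran.
    tree["moved"] = tree.get("moved", []) + ["VGPU"]
    return tree

def run_reshapes(tree):
    # Apply every registered reshape, oldest first.
    for cycle, name, func in RESHAPES:
        tree = func(tree)
    return tree
```

The payoff bauzas and efried identify is visibility: all reshapes live in one place, each tagged with the cycle that introduced it, so upgraders (including FFU folks) can see exactly what will run.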
efried#topic opens14:55
*** openstack changes topic to "opens (Meeting topic: nova_scheduler)"14:55
efriedWe're going to run out of time, but I put this one on there14:55
efriedHow to handle min_unit (and others) in a forward-looking (i.e. generic NRP) way?14:55
efried#link IRC discussion with belmoreira http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-09-20.log.html#t2018-09-20T14:11:5914:55
efriedMy take was that, if we're going to have any hope of allowing operators to configure things like allocation ratios, min/max units, etc. in the future (with nrp/sharing) then we're going to need a generic solution that doesn't get us into the config nightmare we're currently experiencing with alloc ratios.14:56
*** cloudrancher has quit IRC14:57
efriedjaypipes suggested14:57
efried#link Spec (rocky, abandoned): Standardized provider descriptor file https://review.openstack.org/#/c/550244/14:57
efriedwhich almost gets us there, but falls a bit short in that it doesn't solve the chicken/egg of being able to identify a provider before you can tweak it.14:57
gibiefried: I agree but right now I have no good solution14:58
efriedyeah14:58
efriedthe efforts around device passthrough14:58
efried#link Spec: Modelling passthrough devices for report to placement ("generic device management") https://review.openstack.org/#/c/591037/14:58
efried#link Spec: Generic device discovery policy https://review.openstack.org/#/c/603805/14:58
efriedare leading towards defining file formats in a similar spirit14:58
efriedbut I think that fundamental problem still exists - how do I identify a provider that's going to be automatically generated by nova (or neutron or cyborg or cinder or...)14:59
efried...in order to provide customized inventory settings for it?14:59
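One purely hypothetical way around the chicken/egg problem just posed — this is not the abandoned spec's format — is to key descriptor overrides by a matching rule (provider name pattern plus resource class) rather than by provider UUID, so the operator can write the override before nova auto-generates the provider:

```python
import fnmatch

# Hypothetical descriptor: inventory overrides keyed by a name pattern
# and resource class instead of a provider UUID, so they can be written
# before the provider exists.
DESCRIPTOR = [
    {"match_name": "compute1_*", "resource_class": "VCPU",
     "overrides": {"allocation_ratio": 4.0, "min_unit": 1}},
]

def apply_overrides(provider_name, inventories, descriptor=DESCRIPTOR):
    # When the provider is created, apply every matching rule to the
    # corresponding resource class inventory.
    for rule in descriptor:
        if fnmatch.fnmatch(provider_name, rule["match_name"]):
            rc = rule["resource_class"]
            if rc in inventories:
                inventories[rc].update(rule["overrides"])
    return inventories
```

This trades the identification problem for a naming-convention problem, which echoes jaypipes's point at the end of the hour: different clients would map local identifiers to provider identifiers differently.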
bauzasmmmm14:59
efriedSince we're out of time, consider ^ food for thought.14:59
bauzasmaybe we should autogenerate this file if needed ?14:59
bauzaslike we do for options ?14:59
bauzasa getter/setter way15:00
jaypipesefried: I specifically left out the "identification of the provider before you need it" because the clients of such a descriptor file would undoubtedly have different ideas of how to map local identifiers to RP identifiers.15:00
efriedit would have to be updated frequently. basically kept in sync with placement.15:00
efriedThat's time, folks. Let's continue in -nova if desired.15:00
efried#endmeeting15:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"15:00
openstackMeeting ended Mon Sep 24 15:00:35 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-09-24-14.00.html15:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-09-24-14.00.txt15:00
openstackLog:            http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-09-24-14.00.log.html15:00
*** cloudrancher has joined #openstack-meeting-alt15:02
*** helenafm has left #openstack-meeting-alt15:02
*** cloudrancher has quit IRC15:06
*** mriedem has left #openstack-meeting-alt15:11
*** dave-mccowan has joined #openstack-meeting-alt15:12
*** yamamoto has joined #openstack-meeting-alt15:13
*** dave-mccowan has quit IRC15:17
*** dave-mccowan has joined #openstack-meeting-alt15:22
*** jesusaur has quit IRC15:26
*** dustins has quit IRC15:26
*** dustins has joined #openstack-meeting-alt15:30
*** jesusaur has joined #openstack-meeting-alt15:31
*** efried has quit IRC15:37
*** dpawlik has joined #openstack-meeting-alt15:38
*** efried has joined #openstack-meeting-alt15:39
*** dpawlik has quit IRC15:42
*** alexchadin has quit IRC15:47
*** e0ne has quit IRC16:01
*** cloudrancher has joined #openstack-meeting-alt16:02
*** lbragstad has quit IRC16:10
*** lbragstad has joined #openstack-meeting-alt16:18
*** janki has quit IRC16:20
*** spotz is now known as spotz_16:25
*** spotz_ is now known as spotz16:25
*** rdopiera has quit IRC16:27
*** dave-mccowan has quit IRC16:38
*** ttsiouts has joined #openstack-meeting-alt16:38
*** ttsiouts has quit IRC16:45
*** ganso has quit IRC16:49
*** diablo_rojo has joined #openstack-meeting-alt16:51
*** derekh has quit IRC16:59
*** dpawlik has joined #openstack-meeting-alt17:00
*** belmoreira has quit IRC17:04
*** alex_xu has quit IRC17:05
*** dave-mccowan has joined #openstack-meeting-alt17:35
*** sambetts is now known as sambetts|afk17:39
*** munimeha1 has quit IRC17:47
*** cloudrancher has quit IRC17:48
*** cloudrancher has joined #openstack-meeting-alt17:49
*** dpawlik has quit IRC17:51
*** e0ne has joined #openstack-meeting-alt18:06
*** dpawlik has joined #openstack-meeting-alt18:17
*** dpawlik_ has joined #openstack-meeting-alt18:17
*** dpawlik has quit IRC18:18
*** ianychoi_ is now known as ianychoi18:54
*** panda has quit IRC18:56
*** yamahata has quit IRC19:20
*** iyamahat has quit IRC19:20
*** iyamahat has joined #openstack-meeting-alt19:33
*** yamahata has joined #openstack-meeting-alt19:35
*** panda has joined #openstack-meeting-alt19:37
*** dpawlik_ has quit IRC19:51
*** dpawlik has joined #openstack-meeting-alt19:55
*** dpawlik has quit IRC19:56
*** erlon has quit IRC20:05
*** raildo_ has joined #openstack-meeting-alt20:11
*** raildo_ has quit IRC20:12
*** dpawlik has joined #openstack-meeting-alt20:13
*** raildo has quit IRC20:13
*** dpawlik has quit IRC20:18
*** e0ne has quit IRC20:37
*** dpawlik has joined #openstack-meeting-alt20:40
*** slaweq has quit IRC21:01
*** amito has quit IRC21:10
*** amito has joined #openstack-meeting-alt21:11
*** dpawlik has quit IRC21:12
*** jtomasek has quit IRC21:37
*** cloudrancher has quit IRC21:42
*** cloudrancher has joined #openstack-meeting-alt21:43
*** dustins has quit IRC22:01
*** dpawlik has joined #openstack-meeting-alt22:16
*** dpawlik has quit IRC22:20
*** panda is now known as panda|off22:38
*** rcernin has joined #openstack-meeting-alt22:45
*** hongbin has quit IRC22:45
*** dims_ has joined #openstack-meeting-alt22:59
*** mmedvede_ has joined #openstack-meeting-alt23:00
*** stephenfin_ has joined #openstack-meeting-alt23:00
*** dims has quit IRC23:00
*** jlvillal has quit IRC23:00
*** strigazi has quit IRC23:00
*** mmedvede has quit IRC23:00
*** stephenfin has quit IRC23:00
*** mmedvede_ is now known as mmedvede23:00
*** cloudrancher has quit IRC23:01
*** andreykurilin has quit IRC23:03
*** purplerbot has quit IRC23:04
*** cloudrancher has joined #openstack-meeting-alt23:05
*** cloudrancher has quit IRC23:05
*** andreykurilin has joined #openstack-meeting-alt23:05
*** cloudrancher has joined #openstack-meeting-alt23:07
*** cloudrancher has quit IRC23:07
*** cloudrancher has joined #openstack-meeting-alt23:10
*** cloudrancher has quit IRC23:10
*** cloudrancher has joined #openstack-meeting-alt23:12
*** tetsuro has joined #openstack-meeting-alt23:24
*** cloudrancher has quit IRC23:41
*** tpsilva has quit IRC23:43
*** erlon has joined #openstack-meeting-alt23:46
*** cloudrancher has joined #openstack-meeting-alt23:51
*** cloudrancher has quit IRC23:57
*** Leo_m has quit IRC23:58
