Monday, 2018-10-08

14:00 <efried> #startmeeting nova_scheduler
14:00 <openstack> Meeting started Mon Oct  8 14:00:19 2018 UTC and is due to finish in 60 minutes.  The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: nova_scheduler)"
14:00 <openstack> The meeting name has been set to 'nova_scheduler'
14:00 <takashin> o/
14:00 <jaypipes> o/
14:01 * gibi cannot join this time
14:01 * efried strikes gibi from agenda
14:02 <efried> Bueller? Bueller?
14:02 <efried> guess we'll get started and let people wander in.
14:02 <efried> #link agenda https://wiki.openstack.org/wiki/Meetings/NovaScheduler#Agenda_for_next_meeting
14:03 <efried> #topic last meeting
14:03 *** openstack changes topic to "last meeting (Meeting topic: nova_scheduler)"
14:03 <efried> #link last minutes: http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-01-14.00.html
14:03 <efried> Any old business?
14:03 *** mriedem has joined #openstack-meeting-alt
14:03 <mriedem> o/
14:03 <efried> #topic specs and review
14:03 <efried> #link latest pupdate: http://lists.openstack.org/pipermail/openstack-dev/2018-October/135475.html
14:03 *** openstack changes topic to "specs and review (Meeting topic: nova_scheduler)"
14:03 * bauzas waves a bit late
14:04 <efried> Anything to call out from the pupdate? (Will talk about extraction a bit later)
14:04 <efried> #link Consumer generation & nrp use in nova: Series now starting at https://review.openstack.org/#/c/605785/
14:04 <efried> No longer in runway. Was gonna ask gibi the status of the series, but he's not attending today.
14:05 <efried> Bottom patch has some minor fixups required
14:05 *** ganso has joined #openstack-meeting-alt
14:05 <efried> an interesting issue raised by tetsuro
14:05 <mriedem> i need to look at that one,
14:06 <mriedem> since i talked with gibi about the design before he wrote it
14:06 <efried> which is that we can't tell whether the destination is nested in many cases until after we've already decided to schedule to it.
14:06 <bauzas> also some concern about how the filters could check the computes in case allocation candidates are only about nested RPs
14:06 <efried> which means we don't know whether we need to run the scheduler until... we've run the scheduler.
14:06 <mriedem> i have the same problem with the resize to same host bug
14:07 <efried> well, if you know the host already, you can go query that host to see if he's nested. And if so, you have to run the scheduler.
14:07 <bauzas> I had a concern on changing the behaviour in https://review.openstack.org/#/c/605785/9/nova/compute/api.py@4375
14:07 *** ganso has left #openstack-meeting-alt
14:07 *** munimeha1 has joined #openstack-meeting-alt
14:07 <bauzas> if we want to call the scheduler anyway, we should have a new microversion IMHO
14:08 <mriedem> bauzas: gibi and i had talked about the behavioral changes, but we didn't think a new microversion would be needed here,
14:08 <mriedem> but it's messy i agree,
14:08 <bauzas> heh
14:08 <bauzas> huh* even
14:08 <mriedem> we already broke the force behavior in pike when we made sure we could claim allocations for vcpu/disk/ram on force
14:08 <mriedem> here we're breaking that if nested
14:09 <mriedem> the more we depend on claims in the scheduler, the less we can honor force
14:09 <bauzas> if we want to stop forcing a target (wrt I'm fine with), I just think we should still signal it for operators
14:09 <efried> Can we add a column to the hosts table caching whether the host uses nested/sharing?
14:09 <bauzas> like, you wanna still not call the scheduler ? fair enough, just don't ask for 2.XX microversion
14:10 <bauzas> >2.XX even
14:10 <jaypipes> why does it matter if we go from a non-nested host to a nested host? I mean, if the nested host supports the original requested resources and traits, who cares?
14:10 <mriedem> i don't think we want to allow people to opt into breaking themselves
14:10 <efried> bauzas: But if we don't call the scheduler, we literally *can't* schedule to a nested host
14:10 <bauzas> efried: how can I target a nested resource provider ?
14:10 <bauzas> could someone give me examples ?
14:10 <efried> jaypipes: a) How would you know if it does? b) if any of the resources are in child providers, you need GET /a_c to give you a proper allocation request.
14:11 <bauzas> operators target compute services
14:11 <jaypipes> efried: and?
14:11 <efried> and that (calling GET /a_c rather than just cloning the alloc onto the dest) is a behavior change.
14:11 <mriedem> we should probably table this until gibi is around to talk about it, because i know he and i talked about a bit of this before he started this code
14:11 <jaypipes> efried: if the scheduler returns a destination, we use it. who cares if the resources ended up being provided by child providers or not.
14:12 <efried> that's the point. The scheduler returns a destination if we call the scheduler.
14:12 <efried> We're talking about a code path where previously we *didn't* call the scheduler.
14:12 <efried> IIUC.
14:12 <mriedem> jaypipes: the question is when you force and bypass the scheduler
14:12 <jaypipes> ah... force_host rears its ugly-ass head yet again.
14:12 <mriedem> yes
14:12 <bauzas> not force_hosts
14:12 <mriedem> same idea
14:12 <bauzas> force_hosts is only for boot
14:12 <mriedem> i think we should table until gibi is around
14:12 <bauzas> but it's calling the scheduler
14:12 <efried> yeah.
14:13 <mriedem> i could try to dig up our irc conversation but it'd be hard probably
14:13 <efried> or we could just proceed, and make a big decision that affects his whole world for the next six months.
14:13 <bauzas> compared to livemigrate/evacuate where you literally can bypass scheduler
14:13 <jaypipes> I guess I still don't see why we care. If the destination host (forced or not) supports the original request, why do we care?
14:13 <efried> chicken/egg. We don't know if it supports the original request unless we call the scheduler algo to find that out.
14:13 *** SteelyDan is now known as dansmith
14:14 <mriedem> well, we claim outside of the scheduler
14:14 <efried> I'm not sure to what extent ops expect "force" to mean "don't call the scheduler" though.
14:14 <mriedem> today
14:14 <bauzas> I still don't get why we're concerned by nested resource providers being targets
14:14 <jaypipes> efried: why can't we ask the destination host in pre-live-migrate?
14:14 <mriedem> like i said, we already sort of broke the live migration 'force' parameter in pike,
14:14 <mriedem> when conductor started claiming
14:14 <bauzas> efried: since live-migrate API is existing AFAIK
14:14 <bauzas> mriedem: shit, I missed that then
14:14 <efried> bauzas: If any of the resources that we need come from nested providers, we must use GET /a_c to come up with a proper allocation request.
14:15 <bauzas> efried: isn't that a bit related to the concern I had about candidates be only on nested resource providers ?
14:15 <bauzas> we somehow need to know which root RP we're talking about
14:16 <mriedem> bauzas: see https://review.openstack.org/#/c/605785/9/nova/conductor/tasks/live_migrate.py@132 and scheduler_utils.claim_resources_on_destination for history
14:16 *** rossella_s has quit IRC
14:16 <efried> so, tabling until we can involve gibi. Moving on.
14:16 <mriedem> +1
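The chicken-and-egg problem tabled above can be sketched in a few lines: a forced target whose resources live on child providers cannot simply receive a clone of the source allocation against its root provider, so the code first has to detect nesting on the destination. This is a minimal, hypothetical Python check — the provider-dict shape mirrors the placement API's `parent_provider_uuid` field, but the function and sample data are illustrative, not nova's actual logic.

```python
def needs_scheduler_for_forced_target(dest_providers):
    """Return True if the forced destination must go through the
    scheduler / GET /allocation_candidates instead of reusing a
    cloned allocation.

    A minimal sketch: each provider is a dict whose
    'parent_provider_uuid' is None for a root provider (as in the
    placement API representation). Illustrative only.
    """
    # If any provider on the destination host is a child (nested),
    # resources may live below the root, so cloning the source
    # allocation onto the root would be wrong: a proper allocation
    # request must be built against the provider tree.
    return any(p.get("parent_provider_uuid") is not None
               for p in dest_providers)


# A flat host: one root provider, safe to clone the allocation onto.
flat_host = [{"uuid": "cn1", "parent_provider_uuid": None}]

# A nested host: a NUMA child hangs off the root, so the scheduler
# must be consulted even though the operator "forced" the target.
nested_host = [
    {"uuid": "cn2", "parent_provider_uuid": None},
    {"uuid": "cn2_numa0", "parent_provider_uuid": "cn2"},
]
```

The catch the meeting circles around is that this check itself requires querying the destination's provider tree, i.e. doing scheduler-ish work before deciding whether to run the scheduler.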
14:16 <efried> Extraction
14:16 <efried> Info in the pupdate ---^
14:16 <efried> cdent out this week. edleafe, mriedem, status?
14:17 <efried> Oh, Ed isn't around either. It's all on you mriedem
14:17 *** e0ne has joined #openstack-meeting-alt
14:17 <mriedem> umm
14:17 * mriedem looks
14:17 <mriedem> https://review.openstack.org/#/c/604454/ is the grenade patch which is passing,
14:18 <mriedem> cdent updated that with the proper code to create the uwsgi placement-api config
14:18 <efried> #link https://review.openstack.org/#/c/604454/ is the grenade patch which is passing
14:18 <mriedem> the devstack change that depends on it is still failing though https://review.openstack.org/#/c/600162/
14:18 <efried> #link the devstack change that depends on it https://review.openstack.org/#/c/600162/
14:18 <efried> This is the $PROJECTS issue?
14:18 <mriedem> tl;dr there are other jobs that devstack runs which aren't cloning the placement repo yet,
14:19 <mriedem> i have patches up for that, but they aren't passing and i haven't dug into why yet
14:19 <mriedem> yeah https://review.openstack.org/#/c/606853/ and https://review.openstack.org/#/c/608266/
14:19 <bauzas> I have good news for extraction
14:20 <bauzas> https://review.openstack.org/#/c/599208/ has been tested and works on a physical machine with pGPUs
14:20 <mriedem> efried: looks like my d-g patch for updating $PROJECTS passed, just failed one test in tempest
14:20 <mriedem> so just rechecks it looks like
14:20 <bauzas> next step will be to write some functional test mocking this ^
14:20 <efried> nice
14:20 <jaypipes> bauzas: nice.
14:21 <efried> bauzas: That's more of a reshape nugget than extraction, though?
14:21 <jaypipes> efried: we agreed that that was a requirement for extraction.
14:21 <bauzas> efried: well, I thought we agreed on this being a priority for the extraction :)
14:21 <efried> oh, I guess we said we were going to want ... yeah
14:21 <efried> I forget why, actually.
14:22 <bauzas> anyway
14:22 <jaypipes> let's not rehash that.
14:22 <efried> oh, right, it was a requirement for the governance split
14:22 <efried> not for getting extracted placement working.
14:22 <efried> cool cool
14:22 <jaypipes> I have a spec topic...
14:22 *** beekneemech is now known as bnemec
14:22 <efried> anything else on extraction?
14:22 <mriedem> tl;dr it's close
14:22 <efried> sweet
14:22 <mriedem> for the grenade/devstack ci/infra bits
14:23 <bauzas> I need to disappear, taxi driving my kids from school
14:23 <efried> jaypipes: Want to go now or after the other spec/review topics?
14:24 <jaypipes> so I have repeatedly stated I am not remotely interested in pursuing either https://review.openstack.org/#/c/544683/ or https://review.openstack.org/#/c/552105/. I was under the impression that someone (Yikun maybe?) who *was* interested in continuing that work was going to get https://review.openstack.org/#/c/552105/ into a state where people agreed on it (good luck with that), but as of now, I've seen little action on it other than
14:24 <jaypipes> negative reviews.
14:24 * efried click click click
14:24 <mriedem> jaypipes: yeah yikun has been busy with some internal stuff after a re-org,
14:24 <jaypipes> so my question is should I just abandon both of the specs and force the issue?
14:24 <mriedem> i can send an email to see what's going on and if we still care about those
14:24 <jaypipes> k, thx
14:25 *** rossella_s has joined #openstack-meeting-alt
14:25 <efried> This could relate to the next-next topic on the agenda actually.
14:26 *** dtrainor has joined #openstack-meeting-alt
14:26 <efried> we were talking about using the file format proposal embedded in the
14:26 <efried> #link device passthrough spec https://review.openstack.org/#/c/591037/
14:26 <efried> as a mechanism to customize provider attributes (prompted by the belmoreira min_unit discussion)
14:26 <efried> jaypipes agreed to review ^ with that in mind
14:27 <jaypipes> efried: yes.
14:27 <jaypipes> efried: I have found it very difficult to review. will give it another go this morning.
14:27 <efried> The "initial defaults" thing is still weird.
14:27 <efried> and not addressed in there (yet)
14:28 <efried> bauzas suggested to split out the part of the spec that talks about the file format, and do the device passthrough aspect on its own.
14:28 <efried> Which sounds like a good idea to me, considering the various ways we've talked about using it.
14:30 <efried> okay, moving on.
14:30 <efried> last week, the
14:30 <efried> #link HPET discussion http://lists.openstack.org/pipermail/openstack-dev/2018-October/135446.html
14:30 <efried> led to an interesting precedent on using traits for config
14:30 <jaypipes> another spec ... I pushed a new rev on https://review.openstack.org/#/c/555081/
14:30 <jaypipes> (cpu resource tracking)
14:30 <efried> #link CPU resource tracking spec https://review.openstack.org/#/c/555081/
14:32 *** lpetrut has joined #openstack-meeting-alt
14:33 <efried> any discussion on traits-for-config or CPU resource tracking?
14:33 <efried> any other specs or reviews to discuss?
14:33 <mriedem> i personally hope that cpu resource tracking is not something we pursue for stein
14:33 <mriedem> while we're still trying to land reshaper et al
14:34 <mriedem> reshaping all instances on all compute nodes is going to be rough during upgrade
14:34 <mriedem> unless we can do that offline
14:35 <jaypipes> mriedem: so let's hold off getting new clean functionality so that upgrades can be prolonged even longer until end of 2019?
14:36 <mriedem> yes?
14:36 <dansmith> I feel like we've been putting off numa topo in placement a while now
14:36 <mriedem> i think getting reshaper and bw-aware scheduling and all that stuff has been around long enough that we need to get those done first
14:36 *** rossella_s has quit IRC
14:36 <dansmith> so I don't disagree that it's going to be a big reshape, but.. dang, we've been working towards it for a while now and..
14:37 <jaypipes> mriedem: I don't get the argument that adding another data migration (reshape action) makes upgrades harder than having one to do in a release cycle.
14:37 <mriedem> i would just like fewer things to worry about
14:37 <dansmith> if we end up with something for gpus that requires compute nodes to be online,
14:37 <dansmith> it'd be massively better for FFU to have both of those in the same release
14:37 <dansmith> vs. two different (especially back-to-back) releases
14:38 <mriedem> do we need the computes online for the cpu resource tracking upgrade?
14:38 <dansmith> yes
14:38 <dansmith> they have to do it themselves, I think, because only they know where and what the topology is
14:39 <jaypipes> dansmith: right, unless we go with a real inventory/provider descriptor file format.
14:39 <dansmith> jaypipes: well, that just pushes the problem elsewhere.. you still have to collect that info from somewhere
14:39 *** rossella_s has joined #openstack-meeting-alt
14:39 <jaypipes> dansmith: it's already in the vast majority of inventory management systems.
14:39 <efried> waitwait, the *admin* is going to be responsible for describing NUMA topology? It's not something the driver can discover?
14:40 <dansmith> efried: we should have the driver do it for sure
14:40 <jaypipes> efried: the operator is ultimately responsible for *whether* a compute node should expose providers as a tree.
14:40 <efried> phew
14:40 <dansmith> jaypipes: but we can't just require the operator to have that and build such a mapping, IMHO
14:40 <dansmith> but even still,
14:40 <mriedem> why would operators care how we model things internally?
14:41 <jaypipes> efried: a lot of operators don't want or need to deal with NUMA. they just have needs for dedicated CPU and shared CPU resources and don't care about NUMA.
14:41 <efried> Yeah, I can live with a "use numa or not" switch.
14:41 <dansmith> the driver is the only one that can decide how existing allocations map to that information, IMHO, so unless you want to run the driver against the DB from a central node... even still, there are numa pinnings that the driver has done already we need to know about
14:41 <efried> I was just afraid you were talking about requiring the op to say "and this CPU is in NUMA node 1, and this CPU is in NUMA node 2 and..."
14:41 <dansmith> mriedem: they don't, that's why making them write a topo description for each (type of) compute node to do this migration would be mega-suck
14:41 <dansmith> efried: I think that's what jaypipes is saying
14:42 <dansmith> and I think that's not reasonable
14:42 <dansmith> efried: I don't think a "numa or not" switch is reasonable either, FWIW
14:42 <jaypipes> dansmith: ops *already* have that. they all have hardware profiles which describe the different types of hardware they provide to guests.
14:42 <dansmith> they just want it to work
14:42 <dansmith> jaypipes: all ops do not have that
14:42 <dansmith> jaypipes: but even still, they don't have the information about what numa allocations we've already done for existing instances
14:43 <jaypipes> dansmith: agreed completely with that last statement.
14:43 <efried> With a generic inventory/provider descriptor file, you could allow the op to override/customize. But I would think we would want the default to be automatic detection/configuration resulting in at least a sane setup.
14:43 <jaypipes> it's a shame the guest NUMA topology and CPU pinning were implemented as such a tightly coupled blobject mess.
14:43 <mriedem> while i agree it would be best if we can do all the reshapes we know we need to do in the same release to ease the pain, i just wanted to state that i'm worried about trying to bite this off in stein with everything else that's going on
14:44 <dansmith> mriedem: there's risk there for sure, no doubt
14:44 <efried> We also still can't do generic affinity without a placement API change, just to bring that up again.
14:44 <dansmith> I'm not saying it's critical, I'm just saying writing it off now seems like a bad idea to me
14:44 *** rossella_s has quit IRC
14:45 <mriedem> i'll admit the only part of that spec i've read is the upgrade impact
14:45 <mriedem> then i had to go change my drawers
14:45 <dansmith> mriedem: I guess I'm not sure why that's a surprise at this point,
14:45 <mriedem> will artom's stuff depend on this?
14:45 <dansmith> but maybe I have just done more thinking about it
14:45 <mriedem> artom's stuff = numa aware live migration
14:45 <dansmith> mriedem: artom's stuff kinda conflicts with this.. if this was done his stuff would be easier I think
14:45 <mriedem> dansmith: yeah i've just avoided thinking about this
14:46 *** rossella_s has joined #openstack-meeting-alt
14:46 <mriedem> ok, i need to get updated on what he plans to do with that as well
14:46 <mriedem> anyway, i'll be quiet now
14:48 <efried> dansmith, jaypipes: any last words?
14:48 <dansmith> no.
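The reshape the CPU-resource-tracking debate worries about can be illustrated with a toy version: dedicated-CPU (PCPU) inventory moves off the compute-node root provider onto per-NUMA-node child providers. This is a rough sketch under assumed structures — the dict shapes, names, and helper are hypothetical, not the spec's actual reshape payload — but it shows why the migration is invasive: the root loses inventory, children gain it, and existing allocations (which only the driver can attribute to a NUMA node) must follow.

```python
import uuid


def reshape_pcpu_to_numa(root, numa_cpus):
    """Toy reshape: split the root provider's PCPU inventory across
    per-NUMA-node child providers.

    root: dict with 'uuid', 'name', and an 'inventories' mapping of
          resource class -> total (hypothetical structure).
    numa_cpus: mapping of NUMA node id -> dedicated CPUs on that node.
    Returns the list of new child providers.
    """
    children = []
    for node_id, pcpus in sorted(numa_cpus.items()):
        children.append({
            "uuid": str(uuid.uuid4()),
            "name": "%s_NUMA%d" % (root["name"], node_id),
            "parent_provider_uuid": root["uuid"],
            "inventories": {"PCPU": pcpus},
        })
    # The root keeps non-CPU inventory (RAM, disk, ...); CPUs now live
    # on the children.  In the real migration, existing PCPU allocations
    # would also have to move to the right child -- and only the compute
    # driver knows which guest CPUs are pinned to which node, which is
    # why the computes must be online for this reshape.
    root["inventories"].pop("PCPU", None)
    return children


cn = {"uuid": "cn-uuid", "name": "cn1",
      "inventories": {"PCPU": 8, "MEMORY_MB": 16384}}
numa_children = reshape_pcpu_to_numa(cn, {0: 4, 1: 4})
```

The upgrade-impact argument in the log follows directly: every compute node with instances needs this rewrite of both inventory and allocations, ideally batched with any GPU reshape in the same release to keep fast-forward upgrades sane.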
14:49 <efried> Home stretch
14:49 <efried> #topic bugs
14:49 <efried> #link Placement bugs https://bugs.launchpad.net/nova/+bugs?field.tag=placement
14:49 *** openstack changes topic to "bugs (Meeting topic: nova_scheduler)"
14:49 <efried> any bugs to highlight?
14:49 <jaypipes> efried: go Browns?
14:50 <mriedem> ugliest FG ever
14:51 <efried> Horns to 5-1 by a toenail. Khabib face cranks Connor to a tap, then attacks his training team. Derek Lewis's balls are hot. Other weekend sports news?
14:51 *** edleafe has joined #openstack-meeting-alt
14:51 <efried> I guess we're really in
14:51 <efried> #topic opens
14:51 *** openstack changes topic to "opens (Meeting topic: nova_scheduler)"
14:51 <dansmith> is that real sports news?
14:52 <mriedem> yes
14:52 <dansmith> hah. okay, sounded made up
14:52 <mriedem> https://deadspin.com/khabib-nurmagomedov-taps-out-conor-mcgregor-attacks-co-1829580622
14:52 <dansmith> shows how much I know
14:52 * bauzas waves again
14:53 <efried> and https://www.youtube.com/watch?v=F_E6jXHMPs4
14:53 <efried> okay, anything else?
14:53 * edleafe arrives super-late
14:53 *** gcb_ has joined #openstack-meeting-alt
14:53 <efried> edleafe: Anything to bring up before we close?
14:54 *** e0ne has quit IRC
14:54 <bauzas> we also had https://www.youtube.com/watch?v=KgwmPhAu0tc
14:55 <efried> Thanks y'all
14:55 <efried> #endmeeting
14:55 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
14:55 <openstack> Meeting ended Mon Oct  8 14:55:31 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:55 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.html
14:55 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.txt
14:55 <openstack> Log:            http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!