Monday, 2018-09-17

jaypipes#startmeeting scheduler13:59
openstackMeeting started Mon Sep 17 13:59:59 2018 UTC and is due to finish in 60 minutes.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: scheduler)"14:00
openstackThe meeting name has been set to 'scheduler'14:00
jaypipesgood morning/evening all.14:00
* bauzas yawns14:01
jaypipestetsuro, edleafe, mriedem, dansmith, bauzas, gibi: hi14:01
gibio/14:01
wkiteo/14:01
mriedemo/14:01
jaypipes#topic quick recap of placement/scheduler topics from PTG14:01
*** openstack changes topic to "quick recap of placement/scheduler topics from PTG (Meeting topic: scheduler)"14:01
* dansmith snorts14:02
jaypipes#link https://etherpad.openstack.org/p/nova-ptg-stein14:02
* bauzas gently reminds that he has to leave for 20 mins at 1420UTC14:02
jaypipesthere were a number of placement-related topics (as always) at the PTG14:03
jaypipesalong with a fairly lengthy discussion on the status and milestones related to placement extraction14:03
jaypipesedleafe: would you like to summarize the extraction bits?14:03
jaypipesEd may be on his way to the office, so let me try14:05
jaypipesmelwitt summarized the decisions regarding the governance items nicely in a ML post:14:06
jaypipes#link http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html14:06
jaypipesThat ML post lists the items we're aiming to focus on to finalize the path to extraction of placement. The items revolve around testing of the upgrade paths and implementing reshaper support for the vGPU use cases14:07
jaypipesbauzas is responsible for the libvirt vGPU reshaper efforts and Naichuan Sun is responsible for the vGPU efforts for the Xen virt driver14:08
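
For context, "reshaper" here is the placement operation (microversion 1.30) that atomically moves inventory and the allocations consuming it around a provider tree, e.g. VGPU inventory from the compute-node root to a child provider. Below is a hypothetical sketch of such a request; the endpoint, token, UUIDs, generations and values are all placeholders, not anything from this meeting.

    """Sketch of a reshape moving VGPU inventory (and the allocations
    consuming it) from a root provider to a child (placement >= 1.30)."""
    import requests

    PLACEMENT = 'http://placement.example.com'   # hypothetical endpoint
    HEADERS = {'X-Auth-Token': '<token>',        # placeholder
               'OpenStack-API-Version': 'placement 1.30'}

    body = {
        'inventories': {
            # The root keeps CPU/RAM; VGPU disappears from here...
            '<root-rp-uuid>': {
                'resource_provider_generation': 7,   # placeholder
                'inventories': {'VCPU': {'total': 16},
                                'MEMORY_MB': {'total': 32768}},
            },
            # ...and shows up on the child provider instead.
            '<child-vgpu-rp-uuid>': {
                'resource_provider_generation': 0,
                'inventories': {'VGPU': {'total': 4}},
            },
        },
        # Existing consumers are re-pointed at the child in the same call,
        # so there is no window where allocations exceed inventory.
        'allocations': {
            '<instance-uuid>': {
                'allocations': {
                    '<child-vgpu-rp-uuid>': {'resources': {'VGPU': 1}},
                    '<root-rp-uuid>': {'resources': {'VCPU': 4,
                                                     'MEMORY_MB': 8192}},
                },
                'project_id': '<project-uuid>',
                'user_id': '<user-uuid>',
                'consumer_generation': 1,            # placeholder
            },
        },
    }

    resp = requests.post(f'{PLACEMENT}/reshaper', json=body, headers=HEADERS)
    resp.raise_for_status()   # 204 No Content on success
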
jaypipesgibi: perhaps you might give a quick status report on the extraction patch series since I'm not familiar with the progress there?14:08
gibihonestly I also need to catch up on what is happening on the placement side14:09
gibiwhat I know is that we see green test results with the new repo14:09
mriedemwe need to do the grenade stuff14:09
gibiyeah, next step is grenade I guess14:10
mriedemfirst step is writing the db table copy and dump script14:10
mriedemand then integrate that into grenade14:10
mriedemi've got a patch up to grenade for adding a postgresql grenade job to the experimental queue as well so anyone adding pg support for the upgrade script can test it14:10
gibiIn parallel I would like to make nova functional test run with the extracted placement repo14:10
jaypipes#link latest etherpad on placement extraction bits: https://etherpad.openstack.org/p/placement-extract-stein-314:11
mriedem#link the grenade postgresql job patch https://review.openstack.org/#/c/602124/14:11
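
The "db table copy and dump script" mriedem mentions boils down to dumping the placement tables out of the nova_api database and loading them into a standalone placement database. A minimal sketch of that step, assuming MySQL and the placement table names as they exist in the nova_api schema (the actual grenade tooling may differ):

    """Sketch: copy placement tables from nova_api to a new placement DB."""
    import subprocess

    # Placement-related tables in the nova_api schema (assumed list).
    PLACEMENT_TABLES = [
        'allocations', 'consumers', 'inventories', 'placement_aggregates',
        'projects', 'resource_classes', 'resource_provider_aggregates',
        'resource_provider_traits', 'resource_providers', 'traits', 'users',
    ]

    def copy_tables(src_db='nova_api', dst_db='placement'):
        # mysqldump emits CREATE TABLE + INSERT statements for the tables...
        dump = subprocess.run(['mysqldump', src_db] + PLACEMENT_TABLES,
                              check=True, capture_output=True)
        # ...and mysql replays them into the new database.
        subprocess.run(['mysql', dst_db], input=dump.stdout, check=True)

    if __name__ == '__main__':
        copy_tables()
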
jaypipesok, thanks gibi and mriedem14:12
jaypipes#topic placement and scheduler blueprints for Stein14:12
*** openstack changes topic to "placement and scheduler blueprints for Stein (Meeting topic: scheduler)"14:12
wkiteHi, I am working on a joint scheduler for nova and zun based on NUMA and pinned CPUs, could anyone give me some advice?14:13
jaypipeswkite: sure, in a little bit. let me get through the status parts of the meeting?14:14
gibiI thought that we are pretty frozen at the moment regarding new features in placement14:14
jaypipesgibi: there are plenty of blueprints targeting the placement and scheduler services in Stein, though14:15
mriedemlike https://blueprints.launchpad.net/nova/+spec/use-nested-allocation-candidates !14:15
gibiwhich I'm working on :)14:16
mriedemthat's just all nova-scheduler side stuff14:16
jaypipesright. this meeting is still the scheduler meeting is it not? :)14:16
mriedemi guess14:16
mriedemcan someone summarize the consumer gen thread?14:16
gibisure14:16
gibiso placement 1.28 added the consumer generation for allocations14:16
gibito use this the scheduler report client needs some change14:17
bauzaswhat's unfun is that we merged 1.30 (reshape) and used it before nova used 1.2914:17
bauzas(nested alloc candidates)14:17
jaypipesbauzas: I don't see why that matters.14:17
gibiin general either nova creates a new consumer, in which case nova is sure that the generation is None14:17
bauzasbut 1.30 requires 1.29 to be implemented on the client side14:17
gibior nova updates an existing consumer, in which case nova asks placement for the generation of the consumer to be updated14:18
bauzasjaypipes: because reshaping implies that a boot will fail unless nova speaks nested alloc candidates14:18
mriedembecause now we have more than just nova doing things, like in the bw providers case14:18
gibiif placement returns a consumer generation conflict in any of these cases, nova will fail the instance workflow operation14:19
bauzasbecause resources can be on children14:19
mriedemeven though nova and neutron are working with the same consumer right? the instance uuid.14:19
gibineutron does not manipulate allocations14:19
gibijust reporting inventories14:19
gibiat the moment14:19
jaypipesright. all allocation is done via claim_resources()14:20
gibisomewhere in the future, when the bandwidth of a port needs to be resized, neutron might want to touch allocations14:20
gibibut not now14:20
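
To make the above concrete, this is roughly what an allocation write looks like once the report client sends consumer generations (placement microversion 1.28). A hypothetical sketch using python-requests; the endpoint, token and UUIDs are placeholders:

    """Sketch of PUT /allocations with a consumer generation (>= 1.28)."""
    import requests

    PLACEMENT = 'http://placement.example.com'   # hypothetical endpoint
    HEADERS = {'X-Auth-Token': '<token>',        # placeholder
               'OpenStack-API-Version': 'placement 1.28'}

    def put_allocations(consumer_uuid, rp_uuid, generation):
        """generation=None creates a new consumer; an integer must match
        the generation placement currently stores for that consumer."""
        body = {
            'allocations': {
                rp_uuid: {'resources': {'VCPU': 1, 'MEMORY_MB': 512}},
            },
            'project_id': '<project-uuid>',   # placeholder
            'user_id': '<user-uuid>',         # placeholder
            'consumer_generation': generation,
        }
        resp = requests.put(f'{PLACEMENT}/allocations/{consumer_uuid}',
                            json=body, headers=HEADERS)
        if resp.status_code == 409:
            # Concurrent update of the same consumer. Per the ML thread
            # summarized here, nova fails hard rather than retrying.
            raise RuntimeError('consumer generation conflict')
        resp.raise_for_status()
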
bauzas... and I need to disappear14:21
jaypipesmriedem: are you asking gibi to summarize the entire consumer generation patch series? or something else?14:21
gibithe implementation to support consumer generation is basically ready for review14:21
mriedemi was asking to summarize the ML thread which contributes to the code series i assume14:22
gibithe patch series starts here https://review.openstack.org/#/c/59159714:22
mriedemi was wondering why we have such a big ML thread about this and what the big changes are to nova before i actually review any of this14:22
mriedemif it's just, "this makes the nova client side bits (SchedulerReportClient) aware of consumer generations" that's fine14:22
gibimriedem: my concern was what to do if placement returns a consumer generation conflict: a) retry, b) fail hard, c) fail soft, let the user retry14:23
jaypipesmriedem: it makes the nova client side *safe* for multiple things to operate on an allocation.14:23
gibithe answer was mostly b) fail hard14:23
jaypipesyep14:23
gibiso the patch series now makes a consumer conflict a hard failure, with the instance ending up in ERROR state14:23
mriedemwhat besides the scheduler during the initial claim is messing with the allocations created by the scheduler?14:23
jaypipesgibi: which was the safest choice.14:23
jaypipesmriedem: reshaper.14:24
gibimriedem: all the instance move operations, by moving allocations from instance.uuid to migration.uuid and back on revert14:24
jaypipesmriedem: along with anything that does migrations or resizes.14:24
mriedemok14:24
gibimriedem: the nasty things are force evacuate and force migrate14:25
jaypipesas always.14:25
gibimriedem: they allocate outside of scheduler14:25
mriedemyeah they still do the allocations though14:25
mriedemlike the scheduler14:25
mriedembut with a todo, from me, since i think pike14:25
gibimriedem: yes, they do, just via a different code path14:26
mriedemyup14:26
mriedemok we can probably move on - i just needed to get caught up14:26
jaypipesmriedem: right, but they don't currently handle failures due to consumer generation mismatch, which is what Gibi's patch series does (sets instances to ERROR if >1 thing tries updating allocations for the same instance at the same time)14:26
jaypipesok, yes, let's move on.14:27
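
As background for the move-operation point above: with consumer generations, the swap gibi describes (instance.uuid -> migration.uuid) can be done atomically via POST /allocations, each consumer carrying its own generation. A hypothetical sketch, placeholders throughout:

    """Sketch: atomically move an instance's allocations to the migration
    consumer (placement >= 1.28)."""
    import requests

    PLACEMENT = 'http://placement.example.com'   # hypothetical endpoint
    HEADERS = {'X-Auth-Token': '<token>',        # placeholder
               'OpenStack-API-Version': 'placement 1.28'}

    def move_allocations(instance_uuid, migration_uuid, rp_uuid,
                         instance_gen, project_id, user_id):
        body = {
            # Empty allocations for a consumer means "delete them".
            instance_uuid: {
                'allocations': {},
                'project_id': project_id,
                'user_id': user_id,
                'consumer_generation': instance_gen,
            },
            # The migration consumer is new, so its generation is None.
            migration_uuid: {
                'allocations': {
                    rp_uuid: {'resources': {'VCPU': 1, 'MEMORY_MB': 512}},
                },
                'project_id': project_id,
                'user_id': user_id,
                'consumer_generation': None,
            },
        }
        # Both consumers change in one transaction; a racing writer on
        # either consumer turns the whole request into a 409.
        resp = requests.post(f'{PLACEMENT}/allocations',
                             json=body, headers=HEADERS)
        resp.raise_for_status()
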
jaypipes#topic open discussion14:27
*** openstack changes topic to "open discussion (Meeting topic: scheduler)"14:27
jaypipes#action all to review gibi's consumer generation patch series14:27
gibi\o/14:27
jaypipes#link Gibi's consumer generation patch series; https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates+(status:open+OR+status:merged)14:28
jaypipesok, open discussion now14:28
jaypipeswkite: hi14:28
jaypipeswkite: can you give us a brief summary of what you are trying to do?14:28
wkiteok14:29
wkiteI am trying to use placement to save the NUMA topology for nova and zun, so that the scheduler can get the NUMA topology from placement and then do the scheduling work14:31
wkitefor both nova and zun14:31
jaypipeswkite: a NUMA topology isn't a low-level resource. It's not possible to "consume a NUMA topology" from placement because a NUMA topology is a complex, non-integer resource.14:32
wkitejaypipes: yes14:32
jaypipeswkite: now, if you were to consume some CPU resources or memory resources from a NUMA cell, now that is something we could model in placement.14:33
wkitea json object14:33
jaypipeswkite: we have no plans to allow resources in placement to be JSON objects.14:33
dansmithWUT14:33
dansmithbut I wanna!14:33
jaypipesdansmith: stop. :)14:33
jaypipeswkite: the solution that we've discussed is to keep the NUMATopologyFilter inside the nova-scheduler to handle placement of a virtual guest CPU topology on top of a host NUMA topology.14:34
jaypipeswkite: while keeping placement focused on atomic, integer resources. basically, placement is complementary to the nova-scheduler. for simple integer-based capacity calculations, placement is used. for complex placement/topology decisions, the nova/virt/hardware.py functions are called from within the nova-scheduler's NUMATopologyFilter14:35
jaypipeswkite: if you are creating a separate scheduler service for Zun, my advice would be to follow the same strategy.14:35
jaypipeswkite: if you'd like to discuss this further, let's move the conversation to #openstack-placement and I can fill you in on how the nova and placement services interact with regards to NUMA topology decisions.14:37
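
To illustrate jaypipes' distinction: placement cannot store "a NUMA topology" as an object, but it can hold integer inventory per NUMA cell if each cell is modeled as a child resource provider under the compute node. A hypothetical sketch of that modeling (not how nova reports NUMA today; endpoint, token and names are placeholders):

    """Sketch: NUMA cells as nested providers with integer inventory
    (parent_provider_uuid is available from placement >= 1.14)."""
    import requests

    PLACEMENT = 'http://placement.example.com'   # hypothetical endpoint
    HEADERS = {'X-Auth-Token': '<token>',        # placeholder
               'OpenStack-API-Version': 'placement 1.30'}

    def create_numa_cell(name, parent_uuid, vcpus, memory_mb):
        # A child provider representing one NUMA cell.
        resp = requests.post(f'{PLACEMENT}/resource_providers',
                             headers=HEADERS,
                             json={'name': name,
                                   'parent_provider_uuid': parent_uuid})
        resp.raise_for_status()
        cell_uuid = resp.json()['uuid']
        # Integer resources are what placement understands -- not a JSON
        # blob describing the whole topology.
        inv = requests.put(
            f'{PLACEMENT}/resource_providers/{cell_uuid}/inventories',
            headers=HEADERS,
            json={'resource_provider_generation': 0,
                  'inventories': {'VCPU': {'total': vcpus},
                                  'MEMORY_MB': {'total': memory_mb}}})
        inv.raise_for_status()
        return cell_uuid
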
wkiteI want to run both nova and zun on one host by sharing the pinned CPUs, but what should I do?14:38
jaypipeswkite: if you share a pinned CPU, it's no longer pinned is it? :)14:39
jaypipeswkite: I mean... a pinned CPU is dedicated to a particular workload. if you then have another workload pinned to that same CPU, then the CPU is shared among workloads and is no longer dedicated.14:40
wkiteboth of them use their own pinned CPUs14:40
mriedemso a vm uses [1,2] and a container uses [3,4]?14:41
mriedemon the same host14:41
wkiteyes14:41
jaypipeswkite: here is a spec you should read that discusses dedicated and shared CPU resource tracking and our plans for this in Stein: https://review.openstack.org/#/c/555081/14:41
wkitejaypipes: ok, thank you14:42
jaypipesnp. like I said, if you'd like to discuss this further, join us on #openstack-placement and we can discuss there.14:42
jaypipeswkite: ^14:42
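
The setup wkite describes -- a VM pinned to host CPUs [1,2] and a container pinned to [3,4] -- is just disjoint CPU affinity at the OS level; the hard part, which the linked spec addresses, is having both services account for those dedicated CPUs against shared inventory. A toy illustration of the disjoint pinning itself on Linux (not how nova or zun implement it):

    """Toy illustration: two workloads pinned to disjoint host CPUs."""
    import os

    VM_CPUS = {1, 2}         # hypothetical dedicated set for the VM
    CONTAINER_CPUS = {3, 4}  # hypothetical dedicated set for the container

    assert VM_CPUS.isdisjoint(CONTAINER_CPUS), 'pinned sets must not overlap'

    def pin(pid, cpus):
        # Restrict a process to the given host CPUs (Linux only).
        os.sched_setaffinity(pid, cpus)

    # e.g. pin(vm_pid, VM_CPUS); pin(container_pid, CONTAINER_CPUS)
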
jaypipesok, in other open discussion items... I still need to write the "negative member_of" spec. I'll do that today or tomorrow and get with sean-k-mooney on his nova-side specs for the placement request filters that will use negative member_of.14:43
wkitejaypipes: ok14:44
jaypipesdoes anyone have any other items to discuss? otherwise, I'll wrap up the meeting.14:44
tetsurojaypipes: I was wondering if I can take that negative member_of spec.14:45
jaypipestetsuro: sure, if you'd like it, I have not started it, so go for it.14:45
tetsuroThanks14:45
jaypipestetsuro: thank YOU. :)14:46
tetsuroSince I have sorted out how the existing member_of param works on nested/shared in https://review.openstack.org/#/c/602638/14:46
tetsuroI'd like to clarify how the negative member_of param should work as well14:47
tetsuroAnd that should relate to the bug I opened^14:47
tetsuronp14:47
jaypipestetsuro: well, the negative member_of is just "make sure the provider trees that match an allocation candidate request group are NOT associated with one or more provider aggregates"14:49
jaypipestetsuro: and the bug you reference above is making sure that member_of refers to the aggregate associations of the entire provider tree instead of just a single provider, right?14:50
tetsuroYup, so my question is: does an aggregate on the root provider span the whole tree as well?14:50
* bauzas is back14:51
tetsurofor negative member_of cases14:51
tetsuroif the root is in aggregate A and a user specifies !member_of=aggA, none of the nested rps under the root can be picked up.14:52
jaypipestetsuro: I believe in Denver we agreed that for the non-numbered request group, a member_of query parameter means that the provider *tree*'s associated aggregates are considered (in other words, look for the provider aggregates associated with the root_provider_id of providers). And for numbered request groups, the single provider_id (not root_provider_id) would be used for the member associations.14:52
jaypipesgibi, dansmith, mriedem: if you remember that conversation, could you verify my recollection is accurate for ^^14:53
dansmithaye14:54
* bauzas nods14:54
dansmithbecause the numbered ones are more specific14:54
dansmithor more prescriptive14:54
bauzasand I remember we said it's a bugfix14:54
tetsuroThat's my understanding, too.14:55
gibiseems correct. this means a negative member_of matching a root_rp in an unnumbered group means nothing from that tree can be picked14:55
tetsuroOkay, thanks.14:55
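
For readers following along: the positive member_of form on GET /allocation_candidates has existed since microversion 1.21, while the negative form was still only a proposal at the time of this meeting (it later landed as microversion 1.32 with a member_of=! syntax). A hypothetical request sketch, placeholders only:

    """Sketch of positive and negative member_of on allocation candidates."""
    import requests

    PLACEMENT = 'http://placement.example.com'   # hypothetical endpoint
    HEADERS = {'X-Auth-Token': '<token>',        # placeholder
               'OpenStack-API-Version': 'placement 1.32'}

    params = {
        'resources': 'VCPU:1,MEMORY_MB:512',
        'member_of': [
            # Unnumbered group: membership is evaluated against the provider
            # tree (the root's aggregates), per the agreement recalled above.
            'in:<agg-A-uuid>,<agg-B-uuid>',  # in at least one of these
            '!<agg-C-uuid>',                 # and NOT in this one
        ],
    }
    resp = requests.get(f'{PLACEMENT}/allocation_candidates',
                        params=params, headers=HEADERS)
    resp.raise_for_status()
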
jaypipescool. I actually remembered something from a PTG correctly.14:55
jaypipesalright, tetsuro, you good to go on that then?14:56
jaypipestetsuro: I'll review your bug patch series today.14:56
tetsuroYes, I am good to go.14:56
jaypipesawesome, thanks.14:56
jaypipesOK, well, I'm going to wrap the meeting up then. thanks everyone. :) see you on #openstack-nova and #openstack-placement.14:56
jaypipes#endmeeting14:56
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"14:56
openstackMeeting ended Mon Sep 17 14:56:58 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:57
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scheduler/2018/scheduler.2018-09-17-13.59.html14:57
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scheduler/2018/scheduler.2018-09-17-13.59.txt14:57
openstackLog:            http://eavesdrop.openstack.org/meetings/scheduler/2018/scheduler.2018-09-17-13.59.log.html14:57