Thursday, 2018-09-20

openstackgerritTetsuro Nakamura proposed openstack/nova-specs master: Spec: allocation candidates in tree  https://review.openstack.org/60358501:07
*** tetsuro has joined #openstack-placement01:07
*** mriedem_away has quit IRC01:08
openstackgerritSundar Nadathur proposed openstack/nova-specs master: Nova Cyborg interaction specification.  https://review.openstack.org/60395502:33
*** tetsuro has quit IRC04:06
openstackgerritGhanshyam Mann proposed openstack/nova-specs master: Spec for API inconsistency cleanup  https://review.openstack.org/60396904:16
openstackgerritGhanshyam Mann proposed openstack/nova-specs master: Spec for API inconsistency cleanup  https://review.openstack.org/60396904:28
*** tetsuro has joined #openstack-placement06:19
*** belmoreira has joined #openstack-placement06:39
*** e0ne has joined #openstack-placement07:16
*** helenafm has joined #openstack-placement07:51
*** tetsuro has quit IRC09:28
*** helenafm has quit IRC09:30
*** belmoreira has quit IRC09:38
*** helenafm has joined #openstack-placement10:05
*** takashin has quit IRC10:13
openstackgerritYikun Jiang proposed openstack/placement master: WIP: Add nova database migration script(psql version)  https://review.openstack.org/60402810:17
openstackgerritYikun Jiang proposed openstack/placement master: WIP: Add nova database migration script(psql version)  https://review.openstack.org/60403110:22
openstackgerritYikun Jiang proposed openstack/placement master: WIP: Add nova database migration script(psql version)  https://review.openstack.org/60402810:25
*** takashin has joined #openstack-placement10:44
openstackgerritJohn Garbutt proposed openstack/nova-specs master: WIP: Spec for API policy updates  https://review.openstack.org/54785010:47
*** belmoreira has joined #openstack-placement10:57
openstackgerritGhanshyam Mann proposed openstack/nova-specs master: Spec for API inconsistency cleanup  https://review.openstack.org/60396910:58
*** tetsuro has joined #openstack-placement10:59
*** s10 has joined #openstack-placement11:06
*** tssurya has joined #openstack-placement11:39
*** mriedem has joined #openstack-placement13:06
*** cdent has joined #openstack-placement13:15
*** e0ne has quit IRC13:48
cdentmriedem: I was going to look into doing the grenade changes for nova -> placement upgrade but a) i'm way sick and stupid so not up to the immediate challenge of teasing that out, b) on pto first three days of next week so... ?13:51
mriedemthat's fine,13:55
mriedemi can do it (or someone else if they beat me) but i wanted to test out dan's db migration script in a devstack first,13:55
mriedemwhich i haven't had time to do yet this week13:56
cdentI ran dan's db migration on a plain devstack yesterday and it was happy. was then thinking I'd spin up a placement-devstack with https://review.openstack.org/#/c/600162/ but didn't get that far and wasn't sure what the point was: the move data from here ... to here part is tight. as long as things are identified properly13:58
*** jaypipes is now known as jaypipes-ooo14:03
belmoreiraI'm running into a weird situation with placement... can't schedule a VM because placement doesn't retrieve any allocation candidate14:11
belmoreiraDoing some debug with the os_placement client and getting:14:12
belmoreiraopenstack allocation candidate list --resource MEMORY_MB=6000014:12
belmoreira| MEMORY_MB=60000 | 90799f16-9a4a-4001-868b-335dbce6b5df | MEMORY_MB=60000/122966  |14:12
belmoreiraopenstack allocation candidate list --resource MEMORY_MB=60000 --resource VCPU=114:12
belmoreira| VCPU=1,MEMORY_MB=60000 | 90799f16-9a4a-4001-868b-335dbce6b5df | VCPU=32/1584,MEMORY_MB=60000/122966   |14:12
belmoreiraopenstack allocation candidate list --resource MEMORY_MB=60000 --resource VCPU=3214:12
belmoreiraThe node is not in the list!14:12
belmoreiraIs there any known issue in placement that can explain this?14:13
cdentbelmoreira: are min_unit, max_unit or step_size set to anything weird on the inventories on 90799f16-9a4a-4001-868b-335dbce6b5df ?14:14
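
Those inventory fields can be checked directly with the same osc-placement plugin used above; a command along these lines (same provider UUID as in the output) lists total, reserved, min_unit, max_unit and step_size per resource class:

    openstack resource provider inventory list 90799f16-9a4a-4001-868b-335dbce6b5df
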
edleafebelmoreira: Are there 32 VCPU available (total-reserved-allocated)?14:14
belmoreiracdent: that's it!14:16
cdent\o/14:16
belmoreiramax_unit is set to 16 that is the number of available cores14:16
cdentyeah, that was a decision that was made at some point that was a bit controversial but made sense at the time14:16
belmoreiraI guess that it can't be changed14:17
cdenton vcpu: max_unit gets set to the physical max because it seemed wrong to cause immediate cpu contention with just one workload14:17
cdentnot without changing code14:17
cdentit would get reset14:18
* cdent looks at the code14:18
belmoreirathe issue is that with the L1TF mitigations SMT was disabled. meaning that the inventory was updated to half of the cores which causes issues with already defined flavors14:19
cdentah, yeah.14:19
belmoreiraI think we should discuss allowing "max_unit" to be configurable14:20
efriedyou can tweak your allocation ratio14:20
cdentthat won't help14:20
cdentmax_unit controls the max individual ask14:20
efriedoright14:20
efriedwe're asking for 32 VCPU?14:20
belmoreiraefried: I'm doing that. My allocation ratio for CPUs is 99. We just schedule based on RAM14:21
cdentif the flavor wants 32 on a (now) 16 core machine, no chance14:21
efriedtrue story.14:21
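
For context, the per-resource test placement applies when building allocation candidates can be paraphrased roughly as below (a sketch, not the actual server code): max_unit caps the single largest request regardless of allocation ratio, which is why MEMORY_MB=60000 passes above while VCPU=32 cannot on a 16-core node.

    def can_allocate(requested, inv, used):
        # Rough paraphrase of placement's per-resource candidate check.
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        return (inv['min_unit'] <= requested <= inv['max_unit']   # single-ask cap
                and requested % inv['step_size'] == 0
                and used + requested <= capacity)                 # overcommit-aware capacity
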
efriedI thought we set max_unit to MAXINT14:21
efriedno? total? That seems weird, in light of allocation ratio.14:22
cdentthat's the default if it is not set when creating the inventory14:22
cdentbut the resource tracker sets max_unit in _compute_node_to_inventory_dict14:22
cdentbelmoreira: what's the outcome that would be perfect?14:23
efriedcdent: I think if we're not going to default to MAXINT, we should default to total * allocation_ratio14:24
cdentpresumably if SMT is disabled, then your hardware capacity has in fact halved by some vague definitions. are you wanting to be able to ignore that?14:24
efriedtotal is just... wrong.14:24
belmoreirafor my use case, allowing the "max_unit" to be dynamic, considering the allocation ratio14:24
cdentefried: I think we're talking about two different things at once and we should pick your topic up after we've figured out what belmoreira needs to be happy14:24
* efried stfu14:24
belmoreiracdent: I agree with efried (total * allocation_ratio)14:25
efriedbelmoreira: Easy patch if you want to try it in your env.14:26
cdentisn't that unsafe? it means that we're allowing (for disk, ram and vcpu) a single workload to consume not just all the physical resources, but also all the overcommited resources14:26
belmoreiracdent: answering the "hardware capacity has in fact halved" question. It's a deployment decision in my opinion considering the workloads14:27
efriedcdent: IMO setting max_unit to total is just as arbitrary.14:27
efriedbelmoreira: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L647814:27
efriedIf you remove that line, I think the placement side will actually default to MAXINT.14:28
efriedchecking...14:28
efriedyup14:28
efriedwhich is effectively the same as total * allocation_ratio, because that'd be the most you would ever be able to allocate anyway.14:28
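
The line efried links is part of where the libvirt driver builds its inventory report; paraphrased (illustrative only, not the exact code), the VCPU entry looks roughly like:

    inventory['VCPU'] = {
        'total': vcpus,      # physical cores as seen by libvirt (16 here, with SMT off)
        'min_unit': 1,
        'max_unit': vcpus,   # the line under discussion: a single ask is capped at total
        'step_size': 1,
    }
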
cdentefried, belmoreira: I guess my understanding of how hardware works is confused, but making max_unit greater than total is _really_ weird to me. It _can_ work for vcpus, but for disk and ram it sounds downright messy14:30
efriedcdent: The operator shouldn't have to think about max_unit.14:30
cdentI agree, which is why it defaults to a safe option14:30
cdentit's there to prevent a single workload clobbering a machine14:31
efriedI'm safer if I never leave my house.14:31
efriedI don't agree that that one-size-fits-all decision is appropriate across the board.14:31
cdentif you allow two 32 vcpu workloads on a 16 cpu machine, what happens?14:31
belmoreiracdent: The operator is responsible for the available flavors. I agree that can lead to unsafe options but sane configuration shouldn't be enforced14:32
efriedguess it depends what overcommit actually means based on how the system is set up.14:32
cdenthmm, we should probably get jaypipes-ooo and dansmith in this conversation when they are available because I remember them in this conversation when it first came up14:33
efriedI don't know what ^ is on libvirt, but e.g. on power with micropartitioning, if I have my micropartition size set to .05, each vcpu is actually 1/20 of a physical CPU, so I really do have 20x total.14:33
cdentI agree that max_unit should be changeable14:33
belmoreiracdent: in some of our cells we have a CPU overcommit ratio of 314:33
cdentefried: the way power and vmware manage the concept of "vcpu" is much different from libvirt, as I understand things14:34
dansmithI think I brought this up as an issue a couple months ago14:34
cdentyou did, which is why I pinged you14:35
dansmithbut I'm on two meetings right now, so can't really discuss for another hour plus14:35
dansmithack14:35
efriedcdent: Which is cool, because this would be a change in the libvirt driver specifically. I.e. we can tailor this particular change to whatever makes sense specifically for libvirt.14:35
efriedThe placement side is MAXINT, which is sane.14:35
cdentbelmoreira: is the goal of that overcommit ratio to allow "single workloads that are bigger than the physical constraints of this machine" or to allow "many workloads"14:35
cdentefried: I don't think there's any question that we need to make some changes. We do. What I'm trying to figure out is the more general picture of the goals.14:36
jaypipes-ooolemme read back14:36
cdentThe reason for the original set up was basically to have a "safe" default. However since that default isn't actually changeable, we've introduced a problem.14:37
belmoreiracdent: I was never thinking in these terms... initially definitely to allow "many workloads" but now that SMT is off we need "single workloads that are bigger than the physical constraints of this machine" otherwise all flavors/flavor allocation need to be reviewed14:37
cdentIt's not a "default", it's a lock in.14:37
efriedI'm just really not a fan of adding more conf opts in the spirit of the allocation ratio ones, which are a tangled freakin mess.14:37
cdentefried: I was trying to avoid going into solution space yet.14:38
dansmithbelmoreira: so you want max_unit to go higher not because it makes sense but because you don't want your flavors to change?14:38
dansmithpurely because you turned off HT for meltdown I assume?14:38
efriedcdent: Argument about mem being allowed to exceed total - that makes sense to me as well, for thin provisioning.14:39
efriedsorry, disk, not mem14:39
dansmithcpu is the most legit use of overcommit, fwiw14:39
dansmithlike the most straightforward and the most "doable without completely crazy results"14:39
belmoreiradansmith: "not because it makes" :) I think it makes sense... but ran into this problem because SMT off, yes14:40
jaypipes-oooefried: total is not "just wrong".14:40
jaypipes-ooothis has been this way since before Kilo.14:40
jaypipes-oooit used to be called limit['vcpus'] but it was always the number of physical CPU cores on the box.14:41
dansmithbelmoreira: what I meant was "you wouldn't have chosen this by default, but now you need it because of HT"14:41
belmoreiradansmith: I didn't have this use case until now14:43
dansmithright, that's what I'm trying to get at14:43
jaypipes-ooobelmoreira: ftr, this would have happened without placement. this has been the behaviour of Nova for as long as I remember. it used to be that the scheduler passed a "limits" dict that contained the max number of host CPU processors and would prevent (even with overcommit) an instance from consuming more than that amount.14:44
dansmithwhen I brought this up last time, I was worried that someone would be specifically designing for this14:44
belmoreiraI think it's a good default, but should not be enforced14:44
dansmithjaypipes-ooo: pretty sure that's not the case14:44
dansmithjaypipes-ooo: because we had some tests that had to change because of this, which is why I brought this up last time14:44
efriedbelmoreira: So what would actually happen if you *were* able to get past the allocation candidates here? You would get into the virt driver's spawn method, but would libvirt actually be able to create a VM with 32 VCPU on this system?14:44
jaypipes-ooodansmith: we have had limits['vcpu'] set to the max host CPU processors for a long time.14:45
dansmithjaypipes-ooo: at the very least if you didn't have the cpu filter enabled, then you wouldn't get a cpu limit14:45
jaypipes-ooodansmith: oh, yes, that's true indeed.14:45
dansmithjaypipes-ooo: and cpu filter was not enabled by default ever, AFAIK14:47
cdentbelmoreira: do you want "should not be enforced" or "should be changeable"14:48
dansmithhttps://github.com/openstack/nova/blob/stable/ocata/nova/conf/scheduler.py#L268-L27614:48
belmoreiraefried: That's a good question... never tried. I just went from understanding why placement was not retrieving the allocation candidates to all these open questions14:48
efriedwell14:48
belmoreiraefried: because in the past my flavors were designed considering the max_unit14:49
efriedwell14:50
efrieddesigned considering the allocation ratio14:50
efriedbelmoreira: Can you apply this patch and see if you actually succeed in spawning? WIP: libvirt: Turn off max_unit  https://review.openstack.org/60411014:50
efriedI suspect you'll actually fail (just later).14:50
cdentefried: why do you say [t 1rID]?14:51
purplerbot<efried> designed considering the allocation ratio [2018-09-20 14:50:11.520176] [n 1rID]14:51
jaypipes-ooodansmith: we removed CoreFilter from default enabled_filters in Ocata AFAICT: https://docs.openstack.org/releasenotes/nova/ocata.html#upgrade-notes14:51
efriedcdent: Maybe that was a wrong assumption.14:51
belmoreiraefried: actually libvirt can define 32 vcpus in one VM (on 16-core nodes), because already existing VMs (with 32 vcpus) are running14:52
dansmithjaypipes-ooo: was pretty sure we never had that on by default14:52
dansmithlet me look further back14:52
efriedI don't understand how that works, but okay.14:52
dansmithjaypipes-ooo: not in there in newton either: https://github.com/openstack/nova/blob/newton-eol/nova/conf/scheduler.py#L11414:52
belmoreiraefried: at least users didn't complain. I will review them anyway14:54
cdentbelmoreira: presumably those are vms that existed before the shutdown that gave the compute node the new SMT settings. I wonder if creating a new one and starting an existing one is any different. I would assume not, but these are murky waters.14:55
jaypipes-ooodansmith: ack.14:55
jaypipes-oooefried: for the record... https://github.com/openstack/nova/blob/liberty-eol/nova/scheduler/filters/core_filter.py#L48-L5114:55
jaypipes-oooefried: the use of total vcpus reported by the host has been the limit/max_unit for a long time. dansmith is correct that the CoreFilter has not been in the enabled filter defaults, though.14:56
jaypipes-ooowhich surprises me, actually...14:56
jaypipes-oooconsidering we count vcpu quotas by default, but whatever.14:56
efriedjaypipes-ooo: Look at line 46 though - that limit is being set to total * allocation ratio14:56
*** e0ne has joined #openstack-placement14:56
efriedwhich was one of the first things we said made more sense than just total.14:57
jaypipes-oooefried: that's not the right limit.14:57
dansmithso for me, I want to focus on whether or not it makes sense to have max_unit over total for real, separate from belmoreira's rather synthetic problem here14:57
jaypipes-oooefried: sorry, gave you wrong line numbers: https://github.com/openstack/nova/blob/liberty-eol/nova/scheduler/filters/core_filter.py#L53-L5514:58
belmoreiracdent: I also assume not. But I will test this14:58
dansmithI'm not sure that it really does make sense14:58
jaypipes-oooto quote the code...14:58
jaypipes-ooo        # Only provide a VCPU limit to compute if the virt driver is reporting14:58
jaypipes-ooo# an accurate count of installed VCPUs. (XenServer driver does not)14:58
jaypipes-oooand14:58
jaypipes-ooo  # Do not allow an instance to overcommit against itself, only14:58
jaypipes-ooo# against other instances.14:58
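
Pieced together, the legacy filter logic being quoted reads roughly like this (a paraphrase of the liberty-era core_filter.py, not a verbatim copy):

    def host_passes(host_state, instance_vcpus, cpu_allocation_ratio):
        vcpus_total = host_state.vcpus_total * cpu_allocation_ratio
        if vcpus_total > 0:
            # Only provide a VCPU limit to compute if the virt driver is reporting
            # an accurate count of installed VCPUs. (XenServer driver does not)
            host_state.limits['vcpu'] = vcpus_total
            # Do not allow an instance to overcommit against itself, only
            # against other instances.
            if instance_vcpus > host_state.vcpus_total:
                return False
        return vcpus_total == 0 or vcpus_total - host_state.vcpus_used >= instance_vcpus
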
efrieddansmith: I contend that at least disk overcommit makes sense if thin provisioning is in play.15:00
dansmithefried: disagree, because you definitely don't want one single allocation to be larger than the actual size of your disk15:00
dansmiththin or not15:00
efriedshrug15:00
dansmiththis is about max_unit, right?15:00
efriedIMO that shouldn't be hardcoded15:00
efriedshould be up to the operator15:00
cdentyeah, we need to make sure we're focusing on the meaning of max_unit here, not "overcommit" these are two separate things15:00
dansmithwell, the point of placement is to provide a sane theoretical model, IMHO15:01
efriedPlacement is doing the right thing here.15:01
cdentI'm not sure it should be up to the operator, but it should be up to the virt driver15:01
efriedIt's the driver - libvirt in this case - that's imposing max_unit = total.15:01
dansmithI thought the point was that max_unit can't be higher than total in placement?15:01
efriednope15:01
efriedmax_unit defaults to MAXINT if not specified.15:02
dansmithcan it be?15:02
dansmithhrm15:02
efriedabsolutely15:02
*** takashin has left #openstack-placement15:02
dansmithI thought it couldn't.. I thought that was kindof the point15:02
jaypipes-ooodansmith: efried may be talking about the JSONSchema in placement API.15:03
jaypipes-ooodansmith: but AFAIK, we never set max_unit > total anywhere in the virt drivers.15:03
efriedI'm talking about nova/api/openstack/placement/handlers/inventory.py:4515:04
cdentthat's basically pre-schema default management15:04
jaypipes-ooodansmith: and the default value we use for max_unit is in fact total for libvirt virt driver in all supported resource classes (cpu, ram, disk and vgpu)15:05
cdentand we chose maxint because we didn't want placement to a) need to know anything or guess anything, b) have built in relationships/smartness between those various fields: we wanted the client side to control (as nova has done)15:05
efriedWe definitely don't do any numeric calc/limitation in the schema15:05
efried        "max_unit": {15:05
efried            "type": "integer",15:05
efried            "maximum": db_const.MAX_INT,15:05
efried            "minimum": 115:05
efried        },15:05
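
For reference, the server-side defaults being described (applied when a field is omitted from the PUT body) are, paraphrased from the placement inventory handler:

    INVENTORY_DEFAULTS = {
        'reserved': 0,
        'min_unit': 1,
        'max_unit': db_const.MAX_INT,   # the MAXINT default under discussion
        'step_size': 1,
        'allocation_ratio': 1.0,
    }
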
jaypipes-ooocdent: right, which is the correct thing, IMHO.15:06
cdenti agree15:06
efried++15:06
jaypipes-ooocdent: this is client-side logic, not server-side logic.15:06
efried^15:06
cdentI know15:06
jaypipes-oooeach client can and probably should decide its own for max_unit-ness...15:06
cdentI agree15:06
efried++15:06
efriedEarlier I gave an example of why powervm would definitely not want max_unit = total15:07
cdentthe debate appears to be over what actor in the client should be making the decision: the individual virt drivers, nova as a whole, or the operator/deployer15:07
cdentwhere in this case "client" == "nova"15:07
efriedIs it not sufficient to constrain maximum VCPUs via the flavors themselves?15:08
efriedOr is this a thing where, because flavors can be shared among different-size hosts, we want an additional cap to make sure one instance can't ever overcommit?15:09
cdentyou don't want huge flavors going to small servers15:09
efriedright, and that's not enforced any other way?15:09
cdentit's what max_unit is _for_15:09
efriedRight, I mean, I guess if we look at this from that angle, being able to set a max_unit per host makes sense.15:10
belmoreiracdent: I don't agree. if it's below the allocation ratio it should be allowed. It's an operator decision15:10
*** helenafm has quit IRC15:10
efriedjaypipes-ooo: Did your generic provider override yaml spec allow for this? Setting overrides for any field of any inventory on any provider on the host?15:11
cdentbelmoreira: I was speaking in terms of the design of placement: the reason max_unit exists is to define the size of maximum workload for a resource provider15:11
cdenthow we control that value is up for debate15:11
cdentas far as I can tell, however, its purpose/meaning is not15:11
efriedagree with that15:12
belmoreiraI need to leave. Thanks for helping me understand the issue15:13
cdentbelmoreira: thanks for bringing it. If you're able to come back at some point with the results of the experiments mentioned above, that would be great15:14
cdentI'm not sure how this discussion is going to resolve15:14
jaypipes-oooefried: pretty sure it did.15:15
jaypipes-oooefried: yeah, it did: https://review.openstack.org/#/c/550244/2/specs/rocky/approved/provider-config-file.rst@12215:16
efriedjaypipes-ooo: Not sure how that would play for nrp15:17
efriedoh, that's what gibi is saying in that comment.15:17
efriedLike, basically, the file needs some way to identify which provider it's mucking with. And sometimes you wouldn't have a UUID or even a name until the provider gets created by nova. Chicken/egg.15:18
jaypipes-oooefried: which is why I answered gibi the way I did.15:18
efriede.g. if what I wanted to do was muck with the VCPU allocation ratio for my NUMA nodes...15:19
efriedIn any case, where I was going with that is, I'm thinking it makes sense to allow the op to override max_unit, but it would be crazy to try doing that via conf options for exactly this reason: we need a way to identify the provider and the resource. Which we can mostly do sanely today, but it won't hold up into the future with nrp.15:22
efriedso if we're going to allow such an override, we should try to resurrect this yaml deal - but we need to address the `identification` issue.15:23
*** s10 has quit IRC15:23
efriedI don't know if it makes sense from an op point of view to bring up the compute, let it auto-populate providers, dump some placement info, and use that to compose the yaml file (identifying providers by the generated names/UUIDs). Then the next periodic picks up the changes and we're good to go.15:25
cdentthat makes it hard to manage the compute nodes from something like ansible15:26
efriedWould work okay for messing with things like reserved, min/max units, and allocation ratios on existing resources on existing providers. Not so well for adding/removing providers, adding/removing whole resources, etc. But I'm not sure the latter are sane use cases anyway.15:27
efriedcdent: Not sure there's any mechanism that would allow us to do this kind of tweaking more easily from ansible.15:28
cdentwhat I mean is that you've defined a two-stage process, which people don't want.15:28
cdentit may be that it is not possible to do it some other way, but that concern should be remembered15:29
efriedThe deployer (human or tool) really needs to understand how the provider tree is built.15:29
cdentideally the compute node itself should be a "cloud native app" as much as possible15:29
efriedThough I guess that's sort of the case for constructing flavors anyway.15:29
efriedIf the providers in the tree are named predictably, you could reduce to single-stage. But that requires the deployer to duplicate a lot of the logic of e.g. the virt driver and/or neutron/cyborg/cinder/whoever is responsible for creating and naming providers.15:30
efriedhow else?15:31
mriedemcdent: you have a spec for being able to list resource providers as having some kind of inventory right?15:32
cdentwow15:32
cdentjinxness15:32
efriedmriedem: https://review.openstack.org/#/c/600016/15:32
mriedemyeah ok i thought so15:33
mriedemi remember you commenting on that in a nova fix15:33
efriedDoes that help here? Or were you starting a new topic mriedem?15:33
mriedemi wasn't following along,15:34
mriedemjust interjecting15:34
cdentefried: it's from a conversation in #openstack-sdks15:34
cdentefried: in response to "how else?". I'm not really sure. I think we might need to solve small problems one at a time rather than coming up with Solutions™ because we haven't got the bandwidth or brains to make good enough choices.15:37
efriedcdent: Yeah, what I fear is that we'll wind up adding ${resource_class}_max_unit conf options, and then as soon as those resources are no longer on the compute node, we end up with a godawful mess like what we're going through now with allocation ratios.15:38
cdentYes, I agree that nested providers are difficult to manage.15:39
efriedwe at least know we're moving toward a future where we can't count on resources - any resources, really - being on the compute node RP. And when they're not, trying to use some constant value for settings thereon won't make sense.15:40
efriedGood luck trying to define a conf option that indicates15:41
efriedvcpu_max_unit = $total * $allocation_ratio for whatever inventory you find VCPU on15:41
cdenti wonder if we need the same kind of "allow operator to set and don't override if they do" logic that allocation ratios (will) have, for *_unit. Or inventory in general. It occurs to me (only now) that we have a way to check for a new inventory: the updated_at field is null15:47
cdentbut we don't expose that externally anywhere15:47
cdentefried: heads up in case I forget to mention it otherwise: i'm out m-w this coming week15:55
efriedNot sure how the updated_at thing is germane. Agree we need "allow operator to set" with some kind of sane default (which IMO would still be MAXINT, but for backward compat will probably need to be `total` instead). But that doesn't address the problem of *how* to set it - how to target a specific max_unit on a specific resource class on a specific provider.15:55
efriedack15:55
cdentefried: I was going back to belmoreira's needs, not the nested stuff.15:56
efriedyeah, I'm saying, we can satisfy belmoreira's needs right now with simple $rc_max_unit conf options, but that's going to bite our ass later.15:57
cdentright, I'm trying to say: we can avoid that15:57
cdentif we want15:57
cdentallow the existing code to do its thing15:57
cdentbut make it so it only sets new inventory15:58
cdentallow the op to use the api to set inventory that the RT won't clobber15:58
cdentI don't like that very much, but it's a way that satisfies your constraint15:58
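
A hypothetical sketch of that "only set it on new inventory, never clobber an operator override" idea (names illustrative, not nova code):

    def merged_vcpu_inventory(existing, vcpus):
        new = {'total': vcpus, 'min_unit': 1, 'step_size': 1}
        if existing is None:
            new['max_unit'] = vcpus                   # safe default for brand-new inventory
        else:
            new['max_unit'] = existing['max_unit']    # preserve whatever the op set via the API
        return new
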
cdenthowever15:58
cdentI still don't think we've really resolved the underlying debate there on how/what max_unit means and who should control it, so I'm not really sure why I'm talking ... :)15:59
edleafethinking out loud?16:00
cdentguess so16:00
cdentfeverish?16:00
efriedwell16:01
efriedI think the op needs to be ultimately able to declare that their instances aren't limited to max_unit = total.16:02
efriedbut I recognize that changing the default at this stage is not likely to be an option16:03
efriedwhich means we need something configurable (if we're going to solve it at all)16:03
cdentwhen you say "configurable" do you mean "conf or other files on the compute node" or does that also include "manipulating the inventory over the api from somewhere else"16:04
efriedthe latter would be okay I suppose16:06
efriedtricky to implement tho16:06
efriedeasiest with new support from the API16:07
efriedPATCH /rp/{u}/inventories :P16:07
cdenthow does that make any difference?16:08
efriedhttps://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6478 <== delete this line, and instead of getting MAXINT as you would today, overriding whatever the user set manually, it leaves whatever's already set in max_unit alone16:10
efriedotherwise you have to GET first, check/set max_unit explicitly16:11
efriedwhich I guess since we're caching, we can do...16:12
efriedinitial setup is tricky in any case16:13
efriedguess that's where you're talking about trying to use updated_at16:14
cdentyou probably want to GET first anyway, because you need the right generation, at which point a PUT is straightforward16:14
cdentin general blind PATCH is not recommended16:14
efriedmm16:15
efriedthough that's in spirit what we're doing now (blind PUT)16:16
efriedcache contains the latest generation, but we're blindly overriding the contents with whatever upt tells us16:17
cdentIn my experience PATCH encourages a lot of weird behaviors so is not something I'd want the API to expose. What nova chooses to do on its side in its use of the API is entirely up to nova (and that's a good thing).16:17
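
A minimal sketch of the GET-then-PUT flow cdent describes, against the placement REST API (assuming python-requests and a pre-built auth/microversion header dict; error handling elided):

    import requests

    def set_max_unit(base_url, rp_uuid, rc, max_unit, headers):
        url = '%s/resource_providers/%s/inventories' % (base_url, rp_uuid)
        body = requests.get(url, headers=headers).json()
        body['inventories'][rc]['max_unit'] = max_unit
        # The resource_provider_generation from the GET rides along in the body;
        # a 409 on the PUT means someone else updated the provider first: re-GET and retry.
        resp = requests.put(url, json=body, headers=headers)
        resp.raise_for_status()
        return resp.json()
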
*** e0ne has quit IRC16:41
openstackgerritLee Yarwood proposed openstack/placement master: wsgi: Always reset conf.CONF when starting the application  https://review.openstack.org/60416717:11
lyarwoodcdent: ^ re your comment in https://review.openstack.org/#/c/603372/ , was that more what you had in mind?17:12
* lyarwood isn't sure that's even valid, just trying to resolve this as it keeps coming up downstream17:12
cdentlyarwood: I hadn't got around to thinking about it deeply yet. What does reset() actually do? Because by name you'd think it was clearing the configuration, which is not what we'd want at that point17:13
openstackgerritLee Yarwood proposed openstack/placement master: wsgi: Always reset conf.CONF when starting the application  https://review.openstack.org/60416717:14
lyarwoodcdent: https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2665 - yeah it's clearing everything from CONF before we reload it all again in init_application, it's heavy handed but should workaround things in the short term.17:15
cdentin that case it needs to be _before_ _parse_args17:16
cdentotherwise you are clearing the conf before it is being used to configure the db17:16
lyarwoodcdent: yeah sorry just moved it in the second PS above17:17
cdentor setup the wsgi app17:17
cdentin that case, seems sane to me. have you tried it?17:17
cdenti mean with the database related thing17:17
lyarwoodI'll try it now17:18
openstackgerritLee Yarwood proposed openstack/placement master: wsgi: Always reset conf.CONF when starting the application  https://review.openstack.org/60416717:18
*** tssurya has quit IRC17:20
cdentlyarwood: tomorrow I'll make sure it does make uwsgi-based stuff blow up17:28
lyarwoodcdent: doesn't* ?17:50
cdentyes, sorry. contractions are my nemesis17:51
lyarwoodcdent: but yeah this appears to work in devstack, I'll try it out with a downstream build under httpd tomorrow and remove the reference to httpd in the change17:51
cdentyay!17:51
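
Paraphrasing the ordering agreed on above (a simplified sketch, not the literal patch in review 604167): the global CONF is cleared before anything re-parses it, so a second init_application() in the same process starts from a clean slate.

    from oslo_config import cfg

    def init_application():
        conf = cfg.CONF
        conf.reset()                    # drop state left over from a previous load in this process
        conf([], project='placement')   # re-parse config files against the clean object
        # ... then configure the database and build/return the WSGI app as before ...
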
*** e0ne has joined #openstack-placement18:10
*** cdent has quit IRC18:44
*** e0ne has quit IRC19:03
*** e0ne has joined #openstack-placement19:30
*** e0ne has quit IRC19:34
*** belmoreira has quit IRC19:35
*** e0ne has joined #openstack-placement19:43
*** tetsuro has quit IRC21:35
*** e0ne has quit IRC21:36
*** tetsuro has joined #openstack-placement21:37
*** takashin has joined #openstack-placement21:50
*** mriedem is now known as mriedem_away23:10

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!