Thursday, 2018-08-09

00:51 *** esberglu has joined #openstack-powervm
01:00 *** esberglu has quit IRC
01:05 *** esberglu has joined #openstack-powervm
01:10 *** esberglu has quit IRC
01:35 *** esberglu has joined #openstack-powervm
01:40 *** esberglu has quit IRC
01:45 *** esberglu has joined #openstack-powervm
01:50 *** esberglu has quit IRC
02:15 *** esberglu has joined #openstack-powervm
02:19 *** esberglu has quit IRC
02:50 *** esberglu has joined #openstack-powervm
02:55 *** esberglu has quit IRC
03:05 *** esberglu has joined #openstack-powervm
03:10 *** esberglu has quit IRC
03:55 *** esberglu has joined #openstack-powervm
04:00 *** esberglu has quit IRC
04:50 *** esberglu has joined #openstack-powervm
04:55 *** esberglu has quit IRC
05:15 *** esberglu has joined #openstack-powervm
05:20 *** esberglu has quit IRC
05:35 *** esberglu has joined #openstack-powervm
05:40 *** esberglu has quit IRC
06:00 *** esberglu has joined #openstack-powervm
06:05 *** esberglu has quit IRC
06:25 *** esberglu has joined #openstack-powervm
06:30 *** esberglu has quit IRC
07:00 *** esberglu has joined #openstack-powervm
07:05 *** esberglu has quit IRC
07:25 *** esberglu has joined #openstack-powervm
07:30 *** esberglu has quit IRC
07:40 *** esberglu has joined #openstack-powervm
07:46 *** esberglu has quit IRC
08:25 *** esberglu has joined #openstack-powervm
08:29 *** esberglu has quit IRC
08:35 *** esberglu has joined #openstack-powervm
08:40 *** esberglu has quit IRC
08:55 *** esberglu has joined #openstack-powervm
09:00 *** esberglu has quit IRC
09:25 *** esberglu has joined #openstack-powervm
09:31 *** esberglu has quit IRC
12:45 *** edmondsw has joined #openstack-powervm
13:23 *** esberglu has joined #openstack-powervm
13:29 *** edmondsw has quit IRC
13:36 *** edmondsw has joined #openstack-powervm
13:41 *** edmondsw has quit IRC
13:43 *** edmondsw has joined #openstack-powervm
16:52 <efried> edmondsw: You gonna propose nova-powervm RC1 today?
16:52 <efried> *-powervm I guess
16:54 <edmondsw> efried yes
17:09 <edmondsw> efried I'm trying to get the docs fixed first
17:10 <efried> just add me to 'em and/or ping me when ready.
17:28 <efried> edmondsw: Oh, we're giving up on (deferring) integrating the docs build?
17:29 <edmondsw> not necessarily, but I wanted to throw this up first anyway in case we can't get the rest done
17:30 <efried> do you have quick links where I can validate that these numbers are right, and that openstackci is properly registered?
17:31 <efried> and/or does this patch somehow build and post results where I can see the rtd content properly generated? Or do I have to take it on faith?
17:47 <efried> edmondsw: Okay, got it all sussed. +1.
17:57 <edmondsw> efried I've figured out how to tell rtd not to build pdf, so trying that now
<efried> edmondsw: By commenting out the latex section in doc/source/
17:57 <efried> or something you can configure on rtd itself?
17:57 <edmondsw> no... unchecking something in rtd
18:11 <efried> edmondsw: Well, I tried to follow the build process locally and didn't get far.
18:12 <edmondsw> my rtd build just passed
18:12 <edmondsw> so unchecking the boxes for building pdfs and epubs fixed it
18:12 <efried> is that an acceptable solution?
18:12 <edmondsw> well, and changing it to use doc/requirements instead of test-requirements.txt
18:12 <edmondsw> I don't think we need pdf and epub
18:12 <edmondsw> I wouldn't even know where to find them
18:13 <efried> edmondsw: It what? It should have been doing that already.
18:13 <edmondsw> rtd was not updated when we changed our code to split out doc/requirements
<efried> If you look at the 7th entry in
18:14 <edmondsw> efried that was one of the builds I just ran where I had changed that
18:14 <efried> could it be that that was the whole problem?
18:14 <efried> oh, I guess not then.
18:14 <efried> Well, at least short term I'm fine with nixing pdf/epub. Would be interested to know whether other projects are doing that.
18:15 <edmondsw> so it's a two-fold problem...
18:15 <edmondsw> 1) apparently the recent doc changes we/stephenfin made don't work for pdfs, so we had to disable that
18:15 <edmondsw> 2) when he split out doc/requirements.txt we needed to update rtd to match
18:15 <edmondsw> the downside of #2 is that rtd doesn't let you specify that per branch... so by fixing it for master we break it for all our stable branches
18:16 <edmondsw> but we don't update docs for stable branches very often
18:16 <edmondsw> if/when we do, we'll need to remember to temporarily change things back to test-requirements.txt, run a build, and then change it back for the sake of master
18:16 <efried> edmondsw: is where the pdf would be, if there was one.
18:16 <efried> and epub
18:17 <edmondsw> efried most users wouldn't have access to that, though, I don't think... requires login, right?
18:17 <edmondsw> maybe not, I don't know
18:17 <efried> I was about to ask. But yes, you should still be able to access it without login.
18:17 <efried> So, what other projects exist in rtd that we can go look at?
18:18 <edmondsw> yeah, I can still see that link after I logout
18:18 <edmondsw> efried I'm not sure what other projects use it
18:18 <edmondsw> I can check the projects yaml, hold on
18:18 <efried> their latest build was 5mo ago though.
18:20 <efried> - there's only about 83,600-ish projects.
18:20 <efried> okay, so theirs works.
18:21 <efried> pdflatex sounds like a bowel disorder
18:21 <efried> Hah, especially with -interaction=nonstopmode
18:22 <edmondsw> efried lol
18:22 <edmondsw> efried I suspect they are not using openstackdocstheme?
18:22 <edmondsw> our problems started when we moved to that
18:22 <edmondsw> if I remember correctly
18:23 <efried> edmondsw: True story, assuming openstackdocstheme would be a thing listed in doc/source/conf.py
18:23 <efried> ironic-staging-drivers' does not list it.
18:24 <efried> what do we do?
18:25 <edmondsw> for the moment I think we live without PDF and EPub
18:26 <edmondsw> and then long term we try to get off RTD entirely
18:27 <efried> wfm. So does that mean we're ready to cut RC?
18:29 <efried> yeah, so the zuul output from isn't worth anything for this.
18:35 <efried> edmondsw: Why does nova-powervm have tarball_base but the others don't?
18:36 <edmondsw> because y'all misnamed it in... let me find it...
18:37 <edmondsw> _ when it should have been -
18:38 <edmondsw> going to eat
18:38 <efried> okay. I still don't understand ^ but no matter. ttyl.
19:56 <efried> edmondsw: ?
19:57 <edmondsw> efried yes?
19:58 <efried> edmondsw: I've set up a session where I can fiddle with a flavor and see what lpar xml comes out the other side.
19:58 <efried> If you're busy, I can just hack at it myself. But if you're available to bounce stuff off of...
19:58 <edmondsw> pretty busy, but I'll push it aside if you need me
20:01 <efried> edmondsw: okay, don't worry about it then. I think I'll just dump stuff in here for the sake of posterity, and you can read it if you care, whenever you get a chance. If I need your brain I'll tag you. Cool?
20:01 <edmondsw> sounds good
20:03 <efried> First run through, I set flavor.vcpus=4 and no extra specs, and ended up with
20:03 <efried>       <uom:DesiredProcessingUnits>0.40</uom:DesiredProcessingUnits>
20:03 <efried>       <uom:DesiredVirtualProcessors>4</uom:DesiredVirtualProcessors>
20:03 <efried>       <uom:MaximumProcessingUnits>0.40</uom:MaximumProcessingUnits>
20:03 <efried>       <uom:MaximumVirtualProcessors>4</uom:MaximumVirtualProcessors>
20:03 <efried>       <uom:MinimumProcessingUnits>0.40</uom:MinimumProcessingUnits>
20:03 <efried>       <uom:MinimumVirtualProcessors>4</uom:MinimumVirtualProcessors>
20:03 <efried> This is in SharedProcessorConfiguration
20:04 <efried> General note: the actual inventory count we're going to consume from placement is always going to be what's in flavor.vcpus. We have no say over that, and that's going to be a problem.
20:05 <efried> In this particular case, the actual resources we're consuming from the system: 0.4 of a physical proc; so if we had an allocation ratio of 10.0 (1/CONF.powervm.proc_units_factor) then we would be dead on.
20:06 <efried> when I set extra_specs['powervm:proc_units']=5, all the 0.40s change to 5.00s. So we're still asking the LPAR for 4 VCPUs (meaning I suppose that the guest OS will see four procs) and we're still consuming 4 VCPUs as far as placement is concerned, but we're actually consuming five full procs from the system.
20:07 <efried> TL;DR: setting proc_units totally borks our allocation ratio calculation.
20:07 <efried> edmondsw: So this is where it might be useful to know things about pvc and real deployments. Does pvc use proc_units? All the time, most of the time, rarely? And can end users affect whether it's used or not?
20:08 <edmondsw> yes, PowerVC specifies proc_units in every flavor
20:09 <edmondsw> an admin (flavor creation is restricted to admin) can set max and min as well, optionally, but must at least set desired proc units
20:11 <edmondsw> tell me again what this allocation ratio is?
20:12 <edmondsw> proc units is one of the strengths of the PowerVM architecture, allowing you to get much better proc usage than you can on x86, where processing power is going to waste
20:12 <edmondsw> because x86 doesn't let you get that granular
20:14 <edmondsw> in your example above, when you got 0.4 it was because the default conf setting for proc_units_factor is 0.1, and then you multiply that by 4 vcpus to get 0.4 proc units
20:15 <efried> yes yes, but you still have full control over the number of proc units by combining proc_units_factor and the default behavior of the VCPUs field.
20:15 <edmondsw> and then when you started specifying proc_units you get exactly what you ask for... no multiplication
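[The proc-units derivation edmondsw describes above can be sketched in Python. This is purely an illustration of the described behavior; `desired_proc_units` is a hypothetical helper, not the actual nova-powervm code.]

```python
def desired_proc_units(vcpus, proc_units_factor=0.1, extra_specs=None):
    """Hypothetical helper mirroring the behavior described above.

    An explicit powervm:proc_units extra spec wins outright (no
    multiplication); otherwise proc units are derived as
    flavor.vcpus * CONF.powervm.proc_units_factor (default 0.1).
    """
    extra_specs = extra_specs or {}
    if 'powervm:proc_units' in extra_specs:
        return float(extra_specs['powervm:proc_units'])
    return vcpus * proc_units_factor

# flavor.vcpus=4, no extra specs -> 0.4 proc units, as in the XML above
print(desired_proc_units(4))
# powervm:proc_units=5 -> exactly 5.0, regardless of vcpus
print(desired_proc_units(4, extra_specs={'powervm:proc_units': '5'}))
```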
20:15 <efried> so it may be wonderful for talking directly to power, but for openstack it sucks ass.
20:15 <efried> because accounting is just totally out the window.
20:15 <edmondsw> why? I'm not following
20:15 <efried> Next data point: proc_units is only used for shared proc config.
20:16 <efried> ignored when dedicated_proc is True.
20:16 <edmondsw> right... why would you want a dedicated proc and then not use the whole thing?
20:16 <edmondsw> since it's dedicated, anything you don't use is going to waste
20:16 <efried> And I'm assuming that
20:16 <efried>       <uom:DesiredProcessors>4</uom:DesiredProcessors>
20:16 <efried>       <uom:MaximumProcessors>4</uom:MaximumProcessors>
20:16 <efried>       <uom:MinimumProcessors>4</uom:MinimumProcessors>
20:16 <efried> means four whole procs.
20:16 <efried> yeah, I guess I'm confusing "dedicated" with the minimum on sharing proc config.
20:17 <edmondsw> I'm not sure if uom:DesiredProcessors is num vcpus or num proc units
20:17 <edmondsw> I would have thought vcpus
20:17 <efried> I am, that's why I'm experimenting. Yes, it's VCPUs. proc units is ignored.
20:17 <edmondsw> but I don't keep that in my head
20:18 <efried> Anyway, the effect on accounting is that, when dedicated_proc=True, allocation_ratio is exactly 1.
20:18 <efried> edmondsw: does pvc allow dedicated mode?
20:19 <efried> in which case someone may be surprised that proc_units is being ignored :)  Or maybe you've got the software set up properly to hide the fact that the underlying field is actually proc_units, so it does the right thing anyway.
20:19 <efried> or... maybe this is a bug
20:20 <efried> I started off thinking we were going to be able to calculate a sane allocation ratio for procs and set it dynamically from the driver.
20:20 <edmondsw> the PowerVC GUI won't let you specify proc_units if you select the dedicated procs option
20:21 <efried> okay, that's good then. Somebody figured this out.
20:21 <efried> ...Now I'm thinking we just freaking use the same thing nova uses, which is a conf setting for allocation ratio.
20:21 <efried> Let the admin set it based on the flavors they think they'll be using mostly
<edmondsw> I think allocation ratio is a very x86-specific concept from what I'm reading here:
20:22 <edmondsw> I don't think this form of overcommit exists for PowerVM
20:23 <edmondsw> it's my understanding that on PowerVM, if you have a proc_unit then you have a proc_unit... nobody else can be using that proc_unit
20:23 <edmondsw> mdrabe that right?
20:23 <efried> for sharing, there's min, desired, and max.
20:23 <edmondsw> let's talk min and max for a second
20:23 <efried> min means nobody else can have 'em.
20:24 <efried> max - min is where it floats depending on how busy the system is.
20:24 <edmondsw> no, min is how much you could DLPAR (live resize) down to
20:24 <mdrabe> edmondsw: I think it depends on if the LPAR is configured for shared procs or not
20:24 <mdrabe> If it's in dedicated mode, then the LPAR should be entitled to the full proc unit
20:25 <efried> there's still desired, min, and max for dedicated though
20:25 <mdrabe> min and max are for DLPAR
20:25 <mdrabe> active resize
20:25 <edmondsw> there is one other usage of min... if a proc dies and the system has to try to keep as many LPARs running as it can, it may run you at your min to free up resources for other things
20:25 <edmondsw> but it's not an overcommit thing in the x86 sense
20:26 <efried> this doesn't sound right. I thought the point of shared procs was that the system can dynamically change how much compute power each vcpu has under the covers according to the load on the system.
20:27 <edmondsw> max is how much you can DLPAR up to, and I believe it's also a cap of how much additional CPU you might be allowed to use if there is free CPU on the system
20:27 <edmondsw> that's where the sharing comes in
20:27 <efried> you still have four VCPUs, but if your min proc units is 0.4 and your max is 4.0 it means each VCPU could have anywhere from 0.1 to 1.0 worth of real proc moxy at any given time.
20:28 <edmondsw> no, I don't think that's how it works
20:28 <efried> there are six fields for sharing. desired/min/max * proc_units/VCPUs
20:29 <efried> I thought the proc_units part is the dynamic under-the-covers thing; and the VCPUs part is the DLPAR thing.
20:29 <edmondsw> efried these are some links I bookmarked... see if they help:
20:30 <edmondsw> DLPAR can be used to change either VCPUs or proc units or both
20:30 <efried> that's changing the actual lpar profile.
20:30 <efried> What do the six fields mean in the context of a single lpar profile?
20:31 <edmondsw> they mean what I was saying above
20:31 <efried> (sorry, I shouldn't say profile, I don't mean actual LPAR profile; I just mean settings)
20:31 <edmondsw> desired is what you get.
20:31 <edmondsw> min is what you could dlpar down to and how much you're saying you have to have at min to stay running in an emergency where a CPU dies.
20:32 <edmondsw> max is what you could dlpar up to and you might be allowed to use more up to that cap without using DLPAR if there are free cycles
20:33 <edmondsw> ^ is for proc units
20:33 <efried> yup, that gels with what I was saying.
20:33 <edmondsw> for vcpus, desired is what you get, min and max are what you could dlpar to
20:34 <efried> so back to allocation ratio...
20:35 <efried> the only sane way to think about actual resource usage in shared mode is via desired proc units.
20:35 <efried> the only thing placement knows about is desired vcpus.
20:36 <efried> in dedicated mode it's desired procs and alloc ratio of 1
20:36 <efried> placement doesn't know about shared vs. dedicated.
20:38 <edmondsw> efried where do you actually specify allocation ratio?
20:39 <edmondsw> doesn't say (bug?)
20:39 <efried> Today, it's done via CONF.cpu_allocation_ratio
20:39 <efried> which defaults to 16.0
20:39 <efried> but not via the conf option default
20:41 <edmondsw> I think placement would need different logic for powervm... ignoring CONF.cpu_allocation_ratio and using proc units
20:42 <efried> but allocation ratio is a static(ish) thing on the inventory. It doesn't change for each instance or whatever.
20:42 <edmondsw> doesn't the virt driver tell placement how much cpu and proc_units it has free? And then placement should match both of those up with the flavor?
20:43 <edmondsw> I'm not saying allocation ratio should change per instance... I'm saying it shouldn't be used for powervm hosts
20:44 <efried> we could just use #physical procs / 0.05 as our VCPU inventory count, and an allocation ratio of 1.
20:44 <efried> or make sure the real allocation is always for the number of proc units
20:44 <efried> the problem isn't that we couldn't do the math. The problem is that we don't *get* to do the math.
20:45 <efried> it's out of our hands - in the compute manager / placement, not in the virt driver.
20:45 <edmondsw> somehow it needs to be put back in our hands unless they want to have powervm-specific logic in nova proper rather than in the virt driver
20:45 <edmondsw> I don't see another way
20:46 <efried> placement allocation is done based on flavor.vcpus, end of story. And it uses allocation ratio to determine when inventory is "full" when trying to schedule.
20:47 <efried> well, the other way is for us to completely change how we interpret flavors.
20:50 <efried> - Use the conf option for allocation ratio to inform the micropartitioning size (e.g. 20.0 => 0.05)
20:50 <efried> - VCPUs will always give you a number of physical procs corresponding to #VCPUs * that micropartitioning size
20:50 <efried> - If you flip on 'dedicated', then VCPUs / allocation_ratio had better be a whole number or we'll error
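[efried's three bullets above, sketched as Python. This is only an illustration of the proposal under discussion, not merged code; the function name and error handling are invented.]

```python
def procs_from_vcpus(vcpus, cpu_allocation_ratio=20.0, dedicated=False):
    """Sketch of the proposed reinterpretation: the conf allocation
    ratio defines the micropartition size (20.0 => 0.05 proc units per
    VCPU), so physical proc consumption follows directly from
    flavor.vcpus and placement accounting lines up."""
    granule = 1.0 / cpu_allocation_ratio  # proc units per VCPU
    proc_units = vcpus * granule
    if dedicated and proc_units != int(proc_units):
        # 'dedicated' requires whole physical procs
        raise ValueError('vcpus / allocation_ratio must be a whole number')
    return proc_units

# flavor.vcpus=40 at ratio 20.0 -> 2.0 proc units from the platform,
# while placement inventory is debited the same 40 VCPU
print(procs_from_vcpus(40))
```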
20:53 <edmondsw> that won't work
20:53 <edmondsw> wouldn't allow you to have some VMs using dedicated and others using shared
20:53 <efried> eh, sure it would. You can still use the powervm:dedicated_proc in the flavor
20:53 <edmondsw> wouldn't allow you to have some VMs using more proc units per vcpu than others
20:54 <efried> yes, true story. I'm not sure why that's a problem, but taking it on faith that it is...
20:54 <efried> we should therefore have the override for the number of guest VCPUs be part of the extra specs instead of coming from the outer flavor.vcpus
20:55 <efried> that ought to do the trick.
20:56 <edmondsw> I expect that would cause other problems...
20:56 <efried> for code, or for users?
20:57 <edmondsw> separate topic if I can interject... #openstack-release is saying we need to create branches
20:57 <edmondsw> I thought that happened automatically based on the rc1 commit I threw up, but no?
20:57 <edmondsw> do you know how to create branches?
20:58 <efried> oh, crap.
20:59 <edmondsw> nm, I think I know what I need to do
20:59 <edmondsw> I just don't have that commit right yet
20:59 <edmondsw> when it's right, it will create the branches
20:59 <efried> um, branches are just branches, but I think we may need to tag. esberglu is here, maybe he can advise. (And we can put it in a doc for future ref)
21:03 <edmondsw> I remembered / was reminded... I think the commit is updated to do that now
21:09 <efried> other than making users understand the new way to use extra specs, how would it be problematic?
21:10 <efried> the other option to consider is dynamically rebalancing our inventory based on existing instances
21:11 <efried> which is something I was going to bring up anyway for calculating reserved values...
21:18 <edmondsw> it would be problematic in that you lose capabilities like I described above
21:18 <efried> no, I addressed that
21:19 <edmondsw> with something that doesn't work...
21:19 <edmondsw> maybe if you were clearer on what you're suggesting
21:21 <edmondsw> efried interjecting again... how do I use pdb in a tox run since we moved to stestr?
21:22 <efried> Okay, with:
21:22 <efried> CONF.cpu_allocation_ratio = 20.0  <== corresponds to micropartitioning granularity of 0.05
21:22 <efried> flavor.vcpus = 40   <== multiplied by 0.05 means you will get 2.0 proc units from the platform *and* the allocation will reduce your inventory by 40
21:22 <efried> and if you need:
21:22 <efried> extra_specs['powervm:how_many_vcpus_do_you_want_the_vm_to_see'] = 2   <== so each vcpu will correspond to 1.0 processor unit
21:22 <efried> edmondsw: I use remote_pdb these days.
21:24 <edmondsw> efried that won't work on a lot of levels. It's not backward compatible. Nova is going to think you have 40 vcpus when you really have 2, etc.
21:25 <efried> Well, that latter thing is the whole point of doing it this way.
21:25 <edmondsw> the problem here is placement... allocation ratios is an x86-specific concept and needs to be replaced with something that is not x86-specific
21:26 <efried> with an allocation ratio of 20.0 and a system with 8 physical procs, nova/placement will allow you to create allocations with VCPU total of 8 x 20 = 160.
21:26 <edmondsw> nobody wants a VM with 160 VCPUs
21:27 <efried> Great, so they can specify that extra spec.
21:27 <edmondsw> which will not help... nova will still say you have 160 VCPUs
21:27 <edmondsw> well it helps, but not enough
21:27 <efried> Yeah, it's still holey; I guess you're deciding which part you want to lie about.
21:27 <edmondsw> you're trying to cram a square peg into a round hole
21:28 <efried> cause right now, there's no way NOT to lie about inventory/usage.
21:28 <edmondsw> because allocation ratio is an x86-specific concept (and not a good one even there) that needs to die
21:28 <edmondsw> when I say x86-specific, I mean it's a nova+x86-specific concept
21:29 <edmondsw> it's not purely x86
21:29 <edmondsw> there is no reason that nova should have to have that concept for x86
21:29 <edmondsw> x86 users should have more flexibility than they have with the current allocation ratio system
21:29 <edmondsw> where they're limited by a conf value that controls the whole system instead of the ability to ask for what they want on a per-vm basis via flavor
21:30 <efried> so a few things there.
21:30 <efried> The Power perspective on this is very not-cloudy.
21:30 <edmondsw> no it's not
21:30 <edmondsw> and I mean that in disagreement
21:31 <edmondsw> it's perfectly cloudy
21:31 <efried> It's pet-not-cattle
21:31 <edmondsw> no, it has nothing to do with pet/cattle
21:31 <edmondsw> nothing whatsoever
21:32 <efried> the allocation ratio as it's being used on x86 is allowing the admin to set up in one broad stroke how much moxy she wants a VCPU to have.
21:32 <efried> 1.0 means one physical proc to one VCPU.
21:32 <edmondsw> remember that admins control flavors... regular users will have no control over what they're getting in any model
21:34 <efried> except by picking a flavor.
<openstackgerrit> OpenStack Release Bot proposed openstack/ceilometer-powervm stable/rocky: Update .gitreview for stable/rocky
<openstackgerrit> OpenStack Release Bot proposed openstack/ceilometer-powervm stable/rocky: Update UPPER_CONSTRAINTS_FILE for stable/rocky
21:34 <edmondsw> which an admin created... so the admin controls what your options are
21:34 <edmondsw> that is perfectly cloudy
<openstackgerrit> OpenStack Release Bot proposed openstack/networking-powervm stable/rocky: Update .gitreview for stable/rocky
<openstackgerrit> OpenStack Release Bot proposed openstack/networking-powervm stable/rocky: Update UPPER_CONSTRAINTS_FILE for stable/rocky
<openstackgerrit> OpenStack Release Bot proposed openstack/nova-powervm stable/rocky: Update .gitreview for stable/rocky
<openstackgerrit> OpenStack Release Bot proposed openstack/nova-powervm stable/rocky: Update UPPER_CONSTRAINTS_FILE for stable/rocky
21:35 <efried> the not cloudy part is the part where you're saying a precise number of proc units you want. On one system that's a lot of moxy, on a different system, less.
21:36 <efried> allocation ratio is a way to normalize that picture across your cloud.
21:36 <efried> i.e. the ratio of all your allocation ratios makes all your hosts "equal" in terms of the moxy of one "VCPU"
21:37 <edmondsw> if that's not cloudy, then the whole concept of flavors is not cloudy. Every VM should have the same amount of mem and cpu and disk
21:38 <efried> 'swhat I'm saying. If you deploy a flavor on one host, it should have the same amount of moxy as the same flavor deployed on a different host.
21:38 <edmondsw> and it does in any model
21:38 <edmondsw> we're not talking about per-host changes here
21:38 <efried> uh, not with proc_units
21:38 <edmondsw> yes, with proc_units
21:38 <edmondsw> there's nothing host-specific here
21:38 <edmondsw> you're saying you don't like our flavor extra-specs
21:38 <edmondsw> not a host setting
21:39 <efried> proc_units=2 on a P7 is going to give you a weaker VM than proc_units=2 on a P9
21:39 <efried> or whatever.
21:39 <edmondsw> vcpus=1 on a skylake is going to give you less power than vcpus=1 on the previous generation
21:39 <efried> not if you've set your allocation ratios properly.
21:39 <efried> is exactly my point.
21:40 <edmondsw> there is no way to control that with allocation ratios even for x86
21:40 <edmondsw> a skylake core != a broadwell core
21:40 <efried> so if a proc on host X has twice as much power as a proc on host Y, set the allocation ratio on Y to twice what it is on X.
21:41 <edmondsw> it doesn't work like that
21:41 <efried> Then when you ask for one VCPU, you're getting twice as many proc units on host X as you are on host Y, ergo the same amount of moxy.
21:41 <efried> no, it sure doesn't today. 'Swhat I'm saying, we could make it so it does.
21:42 <efried> and, btw, that is (ish) how it works with the x86 allocation ratio. It only really comes into play on a crowded system, but at that point it's effectively as I've described.
21:42 <edmondsw> there is no way to calculate that a proc on host X has twice as much power as a proc on host Y... it's not that cut and dry. It will be better at some things than at others in different proportions, etc...
21:42 <efried> yeah, I get that it's not an exact science, but it's up to the admin to figure out a close enough approximation and set the alloc ratios accordingly.
21:43 <edmondsw> and I really don't think that's what allocation_ratio is used for from what I read
21:43 <edmondsw> nova docs tout it as an overcommit feature, which would be an entirely different usage from what you're saying
21:44 <efried> As it stands today, if I've specified proc_units in my flavor and I deploy it on a heterogeneous Power cloud, I have no idea how much actual compute power I'm going to get. So if I'm deploying a workload that needs to perform to a certain level, I need to tie it to a specific host. <= not cloudy.
21:44 * efried opens link again to make sure reading same doc as edmondsw
21:45 <efried> "increase the number of instances running on your cloud at the cost of reducing the performance of the instances"
21:46 <efried> difference is that in the x86 model, that degradation only hits when you get crowded. (though on power, with uncapped shared procs, it would be similar)
21:48 <efried> this is pretty academic, though. Because in the near term, we are not going to make major changes to how we interpret flavors for power; and we are not going to get nova to make changes to how allocations are calculated for our "boutique compute driver".
21:48 <efried> So we just need to decide what we're going to do with the allocation ratio and reserved amounts right now, given the existing framework.
21:55 <efried> My vote is to keep the behavior as it is now, which is:
21:55 <edmondsw> I believe the typical answer to what you're talking about is to put different types of systems in different cells and then request a specific cell
21:55 <efried> - For VCPU, use CONF.cpu_allocation_ratio and default it to 16.0 if unspecified.
21:55 <efried> - MEMORY_MB: CONF.ram_allocation_ratio, default to 1.5
21:55 <efried> - DISK_GB: CONF.disk_allocation_ratio, default to 1.0
21:55 <efried> and we can try to improve upon that in the future.
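[A minimal sketch of the "keep the behavior as it is now" option efried lists above: build inventory with the same conf-driven allocation ratios and fallbacks the resource tracker applies today. The dict shapes and `build_inventory` name are simplified stand-ins, not the placement API.]

```python
def build_inventory(total_pprocs, total_mem_mb, total_disk_gb, conf=None):
    """Duplicate the existing default behavior: conf allocation ratios
    with fallbacks of 16.0 (VCPU), 1.5 (MEMORY_MB), 1.0 (DISK_GB)."""
    conf = conf or {}
    return {
        'VCPU': {'total': total_pprocs,
                 'allocation_ratio': conf.get('cpu_allocation_ratio') or 16.0},
        'MEMORY_MB': {'total': total_mem_mb,
                      'allocation_ratio': conf.get('ram_allocation_ratio') or 1.5},
        'DISK_GB': {'total': total_disk_gb,
                    'allocation_ratio': conf.get('disk_allocation_ratio') or 1.0},
    }

# 8 physical procs at the 16.0 default -> the scheduler will place up to
# 8 * 16 = 128 VCPUs of allocations; with ratio 20.0 it would be 160
inv = build_inventory(8, 65536, 2048)
print(inv['VCPU']['total'] * inv['VCPU']['allocation_ratio'])
```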
21:56 <edmondsw> there are other ways
21:56 <edmondsw> "use" in what sense?
21:57 <efried> in the sense of specifying it to placement in the inventory record's `allocation_ratio` field.
21:57 <efried> because that's what happens today
21:58 <efried> because the resource tracker uses get_available_resource plus those conf values to generate placement inventory records for us
21:58 <efried> because we have not implemented get_inventory or update_provider_tree yet.
21:58 <edmondsw> so... change nothing?
21:58 <efried> once we implement those, that automatic generation of inventory records goes away; we're responsible for doing it.
21:59 <efried> yeah, change nothing behaviorally by duplicating the existing logic in our update_provider_tree impl.
21:59 <efried> i.e. start explicitly lying about inventory in the same way as we were implicitly lying about it before.
21:59 <efried> which seemed worse to me, which is why I wanted to have this conversation.
21:59 <edmondsw> you've got me worried now that the existing logic is fatally flawed
21:59 <efried> yeah. It is. It's completely bogus. Unequivocally.
22:00 <efried> Today if that conf option is not specified, we are allowing the nova scheduler to attempt to schedule instances to a host for a total of (# physical procs * 16) VCPUs.
22:01 <efried> which is completely arbitrary and guaranteed to be wrong almost no matter what kind of Power-ish flavor is being used.
22:01 <edmondsw> change that to (# physical procs * 20) VCPUs?
22:02 <efried> now, if we go over *actual* capacity, we'll fail the deploy from within spawn(). Which will cause a reschedule. Which, as of I think Queens, will retry up to two other hosts in the same cell before dying.
22:02 <edmondsw> otherwise someone using 0.05 proc units per VCPU is going to start getting issues where the scheduler thinks it's full and it's not
22:02 <edmondsw> efried correct
22:02 <efried> why change it? But if we're going to change it, proc_units_factor defaults to 0.1, so we should change it to 1 / proc_units_factor.
22:03 <efried> Which will at least make us correct if no flavors use dedicated procs or proc_units.
22:03 <edmondsw> "defaults", not "min"
22:03 <edmondsw> min is 0.05
22:03 <efried> right, but we'll never use 0.05
22:03 <edmondsw> yeah we would... why don't you think so?
22:04 <efried> unless proc_units_factor is 0.05 *and* the flavor doesn't say dedicated_proc=True or proc_units=X
22:04 <edmondsw> remember proc_units_factor isn't used if you specify proc_units in the flavor extraspec
22:04 <efried> exactly my point.
22:04 <edmondsw> so you can't base on proc_units_factor
22:04 <edmondsw> because it may not be used
22:04 <efried> If dedicated_proc=True and/or proc_units is specified in the flavor, all bets are off and we have no way to calculate a reasonable allocation ratio at all.
22:05 <efried> using proc_units_factor is the only thing we could possibly attempt to do dynamically (without "Solution Rebalance" - discussion for later).
22:05 <edmondsw> my point of using 0.05 is that it's the min
22:05 <edmondsw> so scheduler is never saying "you're full" when you might not be
22:07 <edmondsw> I guess if we use proc_units_factor we can tell folks to change that to 0.05 to avoid that issue
22:07 <efried> this on the off chance that you've said proc_units=0.05 somewhere?
22:07 <edmondsw> annoying as it is to have to change on every host
22:07 <edmondsw> I'd say it's more than an off chance, but yes
22:08 <efried> Look, right now with the default of 16.0, that corresponds to a micropartition of 0.0625, which is close enough to 0.05 for government work.
22:09 <efried> Unless you really envision a scenario where *all* the VMs deployed on a host are going to be using 1VCPU=0.05pproc *and* we're going to get real close to filling that host up, that seems adequate for me.
22:11 <efried> Cause we also need to talk about reserved. Which again, nova is setting for us based on conf values, and which again have zero relation to what we should actually be reserving (for the nvl/VIOSes and any OOB LPARs).
22:14 <efried> rn those conf values default to 0 for disk and cpu, 512 for mem.
22:16 <efried> and btw, they reserve cpu equal to the amount of that conf value, not equal to that x allocation_ratio, yet the doc states it's a physical proc. For us, we would have to figure the calculation according to the number of actual physical procs, factored against whatever allocation ratio we set.
22:20 <edmondsw> why aren't they reserving based on what is actually used?
22:21 <edmondsw> maybe they mean something different for reserved than I'm thinking?
22:24 <efried> "They" who?
22:25 <efried> The reserved amount is supposed to represent the amount of processing power the "hypervisor" is using.
22:25 <edmondsw> ok, that's what I was thinking it should mean
22:25 <efried> In the libvirt model, the hypervisor is just... running.
22:25 <efried> it's using whatever proc/mem it can get its hands on. sharing it all with the VMs.
22:25 <efried> So by setting a reserved value you're essentially limiting the number of VMs so that there's not so many of them that it prevents the hypervisor stuff from running properly.
22:26 <edmondsw> same applies for PowerVM
22:26 <edmondsw> of course we have the NovaLink LPAR to factor into the equation, but there is still phyp beyond that
22:27 <efried> hm, I thought whatever phyp uses doesn't count.
22:27 <edmondsw> why wouldn't it count?
22:27 <efried> like, I thought whatever phyp exposes to us is already whatever's left over after it has taken what it needs.
22:27 <edmondsw> and of course there may be VIOS as well
22:27 <efried> So like, I thought we were just gonna add up the nvl and VIOSes and OOB partitions.
22:28 <edmondsw> that would be news to me... maybe
22:28 <efried> anyway, for now I think we should again just duplicate the existing bogosity and leave TODOs to do things properly later.
22:31 <edmondsw> efried I've pip installed and pip3 installed remote_pdb, and even tox -r, but I'm still getting import errors on it...
22:31 <edmondsw> any ideas?
22:32 <efried> You installed it in your venv?
22:32 <efried> when you tox -r you would be blowing it away.
22:32 <edmondsw> I can't find a venv... I assumed tox must be doing that under the covers now
22:32 <efried> Here's how I do it:
22:32 <edmondsw> there used to be a .venv directory, but no more
22:33 <efried> bash  # create a new shell so I can ^D back to sanity
22:33 <efried> . .tox/py27/bin/activate  # enter the venv
22:33 <efried> pip install remote_pdb
22:33 <efried> what repository are you working in?
22:33 <edmondsw> ok, I think you told me what I needed
22:33 <edmondsw> activate is in .tox/py27/bin
22:37 <efried> yes. But I think we did combine the venvs recently, so depending what actual commit you're on, it could be s/py27/something else/
22:40 <edmondsw> that worked, tx
22:40 <edmondsw> signing off...
<openstackgerrit> Eric Fried proposed openstack/nova-powervm master: WIP: update_provider_tree with device exposure
22:57 <efried> edmondsw: FYI ^
23:31 *** efried has quit IRC
23:31 *** efried has joined #openstack-powervm

Generated by irclog2html 2.15.3 by Marius Gedminas - find it at!