Thursday, 2020-01-09

14:00 <efried> #startmeeting nova
14:00 <openstack> Meeting started Thu Jan  9 14:00:16 2020 UTC and is due to finish in 60 minutes.  The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: nova)"
14:00 <openstack> The meeting name has been set to 'nova'
14:00 <efried> #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Weekly_Nova_team_meeting
14:00 <gibi> o/
14:00 * gibi is on a parallel meeting so a bit on and off
14:01 <alistarle> o/
14:01 <alex_xu> o/
14:03 <sean-k-mooney> o/
14:03 <kaisers> o/
14:03 <huaqiang> o/
14:03 <lyarwood> \o
14:03 <efried> Okay, let's get started.
14:03 <efried> Welcome back
14:03 <efried> #topic Last meeting
14:03 <efried> #link Minutes from last meeting: http://eavesdrop.openstack.org/meetings/nova/2019/nova.2019-12-12-14.00.html
14:03 *** openstack changes topic to "Last meeting (Meeting topic: nova)"
14:05 <efried> #topic Bugs (stuck/critical)
14:05 <efried> No Critical bugs
14:05 *** openstack changes topic to "Bugs (stuck/critical) (Meeting topic: nova)"
14:05 <efried> stop me if you have things to say on a topic.
14:05 <efried> #topic Reminders
14:05 <efried> 5 weeks to spec freeze.
14:05 *** openstack changes topic to "Reminders (Meeting topic: nova)"
14:06 <efried> #link ussuri open specs https://review.opendev.org/#/q/project:openstack/nova-specs+status:open+path:%255Especs/ussuri/approved/.*
14:06 <efried> couple dozen there.
14:06 <sean-k-mooney> efried: has your vtpm spec merged
14:06 <sean-k-mooney> you addressed stephenfin's nits, right?
14:06 <efried> sean-k-mooney: not yet. stephenfin wanted that little update yesterday and said he would fast approve, but it looks like he's been pulled away
14:07 <efried> yes
14:07 <sean-k-mooney> ok cool
14:07 <sean-k-mooney> I'll ping him after the meeting
14:07 <efried> thanks sean-k-mooney
14:07 <efried> gibi: minor update to that if you want to re+2 for form's sake.
14:07 <efried> #link vTPM spec https://review.opendev.org/686804
14:08 <gibi> efried: ack
14:08 <efried> also
14:08 <efried> #link ussuri blueprints https://blueprints.launchpad.net/nova/ussuri
14:09 <efried> which may not be the same list, because not every blueprint needs a spec; and it's possible some of those specs don't have blueprints (they should - that would be a miss by the author) though I haven't checked.
14:09 <efried> #action efried to paw through blueprints and specs and reconcile
14:09 <efried> unless someone else wants to volunteer for that drudgery ^
14:09 <efried> I know, way to sell it.
14:09 <sean-k-mooney> efried: do we intend to still do a design/direction assessment of approved blueprints when we reach spec freeze?
14:10 <sean-k-mooney> it feels like we have fewer than last cycle
14:10 <efried> Agree, but I still think it would be prudent to prune and prioritize
14:10 <efried> especially since we've lost half of the core team since December.
14:11 <efried> by which I mean mriedem
14:11 <sean-k-mooney> ack just confirming we will still do that when we have hit spec freeze
14:11 <efried> there are 17
14:11 <efried> #link ussuri merged specs https://review.opendev.org/#/q/project:openstack/nova-specs+status:merged+path:%255Especs/ussuri/approved/.*
14:12 <efried> and IIRC we were talking about a cutoff in the neighborhood of 30
14:12 <brinzhang_> https://review.opendev.org/#/q/owner:%22Brin+Zhang+%253Czhangbailin%2540inspur.com%253E%22++project:openstack/nova-specs++status:open
14:12 <efried> so if we follow that, we would expect approx a dozen to be cut.
14:12 <brinzhang_> hi, I have some specs, could you review them? or could we talk about them now?
14:13 <efried> Sure, now is as good a time as any :)
14:13 <efried> anything specific to discuss here brinzhang_, or just a general request for reviews?
14:14 <sean-k-mooney> it looks like each is a small independent feature
14:14 <brinzhang_> https://review.opendev.org/#/c/580336/ for this one I want to know whether I should continue it; I have pushed its PoC code already
14:14 <sean-k-mooney> am I correct in saying there are no dependencies between these specs?
14:14 <brinzhang_> https://review.opendev.org/#/c/693828/
14:15 <alex_xu> the concern for 580336 is that swap_volume is an admin API
14:15 * lyarwood reads
14:15 <brinzhang_> sean-k-mooney: yeah, it's small, but I think it needs to continue
14:15 <alex_xu> one way is adding a separate policy rule for the delete_on_terminate update
14:16 <sean-k-mooney> alex_xu: could you note that in the spec?
14:17 <brinzhang_> yeah, this is a way to avoid the policy rule
14:17 <alex_xu> sean-k-mooney: I noted that in a previous PS
14:17 <sean-k-mooney> ah ok
14:18 <efried> Anything else on specs/blueprints for now?
14:18 <efried> huaqiang: this would probably be a better time to bring up yours
14:19 <huaqiang> thanks efried
14:19 <efried> [Huaqiang] Spec: Use PCPU and VCPU in one instance: https://review.opendev.org/#/c/668656/. We have agreements from the PTG meeting on the following aspects, but during the recent review process some different opinions were raised:
14:19 <efried> Specifying an instance dedicated CPU mask vs specifying a CPU count. sean-k-mooney raised a flavor example for the CPU mask proposal in this place: https://review.opendev.org/#/c/668656/14/specs/ussuri/approved/use-pcpu-vcpu-in-one-instance.rst Line #146
14:19 <efried> Keeping the interface of creating a 'mixed' instance through 'resources:PCPU' and 'resources:VCPU' vs removing this interface. In L253, sean-k-mooney thinks that keeping the interface will cause issues with the NUMA mechanism.
14:19 <huaqiang> as stated, we have agreements from the PTG meeting
14:19 <efried> it would probably be best to have stephenfin in the room if we're going to make progress on this
14:19 <huaqiang> but I think sean-k-mooney's comments are pretty reasonable
14:20 <huaqiang> agree
14:20 <sean-k-mooney> my concern with the resources: syntax is that it couples the flavor to the current modelling in placement
14:20 <sean-k-mooney> so if that evolves we don't have a layer of indirection to hide that from our users, so it's fragile
14:21 <efried> I haven't been paying attention to this
14:21 <efried> but
14:21 <efried> if the question is whether we should be using placement-ese vs. flavor-ese syntax,
14:21 <alex_xu> yeah, we don't have consistent agreement here. we support pcpu with the resources syntax, but we don't support vpmem with the resources syntax
14:21 <efried> I want the latter
14:22 <efried> I recognize we enabled support for the former for a few things, but I think we should move away from that in general.
14:22 <sean-k-mooney> I have a strong preference for the flavor-ese approach too.
14:22 <efried> Two main reasons
14:22 <alex_xu> I'm ok with flavor-ese also
14:23 <huaqiang> Another point for discussion is specifying a CPU mask for dedicated CPUs
14:23 <efried> 1) Placement-ese syntax is hard. It's powerful for the API, but tough for humans to generate sensibly. Specifically in the case of CPU type stuff, they're already familiar with the flavor-ese syntax
14:24 <efried> 2) We will sometimes/often want non-placement side effects to be triggered by these options.
14:24 <efried> In some cases we would be able to guess what those should be, but in some cases we'll need separate knobs
14:24 <efried> and in those cases we would end up with a mix of placement- and flavor-type syntax, which is icky.
14:24 <alex_xu> yeah, especially after we have NUMA in placement
14:25 <efried> for sure ^
14:25 <sean-k-mooney> huaqiang: so to the second point on mask vs count
14:25 <sean-k-mooney> I prefer a mask or list of pinned cpus so that within the guest I have a way of knowing which cpus are pinned, so I can configure my guest accordingly
14:25 <huaqiang> yes. need input for these two
14:26 <efried> sean-k-mooney: are we talking about enumerating the *host* CPU IDs or the *guest* CPU IDs?
14:26 <sean-k-mooney> if we have a count we need to either discover the info dynamically via the metadata api or have a convention
14:26 <sean-k-mooney> guest
14:27 <alex_xu> efried: guest CPU IDs
14:27 <sean-k-mooney> so I was suggesting a mask so you could say core 0 floats and the rest are pinned, so you could in the guest use core 0 for the OS and 1-n for the application
14:27 <alex_xu> or just a bitmask
14:27 <efried> I remember talking about this. We need some way for the guest to discover which CPUs are dedicated. I'm not sure anything in the flavor helps us achieve that.
14:27 <dansmith> efried: by placement-ese you mean things like resources:THING=1 ?
14:27 <efried> dansmith: yes
14:28 <dansmith> and flavor-ese being what?
14:28 <efried> hw:numa_widgets=4
14:28 <sean-k-mooney> hw:whatever=
14:28 <dansmith> sorry I'm just jumping in here, but what's the exact example here, I was thinking pinning of some sort?
14:29 <sean-k-mooney> yes, so we are discussing mixed cpus, so a vm with some shared/floating cpus and some pinned
14:29 <efried> In this specific case we're talking about mixing PCPUs and VCPUs in one instance
14:29 <efried> and how the flavor should specify those.
14:29 <dansmith> vcpu and pcpu together, but what is the knob we'd be turning in that case
14:30 <efried> well, we're discussing whether the syntax should allow you to specify CPU IDs or just a count. If the former, obv placement-ese doesn't work.
14:30 <dansmith> right, so that was going to be my point:
14:30 <efried> but even if the latter, I don't like the idea of using placement-ese for reasons
14:30 * bauzas waves after the meeting he had
14:31 <dansmith> for cases where we're specifying traits and simple resource counts, I see the placement-ese as a major improvement over how things have been in the past, simply because it's simple and standard
14:32 <dansmith> are you saying you want to reverse course on resource and trait specifications in that syntax entirely?
14:33 <sean-k-mooney> right, if we did resources=VCPU:2,PCPU:6 that is specifically an un-numbered group, which means if we model pcpus per NUMA node and have a two-NUMA-node guest you would have to change the syntax to use the numbered form
14:33 <efried> I'm not saying we should remove support for placement-ese syntax. I'm saying that for new work, I think we should get in the habit of not using it when it's more than just a scheduling filter.
14:33 <dansmith> efried: why is that? meaning, what are the "reasons" ?
14:33 <efried> iow I don't like it when placement-ese syntax results in side effects, e.g. guest config
14:34 <sean-k-mooney> whereas if we did hw:pinned_cpus=2-4,5-7 hw:numa_node=2
14:34 <efried> like trait:CAN_DO_THING=required resulting in <thing>on</thing>
14:34 <sean-k-mooney> then we can keep the flavor the same and change the placement query depending on whether we have NUMA in placement or not
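To make the point concrete: a rough sketch (extra specs shown as a Python dict purely for illustration) of how a bare resources: request would have to be respelled if NUMA were modelled in placement, versus an hw:-style flavor that nova could translate itself. The numbered form follows placement's granular request-group syntax; hw:pinned_cpus is only the placeholder name used in this discussion, not an agreed extra spec.

    # Today: one un-numbered request against the compute node provider.
    flavor_today = {
        "resources:PCPU": "6",
        "hw:numa_nodes": "2",
    }

    # If PCPU inventory moved to one provider per NUMA node, the same intent
    # would have to become numbered (granular) request groups instead:
    flavor_numa_in_placement = {
        "resources1:PCPU": "3",
        "resources2:PCPU": "3",
        "group_policy": "isolate",
    }

    # An hw:-style spec lets nova rewrite the placement query itself while
    # the flavor stays unchanged (hw:pinned_cpus is a placeholder name).
    flavor_hw_style = {
        "hw:pinned_cpus": "2-7",
        "hw:numa_nodes": "2",
    }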
14:35 <efried> dansmith: see 1) and 2) at time stamp :23:52
14:35 <dansmith> efried: okay, well, that's something I can follow I guess. I'm not sure why it's bad though.. making the users and ops know the distinction of "well, I asked for some resource, why don't I have it?" or "why do I have to specify things in this format if it's devicey and not if not?"
14:35 <efried> I've gone into this reasoning more eloquently/completely in various reviews as well
14:36 <efried> dansmith: precisely.
14:36 <dansmith> what, you think making them realize the difference is a good thing?
14:36 <efried> Supporting pass-through placement-ese which is sometimes side-effecty and sometimes not is bad. So I'm saying we shouldn't be promoting/documenting that pass-through-ness.
14:37 <efried> If you want to do a specific thing, follow the documentation for the syntax for that thing
14:37 <efried> and if we decide a *specific* placement-ese trait or resource k=v is appropriate for that, it'll be in the docs
14:37 <efried> though in general I would prefer to avoid even that
14:38 <sean-k-mooney> efried: you don't want two otherwise identical vms that land on the same host to have different side effects if one requested a trait and the other did not, right?
14:38 <efried> I guess that's one way to think of it.
14:39 <efried> I want trait:FOO=required to make you land on a host with FOO. I want hw:make_foo_happen to a) add required=FOO to the placement request *and* b) turn <foo>on</foo> in the guest XML.
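A rough sketch of the distinction being drawn here, shown as flavor extra specs in a Python dict for readability; CAN_DO_THING and hw:make_foo_happen are the hypothetical names from the exchange above, not real traits or extra specs.

    # Placement-ese: passed straight through to the placement query; on its
    # own it implies nothing about guest configuration.
    placement_ese = {
        "trait:CAN_DO_THING": "required",   # hypothetical trait
        "resources:PCPU": "4",
    }

    # Flavor-ese: a higher-level knob that nova translates into *both* the
    # placement request and the matching guest XML side effect.
    flavor_ese = {
        "hw:make_foo_happen": "true",       # hypothetical knob
    }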
14:39 <dansmith> okay, I guess I have a major reaction to what seems to be reverting to the "old" non-standard, confusing has-changed-and-broken-people way of doing that, where individual patch authors make up their own syntax for each feature, and placement syntax being strict and simple was a blessing to me
14:40 <alex_xu> will placement syntax be simple after we have numa?
14:41 <efried> strict and simple, but not always powerful enough.
14:41 <sean-k-mooney> ok, so in the interest of time let's loop back to the mask vs count question
14:41 <dansmith> no, and I get that it's not powerful enough as it is
14:41 <efried> So whether we decide to use
14:41 <efried> - $pinned_count, a convention saying e.g. the first $pinned_count guest CPU IDs are pinned and the remaining ($total - $pinned_count) are shared; or
14:41 <efried> - a $pinned_mask
14:41 <efried> ...there needs to be some way for the guest to find out either $pinned_count or $pinned_mask. What did we decide that mechanism would be? Config drive?
14:41 <dansmith> yep, sorry, I just literally flipped on the monitor when I woke up and saw the placement-ese thing, didn't mean to derail
14:42 <sean-k-mooney> should we allow the flavor to say which cores are pinned and which float (with a mask or list), or how many cores are pinned (possibly per NUMA node) and have the driver decide which ones are?
14:42 <efried> dansmith: it is relevant, specifically one of the topics on the agenda. So we need to continue the discussion. But maybe we can take it back to -nova after the meeting.
14:43 <sean-k-mooney> I prefer hw:pinned_cpus=2-4 where core 0 is floating, but the alternative would be hw:pinned_cores_count=3
14:44 <sean-k-mooney> the extra specs proposed in the spec are different but the names don't matter right now
14:44 <brinzhang_> sean-k-mooney: I think it's a good idea, we should know which vcpu is pinned to which numa node
14:44 <efried> yes
14:45 <efried> maybe not immediately
14:45 <sean-k-mooney> dansmith: efried: effectively I'm suggesting we just use the syntax from vcpu_pin_set
14:45 <efried> I'm in favor of that
14:45 <sean-k-mooney> but the cores you list would be the logical guest cores, not the host cores
14:46 <efried> might be nice if the name indicated that somehow, e.g. by including '_guest_'. But we can bikeshed that.
14:46 <efried> But my question remains:
14:46 <efried> how does the guest *find out*?
14:46 <alex_xu> there is a proposal for the metadata API
14:46 <sean-k-mooney> in the spec I believe the metadata api was going to be extended to include it
14:47 <sean-k-mooney> also I assume the topology API we just added could also be extended
14:47 <efried> there wasn't some kind of magic where it shows up automatically in sysfs?
14:47 <efried> or procfs
14:47 <sean-k-mooney> no
14:47 <alex_xu> no
14:47 <efried> okay.
14:47 <dansmith> wait, what?
14:47 <dansmith> the guest will know about its topology in the usual fashion, right? isn't that what efried is asking?
14:47 <efried> dansmith: the guest needs to be able to figure out which of its CPUs are pinned and which shared.
14:47 <dansmith> oh, I see
14:48 <dansmith> yeah, that has to be metadata
14:48 <sean-k-mooney> dansmith: yes but there is no way to tell the kernel what is pinned or not
14:48 <dansmith> right
14:48 <efried> okay cool.
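As a sketch of the direction agreed here: assuming the metadata API (or config drive) is extended as proposed to advertise the dedicated guest CPUs, a guest could discover its pinned cores roughly like this. The dedicated_cpus key is invented for the example; the real key would be whatever the spec settles on.

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

    def guest_dedicated_cpus():
        """Return the set of guest CPU IDs advertised as pinned, if any."""
        with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
            meta = json.load(resp)
        # "dedicated_cpus" is a hypothetical key for this sketch.
        return set(meta.get("dedicated_cpus", []))

    if __name__ == "__main__":
        print("pinned guest CPUs:", sorted(guest_dedicated_cpus()))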
14:48 <efried> did we land?
14:49 <huaqiang> so, do we have a conclusion? count or mask?
14:49 <sean-k-mooney> I think we landed on using extra spec/flavor syntax with a list like vcpu_pin_set. brinzhang_? does that work for you alex_xu?
14:49 <alex_xu> works for me
14:49 <brinzhang_> me too
14:50 <huaqiang> thanks
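Since the group landed on a vcpu_pin_set-style list of logical guest CPUs, here is a minimal sketch of parsing that syntax; hw:pinned_cpus is again only the placeholder name used in the discussion above.

    def parse_cpu_list(spec):
        """Expand a vcpu_pin_set-style string, e.g. "2-4,6" -> {2, 3, 4, 6}."""
        cpus = set()
        for chunk in spec.split(","):
            chunk = chunk.strip()
            if not chunk:
                continue
            if "-" in chunk:
                start, end = (int(x) for x in chunk.split("-", 1))
                cpus.update(range(start, end + 1))
            else:
                cpus.add(int(chunk))
        return cpus

    # e.g. an 8-vCPU flavor where guest cores 2-4 and 6 are pinned and the rest float
    extra_specs = {"hw:pinned_cpus": "2-4,6"}   # placeholder key name
    assert parse_cpu_list(extra_specs["hw:pinned_cpus"]) == {2, 3, 4, 6}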
14:50 <efried> I guess the question remains whether we should (or can) block the placement-ese syntax
14:51 <dansmith> so, based on what you said above (I think):
14:51 <sean-k-mooney> well, we can, but I'm not sure if we should
14:51 <efried> IIRC stephenfin's PCPU work in train made a big deal of supporting that specifically
14:51 <efried> so this would be a step backwards
14:51 <dansmith> you're okay with hw: implying resources, but not resources implying hw: stuff?
14:51 <efried> I must not have complained loudly enough.
14:51 <efried> dansmith: yes, exactly that.
14:52 <dansmith> what would be the blocking then? if they specify any pcpu in resources they're toast?
14:52 <efried> because resources syntax isn't always powerful enough to express hw: stuff so we would need to mix in hw: syntax anyway in those cases, which IMO is more confusing
14:52 <efried> Yeah, that's what I'm trying to noodle. Because I think stephenfin's work in train may have made it so resources:PCPU does in fact imply some (simple) hw:-y stuff.
14:53 <sean-k-mooney> it did, which I did not like either
14:53 <dansmith> right, just asking for one PCPU:1 gets you one pinned cpu with minimal opinion on how, right?
14:54 <efried> So I think we're okay actually if we just leave support for that (the train syntax) but explode if you try to *mix* with placement-ese.
14:54 <efried> IOW if you specify both resources:PCPU and resources:VCPU, boom
14:54 <sean-k-mooney> ya, it will, but PCPU:2 hw:numa_nodes=2 would break if we had numa in placement
14:54 <efried> that too
14:54 <dansmith> efried: I don't think that works, because vcpu is always specified by the flavor's own property, and we've said they can override that with VCPUs and PCPUs right?
14:55 <efried> dansmith: I don't have the details swapped in, but stephenfin wrote some intricate and very specific checks for those permutations.
14:55 <sean-k-mooney> well, we translate flavor.vcpus into either PCPUs or VCPUs
14:55 <sean-k-mooney> and you can override them, but yes
14:55 <sean-k-mooney> stephenfin also added a lot of checks for edge cases
14:55 <efried> ...but I'm pretty sure they included checks so that if you wound up with both PCPUs and VCPUs it would fail.
14:56 <efried> Oh, right, because there's a hard restriction on flavor.vcpus=0
14:56 <sean-k-mooney> yes i think that is blocked currently
14:56 <efried> or am I confusing that with something else?
14:57 <dansmith> yeah, I'm talking about the more general rule we've had (specifically for ironic) where overriding the resources specified in the flavor itself are done with the resources syntax,
14:57 <dansmith> so it would be weird in this case to have that not be a thing because we're trying to ban specifying both or something
14:57 <sean-k-mooney> dansmith: yes, in that case you set resources=0 rather than the flavor values
14:58 <efried> ...so rather than lifting that restriction, we decided you had to specify vcpus=$count and then also specify resources:PCPU=$count (the same $count) if you wanted them to be pinned.
14:58 <efried> So we would either need to lift that restriction, or make some new rule about e.g. resources:VCPU=$s,resources:PCPU=$p, flavor.vcpus=$(s+p) (ugh)
14:58 <sean-k-mooney> I think we can proceed with the feature without specifically blocking the overrides and just document "don't do this"
14:59 <dansmith> efried: the easy thing to allow is the case where you don't specify VCPUs or you do and the math is right, right?
14:59 <efried> dansmith: we may be overthinking it. Today I'm 95% sure if you specify both resources:PCPU and resources:VCPU you'll explode. Keep that.
14:59 <efried> yeah
14:59 <efried> oh
14:59 <efried> um
14:59 <efried> you mean "specify VCPUs" in placement-ese?
14:59 <efried> No
15:00 <dansmith> well, whatever, it just seems like we're doing the thing I'm concerned about, which is that every feature has its own set of complex rules for this, and things go up in smoke if you ever need to combine two complex features :/
15:00 <dansmith> yes, that's what I meant
15:00 <efried> Time.
15:00 <efried> Let's continue in -nova.
15:00 <efried> #endmeeting
15:00 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
15:00 <openstack> Meeting ended Thu Jan  9 15:00:14 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-01-09-14.00.html
15:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-01-09-14.00.txt
15:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-01-09-14.00.log.html
15:00 <gagehugo> #startmeeting security
15:00 <openstack> Meeting started Thu Jan  9 15:00:26 2020 UTC and is due to finish in 60 minutes.  The chair is gagehugo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00 *** openstack changes topic to " (Meeting topic: security)"
15:00 <openstack> The meeting name has been set to 'security'
15:00 <gagehugo> #link https://etherpad.openstack.org/p/security-agenda agenda
15:01 <gagehugo> o/
15:01 <fungi> light agenda again this week
15:02 <mhen> o/
15:05 <gagehugo> #topic open discussion
15:05 *** openstack changes topic to "open discussion (Meeting topic: security)"
15:06 <gagehugo> floor is open, light agenda today
15:08 <fungi> one thing worth thinking about
15:09 <fungi> once the vulnerability:managed policy update lands, that'll be a good opportunity for a review of currently covered projects against the remaining requirements
15:10 <gagehugo> Good point
15:10 <fungi> #link https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
15:11 <fungi> stuff like making sure that's updated, and teams have a reasonable number of members in them, and that defect trackers are configured so that private security issues are initially only shared with them and/or the vmt
15:12 <fungi> also closely scrutinize any multi-repo deliverables with the tag
15:13 <fungi> and make sure covered deliverables are marked as following some sort of release model
15:15 <gagehugo> ok
15:15 <fungi> oh, and a big one
15:15 <fungi> the vmt is going to need to declassify a bunch of long-private reports of suspected vulnerabilities once the 90-day limit goes into effect
15:16 <fungi> so we'll have a bunch of those to talk about when that happens, i expect
15:16 <gagehugo> yes that too
15:16 <fungi> as soon as that update goes into effect, we'll leave a consistent comment on all currently private security bugs
15:17 <fungi> and start the 90-day countdown
15:17 <fungi> also we probably should update our embargo preamble template with those details so new reports include the embargo limit timeframe
15:18 <gagehugo> sure
15:18 * fungi makes a to do note
15:21 <gagehugo> couple things to do then
15:22 <fungi> yeah, i've added them to my personal to do list, but that doesn't necessarily mean i have to be the one to do them
15:23 <gagehugo> I can tackle some in my spare time
15:23 <fungi> volunteers welcome (though to update still-embargoed vulnerabilities the volunteer needs to also volunteer to be on the vmt)
15:23 <fungi> (or already be on the vmt, sure)
15:26 <gagehugo> yup
15:26 <gagehugo> mhen: you have anything?
15:26 <mhen> nope
15:27 <gagehugo> mhen: fungi thanks for coming, have a good weekend!
15:27 <gagehugo> #endmeeting
15:27 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
15:27 <openstack> Meeting ended Thu Jan  9 15:27:15 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:27 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/security/2020/security.2020-01-09-15.00.html
15:27 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/security/2020/security.2020-01-09-15.00.txt
15:27 <openstack> Log:            http://eavesdrop.openstack.org/meetings/security/2020/security.2020-01-09-15.00.log.html
15:31 <fungi> thanks gagehugo!

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!