16:00:47 #startmeeting nova
16:00:47 Meeting started Tue Sep 20 16:00:47 2022 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:47 The meeting name has been set to 'nova'
16:00:54 hey all
16:00:54 o/
16:00:55 o/
16:01:07 #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:01:46 let's start, people will come
16:01:53 #topic Bugs (stuck/critical)
16:02:02 #info No Critical bug
16:02:11 #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 5 new untriaged bugs (+0 since the last meeting)
16:02:19 #link https://storyboard.openstack.org/#!/project/openstack/placement 26 open stories (+0 since the last meeting) in Storyboard for Placement
16:02:25 #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:02:37 sean-k-mooney: any bug you want to discuss here ?
16:02:46 I saw you triaged some
16:03:02 no, not specifically
16:03:10 o/
16:03:49 cool
16:03:50 https://bugs.launchpad.net/nova/+bug/1989894 is a trivial fix
16:04:01 2 invalid and one incomplete
16:04:14 i looked at the open bugs yesterday
16:04:18 i have not looked today
16:04:19 sean-k-mooney: thanks for your work
16:04:23 sean-k-mooney: nothing changed
16:04:27 cool
16:04:37 elodilles: fancy getting the baton this week ?
16:04:50 well, there are release duties, but let's try
16:05:12 bauzas: actually one i should highlight briefly
16:05:16 https://bugs.launchpad.net/nova/+bug/1989357 is an rfe
16:05:26 or rfe request
16:05:31 you mean a blueprint request :)
16:05:36 yes
16:05:38 it should be a spec
16:05:39 we name it Wishlist :)
16:05:59 designate would like changing the instance.hostname via an update
16:06:11 to propagate to the neutron ports/floating ip
16:06:15 o/
16:06:18 and therefore into designate
16:06:20 that reminds me of a story :)
16:06:32 so i think this could be a use case to consider in the fqdn saga
16:06:57 that's all i wanted to say on that
16:07:02 agreed and we have a PTG topic for this
16:07:05 so really a highlight to artom
16:07:25 * artom meerkats
16:07:33 sean-k-mooney: it'd be kind if you could mention the fact that we will discuss the use case at the PTG
16:07:34 Eh, wha, who, when?
16:07:53 artom: it's all your fault, tl;dr
16:08:01 It usually is
16:08:06 sean-k-mooney: but I can write this in the bug report
16:08:15 bauzas: well i could but it's a new use case to include in the discussion
16:08:25 I guess they'd be interested in sharing their use case
16:08:29 but yes i'll fill artom in later
16:08:31 sean-k-mooney: I don't disagree
16:08:44 i think we can move on for the meeting
16:08:47 yup
16:09:10 elodilles: so, about your release duties, yeah that doesn't really help
16:09:19 we can flip if you wish
16:09:29 bauzas: probably no need to flip
16:09:32 * bauzas goes looking at the next person in the roster
16:09:34 i'll try
16:09:51 oh heh, that's me
16:10:29 elodilles: we can flip if you wish
16:10:49 OK, thanks, if you insist :D
16:11:04 elodilles: just because I want you to have no reason to punt this
16:11:12 :p
16:11:17 ok, then
16:11:28 #info bug baton is being passed to bauzas
16:11:35 thanks o/
16:11:35 next,
16:11:43 #topic Gate status
16:11:48 #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:11:55 #link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly Placement periodic job status
16:12:01 #link https://zuul.openstack.org/builds?job_name=tempest-integrated-compute-centos-9-stream&project=openstack%2Fnova&pipeline=periodic-weekly Centos 9 Stream periodic job status
16:12:07 #link https://zuul.opendev.org/t/openstack/builds?job_name=nova-emulation&pipeline=periodic-weekly&skip=0 Emulation periodic job runs
16:12:13 all of the above is solid green
16:12:32 so, moving on ?
16:13:37 looks so
16:13:44 #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:13:49 #info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
16:13:57 voila
16:14:13 next,
16:14:19 #topic Release Planning
16:14:25 #link https://releases.openstack.org/zed/schedule.html
16:14:42 even if we opened the Antelope series, we're still officially on Zed :)
16:15:02 #info RC1 was last Thursday
16:15:07 #info RC2 is planned this Thursday as we found one regression
16:15:23 as said, the regression was identified and the bugfix delivered on time
16:15:41 \o/
16:15:49 the stable/zed backport is merged, so this is now just a matter of holding the RC2 patch for a bit of time
16:15:49 and that bug slipped in because we don't test our lower constraints
16:16:01 gibi: good point
16:16:05 hmmm
16:16:20 we forgot to bump the os-traits dep when we started depending on 2.8.0 from it
16:16:20 as a reminder, the master branch is now the antelope series
16:17:24 as a reminder too, backports to stable/zed and later releases are consequently held until Zed is officially released in two weeks (Oct 5th)
16:17:34 gibi: let's discuss this at the PTG
16:17:45 bauzas: good point, I can add a topic
16:17:54 gibi: appreciated, thanks
16:18:21 any question about RCs or anything else ?
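For context on the lower-constraints slip discussed above: the class of fix is a minimum-version bump in the project's dependency declarations. The exact file location and comment below are illustrative assumptions, not the actual Nova patch:

```
# Illustrative requirements.txt fragment (not the real Nova change):
# once code starts using a symbol introduced in os-traits 2.8.0, the
# declared minimum must be raised too, otherwise installs pinned to the
# old lower bound break silently.
os-traits>=2.8.0
```

Testing against lower constraints would have caught the gap before release, which is why gibi raised it as a PTG topic.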
16:18:38 hah, also, I created the Launchpad antelope series
16:18:47 (still TBD for novaclient)
16:19:10 specs also have their antelope directory
16:19:36 so, even if we aren't officially on Antelope, people shouldn't feel constrained about discussing the next release
16:19:55 well we are
16:20:02 master is Antelope currently
16:20:12 sean-k-mooney: from a git PoV, yes
16:20:23 and from a schedule point of view
16:20:30 sean-k-mooney: from an official release calendar, we aren't :D
16:20:31 that is why i asked you to update launchpad
16:20:46 https://releases.openstack.org/antelope/schedule.html
16:20:59 we're in a grey period of time
16:21:04 but anyway
16:21:22 we're in a strong consensus, nothing prevents us from moving forward and proposing specs and blueprints
16:21:52 that said
16:21:58 yep and even merging things but hold off on large refactors
16:22:05 (personally, I should try to write some spec next week)
16:22:08 with that said i want to merge the new defaults change soon
16:22:27 +1 to land the default changes soon
16:22:35 we're officially entering the tick-tock cadence btw.
16:22:40 i'll try and update that this week https://review.opendev.org/c/openstack/nova/+/830829
16:22:49 so yeah, config changes seem appropriate to be done in Antelope
16:23:26 anyway, moving on
16:23:34 #link https://etherpad.opendev.org/p/nova-zed-rc-potential Zed RC tracking etherpad
16:23:45 I'll continue to ping a few people begging for reviews
16:23:55 but it should be seamless
16:24:04 (just paperwork)
16:24:13 next topic,
16:24:20 #topic PTG planning
16:24:30 as a reminder
16:24:32 #link https://etherpad.opendev.org/p/nova-antelope-ptg Antelope PTG etherpad
16:24:40 #link https://ptg.opendev.org/ptg.html PTG schedule
16:25:09 people are welcome to add any topic they want to address at the PTG
16:25:33 the earlier we have a solid list of things to discuss, the better it will be for planning in advance when to discuss those
16:26:02 I have a question,
16:26:08 shall we use a separate etherpad for ops-friendly sessions on Tuesday and Wednesday ?
16:26:35 for the moment, in the list of etherpads, we have a specific etherpad for the nova-operator-hours https://etherpad.opendev.org/p/oct2022-ptg-operator-hour-nova
16:26:59 of course, I can rename it, change it... or point to our developer etherpad
16:27:22 I personally feel a separate etherpad would be less scary for ops
16:27:26 if we expect a lot of operator feedback then I think it is better to have it on a separate etherpad
16:27:36 crosslinked with the main nova one
16:27:38 but in this case, that means we need to come up in advance with a list of topics to address
16:27:51 gibi: yup, I was thinking this
16:28:16 ok, looks like it's sold
16:28:21 no one is arguing
16:28:22 but,
16:28:56 this also means I feel we should do a bit of team brainstorming about what we'd like to discuss with ops at those hours
16:29:09 don't tell me "pain points", that's the easiest one
16:29:46 anyway, I'll draft something before next week
16:29:54 and we could discuss this then
16:30:01 open an etherpad and ping us with it, I can put in some questions
16:30:11 #action bauzas to draft some agenda for nova-operator-hours etherpad
16:30:30 gibi: the etherpad is already created, I just left the standard url
16:30:35 ack
16:30:49 (the foundation pre-creates all the PTG project etherpads)
16:31:08 well, "precreates" is actually just a matter of generating an URL
16:31:37 anyway, moving on
16:31:50 #topic Review priorities
16:31:57 #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:32:44 I'm happy to see sean-k-mooney using it :)
16:32:50 and takashi too
16:33:03 i have it as a review dashboard in gerrit
16:33:03 you're more than encouraged to do so as well !
16:33:20 sean-k-mooney: yeah, that's one possibility
16:33:47 i have two i use commonly to look for reviews
16:33:57 anyway, nothing to mention here ?
16:34:02 the nova-priority one and another one i got from stephen years ago
16:34:38 nothing that can't wait until we are out of the rc period
16:34:52 cool
16:35:01 #topic Stable Branches
16:35:05 elodilles: shoot
16:35:07 yes
16:35:12 i had a quick look,
16:35:18 so here is a quick update :)
16:35:24 #info stable/yoga is blocked by openstacksdk-functional-devstack job -- proposed fix: https://review.opendev.org/c/openstack/openstacksdk/+/858268
16:35:24 :)
16:35:28 new fix ^^^
16:35:36 #info stable/stein (and older) are blocked: grenade and other devstack based jobs fail with the same timeout issue as stable/train did previously
16:35:47 #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:35:56 and that's it :X
16:36:34 thanks
16:36:39 np
16:37:08 last topic
16:37:13 #topic Open discussion
16:37:22 Add support for setting min/max unit for the VCPU and MEMORY_MB resource providers in placement to values other than 1/all. Can configuration options be OK for this, or are other approaches preferred? See suggested use of configuration options at https://review.opendev.org/c/openstack/nova/+/857595
16:37:41 unfortunately, the writer hasn't written their nick
16:37:44 but we can guess
16:37:47 It's me :)
16:38:20 obre: o/
16:38:31 obre: yeah I was looking for your nick
16:38:32 The use case is basically to allow restricting some compute nodes so they don't get VMs using too many of their VCPUs.
16:38:52 To better spread out load.
16:39:06 I tested that changing these values gives the desired outcome.
16:39:11 so I quickly discussed with obre before and suggested extending provider.yaml but that might be a bigger work than what obre's use case needs
16:39:36 I'm not a fan of adding yet another knob to this
16:39:51 so, yeah, provider.yaml or accepting that inventories can change from a client perspective
16:40:05 It is a similar knob to the one we have for setting over-provisioning of resources.
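To make the knob concrete: placement only grants an allocation from an inventory if the request fits that inventory's unit constraints, which is why lowering max_unit on VCPU caps the biggest flavor a host can accept. The sketch below is a simplification of the placement service's actual logic, not its real code; only the field names (total, reserved, min_unit, max_unit, step_size, allocation_ratio) follow the placement inventory API.

```python
# Simplified sketch of placement's per-inventory capacity check
# (illustrative; not the actual placement implementation).

def fits(inv: dict, used: int, requested: int) -> bool:
    # Usable capacity accounts for reservations and over-provisioning.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    return (
        inv["min_unit"] <= requested <= inv["max_unit"]  # single-allocation bounds
        and requested % inv["step_size"] == 0            # granularity
        and used + requested <= capacity                 # overall capacity
    )

# A 64-vCPU host where no single instance may consume more than 8 vCPUs:
vcpu = {"total": 64, "reserved": 0, "min_unit": 1,
        "max_unit": 8, "step_size": 1, "allocation_ratio": 1.0}
```

With this inventory, a 16-vCPU flavor is rejected by the max_unit bound even though the host still has plenty of free capacity, which is exactly the "spread out load" behaviour obre is after.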
16:40:23 obre: sure, but we designed placement to avoid such knobs :)
16:40:49 so we really should not allow this to be configurable
16:40:58 bauzas: I'm not sure but I assume that today nova would periodically overwrite max_unit in placement for its own inventories
16:41:09 gibi: correct
16:41:10 I can confirm that assumption :)
16:41:17 obre: thanks :)
16:41:38 So that logic needs to change then; in addition to allowing setting other inventories than CUSTOM_*
16:41:43 so the use case here is to limit the max size of a flavor
16:41:47 gibi: that's why I was saying that if operators want this to be tunable thru API calls, some effort is needed
16:41:48 that can land on a host
16:41:52 Either max or min.
16:42:11 so we can do that today
16:42:18 using provider.yaml
16:42:21 No?
16:42:21 to set those values no
16:42:22 correct ^
16:42:38 we have a config option
16:42:41 I think we cannot set those values on standard resources
16:42:41 not an API call
16:42:51 You are only allowed to set CUSTOM_*. Setting VCPUs for instance would make nova-compute refuse to start.
16:42:56 gibi: i would be ok with lifting that restriction
16:43:00 hah, my bad then
16:43:05 but not adding a new config to nova for this
16:43:07 sean-k-mooney: yeah, me too
16:43:10 and yeah
16:43:32 if operators want to play with nova inventories, I'm OK with this
16:43:39 sean-k-mooney: yep, that was my suggestion to obre too, lift the provider.yaml restriction
16:43:43 placement was designed for such use cases
16:43:48 But then you would like to lift that restriction, and then have nova-compute check its inventories before setting the default values if none exist?
16:44:34 yes nova compute
16:44:46 would instead of hardcoding its min/max/step values
16:44:51 Basically similar to how we do allocation_ratios; just without the config-file option.
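For reference, a sketch of what lifting the restriction might look like in provider.yaml. Today's schema (version '1.0', linked below from the Ussuri spec) only accepts CUSTOM_* resource classes under inventories.additional; the VCPU stanza and the '2.0' schema_version here are hypothetical, assuming the change discussed in this meeting:

```yaml
# HYPOTHETICAL example -- rejected by today's provider.yaml schema, which
# only allows CUSTOM_* resource classes in inventories.additional.
# Sketches obre's use case under an assumed new schema_version.
meta:
  schema_version: '2.0'   # hypothetical new version
providers:
  - identification:
      uuid: $COMPUTE_NODE
    inventories:
      additional:
        - VCPU:
            max_unit: 8   # no single instance may consume more than 8 vCPUs
```

Under the agreed approach, nova-compute would read such values from provider.yaml instead of hardcoding its min/max/step values, and stop periodically overwriting them in placement.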
16:44:51 yepp
16:44:52 get them from provider.yaml
16:45:26 I'm not entirely sure I am able to figure all this out by myself; but I'll give it a try and see if I can manage to write such a patch :)
16:45:59 obre: feel free to ping me here with questions. I can try to look at the code and help
16:46:43 gibi: Thanks!
16:46:46 gibi: I probably will.
16:46:48 I'm sure we have some unit / functional test coverage on provider.yaml to play with
16:47:07 we will need to modify the schema
16:47:23 looks like we have an agreement and further steps too
16:47:37 and introduce a new adjective (existing) https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html#provider-config-file-schema
16:47:43 obre: the next step for you I guess is to write a blueprint
16:48:01 and then lift the restriction on the resource_class starting with CUSTOM_
16:48:07 so this would likely need a spec
16:48:15 I was debating it
16:48:16 to spell it out clearly
16:48:39 it will need a new schema_version at a minimum
16:49:00 I agree to have a small spec if we need to figure out a new schema
16:49:02 i think there is enough of a change required that a spec would be helpful for documentation if nothing else
16:49:09 #agreed sounds like a valid use case that requires a blueprint and a spec to be filed in order to address how to properly manage inventory overrides via the provider.yaml file
16:49:33 obre: do you feel comfortable with this process ? do you need help ?
16:49:50 or is that whole thing ancient greek to you ?
16:50:03 bauzas: I'll probably need a bit of help, yes.
16:50:11 obre: you got my nick
16:50:20 bauzas: I'm not really a developer; more a sysadmin :P
16:50:22 obre: ping me tomorrow and I'll point you to some docs and examples
16:50:32 bauzas: What's your timezone?
16:50:41 obre: well, specs are formal text files, so you shouldn't be afraid :)
16:50:50 obre: CEST
16:50:56 * gibi is in CEST too
16:51:01 * obre as well.
16:51:06 that matches then
16:51:09 So then the workdays probably sync up :P
16:51:20 I'm more than happy to help you
16:51:26 bauzas: Great!
16:51:41 our processes can look a bit scary but those are just design documents
16:52:11 basically, the idea is just to identify all potential design concerns (upgrades or others) before they come up at review time
16:52:11 I sorta understand why we need the formal process; it's just that I would have preferred an easier solution for _my_ problems :P
16:52:18 But it's fine :P
16:52:33 :)
16:52:34 obre: you're literally at the very beginning of the cycle :)
16:52:45 so, you wouldn't hear 'sorry, too late'
16:53:08 the point is, you have gibi and me to help you out
16:53:14 \o/
16:53:16 obre: one thing to think about is whether you want this to be config driven, api driven or both
16:53:25 we will need to document that in the spec
16:53:29 sean-k-mooney: I thought we said config-driven
16:53:33 as provider.yaml
16:53:46 yes but provider.yaml can say -1
16:53:51 which means this is api controlled
16:53:57 or something like that if we care about that use case
16:54:02 making it api-driven means we accept our inventories to be changed thru osc-placement
16:54:09 so i'm assuming config driven is enough
16:54:12 I think it makes sense to be as close to the way we do CUSTOM_* today as possible?
16:54:16 and if so then that's simple
16:54:27 stick to the bare minimum requirements :)
16:54:45 people could come up with api-driven needs later if they want to :)
16:54:54 ok just that was going to be one of the questions i would ask in the spec review
16:55:01 so i didn't want it to come out of the blue
16:55:16 sean-k-mooney: good point, stating that api-driven is out of the spec's scope seems reasonable
16:55:39 yep we can state it as not a use case we want to enable now in the alternatives
16:55:46 anyway, we're approaching end of time and we have a way forward
16:55:50 obre: anyway as bauzas said keep it simple for now
16:55:57 sean-k-mooney: Ack.
16:56:11 anything else to bring up before we call it ?
16:57:02 looks not
16:57:11 so, I hereby officially declare the meeting as over.
16:57:14 thanks all
16:57:18 #endmeeting