16:00:41 <bauzas> #startmeeting nova
16:00:41 <opendevmeet> Meeting started Tue Feb 27 16:00:41 2024 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:41 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 <opendevmeet> The meeting name has been set to 'nova'
16:00:44 <bauzas> #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:00:48 <bauzas> hey folks
16:00:55 <bauzas> let's have a quick meeting
16:01:10 <elodilles> o/
16:01:29 <fwiesel> o/
16:01:32 <kgube> o/
16:02:07 <bauzas> okay, moving on
16:02:30 <bauzas> #topic Bugs (stuck/critical)
16:02:34 <dansmith> o/
16:02:35 <bauzas> #info One Critical bug
16:02:39 <bauzas> #link https://bugs.launchpad.net/nova/+bug/2052937
16:02:51 <bauzas> sean-k-mooney: do you want to discuss it now?
16:04:09 <auniyal> o/
16:04:32 <bauzas> looks like he's not around
16:05:15 <bauzas> I wonder what we need to do related to that
16:05:54 <gibi> o/
16:06:13 <sean-k-mooney> o/
16:06:19 <sean-k-mooney> sorry, was double booked
16:06:38 <sean-k-mooney> I'm not sure what there is to say really for that bug
16:06:55 <sean-k-mooney> I'm a little concerned
16:07:05 <sean-k-mooney> that our heal allocation code might actually be broken by it
16:07:16 <sean-k-mooney> on the other hand, with a quick look at the code
16:07:17 <bauzas> why is it in Critical state for nova ?
16:07:28 <sean-k-mooney> it was a gate blocker
16:07:35 <sean-k-mooney> that is what broke nova-next last week
16:08:15 <sean-k-mooney> when I looked at the nova-manage code I believe it's using a neutron client with both a service token and admin token
16:08:20 <sean-k-mooney> so it should be fine
16:08:44 <sean-k-mooney> but when we were removing the check in post, it looks like heal allocations was not properly healing the port resource requests
16:09:00 <bauzas> can we then move it to High ?
16:09:07 <sean-k-mooney> sure
16:09:26 <bauzas> ack
16:09:40 <bauzas> if you want, we can continue to discuss the bug next week
16:10:20 <sean-k-mooney> we should likely at least spot-check that heal allocations works
16:10:29 <sean-k-mooney> otherwise we might need to fix it in the RC period
16:11:43 <bauzas> ack
16:11:56 <sean-k-mooney> https://zuul.opendev.org/t/openstack/build/3c2c7955a4b34112a377a857623d6a73/log/job-output.txt#37642-37669
16:12:14 <sean-k-mooney> the heal there looks like it's broken
16:13:12 <sean-k-mooney> let's move on for now
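[Editor's note: one way to do the spot-check discussed above is sketched below, against a devstack-style deployment. The network, QoS policy, flavor, and image names are illustrative, and the osc-placement plugin is assumed for the allocation command.]

```shell
# Boot a server on a port that carries QoS minimum-bandwidth resource
# requests, so heal_allocations has something port-related to heal.
# (net0 / min-bw-policy / m1.small / cirros are illustrative names)
openstack port create --network net0 --qos-policy min-bw-policy port0
openstack server create --flavor m1.small --image cirros --port port0 vm0

UUID=$(openstack server show vm0 -f value -c id)

# Dry-run first: reports what would be healed without writing anything
nova-manage placement heal_allocations --dry-run --verbose --instance "$UUID"

# Then compare against what placement actually holds for the instance;
# the port's resource requests should appear in the allocations
openstack resource provider allocation show "$UUID"
```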
16:13:32 <bauzas> cool
16:13:41 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 70 new untriaged bugs (+4 since the last meeting)
16:13:48 * bauzas hides but I'll do something
16:13:54 <bauzas> #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:14:04 <bauzas> #info bug baton is bauzas
16:14:10 <bauzas> moving on ?
16:14:56 <bauzas> #topic Gate status
16:15:00 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:15:05 <bauzas> #link https://etherpad.opendev.org/p/nova-ci-failures-minimal
16:15:09 <bauzas> #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova&Placement periodic jobs status
16:15:15 <bauzas> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:15:26 <bauzas> nothing here to report
16:15:29 <bauzas> anyone else ?
16:15:44 <dansmith> just rechecked a random timeout
16:16:45 <bauzas> cool
16:16:50 <bauzas> #topic Release Planning
16:16:55 <sean-k-mooney> was it the rbac jobs?
16:16:56 <bauzas> #link https://releases.openstack.org/caracal/schedule.html#nova
16:17:07 <sean-k-mooney> they seem to be timing out more often than others
16:17:32 <sean-k-mooney> but it always just seems like a slow node when I looked
16:20:31 <gibi> ddd
16:20:35 <bauzas> last but not * least (private joke)
16:20:36 <gibi> sorry
16:20:38 <bauzas> #info Caracal-3 (and feature freeze) milestone in 2 days
16:20:50 <bauzas> voila
16:20:56 * bauzas doing atm a lot of reviews
16:21:01 <bauzas> #topic Review priorities
16:21:09 <bauzas> #link https://etherpad.opendev.org/p/nova-caracal-status
16:21:20 <bauzas> #topic Stable Branches
16:21:23 <bauzas> elodilles: ?
16:21:26 <elodilles> o/
16:21:39 <elodilles> #info stable gates seem to be OK
16:21:51 <elodilles> surprisingly, even zed might be OK
16:22:18 <elodilles> as nova-grenade-multinode is just passing :-o
16:22:29 <elodilles> i don't know how
16:22:34 <elodilles> so maybe it's just a bug :D
16:22:51 <elodilles> it should be using unmaintained/yoga
16:23:04 <bauzas> ...
16:23:31 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:24:08 <elodilles> please add here any issue if you see one on stable branches ^^^
16:24:30 <elodilles> that's all from me for now
16:25:47 <bauzas> thanks
16:25:57 <bauzas> #topic vmwareapi 3rd-party CI efforts Highlights
16:25:59 <bauzas> fwiesel: ?
16:26:02 <fwiesel> #info FIPs do work now, but not reliably (Pass 459 Failure 23 Skip 44) (previously Passed: 435, Failed: 41, Skipped: 42)
16:26:18 <fwiesel> #link http://openstack-ci-logs.global.cloud.sap/openstack/nova/1858cf18b940b3636e54eb5aafaf4050bdd02939/tempest.html
16:27:03 <fwiesel> Better, but not quite there yet.
16:27:23 <bauzas> kudos
16:27:35 <fwiesel> Several of the failures have simpler reasons. I'll probably get those down first before I dig into the FIP issue.
16:28:03 <fwiesel> Questions?
16:28:19 <bauzas> none from me, good luck for debugging
16:28:37 <fwiesel> Okay, then that's all from my side. Back to you.
16:29:38 <bauzas> thanks
16:29:52 <bauzas> #topic Open discussion
16:29:57 <bauzas> anything anyone ?
16:30:26 <kgube> Hi, I wanted to ask a question on how to proceed with my feature: https://review.opendev.org/c/openstack/nova/+/873560
16:30:57 <kgube> It has a cinderclient dependency, so we need to bump the cinderclient version
16:31:23 <bauzas> good question
16:31:24 <kgube> the problem is that the needed change will only be in the coming cinderclient release
16:31:52 <kgube> which happens together with the feature freeze
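[Editor's note: the bump itself would be a one-line change in nova's requirements; the minimum version below is a placeholder, since the real one is whichever cinderclient release first ships the needed change.]

```
# requirements.txt (illustrative; exact minimum unknown until the release cuts)
python-cinderclient>=X.Y.0  # needed for the new volume action
```

The same minimum also has to be reflected in openstack/requirements before the gate installs it, which is why the timing against feature freeze matters.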
16:31:54 <bauzas> kgube: have you seen my question about a Tempest test checking your patch ?
16:32:11 <bauzas> I wonder how Zuul can say +1 if nothing is changed
16:32:50 <kgube> bauzas, yes, none of the current tempest jobs are using the codepath that calls the new volume action
16:33:16 <kgube> so it just works with the previous version
16:33:37 <bauzas> kgube: do you have a follow-up patch for Tempest about it ?
16:33:47 <bauzas> like another attribute
16:34:50 <kgube> so the only job that will be testing the new volume action is the devstack-plugin-nfs one
16:35:43 <kgube> I have a patch for this, but it also needs changing once the feature is merged
16:35:47 <kgube> https://review.opendev.org/c/openstack/devstack-plugin-nfs/+/896196
16:35:59 <kgube> so it's currently marked as WIP
16:36:50 <bauzas> okay, then we need a consensus on accepting your feature without Tempest scenarios
16:37:07 <bauzas> what do people think about ^ ?
16:37:34 <bauzas> as kgube said, we only know it doesn't trample on the other usages
16:39:24 <kgube> well, the second change I posted has successful tempest runs for nfs online extend, with all of the changes merged in
16:39:57 <kgube> it just isn't part of the nova tempest jobs
16:40:44 <bauzas> kgube: I'll ping other cores async
16:40:52 <bauzas> to see what they think
16:41:04 <kgube> thank you!
16:41:32 <bauzas> any other things before we close ?
16:42:34 <bauzas> looks not
16:42:36 <bauzas> thanks all
16:42:38 <bauzas> #endmeeting