16:00:19 <johnsom> #startmeeting Octavia
16:00:20 <openstack> Meeting started Wed Feb 10 16:00:19 2021 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:24 <openstack> The meeting name has been set to 'octavia'
16:00:28 <johnsom> #chair rm_work
16:00:29 <openstack> Current chairs: johnsom rm_work
16:00:42 <gthiemonge> hi
16:00:44 <haleyb> hi
16:01:12 <johnsom> Hello everyone, not sure if rm_work is around or not
16:01:34 <johnsom> #topic Announcements
16:01:48 <johnsom> Final client release is first week in March
16:01:55 <johnsom> Feature freeze for everything else is the second week in March
16:02:01 <rm_work> o/
16:02:04 <johnsom> We have a priority bug review list
16:02:09 <rm_work> I am, in fact! stayed up just for this
16:02:12 <johnsom> #link https://etherpad.openstack.org/p/octavia-priority-reviews
16:02:47 <johnsom> I would also like to point out a proposed patch:
16:02:49 <johnsom> #link https://review.opendev.org/774300
16:03:10 <johnsom> Where there is discussion of disabling the "fail-fast" setting in Zuul we have used for two years now.
16:03:13 <gthiemonge> I added the VIP subnet issue on top, but it depends on a change in octavia-tempest-plugin that is not small..
16:03:34 <johnsom> There may be a mailing list post about this today.
16:03:59 <haleyb> johnsom: guess i shouldn't have mentioned that in my email to -discuss
16:04:09 <rm_work> hmmm
16:04:18 <rm_work> so I'm not sure how much that really affects my workflow
16:04:27 <johnsom> That is all the announcements I have today. Anyone else?
16:05:01 <johnsom> Well, it means we don't wait (and hold instances) for two hours while the rest of the gate pipeline jobs finish.
16:05:31 <johnsom> This has been very helpful when the infra AFS was randomly broken and the job would finish, then the log archiving would fail on the jobs.
16:05:41 <rm_work> I usually am watching stuff actively if it's something I'm in a hurry to get through, and if it fails I don't really worry about whether it actually fails all the way out of the gate -- I'll just fix the issue and push a patch which will shove it out anyway, or just rebase which will kick it out anyway
16:05:56 <rm_work> really all they'd be hurting is the nodepool availability
16:06:04 <haleyb> right, when the gate fails it's usually tragic, like a mirror failure, and it shouldn't really happen
16:06:04 <rm_work> which is like ... themselves? since this is infra proposing it
16:06:04 <johnsom> Or when the networking is broken at one of the nodepool hosts, etc...
16:06:36 <johnsom> rm_work This is gate job only, not the check jobs. So, after +W
16:06:41 <rm_work> yeah I know
16:06:59 <rm_work> and I rebase and +W again :P since a recheck has to go through everything just the same
16:07:53 <johnsom> I have found it very handy when there are infra outages. We can get back in the zuul queue quickly once the issue is fixed and not get stuck at the end of a 250 job queue again.
16:08:13 <johnsom> This is a perk of being the canary project
16:08:46 * haleyb agrees, think it would help if everyone failed quickly in the gate
16:08:59 <rm_work> yeah, though right now I think maybe only we do?
16:09:12 <johnsom> There are two projects I think
16:09:28 <haleyb> i think it was only things with octavia in the name, i did it in the ovn provider as well
16:12:41 <johnsom> Well, you can comment if you would like on the patch.
16:12:47 <johnsom> I think that is it for announcements.
16:12:55 <johnsom> #topic Brief progress reports / bugs needing review
16:14:52 <johnsom> I am focusing on the RBAC and scoped token changes this week.
16:14:54 <gthiemonge> I have some commits in review for octavia-tempest-plugin, but I think we need to fix the gates :/
16:15:41 <johnsom> The currently proposed patches open up our API wider than it was before and have no support for our advanced RBAC.
16:15:58 <johnsom> I am leaning towards proposing an alternate patch chain.
16:16:34 <johnsom> Pretty much what is proposed is a "throw away what they had" and replace it, so no good.
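For context on the alternative being described: instead of throwing away the existing advanced RBAC check strings, oslo.policy lets a project attach the old rule as a deprecated_rule next to the new scoped default. The sketch below is illustrative only; the policy name and check strings are examples, not Octavia's actual definitions.

```python
from oslo_policy import policy

# Hypothetical example: names and check strings are illustrative,
# not Octavia's real policy file.
deprecated_lb_get_all = policy.DeprecatedRule(
    name='os_load-balancer_api:loadbalancer:get_all',
    check_str='rule:load-balancer:read',  # existing advanced RBAC rule
    deprecated_reason='Replaced by a scope/role aware default.',
    deprecated_since='Wallaby',
)

policies = [
    policy.DocumentedRuleDefault(
        name='os_load-balancer_api:loadbalancer:get_all',
        # New "secure RBAC" style default, scoped to the project.
        check_str='role:reader and project_id:%(project_id)s',
        scope_types=['project'],
        description='List load balancers.',
        operations=[{'path': '/v2/lbaas/loadbalancers', 'method': 'GET'}],
        # Keeping the old rule here preserves backward compatibility
        # instead of replacing everything outright.
        deprecated_rule=deprecated_lb_get_all,
    ),
]
```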
16:18:44 <johnsom> Any other updates this week?
16:19:17 <johnsom> #topic Open Discussion
16:19:22 <johnsom> Ok, how about other topics?
16:20:44 <rm_work> sorry, stepped away for a sec -- yeah that's kinda bleh, I thought at first glance it was a real translation of what we had, sadness T_T
16:20:56 <rm_work> not sure I have anything else -- I will try to work through some of the priority list today
16:22:24 <johnsom> I hope to have patches rolling this week. Once I get started it should go fast, it's just planning the right approach that is taking a bit longer than I expected. Each step will need both Octavia and associated tempest patches.
16:22:50 <johnsom> lol, I guess the first red flag on those other patches is that everyone failed the API tests.
16:23:07 <rm_work> yay for good API tests on policy? :D
16:23:22 <johnsom> Yep, we did a good job with those
16:24:09 <johnsom> I think we can get rid of the "deprecated" code too, which has tanked neutron's API performance.
16:24:47 <johnsom> I am not sure we really need that. I think oslo.policy already has the config switches we need to be compatible.
16:24:53 <johnsom> But, anyway, WIP
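The config switches referred to above are presumably oslo.policy's own enforce_scope and enforce_new_defaults options. A minimal sketch of how they are toggled follows; the option names come from oslo.policy, but how Octavia wires them up here is assumed for illustration.

```python
from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF

# Creating the Enforcer registers oslo.policy's [oslo_policy] options,
# including the two compatibility toggles.
enforcer = policy.Enforcer(CONF)

# With the defaults (False) operators keep the old behaviour; flipping
# these opts in to the scoped "secure RBAC" rules.
CONF.set_override('enforce_scope', True, group='oslo_policy')
CONF.set_override('enforce_new_defaults', True, group='oslo_policy')
```

In a real deployment these would normally be set in the [oslo_policy] section of octavia.conf rather than overridden in code.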
16:25:34 <rm_work> hmm, i'll have to see what you're talking about there, I may be out of the loop
16:27:50 <johnsom> #link https://bugs.launchpad.net/oslo.policy/+bug/1913718
16:27:52 <openstack> Launchpad bug 1913718 in neutron "Enforcer performs deprecation logic unnecessarily" [Critical,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:28:09 <johnsom> There are a flood of oslo policy fixes going in with problems related to these "secure RBAC" changes
16:29:19 <johnsom> If there are not any other topics today we can wrap up early....
16:29:57 <johnsom> Oh, I should mention:
16:29:59 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020218.html
16:30:42 <johnsom> The OpenStack Ansible folks reached out about Octavia tests failing on CentOS 8 due to nova being unable to plug a port into a VM instance.
16:31:09 <johnsom> I looked through the logs and found rabbit errors in Nova and a libvirt error.
16:31:26 <johnsom> Octavia just reports that nova throws a 504 on port plug.
16:32:10 <rm_work> hmm, i had noticed cent8 tests all failing but thought it was due to some DIB issue
16:32:11 <johnsom> I didn't find a root cause, but it's clear it's a nova issue on CentOS and not an Octavia issue. I recommended they bring in a nova expert to help.
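To make the failure mode concrete: the "port plug" step is an interface-attach call against nova, so Octavia can only report whatever nova returns. The following is a rough sketch, not Octavia's actual compute driver code; the helper name and error handling are invented for illustration.

```python
from novaclient import exceptions as nova_exceptions


def plug_vip_port(nova_client, compute_id, port_id):
    """Attach an existing neutron port to the amphora instance."""
    try:
        # python-novaclient call for attaching a pre-created port.
        nova_client.servers.interface_attach(
            server=compute_id, port_id=port_id, net_id=None, fixed_ip=None)
    except nova_exceptions.ClientException as e:
        # A 504 from nova surfaces here; the real root cause (rabbit or
        # libvirt errors on the compute host) is only visible in nova's logs.
        raise RuntimeError(
            'Nova failed to plug port %s into instance %s: %s'
            % (port_id, compute_id, e))
```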
16:32:25 <johnsom> Yeah, I think our gates break before they hit this issue
16:34:36 <haleyb> wait, our gates are broken? :-p
16:35:07 <johnsom> Wait, the CentOS 8 gates ever worked?
16:35:11 <johnsom> grin
16:35:18 <gthiemonge> :D
16:35:33 <gthiemonge> I think Centos gates will be fixed after the Wallaby release
16:35:39 <johnsom> lol
16:35:47 <gthiemonge> no joke
16:36:11 <johnsom> Oh, I thought that was a comment towards the CentOS stream drama.
16:36:13 <gthiemonge> devstack uses victoria RPMs, and our fix should be in master RDO
16:39:30 <johnsom> Ok, if there is nothing else, we can wrap the meeting up
16:39:55 <rm_work> o/
16:40:00 <johnsom> #endmeeting