16:00:56 #startmeeting Octavia
16:00:56 Meeting started Wed Mar 15 16:00:56 2023 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:56 The meeting name has been set to 'octavia'
16:00:59 Hi Folks
16:01:08 o/
16:01:09 o/
16:01:33 Hello
16:01:37 o/
16:02:30 #topic Announcements
16:02:34 * Antelope Release schedule: RC2
16:02:44 as discussed last week, we proposed an RC2 for Octavia
16:02:59 it includes a workaround for an openstacksdk bug that would have broken octavia-dashboard
16:03:15 +1
16:03:18 #link https://review.opendev.org/c/openstack/octavia/+/876919
16:03:32 now I think we're done for Antelope
16:03:43 nice
16:04:21 * PTG
16:04:44 the Octavia PTG will be on March 28th (14:00-18:00 UTC)
16:05:01 #link https://etherpad.opendev.org/p/bobcat-ptg-octavia
16:05:11 don't forget to register
16:05:14 sounds good
16:05:20 and add your topics to the etherpad!
16:06:05 I think a hot topic will be the Let's Encrypt RFE
16:06:50 johnsom, yes :)
16:07:49 I need to read the spec before the PTG
16:08:05 Yeah, there is a lot to unpack there
16:10:01 any other announcements?
16:12:09 #topic CI Status
16:12:17 FYI johnsom fixed an issue when building ubuntu images
16:12:22 #link https://review.opendev.org/c/openstack/octavia/+/877141
16:12:42 it merged earlier today
16:12:43 Yeah, saves almost 1GB of space inside the amphora images
16:12:49 thanks again johnsom
16:13:15 a question about the ubuntu image; has anyone already tried to build an amphora image with jammy?
16:13:22 The "build-essentials" uninstall was not removing everything
16:13:30 Yes, we test on Jammy in the gates now
16:14:03 ohhh ok, last time I tried it I had issues with openssl v3
16:14:06 #link https://review.opendev.org/c/openstack/octavia/+/862131
16:14:31 johnsom: Thanks!
16:14:33 But as we recently found, they are using more disk space in the cloud image as of a week or two ago
16:15:03 hmm I'm not sure, this job ran on focal: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b4/857676/11/check/octavia-v2-dsvm-scenario/0b42def/job-output.txt
16:15:28 https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/861369
16:16:41 Umm, that is a big problem then. Antelope was supposed to be tested on Jammy
16:18:18 #link https://zuul.opendev.org/t/openstack/build/69cc6fb302d74064be499eead9af5f1e/log/controller/logs/dib-build/amphora-x64-haproxy.qcow2_log.txt#335
16:18:34 I think we are running jammy amphora images on focal
16:18:35 This job ran Jammy, the scenario for the disk space reduction.
16:19:47 That needs to be fixed ASAP, as all of antelope should have been on Jammy
16:20:12 could you review:
16:20:16 #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/861369
16:21:03 Done
16:21:11 thanks
16:21:29 Well, I guess the answer still stands, the amp image is running jammy in the upstream gate jobs
16:22:14 I also have manually run Octavia/devstack on Jammy without issue
16:22:30 me too AFAIR
16:23:10 bobcat/2023.2 will also be on Jammy
16:23:22 #link https://governance.openstack.org/tc/reference/runtimes/2023.2.html
16:24:12 nice.
Rocky will be supported too
16:24:29 AFAIK rocky linux doesn't work ATM
16:24:33 (in dib)
16:24:49 "best effort"
16:25:27 I have a WIP commit that adds rockylinux support: https://review.opendev.org/c/openstack/octavia/+/873489
16:25:35 well, the intention counts already
16:25:39 but I got some firewalling issues with the o-hm0 iface
16:27:19 #topic Brief progress reports / bugs needing review
16:27:34 I've worked on an interesting issue with sqlalchemy:
16:27:39 #link https://storyboard.openstack.org/#!/story/2010646
16:27:56 the locking of the load balancers in the member batch update API call didn't work as expected
16:28:10 #link https://review.opendev.org/c/openstack/octavia/+/877414
16:28:13 I fixed the issue with disk space inside the test amphora images.
16:28:18 johnsom: thanks for helping with this issue
16:28:26 you're our sqlalchemy expert now :P
16:28:41 I posted an api-ref patch to call out the 501 status code that provider drivers may return for features/options they don't support.
16:30:05 I have also been working on fixing the octavia tempest plugin now that scoped tokens are not going to happen. That impacted admin credential tests because the admin credential would have required a scoped token. I think we have all of that straight now.
16:30:24 +1 !
16:30:31 There is another patch that will be needed to completely remove the scoped token logic, but that is a bobcat topic
16:31:09 I do have a question about gate jobs related to this, but I can wait for open discussion on that
16:31:25 #topic Open Discussion
16:31:32 lol
16:31:41 #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/876904
16:31:58 Ok, so now that scoped tokens are not happening in OpenStack, we have three scenarios:
16:32:18 1. Advanced RBAC - this has been the default for Octavia since Pike
16:33:05 2. Advanced RBAC with "enforce_new_defaults" - This is the above with the addition of requiring the new "member" and "reader" roles.
16:34:27 3. Keystone default roles (aka "enforce_new_defaults" without advanced RBAC) - This means no "load balancer" specific roles are required, but users must have either "reader" or "member". We provide this via a policy override file.
16:35:12 My question is, do we want gate jobs for all of these? Currently we test #1, my patch adds #3 as we want that for downstream use.
16:36:05 johnsom: do you add #3 in octavia-tempest-plugin and in octavia?
16:36:08 I think #2 will become the new default for Bobcat, so we could just transition #1 to #2
16:36:20 Yes, here:
16:36:20 I think the scope-token job was only in tempest-plugin
16:36:22 #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/876904
16:36:51 It will need to be in octavia proper jobs, not just tempest plugin jobs
16:37:02 ok
16:37:09 IMO
16:39:09 I think #1 and #3 are fine ATM (in both octavia and o-t-p)
16:39:46 Ack, then maybe transition #1 to #2 as all the services enable "enforce_new_defaults" in bobcat. I am good with that.
16:40:26 I hate adding more jobs, but they are no-op jobs, so they run quickly
16:40:47 quickly-ish
16:40:52 ahem... 1h19m
16:41:10 yeah, lol
16:41:18 it's all relative
16:41:58 I think we're building an amphora image in the noop jobs
16:42:03 Nope
16:42:05 no?
16:42:06 ok
16:42:48 so what is no-op doing for more than an hour then?
16:43:20 maybe I can try to investigate that a bit
16:43:55 lol
16:44:01 Ran: 578 tests in 3306.5628 sec.
16:44:09 The rest is all devstack/zuul
16:44:33 It is a bit slower than you would expect really
16:44:43 maybe we can increase the number of threads in tempest
16:45:00 --concurrency=4
16:45:10 We should look at RAM usage. It could be swapping, which slows everything down. There is a devstack flag to lock mysql ram usage down that may help
16:45:14 I guess a lot is resource setup/teardown?!
16:45:59 There shouldn't be much of that really, but yeah, it would be good to get fresh eyes on this.
16:46:09 2023-03-15 00:55:34.256022 | controller | {1} octavia_tempest_plugin.tests.api.v2.test_member.MemberAPITest2.test_UDP_RR_member_batch_update [67.189751s] ... ok
16:46:15 for no-op is.... odd
16:47:36 Though, no-op does run on the slower nodes too, so that will be a factor
16:48:39 I will propose a test job today with the devstack mysql memory cap setting to see how that impacts the jobs
16:48:50 ack
16:49:43 any other topics?
16:51:03 ok!
16:51:11 thank you guys!
16:51:14 #endmeeting
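
For anyone following the Jammy amphora image discussion above (CI Status topic), here is a minimal sketch of building an Ubuntu Jammy amphora image locally with Octavia's diskimage-create script. It assumes diskimage-builder is installed and that the -i/-d/-o flags match the diskimage-create README for your branch; check the README before relying on it.

    # Sketch: build an Ubuntu Jammy (22.04) amphora image locally.
    # Assumes diskimage-builder and qemu-img are installed; flag names
    # follow the diskimage-create README and may differ per branch.
    git clone https://opendev.org/openstack/octavia
    cd octavia/diskimage-create
    ./diskimage-create.sh -i ubuntu-minimal -d jammy -o amphora-x64-haproxy.qcow2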
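
To illustrate scenario #3 from the Open Discussion topic (Keystone default roles delivered via a policy override file), the lines below sketch what such an oslo.policy override could look like. The rule names and role mappings are illustrative assumptions only; the sample policy file shipped in the Octavia repo is the authoritative reference.

    # Hypothetical /etc/octavia/policy.yaml override (sketch only): map the
    # Octavia API base rules onto the generic Keystone "member"/"reader"
    # roles instead of the advanced load-balancer_* RBAC roles.
    "load-balancer:read": "role:reader and project_id:%(project_id)s"
    "load-balancer:write": "role:member and project_id:%(project_id)s"
    "load-balancer:read-global": "role:admin"
    "load-balancer:admin": "is_admin:True or role:admin"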
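
On the no-op job runtime question, two knobs came up: capping MySQL memory in devstack and raising the tempest worker count. A rough sketch is below; the devstack variable name is an assumption (the meeting only mentions "a devstack flag"), while --concurrency is tempest's standard run option.

    # Sketch only: devstack local.conf setting to reduce MySQL memory usage
    # (variable name is an assumption; check devstack's mysql lib for the
    # actual flag referenced in the meeting).
    [[local|localrc]]
    MYSQL_REDUCE_MEMORY=True

    # Tempest can run with more workers than the default, e.g. four:
    tempest run --regex '^octavia_tempest_plugin' --concurrency 4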