16:00:02 <rm_work> #startmeeting Octavia
16:00:03 <openstack> Meeting started Wed Jan 22 16:00:02 2020 UTC and is due to finish in 60 minutes.  The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 <openstack> The meeting name has been set to 'octavia'
16:00:10 <rm_work> #chair johnsom
16:00:11 <openstack> Current chairs: johnsom rm_work
16:00:18 <rm_work> #chair cgoncalves
16:00:19 <openstack> Current chairs: cgoncalves johnsom rm_work
16:00:26 <rm_work> o/
16:00:35 <gthiemonge> Hi
16:00:41 <johnsom> o/
16:00:44 <haleyb> o/
16:00:57 <cgoncalves> o/
16:01:38 <rm_work> #topic Announcements
16:02:09 <rm_work> Umm... Not sure I have any real announcements today. Anyone?
16:02:32 <johnsom> I have one:
16:02:35 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012097.html
16:02:59 <johnsom> The release team is changing how the wheels are being built.
16:03:15 <johnsom> We *may* need to make a change on the stable branches for this.
16:03:20 <luksky> OK, that is fine
16:03:27 <luksky> I created it this way:
16:03:48 <johnsom> I have not had time to read/internalize that yet, but wanted to raise it in case we have release issues or someone had time to take a look.
16:04:28 <ataraday_> hi
16:04:30 <luksky> (DEVEL)[root@test1 ~/octavia]# git checkout 89a2f6e
16:04:30 <luksky> (DEVEL)[root@test1 ~/octavia]# cd diskimage-create
16:04:31 <rm_work> I thought we actually did have that set locally?
16:04:37 <luksky> (DEVEL)[root@test1 ~/octavia/diskimage-create]# ./diskimage-create.sh
16:04:48 <johnsom> I know we did at one point, but I think someone removed them
16:04:53 <rm_work> I remembered removing it in our patch to drop py2
16:04:57 <rm_work> I thought
16:05:02 <johnsom> luksky Can you post this after the meeting?
16:05:06 <rm_work> Can check later I guess
16:05:10 <johnsom> Ok, thanks
16:05:23 <johnsom> Also, nominations for "W" naming are starting I guess:
16:05:25 <rm_work> Either way I think it's not the end of the world
16:05:26 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012123.html
16:06:00 <rm_work> If it's not there in stable, it'll still work, just takes a little longer to install because it requires the wheel build to happen locally
16:06:06 <johnsom> Yeah, I think it would mean that no py2 compatible wheel would be made by OpenStack.
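(For context, the setting being discussed is most likely the universal wheel flag in setup.cfg; that is an assumption based on the py2 discussion, not something confirmed in the meeting. A project that still wants a py2-compatible wheel carries a stanza like:

    [bdist_wheel]
    universal = 1

Removing that stanza means the wheel is tagged only for the interpreter that builds it, i.e. a py3-only wheel once py2 support is dropped.)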
16:06:16 <haleyb> we could start having issues with tempest-plugin jobs eventually, like neutron did, once we release an octavia-lib without py2 support. luckily i think i know the way around that one
16:06:40 <luksky> ok
16:07:38 <johnsom> That is all I have for announcements
16:07:56 <rm_work> Wait, no more two summits per year? Did I miss that announcement? lol
16:09:17 <haleyb> we'll need to start having midcycles to make up for it :-p
16:09:25 <johnsom> Oh, I should say that tickets for the PTG are now available:
16:09:36 <cgoncalves> haleyb, +1
16:09:41 <johnsom> #link https://www.openstack.org/ptg
16:10:04 <rm_work> Do we need to get those separately?
16:10:35 <johnsom> It appears to be one ticket
16:11:28 <johnsom> I don't know if there is an ATC discount or not
16:14:05 <rm_work> Hmm k
16:14:09 <johnsom> It was not in the e-mail I saw.
16:14:09 <rm_work> Well, moving on
16:14:53 <rm_work> Err, do you have the next topic copied? I'm on mobile ;)
16:15:43 <rm_work> Ah I got it
16:15:49 <rm_work> #topic Brief progress reports / bugs needing review
16:16:12 <johnsom> Sorry, I didn't have my cheat sheet open either.
16:16:34 <rm_work> So, I've been super crazy stuck internally, almost non-present here for a bit
16:16:47 <johnsom> I now have the new failover flow working in the lab. It rebuilds a standalone LB in about 30 seconds (includes nova boot time).
16:17:15 <ataraday_> Want to highlight the jobboard changes
16:17:18 <ataraday_> #link https://review.opendev.org/#/q/status:open+project:openstack/octavia+branch:master+topic:jobboard
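(For readers unfamiliar with the jobboard work: it builds on taskflow's jobboard mechanism, where controllers post jobs to a shared board and separate conductor processes claim and run the associated flows, so unfinished work can be resumed if a controller dies. A minimal sketch of that pattern, assuming taskflow's documented jobboard API; the board name, backend configuration keys, and job details below are made up and are not Octavia's actual settings:

    from taskflow.jobs import backends as job_backends

    # Backend configuration is illustrative; the amphorav2 work targets Redis
    # or ZooKeeper, but the exact keys here are assumptions.
    conf = {
        'board': 'redis',
        'host': '127.0.0.1',
    }

    # Producer side: post a job describing work to be done. A conductor
    # running elsewhere claims posted jobs and runs the flows, which is what
    # lets another controller pick up work left behind by a dead one.
    with job_backends.backend('octavia-jobboard', conf.copy()) as board:
        board.post('create-load-balancer',
                   details={'store': {'loadbalancer_id': 'example-uuid'}})
)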
16:17:23 <rm_work> But I have a couple of patches I need to probably rebase and check for comments and update
16:17:35 <johnsom> Active/Standby also works now. It's about 70 seconds because it sequentially rebuilds the amphorae to limit failover downtime.
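(To make that ordering concrete, here is a purely illustrative Python sketch of a sequential active/standby failover; it is not the actual Octavia taskflow flow, and every helper below is a hypothetical stand-in:

    import time

    def rebuild_amphora(amp_id):
        # Hypothetical stand-in for booting and configuring a replacement amphora.
        return 'new-' + amp_id

    def wait_until_healthy(amp_id, timeout=300):
        # Hypothetical stand-in for polling the health manager.
        time.sleep(0)

    def failover_active_standby(amphora_ids):
        # Rebuild one amphora at a time so the surviving peer keeps serving
        # the VIP; this is why act/stdby failover takes roughly twice as long
        # as standalone, while keeping downtime close to a single VRRP switch.
        for amp_id in amphora_ids:
            new_amp = rebuild_amphora(amp_id)
            wait_until_healthy(new_amp)
)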
16:17:41 <rm_work> Ah yeah, I'll find time to review those again
16:17:57 <ataraday_> we are going to have it in U cycle? :)
16:18:01 <johnsom> I have some optimization to do there to make the VIP come up sooner, then on to fixing tests.
16:18:36 <johnsom> Currently I plan to post a v1 (non-jobboard) patch first, then circle back for a second patch on v2. This is to make backporting easier.
16:19:03 <johnsom> We will get it in "U" I'm pretty sure.
16:19:32 <johnsom> Ussuri-2 milestone is the week of February 10th
16:20:13 <cgoncalves> I started reviewing the jobboard patches. I paused code review on the third one in the chain, but I tested jobboard on devstack and it looks really good. again, thank you ataraday_!
16:20:26 <johnsom> I also need to get back to reviewing that, but I have prioritized getting failover flow improvements done.
16:22:54 <cgoncalves> I've spent some time on the CI side: fixing some jobs, adding others, etc., plus testing nested virtualization to speed up the jobs where possible
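(A Zuul job typically opts into nested virtualization by using a nodeset whose label maps to nested-virt-capable providers. The sketch below is illustrative only; the nodeset name is made up, and the label is assumed to be one of the OpenDev nested-virt labels:

    - nodeset:
        name: octavia-single-node-nested-virt
        nodes:
          - name: controller
            label: nested-virt-ubuntu-bionic
)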
16:24:54 <haleyb> For anyone needing some light reviews, I have a few py2 cleanup ones in both octavia/octavia-lib
16:25:10 <rm_work> Alright, well ping me directly if you need reviews, I'm still head down for a little bit, going to have to work on a kinda weird patch this week :/
16:25:12 <cgoncalves> this reminds me that, dulek, we do have a fix for the centos 8 amphora posted. I had completely forgotten about it, sorry.
16:25:29 <haleyb> cgoncalves: i saw your review adding jobs, can do a failure dashboard follow-on once it merges
16:25:29 <rm_work> Ah I should look at that, as we use that internally now
16:26:00 <cgoncalves> #link https://review.opendev.org/#/c/698885/
16:26:16 <cgoncalves> haleyb, sounds good, thanks
16:26:40 <dulek> cgoncalves: No problem at all, I just tried switching our gates to the CentOS amp today as Toni was disgusted we're using the Ubuntu one. :D And then I realized it doesn't seem to work.
16:26:59 <cgoncalves> dulek, see link I just pasted
16:28:46 <dulek> cgoncalves: I added it as Depends-On, let's see.
16:30:39 <rm_work> Ok, think we're ready for:
16:30:46 <rm_work> #topic Open Discussion
16:31:58 <cgoncalves> ah, I have one.
16:32:27 <cgoncalves> I have this patch that I'd like to discuss with the team
16:32:28 <cgoncalves> #link https://review.opendev.org/#/c/702845/
16:33:06 <cgoncalves> it adds new jobs to the check queue but also to the gate (hence voting)
16:33:30 <cgoncalves> they are: spare pool, active-standby and cinder
16:33:55 <johnsom> With the current/up coming distro releases, keepalived is rolling over to 2.x.x. I think we should consider switching to VRRP v3 only. This brings some concerns about compatibility with existing amphora.  I think the only use case where we would end up in with one v2 and one v3 is failover. Most of those would be full load balancers, but if someone triggered an amphora failover of just one amp in a pair we
16:33:55 <johnsom> might have an issue.  I have not investigated this yet.
16:34:20 <johnsom> VRRP v3 brings faster failover and better IPv6 support among other enhancements.
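(For reference, moving keepalived to VRRP v3 is largely a one-line configuration change. The snippet below is only an illustration and is not Octavia's actual amphora template; the instance name, interface, IDs, and addresses are placeholders:

    global_defs {
        vrrp_version 3            # keepalived defaults to VRRP v2
    }

    vrrp_instance lb_vip {
        state MASTER
        interface eth1
        virtual_router_id 1
        priority 100
        advert_int 1              # v3 also allows sub-second advert intervals
        virtual_ipaddress {
            203.0.113.10
        }
    }
)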
16:34:28 <cgoncalves> the team has been cautious about adding jobs that rely on other projects. in this patch, I propose adding one that relies on cinder.
16:34:57 <cgoncalves> oops, sorry johnsom. let's discuss your topic first
16:35:05 <johnsom> My personal feeling is I don't want a gate job that depends on cinder.
16:35:17 <johnsom> cgoncalves It's fine, mine is more of a "think about it"
16:36:35 <johnsom> Historically I have purposely disabled cinder in our tempest tests (it's on by default) because it has broken job runs, usually with broken dependencies.
16:36:56 <cgoncalves> maybe we can discuss the other two jobs first: spare pool and active-standby. these are important features in Octavia that provide high availability. would anyone not agree to adding those to the gate?
16:36:57 <johnsom> Until the volume-based amphora work, we did not need cinder.
16:37:16 <cgoncalves> lol, we keep on conflicting xD
16:38:15 <johnsom> I think both spares and act/stdby need to be voting.
16:38:23 <rm_work> Yes
16:38:33 <johnsom> It would be nice to have multi-node working, especially for act/stdby
16:38:56 <johnsom> Also, now with flavors supporting topology, we can probably consolidate some of these tests.
16:39:00 <rm_work> Ugh, yeah, we've kinda put that on hold right? No one is looking at it recently?
16:39:26 <cgoncalves> the two-node job started failing because devstack dropped Xenial support. there's a patch up to fix it
16:39:27 <cgoncalves> #link https://review.opendev.org/#/c/703365/
16:39:28 <johnsom> Yeah, it would be super great if someone could work on multi-node
16:39:46 <johnsom> Well, it was broken before that too, but maybe that is fixed now too
16:40:05 <rm_work> Yeah it's been broken for a while, didn't think that was what broke it
16:40:24 <haleyb> i can look once that merges, looking at the grenade job too
16:40:35 <cgoncalves> I didn't look at other problems; I just upgraded it to bionic and it passed, but it may be unstable
16:41:22 <rm_work> Hmm k
16:41:38 <johnsom> haleyb Thank you!
16:41:38 <rm_work> Yeah can try to get that to pass and merge soon
16:41:41 <cgoncalves> so what I'm understanding is that no one disagrees with adding spare pool and active-standby to the gate, and cinder should be non-voting. if that's so, I will update the patch
16:42:26 <rm_work> Yep
16:43:31 <openstackgerrit> Carlos Goncalves proposed openstack/octavia master: Fix jobs not running and add new ones to the gate  https://review.opendev.org/702845
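(Roughly, the kind of Zuul layout change being discussed looks like the sketch below; the job names and queue name are illustrative and not necessarily the ones in the patch:

    - project:
        check:
          jobs:
            - octavia-v2-act-stdby-dsvm-scenario
            - octavia-v2-dsvm-scenario-amphora-spare-pool
            - octavia-v2-dsvm-cinder-amphora:
                voting: false     # depends on another service, so check-only
        gate:
          queue: octavia
          jobs:
            - octavia-v2-act-stdby-dsvm-scenario
            - octavia-v2-dsvm-scenario-amphora-spare-pool
)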
16:43:33 <johnsom> cgoncalves Gave my +2 to the bionic nodes patch based on the previous run issue.
16:44:01 <cgoncalves> thanks!
16:45:35 <rm_work> Ok, anything else?
16:47:29 <rm_work> Guess not? Thanks for coming today!
16:47:34 <johnsom> o/
16:47:36 <rm_work> Make sure to toss a coin to your Witcher, folks. He's a friend to humanity.
16:47:42 <rm_work> #endmeeting