16:00:01 <rm_work> #startmeeting Octavia
16:00:02 <openstack> Meeting started Wed Jul 17 16:00:01 2019 UTC and is due to finish in 60 minutes.  The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 <openstack> The meeting name has been set to 'octavia'
16:00:11 <rm_work> Morning everyone. :)
16:00:15 <ataraday_> hi
16:00:31 <johnsom> o/
16:00:48 <cgoncalves> hi
16:01:05 <gthiemonge> hi
16:02:04 <rm_work> moving right along!
16:02:05 <rm_work> #topic Announcements
16:02:15 <johnsom> Shanghai Summit session voting is open until the 22nd
16:02:18 <rm_work> Voting is happening for Shanghai summit talks!
16:02:22 <rm_work> Ah, johnsom beat me
16:02:23 <johnsom> #link https://www.openstack.org/summit/shanghai-2019/vote-for-speakers
16:02:44 <rm_work> That's all I had.
16:02:45 <johnsom> There are a bunch of Octavia related talks!  Search for "Octavia" in the voting.
16:03:04 <johnsom> I hope you all will review those sessions and vote appropriately on them.
16:03:35 <johnsom> Interestingly enough, only one has a core member as part of the presentation. So good community involvement!
16:04:06 <johnsom> rm_work Have you set up the Octavia PTG etherpad for Shanghai?
16:04:32 <rm_work> Not yet. Maybe someone wants to volunteer to be the official Octavia PTG Etherpad Manager? :)
16:04:47 * johnsom Steps back, but can create it
16:05:32 <johnsom> Do we have a name for the next release yet?
16:06:12 <rm_work> Ukelele? :D
16:06:19 <cgoncalves> not that I'm aware of
16:06:31 <johnsom> #link https://etherpad.openstack.org/p/octavia-shanghai-U-ptg
16:06:38 <johnsom> "U" it is then
16:08:52 <rm_work> Coooool. Ok.
16:09:05 <rm_work> #topic Brief progress reports / bugs needing review
16:09:26 <rm_work> johnsom and I were still working on the amp refactor, though I've been pulled away this week
16:09:33 <rm_work> I think you've got it close?
16:09:43 <rm_work> eyes + testing would be great
16:09:48 <cgoncalves> THANK YOU to both!
16:09:51 <rm_work> I think we're at that stage now?
16:10:26 <rm_work> #link https://review.opendev.org/#/c/668068/
16:10:27 <johnsom> I am almost done with the coverage cleanup. Just a few more files to do. Then I need to do the v1->v2 clone.
16:10:33 <cgoncalves> I started running some tests, focusing on CentOS 7 with HAProxy 1.8 and 1.5
16:10:42 <rm_work> yeah, but testing and general review could begin
16:10:57 <johnsom> I have tested: standalone, act/stdby, with old xenial image, with UDP, with TLS (barbican tempest test).
16:11:01 <ataraday_> I finished amphora flows more or less, hopefully will do loadbalancers this week, then will be able to have updated PoC patch with taskflow jobboard usage.
16:11:28 <johnsom> I hope to hit a "could start reviewing" state late today.
16:11:48 <ataraday_> I've updated https://review.opendev.org/#/c/659538/ rm_work, could you take a look?
16:11:49 <johnsom> cgoncalves Thanks for helping with CentOS tests! Much appreciated.
16:12:58 <cgoncalves> I had some time to work on the VIP ACL RFE. I think it is also ready for review
16:13:00 <cgoncalves> #link https://review.opendev.org/#/q/topic:vip-acl
16:13:33 <ataraday_> Not sure it makes sense to put links for all my changes...
16:13:46 <ataraday_> Anyway review is really appreciated
16:13:49 <cgoncalves> unfortunately our unit and functional jobs do not pick up the depends-on, so the octavia patch fails at CI. it depends on the octavia-lib patch https://review.opendev.org/#/c/659625/
16:13:51 <ataraday_> #link https://review.opendev.org/#/c/662791/
16:14:08 <johnsom> ataraday_ I reviewed one of your patches, is that ready for another look?
16:15:07 <ataraday_> johnsom, yes, if you have time please check my comments and updates
16:15:22 <johnsom> +2
16:15:31 <johnsom> oops, +1 I meant. grin
16:15:43 <johnsom> Getting ahead of myself.  I will review
16:15:53 <ataraday_> :)
16:16:15 <cgoncalves> I also quickly proposed two patches: one to change centos amps to centos-minimal DIB element, and another one to honor the CLOUD_INIT_DATASOURCES option in our disk image create script
16:20:00 <rm_work> ahh so that will make them a little smaller / faster to boot I hope? :D
16:20:26 <cgoncalves> yes. I have other ideas on how to reduce the size, but changes would be in DIB
16:20:57 <cgoncalves> uninstall linux-firmware (~172 MB) and delete rescue initrd+vmlinuz in /boot (~50 MB)
16:21:04 <johnsom> FYI, the DIB change for the smaller Ubuntu images has been released
16:21:07 <johnsom> #link http://tarballs.openstack.org/octavia/test-images/
16:21:11 <rm_work> cool
16:21:18 <johnsom> Update your DIB if you want smaller images
16:21:20 <rm_work> that's a big (lol) win IMO :)
16:22:09 <rm_work> ok is that it? we can move on to
16:22:10 <rm_work> #topic Open Discussion
16:22:19 <cgoncalves> centos-minimal 520 MB - 170 linux-firmware - 50 rescue = 300 MB
16:22:27 <johnsom> Yeah, back down to "reasonable" IMO. I would really like to see CentOS there too, so +1 to cgoncalves
16:22:29 <rm_work> that'd be great
16:23:24 <rm_work> anything for open discussion today?
16:23:57 <cgoncalves> is anyone aware of any recent change that might have impacted octavia master PY3, stein and queens?
16:24:05 <cgoncalves> https://review.opendev.org/#/c/634988/
16:24:39 <cgoncalves> I quickly checked this morning but couldn't spot the problem. the amp times out on boot
16:25:03 <cgoncalves> its syslog looks okay, though
16:25:34 <johnsom> Yeah, I saw one of those this week too. I just guessed it was a nova issue and re-checked. I should have looked in our new console logs.... lol
16:25:40 <rm_work> looks kinda random, not specifically stein/rocky related
16:26:03 <cgoncalves> the recheck result was the same: failed on same 3 jobs
16:26:18 <johnsom> Mine passed
16:26:38 <cgoncalves> ok, just wanted to ask if someone knew of something. I'll look into it
16:26:48 <johnsom> I know there is a new nodepool region that infra brought online that was having network problems over the weekend
16:27:05 <rm_work> was it? the failed gate was on different ones
16:27:19 <rm_work> i would recheck again...
16:27:24 <cgoncalves> rm_work, check last two gate results
16:27:29 <rm_work> hmm i guess you have a few in there but
16:27:29 <johnsom> "fortneubula" was having networking problems
16:27:38 <cgoncalves> ah, wait. sorry
16:27:40 <rm_work> i am still hesitant to call it a specific issue with this sample set
16:27:55 <cgoncalves> I was looking at the CI table next to the voting table
16:28:09 <cgoncalves> never mind, folks. sorry. rechecking
16:28:50 <rm_work> ok, anything else for open discussion?
16:29:11 <johnsom> cgoncalves Yeah, that one you linked was a fortnebula nodepool. I suspect was related to the networking issues
16:29:51 <cgoncalves> johnsom, do they have nested virt enabled? xD
16:30:02 <johnsom> I don't know. We have it turned off
16:30:43 <rm_work> I will mention that I am a little concerned about our review velocity currently... Myself included... I am not sure whether at the current pace we'll get much of the stuff we need done before even more builds up. So, if you can find time, please do reviews, especially non-cores! Remember, that's the path to core-hood. ^_^
16:32:57 <johnsom> +1
16:36:29 <rm_work> hokay
16:36:33 <rm_work> so if that's it...
16:36:54 <rm_work> we can call this meeting adjourned and go enjoy the rest of the day :D
16:37:06 <rm_work> thanks for stopping by!
16:37:07 <rm_work> #endmeeting