20:00:02 <johnsom> #startmeeting Octavia
20:00:03 <openstack> Meeting started Wed Jan 17 20:00:02 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 <openstack> The meeting name has been set to 'octavia'
20:00:11 <johnsom> Hi folks
20:00:13 <nmagnezi> o/
20:00:17 <longstaff> hi
20:00:17 <Alex_Staf> o/
20:00:19 <johnsom> Sounds like we have a good crowd today
20:00:23 <xgerman_> o/
20:00:45 <johnsom> #topic Announcements
20:00:54 <johnsom> Final reminder: feature freeze - Queens MS3 is coming January 22nd
20:01:01 <johnsom> #link https://releases.openstack.org/queens/schedule.html
20:01:06 <johnsom> This is next week folks!
20:01:22 <jniesz> hi
20:01:40 <johnsom> Also, please note that all requirements changes need to be in next week as well as global requirements will freeze
20:02:04 <rm_work> ok, I will check today to see if there's anything of mine that's waiting on me
20:02:08 <johnsom> I am closely monitoring the openstacksdk GR change to make sure it makes it for our dashboard.
20:02:28 <johnsom> Rocky release schedule has been published
20:02:31 <nmagnezi> johnsom, do we have a review priority list?
20:02:35 <johnsom> #link https://releases.openstack.org/rocky/schedule.html
20:03:01 <johnsom> Priority bug/patch list for Queens
20:03:10 <johnsom> #link https://etherpad.openstack.org/p/Octavia-Queens-Priority-Review
20:03:18 <johnsom> nmagnezi It was next on the agenda....
20:03:27 <nmagnezi> sorry :)
20:03:33 <johnsom> NP
20:03:54 <johnsom> Priority now is getting feature patches merged. Then we will have a bit of time to focus on bugs.
20:04:17 <johnsom> PTL nominations for Rocky open January 29th
20:04:20 <Alex_Staf> :( I am QA, this will always be sad for me
20:04:33 <johnsom> #link https://governance.openstack.org/election/
20:04:41 <rm_work> I nominate johnsom as PTL!
20:04:44 <rm_work> Four more years!
20:04:59 <nmagnezi> +2
20:05:00 <nmagnezi> :)
20:05:03 <johnsom> So now is the time to get your company buy-in if you are interested in running for PTL
20:05:19 <johnsom> You guys have to wait until the 29th
20:05:21 <johnsom> grin
20:05:46 <johnsom> If anyone has questions about the PTL role, feel free to ping me
20:05:58 <johnsom> Also of note, the provider specification has merged!
20:06:04 <jniesz> johnsom are you able to run again?
20:06:30 <johnsom> jniesz I don't know yet
20:06:53 <johnsom> It's too early for the pointy-haired types to have considered it...  grin
20:07:07 <cgoncalves> able and willing to :)
20:07:26 <johnsom> Any other announcements today?
20:08:24 <johnsom> #topic Brief progress reports / bugs needing review
20:08:50 <johnsom> I am busy on act/act and making sure we have a plan for MS3.
20:09:13 <johnsom> I posted a patch for the amphora driver that merges the OVS and L3 needs.
20:09:55 <johnsom> I am currently working on the OVS element for the amphora image. This is a bit complicated as the distros do not seem to have a version that has the features we need.
20:10:13 <Alex_Staf> johnsom, BTW are there plans for Octavia to work with OVN?
20:10:27 <jniesz> johnsom, usually the cloud archive repos have newer ovs
20:10:43 <jniesz> distros ovs version is usually way behind
20:10:47 <Alex_Staf> I am working on moving neutron lbaas tests to the octavia plugin to have very basic coverage
20:11:14 <johnsom> jniesz, yes, but those also bring other baggage...
20:11:41 <jniesz> yea, I've also compiled from source for new ovs
20:11:47 <rm_work> Alex_Staf: so, I would push the brakes a bit on your work there actually
20:12:01 <Alex_Staf> rm_work, do tell :)
20:12:02 <johnsom> Alex_Staf Currently, I don't think octavia needs to be aware of OVN itself.  There is a proposed driver for it however on the neutron-lbaas side.
20:12:04 <rm_work> Alex_Staf: A) we're explicitly not directly porting tests, because our old tests were bad
20:12:26 <rm_work> Alex_Staf: B) I've noticed a lot of your patches are duplicates of stuff we already have in progress (other patches)
20:12:33 <rm_work> and that are almost complete
20:12:51 <rm_work> I would recommend taking a day or two to look at what already is in progress
20:12:59 <rm_work> and maybe review / test / contribute to those patches
20:13:03 <Alex_Staf> rm_work, pls share the patches. I know there is a listener patch but it has no traffic in it
20:13:03 <johnsom> +1 Yes, this tempest-plugin should be legacy free and aligned to the current tempest plugin requirements.  The old tests are horrible and we have explicitly not copied them over.
20:13:49 <rm_work> Alex_Staf: just look at https://review.openstack.org/#/q/project:openstack/octavia-tempest-plugin
20:13:51 <Alex_Staf> johnsom, rm_work ok, could u pls expand on why u think they were bad? (I did not write them so it is ok :))
20:14:13 <Alex_Staf> I am asking bcus I think they provide us with very basic nice coverage
20:14:21 <rm_work> so for example, your listener test
20:14:34 <rm_work> https://review.openstack.org/528727 (Loadbalancer test_listener_basic test added.)
20:14:39 <rm_work> is literally right next to this one in the list:
20:14:43 <johnsom> One other progress update on my part (probably should have been an announcement): Ubuntu has posted an early version of bionic (18.x) that will be the new LTS.  It has the old 1.7.9 haproxy in it from artful. I have opened a bug to get 1.8-stable in bionic: https://bugs.launchpad.net/bugs/1743465
20:14:44 <openstack> Launchpad bug 1743465 in haproxy (Ubuntu) "Bionic should have haproxy 1.8-stable" [Undecided,Confirmed]
20:14:50 <rm_work> https://review.openstack.org/#/c/492311/ (Add basic tests for listeners)
20:15:09 <johnsom> Please upvote that bug if you care about getting 1.8 into the next LTS that OpenStack will adopt.
20:15:43 <rm_work> which has been in development since August
20:15:46 <johnsom> #link https://review.openstack.org/#/q/project:openstack/octavia-tempest-plugin+status:open
20:15:53 <Alex_Staf> rm_work, agree - but this test does not send traffic to be load balanced. It is a CRUD API test basically
20:15:57 <johnsom> That may help, it is a list of the open patches for the tempest-plugin
20:16:24 <rm_work> we have a test already merged that sends traffic
20:16:54 <rm_work> https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/tests/v2/scenario/test_basic_ops.py#L62-L92
20:16:56 <Alex_Staf> the basic test ?
20:16:58 <rm_work> yes
20:17:30 <nmagnezi> might be worth having a storyboard for newly suggested tests so we can have a discussion about new tests before one invests time in coding them..
20:17:32 <rm_work> we will need more complex ones eventually for like, L7 Session Persistence and the like
20:17:36 <johnsom> Alex_Staf The issues with the old tempest tests are: 1. They do not adhere to the tempest-plugin stable APIs, they use legacy code from tempest. 2. They are laid out poorly, loading too many things into single calls, so things can't be reused. Etc.
20:17:37 <rm_work> but yeah
20:17:51 <rm_work> what i'm saying is, we need to step back and discuss what tests we need, which ones are in flight, etc
20:18:07 <rm_work> before people end up doing a ton of duplicate or unnecessary work
20:18:10 <Alex_Staf> ok guys I will not proceed with those
20:18:31 <Alex_Staf> I will start proposing tests from my test plan though
20:18:36 <rm_work> as an action item, people who are interested in testing (hopefully a lot of us) should schedule a time to actually go through this
20:18:43 <Alex_Staf> I have a bunch of tests I need to automate
20:18:44 <rm_work> and make an etherpad we can use as a development guide
20:19:02 <rm_work> 1) Agree what tests we need to have
20:19:06 <rm_work> 2) Lay them out logically
20:19:11 <johnsom> #link https://storyboard.openstack.org/#!/project/910
20:19:14 <rm_work> 3) Add links to ones that are in progress
20:19:15 <johnsom> Stories can go there.
20:19:20 <xgerman_> is our google doc with test cases still floating around?
20:19:25 <rm_work> LOL
20:19:28 <rm_work> that was from
20:19:30 <rm_work> oh man
20:19:36 <rm_work> like 2016?
20:19:37 <johnsom> years ago
20:19:38 <rm_work> 2015?
20:19:46 <rm_work> I might have access somewhere...
20:19:47 <xgerman_> LBs didn’t change that much…
20:20:09 <rm_work> anyway, again, what i am saying is we need to meet about this
20:20:09 <johnsom> True, it is still probably valid to some degree.  Let me look if I have a link
20:20:16 <rm_work> because it will eat our whole meeting if we let it
20:20:34 <rm_work> so let's agree to meet soon, and then move on with this meeting :P
20:20:56 <rm_work> I am also very interested in getting testing sorted out
20:21:09 <Alex_Staf> cool I already can't wait
20:23:03 <johnsom> Yeah, also interested in helping out with testing.
20:23:10 <xgerman_> +1
20:23:14 <nmagnezi> +1
20:23:22 <Alex_Staf> my main effort is testing so let's make it count :)
20:23:22 <rm_work> would be good to at least have all objects covered with CRUD
20:23:29 <johnsom> Any other progress updates today?
20:23:40 <rm_work> Alex_Staf: yeah i was not saying STOP WORKING, just ... focus on review/test/contribution to the existing patches first
20:23:43 <rm_work> so we can get those merged
20:23:47 <rm_work> and then move on to the harder stuff
20:23:56 <johnsom> Yeah, we need the basic CRUD to close our community goal...
20:23:58 <Alex_Staf> rm_work, np  this is how I understood it
20:24:07 <cgoncalves> I have made some progress on the health manager dynamic vif instantiation thingy PoC
20:24:25 <johnsom> Ah, for the lb-mgmt-net
20:24:37 <cgoncalves> the thing I mentioned a few weeks ago (last year!) that was considering os-vif
20:24:51 <johnsom> Cool, feel free to put that on the agenda when you are ready to present it.
20:25:00 <cgoncalves> I have uploaded a super early patch set to os-vif
20:25:07 <cgoncalves> I will :)
20:25:34 <cgoncalves> #link https://review.openstack.org/#/c/533713/
20:25:41 <johnsom> #topic CLI for amphora actions (failover, etc.) (xgerman)
20:25:50 <johnsom> #link https://review.openstack.org/#/c/532424/
20:26:05 <johnsom> Ugh, the agenda got a little out of order, sorry Alex, you are next!
20:26:33 <xgerman_> ok, my turn?
20:26:42 <Alex_Staf> xgerman_, shoot
20:27:21 <xgerman_> yeah, so my main worry is that if we keep adding Octavia provider-specific functionality to the client without marking it as such, things will get confusing for users
20:27:36 <xgerman_> imagine another provider adds special functionality to the CLI…
20:27:54 <xgerman_> how does a user know which commands are for which provider?
20:28:01 <rm_work> yes, i actually agree
20:28:12 <rm_work> i kinda DO want a namespace "octavia" for these things
20:28:17 <rm_work> because if you think about it
* cgoncalves wonders if 'octavia' is the best name for the reference implementation's provider
20:28:24 <rm_work> we have `loadbalancer` for /lbaas/
20:28:31 <rm_work> we should have `octavia` for /octavia/
20:28:42 <rm_work> cgoncalves: that's ... a fair point :/
20:28:54 <xgerman_> cgoncalves: +1
20:29:04 <rm_work> where we are now is where i kept trying to get people to go originally, like 3 years ago, but at the time politically we had to play things a little differently
20:29:09 <johnsom> Well, it is the bundled reference (default) driver
20:29:33 <cgoncalves> putting that naming aside, wouldn't most/all providers provide a failover api anyway?
20:29:45 <xgerman_> we have that with load balancer
20:29:47 <rm_work> maybe? but we can't guarantee it's consistent
20:29:47 <johnsom> I agree that the currently proposed: openstack loadbalancer amphora failover is a bit confusing.
20:29:55 <rm_work> in how it works, or what it requires as input
20:29:55 <xgerman_> but this talks about amphora failover
20:30:21 <rm_work> i would recommend we do `openstack octavia amphora failover` and `openstack octavia amphora list`
20:30:24 <rm_work> for example
20:30:38 <rm_work> i am pretty sure the namespace is clear >_>
20:30:39 <johnsom> I think we will have a hard time getting "openstack octavia" approved, as they call out *not* including project names/code names in the OSC docs.
20:30:51 <rm_work> eugh
20:30:52 <jniesz> wouldn't failover be relevant to all providers that used amps?
20:30:57 <jniesz> some providers don't use amps
20:31:12 <cgoncalves> -1 for 'openstack octavia' because openstack cli is supposed to be service implementation agnostic
20:31:13 <rm_work> jniesz: well, we have no idea what the API for that would look like
20:31:26 <cgoncalves> on the other side, there's 'openstack congress' for example
20:31:29 <Alex_Staf> I think openstack loadbalancer is very good
20:31:31 <rm_work> well, this is not something that's agnostic to service implementation
20:31:41 <rm_work> it's specifically octavia
20:31:58 <rm_work> this won't work now or probably ever for F5 / A10 / etc
20:32:06 <rm_work> because it's not part of the loadbalancer API yet
20:32:07 <cgoncalves> rm_work: true. I also don't like much 'openstack loadbalancer' either but I can't think of a better name for it
20:32:26 <nmagnezi> for what it's worth, there are precedents for implementation-specific API calls even in the neutron-lbaas API (something about which agent is hosting which loadbalancer)
20:32:35 <rm_work> when/if we make a version of the call that's architected to be exposed inside /lbaas/ (the namespace for the generic calls) then we would want those CLI options to be available
20:32:35 <johnsom> I hate to make it so long, but maybe "openstack loadbalancer driver octavia amphora failover" LOL
20:32:41 <xgerman_> well, we are ok with openstack loadbalancer for the functionality every provider *should* support
20:32:42 <Alex_Staf> the cli should be comfortable for the user, and the API path should be matched to that and not the opposite
20:32:45 <nmagnezi> johnsom, LOL +1
20:32:59 <Alex_Staf> johnsom, +1
20:33:01 <rm_work> xgerman_: +1
20:33:04 <rm_work> that's what i'm saying
20:33:24 <xgerman_> the question is where do we put provider specific stuff
20:33:27 <Alex_Staf> I, as a user, would like to see what I am doing in the command I execute, even if it is long
20:34:02 <rm_work> so, we DO have an `openstack loadbalancer failover` which is different from what `openstack octavia amphora failover` would do
20:34:14 <rm_work> not to mention there's LIST/GET commands for amphora too
20:34:23 <johnsom> One other consideration "openstack extension" exists...  Maybe we can build on that
20:34:25 <xgerman_> yes, failing over a load balancer has meaning for other providers
20:34:29 <rm_work> I vote we at least try to get `openstack octavia` approved
20:34:38 <cgoncalves> I suggest we check how other projects are doing, e.g. neutron (obviously), with regard to provider-specific commands and generic ones
20:34:48 <xgerman_> +1
20:35:35 <johnsom> Yeah, this is a good conversation for the PTG too, though that is a while out.  In Denver I just camped in the OSC room for an hour and hashed out our issues....
20:36:00 <rm_work> yeah, I'm still looking at whether i can get to that or not
20:36:11 <rm_work> it's looking a lot like "if i can crash on someone's floor for the week"
20:36:17 <rm_work> then I can buy my own airfare...
20:36:31 <xgerman_> :-(
20:36:45 <rm_work> similar to you in AUS I guess
20:36:53 <xgerman_> yep
20:37:16 <johnsom> So, let's take an action item to go research how other projects have handled the driver specific commands, and come back next week with some ideas.
20:37:20 <johnsom> #link https://etherpad.openstack.org/p/octavia-drivers-osc
20:37:33 <johnsom> Here is an etherpad we can put down references and ideas.
20:37:37 <rm_work> Or I might just suck it up and get a cheap airbnb if such a thing exists
20:37:55 <xgerman_> hostel?
20:38:07 <rm_work> in europe those tend to be OK, yeah
20:38:08 <xgerman_> rm_work also try travel assistance
20:38:13 <cgoncalves> johnsom: thanks!
20:38:13 <rm_work> i think it's too late
20:38:47 <xgerman_> nope: #link http://superuser.openstack.org/articles/tips-for-getting-a-travel-grant-to-the-next-openstack-summit/
20:38:48 <rm_work> as usual, corporate is unclear on whether they'll do travel, until after the travel assist program finishes
20:39:00 <rm_work> kk
20:39:01 <rm_work> i'll file
20:39:45 <johnsom> Joy, more zuul fun
20:40:07 <johnsom> Ok, moving on
20:40:16 <johnsom> #topic Testing automation - L7 content switching coverage plan- what is planned for that? (Alex_Staf)
20:40:16 <rm_work> though i can't find the dublin page :/
20:40:45 <johnsom> rm_work https://www.openstack.org/ptg#tab_travel
20:41:09 <Alex_Staf> Ok so it is related to testing and we are not there but  [originally typed on a Hebrew keyboard layout]
20:41:12 <Alex_Staf> ohhhhh
20:41:14 <johnsom> rm_work or specifically: https://openstackfoundation.formstack.com/forms/travelsupportptg_dublin
20:41:16 <rm_work> anyone here doing single occupancy that would just upgrade to a double and i can give you the difference :P
20:41:21 <rm_work> it's small
20:41:25 <johnsom> Ha, ummm, do I need google translate for this? grin
20:41:30 <Alex_Staf> ok this is related to test of l7
20:41:41 <Alex_Staf> we should have a robot for wrong language
20:41:59 <johnsom> Ouch, google translate fails me
20:42:35 <Alex_Staf> I made a matrix with the feature coverage for planning but I think rm_work once told me there will be some tool that will run automatically on all options ?
20:43:13 <johnsom> DDT? I'm not sure what the status of that is....
20:43:25 <xgerman_> yep
20:43:32 <xgerman_> but worth looking into
20:43:53 <Alex_Staf> oh, is this what those DDT tests in lbaas are?
20:44:00 <xgerman_> yep
20:44:04 <Alex_Staf> I could not figure that out
20:44:41 <johnsom> Yeah, that got started, but never finished.  They currently don't run in the gate if I remember correctly
20:45:12 <johnsom> If I remember it was based on this module:
20:45:14 <johnsom> #link https://pypi.python.org/pypi/testscenarios
20:45:18 <xgerman_> they understandably take a very long time… so they ran afoul of the gate time limit
20:45:51 <johnsom> Which hasn't been updated since 2015
20:46:07 <johnsom> So, that might not be a good idea
20:46:53 <Alex_Staf> I will start automating tests even if it is just for Red Hat - we need and want that ASAP. I would love to write them in a way that fits upstream so they can be contributed, so I think we should have a plan next week
20:47:02 <johnsom> Yes, we had gate time limit issues with the way the tests were designed.  For things like this we should really work to design tests to re-use the LB as much as possible to limit the number of VM boots.
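[The data-driven pattern discussed above - one test body run across a matrix of L7 scenarios while reusing a single load balancer instead of booting a new VM per case - can be sketched with the standard library's unittest.subTest. This is purely illustrative: the scenario matrix, rule names, and evaluate_rule helper below are hypothetical stand-ins, not Octavia or testscenarios code.]

```python
import unittest

# Hypothetical L7 scenario matrix: each entry is one content-switching case.
# A real test would create the load balancer once, then exercise every
# scenario against it, keeping gate runtime down.
L7_SCENARIOS = [
    {"rule": "PATH", "compare": "STARTS_WITH", "value": "/api"},
    {"rule": "HOST_NAME", "compare": "EQUAL_TO", "value": "example.com"},
    {"rule": "HEADER", "compare": "CONTAINS", "value": "mobile"},
]

def evaluate_rule(rule, compare, value, request):
    # Stand-in for "send traffic and check which pool answered".
    field = request.get(rule.lower(), "")
    if compare == "STARTS_WITH":
        return field.startswith(value)
    if compare == "EQUAL_TO":
        return field == value
    if compare == "CONTAINS":
        return value in field
    return False

class TestL7Matrix(unittest.TestCase):
    def test_all_scenarios(self):
        # One simulated request; expensive setup happens once, not per case.
        request = {"path": "/api/v2", "host_name": "example.com",
                   "header": "x-mobile"}
        for sc in L7_SCENARIOS:
            # subTest reports each scenario's failure separately without
            # aborting the remaining scenarios.
            with self.subTest(**sc):
                self.assertTrue(evaluate_rule(sc["rule"], sc["compare"],
                                              sc["value"], request))
```

[The testscenarios module linked earlier automates the same multiplication at the TestCase level; the subTest form shown here is just the stdlib equivalent of the idea.]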
20:47:20 <johnsom> Alex_Staf +1
20:47:56 <johnsom> My #1 concern is making sure we adhere to only using the stable interface:
20:47:58 <johnsom> #link https://docs.openstack.org/tempest/latest/plugin.html#stable-tempest-apis-plugins-may-use
20:48:25 <johnsom> This is where the old tests go off the rails due to legacy
20:49:04 <Alex_Staf> Ok I will consult face to face with nmagnezi in the office
20:47:06 <johnsom> #2 is making it nicely modular so we can add new tests easily.
20:49:09 <Alex_Staf> not to miss anything
20:49:27 <johnsom> Alex_Staf was there a specific topic about L7 tests?
20:49:50 <Alex_Staf> johnsom, yes to ask regarding the automatic matrix
20:49:59 <Alex_Staf> as I did and got an answer
20:50:00 <Alex_Staf> :)
20:50:07 <johnsom> Ok
20:50:26 <nmagnezi> Alex_Staf, might be worth having a dedicated community meeting about tests :-)
20:50:46 <Alex_Staf> nmagnezi, indeed
20:51:37 <johnsom> I think we need to align our API testing too.  Currently this is done with the functional tests that are in-tree.  I think we should talk about moving those out of tree into the plugin or ...  I don't think we need duplicate tests in the tempest-plugin
20:52:08 <johnsom> Yeah, good idea.
20:52:17 <xgerman_> +1
20:52:33 <johnsom> #link https://etherpad.openstack.org/p/octavia-testing-planning
20:53:21 <Alex_Staf> is there anybody else planning to write tests ?
20:54:16 <cgoncalves> FYI, added these 2 new etherpads to https://etherpad.openstack.org/p/octavia-ptg-rocky
20:54:26 <johnsom> <PTL hat> Testing is everyone's responsibility</PTL hat>
20:54:43 <johnsom> I would like to help with tests, eventually
20:55:04 <nmagnezi> johnsom, let's see how the meeting minutes keep the formatting with this ^^ :)
20:55:12 <johnsom> Ha
20:55:22 <Alex_Staf> johnsom, I know but everybody has tons of other stuff so it could eventually be deprioritized
20:55:37 <Alex_Staf> nmagnezi, LOL
20:55:43 <johnsom> So let's update that etherpad with thoughts/issues then we can schedule a testing meeting either via IRC or hangouts.
20:56:21 <johnsom> Alex_Staf Once we have a good framework set out, it will become a gating factor to get features merged.
20:56:28 <johnsom> IMO
20:56:35 <Alex_Staf> johnsom, amen to that
20:56:36 <nmagnezi> +1
20:57:07 <johnsom> I think we have just been missing the basic framework for folks to build off of.
20:57:50 <rm_work> yep
20:58:01 <johnsom> #topic Open Discussion
20:58:11 <johnsom> Anything else in the few minutes left?
20:58:26 <jniesz> #link https://review.openstack.org/#/c/528850/
20:58:30 <jniesz> posted some comments
20:58:40 <jniesz> regarding support for multiple service_types for amps
20:59:21 <johnsom> jniesz Ok.  It's good to vote so they don't get overlooked... I see it already has a +2.
20:59:56 <johnsom> I will take a look
20:59:58 <xgerman_> yeah, I'd like to get some things into Q-3 so we have them
21:00:08 <rm_work> yeah i will review my patches today
21:00:08 <johnsom> #endmeeting