20:00:06 <johnsom> #startmeeting Octavia
20:00:07 <openstack> Meeting started Wed Feb  7 20:00:06 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 <openstack> The meeting name has been set to 'octavia'
20:00:16 <johnsom> Hi folks!
20:00:24 <jniesz> hello
20:00:25 <longstaff> hi
20:00:30 <cgoncalves> hi
20:00:30 <xgerman_> o/
20:00:33 <blake> hi
20:00:55 <johnsom> #topic Announcements
20:01:10 <johnsom> #link https://releases.openstack.org/queens/schedule.html
20:01:17 <johnsom> Queens RC1 will be cut today
20:01:32 <johnsom> Some zuul issues and late breaking library issues have delayed this a bit.
20:01:45 <nmagnezi> o/
20:02:02 <johnsom> Currently I am waiting on a couple of release notes issues to get through the gates and then I will do the release.
20:02:37 <johnsom> As it looks at the moment, this will be our Queens release unless something major (bug wise) comes up and we decide we need an RC2.
20:03:17 <johnsom> Thank you to everyone that has been helping with the priority bug lists.  We got a lot done.
20:03:39 <johnsom> The Rocky PTG in Dublin is rapidly approaching
20:03:45 <johnsom> #link https://etherpad.openstack.org/p/octavia-ptg-rocky
20:04:19 <johnsom> Please add your name to the attendance (or not) list and add any topics you think we should discuss.
20:04:57 <johnsom> Also of note at the PTG, there is a proposal for a session around OpenStack releases, LTS, etc.
20:05:03 <johnsom> #link http://lists.openstack.org/pipermail/openstack-dev/2018-February/127057.html
20:05:22 <johnsom> Many of us running OpenStack probably have an interest here.
20:05:52 <johnsom> I fully expect there will be a follow on session during the forum at the summit.
20:06:20 <johnsom> I should also announce that the PTL nominations are still open for a few hours.
20:06:29 <johnsom> #link https://governance.openstack.org/election/
20:07:04 <johnsom> Any other announcements today?
20:07:27 <xgerman_> CfP for Vancouver closing
20:07:43 <xgerman_> #link https://t.e2ma.net/click/6m31w/am5h7v/a2pgoi
20:07:47 <johnsom> Ah, yes, good call.  The call for presentations for the Vancouver summit closes this week.
20:07:53 * xgerman_ took that out of an e-mail
20:08:20 <johnsom> #link https://www.openstack.org/summit-login/login?BackURL=%2Fsummit%2Fvancouver-2018%2Fcall-for-presentations%2F
20:08:37 <johnsom> I have signed us up for a project update session.
20:09:23 <johnsom> #topic Brief progress reports / bugs needing review
20:09:43 <johnsom> As you have probably seen, I have been busy with last minute patch updates for the release.
20:10:12 <johnsom> I have also been looking at some of our gate jobs, working to improve the gate testing situation.
20:10:39 <johnsom> I do have more work to do there, specifically about modernizing our tempest gates. However, this is work that will land for Rocky.
20:12:03 <johnsom> I also took an hour or two to poke at the nova-lxd situation to see how that is progressing and if we could consider using it for gating.  Initial tests are still showing some issues and right now I can't boot an amp with it.  I'm looking at it as a side project, so I'm not setting any expectations.
20:12:45 <johnsom> My motivator is the gate hosts.  Some of them take up to 19 minutes to boot a VM, which leads to tests timing out.
20:13:29 <johnsom> Any other progress reports to note today?
20:14:13 * johnsom wonders if everyone is taking vacation while I battle the gates for our release.....
20:15:02 <johnsom> #topic Octavia testing planning
20:15:10 <johnsom> #link https://etherpad.openstack.org/p/octavia-testing-planning
20:15:30 <johnsom> We started this etherpad to talk about our gate testing situation.  I hope we can move forward on this.
20:15:58 <johnsom> As part of my recent gate job cleanup I hit an issue that speaks to getting some clarity here.
20:16:05 <johnsom> #link https://review.openstack.org/#/c/541001/
20:16:15 <johnsom> Specifically around the API testing.
20:16:45 <johnsom> I split out the tempest API and scenario tests into their own jobs in preparation for additional tests to come in.
20:17:05 <johnsom> I also attempted to setup the API jobs to use the no-op drivers to speed the gate.
20:18:04 <johnsom> For those not familiar with the no-op setup Octavia has, we have the capability to enable no-op (no operation) drivers that cause Octavia to not call out to third-party services such as neutron, nova, barbican, etc.
20:18:44 <johnsom> They implement our internal APIs and allow testing of the API without the penalty of interacting with outside services.
20:19:06 <johnsom> Currently our neutron-lbaas API tempest tests and our functional API tests run against these drivers.
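[For reference, switching to the no-op drivers is just a matter of octavia.conf settings, roughly along these lines -- a sketch only, the exact driver names should be checked against the in-tree options:

    [controller_worker]
    # No-op drivers stop short of calling nova/neutron/the amphora agent,
    # so the API and DB code paths are exercised without external services.
    amphora_driver = amphora_noop_driver
    compute_driver = compute_noop_driver
    network_driver = network_noop_driver
]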
20:19:37 <johnsom> However, the new tempest plugin API tests fail as they are attempting to attach a floating IP to the load balancer under test.
20:20:24 <johnsom> My question to the community is: should we be running the tempest plugin API tests against the no-op drivers, or should they be using live services?
20:21:22 <johnsom> Aside from that, I think our API tempest tests should not be using floating IPs; I think that is a bug in the tests.
20:21:33 <xgerman_> +1
20:21:45 <xgerman_> I think we should limit the real driver to as few tests as possible
20:22:03 <xgerman_> especially API tests shouldn’t rely on the thing actually working…
20:23:12 <nmagnezi> +1 for not using FIPs
20:23:29 <johnsom> Yeah, I kind of agree.  They still test our code paths, just stopping at the external service drivers.
20:24:25 <johnsom> It's the calls to nova that really slow down our test runs. By using no-op we can include many more tests per gate job before we come close to the timeouts.
20:25:43 <johnsom> Ok, please take some time this week to look at the tempest docs and comment on the etherpad about our testing strategy.  We need to make a call on this soon as there is work going on with the tempest plugin.
20:26:01 <johnsom> #topic CLI for driver specific features
20:26:08 <johnsom> #link https://etherpad.openstack.org/p/octavia-drivers-osc
20:26:51 <johnsom> Last week I said I would contact some of the OSC folks to discuss how we expose admin commands that are driver specific via OpenStack client.
20:26:53 <nmagnezi> I was off for a couple of weeks, but I remember this was mentioned 3 weeks ago
20:26:59 * nmagnezi reads the etherpad
20:27:28 <johnsom> Our vote was for "openstack octavia amphora failover" but I had concerns that I knew the OSC team did not want project names involved.
20:27:55 <johnsom> A recommendation that came out of it was "openstack amphora failover"
20:28:27 <johnsom> Removing the project name and/or "loadbalancer" altogether and making it a top level "thing"
20:28:40 <xgerman_> mmh, isn’t that confusing
20:28:52 <johnsom> So, if we loaded an OSC plugin for ADMIN management of amphora it would be "openstack amphora failover"
20:29:36 <johnsom> Likewise (a totally off-the-top-of-my-head example) F5 could have "openstack LTM failover"
20:30:04 <nmagnezi> johnsom, i was not around so I have to ask: why do we hate "openstack loadbalancer amphora" so much?
20:30:07 <johnsom> Thoughts/comments?
20:30:40 <johnsom> nmagnezi Not sure, rm_work? Others that voted that -1?
20:31:11 <nmagnezi> since we cannot use "octavia" (which I actually don't hate in this context) "loadbalancer" looks like something to think about
20:31:56 <xgerman_> my main worry is that it's not clear what is Octavia-specific admin functionality and what is not
20:31:57 <nmagnezi> but again, I was not a part of the discussion up until now so I don't want to take the discussion backwards.
20:32:53 <johnsom> nmagnezi No worries, the discussion is still open (thus the topic), I am just bringing more information to the discussion from my conversations with the OSC folks.
20:33:49 <johnsom> Hmmm, we might be light on quorum today....  Or people are being shy
20:34:09 <nmagnezi> xgerman_, fair enough. but I actually agree with your claim that it might be confusing
20:34:35 <cgoncalves> openstack harmony {loadbalancer, listener, member, pool} {create,delete,update} ?
20:34:36 <nmagnezi> johnsom, yeah..
20:35:18 <xgerman_> we are trying to prevent:
20:35:27 <xgerman_> 1) Somebody tries that on a 3rd party loadbalancer
20:35:50 <xgerman_> 2) A 3rd party loadbalancer adds admin functionality to OSC and so things get really confusing
20:36:56 <openstackgerrit> Merged openstack/octavia master: Fix release notes job  https://review.openstack.org/541810
20:36:57 <johnsom> To some degree there are concerns about name collisions
20:38:12 <xgerman_> yeah, what I'd like to avoid is something like...
20:38:16 <rm_work> I actually like the concept of amphora as a toplevel
20:38:20 <rm_work> but
20:38:31 <rm_work> yeah it doesn't help us a lot for distinguishing internal vs vendor vs whatever
20:38:38 <nmagnezi> I'm not worried about the word amphora (insert johnsom grinning here), but other drivers might collide on the naming indeed..
20:39:09 <xgerman_> or you have a list of top-level objects from each LB vendor and nobody can read that help output
20:39:49 <johnsom> Yeah, tab completion would give a shorter list if it is under something as opposed to "openstack"...
20:40:05 <johnsom> Not sure that is a big factor though
20:40:38 <rm_work> so did they 100% nix "octavia" as a toplevel?
20:40:47 <rm_work> i figured we could schmooze and lobby at Dublin
20:40:48 <rm_work> and get it done
20:41:04 <xgerman_> well, if we make amphora top level - does this make people think we are the amphora driver?
20:41:07 <jniesz> i think putting the vendor namespace directly under openstack is ugly
20:41:14 <johnsom> rm_work yes, the fact that congress is in there is a problem evidently
20:41:50 <johnsom> xgerman_ well, it would be the 'thing' the action would be against
20:41:56 <rm_work> jniesz: actually that is a fair point, if you consider us a vendor
20:41:57 <cgoncalves> rm_work: I would also advise against it. The intent of the openstack CLI is to be implementation-agnostic; openstack is about APIs
20:42:17 <rm_work> let me look at the list again
20:42:28 <nmagnezi> the way I see it, "amphora" as a top level thing is worse than "openstack loadbalancer amphora failover"
20:42:31 <johnsom> cgoncalves Yes, the API is /amphora so it kind of aligns
20:42:47 <cgoncalves> openstack {equilibrium, harmony} as top level?
20:43:25 <johnsom> Ok, I guess not, our api is /v2.0/octavia/amphorae/{amphora_id}/failover
20:43:32 <jniesz> what about if a vendor had multiple plugins, maybe networking and lb?
20:43:41 <johnsom> We chose to put the driver in there
20:43:45 <jniesz> it would all go under openstack <thing>
20:44:30 <johnsom> We are down to 15 minutes in the meeting. Should we table this for next week and think about it more?
20:45:18 <rm_work> i don't mind openstack amphora
20:45:18 <rm_work> but
20:45:32 <rm_work> i liked the concept of somewhere to shove all the admin stuff
20:45:48 <johnsom> Yep
20:45:59 <rm_work> I guess `openstack loadbalancer <vendor> <thing>` is ... maybe better
20:46:11 <xgerman_> if that can be done
20:46:13 <rm_work> openstack loadbalancer octavia amphora failover
20:46:20 <johnsom> Ok, let's give some time for open discussion and come back next week.  We can throw comments/thoughts on the etherpad.
20:46:26 <rm_work> openstack loadbalancer f5 something
20:46:27 <xgerman_> +1
20:46:28 <jniesz> rm_work:+1
20:46:40 <rm_work> k
20:46:50 <rm_work> i'll give my vote to that
20:46:56 <johnsom> #topic Open Discussion
20:46:58 <cgoncalves> rm_work: -1
20:47:14 <johnsom> Other topics to discuss today?
20:47:30 <rm_work> I think i gave it +0 before cause i had questions
20:47:31 <rm_work> cgoncalves: whyfore?
20:47:32 <nmagnezi> rm_work, why not just openstack loadbalancer amphora failover
20:47:48 <rm_work> because amphora are a vendor
20:47:50 <cgoncalves> johnsom: if there's nothing urgent we could discuss the o-hm vif
20:48:09 <cgoncalves> rm_work: too much vendor specific IMO
20:48:13 <rm_work> i mean
20:48:14 <johnsom> cgoncalves Sure thing
20:48:18 <rm_work> that's what we're talking about
20:48:21 <rm_work> we need to put it somewhere
20:48:51 <cgoncalves> so re: https://storyboard.openstack.org/#!/story/1549297
20:49:10 <johnsom> cgoncalves I put some comments on the story/patch. I'm still struggling with the concept, so others should jump in.
20:49:14 <cgoncalves> i'm not sure it has got other cores' attention
20:49:29 <johnsom> cgoncalves I did note that we should use privsep instead of rootwrap
20:49:55 <cgoncalves> johnsom: neutron does not yet use os-vif but it is expected at some point in time they will
20:50:03 <johnsom> cgoncalves +1 I really would like others to join the conversation
20:50:19 <cgoncalves> johnsom: neutron would not be needed on the same host that runs the health monitor
20:50:23 <rm_work> i'm not quite sure i understand either
20:50:34 <xgerman_> yeah, for me it would be helpful to see a proof-of-concept
20:50:59 <cgoncalves> johnsom: neutron would not even be required in the deployment if the driver is 'flat'
20:51:25 <johnsom> So, help me with that.  My understanding is that os-vif is glue code that hooks the neutron host networking to the nova instances. Basically all of the OVS/bridge/libvirt fun stuff
20:51:29 <cgoncalves> xgerman_: that is what I have been working on. I uploaded a very very WIP to gerrit
20:52:02 <xgerman_> ok, will take a look
20:52:17 <cgoncalves> johnsom: kind of. the documentation makes it sound coupled to nova and neutron but it is not
20:53:16 <cgoncalves> johnsom: all os-vif does is make sure the network backend is configured as desired without being too explicit
20:54:01 <cgoncalves> i.e. os-vif users would not need to know how to create switches and ports, and attach them
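[A minimal sketch of that idea, assuming os-vif's plug/unplug API with the Linux bridge plugin -- the names and values below are illustrative and not taken from the WIP patch:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load whatever os-vif plugins are installed

    # The "instance" here is just the controller host that runs the health manager.
    host = instance_info.InstanceInfo(
        uuid='00000000-0000-0000-0000-000000000001',   # illustrative
        name='octavia-health-manager-host')

    # Describe the port to plug; with VIFBridge the plugin wires vif_name into
    # bridge_name itself, so the caller never creates bridges or ports by hand.
    port = vif.VIFBridge(
        id='11111111-1111-1111-1111-111111111111',     # lb-mgmt-net port id (illustrative)
        address='fa:16:3e:00:00:01',
        vif_name='o-hm0',
        bridge_name='br-lb-mgmt')

    os_vif.plug(port, host)    # backend-specific wiring done by the plugin
    # ... assign the management IP to o-hm0 and start the health manager ...
    os_vif.unplug(port, host)  # clean teardown on shutdown
]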
20:54:05 <johnsom> My take (correct me) is this is an alternative to how an operator sets up the lb-mgmt-net: OSA does it with provider networks and VLANs, devstack does it with https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L316, and some deployments just use routes.
20:54:59 <cgoncalves> johnsom: os-vif would be a single place for that
20:55:05 <johnsom> Basically right now we leave it as an activity for the operator to set up.
20:55:42 <cgoncalves> operator/installer
20:56:05 <johnsom> Right, same as neutron
20:56:22 <cgoncalves> the network backend could be ovs, linux bridge, ovn, opendaylight, etc. there are too many to consider
20:57:02 <johnsom> It is super flexible, so hard to code up automation for it as there are a bunch of options. IMO
20:57:19 <johnsom> Right, or even none of the above
20:58:40 <johnsom> I guess what might help me is understanding what the octavia.conf would look like for this os-vif option.
20:58:46 <cgoncalves> all we would need would be an os-vif plugin which is expected to exist for major net drivers
20:59:17 <cgoncalves> johnsom: I think you can see in my WIP patch
20:59:22 <johnsom> Darn, just about out of time.  I can chat after our meeting window though if you have time.
20:59:45 <cgoncalves> unfortunately tonight I cannot, sorry
21:00:06 <johnsom> Ok, maybe we should set up a special IRC meeting to discuss this more
21:00:11 <johnsom> #endmeeting