20:00:06 #startmeeting Octavia
20:00:07 Meeting started Wed Feb 7 20:00:06 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 The meeting name has been set to 'octavia'
20:00:16 Hi folks!
20:00:24 hello
20:00:25 hi
20:00:30 hi
20:00:30 o/
20:00:33 hi
20:00:55 #topic Announcements
20:01:10 #link https://releases.openstack.org/queens/schedule.html
20:01:17 Queens RC1 will be cut today
20:01:32 Some zuul issues and late breaking library issues have delayed this a bit.
20:01:45 o/
20:02:02 Currently I am waiting on a couple of release notes issues to get through the gates and then I will do the release.
20:02:37 As it looks at the moment, this will be our Queens release unless something major (bug wise) comes up and we decide we need an RC2.
20:03:17 Thank you to everyone that has been helping with the priority bug lists. We got a lot done.
20:03:39 The Rocky PTG in Dublin is rapidly approaching
20:03:45 #link https://etherpad.openstack.org/p/octavia-ptg-rocky
20:04:19 Please add your name to the attendance (or not) list and add any topics you think we should discuss.
20:04:57 Also of note at the PTG, there is a proposal for a session around OpenStack releases, LTS, etc.
20:05:03 #link http://lists.openstack.org/pipermail/openstack-dev/2018-February/127057.html
20:05:22 Many of us running OpenStack probably have an interest here.
20:05:52 I fully expect there will be a follow-on session during the forum at the summit.
20:06:20 I should also announce that the PTL nominations are still open for a few hours.
20:06:29 #link https://governance.openstack.org/election/
20:07:04 Any other announcements today?
20:07:27 CfP for Vancouver closing
20:07:43 #link https://t.e2ma.net/click/6m31w/am5h7v/a2pgoi
20:07:47 Ah, yes, good call. The call for presentations for the Vancouver summit closes this week.
20:07:53 * xgerman_ took that out of an e-mail
20:08:20 #link https://www.openstack.org/summit-login/login?BackURL=%2Fsummit%2Fvancouver-2018%2Fcall-for-presentations%2F
20:08:37 I have signed us up for a project update session.
20:09:23 #topic Brief progress reports / bugs needing review
20:09:43 As you have probably seen, I have been busy with last minute patch updates for the release.
20:10:12 I have also been looking at some of our gate jobs, working to improve the gate testing situation.
20:10:39 I do have more work to do there, specifically about modernizing our tempest gates. However, this is work that will land for Rocky.
20:12:03 I also took an hour or two to poke at the nova-lxd situation to see how that is progressing and if we could consider using it for gating. Initial tests are still showing some issues and right now I can't boot an amp with it. I'm looking at it as a side project, so I'm not setting any expectations.
20:12:45 My motivator is the gate hosts. Some of them take up to 19 minutes to boot a VM, which leads to tests timing out.
20:13:29 Any other progress reports to note today?
20:14:13 * johnsom wonders if everyone is taking vacation while I battle the gates for our release.....
20:15:02 #topic Octavia testing planning
20:15:10 #link https://etherpad.openstack.org/p/octavia-testing-planning
20:15:30 We started this etherpad to talk about our gate testing situation. I hope we can move forward on this.
20:15:58 As part of my recent gate job cleanup I hit an issue that speaks to getting some clarity here.
20:16:05 #link https://review.openstack.org/#/c/541001/
20:16:15 Specifically around the API testing.
20:16:45 I split out the tempest API and scenario tests into their own jobs in preparation for additional tests to come in.
20:17:05 I also attempted to set up the API jobs to use the no-op drivers to speed up the gate.
20:18:04 For those not familiar with the no-op setup Octavia has, we have the capability to enable no-op, or no-operation, drivers that cause Octavia to not call out to the third party services such as neutron, nova, barbican, etc.
20:18:44 They implement our internal APIs and allow testing of the API without the penalty of interacting with outside services.
20:19:06 Currently our neutron-lbaas API tempest tests and our functional API tests run against these drivers.
20:19:37 However, the new tempest plugin API tests fail as they are attempting to attach a floating IP to the load balancer under test.
20:20:24 My question to the community is: should we be running the tempest plugin API tests against the no-op drivers, or should they be using live services?
20:21:22 Aside from that, I think our API tempest tests should not be using floating IPs; I think that is a bug in the tests.
20:21:33 +1
20:21:45 I think we should limit the real driver to as few tests as possible
20:22:03 especially API tests shouldn't rely on the thing actually working…
20:23:12 +1 for not using FIPs
20:23:29 Yeah, I kind of agree. They still test our code paths, just stopping at the external service drivers.
20:24:25 It's the calls to nova that really slow down our test runs. By using no-op we can include many more tests per gate job before we come close to the timeouts.
20:25:43 Ok, please take some time this week to look at the tempest docs and comment on the etherpad about our testing strategy. We need to make a call on this soon as there is work going on with the tempest plugin.
20:26:01 #topic CLI for driver specific features
20:26:08 #link https://etherpad.openstack.org/p/octavia-drivers-osc
20:26:51 Last week I said I would contact some of the OSC folks to discuss how we expose admin commands that are driver specific via OpenStack client.
20:26:53 I was off for a couple of weeks, but I remember this was mentioned 3 weeks ago
20:26:59 * nmagnezi reads the etherpad
20:27:28 Our vote was for "openstack octavia amphora failover" but I had concerns that I knew the OSC team did not want project names involved.
20:27:55 A recommendation that came out of it was "openstack amphora failover"
20:28:27 Removing the project name and/or "loadbalancer" altogether and making it a top level "thing"
20:28:40 mmh, isn't that confusing
20:28:52 So, if we loaded an OSC plugin for ADMIN management of amphora it would be "openstack amphora failover"
20:29:36 Likewise (total example off the top of my head) F5 could have "openstack LTM failover"
20:30:04 johnsom, i was not around so I have to ask: why do we hate "openstack loadbalancer amphora" so much?
20:30:07 Thoughts/comments?
20:30:40 nmagnezi Not sure, rm_work? Others that voted that -1?
20:31:11 since we cannot use "octavia" (which I actually don't hate in this context), "loadbalancer" looks like something to think about
20:31:56 my main worry is that it's not clear what is Octavia-specific admin functionality and what is not
20:31:57 but again, I was not a part of the discussion up until now so I don't want to take the discussion backwards.
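For reference on the no-op driver setup discussed under the testing topic above: the drivers are selected in octavia.conf under the [controller_worker] section. A minimal sketch, assuming the Queens-era option names and no-op entry-point aliases (the exact alias strings may differ by release):

    [controller_worker]
    # Stop at Octavia's internal driver interfaces instead of calling out
    # to nova, neutron and the amphora agent; the API and database code
    # paths are still exercised.
    amphora_driver = amphora_noop_driver
    compute_driver = compute_noop_driver
    network_driver = network_noop_driver

This is roughly what the functional API tests already rely on; the open question raised above is whether the tempest plugin API jobs should do the same.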
20:32:53 nmagnezi No worries, the discussion is still open (thus the topic), I am just bringing more information to the discussion from my conversations with the OSC folks.
20:33:49 Hmmm, we might be light on quorum today.... Or people are being shy
20:34:05 xgerman_, fair enough. but I actually also agree your claim about the fact that it might be confusing ais actually more si
20:34:09 xgerman_, fair enough. but I actually also agree with your claim that it might be confusing
20:34:30 sorry pressed enter before I had to.. :)
20:34:35 openstack harmony {loadbalancer, listener, member, pool} {create,delete,update} ?
20:34:36 johnsom, yeah..
20:35:18 we are trying to prevent:
20:35:27 1) Somebody tries that on a 3rd party loadbalancer
20:35:50 2) A 3rd party loadbalancer adds admin functionality to OSC and so things get really confusing
20:36:56 Merged openstack/octavia master: Fix release notes job https://review.openstack.org/541810
20:36:57 To some degree there are concerns about name collisions
20:38:12 yeah, what I'd like to avoid is something
20:38:16 I actually like the concept of amphora as a top level
20:38:20 but
20:38:31 yeah it doesn't help us a lot for distinguishing internal vs vendor vs whatever
20:38:38 I'm not worried about the word amphora (here should be a placeholder for johnsom to grin), but other drivers might collide with the naming indeed..
20:39:09 or you have a list of top level objects from each LB vendor and nobody can read that help
20:39:49 Yeah, tab completion would give a shorter list if it is under something as opposed to "openstack"...
20:40:05 Not sure that is a big factor though
20:40:38 so did they 100% nix "octavia" as a toplevel?
20:40:47 i figured we could schmooze and lobby at Dublin
20:40:48 and get it done
20:41:04 well, if we make amphora top level - does this make people think we are the amphora driver?
20:41:07 i think putting the vendor namespace directly under openstack is ugly
20:41:14 rm_work yes, the fact that congress is in there is a problem evidently
20:41:50 xgerman_ well, it would be the 'thing' the action would be against
20:41:56 jniesz: actually that is a fair point, if you consider us a vendor
20:41:57 rm_work: I also would advise against. the intent of the openstack cli is to be implementation-wise agnostic. openstack is about APIs
20:42:17 let me look at the list again
20:42:28 the way I see it, "amphora" as a top level thing is worse than "openstack loadbalancer amphora failover"
20:42:31 cgoncalves Yes, the api is /amphora so kind of aligns
20:42:47 openstack {equilibrium, harmony} as top level?
20:43:25 Ok, I guess not, our api is /v2.0/octavia/amphorae/{amphora_id}/failover
20:43:32 what about if a vendor had multiple plugins, maybe networking and lb
20:43:41 We chose to put the driver in there
20:43:45 it would all go under openstack
20:44:30 We are down to 15 minutes in the meeting. Should we table this for next week and think about it more?
20:45:18 i don't mind openstack amphora
20:45:18 but
20:45:32 i liked the concept of somewhere to shove all the admin stuff
20:45:48 Yep
20:45:59 I guess `openstack loadbalancer` is ... maybe better
20:46:11 if that can be done
20:46:13 openstack loadbalancer octavia amphora failover
20:46:20 Ok, let's give some time for open discussion and come back next week. We can throw comments/thoughts on the etherpad.
20:46:26 openstack loadbalancer f5 something
20:46:27 +1
20:46:28 rm_work: +1
20:46:40 k
20:46:50 i'll give my vote to that
20:46:56 #topic Open Discussion
20:46:58 rm_work: -1
20:47:14 Other topics to discuss today?
20:47:30 I think i gave it +0 before cause i had questions
20:47:31 cgoncalves: whyfore?
20:47:32 rm_work, why not just openstack loadbalancer amphora failover
20:47:48 because amphora are a vendor
20:47:50 johnsom: if there's nothing urgent we could discuss the o-hm vif
20:48:09 rm_work: too vendor specific IMO
20:48:13 i mean
20:48:14 cgoncalves Sure thing
20:48:18 that's what we're talking about
20:48:21 we need to put it somewhere
20:48:51 so re: https://storyboard.openstack.org/#!/story/1549297
20:49:10 cgoncalves I put some comments on the story/patch. I'm still struggling with the concept, so others should jump in.
20:49:14 i'm not sure it has got other cores' attention
20:49:29 cgoncalves I did note that we should use privsep instead of rootwrap
20:49:55 johnsom: neutron does not yet use os-vif but it is expected at some point in time they will
20:50:03 cgoncalves +1 I really would like others to join the conversation
20:50:19 johnsom: neutron would not be needed on the same host that runs the health monitor
20:50:23 i'm not quite sure i understand either
20:50:34 yeah, for me it would be helpful to see a proof-of-concept
20:50:59 johnsom: neutron would not even be required in the deployment if the driver is 'flat'
20:51:25 So, help me with that. My understanding is that os-vif is glue code that hooks the neutron host networking to the nova instances. Basically all of the OVS/bridge/libvirt fun stuff
20:51:29 xgerman_: that is what I have been working on. I uploaded a very, very WIP patch to gerrit
20:52:02 ok, will take a look
20:52:17 johnsom: kind of. the documentation makes it sound coupled to nova and neutron but it is not
20:53:16 johnsom: all os-vif does is make sure the network backend is configured as desired without being too explicit
20:54:01 i.e. os-vif users would not need to know how to create switches and ports, and attach them
20:54:05 My take (correct me) is this is an alternative to how an operator sets up the lb-mgmt-net. OSA does it with provider networks and VLANs, devstack does it with https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L316, some deployments just use routes.
20:54:59 johnsom: os-vif would be a single place for that
20:55:05 Basically right now we leave it as an activity for the operator to set up.
20:55:42 operator/installer
20:56:05 Right, same as neutron
20:56:22 the network backend could be ovs, linux bridge, ovn, opendaylight, etc. there are too many to consider
20:57:02 It is super flexible, so hard to code up automation for it as there are a bunch of options. IMO
20:57:19 Right, or even none of the above
20:58:40 I guess what might help me is understanding what the octavia.conf would look like for this os-vif option.
20:58:46 all we would need would be an os-vif plugin, which is expected to exist for major net drivers
20:59:17 johnsom: I think you can see it in my WIP patch
20:59:22 Darn, just about out of time. I can chat after our meeting window though if you have time.
20:59:45 unfortunately tonight I cannot, sorry
21:00:06 Ok, we should set up a special IRC session to discuss more maybe
21:00:11 #endmeeting
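To illustrate johnsom's question about what octavia.conf might look like for the os-vif proposal: the sketch below is purely hypothetical. None of these option names come from the actual WIP patch; they only indicate the kind of settings such a driver would plausibly need (which os-vif plugin to use and which lb-mgmt-net to plug the health-manager port into):

    [health_manager]
    # Hypothetical option names, not the real WIP patch settings.
    # os-vif plugin that plugs the o-hm port on the controller host,
    # e.g. ovs or linux_bridge.
    hm_interface_vif_plugin = ovs
    # Existing lb-mgmt-net network/subnet on which to create the controller port.
    hm_network_id = <lb-mgmt-net UUID>
    hm_subnet_id = <lb-mgmt-subnet UUID>

The point of the proposal, as described above, is that the plugging itself (creating the switch or bridge port and attaching it) would be delegated to an os-vif plugin rather than scripted separately by each deployment tool.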