20:01:15 #startmeeting Octavia
20:01:16 Meeting started Wed Nov 22 20:01:15 2017 UTC and is due to finish in 60 minutes. The chair is xgerman_. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:19 The meeting name has been set to 'octavia'
20:01:29 #topic Announcements
20:01:47 so johnsom won’t attend today, so it’s all me ;-)
20:02:18 and me :D
20:02:37 hi
20:02:39 :-)
20:02:50 hi
20:03:05 #chair nmagnezi
20:03:06 Current chairs: nmagnezi xgerman_
20:03:36 xgerman_, ^^ i hope that chair is not IKEA-made
20:03:41 :)
20:03:46 ok, I don’t really have any announcements other than that most of us will be out until Monday
20:04:18 #topic Brief progress reports / bugs needing review
20:05:27 johnsom made some good progress on #link https://review.openstack.org/#/c/519509/ - please review
20:06:03 we also need reviews for #link https://review.openstack.org/#/c/509957/
20:06:10 #link https://review.openstack.org/#/c/519509/
20:06:37 #link https://review.openstack.org/#/c/458308/
20:06:56 for the provider spec, the auth token approach sounded ok to me
20:07:03 QoS is pretty close, but I also discovered that we don’t enable the qos extension in our devstack
20:07:26 ok thanks jniesz
20:07:31 I've created a story about QoS progress: https://storyboard.openstack.org/#!/story/2001310
20:07:44 xgerman_, yeah, I also saw such a comment from bar_. any reason not to include that addition in the same patch?
20:07:50 I made the devstack deployment work earlier today
20:08:21 we can add it to that patch or make that patch dependent on the devstack update
20:09:24 ack
20:09:34 then we probably also need a tempest test…
20:09:45 any updates on our test efforts?
20:10:04 Alex_Staf, ^ ?
20:10:37 we have a scenario gate for the tempest plugin repo now, don't we?
20:11:08 yeah, I think so, too ;-)
20:11:38 #topic Provider driver spec
20:12:02 hi guys
20:12:07 hi
20:12:20 I will make another topic for you after that ;-)
20:12:32 no news from my side. I bugged u enough on the plugin
20:12:41 ok
20:12:44 I did submit some requests, though
20:12:52 regarding HA documentation etc
20:13:14 ok, it’s definitely on our radar
20:13:21 cool
20:13:24 another thing
20:14:06 I think we need some API call/tool that requests the status of the octavia processes - like neutron agent-list, but for octavia
20:15:06 yes, we talked about that in Denver at the PTG
20:15:08 yeah, where the driver would be required to give the status
20:15:10 I think it will help in the automated tests as well - to get the status after a failover, for example
20:15:15 and maybe an error field to list an error
20:15:24 xgerman_, really? I missed that I guess =\
20:15:43 it was on one of the provider driver days
20:15:57 we should make sure the spec includes that call
20:16:37 that is something we should require drivers to implement
20:16:40 xgerman_, not sure, but do o-hm and o-hk even use RPC? if not, I wonder how we can achieve an "agent-list" for octavia services
20:17:15 btw I think we should be discussing the provider spec at the moment.. :P
20:17:38 we are doing it - an agent list is something all providers need
20:17:56 but we call it more of a “component” list
20:18:11 ack
20:18:24 nmagnezi, we can start having the octavia components send heartbeats
20:18:59 xgerman_, yup.
20:19:02 or use one of the new etcd integrations…
20:19:28 xgerman_, another question. I discussed the octavia operation with nir, and we know the api sends requests to the workers (HA setup with 3 controllers - 3x api, worker, HM, housekeeper): it puts the task on the messaging queue and one of the workers picks it up and executes it, right?
20:19:31 do we have a story for that?
20:19:37 if not, maybe Alex_Staf can submit one
20:20:06 yes
20:20:09 nmagnezi, octavia components sending heartbeats?
20:20:29 etcd seems to make sense for status / health
20:20:29 Alex_Staf, about the octavia components list
20:20:38 xgerman_, how come the other 2 workers don't execute the request again?
20:20:39 jniesz +1
20:20:53 we pop the request from the queue
20:21:06 and if the worker with the request dies you are SoL
20:21:10 so when it is received it is removed from the queue?
20:21:16 yes
20:21:19 cool
20:21:22 what is SoL ?
20:21:42 nmagnezi, lets discuss that tomorrow - the story
20:21:42 the idea is that one day we use task board and taskflow will manage persisting tasks
20:21:58 out of luck
20:22:20 O_o
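[A minimal sketch of the queue behavior described above, assuming the usual oslo.messaging pattern; the topic, method, and class names here are invented for illustration and are not Octavia's actual code:]

    # All workers consume the same topic, so each message cast by the API
    # is delivered to exactly one worker and removed from the queue on
    # delivery. If that worker dies mid-task, the task is lost (the "SoL"
    # case above); persisting flows via taskflow would address that.
    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_rpc_transport(cfg.CONF)
    target = messaging.Target(topic='octavia-worker')  # hypothetical topic

    # API side: fire-and-forget cast onto the queue.
    client = messaging.RPCClient(transport, target)
    client.cast({}, 'create_load_balancer', load_balancer_id='some-lb-id')

    # Worker side: every worker runs a server on the same topic; the
    # broker hands each queued message to only one of them.
    class WorkerEndpoint(object):
        def create_load_balancer(self, context, load_balancer_id):
            pass  # run the create flow here

    server = messaging.get_rpc_server(transport, target, [WorkerEndpoint()])
    server.start()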
20:23:01 ok, after we went on a tangent - is there anything longstaf_ needs to move ahead with the spec
20:23:06 ?
20:23:16 to limit provider drivers to only updating the status of their own objects (amphora, loadbalancer, listener...) via the auth token
20:23:25 for the provider, do we need to add something like an "owner" field to the tables?
20:23:42 we have a provider field on the LB
20:23:57 that’s what i thought we would be using
20:24:56 ok, I don't remember seeing that field added in the spec
20:24:57 for lb
20:25:13 i guess it is implied through the flavor
20:25:13 it has been there forever
20:25:29 hi guys, does it make sense to support http communication between the worker and the amphora?
20:25:36 yes, nm
20:25:44 for the auth token: Octavia would generate a token and pass it to the provider driver when loading the driver, and drivers would include the token on calls to update Octavia
20:26:09 does this make sense?
20:26:30 kong, kindly wait for the open part :)
20:26:58 ooh, sorry, i didn't know i was in a 'meeting room'...
20:27:04 longstaf_ I see two authentications
20:27:16 1) the token to update status/stats
20:27:29 2) user/pwd to read from the Octavia API if needed
20:28:04 for (1), since we are using a library, we can leave that as an implementation detail
20:28:22 there might be multiple “drivers” with different auth methods
20:29:27 I am not sure what volume those updates will have; if we get a ton every second we likely need to do something different than when they are more sporadic
20:29:47 I think the use case is for the driver to update status/stats of only the objects that it owns
20:30:08 and the library having protection, so one driver doesn't update the status of objects from another
20:30:15 indeed
20:30:57 shouldn't that be a single auth method, since it would be part of the library?
20:31:37 I’d like the library to only do the updates… for anything else a driver should use the Octavia API
20:31:45 yeah, agreed
20:31:56 strictly for status / stats updates
20:32:04 anything else should leverage the existing API
20:32:09 +1
20:32:27 agreed
20:33:23 anyhow, I’d like to keep auth/transport driver-specific so we can swap that out if we can’t handle the volume
20:33:38 or somebody has a weird network topology
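[A hypothetical sketch of the status/stats update library agreed on above. Every name below is invented for illustration; the spec under review (https://review.openstack.org/#/c/509957/) remains the source of truth:]

    # Hypothetical sketch, not from the spec: Octavia issues a token per
    # driver at load time, and updates carrying that token may only touch
    # objects whose provider field names that driver. With the check
    # living inside the library, a single auth method suffices for (1);
    # anything beyond status/stats goes through the regular Octavia API.
    import uuid

    _TOKENS = {}  # token -> driver name, filled at driver load time


    class UnauthorizedUpdate(Exception):
        pass


    def register_driver(driver_name):
        token = uuid.uuid4().hex
        _TOKENS[token] = driver_name
        return token  # handed to the driver when it is loaded


    def update_loadbalancer_status(token, lb, operating_status):
        driver_name = _TOKENS.get(token)
        # The existing provider field on the LB acts as the "owner" field
        # asked about above, so no new column is needed.
        if driver_name is None or lb.provider != driver_name:
            raise UnauthorizedUpdate('drivers may only update their own objects')
        lb.operating_status = operating_status  # persist via Octavia internals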
20:35:46 ok, moving on
20:35:52 #topic Open Discussion
20:37:03 kong, ^
20:37:14 hi
20:37:20 just wanna ask, does it make sense to support http communication between the worker and the amphora? we don't want to use https in our ci environment, it will increase our complexity
20:37:54 to the best of my knowledge it will not work out of the box
20:38:00 yeah
20:38:02 i looked at it once
20:38:10 ok, so we generate the certs ourselves and put the amphora-id in them
20:38:33 then we check if the amphora-id in the server cert matches the one we are supposed to talk to
[a sketch of this check appears after the end of the log]
20:38:52 kong, can't you have the same workflow we have in devstack?
20:38:56 which means if we go to http that would all need to be refactored
20:39:05 It's CI... it might fit.
20:39:18 +1
20:39:20 xgerman_, +1
20:39:43 xgerman_, that's exactly what I saw when I looked at it in the context of tripleO support
20:40:16 i want to know if adding such an option makes sense
20:40:18 to the upstream
20:40:38 we don't want to maintain private code :-)
20:40:39 can you describe the use case?
20:40:58 yeah, I can’t see anyone wanting less security ;-)
20:41:01 why would anyone who uses Octavia in production prefer http over https?
20:41:20 shouldn't CI match what you would run in prod?
20:41:27 +1
20:41:50 +1
20:41:55 ok, i know. it doesn't make sense to add this kind of thing just for ci
20:42:07 yeah, in our prod, we use https
20:42:23 sorry for that guys, but I need to drop
20:42:24 yeah, since the SSL stuff has a lot of custom things you likely need to test that
20:42:24 o/
20:42:26 it's just because of the specialty of our ci
20:43:09 i mean, our special case
20:43:32 well, I think we don’t see a need for it upstream at the moment, but we won’t stop you
20:43:43 we will try to figure out another way for that, thanks
20:43:51 ok
20:44:21 anything else to discuss?
20:45:04 #endmeeting
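[For reference, the certificate check described at 20:38:10-20:38:33, pictured as a rough sketch with the cryptography library. It assumes the amphora id is carried in the certificate subject CN; Octavia's actual implementation may place it differently:]

    # Rough sketch: the controller minted the amphora's server cert itself
    # with the amphora id embedded, so after the TLS handshake it can
    # compare that id against the amphora it intended to reach. CN as the
    # carrier field is an assumption for illustration.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.x509.oid import NameOID


    def verify_amphora_cert(pem_cert_bytes, expected_amphora_id):
        cert = x509.load_pem_x509_certificate(pem_cert_bytes,
                                              default_backend())
        cns = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
        cn = cns[0].value if cns else None
        if cn != expected_amphora_id:
            raise ValueError('cert belongs to %s, expected amphora %s'
                             % (cn, expected_amphora_id))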