20:00:29 #startmeeting Octavia
20:00:30 Log: http://eavesdrop.openstack.org/meetings/octavia__/2017/octavia__.2017-12-20-20.00.log.html
20:00:31 Meeting started Wed Dec 20 20:00:29 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:34 The meeting name has been set to 'octavia'
20:00:35 o/
20:00:40 Try that again without the type-o....
20:00:40 o/
20:01:00 Hi folks
20:01:00 hi
20:01:16 #topic Announcements
20:01:28 hi
20:01:46 I plan to cancel the weekly IRC meeting next week. We will resume 1/3/18.
20:01:55 +1
20:01:59 Many folks are taking some time off at the end of the year.
20:02:08 Yep
20:02:16 I will send out an e-mail after the meeting.
20:02:41 Also news, freenode (the IRC host for OpenStack) had a spam issue over the weekend
20:02:52 Lol yes
20:02:54 Such spam
20:02:59 There were offensive comments posted to rooms and they were direct messaging folks.
20:03:33 Because of that you now need to be registered with freenode and logged in to post in some channels and to direct message folks.
20:03:58 I know some folks didn't get the notification of the change and were having trouble with IRC.
20:04:06 o/
20:04:18 Let me know if you have folks having trouble and I can help get them setup on freenode.
20:04:49 There was a summary of the "1 year release cycle" discussion posted to the mailing list:
20:04:56 #link http://lists.openstack.org/pipermail/openstack-dev/2017-December/125688.html
20:05:20 At this point it seems like an ongoing discussion, but thought I would keep you posted.
20:05:51 thanks for that url.
20:05:56 my feeling it's a done deal
20:06:19 Final announcement I have this week, we had a video conference hosted by RedHat to talk about the provider drivers. It was announced on the mailing list. There is a short summary of topics here:
20:06:25 #link https://etherpad.openstack.org/p/octavia-providers-queens
20:07:03 xgerman_ Yeah, I don't know. There is another 30+ message chain that has started up, so...
20:07:04 thanks for all attendees. i think we have a very good discussion.
20:07:16 +1
20:07:24 Any other announcements today?
20:07:26 s/have/had
20:07:33 it's getting late for me. sorry :)
20:07:46 #topic Brief progress reports / bugs needing review
20:07:59 I have been focusing on Active/Active patches this week.
20:08:14 I have a data model patch up for review and have started on the amphora driver patch.
20:08:35 Mostly this is a breakdown of one of the older patches that was pretty large and needed some love.
20:08:59 Plus many reviews in the Active/Active space.
20:09:10 #link https://review.openstack.org/#/c/529191/
20:09:15 I also reviewed the QoS again today. Looks pretty good to me.
20:09:15 #link https://review.openstack.org/#/c/528850/
20:09:39 thanks nmagnezi
20:09:59 Any other progress updates?
20:10:25 octavia client for qos is ready
20:10:26 https://review.openstack.org/#/c/526217/
20:10:54 Oh, cool. I will check out the update on that
20:11:06 great
20:11:18 It was good last time I checked though you couldn't delete the policy, which I expect is what you fixed.
20:11:28 right
20:11:38 #link https://review.openstack.org/#/c/526217/
20:11:44 ^^^ get that in the minutes.
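A rough sketch of the QoS support being reviewed above, assuming the load balancer field is named vip_qos_policy_id as in the server patch, and that a keystone token and a Neutron QoS policy already exist; the endpoint and placeholder values are illustrative only, since the patches were still in review:

    import requests

    OCTAVIA = "http://127.0.0.1:9876"   # assumed Octavia API endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>",
               "Content-Type": "application/json"}

    # Create a load balancer with a Neutron QoS policy applied to its VIP port.
    body = {"loadbalancer": {"name": "lb-with-qos",
                             "vip_subnet_id": "<subnet-uuid>",
                             "vip_qos_policy_id": "<qos-policy-uuid>"}}
    lb = requests.post(OCTAVIA + "/v2.0/lbaas/loadbalancers",
                       json=body, headers=HEADERS).json()["loadbalancer"]

    # Removing the policy again (the "delete the policy" case mentioned above)
    # would be an update that nulls the field out.
    requests.put(OCTAVIA + "/v2.0/lbaas/loadbalancers/" + lb["id"],
                 json={"loadbalancer": {"vip_qos_policy_id": None}},
                 headers=HEADERS)

The octavia client patch linked above exposes the same field through the openstack loadbalancer commands.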
20:12:02 #topic Heat updates for Octavia
20:12:09 #link https://bugs.launchpad.net/heat/+bug/1737567
20:12:10 Launchpad bug 1737567 in OpenStack Heat "Direct support for Octavia LBaaS API" [Medium,New] - Assigned to Rabi Mishra (rabi)
20:12:27 There is a bug open to update Heat for the new Octavia endpoint.
20:13:12 The author is hoping to drum up support with "affects me" votes on the bug. So if you have an interest in Heat getting updated please voice your interest on the bug.
20:14:01 #topic Open Discussion
20:14:27 I didn't have any more agenda items as I wasn't sure what the turnout was going to be. Are there other topics we would like to continue?
20:14:34 we are planning to add the octavia v2 api support in fog-openstack gem
20:14:57 Good question.
20:15:33 I don't know anyone working on that currently
20:15:38 an issue is created on their github.
20:15:46 kpalan1 Are you able to help with that?
20:16:05 yes i will working on it
20:16:07 that was the way I read it :P
20:16:12 "we are planning to add" :)
20:16:23 yes, we would like to contribute that
20:16:25 also: +A'd the QoS patch
20:16:30 Oh, oppps, got distracted looking it up
20:16:31 sweet
20:16:53 Cool, it looks like it currently only has lbaasv1 support.... Sad face
20:17:40 Oh, maybe not, I see it in the "requests" just not the models
20:17:49 waiting for active-active work to complete, we will be starting soon to add octavia v2 api support there, we need it internally for one of our chef based tools
20:17:57 kpalan1 Please feel free to ping us if you run into questions, etc.
20:18:12 sure, thanks
20:18:14 There're 2 issues/proposals that haven't been resolved in prior meetings: (1) the independent member API (no pool id)
20:18:35 (2) bind amphora agent patch
20:19:16 1) I think we just need to vote on whether we think it will ever be useful to have shared member objects
20:19:17 rm_work, ^^ started to sleep in normal hours.. so we did not discuss this again :<
20:19:29 Right. Since we do have a good group here today, let's start at the top
20:19:42 nmagnezi: i told people that going to a normal schedule would not be a *good thing* for work :/
20:19:55 but no one listens
20:20:04 Hahaha
20:20:35 So, independent members...
20:20:48 bar_ Do you want to give a quick summary again?
20:21:20 k, currently we access members in the octavia api only by specifying both pool_id and member_id
20:21:38 member_id is unique, so why not ADD another API, to access by member_id alone
20:21:50 that's the proposal
20:22:01 yeah I think this is full circle, right?
20:22:17 doing /v2.0/lbaas/member/
20:22:32 because ... we don't really need to do shared members ever IMO
20:22:47 I would agree with this idea, don't need to know a pool_id to look up a member
20:23:22 so we only want to do GET and LIST ?
20:23:35 (read only)
20:23:38 hmm
20:23:47 I mean, I actually don't know why we couldn't do POST
20:23:48 I think there was a concern raised before about the relationship between pool and member today, that deleting a pool currently deletes its members too. xgerman_ is that right?
20:23:52 xgerman_, why not update as well?
20:24:02 if you pass a pool_id
20:24:13 yes, we cascade the pool deletion to members
20:24:26 listeners are a sub-object on a LB, and those aren't *under* LB
20:24:31 but if we only provide read only I see that as less of a concern
20:24:35 and they can be cascade deleted as well
20:24:57 I'm not sure what the cascade deletion has to do with it
20:25:06 I will say that we would have to maintain the current API paths, etc. for backward compatibility. Otherwise we are talking about LBaaSv3, which I really don't want to consider right now due to all of the other work going on.
20:25:19 lol yes
20:25:33 so we're talking about just adding another resource
20:25:38 like, member_standalone
20:25:42 at /member/
20:26:02 technically, /members/
20:26:15 err, yeah i always forget if our resource names are plural in the API >_>
20:26:46 since we could spend our time doing other things - do we have a use case why we need that?
20:26:56 it's ... easier to access? <_<
20:26:57 dunno
20:27:04 You would have to make pool_id mandatory on the member create calls
20:27:06 i'm just saying i'd vote to allow that
20:27:16 not that we should prioritize it
20:27:18 xgerman_ Very good question
20:27:34 if someone wants to spend their time doing something though, I can't stop them
20:27:48 xgerman_, I don't see much use, if members are not to be shared.
20:27:50 that's the point of open source, and why companies hire us anyway -- to set priorities
20:28:09 It would be easier to access, that's it.
20:28:15 yep
20:28:17 Agreed, but it would be another distraction from getting our major goals for the release done (act/act, drivers, flavor)
20:28:33 yeah, even if we don't write it we will need to review it
20:28:39 so we can set it as "wish list" or something
20:28:45 yep
20:28:59 i'm just saying, if i saw code pop up that does this, I'd review it and be willing to +2 if it's good
20:29:08 rm_work, +1
20:29:10 i think the point of the question was just "is this OK?"
20:29:48 Approved then?
20:29:49 Does this reach the spec bar or just an RFE?
20:29:58 i *think* we kinda all agree that read only direct access to members is okay, it's just not a prio
20:30:10 RFE — did we ever figure out versioning?
20:30:37 xgerman_ Like API micro-versioning?
20:30:52 like a client knowing that /members is available
20:31:09 (without testing every new path)
20:32:14 xgerman_ API discovery is still up in the air last time I checked the api-wg. We would need to change the client to support this.
20:32:42 ok, so we should tread lightly on API extensions
20:32:52 just my 2cts
20:32:55 #link https://specs.openstack.org/openstack/api-wg/guidelines/discoverability.html
20:32:59 Big fat TODO still
20:33:53 >_>
20:34:05 we so we DO have a version bit
20:34:07 yeah, I have seen too many clients relying on the user to define what's possible — hate to see a --use-member-direct flag
20:34:25 but i imagine the client could try and fall back
20:34:48 We do have a version that would increment for this enhancement
20:34:51 rm_work, if a pool id was not provided, how should the client fall back?
20:35:00 nmagnezi: ah good point
20:35:12 :)
20:35:13 so if no pool-id is provided and the new endpoint isn't there.... <_<
20:35:22 then fail
20:35:26 I guess
20:35:29 Yep
20:36:02 ok, so we increment our version - client checks that and acts accordingly
20:36:07 …
20:36:59 i think that API versioning is a broader topic. for example we can say similar things about the upcoming QoS support
20:37:09 Right.
20:37:10 +1
20:37:22 Yep
20:37:32 We have overlooked that recently
20:37:43 +1
20:37:49 * johnsom slaps his own wrist
20:38:02 johnsom, now we have two incentives not to :)
20:39:10 we should also add that to the API docs so someone knows which version has which API
20:39:13 calls
20:39:27 I will take an action to go update the version starting with QoS.
20:39:55 xgerman_, do we have an API call to fetch the version number?
20:39:57 +1 (I can see us also lumping all changes for a cycle together)
20:40:05 nmagnezi Yes
20:40:14 GET /
20:40:57 #link https://developer.openstack.org/api-ref/load-balancer/#api-discovery
20:41:23 xgerman_: agreed, for one cycle it's probably fine to lump stuff
20:41:37 and for those of us on master, "missing" features is less of a problem :P
20:41:50 but with one year cycles looming I would increment more often
20:42:10 Though I wonder if we should not have a numerical version here as well. Will have to go back and double check the api-wg
20:42:13 yeah we should just buckle down and be good about doing it i guess
20:42:22 +1
20:42:25 +1
20:42:37 You guys should fire your PTL
20:42:42 Grin
20:42:46 lol nope
20:42:49 4 more years! :P
20:42:54 Ha
20:43:02 +1 Adam
20:43:04 Oye
20:43:16 Anyway, let's summarize here....
20:43:17 Yeah we could do version increments there too probably
20:43:46 cause just having "last updated date" is kinda weird
20:43:58 id=v2.0 is also weird
20:44:14 RFE it. Ok to add read-only paths, remember to bump the api minor version as a start
20:44:15 shouldn't we have like... ['major', 'minor'] at a minimum?
20:44:18 right?
20:44:39 rm_work Yeah, I am wondering too. I know this was a copy of neutron-lbaas, but that doesn't mean it's right....
20:44:55 yeah certainly read-only is easier, but i still don't see why it couldn't be a full CRUD resource
20:45:04 yep, but major is v2
20:45:20 johnsom: i wonder what we break if we add major/minor and just point it to the ID
20:45:33 or maybe actually
20:45:49 major=2 minor=0
20:45:51 would be "now"
20:45:58 indeed
20:45:59 Well, if you ask too many questions here your answer will come to micro-versions....
20:46:03 heh
20:46:14 k i'd probably be in favor of major/minor/micro
20:46:15 or something
20:46:20 what's the third one
20:46:31 nah, we just increase minor sequentially
20:46:38 https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery
20:46:41 #link https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery
20:46:48 kk
20:46:50 so we do THAT
20:46:59 that works
20:47:01 it has id
20:47:04 +1 for microversion
20:47:07 but also the real stuff
20:47:11 So... We are compliant today, just not supporting microversions yet
20:47:12 wooo standards
20:47:21 yep, +1 implement microversion
20:47:35 probably we should do that inside the cycle (it looks trivial)
20:47:36 id is the same as we have now
20:47:48 ok, microversions it is
20:47:55 so we have a first microversion for QoS and whatever else
20:48:08 Oye, ok... Please read the whole doc before deciding we want to jump on that.
20:48:10 and then we can try to be good about incrementing on API changes from now on
20:48:15 ok will read
20:48:17 It can also make the client a bit of hell
20:48:31 should we do an official vote in January when we're all back?
20:48:47 Yes, let's hold off on the microversion stuffs
20:49:17 can't we just do server and ignore client?
20:49:26 bar_ Did you get an answer out of that on the member API?
20:49:31 i mean ... the POINT is for the client, isn't it?
20:49:38 rm_work +1
20:49:38 hmm, why only read-only path?
20:49:47 yeah i'm not sure i follow read-only either
20:49:50 I don't like the ACCEPT Header stuff
20:49:52 i would just do it as a full thing
20:49:58 and we'll review
20:50:05 i think it'll be fine
20:50:12 when people see that it works
20:50:21 again though, some other stuff is probably higher priority
20:50:31 like finishing our tempest stuff (did you say you were going to look at that?)
20:50:35 Ok, maybe split the patches just in case someone comes up with a reason why the updates are a bad idea
20:50:38 I am.
20:50:42 kk
20:50:47 It's... neglected...
20:51:05 Oh yes, tempest is important.
20:51:14 I need to re-write some patches.
20:51:20 It is a community goal. I have updated our status to in-progress.
20:51:32 can we deprecate octavia/tests/tempest?
20:51:40 *deprecate=delete
20:51:48 Yes, it goes away with the tempest plugin patch
20:52:06 Though we need to time it with the overall tempest plugin switch over
20:52:28 Is it for Queens?
20:52:49 We need to have a working tempest plugin for queens, yes
20:53:10 #link https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
20:53:27 ok. API proposal is approved? (though not prioritized)
20:54:01 Well, technically you would create the RFE story and we would approve it there, but essentially yes.
20:54:09 I see.
20:54:17 After the tempest plugin is done.... GRIN
20:54:19 just kidding
20:54:23 :-{
20:54:25 :-)
20:54:33 I'm working on it
20:54:38 THANK YOU
20:54:39 bind amphora agent?
20:55:02 Ok, six minutes, bind amphora agent. This was about a better way to do it, right?
20:55:22 rm_work, nmagnezi ?
20:55:41 Last I remember rm_work was going to comment/help with the "better" way
20:55:57 yeah, but nmagnezi and I have reservations....
20:56:12 better way should just be to finally implement the amphora-api for update-config
20:56:20 basically bar's current implementation is to create the neutron port before we call "nova boot" so we'll know the IP in advance and configure amphora-agent.conf
20:56:30 and our initial connection to the amp can do a config update to set the right listening IP
20:56:49 yeah, and that's untenable for some types of networks
20:56:58 rm_work does not like that implementation and prefers an agent restart API call to update the file and reload config
20:57:10 nmagnezi We didn't do that because it doesn't work in some deployments if I remember
20:57:13 rm_work, what types of networks? :)
20:57:22 can we have different flows for different types of networks? Is it... done?
20:57:33 the kind where you can't choose the network that gets plugged to a new VM :)
20:57:39 johnsom, correct. rm_work's deployment does not support it for example.
20:58:01 rm_work, so how does nova know? :) just wondering..
20:58:09 nova figures it out internally
20:58:20 based on the HV that it schedules, it also schedules a network
20:58:25 I think originally nova-networks had an issue with it too. Like you couldn't boot without at least one nic
20:58:54 johnsom: true
20:58:55 rm_work, is that a thing in nova? or is it an internal solution you guys have?
20:59:00 nova networks is dead now BTW
20:59:00 I know it's not just me, too -- i talked to at least one other deployer that had the same issue
20:59:24 in our case it is a custom scheduler in nova, yes
20:59:33 johnsom, good riddance (nova network)
20:59:46 we also talked about taking it from DHCP/cloud init/?
20:59:55 yeah that might be possible
20:59:57 I will say, we still need the amp config update API. That is still a super valid need.
21:00:10 but we shouldn't comingle the two
21:00:12 yes, and my point was that we should just take this opportunity to do it and use it
21:00:21 johnsom, for health manager list? (trying to recall)
21:00:25 yes
21:00:29 rm_work I know that is what was originally discussed
21:00:49 Ugh, meeting time is up....
21:00:54 #endmeeting
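A rough sketch of the member lookup change agreed above: the pool-scoped path is the existing API, while the standalone path is only the proposed RFE and does not exist yet; the endpoint and placeholder values are assumptions for illustration.

    import requests

    OCTAVIA = "http://127.0.0.1:9876"      # assumed Octavia v2 API endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>"}
    pool_id = "<pool-uuid>"
    member_id = "<member-uuid>"

    # Today: a member is only reachable underneath its pool.
    current = requests.get(
        OCTAVIA + "/v2.0/lbaas/pools/" + pool_id + "/members/" + member_id,
        headers=HEADERS)

    # Proposed (read-only to start): look the member up by its unique id alone.
    proposed = requests.get(
        OCTAVIA + "/v2.0/lbaas/members/" + member_id,
        headers=HEADERS)

Create and delete would stay pool-scoped (pool_id remains mandatory on member create), which is why the group leaned toward read-only paths first.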
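And a sketch of the version-discovery side of that discussion: GET / on the endpoint already returns a version document per the api-discovery reference linked above, and the api-wg microversion guideline would add per-version fields a client could check instead of probing new paths. Field names other than "id" are assumptions here, not confirmed API output.

    import requests

    OCTAVIA = "http://127.0.0.1:9876"      # assumed Octavia API endpoint

    # Roughly what discovery returns today (shape assumed):
    #   {"versions": [{"id": "v2.0", "status": "CURRENT", "updated": "..."}]}
    versions = requests.get(OCTAVIA + "/").json().get("versions", [])

    # Under the api-wg microversion guideline, each entry would also carry
    # "version"/"min_version" strings, letting a client gate features such as
    # the standalone member path instead of needing a --use-member-direct flag.
    for v in versions:
        print(v.get("id"), v.get("status"), v.get("version", "n/a"))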