20:00:04 <xgerman> #startmeeting octavia
20:00:04 <openstack> Meeting started Wed Sep  2 20:00:04 2015 UTC and is due to finish in 60 minutes.  The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:08 <openstack> The meeting name has been set to 'octavia'
20:00:14 <xgerman> #chair blogan
20:00:15 <openstack> Current chairs: blogan xgerman
20:00:16 <johnsom> o/
20:00:20 <bana_k> o/
20:00:21 <ajmiller> o/
20:00:23 <madhu_ak> hi
20:00:28 <abdelwas> hi
20:00:41 <blallau> hi all
20:00:46 <xgerman> hi
20:00:49 <xgerman> #topic Announcements
20:00:58 <xgerman> johnsom you are up for GSLB
20:01:06 <ptoohill> ./
20:01:19 <sbalukoff> Howdy, howdy!
20:01:20 <johnsom> Mostly looking at stub code checked into gerrit
20:01:26 <minwang2> o/
20:01:30 <xgerman> exciting stuff
20:01:32 <Aish> o/
20:01:37 <johnsom> polaris has changed the repo around and split off the health manager code a bit better.
20:01:43 <evgenyf> Hi
20:01:50 <bharathm> o/
20:01:51 <TrevorV> o/
20:01:54 <blogan> hello!
20:01:59 <johnsom> That is all I have
20:02:14 <xgerman> any other announcements?
20:02:43 <xgerman> #topic Liberty deadline stuff
20:03:06 <xgerman> #link https://etherpad.openstack.org/p/YVR-neutron-octavia
20:03:42 <sbalukoff> So, L7 isn't going to make it in. But it'll probably land in a few weeks.
20:03:58 <xgerman> :-(
20:04:05 <sbalukoff> (Assuming people aren't completely consumed by bugfixes at that time.)
20:04:20 <dougwig> sbalukoff: what's not in gerrit yet?
20:04:22 * blogan is consumed by many things
20:04:37 <sbalukoff> dougwig: Octavia stuff is the biggest thing. Haven't had time to do it yet.
20:04:48 <dougwig> sbalukoff: ok.
20:04:54 <dougwig> where are we with octavia?
20:05:05 <blogan> octavia L7 is not even started
20:05:06 <evgenyf> Tempest is also missing, right?
20:05:09 <sbalukoff> And the updates to the reference driver... well, I think Mark closed my patch due to lack of activity. But even after I re-open it, it'll need to be reworked given the pool sharing stuff.
20:05:27 <sbalukoff> evgenyf: Correct.
20:05:27 <blogan> evgenyf: tempest tests would be needed as well
20:05:44 <xgerman> well, when is the hard drop date for L7?
20:05:45 <johnsom> minwang2 and I are still working on the gate.  The slowness of booting a vm inside of a vm without vt-x emulation is causing timeout pain
20:06:08 <TrevorV> vm-inception
20:06:15 <madhu_ak> evgenyf: yes, the tempest is missing, thought I should wait as the L7 has partial functionality
20:06:22 <dougwig> we're past the hard drop date. we can argue for an FFE if it's really close, but i'm not hearing the sounds of really close.
20:06:22 <sbalukoff> xgerman: Er... last Monday? ;)  Seriously, though... if you want us to keep pushing on it, I would *love* to get it in.
20:06:46 <blogan> dougwig: i dont think it is realistic
20:06:58 <sbalukoff> blogan: +1
20:07:04 <xgerman> just looked at #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
20:07:07 <evgenyf> blogan: right, and drivers should be fixed with pool sharing
20:07:11 <xgerman> we are running out of time
20:07:18 <sbalukoff> yep.
20:07:22 <dougwig> sbalukoff: if you could get it merged by the 11th, i think we could push for an exception. do you think that's possible?  it sounds like not, given octavia stuff.
20:07:31 <blogan> evgenyf: correct, which sucks but its a necessary evil
20:08:20 <sbalukoff> dougwig: I can try. I'll be spending every waking hour working on it... but given what I've learned doing the pool sharing stuff, and the state of evgeny's code (which is good), I think we might be able to do it.
20:08:34 <sbalukoff> Octavia, when it comes to doing haproxy configs, isn't all that different from the reference implementation.
20:08:35 <blogan> sbalukoff: even with octavia not having the pool sharing?
20:08:38 <dougwig> sbalukoff: ok, i'll keep the option open, then.
20:08:48 <blogan> sbalukoff: yes but API changes will need to be made for that as well
20:08:48 <sbalukoff> And the L7 stuff doesn't really affect any of the amphora health or management stuff.
20:09:05 <sbalukoff> blogan: That is correct.
20:09:06 <blogan> sbalukoff: unless we just duplicate the pools in octavia
20:09:17 <sbalukoff> blogan: Let's not do that.
20:09:29 <blogan> sbalukoff: and sharing pools between listeners is going to require duplication bc its all different configs
20:09:30 <xgerman> #decision Keep working on L7 until 9/11
20:09:40 <blogan> for haproxy
20:09:45 <xgerman> blogan +1
20:09:47 <sbalukoff> blogan: It's probably not as bad as you think.
20:09:51 <sbalukoff> :)
20:09:53 <blogan> sbalukoff: the sky is falling!!!
20:09:56 <TrevorV> too soon
20:10:06 <sbalukoff> "pools" are database constructs only until they're built into a config.
20:10:11 <sbalukoff> Haha
20:10:15 <sbalukoff> In any case though...
20:10:27 <sbalukoff> If we're going to succeed in this, we'll need y'all to review like crazy.
20:10:33 <sbalukoff> Starting with this: https://review.openstack.org/#/c/218560/
20:10:56 <sbalukoff> And then this: https://review.openstack.org/#/c/148232/
20:11:10 <xgerman> ok, will do
20:11:14 <xgerman> back to Octavia
20:11:19 <sbalukoff> Ok!
20:11:26 <blogan> thats one of the problems, so many things to review, i feel like there's not enough time to fully test everything out and we're going to give a bad first impression of octavia
20:11:48 <dougwig> that is a pretty huge risk/downside to rushing here.
20:11:54 <sbalukoff> That is true.
20:12:06 <dougwig> it's not like we have an awesome lbaasv1 reputation.
20:12:12 <xgerman> mmh, we have the automated test suite so it shouldn’t be too bad
20:12:12 <blogan> lol
20:12:23 <crc32> don't forget to rubber stamp this one. https://review.openstack.org/201882
20:12:31 <blogan> xgerman: but we dont have test suites for failover, heartbeats, etc
20:12:32 <TrevorV> dougwig, so you're saying the only direction we can go is up, right?
20:12:36 <sbalukoff> If y'all think we should take the safer route that's fine by me. I'll do a couple more things on the L7 stuff in Neutron LBaaS to get that to a state I'm comfortable merging, then concentrate solely on Octavia reviews and stuff in the next weeks.
20:13:16 <blogan> sbalukoff: i'm not saying that i'm just saying there's a ton of code to review for failover, heartbeat sending receiving, active passive, event stream
20:13:23 <sbalukoff> Right.
20:13:32 <xgerman> well, event stream I think we can skip for L
20:13:42 <sbalukoff> So, I've always seen L7 as being less important than Octavia-as-reference.
20:13:44 <TrevorV> What would be considered the safer route?  Meaning not pushing for octavia to be the ref impl?
20:13:45 <johnsom> I think event stream is off the table for Liberty
20:13:55 <xgerman> +1
20:13:56 <crc32> I just realized the EventStreamer needs to be in both Octavia and Neutron LBaaS. Is it too late to get a stub commit in neutron-lbaas to make it into the liberty release?
20:13:57 <sbalukoff> So if Octavia-as-reference is in danger of not landing, then we should put all our efforts into fixing that problem.
20:14:01 <blogan> xgerman: maybe, but i wouldnt be surprised if active passive was too bc that is going to be a bear to review and test out
20:14:27 <xgerman> yeah, it’s on the at risk list
20:14:28 <dougwig> do any of you operators have plans to put octavia into your clouds anytime soon?
20:14:29 <sbalukoff> TrevorV: Safer route is to shelve L7 until Octavia-as-reference has landed.
20:14:45 <sbalukoff> dougwig: I know we do.
20:14:57 <blogan> dougwig: yes i think hp and us both do, though for us it'd be an experimental high-risk product customers can try out
20:14:59 <xgerman> next year
20:15:02 <sbalukoff> But, we're also less beholden to what is in the official release
20:15:18 <xgerman> same here — but we would need to back port...
20:15:22 <sbalukoff> So, these deadlines don't matter all that much to me, except in determining priority of what's going to be worked on.
20:15:48 <sbalukoff> xgerman: And we're totally willing to back port if it means we get a loadbalancer that doesn't suck.
20:15:48 <dougwig> so, did i hear us just push L7 to M?
20:16:10 <johnsom> Sad, but I am ok with L7 pushed to M
20:16:15 <sbalukoff> dougwig: Possibly. The question I have is: Is Octavia-as-reference in danger of not landing in Liberty?
20:16:41 <evgenyf> That's a reasonable choice
20:16:48 <xgerman> well, we certainly have to cut some corners (e.g. event stream)
20:17:04 <xgerman> and potentially active-passive
20:17:14 <dougwig> sbalukoff: it's currently targeted to rc1, so it's currently in.
20:17:22 <sbalukoff> Yeah, we won't use Octavia in production without active-passive.
20:17:22 <xgerman> but I think we are close
20:17:31 <sbalukoff> dougwig: Rad!
20:17:51 <xgerman> yeah, active-passive is huge for us as well
20:17:57 <xgerman> and we will certainly back port that
20:18:09 <johnsom> Very close on VRRP.  I have been reviewing, over half done, and it looks pretty good
20:18:20 <sbalukoff> Cool beans.
20:18:36 <johnsom> So, health manager has +A, what is the timeline for folks to get UDP sender reviewed?
20:18:44 <xgerman> but again we probably need an extension for that since we need to close on Octavia first
20:18:55 <xgerman> (extension for active-passive)
20:19:07 <xgerman> johnsom today
20:19:10 <sbalukoff> Can we get some links thrown out on the stuff that's highest priority to review? I can jump in and help with that.
20:19:20 <johnsom> #link https://review.openstack.org/#/c/201882/
20:19:27 <blogan> johnsom: as soon as failover works for rest that should get in, and i think the health manager can get in too, the sender will be next on my list to fully test out and review
20:19:28 <TrevorV> I think health manager is depending upon the failover review right?
20:19:33 <dougwig> do we have an etherpad with a priority list of reviews?
20:19:44 <blogan> TrevorV: yes
20:19:45 <xgerman> https://etherpad.openstack.org/p/YVR-neutron-octavia
20:19:54 <sbalukoff> dougwig: +1
20:20:02 <xgerman> I think all those reviews need to be done
20:20:10 <johnsom> We can still review and +A UDP without failover.  Technically failover is only called by health manager which is already +A
20:20:19 <sbalukoff> Ok, cool.
20:20:45 <xgerman> so basically Failover, etc + TLS
20:20:54 <xgerman> look outstanding
20:21:07 <xgerman> and we have some bug fixes to review...
20:21:19 <blogan> the bugs are what concerns me about the first impression
20:21:39 <blogan> and i know we have a low bar to hurdle bc of v1
20:21:41 <blogan> but still
20:21:46 <xgerman> we can work on bug fixes until 9/21 +
20:21:55 <johnsom> We still have time for bug fixes though right?
20:21:56 <blogan> okay
20:22:12 <xgerman> yeah, it’s a pain to get them into the RCs but
20:22:29 <xgerman> dougwig? any advice
20:22:39 <blogan> isn't octavia an independent release? or are we on the same release cycle now bc we're the ref impl?
20:22:49 <xgerman> yep, same release cycle
20:22:52 <sbalukoff> So, we should concentrate on bug squashing.
20:22:58 <dougwig> i'm really less worried about functionality bugs than we fall over performance-wise.
20:23:13 <dougwig> blogan: as-ref would be the same cycle
20:23:20 <xgerman> +1
20:23:39 <xgerman> and we can get bug fixes in until very shortly before release
20:24:02 <johnsom> Didn't RAX do some performance testing?
20:24:19 <blogan> yeah and the performance was relatively fine
20:24:25 <blogan> i dont know the exact numbers though
20:24:27 <crc32> You mean heartbeat performance?
20:24:35 <blogan> no load balancing performance
20:24:38 <xgerman> load balancing performance
20:24:44 <crc32> yea we need to address that.
20:24:48 <ptoohill> with haproxy and containers
20:24:57 <crc32> Were testing on containers though.
20:25:00 <crc32> not vms
20:25:15 <ptoohill> ^
20:25:23 <xgerman> vms are inherently worse but we did some tests at the beginning
20:25:36 <xgerman> and Octavia is not really making the vms lower
20:25:39 <xgerman> slower
20:25:43 <sbalukoff> Did anyone keep the code around for these tests (Should we add said code to the project as well?)
20:26:02 <xgerman> well, harpy in a vm and apachebench
20:26:07 <xgerman> haproxy
20:26:11 <blogan> that damn harpy
20:26:27 <ptoohill> Eh, this was internal stuff that would be a pain to break apart. It utilized ansible and other things. But yea, theres other tools that we can write tests in for perf
20:26:51 <sbalukoff> In my experience, when it comes to running haproxy on any host, there are certain kernel parameters you need tuned right to get the best performance. Since the existing reference implementation doesn't do any of this, I would anticipate Octavia to be on-par with it out of the box, performance wise.
20:27:09 <xgerman> actually we tune them in the image
20:27:13 <johnsom> We can come up with a doc on benchmarking, if we need to go there
20:27:15 <blogan> new tests should be done by hitting the neutron-lbaas API and all that just to get good, real numbers
20:27:15 <sbalukoff> xgerman: Good!
20:27:22 <sbalukoff> johnsom: +1
20:27:38 <blogan> but priorities :)
20:27:42 <johnsom> sbalukoff Your kernel settings are in the amphora already
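(For context on the tuning mentioned above: haproxy-oriented kernel settings usually look something like the fragment below. These keys and values are illustrative examples only, not the actual settings baked into the amphora image.)

```shell
# Example /etc/sysctl.d/99-haproxy.conf fragment -- illustrative
# values for a busy haproxy host, NOT the amphora's real settings.

# widen the ephemeral port range for many outbound backend connections
net.ipv4.ip_local_port_range = 1024 65023

# allow a deep accept queue for connection bursts
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 8192

# recycle TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
```

Settings like these would be applied at boot via `sysctl --system` once the file is dropped into the image.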
20:27:46 <blogan> unless someone has cycles
20:27:59 <sbalukoff> I think we might want to start thinking about writing that getting started guide. Nobody is going to complain about octavia's performance if they can't get the damned thing to work in the first place. :)
20:28:10 <blogan> we didnt have the kernel tunings when we did our container tests and it was still decent
20:28:13 <xgerman> they will all come to our lab and learn
20:28:27 <sbalukoff> xgerman: Along with the rainbows and unicorns
20:28:38 <blogan> sbalukoff: but if they can't get it to work they still might have the thinking that if it did work it would perform really well!
20:28:56 <sbalukoff> blogan: Get the car running first, then make it go fast. :)
20:28:57 <xgerman> sbalukoff, we have an octavia lab during the Tokyo summit
20:29:17 <xgerman> so wahtever docs we hand out we can put online
20:29:21 <sbalukoff> xgerman: I know. I'm just saying that not everyone who is going to try Octavia out is going to be there.
20:29:23 <johnsom> So what is the need here?
20:29:36 <sbalukoff> I'm happy to take point on writing that getting started guide in a week or two, eh.
20:29:52 <crc32> :)
20:29:52 <johnsom> Does dougwig just need numbers for competitive marketing(JK doug)?  do we have a goal?
20:29:55 <xgerman> well, we need that anyway for the lab + we will get input from the people struggling
20:30:04 <xgerman> so I think getting started should be after Tokyo
20:30:22 <sbalukoff> xgerman: First draft should be done prior to the lab, eh.
20:30:25 <crc32> I agree sbalukoff should be on point for the doc.
20:30:30 <sbalukoff> HAHA
20:30:40 <dougwig> johnsom: better than the current ref.
20:30:44 <sbalukoff> crc32: Not sure if compliment
20:31:22 <crc32> :)
20:31:24 <blogan> sbalukoff: being good at something most of us arent is a compliment
20:31:25 <johnsom> Ok.  If we feel we need "better than current ref." verification for l3 I can take that.
20:31:57 <xgerman> that’s just pointless. We say we scale better — never said we beat out ref when you just put one lb on it
20:32:00 <sbalukoff> johnsom: Yay!
20:32:13 <blogan> one thing that is not better than current ref impl is density
20:32:38 <sbalukoff> blogan: True. Think we need to start looking into multiple LBs per amphora at this time, then?
20:32:45 <blogan> not at this time
20:32:46 <johnsom> blogan Now you just bring up that process per listener again....  grin
20:32:48 <xgerman> current ref is bare metal and we are subject to nova scheduling vagarities
20:33:05 <sbalukoff> That's true.
20:33:18 <xgerman> density is a problem of your nova overcommit
20:33:19 <blogan> johnsom: not just that, just having a 1GB VM per load balancer will mean less density than just spinning up a process
20:33:25 <crc32> vms on bare metal?
20:33:44 <johnsom> True
20:33:46 <blogan> yeah i think its something that is going to be fine
20:33:50 <blogan> i mean acceptable
20:34:15 <blogan> but people running on devstack will be disappointed, unless they're running on huge VMs
20:34:22 <xgerman> yep
20:34:39 <sbalukoff> Right. But people running production loads on devstack are doing it wrong in so many ways...
20:34:47 <xgerman> and people having one LB might be as well since the virtualization takes its toll
20:34:53 <dougwig> if we replace the ref with something that scales *worse*, i don't think it'll be a good day. and if we only define scalability as for operators with huge nova clouds, we might be setting ourselves up.
20:35:36 <blogan> maybe we need a namespace compute driver
20:35:43 <blogan> seriously
20:35:51 <johnsom> We do have the problem that if people run devstack in a VM without vt-x emulation it is a painful experience
20:35:53 <xgerman> well, we are making assumptions what our customers want
20:36:14 <sbalukoff> Well, the problem is that Octavia's scaling performance only can significantly beat the on-bare-metal ref impl when active-active lands (ie. Octavia 2.0)
20:36:20 <blogan> xgerman: devstack isn't run by customers (for the most part) it'll be a pain for developers or people testing it out
20:36:37 <sbalukoff> Having said this, I have multiple groups in IBM that are champing at the bit to try to deliver active-active for Octavia.
20:36:49 <sbalukoff> I'm going to try to get them to attend these meetings.
20:36:54 <xgerman> cool
20:37:44 <TrevorV> champing at the bit... ha ha that made me chuckle
20:37:54 <sbalukoff> My guess is they'll want to start working on that immediately. Which means the roadmap we came up with... over a year ago, will probably need some revision. But we can cross that bridge when they start showing up to these meetings, eh. :)
20:38:01 <xgerman> well, so if we roll out Octavia with the you need VT-X I think that’s ok. There are other services which run in VMs so we are not unique with that need
20:38:18 <sbalukoff> xgerman: +1
20:38:34 <dougwig> good point. how does trove handle the devstack/smaller cloud case?
20:39:02 <johnsom> They have the same problem we do as far as I know
20:39:14 <blogan> does hp have the trove PTL?
20:39:20 <xgerman> he is on vacation
20:39:24 <xgerman> but yes
20:39:35 <xgerman> and we have the cue PTL who has a similar problem
20:40:17 <blogan> well i really do think we can have a namespace compute driver in octavia
20:40:21 <blogan> but obviously not for L
20:40:25 <johnsom> Again, if you aren't running the amps inside nova inside a VM you are fine.  It's just the vm inside vm thing that qemu drops to tcg and no performance
20:40:44 <xgerman> yeah, I think we can document that and should be fine
20:41:04 <sbalukoff> Yep.
20:41:09 <blogan> dougwig: does that suit your concerns?
20:41:18 <dougwig> right, i'm not worried about production. i'm a bit worried about first impressions.
20:41:28 <blogan> yeah
20:41:37 <xgerman> so we should check for VT-X and not install...
20:41:57 <sbalukoff> dougwig: So am I, but I'm more concerned that our 'getting started' documentation sucks, so I'll work on fixing that. In a week or two.
20:41:58 <blogan> wouldn't that prevent devstack?
20:42:34 <xgerman> well, we can have a --force parameter for our tests
20:42:49 <sbalukoff> Heh!
20:43:02 <dougwig> i don't think this is germane to the L deliverable, but I wonder if we can do a mini-amp with cirros+haproxy that can run with 128 or 256MB.
20:43:06 <johnsom> devstack on a vt-x enabled system (vmware for example) works fine
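(The usual way to check for the hardware virtualization support being discussed is to look for the `vmx` (Intel VT-x) or `svm` (AMD-V) flags in /proc/cpuinfo; without them qemu falls back to slow TCG emulation. A small sketch, written as a helper so the flag string can come from anywhere:)

```shell
# has_hw_virt: report whether a cpuinfo flags string advertises
# hardware virtualization (vmx = Intel VT-x, svm = AMD-V).
has_hw_virt() {
    if echo "$1" | grep -q -E 'vmx|svm'; then
        echo yes
    else
        echo no
    fi
}

has_hw_virt "fpu vme vmx sse2"   # Intel VT-x present
has_hw_virt "fpu vme sse2"       # no hardware virt: nested VMs drop to TCG
# On a real host:
# has_hw_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
```

A devstack plugin could use a check like this to warn (or refuse to install without a force flag) when amphora VMs would be unusably slow.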
20:43:23 <sbalukoff> Or we could just write a FAQ answering the question, "Why does my Octavia load balancing performance suck?"
20:43:34 <xgerman> yeah, johnsom has been looking into smaller amps but no luck so far
20:43:37 <johnsom> Right now the current amp sits at about 700MB on ubuntu
20:43:45 <sbalukoff> Holy moly!
20:43:53 <dougwig> right, cirros is tiny in comparison.
20:43:55 <blogan> johnsom: thats ubuntu overhead though now?
20:43:57 <blogan> no?
20:44:08 <johnsom> It is ubuntu overhead
20:44:10 <xgerman> yep, and cirros doesn't have packaging
20:44:32 <johnsom> Right, we used triple-o which has limited base image support.
20:44:32 <dougwig> i'm not suggesting it be the main amp.  but cirros + static binary would make a nice devstack experience.
20:44:45 <sbalukoff> dougwig: +1
20:44:57 <xgerman> yeah, would love that as well
20:45:14 <johnsom> I think we need to switch to something else overall, but is it worth it vs. containers?
20:45:32 <xgerman> containers should work, too
20:45:52 <xgerman> so we agree we need to fix that
20:45:54 <xgerman> now
20:46:02 <xgerman> 1) Should that stop the release?
20:46:05 <johnsom> It's definitely a wish list item.  I looked at what triple-o has and was disappointed.
20:46:08 <xgerman> 2) Can it be done in M?
20:46:17 <johnsom> I think we should just build a custom image at some point
20:46:32 <blogan> i think its a question of whether it should stop octavia becoming ref impl
20:46:40 <xgerman> yep
20:46:59 <xgerman> and if so how long do we have to create that image?
20:47:02 <sbalukoff> I don't think this should stop Octavia from becoming the reference implementation.
20:47:14 <xgerman> +1
20:47:26 <dougwig> no, don't stop L.  people devstack master.
20:47:31 <dougwig> IMO.
20:47:47 <xgerman> ok, cool. So we do L and then work on that image. Deal?
20:47:57 <sbalukoff> +1
20:47:59 <johnsom> xgerman +1
20:48:07 <blogan> if i get the time i may attempt to do a namespace compute driver
20:48:13 <blogan> bc i like punishment
20:48:18 <xgerman> :-)
20:48:25 <sbalukoff> Haha!
20:48:26 <crc32> I'll help with that blogan.
20:48:34 <xgerman> I think we have 12 minutes for two more topics
20:48:38 <johnsom> I think we have reviews to focus on for L
20:48:45 <sbalukoff> xgerman: Let's move on then.
20:48:47 <xgerman> #topic Discuss Flavor-Framework and Service providers
20:49:05 <xgerman> dougwig you have any input on how you see those two co-exist?
20:49:22 <xgerman> since we are pushing hard to have flavor in for LBaaS
20:49:23 <dougwig> i see them as co-existing, but i'm open.
20:49:42 <sbalukoff> dougwig: Co-existing in L?
20:49:50 <sbalukoff> And moving to flavor-framework only in M?
20:50:18 <sbalukoff> I've not looked closely into it, but it sounds reasonable to me, especially given that flavor-framework is so new.
20:50:34 <xgerman> well, flavor is sort of on the edge for M
20:50:39 <dougwig> do sp's provide enough overhead that it's even worth killing them? the upgrade migration seems unpleasant to deal with.
20:51:17 <sbalukoff> So is the question we're really discussing, "Do we do flavor-framework in L?"
20:51:34 <xgerman> well, I have a guy who says he is close...
20:51:41 <sbalukoff> Cool
20:52:13 <xgerman> and i like lbaas-loadbalancer-create --flavor gold better than --provider <some hardcoded string>
20:52:32 <sbalukoff> Put another way, I don't see us *not* doing the flavor-framework at some point. If we can get it in L, that's great. But since it's so new, let's leave SPs in there and decide later whether to kill it. (ie. we can / should probably kick that can down the road)
20:52:46 <dougwig> sbalukoff: +1
20:52:51 <xgerman> +1
20:53:04 <xgerman> just was confused what the plans were and needed clarification
20:53:13 <sbalukoff> Ok, cool!
20:53:19 <xgerman> #topic Come to consensus on pool sharing: At loadbalancer or listener level?
20:53:29 <xgerman> sbalukoff you have the floor
20:53:40 <sbalukoff> Ok, I think evgeny and I are now in agreement that pool sharing should happen at the loadbalancer level.  Right, evgeny?
20:53:46 <sbalukoff> If so, then there's not much to discuss. :)
20:54:02 <xgerman> nice
20:54:11 <sbalukoff> It's mostly: Do I need to beat any more of y'all into submission over this design decision. ;)
20:54:21 <evgenyf> sbalukoff: right
20:54:31 <sbalukoff> Especially since we have working code that does the pool sharing at the loadbalancer level.
20:54:34 <xgerman> we never questioned that. Just the API design frightened us
20:54:39 <sbalukoff> Haha!
20:54:53 <sbalukoff> Ok, then... we're good? Anything else to discuss on this topic?
20:55:18 <xgerman> well, API design. We can specify listener-id or l7-policy-id on the pool?
20:55:26 <xgerman> or is it load balancer id?
20:55:37 <rm_work> o/
20:55:38 <sbalukoff> xgerman: Either or.
20:55:47 <xgerman> all three?
20:55:48 * rm_work is on an even weirder sleep schedule right now than normal
20:56:01 <crc32> hey rm_work the meetings just getting started.
20:56:09 <sbalukoff> Or you can specify both so long as the listener is on the loadbalancer specified.
20:56:21 <rm_work> lol crc32
20:56:26 <xgerman> blogan likes that?
20:56:26 <evgenyf> l7 policy is specified to a listener
20:56:40 <sbalukoff> evgenyf: Right. As it should be, IMO. :D
20:56:57 <xgerman> evgenyf that’s fine
20:57:15 <sbalukoff> xgerman: So to answer your question: Yes. It allows for any combination of the above and figures out how to "do the right thing."
20:57:28 <sbalukoff> Ultimately, the pool you create is going to be associated with a loadbalancer one way or another.
20:58:06 <sbalukoff> xgerman: This should prevent breaking backward-compatibility with the previous Neutron LBaaS v2 API.
20:58:14 <xgerman> yep
20:58:16 <sbalukoff> (Even though that API is technically experimental. ;) )
20:58:47 <sbalukoff> In other words, the API changes in my proposed patch are additive.
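(To make the "either/or" parent concrete: under the scheme described above, a pool-create request could name a listener, a load balancer, or both, as long as the listener belongs to that load balancer. The field names and payload shape below follow the discussion but are an assumption, not the merged API.)

```shell
# pool_body: build a hypothetical pool-create request body.
# Args: $1 = listener id (may be empty), $2 = load balancer id (may be empty).
# Field names "listener_id"/"loadbalancer_id" are illustrative.
pool_body() {
    listener_id="$1"
    loadbalancer_id="$2"
    body='{"pool": {"protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN"'
    if [ -n "$listener_id" ]; then
        body="$body, \"listener_id\": \"$listener_id\""
    fi
    if [ -n "$loadbalancer_id" ]; then
        body="$body, \"loadbalancer_id\": \"$loadbalancer_id\""
    fi
    echo "$body}}"
}

pool_body "" "lb-uuid"               # pool shared at the load balancer level
pool_body "listener-uuid" "lb-uuid"  # both given; listener must belong to the lb
```

Either form leaves the pool associated with exactly one load balancer, which is what keeps the additive API backward-compatible.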
20:59:09 <sbalukoff> Ok, anything else to discuss?  We're almost out of time.
20:59:18 <sbalukoff> I wanted to make one last announcement:
20:59:22 <sbalukoff> On a different topic.
20:59:31 <xgerman> yeah, I think we are done
20:59:44 <xgerman> everybody save your Open Discussion until next time
20:59:49 <xgerman> #endmeeting