20:00:07 <johnsom> #startmeeting Octavia
20:00:12 <openstack> Meeting started Wed Jan  6 20:00:07 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:14 <sbalukoff> Howdy, folks!
20:00:15 <rm_work> o/
20:00:16 <openstack> The meeting name has been set to 'octavia'
20:00:18 <bana_k> hi
20:00:22 <TrevorV> o/
20:00:28 <johnsom> Hi all
20:00:32 <johnsom> #topic Announcements
20:00:43 <ajmiller> o/
20:00:45 <TrevorV> Lookin forward to that there Meetup amirite?!?!
20:00:46 <johnsom> Top of the list is: Happy New Year!
20:00:50 <sbalukoff> Yep!
20:00:58 <eranra> Hello
20:00:58 <johnsom> Looking forward to some bbq
20:01:07 <rm_work> yeah, gonna get me some BBQ since no one down here will go to BBQ with me :P
20:01:19 <rm_work> need you guys to come in from WA so I can actually go
20:01:21 <TrevorV> Hey now, I went the last time we all went (as far as I remember)
20:01:21 <sbalukoff> rm_work: I'm in!
20:01:23 <blogan> hi
20:01:24 <xgerman> rm_work no worries BBQ every day!
20:01:24 <johnsom> Such a shame
20:01:25 <TrevorV> I got the chili dog
20:01:29 <rm_work> lol
20:01:44 <johnsom> Midcycle in San Antonio, TX - January 12-15
20:01:53 <johnsom> #link https://etherpad.openstack.org/p/lbaas-mitaka-midcycle
20:02:10 <johnsom> Speaking of the mid-cycle.
20:02:28 <dougwig> my goal is to see blogan towed again.
20:02:35 <xgerman> dougwig lol
20:02:41 <sbalukoff> Haha!
20:02:45 <johnsom> I have added just a few topics.  I would also like to propose a day where we focus on fixing bugs as we have a bit of a backlog
20:02:51 <xgerman> johnsom is commandeering our car ;-)
20:02:52 <blogan> my goal is to see everyone else get towed
20:03:12 <minwang2> o/
20:03:13 <blogan> johnsom: good idea
20:03:27 <johnsom> Yeah, I can drive by an open parking space and park in a tow zone....
20:03:55 <TrevorV> johnsom that's a great idea, however that was honestly part of my plan for next week.  I've been so out of the loop working on internal stuffs, I thought I'd focus my time throughout the whole thing on bugs.  The more the better, however :D
20:04:13 <johnsom> Ok, I have proposed Wednesday as bug day.  That gives us a day to chat about the other topics.  We could slide it to Thursday though.
20:04:38 <johnsom> Cool, sounds like folks are on board.
20:04:39 <dougwig> bug day should be the day we make the biggest effort to have remote involvement, IMO.
20:04:49 <sbalukoff> dougwig: +1
20:04:52 <xgerman> +1
20:05:04 <rm_work> Yeah, we are figuring that out... we didn't get a room with builtin VC again, so ... we're looking at getting webcams set up again...
20:05:18 <rm_work> hopefully we can have a couple of webcams to point at whiteboards
20:05:23 <johnsom> Yep, sounds good.  Maybe we should make it Thursday, just to have a fixed day, as Tuesday could run into Wednesday
20:05:32 <rm_work> the room should at least have VOICE VC I hope
20:06:00 <sbalukoff> johnsom: I suspect we'll have enough to talk about with the other topics that it will take more than one day.
20:06:06 <TrevorV> rm_work I may just bring in my studio mic and tower to handle all the visual stuffs
20:06:13 <blogan> oh btw this time around there will only be drinks and snacks, no unlimited bacon and eggs for breakfast like last time, sooo everyone should eat breakfast before they come if they eat breakfast
20:06:14 <sbalukoff> So I would say "Let's make Thursday the bug day"
20:06:38 <blogan> rm_work: it's the same room as last time so if it did then it does now
20:06:43 <xgerman> blogan no bacon...
20:06:46 <johnsom> Ok, updated for Thursday is bug day.  Remote folks should join.
20:07:03 <xgerman> let’s revisit that...
20:07:24 <rm_work> yes, we should revisit the no bacon situation
20:07:29 <TrevorV> xgerman just to be clear, you guys didn't offer bacon in the mornings EITHER...
20:07:30 <johnsom> Haha, ok, that was a question I had.  So no breakfast supplies this time.
20:07:48 <blogan> there will probably be granola bars or something
20:08:02 <dougwig> red bull?
20:08:05 <sbalukoff> Heh!
20:08:08 <johnsom> There were frozen breakfast sandwiches in the kitchen, but yes, no bacon
20:08:09 <rm_work> do we have a budget for tacos or something? lol
20:08:13 <blogan> yes i made sure to specifically mention red bull
20:08:14 <TrevorV> johnsom when those of us who are collecting snacks ACTUALLY collect the snacks, we can pick up like, Poptarts or something that could be consumed for breakfasts... but it won't be a full balanced meal or anything
20:08:45 <johnsom> Ok, good info.  It sounds like we are going to have great turnout
20:08:49 <TrevorV> rm_work if you want to be the costco card supplier, we can go pick up large amounts of taco making supplies, and just DO it...
20:08:54 <TrevorV> If'n you're wantin ta
20:09:03 <johnsom> Any other questions/comments about the mid-cycle?
20:09:04 <rm_work> heh
20:09:09 <rm_work> I meant like, Benny's :P
20:09:15 <blogan> is everyone who is going signed up on the etherpad?
20:09:21 <xgerman> TrevorV I have a Costco card...
20:09:23 <blogan> and is everyone who is signed up on the etherpad still going?
20:09:28 <rm_work> worst case there's TacoC across the street that does cheap 12-packs of tacos
20:09:31 <xgerman> so we can also have muffins ;-)
20:09:39 <dougwig> blogan: most years it's the etherpad plus a few.
20:10:09 <blogan> dougwig: yeah thats what i'm wondering, bc we're already pushing capacity on that one room, well at least that big table in that room
20:10:12 <sbalukoff> I'm pretty sure all the IBM and Blue Box folks listed on the etherpad are all still going.
20:10:15 <johnsom> I know bedis is coming, he e-mailed me about it
20:10:35 <blogan> he's signed up too
20:10:59 <johnsom> Yep.  Any possibility of a break-out room for the FWaaS folks?
20:11:29 <blogan> i was going to ask to see if any of the surrounding rooms are available, not sure if they will be on short notice
20:11:45 <johnsom> Ok, thanks.
20:12:08 <johnsom> Also: GSLB Midcycle in Seattle - January 20-22
20:12:17 <johnsom> #link https://etherpad.openstack.org/p/kosmos_2016_winter_midcycle_meetup
20:12:35 <johnsom> In case folks are interested in GSLB/Kosmos
20:12:56 <sbalukoff> It's in my back yard... but do you *really* want me coming? ;)
20:13:14 <johnsom> We are a welcoming community....
20:13:29 <johnsom> As long as you are ready to code.....  grin
20:13:33 <dougwig> sbalukoff: if you want to contribute, sure.
20:13:36 <xgerman> And I can bring some bacon from the Homewood
20:13:37 <sbalukoff> (he said, begrudgingly, through clenched teeth)
20:13:56 <sbalukoff> :)
20:13:57 <johnsom> #topic Brief progress reports
20:14:16 <sbalukoff> I need some eyes on this, please: https://review.openstack.org/#/c/256369/4
20:14:19 <johnsom> Ok, what is new other than holiday weight loss plans?
20:14:50 <johnsom> Cool
20:14:59 <sbalukoff> I'm working on L7 functionality for Octavia, and it's going to be dependent on that patch. So, I really need some feedback before I potentially waste a whole lot of effort.
20:15:00 <TrevorV> #link https://review.openstack.org/#/c/256369/4
20:15:24 <johnsom> Before the break I was mostly working on bug fixes.  Some of which can use review still.
20:15:29 <eranra> https://bugs.launchpad.net/octavia/+bug/1514510
20:15:30 <openstack> Launchpad bug 1514510 in octavia "Delete of last member does not remove it from haproxy config" [Critical,Fix committed] - Assigned to Eran Raichstein (eranra)
20:15:42 <minwang2> we got lbaas/Octavia status tree API documented and tested , the patch has been merged #link https://review.openstack.org/#/c/256884
20:15:54 <johnsom> #link https://review.openstack.org/#/c/256369/4
20:16:06 <johnsom> Just to get it in the minutes
20:16:30 <TrevorV> Copycat johnsom I already did that :D
20:16:31 <johnsom> eranra Yes, thank you for fixing that!
20:16:43 <TrevorV> #link https://bugs.launchpad.net/octavia/+bug/1514510
20:16:44 <openstack> Launchpad bug 1514510 in octavia "Delete of last member does not remove it from haproxy config" [Critical,Fix committed] - Assigned to Eran Raichstein (eranra)
20:16:44 <sbalukoff> Sweet!
20:16:48 <eranra> I was not sure it was real ... but it was :-)
20:17:18 <johnsom> That was one of our critical bugs, so good to have fixed
20:17:27 <sbalukoff> Yes, thank you very much for that!
20:17:48 <eranra> Also, if you can look at the new version of N+1
20:17:50 <eranra> https://review.openstack.org/#/c/234639/2/specs/version1/active-active-topology.rst
20:18:02 <johnsom> #link https://review.openstack.org/#/c/259240/
20:18:11 <johnsom> #link https://review.openstack.org/259130
20:18:23 <johnsom> #link https://review.openstack.org/257643
20:18:33 <johnsom> Are all bug fixes that need review
20:18:42 <sbalukoff> #link https://review.openstack.org/#/c/234639/2/specs/version1/active-active-topology.rst
20:19:08 <johnsom> Yep, active-active would be good to have some review before next week.
20:19:30 <sbalukoff> N+1 is a topic we're going to be discussing next week as well, so it'd be good to get feedback on that prior to the face-to-face so we can accelerate things next week (where time is going to be short anyway)
20:19:44 <sbalukoff> Jinx!
20:19:59 <johnsom> Ok, any other progress to report?  I think I saw some event streamer patches and some "get me an LB"
20:20:25 <xgerman> get me an LB was WIP
20:20:34 <sbalukoff> I imagine it's still WIP?
20:20:35 * xgerman looked closer
20:20:46 <blogan> yeah i should have the get me an LB ready to go soon, but i still need to figure things out on the neutron-lbaas side
20:21:10 <johnsom> Yep.  This is progress reports, so, opportunity for updates, etc. not just review requests (IMHO)
20:21:12 <blogan> bc how we're supposed to do extensions to add features in neutron-lbaas throws a wrench in how i wanted to do it
20:21:36 <xgerman> sounds like a mid cycle discussion topic?
20:21:59 <blogan> probably, though ive been meaning to interrogate dougwig about some of it
20:22:07 <johnsom> Ok, moving on...
20:22:15 <johnsom> #topic Discuss review velocity
20:22:29 <johnsom> I am guilty here as well, but wanted to mention it.
20:22:38 <sbalukoff> Well... review velocity is going to be crap right now because everyone was on vacation.
20:22:49 <blogan> holidays augmented the problem
20:22:54 <sbalukoff> But you're right-- we need more people reviewing.
20:22:56 <dougwig> i am super guilty.
20:22:58 <sbalukoff> blogan: Yep.
20:22:59 <johnsom> We have a bunch of stuff up for review, so cores if you can make some cycles....
20:23:20 <sbalukoff> dougwig: Oh, we know. ;)
20:23:45 <blogan> i'm afraid to look at stackalytics, but here we go
20:23:56 <johnsom> I will call out one, a one line change, that had three +2's, but no +A for most of December
20:24:18 <xgerman> well, the human CI takes a lot of time
20:24:18 <blogan> and the reason for 2 +2's without a +A?
20:24:21 <johnsom> Thank you to blogan for getting that done
20:24:45 <xgerman> I am reluctant to +A stuff I haven’t run… since I can’t trust our CI
20:24:48 <johnsom> Agreed, I have the ruler on my wrist too....  grin
20:25:02 <blogan> xgerman: even if all the jobs tested the real drivers, that feature would not have been tested by them, so human interaction would have been required
20:25:27 <blogan> but honestly, that patch is perfect for a scenario test, which does test the real drivers
20:25:36 <xgerman> ;-)
20:25:40 <johnsom> Yeah, we have a gap with octavia specific testing.  I want to talk about that at the mid-cycle.
20:26:03 <blogan> well i mean a neutron-lbaas scenario test could be written to test that
20:26:10 <johnsom> We need to pick a test framework and get some resources available for testing newbies to be able to go after this.
20:26:21 <xgerman> +1
20:26:56 <blogan> in the short term though, we need more people with +A abilities to be able to serve as that verifier
20:27:07 <johnsom> blogan agreed, but it currently is a feature of just the octavia driver, so might be better to be an octavia scenario.  Anyway, we should chat about this
20:27:26 <blogan> johnsom: oh good point, very good point, i stand corrected
20:27:44 <johnsom> I think we just added one recently, so some progress there
20:28:05 <johnsom> #topic Bug review
20:28:09 <blogan> i dont mean more cores, i mean more of our current cores verifying
20:28:10 <sbalukoff> I'd throw my name into the hat for +A again. :)
20:28:28 <johnsom> Most of you missed the last meeting where I introduced this topic.
20:28:32 <ptoohill> Yea, i was under the impression we should be testing/verifying most things that come through as a core
20:28:39 <ptoohill> i must like to do things wrong
20:28:51 <johnsom> I want to pick a few bugs to talk about, highlight, each meeting.  Not go through the full list, just a few
20:28:54 <sbalukoff> ptoohill: Er... that's my impression as well.
20:29:04 <sbalukoff> johnsom: Go for it!
20:29:42 <johnsom> So, this week I will cover the same critical bugs, minus the one eranra fixed
20:29:45 <johnsom> #1490033 Devstack scripts need to enable lb-network controller IP
20:29:51 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1490033
20:29:53 <openstack> Launchpad bug 1490033 in octavia "Devstack scripts need to enable lb-network controller IP" [Critical,New] - Assigned to Brandon Logan (brandon-logan)
20:29:53 * blogan hides
20:30:29 <blogan> if someone wants to take that up, be my guest, i have not worked on it yet :(
20:30:34 <johnsom> This one is about getting our devstack controller to have an IP on the lb-mgmt-net so we can enable health monitoring by default.
20:30:58 <johnsom> It needs someone with devstack networking mojo to get the interface setup.
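A rough sketch of the kind of devstack networking mojo this calls for, following the usual OVS pattern of backing a host-side internal interface with a neutron port; the names (o-hm0, octavia-health-manager-port) and exact commands are illustrative assumptions, not a merged fix:

    # Hypothetical: give the devstack host an interface on lb-mgmt-net so
    # the health manager can receive amphora heartbeats.
    PORT_ID=$(neutron port-create --name octavia-health-manager-port \
        lb-mgmt-net | awk '/ id / {print $4}')
    PORT_MAC=$(neutron port-show $PORT_ID | awk '/ mac_address / {print $4}')
    # Plug an OVS internal port into br-int and bind it to the neutron port.
    sudo ovs-vsctl -- --may-exist add-port br-int o-hm0 \
        -- set Interface o-hm0 type=internal \
        -- set Interface o-hm0 external-ids:iface-status=active \
        -- set Interface o-hm0 external-ids:attached-mac=$PORT_MAC \
        -- set Interface o-hm0 external-ids:iface-id=$PORT_ID
    # Take the port's MAC and pick up its fixed IP (DHCP for simplicity).
    sudo ip link set dev o-hm0 address $PORT_MAC
    sudo dhclient o-hm0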
20:31:19 <johnsom> #1491560 Deleting a non-existing member deletes the actual member
20:31:27 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1491560
20:31:29 <openstack> Launchpad bug 1491560 in octavia "Deleting a non-existing member deletes the actual member" [Critical,New]
20:32:04 <johnsom> This one probably needs to be verified that it still happens.  Evil if it does.
20:32:12 <ptoohill> that is odd
20:32:30 <johnsom> #1509706 Failover fails when instance is deleted
20:32:42 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1509706
20:32:43 <openstack> Launchpad bug 1509706 in octavia "Failover fails when instance is deleted" [Critical,New]
20:33:08 <blogan> johnsom: is that a manual deletion of the instance?
20:33:12 <johnsom> This one may need some discussion.  The current failover flow expects the nova instance to exist before failover
20:33:30 <TrevorV> johnsom I think we left it that way on purpose
20:33:36 <johnsom> Yes, so if it is nova deleted or nova *lost* failover fails
20:33:52 <xgerman> clearly we want that to be more robust
20:34:08 <TrevorV> The "issue" isn't that it fails, because that's what we want, we just want it to fail gracefully, or move on as though it SHOULD have been deleted.
20:34:36 <johnsom> Yeah, I think we did, but we should discuss and decide what we want to do about it.
20:34:38 <TrevorV> One specific problem with that (since I wrote this originally) was if we don't specifically get information from the instance, it makes creating the new one "impossible"
20:34:49 <johnsom> TrevorV Actually, that part of the bug is fixed.
20:34:51 <blogan> clearly but i wouldn't say that is a high priority, to me that's almost akin to someone manually deleting a record from the database and having to code against that
20:35:02 <johnsom> deletes of deleted instances now logs and moves on
20:35:10 <blogan> but not entirely the same thing, it should handle it gracefully
20:35:35 <TrevorV> johnsom okay, so if it's graceful already, what is the exact problem?
20:35:36 <johnsom> blogan +1
20:35:39 <sbalukoff> blogan: +1
20:36:17 <johnsom> What you mentioned, if the instance isn't there failover fails because we don't have the details.
20:36:38 <TrevorV> Oh, so it moves on, but still fails is what you're saying?
20:36:40 <xgerman> and instance not there can happen when a hypervisor crashes
20:36:41 <johnsom> I am open to dropping this from critical, I just wanted to have a discussion about it.
20:37:10 <johnsom> It doesn't fail on the nova delete call any longer, but fails to failover the missing amphora
20:37:24 <xgerman> yeah, I think we need to fix that
20:37:25 <blogan> i'd move it off critical because under normal operations this shouldn't happen, it takes an odd event to cause this
20:37:46 <sbalukoff> blogan: +1 again.
20:37:48 <xgerman> well, hypervisor crashing is exactly when you need fail over :-)
20:37:51 <TrevorV> Right, and it can't fail over the way failover works right now.  About the only way we can change that is if we store new information in the DB for each instance so we don't have to call nova to retrieve it
20:37:58 <johnsom> Well, nova hosts going away does happen
20:38:28 <johnsom> TrevorV +1
20:38:39 <johnsom> So, drop it to High?
20:38:59 <xgerman> yep, though we should test if we actually fail over when we lose a host
20:39:27 <blogan> when a host is dropped, does nova delete those records from its db?
20:39:35 <blogan> this is all implementation details here sorry
20:39:59 <TrevorV> Honestly I think we might need to talk about this more face-to-face unless we don't have more to discuss in the meeting today
20:40:00 <xgerman> I am not sure but I had weird things happen when hosts dropped
20:40:04 <blogan> yeah High to me seems to be the right priority
20:40:09 <xgerman> +1
20:40:27 <blogan> just not critical as critical to me means normal workflow is affected
20:40:31 <johnsom> Ok, cool, and we can talk more in person
20:40:38 <TrevorV> +1
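A minimal sketch of the graceful path discussed above, with hypothetical names (amp_repo and rebuild are stand-ins, not Octavia's actual task or repository APIs): persist enough per-amphora detail in Octavia's own DB that failover never depends on the nova record, and treat a missing instance as already deleted.

    import logging

    from novaclient import exceptions as nova_exc

    LOG = logging.getLogger(__name__)


    def failover_amphora(compute, amp_repo, rebuild, amphora_id):
        # Load everything needed for the replacement (ports, VIP, flavor,
        # etc.) from our own database record instead of asking nova.
        amp = amp_repo.get(amphora_id)      # hypothetical repository call
        try:
            compute.servers.delete(amp.compute_id)
        except nova_exc.NotFound:
            # Instance deleted out-of-band or lost with its hypervisor:
            # log and move on rather than failing the whole failover.
            LOG.warning('Compute instance %s for amphora %s already gone; '
                        'continuing failover', amp.compute_id, amphora_id)
        return rebuild(amp)                 # hypothetical: boot replacement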
20:40:58 <johnsom> That is it for the bugs I wanted to talk about this time.
20:41:01 <johnsom> #topic Open Discussion
20:41:07 <xgerman> nmagnezi?
20:41:13 <nmagnezi> xgerman, here
20:41:24 <xgerman> yeah, now you can raise your voice :-)
20:41:42 <johnsom> Feel free to pick up one of these bugs and work on it....  grin
20:41:44 <blogan> is there a bug report about immediately stopping the polling of the nova instance if nova reports it as error?
20:41:45 <nmagnezi> thanks xgerman :)
20:41:56 <johnsom> blogan yes there is
20:41:57 <blogan> i thought i put one in, or there was one
20:42:21 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1490180
20:42:22 <openstack> Launchpad bug 1490180 in octavia "Amphora create should check for ERROR status in compute" [High,Confirmed]
20:42:26 <blogan> ah yes
20:42:40 <blogan> thanks
20:42:42 <nmagnezi> Hey guys, first time here :-). So I have some questions, that follow-up some answers I got in #openstack-lbaas
20:42:52 <johnsom> Sounds like a juicy candidate for bug day....
20:43:02 <blogan> sure does
20:43:04 <sbalukoff> johnsom: +1
20:43:07 <blogan> nmagnezi: no hard questions please
20:43:10 <sbalukoff> nmagnezi: Welcome!
20:43:15 <blogan> nmagnezi: ask us our favorite colors
20:43:20 <nmagnezi> blogan, lol
20:43:21 <johnsom> nmagnezi Welcome!
20:43:36 <nmagnezi> So, the first two questions are related to VRRP - Active/Standby: From answers I got I understand that VRRP traffic between amphoras is being passed on the VIP network (which is the tenant network..). If that is indeed the case, isn't that a security issue? What if someone captures that traffic, or imposes a higher priority for himself to become the active node?
20:43:37 <sbalukoff> nmagnezi: Mine is yellow. No blue! Argh!
20:43:59 <xgerman> sbalukoff contractually obligated blue
20:44:01 <nmagnezi> sbalukoff, johnsom thanks!
20:44:19 <blogan> nmagnezi: if that vip network is an isolated network, is it still a security issue?
20:44:41 <sbalukoff> nmagnezi: amphorae are not shared between tenants. So at worst they're not getting access to anything they don't already have access to.
20:44:49 <nmagnezi> blogan, is that always the case?
20:45:03 <johnsom> nmagnezi It is still the tenant's own network.  Also, there is a unique pass phrase, but I think there is an issue with keepalived where it is sent in clear text.
20:45:22 <nmagnezi> blogan, and still, vms in that tenant will see that vrrp traffic, which is bad
20:45:23 <blogan> nmagnezi: yes, unless there's a way to do provider subnets which I honestly don't know, but that's kind of the operator's issue
20:46:01 <johnsom> nmagnezi I do advocate for a vrrp network (virtual networks are cheap).
20:46:28 <xgerman> well, we went with the tenant network to also get a failover if connectivity to that network got lost
20:46:30 <blogan> yeah i think that is the eventual plan anyway, or one of the solutions, we just need to hash it out
20:46:30 <johnsom> nmagnezi If this is something you are interested in working on, I think you would find support for that work.
20:46:40 <xgerman> aka the vrrp network is good but the connection to the tenant network is severed
20:47:15 <nmagnezi> johnsom, you cannot create a separate network for this, it uses the network on which the load balancing works
20:47:16 <johnsom> xgerman I think there are other ways to test/monitor that.  It was just more code
20:47:33 <xgerman> agreed.
20:47:36 <sbalukoff> xgerman: In other words, in-band health monitors (which a VRRP heartbeat is), are a good thing.
20:47:36 <nmagnezi> johnsom, ack, trying to understand things before I jump in
20:47:40 <xgerman> but that was the rationale for the tenant network
20:47:49 <johnsom> nmagnezi Octavia could create a vrrp network for the amphora, we have that capability
20:48:02 <blogan> a vrrp network per load balancer
20:48:16 <johnsom> blogan correct
20:48:20 <nmagnezi> does it? I was told that it currently does not
20:48:25 <johnsom> i.e. per amphora pair
20:48:40 <sbalukoff> Although, again, I dislike using "VRRP" in the name. As such a thing will have utility when doing active-active, which doesn't depend on VRRP.
20:48:44 <blogan> nmagnezi: not currently, but thats the prevailing solution we have right now, but its not in right now
20:48:49 <johnsom> nmagnezi It would just be development effort to make that happen
20:48:56 <sbalukoff> johnsom: Per amphora pool.
20:48:59 <sbalukoff> or group.
20:49:12 <xgerman> but then we should also do the in-band monitoring another way
20:49:19 <sbalukoff> xgerman: +1
20:49:24 <nmagnezi> sbalukoff, well, I can only ask about current features, right? :)
20:49:26 <xgerman> I hate to take that away without a ready replacement
20:49:45 <sbalukoff> nmagnezi: It's not a bad idea to be forward thinking about features currently in development.
20:49:55 <xgerman> +1
20:50:04 <johnsom> I think active/standby "VRRP" is still a viable tier of service, even with active-active
20:50:07 <nmagnezi> johnsom, blogan, is there a bug/blueprint that I can follow up on this?
20:50:31 <johnsom> nmagnezi Not that I know of.  Feel free to put one in.
20:50:32 <nmagnezi> sbalukoff, sure, it's not. but first I try to learn what already exists and how it works.
20:50:35 <sbalukoff> johnsom: Yes of course. Let's just use a more generic / more descriptive name that doesn't specify a specific technology.
20:50:37 <blogan> nmagnezi: i dont believe so, we haven't really talked about it formally just kind of on the side like
20:50:55 <nmagnezi> johnsom, will do
20:51:00 <nmagnezi> blogan, got it
20:51:44 <nmagnezi> The second question (VRRP): context: I asked about the option for a large amount of amphoras in the same setup, specifically about the use of lb-network, which I understand is *one* shared network for all amphoras (under the admin tenant, at least in devstack). The question: So even if I allocate a large subnet for this network, how can anyone use more than 256 amphoras? The use of one network creates a limitation by VRID, see https://tools.ietf.org/html/rfc3768#section-5.3.3 -> VRID configurable item in the range 1-255 (decimal). Is there a workaround or better practice for it?
20:51:51 <johnsom> sbalukoff Well, what was implemented is a specific technology, so until something comes along it's what we have.  I understand your comment on it.  Unfortunately it came too late and a lot of code had already merged using VRRP.
20:52:29 <blogan> i think johnsom can answer this as i think i asked somethign similar before
20:52:34 <sbalukoff> johnsom: Yeah, I know. We'll just have to do another patch to rename things going forward.
20:52:47 <sbalukoff> I really hate having misleading names for things. ;)
20:53:00 <blogan> renaming to more generic names would be fine, especially before the next release :)
20:53:05 <sbalukoff> (One of my biggest gripes about LBaaSv1:  A VIP there was not a VIP!)
20:53:09 <johnsom> VRIDs are only used in amphora-amphora VRRP communications, which is over the tenant VIP network, so do not conflict.  Also, due to neutron issues with multicast, in some deployments, we are using unicast.
20:53:17 <xgerman> nmagnezi so this is a vrrp limitation aka a single tenant can’t have more than 255 LBs?
20:53:24 <blogan> sbalukoff: but v2 load balancer is kind of a vip now
20:53:44 <johnsom> At least VRRP *IS* VRRP here... grin
20:53:45 <sbalukoff> blogan: Yes. And I know you were there for the three weeks it took to come to agreement on that.
20:53:47 <sbalukoff> ;)
20:53:55 <sbalukoff> johnsom: For now. ;)
20:54:24 <sbalukoff> Anyway, I didn't mean to hijack nmagnezi's discussion.
20:54:32 <nmagnezi> xgerman, mmm when I think about it, yes. though I have not tested that amount of amphoras myself.
20:54:35 <johnsom> To wrap up, all of our VRIDs are 1 right now
20:54:45 <nmagnezi> xgerman, won't it limit the whole setup to 256?
20:55:09 <nmagnezi> xgerman, i don't know how VRID is allocated on Octavia
20:55:10 <johnsom> Nope, no limit because of VRIDs
20:55:28 <blogan> i still think there is a neutron port limit on an isolated network
20:55:30 <nmagnezi> johnsom, not even for a single tenant?
20:55:33 <blogan> but thats a separate issue
20:55:35 <sbalukoff> nmagnezi: Yes, and this is one reason why VRRP might not be the best tool for a large deployment. Or why we should use separate networks per amphora group for VRRP heartbeats and whatnot
20:55:47 <johnsom> nmagnezi correct, not even for a single tenant
20:55:59 <johnsom> blogan Yes, that is a problem with the lb-mgmt-net
20:56:09 <blogan> johnsom: what if 2 load balancers have vips on the same network, won't that cause VRIDs to clash?
20:56:16 <nmagnezi> johnsom, can you explain? i think that sbalukoff thinks differently
20:56:32 <johnsom> sbalukoff No, you are wrong.  There is no limit with VRRP
20:56:52 <sbalukoff> johnsom: I might be thinking of CARP. Sorry... still trying to get back from vacation mode.
20:57:29 <nmagnezi> johnsom, as I pasted above, VRID range is limited (RFC) so, please elaborate.
20:57:30 <johnsom> nmagnezi keepalived is configured for unicast heartbeat.  We could not use multicast.  So each VRRP pair are only talking to themselves, no other instances.  Thus, they are all VRID 1
20:58:15 <blogan> johnsom: ah that solves my question too
20:58:40 <xgerman> 2 minutes left
20:58:46 <nmagnezi> johnsom, good to know!
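For the record, a sketch of the keepalived layout johnsom describes; the addresses, interface, and pass phrase here are illustrative, not Octavia's actual rendered template:

    vrrp_instance octavia_vrrp {
        state MASTER              # BACKUP in the partner amphora's config
        interface eth1            # the VIP (tenant) network interface
        virtual_router_id 1       # safe to reuse everywhere: unicast
                                  # peering means pairs never hear other pairs
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme    # per-LB pass phrase (clear text on the wire)
        }
        unicast_src_ip 10.0.0.5   # this amphora's VIP-network address
        unicast_peer {
            10.0.0.6              # its partner, and nobody else
        }
        virtual_ipaddress {
            10.0.0.100            # the load balancer VIP
        }
    }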
20:58:52 <nmagnezi> okay final question :)
20:59:05 <nmagnezi> Last question is about quotas. As you all prolly know, all amphoras are being created under the same tenant, regardless of which tenant ran the 'neutron lbaas-loadbalancer-create' command. The question: the quota for that selected tenant will be used by all other tenants. For example: the default quota for instances for each tenant is 10. What happens if we pass that number? Not to mention that in case we have a pool or Active/Standby configuration, we will consume that quota even faster. Is there a different/better practice for this issue?
20:59:15 <johnsom> We can switch back to lbaas channel if we run out of time.
20:59:30 <blogan> nmagnezi: use a service account with a high quota for the octavia service
20:59:43 <sbalukoff> blogan: +1
20:59:59 <xgerman> +1
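Illustratively (the project name and unlimited values are assumptions, not a documented setup): boot all amphorae under a dedicated service project and raise or remove its compute quotas, e.g.:

    # -1 means unlimited in nova quotas; 'octavia' is an assumed project name.
    nova quota-update --instances -1 --cores -1 --ram -1 \
        $(openstack project show octavia -f value -c id)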
21:00:07 <johnsom> Thanks folks!
21:00:09 <johnsom> #endmeeting