20:00:02 <johnsom> #startmeeting Octavia
20:00:03 <openstack> Meeting started Wed May 25 20:00:02 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 <xgerman> o/
20:00:07 <openstack> The meeting name has been set to 'octavia'
20:00:09 <johnsom> Hi folks
20:00:11 <fnaval> o/
20:00:12 <bana_k> hi
20:00:21 <sbalukoff> Howdy folks!
20:00:24 <blogan> hi
20:00:38 <Frito> hola
20:00:38 <johnsom> #topic Announcements
20:00:40 <alhu> hey guys
20:00:56 <johnsom> I don't have any exciting announcements today.  Anyone else?
20:00:57 <dougwig> o/
20:01:07 <Frito> announcement; welcome back Johnsom! ;-)
20:01:15 <johnsom> Ha, yes, thanks!
20:01:21 <xgerman> yep, we survived the thunderstorm
20:01:27 <sbalukoff> Um... not exciting, but I'm back for a while, and should have more time to devote to upstream work, which means I will be re-commencing reviews on stuff.
20:01:33 <johnsom> Was dougwig that bad that you are excited I have returned?  grin
20:01:35 <sbalukoff> Get ready for a bunch of -1's.
20:01:48 <johnsom> sbalukoff Excellent!
20:01:52 <blogan> sbalukoff: who are you again?
20:01:56 <dougwig> yes, i was, i forgot until 20s before.
20:02:21 <TrevorV|Home> \o
20:02:30 <johnsom> #topic Brief progress reports / bugs needing review
20:02:40 <TrevorV|Home> #link https://review.openstack.org/#/c/257201/
20:02:46 <johnsom> Ok, what has been going on?  I saw some merges and patches
20:02:47 <xgerman> dougwig likely while boarding a plane ;-)
20:02:48 <fnaval> #link https://review.openstack.org/305525
20:02:48 <fnaval> #link https://review.openstack.org/310326
20:02:48 <fnaval> #link https://review.openstack.org/317692
20:02:49 <TrevorV|Home> #link https://review.openstack.org/#/c/306084
20:02:59 <fnaval> #link https://review.openstack.org/306669
20:02:59 <fnaval> #link https://review.openstack.org/308091
20:03:01 <fnaval> #link https://review.openstack.org/309635
20:03:03 <fnaval> #link https://review.openstack.org/310069
20:03:06 * Frito braces for the floodgates
20:03:08 <blogan> geez
20:03:09 <fnaval> there's more but those can go first
20:03:11 <sbalukoff> Nice!
20:03:21 <johnsom> sbalukoff There is a list to start with....  Grin
20:03:32 <fnaval> =D
20:03:39 <johnsom> I did get time to review the L7 doc spec.  Others please have a look too
20:03:55 <sbalukoff> Yay!
20:04:19 <johnsom> Trying as best I can to live up to my commitment to support our docs work.
20:04:39 <johnsom> #topic Merging the Cores
20:04:54 <johnsom> TrevorV you have the floor as I think you added this
20:04:56 <xgerman> vote?
20:05:09 * xgerman ducks
20:05:14 <TrevorV|Home> Yeah, no big deal.
20:05:28 <TrevorV|Home> A couple weeks ago we had a conversation about potentially merging the core groups
20:05:48 <johnsom> Yep, don't remember the final outcome
20:05:59 <sbalukoff> I don't recall seeing anyone against the idea.
20:06:12 <TrevorV|Home> At the time we didn't really say this was good or bad to do, but lately there has been a drop in reviews from contributors (including myself), and some of our patches have sat idly with one +2
20:06:18 <johnsom> Yeah, I almost want to say that dougwig was going to go make it happen
20:06:34 <sbalukoff> So... yes? Let's make it happen.
20:06:44 <johnsom> dougwig comments?
20:06:44 <TrevorV|Home> If we can merge the core groups, we can sort-of mitigate the lack of +2/+A we've been seeing, with the same 1 week cadence, and hopefully be alright to progress
20:06:45 <Frito> +1
20:06:51 <dougwig> yep, this was me.  i need to check in with our ptl, then i'll make the change.  i spaced it.  *writes note*.
20:07:09 <TrevorV|Home> dougwig, all good homie, everyone's plates have been full with something or another lately
20:07:19 <sbalukoff> Yep.
20:07:25 <johnsom> #action dougwig to contact overlord for core reviewer merge between lbaas and octavia projects
20:07:32 <a2hill> 0/
20:08:04 <johnsom> #topic Revisiting Status visibility
20:08:10 <johnsom> #link https://bugs.launchpad.net/neutron/+bug/1585250
20:08:12 <openstack> Launchpad bug 1585250 in neutron "Statuses not shown for non-"loadbalancer" LBaaS objects on CLI" [Undecided,In progress] - Assigned to Elena Ezhova (eezhova)
20:08:13 <blogan> i added this and the next one
20:08:25 <johnsom> blogan You have the floor...
20:08:36 <johnsom> floor/channel/tow ticket
20:08:44 <TrevorV|Home> all the above
20:08:48 <blogan> so that bug basically is asking for an easier way to show statuses, they want it on the objects themselves
20:08:58 <blogan> im explaining in the bug report why pools can't be done that way
20:09:10 <TrevorV|Home> I certainly would like to see it at LEAST on the resource itself.
20:09:13 <TrevorV|Home> Maybe not in the list.
20:09:15 <blogan> but listeners probably could bc i think we have given up on the idea of eventually going the shared listeners route
20:09:16 <TrevorV|Home> Idk, arguable.
20:09:57 <blogan> well shareable pools has issues, until at least we switch to octavia's api which allows seeing resource info in scope of a listener/loadbalancer
20:10:03 <a2hill> would an attached tree on a load balancer response resolve this ask?
20:10:18 <blogan> we already have a statuses call
20:10:24 <johnsom> a2hill hahaha
20:10:34 <a2hill> yea, but as part of the reponse?
20:10:46 <blogan> maybe
20:10:52 <a2hill> is the fundamental issue having to make multiple calls to see the statues?
20:10:53 <TrevorV|Home> It wouldn't solve it for me
20:10:54 <blogan> but they only get that if they use single create
20:11:01 <blogan> once that merges
20:11:04 <a2hill> hmm
20:11:23 <TrevorV|Home> a2hill, for me its more about the status concerning a singular object
20:11:34 <TrevorV|Home> I'd rather not have to check the whole tree just to see OPstatus of a member
20:11:46 <blogan> and this particular case is for creating a pool individually, wouldn't make sense to return back lb info, well i guess we could do something like that but it doesn't feel right either
20:11:50 <a2hill> well yea, its just thats sorta difficult
20:12:05 <TrevorV|Home> Agreed, a2hill
20:12:09 <a2hill> what about a cut up tree for each resource
20:12:17 <blogan> well first thing's first, do we ever intend on having shared listeners?
20:12:22 <johnsom> We are async, so status returned on create calls is "suspect" anyway
20:12:30 <TrevorV|Home> Or a dictionary of resources, "pool: status" etc
20:12:36 <a2hill> the tree is calculated, but we can parse and return the related part of the tree as a new status field for each resource
20:12:46 <TrevorV|Home> johnsom, no no not on create, I mean on "get /pools/id/members/id"
20:12:52 <sbalukoff> I haven't heard a need for shared listeners.
20:12:53 <TrevorV|Home> that should give me an op-status
20:12:59 <sbalukoff> Or at least, nobody clamoring for them.
20:13:05 <blogan> but that op status will be different per listener
20:13:28 <blogan> well it can be different
20:13:31 <TrevorV|Home> Right
20:13:44 <johnsom> Status on member is really helpful for end users as well
20:13:54 <TrevorV|Home> Which means the object could have a list of statuses per different listener.  I know we talked about that before and said "no", but I'm still not against it
20:14:10 <a2hill> and something like that would solve our ui asks too
20:14:21 <a2hill> or however this is solved would alleviate thier concerns
20:14:57 <sbalukoff> right.
20:15:05 <blogan> they're going to have to make another GET request on the member to get the status, why is doing a GET on /loadbalancer/{lb_id}/statuses any different? they just have to traverse the tree that returns
20:15:20 <a2hill> tell that to them
20:15:21 <johnsom> I do worry about column sprawl in the CLI, but status is fairly useful IMHO
20:15:31 <a2hill> but those are additional queries and logic that have to be built
20:15:36 <TrevorV|Home> blogan, literally the traversal is the issue.
20:15:38 <a2hill> and in Reach's case, that's a scale issue
20:15:43 <TrevorV|Home> Not that its "problematic" its just "cumbersome"
20:16:27 <a2hill> well, TrevorV|Home in some cases it is also problematic
20:16:55 <a2hill> scaling status details for thousands of loadbalancers is proving difficult, we're finding
20:17:09 <TrevorV|Home> Right, or if an LB has a brazillion listeners, right?
20:17:18 <a2hill> right now it's not a big deal, and we're moving forward, but once things fill up it's going to get really heavy
20:17:21 <blogan> well if adding the statuses in the single GET request to a member and/or pool is wanted go for it, the CLI may start to look ugly
20:17:35 <xgerman> yeah, we don’t have paging
20:17:41 <sbalukoff> Most load balancers will have 1 or 2 listeners.
20:17:45 <xgerman> which we should add when we overhaul that
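A minimal sketch of the marker/limit paging xgerman mentions, in the style OpenStack APIs generally use; the model and field names here are illustrative, not an existing Octavia interface:

```python
# Sketch: "limit" caps the page size and "marker" is the id of the last
# item on the previous page, so clients resume where they left off.
def list_load_balancers(session, LoadBalancer, limit=100, marker=None):
    query = session.query(LoadBalancer).order_by(LoadBalancer.id)
    if marker is not None:
        # Resume after the marker row (lexical compare is fine for a
        # sketch; real ids are UUID strings).
        query = query.filter(LoadBalancer.id > marker)
    return query.limit(limit).all()
```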
20:18:13 <TrevorV|Home> sbalukoff, that's true, but I like to keep in mind the edge cases
20:18:35 <blogan> nah the problem is our UI team wants to give color coordinated alerts based on status on a page that shows a list of a customer's load balancers
20:18:37 <sbalukoff> TrevorV|Home: I don't like the regular use case to suffer because of outlandish edge cases. :/
20:19:02 <a2hill> blogan: that still translates to additional logic that, if it isn't done right, could cause issues for them
20:19:08 <a2hill> primarily because of the separate statuses
20:19:31 <blogan> so it's a separate statuses issue? provisioning_status and operating_status or just that it's separate api calls?
20:19:44 <a2hill> separate api calls
20:19:50 <johnsom> blogan So what you guys are looking for is a call that gets the status of all of the lbs in a tenant?
20:20:00 <a2hill> which is why i kinda suggested tagging the related part of tree to the response
20:20:08 <blogan> okay so basically they want the statuses shown in a list of pools, members, lbs, etc
20:20:14 <TrevorV|Home> sbalukoff, agreed, but with fewer listeners the "ugly" problem is moot.
20:20:24 <a2hill> someone else reported this bug, im just speaking from our side
20:20:29 <sbalukoff> TrevorV|Home: Agreed.
20:20:35 <dougwig> shouldn't the question of showing status on indivudual GET/cli be separate from a batch call for UI speed?
20:21:11 <johnsom> dougwig +1
20:21:16 <blogan> dougwig: you mean CLI doesn't need to implement it?
20:21:39 <TrevorV|Home> I'm not entirely sure what you're getting at dougwig
20:21:39 <dougwig> i would think step 1 would be the basic fix.  step 2 would be optimize.  imo.
20:21:50 <TrevorV|Home> I'd like to get the op-status on the cli "show" for an object
20:21:58 <sbalukoff> Eh... I'm of the opinion that having a couple extra status fields in the CLI is pretty useful. :)
20:22:08 <sbalukoff> That might just be me, though.
20:22:12 <TrevorV|Home> I agree sbalukoff
20:22:15 <a2hill> sbalukoff: +1
20:22:17 <johnsom> Yeah, me too.  Status in the show output is good
20:22:21 <dougwig> sbalukoff: +1
20:22:29 <TrevorV|Home> idk about the "list" output, but def the "show" output
20:22:33 <sbalukoff> Yep!
20:22:46 <fnaval> +1
20:22:54 <blogan> TrevorV|Home: well the list would probably show what the show shows...that's redundant
20:22:58 <TrevorV|Home> To be clear, though, I'm not against the current status tree existing, its totally useful.
20:23:00 <a2hill> fnaval: plus one'd himself again
20:23:02 <a2hill> :P
20:23:03 <fnaval> lol
20:23:10 <TrevorV|Home> blogan, that's not true, there are omitted fields in the list compared to the show
20:23:11 <fnaval> (for visibility)
20:23:14 <a2hill> lol
20:23:35 <sbalukoff> Right.
20:24:04 <Frito> +1 for fnaval so he's not the only one ;-)
20:24:12 <fnaval> =D
20:24:12 <johnsom> Alright, I haven't read blogan's novel yet on this bug.  Should we all comment on the bug?
20:24:18 <blogan> okay so should this go into nlbaas api? even though it's not exactly the future
20:24:21 <sbalukoff> johnsom: Yes.
20:24:27 <johnsom> I think we are coming to agreement on the CLI issue.
20:24:38 <blogan> well its raising more questions :)
20:24:42 <johnsom> List status should be a different RFE
20:24:48 <TrevorV|Home> Commenting on the bug is a good idea, even if we just say "This will be provided in octavia"
20:24:50 <TrevorV|Home> you know?
20:24:52 <blogan> not the cli, but the fact that we're okay with it going into the api
20:25:00 <dougwig> i'd expect a list lb for the vip to be more important than the status.
20:25:21 <blogan> dougwig: one call must have all the things in it
20:25:25 <blogan> we should just make a db dump api call
20:25:48 <xgerman> in json
20:25:51 <johnsom> hahaha, careful blogan, it will be next...
20:25:59 <sbalukoff> Heh!
20:25:59 <dougwig> blogan: connect your UI to your sql db
20:26:02 <dougwig> make any query you want
20:26:12 <dougwig> :)
20:26:32 <TrevorV|Home> Alright, so "comment on bug" is the result of the convo?
20:26:35 <johnsom> So, now, ODBC, JDBC, ...?
20:26:37 <a2hill> thats actually something theyve considered i think :P
20:26:39 <a2hill> j/k
20:26:41 <a2hill> i think
20:26:46 <johnsom> Yes please.  Comment away!
20:26:49 <blogan> i wouldn't be surprised
20:26:52 <a2hill> lol
20:27:07 <dougwig> rax should just rewrite it all as quarkaas
20:27:14 <a2hill> lol
20:27:19 <johnsom> #action Everyone comment on the status/CLI RFE
20:27:19 <blogan> sounds to me like everyone is okay with putting the statuses in the GET and LIST of resources, not just LB
20:27:19 <sbalukoff> Haha
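For context, the single statuses call blogan keeps pointing at returns the whole tree in one GET. A minimal sketch of the traversal, assuming the LBaaS v2 response shape (loadbalancer -> listeners -> pools -> members); the endpoint path matches the v2 API, but the exact JSON layout here is an assumption:

```python
# Sketch: pull one member's operating_status out of the statuses tree
# rather than from a per-object status field.
import requests

def member_operating_status(endpoint, token, lb_id, member_id):
    resp = requests.get(
        "%s/v2.0/lbaas/loadbalancers/%s/statuses" % (endpoint, lb_id),
        headers={"X-Auth-Token": token})
    resp.raise_for_status()
    tree = resp.json()["statuses"]["loadbalancer"]
    # Walk loadbalancer -> listeners -> pools -> members.  A shared pool
    # can hang off several listeners, so a member may appear more than
    # once with different statuses; that ambiguity is roughly blogan's
    # objection to flattening status onto the objects themselves.
    for listener in tree.get("listeners", []):
        for pool in listener.get("pools", []):
            for member in pool.get("members", []):
                if member.get("id") == member_id:
                    return member.get("operating_status")
    return None
```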
20:27:41 <johnsom> #topic Error Message as LB API Attribute
20:27:46 <dougwig> i never said i was ok with LIST.  look at nova list... you get id, name, ip.
20:27:57 <johnsom> #link https://bugs.launchpad.net/neutron/+bug/1585266
20:27:58 <openstack> Launchpad bug 1585266 in neutron "Can't specify an error type on LBaaS objects that fail to provision" [Wishlist,New] - Assigned to Brandon Logan (brandon-logan)
20:27:58 <blogan> dougwig: isn't there a details list?
20:28:06 <blogan> that nova supports
20:28:17 <a2hill> blogan: i'm only in agreement so it gets Reach off our back. Otherwise i think a few additional calls aren't too bad, even at that scale. was just playing devil's advocate since i know they won't stop on this one
20:28:26 <xgerman> still think we need paging but that’s for N-API
20:28:27 <TrevorV|Home> guys guys guys, topic changed.
20:28:48 <blogan> okay error description, i tagged it as rfe but wanted to see what yall thought
20:28:51 <a2hill> i was in middle of typing ><
20:28:57 <johnsom> Yeah, sorry, was trying to keep things moving.
20:29:02 <blogan> i've been reading octavia and nlbaas bug reports today, so thats why i've been bringing these two up
20:29:10 <TrevorV|Home> I'm not sure what this means.
20:29:14 <johnsom> Plus I want comments captured for future generations....
20:29:18 <TrevorV|Home> We want to include an error response in the "show" or something?
20:29:24 <TrevorV|Home> I thought that already existed
20:29:47 <blogan> so v1 had status_description on all the objects and if the object ever ERRORed it'd give a reason why, similar to how nova will show a stack trace in its show server command if it ERRORs
20:29:48 <fnaval> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
20:29:57 <johnsom> I actually had this thought recently.  Nova has a "reason" field for the status.  This is a decent idea, but can be hard to implement well.
20:30:02 <blogan> yeah
20:30:22 <blogan> sounds like there could be some scrubbing of the error message needed
20:30:30 <a2hill> yea, this would be good to have. would it bubble up nova errors too i assume?
20:30:36 <TrevorV|Home> Would we worry about this in nlbaas or in octavia or both?
20:30:39 <blogan> depends on how its implemented
20:30:58 <blogan> well that's another topic, do we just do a hard stop on all nlbaas features or do we do both?
20:31:07 <sbalukoff> +1 on this being a good thing to have.
20:31:07 <johnsom> TrevorV|Home Remember we are on a path where nlbaas and octavia become one, so.....
20:31:29 <TrevorV|Home> johnsom, right, but we're on that path to *not* duplicate work, but this is an nlbaas v2 request, amirite?
20:31:47 <blogan> it'd carry over to octavia api
20:31:50 <a2hill> yea, but until we can say thats actively being done, and we have something deployable thats not including nlbaas we kinda need features in both places
20:31:51 <johnsom> Yeah, so until we are merged, both is the right answer
20:31:51 <blogan> lbaas api
20:31:55 <a2hill> sorry.. i type slow..
20:32:27 <johnsom> Think of it as motivation to merge....
20:32:34 <sbalukoff> +1 to a2hill typing too slow.
20:32:37 <TrevorV|Home> ha ha ha
20:32:38 <a2hill> ><
20:32:49 <a2hill> so..
20:32:50 <sbalukoff> johnsom: Yep, that it is!
20:32:53 <TrevorV|Home> Yeah, alright, so I agree it'd be nice
20:33:04 <TrevorV|Home> It'd help "newbies" understand problems that they're experiencing too
20:33:14 <dougwig> blogan: ooh, is there?
20:33:27 <johnsom> Yeah, we get a lot of "It's in error or pending_*" why? questions now
20:33:50 <a2hill> the other side of this is logging and alerts ?
20:34:04 <blogan> dougwig: http://developer.openstack.org/api-ref-compute-v2.1.html#listServersDetailed
20:34:31 <TrevorV|Home> We need to get better logging across octavia, unless I missed that patch already ha ha
20:34:41 <dougwig> blogan: so, it's a db dump
20:34:44 <a2hill> i mean customers logs
20:35:02 <johnsom> So, this one with the "Reason".  I'm in favor of it.  I think it should follow a cleanup of our revert/rollback and be implemented as such.
20:35:05 <TrevorV|Home> Oh, like logs from haproxy
20:35:05 <TrevorV|Home> got it
20:35:17 <a2hill> if they want to see whats going on with things, this may be a bit different, but could probably be coupled with alerting, though i dont know where or if those things are .. things
20:35:18 <sbalukoff> johnsom: +1
20:35:21 <blogan> ok, dougwig tell the neutron drivers we approve ;)
20:35:27 <TrevorV|Home> johnsom, agreed, for octavia.  Can that be done in neutron lbaas in a similar fashion though?
20:35:29 <a2hill> TrevorV|Home: not just haproxy
20:35:30 <johnsom> Customer logs from haproxy, that is a WHOLE different issue...
20:35:39 * TrevorV|Home is unfamiliar with neutron lbaas's rollback system
20:35:51 <a2hill> not what i was getting at i guess
20:36:01 <xgerman> neutron lbaas would need to get error info from the driver
20:36:09 <johnsom> Well, I think in n-lbaas we implement the field/db.  Leave it up to the driver to populate.
20:36:16 <xgerman> +1
20:36:21 <johnsom> Wouldn't it mostly be a pass through from the driver?
20:36:32 <dougwig> our current rollback system involves singing, "rollback rollback, rawhide!" to yourself, and... that's it.
20:36:33 <blogan> yeah driver can populate it or give it to the plugin to populate
20:36:33 <xgerman> or repopulate with <vendor> sucks
20:36:41 <TrevorV|Home> Unless the request rolls back before hitting the driver
20:36:46 <TrevorV|Home> johnsom, ^^
20:36:57 <fnaval> lol
20:37:18 <johnsom> TrevorV|Home well, then it's neutron's problem (just kidding)
20:37:41 <blogan> if the request rolls back the lb would not be in ERROR state i don't think
20:37:52 <blogan> before it gets to the driver i mean
20:38:05 <TrevorV|Home> That's a fair point...
20:38:11 <sbalukoff> Whatever we do, we need to make sure the user cries.
20:38:17 <sbalukoff> Just sayin'.
20:38:20 <johnsom> It should be perfectly reasonable for n-lbaas exception handling (yes, I'm laughing too) to populate that field if it makes sense.  I.e. driver not found or something
20:38:20 <blogan> continues to cry
20:38:22 <sbalukoff> Load balancing is serious business.
20:38:52 <xgerman> no kidding
20:39:14 <johnsom> I won't comment on that in light of recent events in my life...
20:39:41 <sbalukoff> Seriously though-- let's not get too much into the weeds on this right now, eh. I'm hearing general agreement that having error messages that are more helpful than a Microsoft error code is a good thing.
20:39:42 <johnsom> Ok, so I'm thinking this is a "Please comment" as well.  Generally a good idea
20:39:53 <sbalukoff> Yep!
20:40:09 <blogan> now, who's going to do the work?
20:40:10 <johnsom> Ahh, I was totally firing up the uuid generator for the error codes....
20:40:19 <sbalukoff> Haha!
20:40:46 <xgerman> blogan, didn’t you step forward?
20:40:48 <johnsom> #action Everyone comment on the "error message" in API RFE
20:40:54 <sbalukoff> Right.
20:40:58 * johnsom steps to the right
20:41:12 * xgerman steps far back
20:41:23 * blogan leaps into the back
20:41:32 * Frito slips into the shadows
20:41:44 * fnaval +1's himself.
20:41:48 <Frito> lmao
20:41:50 <johnsom> We can say it's a good thing and get to it when we can....
20:41:51 <sbalukoff> Huzzah!
20:41:55 <Frito> well played sir
20:41:59 <xgerman> johnsom +!
20:42:08 <blogan> be sure to comment on the statuses one
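A minimal sketch of the "reason" idea under discussion, assuming a hypothetical provisioning_status_reason field that the driver (or the plugin, for failures before the driver) would populate when an object lands in ERROR:

```python
# Hedged sketch: keep a scrubbed explanation next to the status.  The
# attribute name "provisioning_status_reason" is hypothetical, not a
# committed schema; scrubbing keeps stack traces and internal hostnames
# out of tenant-visible output.

def scrub_reason(exc, limit=255):
    """Reduce an exception to a short, tenant-safe message."""
    message = str(exc) or exc.__class__.__name__
    # First line only, capped, so internals don't leak.
    return message.splitlines()[0][:limit]

def mark_error(load_balancer, exc):
    load_balancer.provisioning_status = "ERROR"
    load_balancer.provisioning_status_reason = scrub_reason(exc)
```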
20:42:09 <johnsom> #topic Neutron-LBaaS Tempest Plugin
20:42:19 <johnsom> #link https://review.openstack.org/#/c/321087
20:42:31 <fnaval> ok, so i got this ask from the neutron folks - they want to run the neutron-lbaas tests but we dont have a tempest plugin
20:42:31 <johnsom> Ok, who added this and can talk to it?
20:42:36 <fnaval> me
20:42:55 <fnaval> i haven't done a plugin before so comments are appreciated
20:43:16 <fnaval> i recall that madhu had done a few for the other projects - was trying to ping him but he hasn't been online of late
20:43:37 <johnsom> Aren't these plugins typically in a separate repo?
20:43:50 <blogan> looks like assaf is asking for nonvoting jobs for these first to verify they work as intended and then switch them to voting once validated
20:44:00 <fnaval> is there a reason why we don't have a tempest plugin hooked up to our gates? could it be because it can't selectively choose tests to run?
20:44:24 <blogan> fnaval: tempest will run these tests?
20:44:41 <johnsom> Well, the whole tempest thing has been evolving/changing for the last year, so....
20:44:54 <blogan> if so, i think their runner can run tests based on a pattern option
20:45:01 <fnaval> the gate jobs will run those tests with the plugin, but some changes need to happen with post_test_hook.sh or some scripts like that
20:45:25 <blogan> yeah but is it tempest running them, i think so
20:45:40 <fnaval> ok, well - if anyone has any comments or experience on setting up a gate job with TempestPlugin, please make some comments or reach out to me later.
20:45:56 <blogan> fnaval: your best bet is probably amuller in neutron
20:46:04 <fnaval> I wanted to see whether it would be a good idea for the rest of the team; if so, then i'll continue working on it
20:46:18 <fnaval> blogan: cool - i'll continue speaking with him
20:46:50 <fnaval> he directed me to this PR: https://review.openstack.org/#/c/304324/
20:46:52 <johnsom> fnaval Thanks for following up on this!
20:46:56 <fnaval> which I'll probably use as a template
20:47:03 <fnaval> thanks for the floor johnsom! =)
20:47:41 <fnaval> blogan: i'll also look into regex name test matching
20:47:49 <johnsom> fnaval If you want me to get madhu online for some questions, I can do that.  He just won't be working on this much in the near term.
20:48:12 <fnaval> yes that would be great too johnsom! i'll send an email as well
20:48:20 <johnsom> Ok, sounds good
20:48:28 <fnaval> just want to pick his brain on it
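For reference, a tempest plugin is a small class tempest discovers through the tempest.test_plugins entry point in setup.cfg. A sketch of the shape, with an illustrative module path and test directory rather than whatever review 321087 actually ends up using:

```python
# Sketch of a tempest plugin exposing the neutron-lbaas tests.  Tempest
# calls these three methods after loading the entry point; paths here
# are illustrative.
import os

from tempest.test_discover import plugins


class NeutronLBaaSTempestPlugin(plugins.TempestPlugin):
    def load_tests(self):
        base_path = os.path.split(os.path.dirname(
            os.path.abspath(__file__)))[0]
        test_dir = "neutron_lbaas/tests/tempest/v2"
        return os.path.join(base_path, test_dir), base_path

    def register_opts(self, conf):
        # Register plugin-specific config options here, if any.
        pass

    def get_opt_lists(self):
        return []
```

With the plugin installed, a gate job can then select just these tests with a regex passed to the test runner, which is the pattern option blogan mentions above.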
20:48:36 <johnsom> #topic Open Discussion
20:48:36 <fnaval> thanks
20:48:53 <xgerman> I have been told https://github.com/rackspace/gophercloud is getting LBaaS V2 support - so look and comment...
20:49:12 <sbalukoff> Huh!
20:49:46 <johnsom> Hmmm, not sure I would want to have a project called Gopher in Texas....
20:50:08 <xgerman> yeah, RAX what’s up with the naming?
20:50:26 <blogan> isn't it go based?
20:50:27 <fnaval> Go as in go language
20:50:40 <sbalukoff> Running short on 'go' puns.
20:50:40 <fnaval> pher as in uh
20:50:47 <fnaval> gopher
20:50:48 <blogan> phertastic
20:50:56 <sbalukoff> Haha!
20:50:58 <fnaval> yep blogan knows
20:50:59 <johnsom> Yeah, I know.  Just seems like it could be dangerous given the guns and all
20:51:34 <blogan> guns are no more dangerous to gophers than lawn mowers
20:51:39 <fnaval> oh yea guns - they like guns here
20:51:48 <dougwig> gun thread!
20:52:03 <fnaval> as long as you don't bring them in or sell them in the parking lot
20:52:32 <xgerman> the feedback I got from the team doing that is that our docs suck and are not up-to-date
20:52:52 <xgerman> and OpenStack in general doesn’t follow a common API design
20:52:55 <johnsom> xgerman You are welcome to pick a docs bug and help fix that!
20:53:19 <johnsom> #link https://bugs.launchpad.net/octavia/+bugs?field.tag=docs
20:53:26 <xgerman> well, I think that feedback was expected
20:53:29 <sbalukoff> Yes!
20:54:17 <johnsom> Any other open discussion items today?
20:54:26 <alhu> guys, can i steal 2 mins for a question?
20:54:29 <fnaval> reminder: please look at my commits
20:54:30 <sbalukoff> Yep!
20:54:41 <xgerman> alhu you have 6
20:54:49 <johnsom> alhu, please, this is the time to ask!
20:55:03 <alhu> o-wk can be deployed with multiple instances, what about o-hm and o-hk?
20:55:19 <xgerman> they can be deployed as well
20:55:32 <xgerman> in multiple instances as long as they share the same db
20:55:40 <johnsom> Yes, all of them should be deployable in multiple instances
20:55:45 <alhu> from the impl, looks like it accesses db directly, will that cause any race conditions?
20:55:57 <xgerman> DBs lock
20:56:08 <xgerman> so don’t think so
20:56:24 <alhu> those are queries, how does the db get locked?
20:56:46 <johnsom> The sqlalchemy transactions should be handling that
20:56:57 <bharathm> Its transactional .. so should be fine
20:57:14 <xgerman> alhu or are you wondering what happens when one message overtakes the other?
20:57:16 <alhu> what if it's deployed on multiple hosts?
20:57:29 <alhu> sqlalchemy can handle that as well?
20:57:39 <xgerman> yeah, DB side is fine
20:57:46 <alhu> yes, xgerman
20:57:47 <johnsom> Yes, the transactions are held in the database
20:58:00 <alhu> i c, thanks guys
20:58:04 <bana_k> but if we are running 2 instances of say o-hk
20:58:05 <xgerman> alhu we send enough messages that it should get back to normal pretty quick
20:58:32 <bana_k> how do both manage to keep the spare count?
20:58:38 <blogan> i'm not too confident in a multi write galera cluster
20:58:55 <xgerman> hk does
20:59:12 <xgerman> and that’s all in DB
20:59:12 <bana_k> yea what if we have 2 instances of them running
20:59:32 <xgerman> we have at least 3 running and haven’t seen issues
20:59:49 <Frito> not sure if anyone has the room after us but we're basically at time.
20:59:49 <bharathm> bana_k's question is valid
21:00:02 <bharathm> We haven't seen any issues in our multi-node env
21:00:07 <bharathm> but it can cause issues
21:00:07 <johnsom> bana_k yes. we use a transaction when setting the "busy" flag.  As for spares, they are also allocated inside a transaction
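A minimal sketch, assuming SQLAlchemy models, of why two housekeeping instances sharing one DB don't race on spares: the candidate row is claimed under SELECT ... FOR UPDATE inside a transaction, so the second process blocks until the first commits and then skips the row. Model and field names are illustrative, not Octavia's actual schema:

```python
# Sketch: row-lock a free spare so concurrent housekeeping processes
# can't both allocate it.
def claim_spare(session, Amphora):
    spare = (session.query(Amphora)
             .filter_by(status="READY", load_balancer_id=None)
             .with_for_update()   # SELECT ... FOR UPDATE row lock
             .first())
    if spare is None:
        session.rollback()
        return None
    spare.status = "ALLOCATED"
    session.commit()   # commit releases the lock
    return spare
```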
21:00:26 <alhu> let's move discussion to lbaas then
21:00:32 <johnsom> We can continue in the lbaas channel
21:00:34 <johnsom> #endmeeting