20:00:05 <johnsom> #startmeeting Octavia
20:00:07 <openstack> Meeting started Wed Feb  3 20:00:05 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:09 <sbalukoff> Howdy, howdy!
20:00:10 <bharathm> o/
20:00:11 <openstack> The meeting name has been set to 'octavia'
20:00:15 <fnaval> o/
20:00:17 <evgenyf> o/
20:00:17 <minwang2> o/
20:00:18 <ajmiller> o/
20:00:28 <blogan> #startvote do a vote?
20:00:29 <openstack> Only the meeting chair may start a vote.
20:00:29 <fnaval> #help
20:00:37 <johnsom> Hi folks
20:00:38 <sbalukoff> Heh!
20:00:41 <dougwig> o/
20:00:45 * johnsom slaps blogan's hand
20:00:52 <bana_k> hi
20:00:57 <johnsom> #topic Announcements
20:01:02 <ptoohill> 0/
20:01:19 <johnsom> So we have a bunch of patches on the high priority list this week.
20:01:29 <sbalukoff> Indeed!
20:01:34 <johnsom> If you add a patch, please put a working link in (makes my life easier)
20:01:41 <johnsom> So, let me start....
20:01:49 <johnsom> Updates the failover flow for active/standby
20:01:55 <johnsom> #link https://review.openstack.org/253724
20:02:04 <johnsom> This is still sitting out there with one +2
20:02:33 <johnsom> I know sbalukoff tried to test this out, but couldn't get the base act/stby going.
20:02:41 <johnsom> Thanks for giving it a go
20:02:49 <sbalukoff> Yep-- I should have time later this week to look into it more closely.
20:02:58 <johnsom> On to our L7 work!
20:03:00 <johnsom> Shared pools support
20:03:01 <rm_work> o/
20:03:04 <sbalukoff> Yay!
20:03:07 <johnsom> #link https://review.openstack.org/256369
20:03:11 <sbalukoff> Here's the etherpad:
20:03:14 <sbalukoff> #link https://etherpad.openstack.org/p/lbaas-l7-todo-list
20:03:22 <johnsom> I am in the middle of re-reviewing this one.  Will finish today.
20:03:31 <sbalukoff> The most important one is the patch johnsom linked.
20:03:34 <sbalukoff> Nice!
20:03:42 <johnsom> L7 support
20:03:44 <sbalukoff> Again, I hope we can merge it soon--
20:03:48 <johnsom> #link https://review.openstack.org/148232/
20:03:59 <johnsom> Yeah, shared pools looking pretty good so far
20:04:07 <sbalukoff> I expect to be mostly done with the L7 code by mid-month, and it would be really nice not to have to rework everything, since it all depends on that shared pools patch.
20:04:11 <evgenyf> I made L7 a separate extension
20:04:13 <johnsom> So, L7 is the next layer on this.
20:04:20 <sbalukoff> evgenyf: Awesome!
20:04:33 <xgerman> evgenyf this is good; I had a question regarding that in the neutron meeting
20:04:38 <sbalukoff> Yep, I think the Octavia schema-addition and repo patches are mostly stable.
20:04:40 <evgenyf> There are gate issues I need to fix
20:04:51 <johnsom> Someone added - L7 CLI
20:04:53 <sbalukoff> Expect an update to the repo patch for l7rules sanity checking today or tomorrow.
20:05:01 <johnsom> #link https://review.openstack.org/217276
20:05:04 <evgenyf> I added l7 CLI
20:05:04 <eranra> Hi Guys
20:05:13 <sbalukoff> Howdy, eranra!
20:05:15 <blogan> can we talk about pools being direct children of load balancers?
20:05:19 <blogan> i kid!
20:05:21 <blogan> im kidding
20:05:22 <johnsom> So CLI is up for review.  Is the L7 API in neutron-lbaas ready such that this could be tested?
20:05:25 <ptoohill> hes really not
20:05:39 <johnsom> Welcome eranra
20:06:13 <evgenyf> I will push another commit for L7 CLI tomorrow
20:06:22 <evgenyf> is reedip here?
20:06:52 <blogan> grr scenario tests failing
20:07:17 <johnsom> Ok, moving on
20:07:18 <johnsom> Add spec for active-active
20:07:24 <johnsom> #link https://review.openstack.org/234639
20:07:26 <sbalukoff> blogan: Seriously? Or are you trolling?
20:07:34 <sbalukoff> (Wow, I lagged out there... though blogan was serious for a moment...)
20:07:35 <blogan> sbalukoff: on that shared pools patch
20:07:42 <johnsom> Again, I have gone through about half of that one.  Will finish soon
20:08:02 <blogan> sbalukoff: depends what comment you lagged out on
20:08:11 <sbalukoff> blogan: That one fails on other patches all the time as well. It's the session persistence test, right? I can't get the damned thing to happen locally.
20:08:38 <sbalukoff> johnsom: Thanks for that! I know eranra and dean will appreciate that. :)
20:08:40 <blogan> sbalukoff: yeah, not sure what to do with it
20:08:44 <blogan> sbalukoff: active active spec?
20:09:10 <johnsom> Yeah, that last link was active/active.
20:09:17 <sbalukoff> blogan: Er... I think I might have lagged again. I was talking about the shared pools patch. Except for my last comment to johnsom which is about the active-active spec.
20:09:19 <johnsom> I guess IRC is lagging bad today
20:09:29 <blogan> johnsom: yeah i was trying to get us back on course since i got us off course :)
20:09:30 <TrevorV> Sorry, am here.
20:09:39 <sbalukoff> No worries. :)
20:09:39 <blogan> i think its just sbalukoff lagging
20:09:50 <sbalukoff> That wouldn't be new, eh. ;)
20:10:09 <blogan> sbalukoff: lagging not slacking bwahaha
20:10:14 <johnsom> Ok.  It's a long list, so if I don't see chatter on items, I am moving on..  Maybe too fast, let me know
20:10:16 * sbalukoff winces.
20:10:31 <sbalukoff> johnsom: +1
20:10:45 <johnsom> Horizon panels
20:10:51 <johnsom> #link https://review.openstack.org/#/q/project:openstack/neutron-lbaas-dashboard+status:open
20:11:08 <johnsom> Just a few more panels up for review.  Really good progress here.  Thanks!
20:11:12 <xgerman> ajmiller says they are in good shape
20:11:29 <sbalukoff> Dang, I meant to review those last week. Ok. I'll add them to my list and see if I can do that this weekend, if they don't merge before then.
20:11:57 <ajmiller> Yes, several merged last week, and there are others that are now close.
20:12:00 <blogan> johnsom: when is M-3?
20:12:04 <sbalukoff> Excellent!
20:12:13 <johnsom> Feb 29ish
20:12:18 <sbalukoff> #link http://docs.openstack.org/releases/schedules/mitaka.html
20:12:23 <johnsom> M-3 is feature complete
20:12:30 <blogan> ok i need to get the single create call completed this weekend
20:12:37 <xgerman> not sure if the panels follow the same release schedule though
20:12:44 <johnsom> #link http://docs.openstack.org/releases/mitaka/schedule.html
20:12:52 <johnsom> Actually it's that link now...
20:12:53 <blogan> we're going to have a mad rush of new features getting in that will probably cause breakages
20:13:05 <sbalukoff> Oh, ok.
20:13:20 <sbalukoff> Well... we have a couple weeks of bug-fix after that right?
20:13:38 <johnsom> Yeah, there is an RC planned
20:14:05 <johnsom> There are a few smaller patches that someone added to the please review list:
20:14:11 <johnsom> Endpoint filtering
20:14:16 <johnsom> #link https://review.openstack.org/271476/
20:14:22 <johnsom> CSR openssl fix
20:14:27 <sbalukoff> blogan: Note, also: I think you'll make all our lives easier if you make the get-me-a-load-balancer dependent on the shared pools patch. ;)
20:14:27 <johnsom> #link https://review.openstack.org/272708/
20:14:37 <johnsom> Nova interface instead of Nova network
20:14:39 <blogan> sbalukoff: or just yours :)
20:14:42 <johnsom> #link https://review.openstack.org/273733
20:14:52 <johnsom> I know there has been some discussion on those this week.
20:14:56 <blogan> sbalukoff: will make my life easier if you make the shared pools patch dependent on the get-me-a-load-balancer review :)
20:15:13 <sbalukoff> blogan: Except mine's basically ready for merge now.. ;)
20:15:27 <blogan> sbalukoff: insignificant details
20:15:28 <TrevorV> #link https://review.openstack.org/#/c/275337/
20:15:29 <rm_work> yeah i just looked at blogan and asked him how long it had been since he updated his >_>
20:15:32 <sbalukoff> Haha!
20:15:33 <johnsom> Just a few minor comments so far on shared pools from me
20:15:34 <rm_work> (but he couldn't hear me)
20:15:42 <TrevorV> This review is being tested by me right now, but if people could look it over, that'd be great
20:15:46 <sbalukoff> johnsom: Sweet!
20:15:50 <blogan> johnsom: yeah the nova interface thing
20:15:55 <sbalukoff> I will act quickly on them. Count on that!
20:16:05 <minwang2> anti-affinity
20:16:06 <minwang2> #link https://review.openstack.org/#/c/272344/
20:16:19 <blogan> only reason i'm pushing back on going only to os-interfaces is because our public cloud does not support it, and i know it's a sucky reason but it is our reason
20:16:35 <johnsom> Yeah, I think anti-affinity is close.
20:16:35 <sbalukoff> blogan: Valid enough. If you can't use it, it's useless to you.
20:16:36 <xgerman> yeah, I changed the patch
20:16:40 <xgerman> blogan
20:16:44 <johnsom> We need an enable/disable
20:16:49 <sbalukoff> minwang2: Oh, good!
20:17:30 <johnsom> #topic Brief progress reports
20:17:31 <blogan> johnsom: enable disable for?
20:17:43 <johnsom> blogan enable/disable for anti-affinity
20:17:49 <blogan> johnsom: ah okay
20:18:01 <johnsom> blogan Otherwise active/standby will fail on single compute devstack
20:18:04 <xgerman> blogan it breaks single node clouds
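The enable/disable johnsom mentions would be a boolean config toggle that the controller worker checks before asking Nova for an anti-affinity server group. A minimal sketch, assuming an option named enable_anti_affinity in a [nova] group; the option name, group, and wiring are assumptions, not the merged patch:

```python
# Minimal sketch of the anti-affinity toggle discussed above; option name,
# group, and surrounding logic are assumptions, not the merged patch.
from oslo_config import cfg

nova_opts = [
    cfg.BoolOpt('enable_anti_affinity', default=False,
                help='Create an anti-affinity server group for the '
                     'active/standby amphora pair. Leave disabled on '
                     'single-compute clouds such as a devstack node.'),
]
cfg.CONF.register_opts(nova_opts, group='nova')


def maybe_create_server_group(nova_client):
    """Request a server group only when anti-affinity is enabled."""
    if not cfg.CONF.nova.enable_anti_affinity:
        return None  # keeps single-compute devstack working
    return nova_client.server_groups.create(
        name='octavia-anti-affinity', policies=['anti-affinity'])
```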
20:18:08 <sbalukoff> Still going, full steam ahead. L7 is coming together nicely.
20:18:15 <xgerman> NICE!
20:18:26 <johnsom> sbalukoff Good to hear!
20:18:35 <blogan> johnsom: makes sense, i wasn't sure if you were talking about the nova networks thing or the anti-affinity one
20:18:37 <evgenyf> tempest for L7 is not there yet
20:18:54 <johnsom> I have been playing whack-a-mole on gate issues, trying to do reviews, and some internal activities.
20:19:22 <blogan> i've been doing mostly internal stuff so i've been largely useless for upstream work, sorry :(
20:19:24 <sbalukoff> evgenyf: I'll try to respond to your latest L7 patch within the next couple of days. A couple other things have occurred to me since doing the L7Rule sanity checks. :P
20:19:29 <xgerman> we have lots of internal stuff going on
20:19:54 <xgerman> so I am also of limited use these days
20:19:57 <sbalukoff> johnsom: Thank you very much for (repeatedly) keeping the gate going.
20:19:57 <evgenyf> sbalukoff: good
20:19:58 <rm_work> same, mostly stuck on internal, but got a breath of air yesterday to get a bunch of reviewing done
20:20:05 <rm_work> still trying to get to the larger ones (sorry sbalukoff)
20:20:07 <blogan> xgerman: so like a normal day for you?
20:20:09 <blogan> xgerman: ZING!!!
20:20:15 <johnsom> Shout out to TrevorV for squashing two bugs with one patch this week!
20:20:25 <xgerman> lol
20:20:29 <fnaval> still working on octavia tempest setup
20:20:30 <TrevorV> AWWWW YISSSS
20:20:31 <sbalukoff> rm_work: No worries. But hopefully I'm not pissing you off too badly with my comments on your smaller patches. :)
20:20:33 <fnaval> #link https://review.openstack.org/#/c/182554/
20:20:37 <rm_work> meh :P
20:20:43 <TrevorV> https://media3.giphy.com/media/9PsgUfyrNiKK4/200_s.gif
20:20:49 * blogan golf claps for TrevorV
20:20:59 <rm_work> I'm trying to figure out what's up with that config value for signing digest hash
20:21:25 <sbalukoff> rm_work: It'd be easy enough just to add it to that config.py file. I wouldn't block that, eh!
20:21:44 <rm_work> yeah but it's already used T_T
20:21:45 <johnsom> rm_work Thanks.  I didn't see it with a quick review of the config code, so that's why I asked
20:21:47 <rm_work> so ... wat
20:22:00 <sbalukoff> rm_work: Yeah, kinda my thoughts too...
20:22:03 <rm_work> yeah i need to scan again, it might be in a specific file, in which case, good catch
20:22:09 <rm_work> it'd have to be loaded more globally
20:22:53 <johnsom> #topic Shared pools & L7: Should integrity validations be made on CLI level? It's validated in API
20:23:02 <johnsom> Ok, someone added this topic....
20:23:08 <evgenyf> I did
20:23:12 <sbalukoff> Oh, interesting!
20:23:15 <evgenyf> Need your opinion
20:23:19 <johnsom> evgenyf the floor is yours
20:23:29 <sbalukoff> So... my thought is that we want that to happen deeper. I've been coding so it happens at the repo level.
20:23:34 <evgenyf> Should we do integrity tests at the CLI level or not? I mean, checking that a policy's redirect_to_pool pool and its listener belong to the same LB. We check it in the API
20:23:35 <xgerman> I am fine with API only validation
20:23:37 <sbalukoff> This makes us less dependent on the api framework.
20:23:55 <xgerman> less stuff to keep in sync
20:24:05 <sbalukoff> It also means that potentially we have to do fewer DB calls from the API itself.
20:24:13 <sbalukoff> (I'm not sure that buys us anything)
20:24:18 <sbalukoff> Note also:
20:24:19 <rm_work> johnsom: yeah found it, https://github.com/openstack/octavia/blob/master/octavia/certificates/common/local.py#L50
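The value rm_work links to is registered inside the certificates module rather than in the central config.py, which is why a quick scan of config.py missed it. A hedged sketch of that kind of module-local option registration; the option name and default are inferred from the discussion, so treat them as assumptions:

```python
# Hedged sketch: an oslo.config option registered from
# octavia/certificates/common/local.py instead of the central config.py,
# which is why scanning config.py alone does not turn it up.
# Option name and default are assumptions.
from oslo_config import cfg

certgen_opts = [
    cfg.StrOpt('signing_digest', default='sha256',
               help='Digest algorithm used when signing CSRs.'),
]
cfg.CONF.register_opts(certgen_opts, group='certificates')

# Consumers then read cfg.CONF.certificates.signing_digest.
```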
20:24:21 <xgerman> you always need to validate on the API
20:24:27 <evgenyf> In particular, unit testing it raises some issues in the CLI UT framework
20:24:29 <sbalukoff> Some validation is happening at the API level already through the wsgi types stuff.
20:24:29 <xgerman> CLI is not the only client
20:24:55 <bharathm> rm_work: johnsom: I commented the same url on the patch :-)
20:24:59 <sbalukoff> Oh right-- we're asking about CLI, not API.
20:25:02 <rm_work> ah :P
20:25:25 <johnsom> Yeah, I agree, it makes sense to do validation in the API to catch both CLI and direct REST requests.
20:25:34 <evgenyf> Usually, on the CLI we do elementary validations, like whether mentioned entities exist, etc.
20:25:43 <rm_work> bharathm: i don't see that?
20:25:49 <nmagnezi> hello everyone, sorry for being late
20:26:06 <sbalukoff> evgenyf: But, basically, if someone tries to do something wrong at the CLI, it'll hit the API, and then the API will return an error, right?
20:26:11 <sbalukoff> So, I don't see a problem with that.
20:26:19 <bharathm> rm_work: https://review.openstack.org/#/c/272708/5
20:26:26 <rm_work> bharathm: oh, on that one
20:26:32 <bharathm> Yeah stable/liberty
20:26:54 <evgenyf> sbalukoff: yes. So I will remove those validations from the L7 CLI; Stephen, from shared pools also
20:27:00 <sbalukoff> So in any case, the API needs to do integrity checks. But the CLI doesn't have to-- it just needs to respond appropriately to errors from the API.
20:27:08 <xgerman> +1
20:27:14 <evgenyf> sbalukoff: right
20:27:16 <johnsom> +1
20:27:24 <sbalukoff> evgenyf: Oh, did I add those into the shared pools CLI?
20:27:35 <johnsom> Cool.  I'm glad we are really thinking about validation.
20:27:43 <xgerman> +1
20:27:56 <evgenyf> sbalukoff: I think yes, but it's not covered with UT yet
20:27:59 <blogan> I don't think the CLI should do that
20:28:25 <sbalukoff> evgenyf: Ok. I can revisit that. Or you can if you want. Doesn't matter that much who does the work, to me. :)
20:28:34 <johnsom> blogan So you agree with us that we should focus that in the API?
20:28:40 <blogan> johnsom: indeed
20:28:44 <sbalukoff> Note, we're talking about the Neutron-LBaaS CLI, since an Octavia native CLI doesn't exist.
20:28:45 <johnsom> Excellent
20:28:51 <sbalukoff> So please! Carry on with the shared pools patch!
20:28:58 <sbalukoff> That is unaffected by this discussion.
20:29:09 <sbalukoff> (And its API already does validations.)
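To make the agreed split concrete: the API performs the integrity check once, and the CLI merely surfaces the resulting error. A hedged sketch of the kind of check being discussed; the attribute names and exception type are illustrative, not the actual neutron-lbaas code:

```python
# Hedged sketch of an API-level integrity check: an L7 policy's
# redirect_to_pool pool must belong to the same load balancer as the
# policy's listener. Names and exception type are illustrative only.
def validate_redirect_pool(listener, redirect_pool):
    """Reject a redirect pool that lives on a different load balancer."""
    if redirect_pool.load_balancer_id != listener.load_balancer_id:
        raise ValueError(
            'Pool %s and listener %s belong to different load balancers'
            % (redirect_pool.id, listener.id))
    # The CLI never repeats this check; it just renders the API error.
```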
20:29:10 <rm_work> evgenyf: when my barbican-update patch merged, did it break anything of yours that we need to revisit? I know you had some objections to something I was doing, but I never managed to sync up with you before people merged my patch
20:29:15 <evgenyf> Another question: Do we really need ">" and "<" compare types for L7 rules?
20:29:15 <johnsom> sbalukoff No worries, I need to finish that review today
20:29:40 <rm_work> sorry, I am really bad at staying on-topic T_T
20:29:50 <johnsom> #topic Open Discussion
20:29:54 <johnsom> I will solve that issue
20:29:55 <sbalukoff> rm_work: No worries.
20:30:00 <blogan> evgenyf: in what context are the "<" and ">" being used?
20:30:13 <sbalukoff> Hey folks: What do y'all think of making me an Octavia core dev? :D
20:30:15 <rm_work> that might be a question for sbalukoff?
20:30:16 <evgenyf> rm_work: yes, I have a patch for TLS which is on hold; will resurrect it soon
20:30:31 <rm_work> evgenyf: ok, I was concerned that we didn't get to talk over your objections on my patch though
20:30:35 <xgerman> nmagnezi?
20:30:41 <nmagnezi> xgerman, hi
20:30:42 <sbalukoff> rm_work: I don't think it broke anything I was directly working on.
20:30:58 <nmagnezi> xgerman, open discussion part? :)
20:30:59 <rm_work> sbalukoff: i meant about < and > context for L7
20:30:59 <xgerman> I think you have the floor
20:31:03 <xgerman> yep
20:31:10 <nmagnezi> aye
20:31:28 <nmagnezi> I wanted to bring this bug to your attention https://bugs.launchpad.net/octavia/+bug/1541362
20:31:29 <openstack> Launchpad bug 1541362 in octavia "The diskimage-create script fails to run on Fedora23" [Undecided,New]
20:31:41 <nmagnezi> this used to work before, not sure why it got broken
20:32:08 <evgenyf> blogan: We have < and > on rules next to =, contains, starts_with, and ends_with; it looks like nobody will use < and > on headers, files, or others
20:32:13 <sbalukoff> nmagnezi: When I tried a fedora image in my test environment, it didn't want to launch in 1GB of RAM.
20:32:18 <nmagnezi> note: please don't mix it up with https://bugs.launchpad.net/octavia/+bug/1531092 which is in the works; I actually cherry-picked the patch and it seemed to work
20:32:19 <openstack> Launchpad bug 1531092 in octavia "The diskimage-create script fails to build an amphora image for centos and fedora" [Medium,In progress] - Assigned to Phillip Toohill (phillip-toohill)
20:32:36 <sbalukoff> nmagnezi: However, I didn't see that as *our* problem to solve per se: And the patch did at least fix building the image.
20:33:07 <xgerman> johnsom our dib guru?
20:33:14 <nmagnezi> sbalukoff, i'm not sure i'm following
20:33:29 <ptoohill> sbalukoff: nmagnezi It may be failing because it's expecting a config option to be set to true
20:33:36 <ptoohill> or false
20:33:39 <blogan> evgenyf: my opinion is always: if it's not going to be useful, don't put it in; if someone wants it, the need will eventually be known
20:33:42 <johnsom> nmagnezi Yeah, looking at the log.  I suspect that is an upstream DIB issue with the fedora element.
20:33:51 <sbalukoff> evgenyf: Technically, you can check a header or cookie against an integer value, and do a > or <
20:33:59 <nmagnezi> ptoohill, I was using the script implicitly, by using devstack
20:34:02 <sbalukoff> But I have never seen any of our clients actually do that.
20:34:16 <sbalukoff> So I'd be OK with dropping those comparison types until someone cries for them.
20:34:18 <ptoohill> nmagnezi: Im talking about with my patch
20:34:22 <ptoohill> building the image works with it
20:34:23 <blogan> sbalukoff: +1
20:34:26 <evgenyf> blogan: It's in the spec but looks unnecessary; I will remove it. When the big reviews come in we can finally close this dilemma
20:34:27 <ptoohill> building lbs may not
20:34:45 <blogan> evgenyf: sounds good to me
20:34:55 <johnsom> nmagnezi Did you check in the openstack/diskimage-builder launchpad bugs?
20:34:56 <nmagnezi> ptoohill, oh, that one looks like it's in a good direction (worked for me when I cherry-picked)
20:34:57 <sbalukoff> evgenyf: Ok! I'll update my Octavia patches to remove those comparison types, too.
20:35:18 <ptoohill> nmagnezi: Ah, good deal, was there another issue?
20:35:28 <nmagnezi> johnsom, honestly no :(
20:35:31 <ptoohill> johnsom: The bug initially wasn't upstream; we needed to update a few things
20:35:31 <evgenyf> sbalukoff: If someone needs it, we can add them
20:35:36 <nmagnezi> johnsom, is it a duplicate?
20:35:42 <ptoohill> https://review.openstack.org/#/c/272905/
20:35:43 <sbalukoff> evgenyf: Exactly.
20:36:03 <ptoohill> I need to fix conflicts; am I on the wrong page here?
20:36:05 <sbalukoff> ptoohill: I think I +1'd that, right?
20:36:09 <nmagnezi> ptoohill, yes, it fails to create an ubuntu-based amphora when the script is used on a Fedora node
20:36:39 <ptoohill> If that's the case, using this patch requires use_upstart = False
20:36:57 <ptoohill> I have yet to test it, and will get to it, but it's documented in the element (should it be documented elsewhere?)
20:37:42 <johnsom> I'm not sure.  It seems like it is past most of the config stuff and just wrapping up the image file, so that makes me think either our disk size is too small or it's a DIB issue.
20:37:58 <ptoohill> sbalukoff: You did but it has conflicts now
20:38:05 <johnsom> ptoohill Usually element stuff is documented in the element readme
20:38:06 <sbalukoff> ptoohill: Aah, ok.
20:38:14 <sbalukoff> johnsom: +1
20:38:22 <ptoohill> johnsom: The issue is that it uses upstart by default when we should have used systemd by default (more work, I guess)
20:38:34 <ptoohill> but i added a patch that's already merged allowing for this configuration
20:38:49 <ptoohill> i'm 99% sure that's the issue we're discussing
20:39:03 <ptoohill> er.. sysvinit, whatever
20:39:05 <ptoohill> :)
20:39:45 <ptoohill> johnsom: Then I should have the proper documentation ;)
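The use_upstart setting ptoohill describes is a boolean that lets image builds fall back to sysvinit/systemd init scripts on distros without upstart. A hedged sketch of such a toggle in oslo.config form; the group name, default, and how the element consumes it are assumptions based only on this discussion:

```python
# Hedged sketch of the use_upstart toggle mentioned above; the group name,
# default, and how the amphora element consumes it are assumptions.
from oslo_config import cfg

amphora_opts = [
    cfg.BoolOpt('use_upstart', default=True,
                help='Install upstart init scripts in the amphora image; '
                     'set to False for distros (e.g. Fedora/CentOS) that '
                     'use sysvinit or systemd instead.'),
]
cfg.CONF.register_opts(amphora_opts, group='haproxy_amphora')
```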
20:39:45 <nmagnezi> ptoohill, is that about https://bugs.launchpad.net/octavia/+bug/1541362 ?
20:39:47 <openstack> Launchpad bug 1541362 in octavia "The diskimage-create script fails to run on Fedora23" [Undecided,New]
20:39:54 <johnsom> Ok, nmagnezi give it a try and let us know
20:40:30 <ptoohill> Ah, i didn't see this one; i'll have to look a little further into it and actually test it out
20:40:35 <ptoohill> and fix the merge conflicts, nmagnezi
20:41:18 <nmagnezi> johnsom, I managed to cherry-pick and use ptoohill's patch before it got to the merge-conflict state
20:41:28 <johnsom> evgenyf Did we get your question about > and < answered?
20:41:39 <sbalukoff> johnsom, evgenyf: Yes.
20:41:47 <nmagnezi> another question for you guys, regarding documentation
20:41:48 <evgenyf> johnsom: sure, will remove them for now
20:41:56 <sbalukoff> johnsom: We are going to eliminate those comparison types until someone cries for them.
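For reference, the resulting compare-type set as a hedged sketch; the constant names follow the usual neutron-lbaas style but are assumptions here:

```python
# Hedged sketch of the L7 rule compare types after this decision;
# constant names are assumptions in the usual neutron-lbaas style.
SUPPORTED_COMPARE_TYPES = (
    'REGEX', 'STARTS_WITH', 'ENDS_WITH', 'CONTAINS', 'EQUAL_TO',
)
# Dropped until someone actually needs numeric comparisons on headers
# or cookies: 'GREATER_THAN', 'LESS_THAN'
```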
20:42:01 <nmagnezi> Is there a document with best practices for Octavia services installation? Allow me to elaborate: I wish to understand how Octavia services can be distributed between nodes (for example: worker on node A, health monitor on node B, etc.)
20:42:03 <johnsom> Ok
20:42:07 <ptoohill> nmagnezi: I'll fix that and update the bug reports with my findings here a little later this afternoon
20:42:35 * nmagnezi sending good karma at ptoohill's direction
20:42:37 <sbalukoff> nmagnezi: I had something in the works for that prior to Liberty, but I'm afraid I didn't get that far
20:42:39 <nmagnezi> thank you :)
20:42:43 <sbalukoff> If anyone wants to tackle that, feel free.
20:42:50 <sbalukoff> Otherwise, I'll try to do it again, after the feature freeze.
20:43:25 <johnsom> #link https://review.openstack.org/#/c/232173/
20:43:38 <johnsom> That is the doc sbalukoff was talking about
20:43:47 <sbalukoff> Oh, man, how embarrassing...
20:43:50 <sbalukoff> ;)
20:44:54 <sbalukoff> Ok, on another note... since this got buried:
20:45:17 <sbalukoff> Other than risking the death of the universe and all we hold dear, what do y'all think of the prospect of making me an Octavia core dev?
20:45:28 <johnsom> nmagnezi In general they don't need to be co-located.  The only pointer is that the health manager IPs need to be set in octavia.conf so health messages can go to the right nodes
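Concretely, the pointer johnsom describes is a list option naming every health manager endpoint so amphorae know where to send heartbeats. A hedged sketch in oslo.config form; the controller_ip_port_list name and [health_manager] group follow the discussion, and the example addresses are illustrative:

```python
# Hedged sketch of the one cross-node pointer mentioned: amphorae need the
# list of health manager endpoints for their heartbeat messages.
from oslo_config import cfg

health_manager_opts = [
    cfg.ListOpt('controller_ip_port_list', default=[],
                help='List of ip:port pairs for all health manager '
                     'instances; amphora health messages are sent here.'),
]
cfg.CONF.register_opts(health_manager_opts, group='health_manager')

# In octavia.conf this would look like (illustrative addresses):
#   [health_manager]
#   controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555
```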
20:46:29 <dougwig> sbalukoff: the numbers support it.  http://stackalytics.com/report/contribution/octavia/90
20:47:07 <xgerman> #6 - let’s give him a challenge to get into the top 5 :-)
20:47:16 <sbalukoff> Oh hey! Nifty!
20:47:16 <johnsom> Thanks dougwig, quicker on the draw
20:47:23 <dougwig> +1 from me, fwiw.
20:47:24 <nmagnezi> johnsom, but I just gave it as an example; I'm interested to know how to deploy ideally, starting with my multi-node devstack :)
20:47:32 <xgerman> but I am cool with adding him back +1 from me
20:48:06 <sbalukoff> Thanks, y'all!
20:48:10 <fnaval> +1 also
20:48:17 <xgerman> on another note how did I become #1 — totally unexpected :-)
20:48:28 <xgerman> blogan, rm_work?
20:48:49 <johnsom> nmagnezi Yeah, I don't know what we have that documented right now.  In fact, multi-controller is lightly tested in my opinion.
20:48:49 <xgerman> also we probably should run that on the ML
20:48:59 <sbalukoff> xgerman: +1
20:49:04 <dougwig> typically you ask in private, so the ptl can quietly poll the cores and not make a public hash of things if there are objections, btw.  :)
20:49:05 <ptoohill> that 1% got me ><, this bug is something else. My apologies nmagnezi johnsom
20:49:21 <xgerman> dougwig +1
20:49:28 <sbalukoff> dougwig: Haha! Well, you know me, eh. ;)
20:49:33 <johnsom> Correct.  Solicitation is a bit odd
20:49:35 <dougwig> sbalukoff: haha, indeed.
20:49:36 <sbalukoff> When have I ever done anything quietly>?
20:49:36 <blogan> xgerman: i've been admittedly bad at reviewing lately
20:49:44 <nmagnezi> johnsom, should that even work? If I deploy two workers with a highly available db?
20:50:00 <nmagnezi> johnsom, (worker is stateless, right?)
20:50:00 <dougwig> i suggest we let folks think about it, and not put everyone on the spot here.  johnsom can follow-up?
20:50:11 <blogan> dougwig: just because someone is above you doesn't mean that's good
20:50:33 <johnsom> nmagnezi In theory yes.  We really want to get to HA-controllers.  We need to do testing
20:50:56 <johnsom> dougwig +1 will follow up
20:50:57 <sbalukoff> dougwig: +1
20:50:58 <dougwig> blogan: are you saying that because you tower over everyone?  literally.
20:51:03 <sbalukoff> Thanks guys!
20:52:42 <johnsom> nmagnezi Since we pull from the queue, it should all be ok.  We tried to keep an eye out for that deployment scenario.  I just can't say that we have done a lot of testing yet.
20:53:07 <johnsom> Let us know how it goes if you give it a shot
20:53:33 <nmagnezi> I'll do my best to provide informative feedback
20:53:43 <johnsom> Any other topics for open discussion?
20:53:44 <sbalukoff> nmagnezi: I think we all want HA controllers-- so yes, please let us know how it goes, eh!
20:53:54 <nmagnezi> sadly i'm a bit blocked by the fedora issue with creating the amp
20:54:04 <nmagnezi> sbalukoff, will do :)
20:54:16 <johnsom> The next phase beyond that would be enabling job board, so flows can resume on alternate controllers.
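"Job board" here is TaskFlow's jobboard: flows are posted as jobs to a shared backend (typically ZooKeeper) so a surviving controller can claim and resume work whose original owner died. A minimal hedged sketch; the board name, owner id, and ZooKeeper host are assumptions:

```python
# Minimal hedged sketch of a TaskFlow jobboard, the mechanism meant by
# "job board": jobs live in a shared ZooKeeper-backed board, so any
# controller can claim and resume a flow if its original owner dies.
# Board name, owner id, and host are assumptions.
from taskflow.jobs import backends as job_backends

conf = {'board': 'zookeeper', 'hosts': ['192.0.2.20:2181']}
with job_backends.backend('octavia-flows', conf) as board:
    for job in board.iterjobs(only_unclaimed=True):
        board.claim(job, 'controller-b')    # take ownership of the flow
        # ... resume the flow with a taskflow engine/conductor ...
        board.consume(job, 'controller-b')  # mark the job finished
```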
20:54:39 <sbalukoff> nmagnezi: Out of curiosity, why the need for the fedora amp? Do you work for RedHat, or have a mandate to produce one?
20:54:47 <sbalukoff> (It's really just out of curiosity that I'm asking.)
20:55:12 <nmagnezi> sbalukoff, yes, I'm with Redhat :)
20:55:17 <sbalukoff> Ok, cool!
20:55:25 <sbalukoff> Yay for vendor representation, eh!
20:55:26 <rm_work> ah yeah, ML prolly but I would +1 sbalukoff
20:55:31 <rm_work> (was afk a sec)
20:55:31 <nmagnezi> lol
20:56:03 <johnsom> Don't be shy about putting up patches...  grin
20:56:11 <sbalukoff> Indeed!
20:56:28 <johnsom> Ok, I think we are slowing down.
20:56:33 <sbalukoff> I'd love to eventually see an amphora image that can run in like 128MB of RAM.
20:56:37 <johnsom> Last call for topics....
20:56:49 <xgerman> sbalukoff alpine linux
20:57:03 <sbalukoff> xgerman: I hear you. Other things on the table right now, though. ;)
20:57:03 <xgerman> when i have time I will totally try that ;-)
20:57:07 <johnsom> We all have hopes for containers too....
20:57:09 <sbalukoff> Sweet!
20:57:12 <sbalukoff> Yes!
20:57:23 <sbalukoff> But even containers will benefit from a smaller RAM footprint.
20:57:35 <xgerman> johnsom we will have containers soon
20:57:44 <xgerman> :-)
20:57:54 <johnsom> Alright thanks folks!
20:57:57 <johnsom> #endmeeting