20:00:10 <johnsom> #startmeeting Octavia
20:00:11 <openstack> Meeting started Wed Feb 13 20:00:10 2019 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:14 <openstack> The meeting name has been set to 'octavia'
20:00:27 <johnsom> Hi folks
20:00:33 <colin-> \o
20:00:33 <nmagnezi> o/
20:00:37 <eandersson> o/
20:00:40 <johnsom> pinging rm_work  grin
20:00:50 <cgoncalves> ~o~
20:01:07 <johnsom> #topic Announcements
20:01:18 <johnsom> TC nominations are open
20:01:32 <johnsom> If you are interested in running for the TC, the details are here:
20:01:37 <johnsom> https://governance.openstack.org/election/
20:01:52 <openstackgerrit> Carlos Goncalves proposed openstack/octavia master: Add Python 3.7 support  https://review.openstack.org/635236
20:02:21 <johnsom> The week of Feb 25th is the last library release for Stein.
20:02:34 <johnsom> I would really, really, really like to get the octavia-lib changes in.
20:02:55 <cgoncalves> #link https://review.openstack.org/#/q/project:openstack/octavia-lib+status:open
20:03:11 <johnsom> Faster on the copy/paste.... lol
20:03:30 <nmagnezi> lol
20:03:36 <nmagnezi> was about to post the same
20:03:40 <cgoncalves> slackers :P
20:03:41 <johnsom> Also note, the week of March 4th is feature freeze
20:03:48 <johnsom> And final clients.
20:04:11 <johnsom> Any other announcements this week?
20:04:45 <johnsom> #topic Brief progress reports / bugs needing review
20:05:14 <openstackgerrit> Erik Olof Gunnar Andersson proposed openstack/octavia master: Fix oslo messaging connection leakage  https://review.openstack.org/636428
20:05:16 <johnsom> Ok, I have updated the octavia-lib patches (except for the constant migration patches) and flavors is all wrapped up.
20:05:32 <colin-> nice
20:05:39 <johnsom> Right now my focus is 100% on reviews and helping patches get merged.
20:05:57 <johnsom> I worked on the VIP refactor patch german had proposed; it is good to go, if not already merged.
20:06:08 <cgoncalves> merged
20:06:14 <xgerman> thanks!
20:06:23 <johnsom> I am currently working with zhao on the TLS patch chain. The first of those patches should be posted today for review.
20:07:13 <johnsom> I am planning to really push to get the TLS features in for Stein as I think they have great value. (TLS client auth and backend re-encryption)
20:08:22 <johnsom> Any other updates?  I see Erik was crafty at pushing his patch during the progress report...  Good stuff there.
20:08:25 <colin-> ah b-channel, that would be great
20:08:28 <openstackgerrit> Erik Olof Gunnar Andersson proposed openstack/octavia master: Fix oslo messaging connection leakage  https://review.openstack.org/636428
20:08:36 <colin-> look at that, he can't help himself :p
20:08:39 <eandersson> :D
20:08:48 <johnsom> b-channel?
20:09:10 <eandersson> I am gonna try to keep this patch as consistent as possible with how nova / neutron etc. do it (so ideally keep things like asserts for now at least)
20:09:26 <eandersson> I have a question about the EventStream
20:09:30 <colin-> backend re-encryption is terminating the client https request and re-initiating it from the amp to the backend with a new negotiation right?
20:09:36 <eandersson> Should I just leave that code path as-is for now?
20:10:24 <johnsom> colin- Yes, frontend VIP is the same, but when we proxy to the member, this path is also over TLS.
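For context, the backend re-encryption being discussed is a pool-level TLS option in the in-flight patch chain: the listener still terminates the client's TLS session at the VIP, and the amphora then opens a fresh TLS connection to each member. Below is a minimal, hypothetical sketch against the Octavia v2 pool API, assuming the flag lands as a boolean named tls_enabled; the flag name, endpoint, token, and IDs are placeholders and may differ from what finally merges.

    # Hypothetical sketch: create a pool whose member connections are
    # re-encrypted over TLS. Assumes the in-review pool flag is exposed
    # as "tls_enabled"; endpoint, token, and IDs are placeholders.
    import requests

    OCTAVIA_ENDPOINT = "http://controller:9876"   # placeholder
    TOKEN = "<keystone-token>"                    # placeholder

    pool = {
        "pool": {
            "listener_id": "<listener-uuid>",     # placeholder
            "protocol": "HTTP",
            "lb_algorithm": "ROUND_ROBIN",
            # Assumption: the knob the TLS patch chain adds for
            # amphora -> member re-encryption.
            "tls_enabled": True,
        }
    }

    resp = requests.post(
        f"{OCTAVIA_ENDPOINT}/v2.0/lbaas/pools",
        json=pool,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()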
20:11:48 <johnsom> eandersson In my opinion, the event streamer stuff needs to be totally ripped out. That is planned along with the v1/neutron-lbaas retirement this year.
20:12:23 <cgoncalves> +1
20:12:35 <johnsom> I don't think it's used much and is...  I think in the future we need a better design for general eventing, but that code is not a good base.
20:13:21 <eandersson> Makes sense and I agree
20:13:21 <cgoncalves> in the spirit of getting rid of some warnings in my new shiny IDE, I ended up adding python 3.7 support (+ minor refactors) and fixing a couple of functional tests under python 3.6
20:13:36 <johnsom> So, maybe a simple __del__ hook for now, re-address if someone cares/uses it.
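A minimal sketch of the kind of __del__ cleanup hook being suggested, assuming the streamer owns its own oslo.messaging transport; the class name, topic, and cast method here are illustrative rather than the actual Octavia code.

    # Illustrative only: a producer that cleans up its oslo.messaging
    # transport when the object is garbage collected, instead of leaking
    # a connection per instance.
    import oslo_messaging as messaging
    from oslo_config import cfg


    class EventStreamerProducer(object):
        def __init__(self, conf=cfg.CONF, topic="octavia_event_stream"):
            self._transport = messaging.get_rpc_transport(conf)
            target = messaging.Target(topic=topic)
            self._client = messaging.RPCClient(self._transport, target)

        def send(self, context, event):
            # Fire-and-forget cast of an event onto the queue.
            self._client.cast(context, "update_info", container=event)

        def __del__(self):
            # Best-effort cleanup of the transport (and its connection
            # pool) when the producer goes away.
            if getattr(self, "_transport", None) is not None:
                self._transport.cleanup()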
20:13:46 <cgoncalves> also, not sure if I reported this last week: a new tempest test + job for the amphora spare pool
20:14:24 <openstackgerrit> Erik Olof Gunnar Andersson proposed openstack/octavia master: Fix oslo messaging connection leakage  https://review.openstack.org/636428
20:14:25 <johnsom> Yeah, interesting stuff there. I need to look at that.
20:14:55 <eandersson> I would prefer to keep it out of the scope for this patch to make it easier to backport etc
20:14:59 <cgoncalves> octavia-grenade in stable/rocky is still faulty, consistently (please do not recheck). I tried to run grenade locally but ran into several issues. I will be continuing that
20:15:02 <eandersson> And we can just follow up for master
20:15:24 <johnsom> Ok
20:15:25 <eandersson> Since this is primarily just impacting the API service, from my testing
20:16:00 <johnsom> Oh, and I got an itch to try out LXD again over the weekend and succeeded (after way too much effort) in launching lxd/lxc amps.
20:16:07 <eandersson> Nice
20:16:25 <johnsom> There is a patch up with a passing gate. It shaves ~30 minutes off a tempest scenario run.
20:16:59 <colin-> strong!@
20:17:21 <johnsom> That said, it has none of our kernel tuning, disables most of the lxc security, nova throws a ton of errors, and UDP probably doesn't work.
20:17:24 <colin-> still intending to circle back to that at the first opportunity johnsom, container amps are an objective
20:17:30 <colin-> hehe
20:17:32 <colin-> fair
20:18:00 <johnsom> So, strongly not recommended for production workloads....
20:18:21 <johnsom> Maybe in open discussion I will ask if we even want to merge that or not.
20:18:40 <johnsom> #topic Talk about cascade delete via the dashboard
20:18:47 <johnsom> #link https://review.openstack.org/#/c/553381/
20:18:57 <nmagnezi> Yup, thank you for bringing this up
20:18:58 <johnsom> nmagnezi You have the floor
20:19:12 <nmagnezi> Basically Jacky did a great job with this
20:19:26 <nmagnezi> It's just that the discussion on the patch prevented it from getting in
20:19:39 <nmagnezi> Now, I know Jacky probably didn't have the cycles to follow up
20:19:47 <nmagnezi> And I can help with that
20:20:01 <nmagnezi> But we need to agree on whether we want to change the default to cascade
20:20:07 <nmagnezi> And if we want to add a warning
20:20:23 <johnsom> Wish rm_work was here, he is one of the -1's
20:20:31 <cgoncalves> the warning would be in the delete confirmation window itself, no?!
20:20:36 <nmagnezi> The reason I'm saying this is because some cores (me included) voted -1 and some +2
20:20:41 * nmagnezi looks at xgerman
20:20:58 <xgerman> the motivation for cascade in our API was for the dashboard
20:21:02 <nmagnezi> cgoncalves, I think so, yeah
20:21:10 <xgerman> I even had to code an lbaasv2 neutron extension just for the cascade flag
20:21:28 <xgerman> so yes, it should go in!
20:21:37 <nmagnezi> I think that keeping things simple by just adding a warning text is enough, but I would like to hear from others
20:21:42 <cgoncalves> all it would take is a string change. we just need consensus
20:21:51 <johnsom> Agreed, that was part of the intent. I have not had time to try this out however.
20:22:13 <cgoncalves> I'd say let's be explicit about delete cascade in the delete message and be done with it
20:22:18 <nmagnezi> johnsom, I posted my test result to the patch but I can wait for you to test before I touch this
20:22:25 <johnsom> For those that have loaded it up, does it pop up a confirmation now or does it just do the cascade delete?
20:22:25 <xgerman> yep, warn, confirm - done
20:22:26 <nmagnezi> Unless Jacky is here?
20:22:58 <johnsom> He likely won't be online for a few hours
20:23:07 <nmagnezi> johnsom, I uploaded this https://pasteboard.co/HR3IkXF.png
20:23:55 <johnsom> Ok, cool. Yeah, my vote would be to update the text so it is explicit that it will delete the whole LB and call it good.
20:24:30 <nmagnezi> I think the same, just wanted to double check since this is changing the default to --cascade
20:24:41 <cgoncalves> good, we are all in agreement. let's just ping dayou and ask if he could do it, otherwise nmagnezi or any of us can do it
20:24:51 <johnsom> Yeah, make sure it's called out in the release notes
20:24:53 <nmagnezi> Yup
20:25:03 <nmagnezi> johnsom, fair point
20:25:21 <nmagnezi> dayou, let me know if you want to continue with this, otherwise I can help out
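For reference, the cascade default under discussion maps directly to the existing cascade flag on the load balancer delete API; a minimal sketch follows (endpoint, token, and ID are placeholders).

    # Minimal sketch: cascade-delete a load balancer and all of its
    # children (listeners, pools, members, health monitors) in one call.
    import requests

    OCTAVIA_ENDPOINT = "http://controller:9876"   # placeholder
    TOKEN = "<keystone-token>"                    # placeholder
    LB_ID = "<loadbalancer-uuid>"                 # placeholder

    resp = requests.delete(
        f"{OCTAVIA_ENDPOINT}/v2.0/lbaas/loadbalancers/{LB_ID}",
        params={"cascade": "true"},
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()  # 204 No Content on success

The CLI equivalent is "openstack loadbalancer delete --cascade <lb-id>".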
20:25:28 <johnsom> Yeah, web GUI is the keep-it-simple path IMO, so the simpler it is for users, the better.
20:25:36 <nmagnezi> +1
20:25:48 <johnsom> Also FYI, he is going to help me out by adding the flavor drop down on LB create.
20:26:39 <cgoncalves> yay!
20:26:57 <johnsom> Though I need the SDK folks to finish merging: https://review.openstack.org/#/q/project:openstack/openstacksdk+owner:%22Michael+Johnson+%253Cjohnsomor%2540gmail.com%253E%22
20:27:13 <johnsom> As he will need the list flavors method
20:28:27 <johnsom> Ok, nmagnezi Can you comment on the patch with the decision, wording update, and release note request?
20:28:33 <rm_work> ah
20:28:47 <nmagnezi> johnsom, doing that as we speak :)
20:28:51 <johnsom> Thank you
20:28:54 <rm_work> catching up
20:28:59 <johnsom> rm_work Sorry, we made the decision...
20:29:03 <johnsom> grin
20:29:26 <nmagnezi> rm_work, we decided to refactor the UI plugin to adobe flash
20:29:59 <johnsom> With adobe air components. Because they are cool
20:30:44 <johnsom> Funny enough I just found an Adobe Air sticker cleaning out a closet this weekend
20:31:03 <johnsom> #topic Talk about log offloading
20:31:10 <johnsom> #link https://review.openstack.org/624835
20:31:45 <johnsom> This is one I wanted to circle back on. Sorry if we already hashed through this, but since we have some operators on I wanted to ask about logging infrastructure.
20:32:01 <cgoncalves> ah, cool. I wanted to ping people about this :)
20:32:12 <johnsom> Do you folks have a centralized logging solution in your deployment?
20:32:35 <johnsom> If so, is it per project views, just for admin use, etc.?
20:35:25 <nmagnezi> cgoncalves and I mentioned this in an internal discussion today, I don't know the exact details of this just yet (need to compare to how this is configured for other OpenStack projects)
20:36:08 <nmagnezi> But it was mentioned that for amps we will have a separate user (or view? Not 100% sure I remember the term)
20:36:55 <cgoncalves> nmagnezi, not sure what you mean with that last message
20:37:23 <johnsom> Ok, so here is my concern with the current patch. It puts both admin and tenant logs in one stream. I feel like if we merge this and then make it more flexible, we may have painted ourselves into a backward compatibility corner. I would like to spend a bit more time on it before we merge.
20:37:45 <colin-> any impact to resource footprint of amps?
20:37:51 <xgerman> well, you can configure it whatever way you want
20:38:18 <xgerman> colin-: well, you ship logs off so maybe some cpu cycles and network -
20:38:31 <colin-> should be fairly minimal, even on a busy one i'd imagine
20:38:41 <xgerman> +1
20:38:52 <nmagnezi> cgoncalves, meaning that the amps should send their collected logs to rsyslog using some user that we specifically configure in Octavia
20:39:13 <johnsom> It actually improves performance over writing them local.
20:39:14 <xgerman> you can make a custom logging template to ship off more or less
20:39:45 <xgerman> I think the question is what we should default to
20:40:08 <xgerman> There are two scenarios:
20:40:22 <xgerman> 1) We ship everything to some central logging and the operator carves it up the way they see fit
20:40:33 <johnsom> I would personally like to see two endpoints configurable, one for admin logs, one for the tenant traffic flow logs. They could be configured to the same place, but could be enabled/disabled individually
20:40:44 <xgerman> 2) users run their own log servers (maybe a vm in tenant network) and we ship haproxy logs there
20:40:49 <xgerman> combination of 1+2
20:41:05 <johnsom> Or adding in my option
20:41:27 <xgerman> that would be a combination of 1+2
20:42:03 <johnsom> It would be a different #1, but still leaving an option for #2 later.
20:42:37 <johnsom> Basically I might want to send the tenant flow logs to a customer facing kibana/logstash, and the admin to a different one.
20:42:40 <xgerman> yeah, as I said we can change the log template default
20:43:06 <xgerman> but for (2) we might need to add some new field to the API so users can configure log servers
20:43:21 <colin-> yeah how trivial/non-trivial is that?
20:43:24 <colin-> i have trouble approximating it
20:43:25 <johnsom> Right. I don't want to go down that path in Stein.
20:43:41 <cgoncalves> in that case we would need to extend the API to pass in the user's log server
20:43:46 <xgerman> so for the template, you can override it like with the haproxy one
20:43:58 <johnsom> The issue with the tenant hosted syslog is then the routing, etc. So that is going to need some strong docs, etc.
20:44:51 <johnsom> I'm thinking doing the central locations right would be a good first step. It would require an additional config option to split the log streams, etc.
20:44:52 <cgoncalves> yeah :/
20:45:14 <cgoncalves> +1
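A hypothetical sketch of the split johnsom describes, expressed as oslo.config options: two independently configurable log targets so tenant flow logs and admin logs can go to different (or the same) servers, or be disabled individually. None of these option names exist yet; they are made up purely for illustration.

    # Hypothetical only: option names are illustrative, nothing like this
    # has merged at the time of this discussion.
    from oslo_config import cfg

    amphora_log_opts = [
        cfg.ListOpt("tenant_flow_log_targets",
                    default=[],
                    help="syslog host:port endpoints for haproxy tenant "
                         "traffic flow logs. Empty disables offloading."),
        cfg.ListOpt("admin_log_targets",
                    default=[],
                    help="syslog host:port endpoints for amphora "
                         "administrative logs. Empty keeps them local."),
        cfg.BoolOpt("disable_local_log_storage",
                    default=False,
                    help="Do not write logs to the amphora local disk."),
    ]

    cfg.CONF.register_opts(amphora_log_opts, group="amphora_agent")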
20:46:01 <xgerman> Sure we can add that and change the template — but most people (I know) only run one logstash
20:46:20 <cgoncalves> would it make sense to log to the local node syslog instead of forwarding logs from all amps in multiple nodes to a central one?
20:46:24 <johnsom> Mostly I was hoping for some operator feedback on whether that is useful or not
20:46:42 <colin-> being able to differentiate tenant v non-tenant logs would be useful for us
20:46:53 <cgoncalves> I'm thinking, like, if there's something wrong between the amp and the log server, we wouldn't get any logs
20:47:20 <johnsom> Sure, we can leave the admin logs still writing out local.
20:47:35 <xgerman> that’s what the log template is for
20:47:37 <johnsom> I think it's more important to not log the tenant traffic flows local.
20:48:03 <cgoncalves> johnsom, agreed
20:48:31 <colin-> definitely but having remote non-tenant logs would be useful too i think
20:48:38 <johnsom> Is log template just configuring the haproxy log format or like a chunk of rsyslog config?
20:48:46 <xgerman> rsyslog config
20:48:58 <xgerman> haproxy-log format isn’t done
20:49:19 <xgerman> https://review.openstack.org/#/c/624835/16/octavia/amphorae/backends/logging/templates/10-rsyslog.conf.template
20:49:45 <xgerman> the system will take an operator one like with the haproxy-template
20:50:04 <cgoncalves> I'm tempted to say writing out local for admin logs would be a good default. we can more safely assume that option is available across most/all deployments
20:50:26 <johnsom> Yeah, should be a configuration item IMO
20:50:36 <xgerman> that’s what if does right now
20:51:02 <xgerman> configure it via a custom template or via an explicit option
20:51:36 <johnsom> Hmm, ok. I need to think about this a bit more. I also want to leave some time for open discussion.
20:51:46 <johnsom> #topic Open Discussion
20:51:58 <cgoncalves> oh, wait. maybe I'm confused. when I say local I mean the compute node's syslog
20:52:04 <johnsom> Any other topics for folks today?
20:52:28 <cgoncalves> ok, we can continue discussion at another time and offline
20:52:34 <xgerman> +1
20:52:46 <cgoncalves> FYI, I'm confirmed to Denver. Summit + PTG
20:52:56 <xgerman> Congrats!!
20:53:25 <johnsom> cgoncalves I don't think there is any way the amp could send its logs to the compute host's syslog.  That is breaking some of the isolation barriers. It would still be inside the amp, like it is today I think.
20:53:38 <johnsom> Sadly, I am not.  Not sure when I can lock that in.
20:53:44 <cgoncalves> would that be such an achievement over there? :D
20:54:23 <johnsom> We are confirmed to have an Octavia room for the  PTG.
20:54:30 <cgoncalves> well, you guys are "locals" ;)
20:54:39 <rm_work> eiddccidnvgeddinjvjljetdnnrtbujivkekejfneinf
20:54:44 <rm_work> grrrrrr
20:54:49 <johnsom> lol
20:54:51 <cgoncalves> road-trip!
20:54:58 <colin-> it's like the irc equivalent of sliding under the closing door at the last moment :)
20:55:01 <colin-> welcome
20:55:15 <johnsom> Well, it's a straight flight for me. Just need to see what my employer is going to do
20:55:45 <rm_work> i should be there as well
20:55:51 <cgoncalves> if budget is short, road-trip!
20:55:59 <rm_work> waiting on confirmation from work, but
20:56:07 <rm_work> i think will be there regardless
20:56:28 <xgerman> sweet - tell them you need to hand out some AOL CDs
20:56:53 <johnsom> So one more quick topic. The LXD gate stuff. It works, but is not exactly production ready.  My question is if we even want to merge that as a non-voting gate. We would likely need a "Here is why you shouldn't do this" doc.
20:57:24 <johnsom> It would probably be cheaper for me to fly than drive.
20:58:18 <colin-> The LXD gate stuff = container amps? or something else
20:58:38 <colin-> or a CI construct
20:58:38 <rm_work> i could drive down and then we drive over :P
20:58:45 <johnsom> Yeah, ~$250 bucks to drive
20:59:06 <johnsom> It is amps running in LXC containers instead of service VMs
20:59:08 <rm_work> do you fit in a miata? :D
20:59:22 <colin-> personally in favor of anything that moves that ball forward, just my $0.02
20:59:30 <johnsom> lol, not for 1,254 miles each way I don't
20:59:58 <johnsom> Ok, about out of time.  Thanks folks!  Have a good week.
21:00:01 <rm_work> i drove up from TX to WA with my 6'4" friend and our luggage :P
21:00:12 <johnsom> #endmeeting