15:00:35 <srwilkers> #startmeeting openstack-helm
15:00:36 <openstack> Meeting started Tue Sep 26 15:00:35 2017 UTC and is due to finish in 60 minutes.  The chair is srwilkers. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:39 <openstack> The meeting name has been set to 'openstack_helm'
15:00:46 <portdirect> o/
15:00:46 <srwilkers> #topic roll-call
15:00:52 <srwilkers> o/
15:00:55 <srwilkers> \o/
15:01:01 <srwilkers> \o
15:01:07 <portdirect> w00t for OSH ;)
15:02:12 * srwilkers looks around
15:02:15 <srwilkers> where's everyone else?
15:03:50 <alanmeadows> \o
15:03:53 <jayahn> o/ was sleeping..... need to wake up :)
15:04:02 <srwilkers> hey jayahn :)
15:04:11 <srwilkers> sorry we woke you, but glad you're here
15:04:29 <v1k0d3n> o/
15:04:53 <srwilkers> let's get started -- we've got a full agenda
15:05:04 <srwilkers> here's the agenda: https://etherpad.openstack.org/p/openstack-helm-meeting-2017-09-26
15:05:09 <srwilkers> #topic PTG Summary
15:05:23 <srwilkers> it was great seeing everyone at the PTG last week
15:06:19 <jayahn> yeah. it was really good. :)
15:06:23 <srwilkers> think we made a lot of progress in terms of paths forward and for cleaning up some stale work
15:06:58 <srwilkers> since we've got a full agenda -- it'd be awesome to get some feedback on this summary over the next few days and to provide any additions where necessary
15:07:27 <srwilkers> and if there's anywhere you'd like to do some work, feel free to add your name to any sections, and we can revisit this summary next week
15:07:44 <srwilkers> #action srwilkers follow up with summary etherpad action items next meeting
15:08:11 <lrensing> o/
15:08:15 <srwilkers> #topic kubernetes entrypoint namespace support
15:08:22 <srwilkers> hey lrensing o/
15:08:40 <srwilkers> portdirect: you added this one
15:08:59 <srwilkers> seems the PR for adding support for namespaces in k8s entrypoint is here:  https://github.com/stackanetes/kubernetes-entrypoint/pull/25
15:09:50 <portdirect> would be great to get some eyes on this
15:10:13 <portdirect> and see if we can help them get it merged quickly
15:10:23 <portdirect> as this will really unblock a lot of things for us
15:10:30 <portdirect> not much more to say than that really :)
15:11:05 <srwilkers> yeah, this would be great.  i'll take a look at it and provide some feedback
15:12:14 <srwilkers> anything else on this topic?
15:12:38 <srwilkers> moving on then
15:12:53 <srwilkers> #topic NFS in OSH
15:13:14 <portdirect> so theres been some bit-rot in the nfs based deployment for dev
15:13:36 <portdirect> I was wondering if we wanted to continue support for it, and if so do it properly
15:13:44 <portdirect> v1k0d3n: i think you have guys on this?
15:14:15 <v1k0d3n> we had it in terms of repairing it for the gate. :)
15:14:40 <v1k0d3n> chart work is a bit of a new request albeit not hard.
15:14:57 <portdirect> nice - we dont have a gate for nfs atm - so getting one would be great :)
15:15:18 <portdirect> I think that to do it properly it would need to be a chart rather than the static manifests we have atm
15:15:24 <v1k0d3n> sorry, misspoke: nfs in general.
15:15:38 <v1k0d3n> would we be willing to accept the nfs cleanup before starting on the chart work?
15:15:49 <portdirect> wfm
15:15:51 <v1k0d3n> because if yes, then i think sean has this already.
15:15:55 <v1k0d3n> sean?
15:15:56 <srwilkers> v1k0d3n: yep.  im all for that
15:16:39 <v1k0d3n> ok, well that works. this morning i was thinking chart was being asked for instead of the repair work. this works perfectly for us then.
15:16:43 <srwilkers> i'd really like to see an NFS chart for some of the services i've been playing with, as ceph backing things like elasticsearch makes me :(
15:16:57 <slarimore02> yeah I have the fixes for a gate ready to go and will submit a PS today
15:17:07 <portdirect> nice - cheers slarimore02
15:17:12 <lrensing> nfs chart sounds like a good idea
15:17:15 <srwilkers> yeah, just a misunderstanding.  triaging any issues with the gates, NFS or otherwise, is always higher priority :)
15:17:17 <v1k0d3n> perfect. then we can work with tyson to get the chart work completed.
15:17:22 <srwilkers> awesome
15:17:30 <v1k0d3n> ok...great to know. thanks guys/gals. :)
15:17:50 <srwilkers> anything else on this topic?
15:17:56 <v1k0d3n> we're good on our side.
15:18:05 <v1k0d3n> thanks @portdirect
15:18:19 <srwilkers> #topic oslo-genconfig hack removed
15:18:41 <portdirect> wanted to say thanks to everyone for getting reviews on this
15:18:57 <portdirect> was a big change, but went through very smooth
15:19:05 <v1k0d3n> +1
15:19:06 <portdirect> only one ps left and i think its all done
15:19:15 <portdirect> https://review.openstack.org/#/c/507293/
15:19:34 <v1k0d3n> the ps from yesterday portdirect ? i had a good look yesterday, just needed to ++ it.
15:19:39 <srwilkers> im personally glad to see it gone
15:19:54 <v1k0d3n> yes, very creative way to remove it.
15:19:54 <portdirect> yeah (to both of you ;) )
15:20:25 <portdirect> just took the work dulek did, and added multistring support
15:20:36 <portdirect> so credit to him
15:20:46 <srwilkers> the hero we need
15:20:50 <v1k0d3n> that dulek...sharp guy :)
15:21:00 <srwilkers> anything else on this topic?
15:21:16 <portdirect> nah - with the fix for the armada gate merged I'm good
15:21:19 <srwilkers> nice
15:21:23 <v1k0d3n> done
15:21:29 <srwilkers> #topic docker version support
15:21:40 <portdirect> ohh - I'm on a roll
15:21:56 <portdirect> so as in the etherpad, k8s 1.7 does some different things with docker 1.13 and above
15:22:04 <portdirect> it shares the pid namespaces in a pod
15:22:13 <portdirect> this means that the pause container has pid 1
15:22:34 <portdirect> so systemd (other than centos afaik) is kinda broke
15:22:54 <portdirect> and the rabbitmq liveness probes as well
15:23:20 <portdirect> the rabbitmq stuff really needs fixing, as they are pretty cray
15:23:33 <portdirect> but the systemd support (for things like maas) is a bigger issue
15:23:41 <portdirect> so what should we do?
15:23:56 <portdirect> say 1.13 is not supported on k8s 1.7? or work to make it work?
15:24:06 <v1k0d3n> https://github.com/kubernetes/kubernetes/pull/45236 i believe.
15:24:19 <portdirect> thats the one
15:24:24 <portdirect> they are rolling it back in 1.8
15:24:33 <portdirect> by making it optional
15:24:46 <v1k0d3n> it's not optional at all in 1.7?
15:25:15 <srwilkers> honestly, id rather say 1.13 isn't supported and wait for 1.8.  i think there's more pressing work that needs to be done right now
15:25:17 <portdirect> nope
15:25:19 <srwilkers> but thats my opinion
15:25:31 <lrensing> i’d vote for just saying 1.13 isnt supported also srwilkers
15:25:41 <v1k0d3n> +1 definitely
15:25:44 <jayahn> +1
15:25:47 <portdirect> thats my pref as well
15:25:54 <srwilkers> there's some work id like to see done with rabbitmq first before we touch anything else
15:25:59 <lamt> portdirect : I tried to dnf install docker (not latest), and I think it is still pulling 1.13
15:26:03 <v1k0d3n> easier. let's us continue with 1.7
15:26:08 <portdirect> I'll get the docs updates as part of this ps then: https://review.openstack.org/#/c/507305/
15:26:08 <srwilkers> yep
15:26:23 <v1k0d3n> that works portdirect
15:26:38 <portdirect> lamt: hmm for me on f26 it was 1.12 but we can pin it if needed :/
15:27:04 <lamt> if you can - that would be great
15:27:06 <srwilkers> seems we have consensus here then.  anything else/
15:27:12 <lamt> that or my f26 is doing something weird
15:27:54 <portdirect> I've seen a lot of craziness in the infra mirrors on it as well
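(Editor's sketch of the symptom discussed above: with k8s 1.7 on docker >= 1.13, PID 1 inside a container is the pod's "pause" infra container rather than the container's own entrypoint. A minimal, hypothetical check that could be run inside a pod — the "pause" comparison is specific to that docker/k8s combination and is an assumption, not part of any OSH chart:)

```python
# Sketch: detect the shared PID namespace behaviour (k8s 1.7 with
# docker >= 1.13), where PID 1 inside a container is the pod's "pause"
# infra container instead of the container's own entrypoint.
from pathlib import Path

def pid1_command() -> str:
    """Return the command name of PID 1, or '' if /proc is unavailable."""
    comm = Path("/proc/1/comm")
    return comm.read_text().strip() if comm.exists() else ""

def shared_pid_namespace_suspected() -> bool:
    # In an affected pod, PID 1 is the infra ("pause") container, which
    # is what breaks systemd-based images and the rabbitmq liveness probes.
    return pid1_command() == "pause"
```

(Run inside a pod, `shared_pid_namespace_suspected()` would return True on an affected node; pinning docker below 1.13 on the host, as lamt and portdirect discuss, is the workaround.)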
15:28:44 <portdirect> so - TLS?
15:28:56 <srwilkers> #topic TLS
15:29:19 <portdirect> so this is a long pole - but would be good to get some people thinking about it
15:29:36 <portdirect> I'd really like to see internal tls supported ootb in osh
15:30:08 <portdirect> though this gets tricky - as though it's very easy to provide support for an opinionated deployment
15:30:25 <portdirect> turns out theres a lot of opinions about the best way to manage certs
15:30:59 <portdirect> one thing i do see as being a requirement - is having something fronting the openstack apis
15:31:14 <portdirect> to perform termination
15:31:58 <portdirect> this is done in kolla-k8s very well
15:32:22 <portdirect> and I think it would make sense to follow their pattern of having a sidecar doing that function
15:32:26 <v1k0d3n> man, i hate to say this for fear of getting flamed...but is this a good place for a spec proposal?
15:32:36 <v1k0d3n> this is a touchy one
15:32:36 <portdirect> though this leaves the big issue of how certs get to pods
15:32:48 <portdirect> its exactly what I'm about to propose
15:32:49 <portdirect> :)
15:32:52 <srwilkers> v1k0d3n: yep.
15:32:56 <v1k0d3n> awesome. good deal.
15:33:04 <portdirect> but would be great to get some initial ideas for the direction that i should go in
15:33:31 <portdirect> so - any thoughts?
15:33:52 <portdirect> in the etherpad I outlined two approaches i see as viable (but I'm sure there are many more)
15:34:02 <portdirect> 1) an init container that requests a cert
15:34:28 <portdirect> 2) just use secrets (wildcards) and make it the deployers problem to get/manage them
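(Editor's sketch of option 1 above, as one possible shape for the spec: an init container fetches a certificate into a shared emptyDir before the API container starts. All names and images here are illustrative assumptions, not an existing OSH chart:)

```yaml
# Sketch of option 1: an init container requests a cert into a shared
# emptyDir before the API container starts. Names/images are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: keystone-api
spec:
  initContainers:
    - name: cert-init
      image: example/cert-requester:latest   # hypothetical image
      # requests a cert from the cluster CA and writes it to /certs
      volumeMounts:
        - name: certs
          mountPath: /certs
  containers:
    - name: keystone-api
      image: example/keystone:latest         # hypothetical image
      volumeMounts:
        - name: certs
          mountPath: /etc/keystone/certs
          readOnly: true
  volumes:
    - name: certs
      emptyDir: {}
```

(Option 2 would drop the init container entirely and mount a deployer-provided wildcard cert from a Kubernetes secret instead.)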
15:35:13 <srwilkers> id prefer the spec lays out the options mentioned, and outlines the pros/cons/considerations for each
15:35:31 <srwilkers> then we can just iterate from there and remove what doesnt make sense as we go
15:35:33 <srwilkers> because honestly
15:35:41 <srwilkers> i dont know where to suggest to start
15:35:51 <portdirect> cool - damn, hoped i was gonna get to cull some early
15:35:56 <srwilkers> nope
15:36:12 <srwilkers> #action portdirect to draft spec for handling TLS
15:36:19 <portdirect> :P
15:36:27 <srwilkers> anything else here?
15:36:31 <v1k0d3n> any thought on providing certs via letsencrypt?
15:36:55 <portdirect> I can put that in for sure
15:37:06 <portdirect> though it would be hard to get the required number of certs from them
15:37:15 <v1k0d3n> yeah :(
15:37:46 <portdirect> for public (external) they would be great - though there is an upstream chart that does this quite well afaik
15:37:56 <v1k0d3n> would be interesting to hear how some big orgs do this today. some may have policies that all certs need to be signed by a known entity.
15:38:16 <jayahn> surely
15:38:34 <v1k0d3n> i know AT&T used to be that way. not sure how things are done specifically for groups like AIC or SKT (even Charter for that matter...i'd have to find out).
15:38:51 <portdirect> yeah - we aint doing any self signed and 0 validation hackery thats for sure
15:38:55 <v1k0d3n> total pain :( like you said...the subject is deep.
15:39:28 <v1k0d3n> yeah, that's what i figured.
15:40:15 <srwilkers> sounds like considerations to include and discuss in the spec ;)
15:40:48 <srwilkers> anything else on this topic?  i bet jayahn is excited to talk about SONA :)
15:41:14 <v1k0d3n> i'm good. added the placeholder.
15:41:23 <srwilkers> #topic SONA integration
15:41:31 <srwilkers> jayahn: all you :)
15:41:38 <jayahn> not much :)  it is on-going work as described in the etherpad
15:42:03 <jayahn> had two questions which were all resolved by the comments there. :)
15:42:56 <jayahn> especially, with upcoming sona chart on openstack-helm-infra, we will work on providing 3rd party gating.
15:43:07 <jayahn> and I needed to get an official approval on that. :)
15:43:21 <portdirect> thats awesome jayahn
15:43:37 <portdirect> the ps you have in for neutron looks very close to ready
15:44:32 <jayahn> i got your comments, we will try to finish this ps asap.
15:44:39 <portdirect> The other thing I'll add - though tangential - is that I'm reworking the kubeadm container at the moment
15:44:54 <portdirect> I'm hoping that we can get to making all the gates voting within a month
15:45:18 <jayahn> ah, great! lots of voting machines will come.
15:46:33 <jayahn> we can move on to the next topic.
15:46:51 <srwilkers> #topic fluent-logging
15:47:17 <jayahn> i submitted spec for fluent-bit & fluentd logging & chart for that.
15:47:25 <srwilkers> spec looks good jayahn :)
15:47:31 <srwilkers> also been looking at the work in flight
15:47:54 <jayahn> we will remove WIP tag soon. pls provide your reviews on this.
15:48:40 <jayahn> one question here, ps includes both fluent-bit and fluentd. for now, we think this is okay.
15:49:28 <jayahn> however, since we also have fluentd standalone chart, want to know if we want to do further effort to combine fluentd chart into one
15:49:29 <srwilkers> your approach here makes the most sense.  i think this would make the current fluentd chart obsolete, and i think that's okay
15:50:22 <jayahn> the current one does not have capability to run "fluentd" as standalone daemonset w/o fluent-bit.
15:50:55 <jayahn> there are two use cases, 1) just use fluentd as agent 2) use fluent-bit as agent, and fluentd as server(aggregator)
15:51:08 <srwilkers> i think option 2 is more sane to be honest
15:51:20 <srwilkers> as that's the typical use ive seen the more ive dug into how others are using it
15:51:44 <srwilkers> especially as we look at larger deployments, the smaller footprint for fluent-bit makes me happy
15:51:58 <jayahn> okay. :)
15:52:07 <srwilkers> plus i like that fluent-bit includes the popular plugins by default
15:52:52 <srwilkers> im good here.  anything else to add jayahn ? :)
15:52:59 <jayahn> nope. i am also good
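(Editor's sketch of use case 2 above: fluent-bit as a lightweight node agent forwarding to a fluentd aggregator. Paths and the service name are illustrative assumptions, not taken from the WIP chart:)

```ini
# Sketch of use case 2: fluent-bit tails container logs on each node and
# forwards to a fluentd aggregator service. Names/paths are hypothetical.
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Tag     kube.*

[OUTPUT]
    Name    forward
    Match   *
    Host    fluentd-aggregator   # hypothetical k8s service name
    Port    24224
```

(Use case 1 would instead run fluentd itself as the per-node daemonset agent, with no fluent-bit tier.)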
15:53:07 <srwilkers> #topic cinder-backup
15:53:22 <srwilkers> nice, it's merged
15:53:30 <srwilkers> glad it's working jayahn
15:53:34 <jayahn> ah, this is just simple question from me, since cinder-backup was brought up on ptg meeting
15:54:00 <jayahn> let us know if you find any issue on cinder-backup.
15:54:04 <srwilkers> sounds good
15:54:05 <srwilkers> will do
15:54:18 <portdirect> roger - we should also enable it in the horizon config
15:54:21 <srwilkers> #topic OSH-infra/addons gates
15:54:44 <srwilkers> so lamt is working on getting zuul cloner set up appropriately so we can start gating addons/infra better
15:55:05 <srwilkers> as im starting to run into issues testing prometheus exporters, as the services they're monitoring live in OSH proper currently
15:56:07 <portdirect> yeah - along with the updates to kubeadm-aio I'm hoping that we can get the gates running well across all three repos by the end of next week
15:56:13 <srwilkers> but right before the meeting, i've now got prometheus running with exporters for:  ceph, rabbitmq, mysql, and cadvisor
15:56:18 <srwilkers> and they're functional :D
15:56:21 <portdirect> nice :)
15:56:32 <jayahn> great!
15:56:47 <srwilkers> #topic open discussion
15:56:53 <srwilkers> i'll make this one quick
15:57:19 <srwilkers> i had a question regarding our weekly meeting -- it seems there was an email thread about teams being able to now host weekly meetings in the project channels
15:57:39 <srwilkers> i think it'd be great to start hosting our meeting in #openstack-helm, so anyone who misses it can read the scrollback
15:57:49 <srwilkers> without needing to be mindful of the scrollback of other teams
15:57:56 <srwilkers> but that's my opinion, and overall not that important
15:58:21 <srwilkers> any opinions any other way?
15:58:43 <jayahn> not much. i am okay with either way
15:58:44 <portdirect> hmm - I quite like the meeting channel as it logs the meetings: http://eavesdrop.openstack.org/meetings/openstack_helm/2017/
15:58:59 <portdirect> would we be able to keep this if we moved it into #openstack-helm
15:59:03 <jayahn> as long as it leaves logs.. like portdirect mentioned
15:59:28 <srwilkers> ill look into what we need to do to ensure they're captured
15:59:39 <portdirect> if we can then moving may make sense
15:59:48 <portdirect> as it would make slack access easier
15:59:56 <srwilkers> exactly what i was going to say
16:00:06 <srwilkers> #topic srwilkers check into logging meetings in #openstack-helm
16:00:08 <srwilkers> oops
16:00:12 <srwilkers> #topic open discussion
16:00:22 <srwilkers> #action srwilkers check into logging meetings in #openstack-helm
16:00:23 <portdirect> so -1 if we lose the meeting logs, +1 if we can keep them :D
16:00:27 <srwilkers> alright, thats it for today
16:00:31 <srwilkers> see you in #openstack-helm
16:00:32 <srwilkers> #endmeeting