21:00:32 #startmeeting containers
21:00:33 Meeting started Tue Jan 22 21:00:32 2019 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 The meeting name has been set to 'containers'
21:00:39 #topic Roll Call
21:00:44 o/
21:01:34 ping eandersson lxkong colby_ brtknr
21:01:41 o/
21:01:44 o/
21:02:00 o/
21:02:09 imdigitaljim otw
21:02:32 #topic Announcements
21:03:02 thanks to eandersson and jakeyip, we have the ci running with python3 by default.
21:03:12 \o/
21:03:19 nice
21:03:35 #topic Stories/Tasks
21:04:32 From my side, I have only the patch to add tiller for usage within magnum, not for end users unless they really try to use it:
21:04:35 https://review.openstack.org/#/c/612336/
21:05:15 It deploys tiller only on master nodes, in a separate namespace, with tls, and access to kube-system.
21:05:51 To better understand how it is deployed, take a look at this section:
21:07:17 https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-in-another-namespace
21:07:54 and this one: https://docs.helm.sh/using_helm/#using-ssl-between-helm-and-tiller
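
For reference, the pattern described in those two Helm docs sections looks roughly like the following sketch; the namespace, service account, and certificate file names here are illustrative only, not taken from the magnum patch itself:

    # Rough sketch: Tiller in its own namespace, with TLS, allowed to deploy into kube-system.
    kubectl create namespace magnum-tiller
    kubectl create serviceaccount tiller --namespace magnum-tiller
    # (a Role/RoleBinding granting this service account access to kube-system
    #  would be created here; omitted for brevity)
    helm init \
      --tiller-namespace magnum-tiller \
      --service-account tiller \
      --tiller-tls \
      --tiller-tls-verify \
      --tiller-tls-cert tiller.cert.pem \
      --tiller-tls-key tiller.key.pem \
      --tls-ca-cert ca.cert.pem

With tiller scoped to its own namespace and reachable only over TLS, end users have no default route to it, which matches the "not for end users unless they really try to use it" intent above.
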
21:08:33 My colleague Diogo is working on consuming it to deploy the metrics server, which deprecates heapster, and the prometheus operator
21:09:01 https://review.openstack.org/#/c/629577/
21:09:27 have you been using prometheus much internally? curious how you find it. obviously it's wildly popular and has a lot to offer
21:09:35 If you are not familiar with k8s, the patch for tiller might seem complicated.
21:09:39 i've had positive experiences with it in the past
21:10:12 colin-: limited experience so far without any issue but also no big challenges
21:10:40 sorry i'm late! had workstation reimaged and had to reinstall IRC
21:10:56 colin-: we are working on a solution where an external service (probably another central prometheus) will consume the cluster prometheuses
21:11:16 imdigitaljim: welcome
21:11:31 colin-: makes sense?
21:11:33 understood. when it starts to take shape i am curious what KPIs from the clusters themselves will be collected
21:12:48 hi guys, i have been working on octavia-ingress-controller integration (https://review.openstack.org/#/c/631330/) and fixing a bug related to deleting the fip of the load balancer vip in octavia.
21:13:00 colin-: cpu/ram/disk/network, and when use cases become more serious, users should expose user-defined metrics
21:14:50 Lingxian Kong proposed openstack/magnum master: [k8s_fedora_atomic] Delete floating ip for load balancer https://review.openstack.org/630820
21:14:58 lxkong: I will take a look at the ingress too
21:15:31 strigazi: thanks, i will write some doc this morning in that patch
21:17:18 lxkong: thanks
21:17:32 lxkong - We may be looking at that soon also
21:17:52 cbrumm_: sounds great :-)
21:18:24 cbrumm_: if you have any question relating to the octavia-ingress-controller, please let me know
21:18:37 FYI, at CERN we will write out own "ingress" to complement traefik
21:19:12 s/out/our/
21:19:23 lxkong - Are you sure you want that? colin- can be persistent
21:19:49 strigazi: as more and more add-on components are added to magnum, i think we should find a pluggable way for the implementation
21:20:34 so give the users the option that they could do their own thing if it's not acceptable for the upstream
21:20:52 cbrumm_ :-)
21:20:52 lxkong: helm can help :)
21:21:36 strigazi: i mean, deploy all the things during the cluster creation
21:21:59 rather than leaving the setup things to the magnum user
21:22:05 we have a CERN chart for our internal components, so we can just point to that chart to deploy at creation time
21:22:17 that's nice
21:22:29 not for the user. for us
21:23:21 you can deploy this in ingress or anything else, packaged in a downstream chart
21:23:31 you can deploy this ingress or anything else, packaged in a downstream chart
21:23:59 lxkong: we're working towards a complete driver that has high customizability, low operational costs, extra features, and improvements, running on centos
21:24:01 is the helm/charts mechanism supported in magnum already?
21:24:22 looking to upstream it when we get to a satisfactory point
21:24:37 lxkong: 612336 629577
21:24:41 strigazi: thanks
21:24:44 will take a look
21:24:57 adding a hook for this is very easy
21:27:49 I think the approach with a chart maintained in its own repo to add extra components is very maintainable
21:28:15 Or even a list of components deployed *on* kubernetes
21:28:23 strigazi: agree
21:29:08 strigazi: but can i stick with the current octavia-ingress-controller patch and maybe use the helm way for future add-ons?
21:29:16 sure
21:29:19 cool
21:32:01 Anything else you want to bring up?
21:32:19 not from me
21:32:32 I got a couple of follow-up patches to the py3 patches, but nothing critical. https://review.openstack.org/#/c/631083/ https://review.openstack.org/#/c/631331/
21:33:11 +2
21:33:55 Ideally I would like to move to dump_as_bytes for everything, but one step at a time.
21:34:02 thanks strigazi
21:35:02 eandersson: https://review.openstack.org/#/c/631331/10/magnum/drivers/common/templates/swarm/fragments/make-cert.py
21:35:33 It was original on purpose as I changed import json
21:35:41 but can revert that change to make it cleaner
21:36:20 (I prefer two spaces but it's super nit)
21:36:26 *new lines
21:36:28 you can leave it
21:36:51 strigazi: yes sorry, I was out of office till today. I just spun up a cluster with that tag and will try again with cinder
21:37:46 colby_: brtknr is bringing queens really close to the current state, but with 1.11.5-1 it should work.
21:40:58 If there is anything else to discuss, we can end the meeting
21:41:08 objections?
21:41:37 strigazi: ok great. Will that backport he linked me to be merged into a future release? I saw 6.3.0 just dropped.
21:42:18 colby_: yes
21:44:07 thanks for responding!
21:44:18 I'll let you know how my tests go
21:44:51 cool
21:47:31 It seems we are out of topics, thanks for joining everyone
21:47:41 see you next week
21:47:44 Thanks!
21:47:52 have a good day/night, everyone
21:48:05 #endmeeting