09:59:42 #startmeeting containers
09:59:45 #topic Roll Call
09:59:47 Meeting started Tue Jun 5 09:59:42 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:50 The meeting name has been set to 'containers'
10:00:01 o/
10:00:09 o/
10:00:15 hi o/
10:00:26 o/
10:01:15 o/
10:01:53 Thanks for joining the meeting folks
10:01:56 #topic Announcements
10:02:46 python-magnumclient 2.9.1 is released https://releases.openstack.org/queens/#queens-python-magnumclient
10:02:58 nice
10:03:41 It includes a fix for the quotas entrypoint and the OSC client
10:04:04 #topic Blueprints/Bugs/Ideas
10:04:51 Last week I pushed a patch for disabling the cloud provider, https://review.openstack.org/#/c/571190/
10:05:04 thanks brtknr for the input
10:05:22 only minor points, no worries
10:05:24 I propose to have it on by default, with an opt-out
10:06:08 just left a comment for that there too
10:06:28 it matches the *_enabled scheme in the other labels
10:07:07 Also last week, I tested f28; it seems that skopeo is buggy atm and we need a new version of it to run syscontainers.
10:07:26 rochapor1o: makes sense!
10:07:28 rochapor1o: I don't have a strong preference on it, what do others think?
10:07:51 brtknr is on board with _enabled
10:08:12 ok, not a big change, I can revise
10:08:22 although it's a label
10:08:38 Is the enabled-by-default option good with you?
10:08:45 i prefer xxx_enabled to be consistent with the other labels
10:10:26 ok, I'll change it to have it on by default.
10:12:09 that is it from me, feilong do you want to go next?
10:12:26 sure
10:13:01 i'm still working on the k8s-keystone-auth integration work because we're improving the code of k8s-keystone-auth to support configmap
10:13:50 with that support, we may be able to get rid of the default policy file, or at least allow users to easily change the policy via configmap
10:14:18 flwang1 so we will need to run k8s-keystone-auth as a pod, right?
10:14:21 and i have proposed the patch to fully deprecate send_cluster_metrics
10:14:33 strigazi: hopefully, still needs testing
10:14:50 flwang1: ok
10:15:35 flwang1 the health check will be a new periodic task, since you propose to deprecate send_cluster_metrics?
10:15:43 here is the patch to deprecate send_cluster_metrics https://review.openstack.org/572249
10:15:58 flwang1: I think we can keep one task and disable that functionality instead
10:15:59 yep, i'm going to add a new task
10:17:03 https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py now there are 2 in this file, sync_cluster_status and send_cluster_metrics
10:17:35 i think we can add the health status check into the existing sync_cluster_status
10:17:53 or add a new task, given we will deprecate send_cluster_metrics. thoughts, folks?
10:18:32 i'll try to help with the reviews there starting this week
10:18:33 and btw, the patch adding health_status and health_status_reason is ready for review https://review.openstack.org/570818
10:18:40 rochapor1o: lovely
10:19:01 let's evaluate it; if the new task is not expensive it is cleaner. If we don't stress the conductor a lot, i'm ok with the new task
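(A sketch of the configmap approach flwang1 describes above at 10:13:50: the webhook policy could be published as a ConfigMap and edited in place instead of shipping a fixed policy file. The ConfigMap name, key, and namespace below are illustrative assumptions, not necessarily what the k8s-keystone-auth patch uses.)

    # Hypothetical names; the patch under discussion may choose different ones.
    kubectl create configmap keystone-auth-policy \
        --from-file=policies=policy.json \
        --namespace kube-system

    # A later policy change is then a configmap update, not an image rebuild:
    kubectl create configmap keystone-auth-policy \
        --from-file=policies=policy.json \
        --namespace kube-system --dry-run -o yaml | kubectl replace -f -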
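(The "2 api calls per cluster" in the next exchange map onto two standard Kubernetes API endpoints; a minimal sketch authenticating with the cluster's TLS certs, as with the cached certs mentioned below — the file names and master address are placeholders:)

    # Master health: etcd, scheduler and controller-manager conditions
    curl --cacert ca.crt --cert cert.pem --key key.pem \
        https://${MASTER_IP}:6443/api/v1/componentstatuses
    # Worker health: per-node conditions, including the reason a node is down
    curl --cacert ca.crt --cert cert.pem --key key.pem \
        https://${MASTER_IP}:6443/api/v1/nodes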
10:19:54 strigazi: 2 api calls per cluster, we need one for /componentstatuses and one for /nodes
10:20:09 also let's be sure that we use the cached certs for authentication
10:20:17 if we don't want to support the master health check, then only /nodes
10:20:31 strigazi: definitely
10:20:43 flwang1 master health is good
10:20:52 we need it
10:21:32 to catch a misconfig of the controller manager, for example
10:21:54 currently, the idea is returning a dict with each node and its component/condition status
10:22:18 sounds good
10:22:41 like this {"master1": {"etcd": "OK", "scheduler": "OK"}, "master2": {}, "node1": {}, "node2": {}, ...}
10:22:49 and the reason for a node being down?
10:22:57 absolutely
10:23:10 cool
10:23:58 rochapor1o: i will submit a draft later this week
10:24:09 perfect, looking forward to reviewing it
10:25:04 that's all from my side
10:25:13 flwang1: thanks
10:25:17 strigazi: btw, i'm keen to review the upgrade patch
10:25:29 very keen :D
10:25:44 we only have 2 months for Rocky
10:25:51 @all people
10:26:23 I am also happy and willing to test the cluster upgrade, it's a killer feature
10:26:24 I promise to have something in gerrit tmr
10:27:12 strigazi: haha, i'm always the pusher
10:27:19 flwang1: it's good
10:28:00 others, do you want to bring something up?
10:28:30 I'm going to start work on https://bugs.launchpad.net/magnum/+bug/1722573 I'll keep you updated
10:28:32 Launchpad bug 1722573 in Magnum "Limit the scope of cluster-update for heat drivers" [Undecided,New]
10:29:28 vabada: do you have any questions?
10:29:56 Haven't looked deep yet, but I'll raise them in the channel if any
10:30:06 I think I know more or less how to proceed
10:30:26 thanks
10:31:15 i've been looking at CSI support in Magnum, mostly for CephFS with kubernetes 1.10. is this something other people are interested in? Other drivers maybe?
10:31:34 i have the patch for CephFS ready already, soon with Manila integration for PVCs
10:32:10 rochapor1o: sounds interesting, unfortunately we (catalyst cloud) don't have manila yet
10:32:36 ok
10:35:13 thanks rochapor1o
10:35:45 brtknr: from your side, do you want to bring something up?
10:36:25 rochapor1o: we will be comparing container infra with gluster vs cephfs so it might be relevant at some point
10:36:38 I've filed 2 patches currently undergoing review
10:37:15 first one related to specifying the cgroup driver for k8s when using Docker-CE
10:37:33 https://review.openstack.org/#/c/571583/
10:37:51 I think specifying the cgroup driver even when not using docker-ce is useful
10:37:53 second one related to disabling floating ip in swarm mode
10:38:05 https://review.openstack.org/#/c/571200/
10:38:35 strigazi: yes, i guess so
10:38:50 brtknr: the deployment for glusterfs-csi should be similar, we can use cephfs as a reference later
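(Both of brtknr's patches above hinge on cluster-template labels; for context, a sketch of how such labels are passed with the OSC client. The cgroupdriver label name comes from the script discussion just below; cloud_provider_enabled follows the *_enabled scheme agreed earlier, and both names may differ from what the patches finally land with:)

    # Label names are assumptions based on this meeting's discussion
    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-atomic-27 \
        --external-network public \
        --labels cgroupdriver=systemd,cloud_provider_enabled=false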
10:39:39 currently writing a script to change the docker cgroup driver when the cgroupdriver label is specified:

#!/bin/bash
# Copy the packaged unit so the edit survives package updates
cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
if grep -q 'native.cgroupdriver' /etc/systemd/system/docker.service; then
    # A driver is already set on the ExecStart line: rewrite it in place
    sed -i "s/native.cgroupdriver=[^ ]*/native.cgroupdriver=${1}/" /etc/systemd/system/docker.service
else
    # No driver set: add the option through a systemd drop-in
    # (assumes the Docker CE unit, i.e. ExecStart=/usr/bin/dockerd ...)
    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/cgroupdriver.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=${1}
EOF
fi
systemctl daemon-reload
systemctl restart docker

10:40:16 this should work for both old and new docker versions
10:40:41 rochapor1o: sounds good, we are currently manually mounting volumes using Ansible
10:42:46 the patches work on my devstack deployment on both queens and master
10:43:00 brtknr: thanks, I'll review them
10:43:26 strigazi: thanks :)
10:43:58 @all Anything else?
10:44:12 Not from my side
10:45:22 Any news re reducing the number of things heat is responsible for deploying?
10:45:30 we were talking about this last week
10:46:44 brtknr: From now on, we can start not adding new software configs or deployments if they are not absolutely needed.
10:47:37 We can propose patches to consolidate the deployments.
10:47:57 i'll need to leave a bit early, thanks everyone
10:48:15 rochapor1o: see you
10:48:52 strigazi: okay
10:49:48 it is not clear if reducing the number of db entries helps a lot with performance. I think it uses the same db connection, but insertions are faster
10:51:27 Thanks everyone, see you in the channel or next week
10:51:50 #endmeeting