09:59:34 #startmeeting containers
09:59:35 Meeting started Tue May 1 09:59:34 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:39 The meeting name has been set to 'containers'
10:00:04 #topic Roll Call
10:00:24 Hello flwang1
10:00:40 o/
10:02:06 #topic Blueprints/Bugs/Ideas
10:02:23 strigazi: Add label to Disable the cloud provider on kubernetes clusters
10:02:54 Recently we upgraded Nova to Queens \o/
10:03:02 strigazi: why do we need it?
10:03:18 Nova on Queens? That's amazing
10:03:45 The architecture of nova changed significantly and the load increased
10:03:49 we're still on an old version, which i can't tell you ;(
10:04:35 "Add label to Disable the cloud provider on kubernetes clusters": what's the background?
10:04:36 We noticed that the kubernetes nodes create quite some load on the nova APIs
10:05:17 And the controller manager frequently queries the nova API through the metadata service.
10:05:34 isn't it a "bug" of the openstack cloud provider?
10:05:41 blame dims
10:06:25 We have to investigate what the root cause is, but the load is big and the benefit, at least in our cloud, is little.
10:07:08 strigazi: ok, i got the point.
10:07:19 i'm just wondering about the root cause
10:07:23 Having more than 1500 VMs doing a scalability test against the nova API is not great.
10:07:54 We will change to the out-of-tree cloud provider anyway; we can investigate there.
10:08:19 Makes sense?
10:08:20 strigazi: ok
10:08:22 Oh, also
10:08:28 fair enough
10:09:12 csi-cinder, csi-cephfs and csi-manila can be used without the cloud provider, I think
10:10:06 csi-manila: I don't know if it is a valid name, but we have someone working on integrating manila with kubernetes
10:10:18 cool
10:10:31 Hello everyone, I have a question re enabling RDMA when mounting a volume using GlusterFS via Infiniband on fedora atomic 27. Has anyone tried something similar?
10:10:51 brtknr: can we talk about it offline?
10:11:01 we're in the weekly meeting, welcome to join
10:11:40 flwang1: sure, apologies
10:12:02 And one more note since the queens upgrade: Heat needs to be on queens to be able to mount cinder volumes
10:12:04 brtknr: that's ok, pls join us
10:12:30 strigazi: that's good to know
10:12:38 as you know, we have to upgrade heat
10:12:41 yes, we have nova on queens but not heat :)
10:12:47 and finally, we have decided to upgrade it to queens
10:12:56 cool
10:13:20 we did mitaka -> ocata a few months ago, it was uneventful
10:13:39 We'll do ocata -> queens soon
10:13:45 nice
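[Example, not part of the log: a sketch of how the label discussed above might be used once the patch merges. The label name cloud_provider_enabled and its value are assumptions based on the patch title; check the merged change for the final name.

    # Hypothetical: create a cluster template with the in-tree OpenStack
    # cloud provider disabled, so the controller manager stops querying
    # the nova API. The label name is assumed, not confirmed in the meeting.
    openstack coe cluster template create k8s-no-cloud-provider \
        --coe kubernetes \
        --image fedora-atomic-27 \
        --external-network public \
        --labels cloud_provider_enabled=false

Disabling the provider gives up in-tree integrations such as LoadBalancer services and Cinder volume attachment, which is why the CSI work mentioned above (csi-cinder, csi-cephfs, the manila integration) matters as a replacement on the volume side.]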
10:14:29 One more thing from my side is about the dashboard.
10:15:18 We have memory and CPU usage, but we don't see some metrics in the node panel
10:15:29 flwang1: have you noticed?
10:15:51 cpu requests and memory requests are missing
10:16:04 strigazi: oh? i didn't notice that
10:16:32 because now i generally take care of magnum and k8s from a higher level
10:16:41 make sure everything works, but not every corner
10:16:51 any findings?
10:17:45 I'm following it, no solution yet, I'll ask for help in #atomic
10:17:59 and where did you check the cpu/memory requests on the dashboard?
10:18:13 so that i can try to reproduce it
10:18:19 give me a sec
10:19:56 http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/node/strigazi-kube-pdwec3q4ukyd-minion-0?namespace=default
10:20:27 This is the path to a node in your cluster
10:20:42 my node is "strigazi-kube-pdwec3q4ukyd-minion-0"
10:21:37 I have a few more things to try
10:21:46 But no solution yet.
10:23:09 ok, i will test it locally and let you know
10:23:14 thanks
10:23:50 no problem
10:26:35 strigazi: i will show you the screenshot later
10:26:43 anything else from your side?
10:26:44 Merged openstack/magnum stable/queens: k8s_fedora: Add admin user https://review.openstack.org/563679
10:28:29 Merged openstack/magnum stable/queens: Make DNS pod autoscale https://review.openstack.org/565345
10:28:46 That's it, I'm trying to finish upgrades for the summit. No complete patch yet
10:29:51 ok, i was on vacation last week, so no progress on the keystone integration and monitoring stuff :(
10:30:17 btw, thanks for reviewing my patches
10:30:17 flwang1: For monitoring,
10:30:47 It would be nice to have the kubelet logs somewhere self-hosted
10:31:40 strigazi: you mean fluentd?
10:31:42 As an operator, that would help a lot
10:31:47 or any self-hosted solution?
10:32:02 fluentd only collects
10:32:16 strigazi: yep, i know
10:32:31 i mean similar combinations
10:32:40 are you talking about this?
10:32:49 we could find a way to give the user access to the current logs
10:33:12 Pushing the logs somewhere central is one thing
10:33:28 strigazi: yep, i see your point
10:33:53 I'm talking about using the kube API or a service that runs in the cluster to see the logs
10:34:18 strigazi: yep, i see. are you aware of any popular solution for this?
10:34:33 otherwise, i will do some research
10:34:34 I can't think of anything without a db
10:35:21 it would be fluentd plus something that I'm missing right now
10:35:51 i see.
10:36:05 btw, for the cpu request issue, are you talking about this? https://pasteboard.co/Hj9ezts.png
10:37:47 yes, memory is empty and I think CPU should have more data
10:38:59 strigazi: i will check my colleague's cluster, which was built with kubeadm
10:40:22 ok, thanks
10:41:18 Anything else on your side?
10:41:49 nope
10:42:22 strigazi: ah, btw, i replied to your email about the white paper
10:42:55 yes, I saw it, thanks
10:43:03 thank you for offering that
10:44:32 The paper will be out this month
10:45:14 Shall we wrap up the meeting?
10:45:44 strigazi: ok, thanks
10:46:06 Thanks flwang1
10:46:11 #endmeeting
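[Example, not part of the log: two command-line checks related to the dashboard and logging discussion above. kubectl describe node prints per-node CPU/memory requests and limits under "Allocated resources", the figures missing from the dashboard node panel; and on the Fedora Atomic images the kubelet's logs can be read with journalctl (assuming it runs as a systemd unit, as in the Fedora Atomic driver) until a self-hosted pipeline such as fluentd shipping to a queryable store is in place. The node name is the one from the log; adjust for your cluster.

    # Cross-check the dashboard: show a node's CPU/memory requests and
    # limits (see the "Allocated resources" section of the output).
    kubectl describe node strigazi-kube-pdwec3q4ukyd-minion-0

    # Read kubelet logs directly on a node; a fluentd daemonset tailing
    # the journal could push these off the node to central storage.
    journalctl -u kubelet.service --since "1 hour ago"
]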