09:59:34 <strigazi> #startmeeting containers
09:59:35 <openstack> Meeting started Tue May  1 09:59:34 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:39 <openstack> The meeting name has been set to 'containers'
10:00:04 <strigazi> #topic Roll Call
10:00:24 <strigazi> Hello flwang1
10:00:40 <flwang1> o/
10:02:06 <strigazi> #topic Blueprints/Bugs/Ideas
10:02:23 <strigazi> strigazi: Add label to disable the cloud provider on kubernetes clusters
10:02:54 <strigazi> Recently we upgraded Nova to Queens \o/
10:03:02 <flwang1> strigazi: why do we need it?
10:03:18 <flwang1> Nova on Queens? That's amazing
10:03:45 <strigazi> The architecture of nova changed significantly and the load increased
10:03:49 <flwang1> we're still on an old version which i can't tell you ;(
10:04:35 <flwang1> "Add label to Disable the cloud provider on kubernetes clusters" what's the background?
10:04:36 <strigazi> We noticed that the kubernetes nodes create quite some load on the nova apis
10:05:17 <strigazi> And the controller manager frequently queries the nova API through the metadata service.
10:05:34 <flwang1> isn't it a "bug" of the openstack cloud provider?
10:05:41 <flwang1> blame dims
10:06:25 <strigazi> We have to investigate what the root cause is, but the load is big and the benefit, at least in our cloud, is little.
10:07:08 <flwang1> strigazi: ok, i got the point.
10:07:19 <flwang1> i'm just wondering about the root cause
10:07:23 <strigazi> Having more than 1500 vms doing a scalability test against the nova api is not great.
10:07:54 <strigazi> We will change to the out-of-tree cloud provider anyway, we can investigate there.
10:08:19 <strigazi> Makes sense?
10:08:20 <flwang1> strigazi: ok
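
For context, a minimal sketch of what the proposed label could look like from the CLI, assuming it ends up being called cloud_provider_enabled; the name and value here are an assumption, not the merged interface:

    # create a cluster with the in-tree OpenStack cloud provider disabled
    # (label name cloud_provider_enabled is an assumption)
    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --labels cloud_provider_enabled=false \
        --node-count 3
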
10:08:22 <strigazi> Oh, also
10:08:28 <flwang1> fair enough
10:09:12 <strigazi> csi-cinder, csi-cephfs and csi-manila can be used without the cloud provider, I think
10:10:06 <strigazi> I don't know if csi-manila is a valid name, but we have someone working on integrating manila with kubernetes
10:10:18 <flwang1> cool
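
To illustrate why the CSI drivers can skip the in-tree cloud provider: they talk to the OpenStack APIs themselves, so the cluster only needs a StorageClass pointing at the driver. A hedged sketch, assuming the cinder CSI driver from cloud-provider-openstack and its cinder.csi.openstack.org provisioner name:

    # register a StorageClass backed by the cinder CSI driver
    # (provisioner name must match the deployed driver; assumed here)
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cinder
    provisioner: cinder.csi.openstack.org
    EOF
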
10:10:31 <brtknr> Hello everyone, I have a question re enabling RDMA when mounting a volume using GlusterFS via InfiniBand on fedora atomic 27. Has anyone tried something similar?
10:10:51 <flwang1> brtknr: can we talk about it offline?
10:11:01 <flwang1> we're in the weekly meeting, welcome to join
10:11:40 <brtknr> flwang1: sure, apologies
10:12:02 <strigazi> And one more note since the queens upgrade. Heat needs to be in queens to be able to mount cinder volumes
10:12:04 <flwang1> brtknr: that's ok, pls join us
10:12:30 <flwang1> strigazi: that's good to know
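
For reference, the cinder mounts in question come from cluster templates that request a docker storage volume; Heat then creates and attaches a cinder volume per node, which appears to be the code path that needed the Queens Heat here. Values below are illustrative:

    # a 10 GB cinder volume per node for docker storage
    openstack coe cluster template create k8s-template \
        --image fedora-atomic-27 \
        --external-network public \
        --coe kubernetes \
        --docker-volume-size 10
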
10:12:38 <flwang1> as you know, we have to upgrade heat
10:12:41 <strigazi> yes, we have nova in queens but not heat :)
10:12:47 <flwang1> and finally, we have decided to upgrade it to queens
10:12:56 <strigazi> cool
10:13:20 <strigazi> we did mitaka -> ocata a few months ago, it was uneventful
10:13:39 <strigazi> We'll do ocata -> queens soon
10:13:45 <flwang1> nice
10:14:29 <strigazi> One more thing from my side, is about the dashboard.
10:15:18 <strigazi> We have memory and cpu usage, but we don't see some metrics in the node panel
10:15:29 <strigazi> flwang1: have you noticed?
10:15:51 <strigazi> cpu requests and memory requests are missing
10:16:04 <flwang1> strigazi: oh? i didn't notice that
10:16:32 <flwang1> because these days i generally take care of magnum and k8s from a higher level
10:16:41 <flwang1> make sure everything works, but not every corner
10:16:51 <flwang1> any finding?
10:17:45 <strigazi> I'm following it, no solution yet, I'll ask for help in #atomic
10:17:59 <flwang1> and where did you check the cpu/memory requests on the dashboard?
10:18:13 <flwang1> so that i can try to reproduce it
10:18:19 <strigazi> give me a sec
10:19:56 <strigazi> http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/node/strigazi-kube-pdwec3q4ukyd-minion-0?namespace=default
10:20:27 <strigazi> This is the path to a node in your cluster
10:20:42 <strigazi> my node is "strigazi-kube-pdwec3q4ukyd-minion-0"
10:21:37 <strigazi> I have a few more things to try
10:21:46 <strigazi> But no solution yet.
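
The localhost:8001 address above implies the dashboard is being reached through kubectl proxy, so to reproduce:

    # proxy the apiserver to localhost:8001, then open the dashboard URL above
    kubectl proxy --port=8001
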
10:23:09 <flwang1> ok, i will test it on my local and let you know
10:23:14 <strigazi> thanks
10:23:50 <flwang1> no problem
10:26:35 <flwang1> strigazi: i will show you the screenshot later
10:26:43 <flwang1> anything else from your side?
10:26:44 <openstackgerrit> Merged openstack/magnum stable/queens: k8s_fedora: Add admin user  https://review.openstack.org/563679
10:28:29 <openstackgerrit> Merged openstack/magnum stable/queens: Make DNS pod autoscale  https://review.openstack.org/565345
10:28:46 <strigazi> That's it, I'm trying to finish upgrades for the summit. No complete patch yet
10:29:51 <flwang1> ok, i was on vacation last week, so no progress on the keystone integration and monitoring stuff :(
10:30:17 <flwang1> btw, thanks for reviewing my patches
10:30:17 <strigazi> flwang1: For monitoring,
10:30:47 <strigazi> It would be nice to have the kubelet logs somewhere self hosted
10:31:40 <flwang1> strigazi: you mean fluentd?
10:31:42 <strigazi> As an operator, that would help a lot
10:31:47 <flwang1> or any self hosted solution?
10:32:02 <strigazi> fluentd only collects
10:32:16 <flwang1> strigazi: yep, i know
10:32:31 <flwang1> i mean similar combinations
10:32:40 <flwang1> are you talking about this?
10:32:49 <strigazi> we could find a way to give access to the current logs to the user
10:33:12 <strigazi> Pushing the logs somewhere centrally is one thing
10:33:28 <flwang1> strigazi: yep, i see your point
10:33:53 <strigazi> I'm talking about using the kube api or a service that runs in the cluster to see the logs
10:34:18 <flwang1> strigazi: yep, i see. are you aware of any popular solution for this?
10:34:33 <flwang1> otherwise, i will do some research
10:34:34 <strigazi> I can't think of anything without a db
10:35:21 <strigazi> it would be fluentd plus something that I'm missing right now
10:35:51 <flwang1> i see.
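
A minimal sketch of the collection half being discussed: a fluentd DaemonSet that tails the node logs and ships them to an in-cluster Elasticsearch, which would be the "plus something" storage piece. The image tag and the Elasticsearch service name are assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          name: fluentd
      template:
        metadata:
          labels:
            name: fluentd
        spec:
          containers:
          - name: fluentd
            # image tag is an assumption; pick a current fluentd daemonset build
            image: fluent/fluentd-kubernetes-daemonset:v1.2-debian-elasticsearch
            env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: elasticsearch.kube-system.svc  # assumed ES endpoint
            volumeMounts:
            - name: varlog
              mountPath: /var/log   # covers /var/log/journal for kubelet logs
              readOnly: true
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
    EOF
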
10:36:05 <flwang1> btw, for the cpu request issue, are you talking about this https://pasteboard.co/Hj9ezts.png ?
10:37:47 <strigazi> yes, memory is empty and I think CPU should have more data
10:38:59 <flwang1> strigazi: i will check my colleague's cluster, which was built with kubeadm
10:40:22 <strigazi> ok, thanks
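
Independently of the dashboard, the same numbers can be cross-checked from the API; the Allocated resources section of kubectl describe node lists the CPU and memory requests per node:

    # cross-check the requests the dashboard should be showing
    kubectl describe node strigazi-kube-pdwec3q4ukyd-minion-0
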
10:41:18 <strigazi> Anything else on your side?
10:41:49 <flwang1> nope
10:42:22 <flwang1> strigazi: ah, btw, i replied your email about the white paper
10:42:55 <strigazi> yes, I saw it, thanks
10:43:03 <flwang1> thank you for offering that
10:44:32 <strigazi> The paper will be out this month
10:45:14 <strigazi> Let's wrap the meeting?
10:45:44 <flwang1> strigazi: ok, thanks
10:46:06 <strigazi> Thanks flwang1
10:46:11 <strigazi> #endmeeting