10:00:34 #startmeeting containers
10:00:35 Meeting started Tue Apr 10 10:00:34 2018 UTC and is due to finish in 60 minutes. The chair is strigazi_. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:38 The meeting name has been set to 'containers'
10:00:50 #topic Roll Call
10:02:08 hi
10:02:15 hello slunkad
10:03:00 o/
10:03:10 Hello flwang
10:03:14 strigazi: hello
10:03:27 i need to catch up with you after the meeting ;)
10:03:45 flwang: sure :)
10:04:09 #topic Blueprints/Bugs/Ideas
10:04:51 I have potentially good news on our issue with eventlet flwang, this solution worked for me:
10:04:57 https://github.com/eventlet/eventlet/issues/172#issuecomment-379421165
10:05:25 strigazi: great
10:05:51 strigazi: if we can fix it in eventlet, then i can save my time to focus on the monitoring part
10:06:00 yes
10:07:02 O/
10:07:32 From my side, I made some progress on upgrades and moved all minion software configs to software deployments
10:07:37 hi ricolin__
10:08:15 strigazi: as for the upgrade spec, i left a comment recently
10:08:19 I'll push the patch after the meeting; slunkad, it will be interesting for you for rotating the trust in the cluster
10:08:28 flwang: I'll have a look
10:09:07 Also, I pushed the patch for adding flannel on the master node.
10:09:18 strigazi: hmm, would we need to consider that for upgrades too?
10:09:18 #link https://review.openstack.org/#/c/558836/
10:09:18 strigazi: generally, when we say 'upgrade', it means upgrading the version of the coe.
10:09:33 but it seems we don't have an API to expose that info to the end user
10:09:34 strigazi: and the OS
10:09:53 strigazi: yes
10:10:12 and it seems it's not mentioned in your spec, unless i missed it
10:10:16 yes, I'll push a WIP for the API reference
10:10:37 strigazi: great, that's the thing i'm looking forward to
10:11:37 Hi folks
10:12:07 slunkad: the main difference (in the magnum context) between SoftwareConfig and SoftwareDeployment is that configs are immutable whereas deployments can change
10:12:30 panfy: hello
10:13:32 ok
10:14:16 And one more bit on syscontainers: I did some testing on the speed at which the containers are pulled into the ostree storage and it is a bit slow. Could someone reproduce? I'm fetching the silly script I have
10:15:37 strigazi: on nested virt or just bare metal?
10:15:52 flwang: nested virt
10:16:19 strigazi: ok, i can test it
10:16:20 I'm doing a sed to pull from our private registry
10:16:36 strigazi: btw, is cern using a private registry?
10:18:19 http://paste.openstack.org/show/718800/
10:19:06 flwang: yes, we have a gitlab deployment which has a container registry
10:19:19 strigazi: good to know, thanks
10:20:51 And I owe panfy the docs to build syscontainers
10:21:08 That's it from me
10:21:53 ok, my turn
10:22:01 Yeah, I have tried it, https://hub.docker.com/u/fengyunpan/
10:22:16 i have proposed the fix for NetworkManager working with calico
10:22:31 merging it into the same calico-node on k8s master patch
10:22:50 and i also have a fix for Prometheus' missing RBAC
10:23:15 flwang: do we need the networkmanager fix on the master node too?
10:23:21 Besides, i have another patch fixing the HA of the DNS pod
10:23:44 strigazi: we don't, because there is no user workload (pod) running on the master kubelet
10:24:21 in other words, there is no interface created by calico on the master node
10:25:46 flwang: would there be any implication in adding the same recipe on the master?
10:26:32 strigazi: no, i don't think so
10:26:49 flwang: does kubectl proxy work with this fix?
10:26:59 because the solution just tells NetworkManager not to manage the interfaces created by calico
10:27:03 calico will take care of them
10:27:09 strigazi: yes
10:27:22 no impact on kubectl proxy, based on my understanding
10:27:29 ok
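A minimal sketch of the kind of NetworkManager drop-in the fix above describes, telling NetworkManager to leave the calico-created interfaces alone; the file name and the interface-name patterns (cali* and tunl*) are assumptions here, the merged patch may use different ones:

    #!/bin/bash
    # Sketch only: stop NetworkManager from managing calico's interfaces.
    # The conf.d file name and the interface-name patterns are assumptions.
    set -e

    cat > /etc/NetworkManager/conf.d/calico.conf <<'EOF'
    [keyfile]
    unmanaged-devices=interface-name:cali*;interface-name:tunl*
    EOF

    # Pick up the new configuration without tearing down existing connections.
    systemctl reload NetworkManager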
10:27:45 is the dns autoscaler still WIP?
10:28:17 it's ready and I have tested it, https://review.openstack.org/#/c/555223/3/magnum/drivers/common/templates/kubernetes/fragments/core-dns-service.sh
10:28:27 but i'd like to get your opinions
10:28:40 because currently i'm using the same approach used by GKE
10:28:54 but TBH, i think a DaemonSet can achieve the same goal
10:29:42 in a 100-node cluster isn't it overkill?
10:29:55 to have a DS?
10:31:10 for that case, we can use the approach i'm proposing
10:31:22 we can define a scale strategy
10:31:34 flwang: where does this recipe for autoscaling come from? Is there a source?
10:31:51 but IMHO, using one pod for a prod k8s cluster is not best practice
10:32:06 strigazi: yes, wait a sec
10:32:27 strigazi: https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
10:32:44 GKE is using it as well, as i mentioned above
10:33:09 flwang: I think the autoscaler is better than a ds and better than a single pod as it is now.
10:33:40 strigazi: cool, then i will polish the autoscaling policy a bit to strike a balance
10:33:44 thanks for the feedback
10:34:59 'define scale strategy' sounds good.
10:35:53 panfy: thanks for the feedback
10:36:33 Would it make sense to have a generic strategy and then let the cluster admin modify it for their needs?
10:37:13 strigazi: the cluster admin can adjust the policy by changing the configmap dynamically
10:37:28 yes
10:37:41 "The ConfigMap provides the configuration parameters, allowing on-the-fly changes (including control mode) without rebuilding or restarting the scaler containers/pods."
10:37:55 so yes
10:38:15 yes, I mean, we won't have an option for this in magnum
10:38:34 i don't think it's necessary
10:38:40 we just need to document it somewhere
10:38:48 at least in a release note
10:39:14 i will add a release note anyway
10:39:25 and maybe a user guide
10:39:28 yes
10:39:50 thanks for reminding me
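A rough sketch of the cluster-proportional-autoscaler setup discussed above, in the style of a driver fragment script; it assumes the DNS deployment is named coredns in kube-system, and the image tag, ConfigMap name, and RBAC wiring (omitted here) are placeholders rather than what the patch under review actually ships:

    #!/bin/bash
    # Sketch only: scale the DNS deployment proportionally to cluster size.
    # Assumptions: the DNS deployment is Deployment/coredns in kube-system,
    # kubectl is already configured for the cluster, and a service account
    # with the required RBAC exists (RBAC omitted for brevity).
    set -e

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns-autoscaler
      namespace: kube-system
    data:
      # The scale strategy the cluster admin can tune on the fly.
      linear: '{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true,"min":2}'
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kube-dns-autoscaler
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: kube-dns-autoscaler
      template:
        metadata:
          labels:
            k8s-app: kube-dns-autoscaler
        spec:
          containers:
          - name: autoscaler
            # Image tag is a placeholder.
            image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.2
            command:
            - /cluster-proportional-autoscaler
            - --namespace=kube-system
            - --configmap=kube-dns-autoscaler
            - --target=Deployment/coredns
            - --logtostderr=true
            - --v=2
    EOF

With the linear mode above, editing the ConfigMap (for example lowering nodesPerReplica) resizes the DNS deployment without restarting the autoscaler, which matches the on-the-fly behaviour quoted in the discussion.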
10:41:04 flwang: any news on keystone auth?
10:41:20 strigazi: i have built a docker image
10:41:39 i'm wondering if we can publish it on the openstackmagnum account
10:41:58 and i'm about to write the code this week
10:42:11 the code should be easy, just some testing work
10:42:26 how is it going to run on the cluster?
10:42:44 as a docker container, a syscontainer, or a pod?
10:42:48 we can just run it as a pod
10:43:01 on the master node
10:43:02 ?
10:43:11 yes
10:43:35 so kubelet must always be present
10:43:45 probably, i haven't dug into the details; lingxian did the original test
10:43:56 i will double check with him
10:44:49 ok, thanks
10:45:40 Do you want to add anything else?
10:45:55 that's all from my side, thanks
10:47:02 BTW, can we add an option "enable_master_kubelet"? If enable_master_kubelet is true, run the kubelet syscontainer on the master, otherwise don't run kubelet on the master
10:48:31 In other words, why not run the kubelet syscontainer on the master?
10:48:33 panfy: we could, but if we need it for many use cases, we might just put it back
10:51:16 strigazi: +1
10:51:42 panfy: as strigazi said, we can revisit this later
10:51:56 I'll check what we can do with the size of the system containers and how we can do better
10:51:56 strigazi: +1 flwang: +1
10:52:04 but one of the topics we recently discussed is hiding the master node from the end user, so....
10:55:42 anyway, it's still under discussion, we will see
10:55:47 I found that kubelet and kube-proxy were removed from the master by https://review.openstack.org/#/c/514604/
10:56:42 panfy: yes, they were removed in queens
10:57:13 we had been running kubelet on the master nodes for a couple of releases only for the kubernetes components which were moved to syscontainers
10:57:42 just our luck, when I removed it, we started to need it again
10:57:50 Oh, I see, thx
10:58:08 yaaas, interesting
10:59:11 we are out of time, we can continue in the channel, thanks
10:59:17 #endmeeting
10:59:41 #endmeeting
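For the keystone auth work discussed above, one common way to run an external authenticator such as k8s-keystone-auth as a pod on the master is Kubernetes' webhook token authentication; the sketch below only illustrates that wiring, and the endpoint, port, and file path are placeholders, not anything agreed in the meeting:

    #!/bin/bash
    # Sketch only: point kube-apiserver at a keystone auth webhook running as
    # a pod on the master. The https://127.0.0.1:8443/webhook endpoint and the
    # file path are placeholders for whatever the keystone-auth pod exposes.
    set -e

    cat > /etc/kubernetes/keystone_webhook_config.yaml <<'EOF'
    apiVersion: v1
    kind: Config
    clusters:
    - name: webhook
      cluster:
        insecure-skip-tls-verify: true
        server: https://127.0.0.1:8443/webhook
    users:
    - name: webhook
    contexts:
    - name: webhook
      context:
        cluster: webhook
        user: webhook
    current-context: webhook
    EOF

    # kube-apiserver would then be started with (for example):
    #   --authentication-token-webhook-config-file=/etc/kubernetes/keystone_webhook_config.yaml
    # and, if keystone should also drive authorization:
    #   --authorization-mode=Node,RBAC,Webhook
    #   --authorization-webhook-config-file=/etc/kubernetes/keystone_webhook_config.yaml

Because the authenticator itself runs as a pod on the master, this is also why kubelet has to stay present there, as noted in the discussion.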