10:00:34 <strigazi_> #startmeeting containers
10:00:35 <openstack> Meeting started Tue Apr 10 10:00:34 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi_. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:38 <openstack> The meeting name has been set to 'containers'
10:00:50 <strigazi_> #topic Roll Call
10:02:08 <slunkad> hi
10:02:15 <strigazi> hello slunkad
10:03:00 <flwang> o/
10:03:10 <strigazi> Hello flwang
10:03:14 <flwang> strigazi: hello
10:03:27 <flwang> i need to catch up with you after the meeting ;)
10:03:45 <strigazi> flwang: sure :)
10:04:09 <strigazi> #topic Blueprints/Bugs/Ideas
10:04:51 <strigazi> I have potentially good news on our issue with eventlet flwang, this solution worked for me:
10:04:57 <strigazi> https://github.com/eventlet/eventlet/issues/172#issuecomment-379421165
10:05:25 <flwang> strigazi: great
10:05:51 <flwang> strigazi: if we can fix it in eventlet, then i can save my time to focus on the monitoring part
10:06:00 <strigazi> yes
10:07:02 <ricolin__> O/
10:07:32 <strigazi> From my side, I made some progress on upgrades and moved all minion software configs to software deployments
10:07:37 <strigazi> hi ricolin__
10:08:15 <flwang> strigazi: as for the upgrade spec, i left a comment recently
10:08:19 <strigazi> I'll push the patch after the meeting, slunkad it will be interesting for you, to rotate the trust in the cluster
10:08:28 <strigazi> flwang: I'll have a look
10:09:07 <strigazi> Also, I pushed the patch for adding flannel in the master node.
10:09:18 <slunkad> strigazi: hmm would we need to consider that for upgrades too?
10:09:18 <strigazi> #link https://review.openstack.org/#/c/558836/
10:09:18 <flwang> strigazi: generally, when we say 'upgrade', it means upgrading the version of the coe.
10:09:33 <flwang> but seems we don't have an API to expose that info to the end user
10:09:34 <strigazi> flwang: and OS
10:09:53 <flwang> strigazi: yes
10:10:12 <flwang> and seems it's not mentioned in your spec unless i missed it
10:10:16 <strigazi> yes, I'll push a WIP for the API reference
10:10:37 <flwang> strigazi: great, that's the thing i'm looking forward to
10:11:37 <panfy> Hi folks
10:12:07 <strigazi> slunkad: the main difference (in magnum context) between SoftwareConfig and deployment is that configs are immutable whereas deployments can change
10:12:30 <flwang> panfy: hello
10:13:32 <slunkad> ok
10:14:16 <strigazi> And one more bit on syscontainers: I did some testing on the speed at which the containers are pulled into the ostree storage and it is a bit slow. Could someone reproduce? I'm fetching the silly script I have
10:15:37 <flwang> strigazi: on nested virt or just bare metal?
10:15:52 <strigazi> flwang: nested virt
10:16:19 <flwang> strigazi: ok, i can test it
10:16:20 <strigazi> I'm doing a sed from our private registry
10:16:36 <flwang> strigazi: btw, cern is using a private registry?
10:18:19 <strigazi> http://paste.openstack.org/show/718800/
10:19:06 <strigazi> flwang: yes, we have a gitlab deployment which has a container registry
10:19:19 <flwang> strigazi: good to know, thanks
10:20:51 <strigazi> And I owe panfy the docs to build syscontainers
10:21:08 <strigazi> That's it from me
10:21:53 <flwang> ok, my turn
10:22:01 <panfy> Yeah, I have tried it, https://hub.docker.com/u/fengyunpan/
10:22:16 <flwang> i have proposed the fix for NetworkManager working with calico
10:22:31 <flwang> merging into the same calico-node on k8s master patch
10:22:50 <flwang> and i also have a fix for Prometheus missing RBAC
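[Editor's note: a fix for missing Prometheus RBAC typically amounts to objects like the following. The names and the exact rule list are illustrative of the pattern, not necessarily what flwang's patch adds.]

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources: ["nodes", "services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
```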
10:23:15 <strigazi> flwang: do we need the networkmanager fix in the master node too?
10:23:21 <flwang> Besides, i have another patch fixing the HA of DNS pod
10:23:44 <flwang> strigazi: we don't, because there is no user workload (pod) running on the master kubelet
10:24:21 <flwang> in other words, there is no interface created by calico on the master node
10:25:46 <strigazi> flwang: would there be any implication on adding the same recipe on the master?
10:26:32 <flwang> strigazi: no, i don't think so
10:26:49 <strigazi> flwang: does kubectl proxy work with this fix?
10:26:59 <flwang> because the solution just tells NetworkManager not to manage the interfaces created by calico
10:27:03 <flwang> calico will take care of them
10:27:09 <flwang> strigazi: yes
10:27:22 <flwang> no impact for kubectl proxy based on my understanding
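[Editor's note: the fix flwang describes boils down to a NetworkManager drop-in along these lines. The file path and interface patterns are the commonly documented ones for calico, given here as an illustration rather than the literal content of the patch.]

```ini
# /etc/NetworkManager/conf.d/calico.conf (illustrative path)
# Tell NetworkManager to leave calico's cali*/tunl* interfaces alone;
# calico manages their lifecycle itself.
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
```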
10:27:29 <strigazi> ok
10:27:45 <strigazi> the dns autoscaler is still in WIP?
10:28:17 <flwang> it's ready and I have tested, https://review.openstack.org/#/c/555223/3/magnum/drivers/common/templates/kubernetes/fragments/core-dns-service.sh
10:28:27 <flwang> but i'd like to get you guys opinion
10:28:40 <flwang> because currently i'm using the same way used by GKE
10:28:54 <flwang> but TBH, i think DaemonSet can achieve the same goal
10:29:42 <strigazi> in a 100-node cluster isn't it overkill?
10:29:55 <strigazi> to have a DS?
10:31:10 <flwang> for that case, we can use the way i'm proposing
10:31:22 <flwang> we can define scale strategy
10:31:34 <strigazi> flwang: where does this recipe for autoscaling come from? Is there a source?
10:31:51 <flwang> but IMHO, using one pod for a prod k8s cluster is not a best practice
10:32:06 <flwang> strigazi: yes, wait a sec
10:32:27 <flwang> strigazi: https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
10:32:44 <flwang> GKE is using it as well, as i mentioned above
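[Editor's note: the "scale strategy" under discussion is the linear mode of the cluster-proportional-autoscaler linked above. As a rough sketch, with illustrative parameter values, the replica count grows with cluster size instead of being one pod per node, which addresses strigazi's 100-node concern:]

```python
import math

def linear_replicas(cores, nodes,
                    cores_per_replica=256, nodes_per_replica=16, minimum=1):
    """Replica count in the autoscaler's linear mode: scale with whichever
    of total cores or total nodes demands more replicas, never below minimum.
    Parameter defaults here are examples, not magnum's settings."""
    return max(minimum,
               math.ceil(cores / cores_per_replica),
               math.ceil(nodes / nodes_per_replica))

# A 100-node cluster of 4-core minions gets 7 DNS replicas,
# not 100 as a DaemonSet would give.
print(linear_replicas(cores=400, nodes=100))  # -> 7
```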
10:33:09 <strigazi> flwang: I think the autoscaler is better than ds and better than a single pod as it is now.
10:33:40 <flwang> strigazi: cool, then i will polish the auto scaling policy a bit to get a balance
10:33:44 <flwang> thanks for the feedback
10:34:59 <panfy> 'define scale strategy' sounds good.
10:35:53 <flwang> panfy: thanks for the feedback
10:36:33 <strigazi> Would it make sense to have a generic strategy and then let the cluster admin modify it for their needs?
10:37:13 <flwang> strigazi: the cluster admin can adjust the policy by changing the configmap dynamically
10:37:28 <strigazi> yes
10:37:41 <flwang> "The ConfigMap provides the configuration parameters, allowing on-the-fly changes(including control mode) without rebuilding or restarting the scaler containers/pods."
10:37:55 <flwang> so yes
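[Editor's note: the on-the-fly configuration flwang quotes looks roughly like this ConfigMap. The `linear` key is the one described in the cluster-proportional-autoscaler README; the parameter values are illustrative.]

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
data:
  # Edited live by the cluster admin; the scaler picks changes up
  # without restarting its pod.
  linear: |-
    {"coresPerReplica": 256, "nodesPerReplica": 16, "min": 1,
     "preventSinglePointFailure": true}
```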
10:38:15 <strigazi> yes, I mean, we won't have an option for this in magnum
10:38:34 <flwang> i don't think it's necessary
10:38:40 <flwang> we just need to document it somewhere
10:38:48 <flwang> at least, in release note
10:39:14 <flwang> i will add a release note anyway
10:39:25 <flwang> and maybe a user guide
10:39:28 <strigazi> yes
10:39:50 <flwang> thanks for reminding me
10:41:04 <strigazi> flwang: any news on keystone auth?
10:41:20 <flwang> strigazi: i have built a docker image
10:41:39 <flwang> i'm thinking if we can publish it on openstackmagnum account
10:41:58 <flwang> and i'm about to write code this week
10:42:11 <flwang> the code should be easy, just some testing work
10:42:26 <strigazi> how is it going to run on the cluster?
10:42:44 <strigazi> as docker container,  syscontainer, pod?
10:42:48 <flwang> we can just run it as a pod
10:43:01 <strigazi> on the master node
10:43:02 <strigazi> ?
10:43:11 <flwang> yes
10:43:35 <strigazi> so kubelet must be always present
10:43:45 <flwang> probably, i haven't dug into the details, lingxian did the original test
10:43:56 <flwang> i will double check with him
10:44:49 <strigazi> ok, thanks
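[Editor's note: wiring a webhook authenticator such as k8s-keystone-auth into the apiserver generally involves a kubeconfig-style file plus a kube-apiserver flag. Everything below, paths and port included, is an illustrative assumption about the eventual setup, not flwang's actual work-in-progress.]

```yaml
# /etc/kubernetes/keystone_webhook_config.yaml (illustrative)
# kube-apiserver would then be started with
#   --authentication-token-webhook-config-file=/etc/kubernetes/keystone_webhook_config.yaml
apiVersion: v1
kind: Config
clusters:
- name: webhook
  cluster:
    certificate-authority: /etc/kubernetes/ca.crt
    server: https://127.0.0.1:8443/webhook
users:
- name: webhook
contexts:
- context:
    cluster: webhook
    user: webhook
  name: webhook
current-context: webhook
```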
10:45:40 <strigazi> Do you want to add anything else?
10:45:55 <flwang> that's all from my side, thanks
10:47:02 <panfy> BTW, can we add an option "enable_master_kubelet"? If enable_master_kubelet is true, run the kubelet syscontainer on the master, otherwise do not run kubelet on the master
10:48:31 <panfy> In other words, why not run kubelet syscontainer on master?
10:48:33 <strigazi> panfy: we could, but if we need it for many use cases, we might just put it back
10:51:16 <flwang> strigazi: +1
10:51:42 <flwang> panfy: as strigazi said, we can revisit this later
10:51:56 <strigazi> I'll check what we can do with the size of the system containers and how we can do better
10:51:56 <panfy> strigazi: +1 flwang: +1
10:52:04 <flwang> but, one of the topics we recently discussed is hiding the master node from the end user, so....
10:55:42 <flwang> anyway, it's still under discussion, will see
10:55:47 <panfy> I see that kubelet and kube-proxy were removed from the master by https://review.openstack.org/#/c/514604/
10:56:42 <flwang> panfy: yes, it's removed in queens
10:57:13 <strigazi> we had been running kubelet in the master nodes for a couple of releases only for the kubernetes components, which were moved to the syscontainers
10:57:42 <strigazi> being very lucky, when I removed it, we started to need it again
10:57:50 <panfy> Oh, I see, thx
10:58:08 <panfy> yaaas, interesting
10:59:11 <strigazi> we are out of time, we can continue in the channel, thanks
10:59:17 <strigazi> #endmeeting
10:59:41 <strigazi_> #endmeeting