10:00:01 #startmeeting containers
10:00:02 Meeting started Tue Mar 13 10:00:01 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:05 The meeting name has been set to 'containers'
10:00:08 #topic Roll Call
10:00:22 o/
10:01:01 o/
10:01:16 hi
10:01:31 Meeting agenda:
10:01:34 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-03-13_1000_UTC
10:02:58 We don't have to review action items or announcements, so move to:
10:03:01 #topic Blueprints/Bugs/Ideas
10:04:12 From my side, I had to report back on upgrades, the trust rotation issue, and test the dashboard with calico and flannel.
10:05:03 I managed to work only on the dashboard last week and I have some findings:
10:06:10 The following input comes from our production deployment. We have been running Queens since Wednesday \o/
10:06:43 nice!
10:06:49 congrats!
10:07:19 1. to access the dashboard using kubectl proxy over the api server, we need flannel or calico to be running on the master node(s)
10:08:39 running flannel is just one line to run it as a system container, since the config is already there. For calico, we will need kubelet back on the master, probably kube-proxy as well.
10:08:54 2. heapster and ipv6 :)
10:09:27 strigazi: sorry, do you mean with flannel? do we still need a fix to make the dashboard work?
10:09:46 to work with kubectl proxy, yes
10:10:18 and the fix is running flannel on the master?
10:10:27 yes
10:10:33 strigazi: ok, thanks
10:10:39 unfortunately, yes
10:11:20 strigazi: does that mean flannel+dashboard never worked before?
10:11:34 with kubectl proxy
10:11:35 flwang: not never
10:12:32 flwang: it used to work; one of the changes, f27 or 1.9.3, broke kubectl proxy
10:13:21 ok
10:13:25 I can track it down
10:13:43 strigazi: ok, you know my fix for calico is there
10:15:01 fyi, we had the dashboard with flannel working for at least two years
10:15:30 strigazi: hah, that doesn't matter now ;o)
10:17:26 It does, I'll push a fix upstream
10:18:58 ipv6+heapster: I found out that kubernetes takes the node IPs from the cloud provider
10:20:51 when i say 'it doesn't matter', i mean 'at least two years' doesn't matter, because it's broken now and we need to fix it asap
10:21:08 In kubelet we use all the interfaces of the node, which is where you would expect the node IPs to come from; however, the IPs advertised to heapster come from the cloud provider and the metadata server.
10:21:40 And if you have IPv6 IPs, heapster will try to use those and will likely fail.
10:22:13 I don't know what the impact of this is on other clouds. Downstream we will set the IPv4 address in kubelet.
10:22:34 flwang: slunkad: have you used kubernetes with ipv6?
10:22:45 strigazi: no
10:22:46 strigazi: nope
10:23:09 nice
10:23:44 That's a good exercise that we will have to do at some point.
10:24:21 That's it from me, I'll carry my items over to next week.
10:24:24 if we have no use case atm for ipv6, can't we use ipv4 upstream too?
10:24:51 for heapster by default, I mean
10:25:18 slunkad: I think we should have more input on this
10:25:26 sure
10:26:01 slunkad: last week you said you are interested in trust rotation, do you have time to take it?
10:26:24 strigazi: yes I can
10:26:38 can you paste the link to the bug again please?
10:26:47 #link https://bugs.launchpad.net/magnum/+bug/1752433
10:26:48 Launchpad bug 1752433 in Magnum "trust invalid when user is disabled" [Undecided,New]
10:27:40 I'm adding action items for next week:
10:27:49 #action strigazi to report back on cluster upgrades
10:28:27 #action slunkad to report on "trust invalid when user is disabled" https://bugs.launchpad.net/magnum/+bug/1752433
10:28:28 Launchpad bug 1752433 in Magnum "trust invalid when user is disabled" [Undecided,New] - Assigned to Sayali Lunkad (sayalilunkad)
10:29:01 #action strigazi to push a fix for flannel + dashboard over kubectl proxy
10:29:21 That's it from me
10:29:43 On my side, about the docs: I have added the networking stuff discussed at the PTG here: https://review.openstack.org/#/c/552099. Feel free to leave comments on it. Also, as for the restructuring, I think I will start by moving the terminology out of the user guide into its own guide.
10:30:03 slunkad: cool
10:30:33 I'll take a look today
10:31:33 #action slunkad Factor out the terminology guid
10:31:36 #undo
10:31:37 Removing item from minutes: #action slunkad Factor out the terminology guid
10:31:39 #action slunkad Factor out the terminology guide
10:32:53 flwang: you had to check cotyledon as a replacement and test calico. Any findings for the meeting?
10:33:41 strigazi: no, sorry, i was working on the calico+proxy issue
10:33:58 i will check cotyledon this week
10:34:10 flwang: do we need kube-proxy on the master node too?
10:34:35 strigazi: no
10:34:38 i tested
10:34:51 #action flwang "Investigate cotyledon as a replacement of oslo.service"
10:35:43 flwang: I'm not sure if we are going to need it in general, though
10:37:05 A kubernetes node always has both services running
10:37:21 strigazi: i will give it a double check
10:37:36 but i don't think we really need it if it's not necessary
10:38:39 strigazi: btw, i'm testing prometheus monitoring and ingress now
10:39:09 Can you check when we might need it apart from the calico+kubectl proxy use case?
10:39:42 flwang: prometheus or heapster+influx+grafana?
10:39:50 prometheus
10:40:19 flwang: We might want to move it into a different namespace than kube-system
10:41:08 you mean when we need kube-proxy on the master?
10:41:21 flwang: yes
10:41:30 now, is there any preference in the market between prometheus and heapster+influx+grafana?
10:41:46 strigazi: no problem, will do the investigation
10:43:00 flwang: prometheus can be used to monitor services, and we can offer it in the cluster so users can leverage and modify it.
10:43:13 based on my understanding, we need kube-proxy on the master only if there is a pod on the master that needs to be accessed through kubectl proxy
10:45:23 heapster+influx+grafana offers more static content for utilization metrics. Prometheus has more capabilities. AFAIK prometheus is preferred for storing monitoring data for applications. heapster+influx+grafana is a proposed solution from the heapster team and it is complementary to kubernetes-dashboard
10:48:28 I think we covered everything from the last meeting
10:48:33 #topic Open Discussion
10:48:34 strigazi: thanks for the clarification
10:48:59 Any other business?
10:50:19 We can wrap up 10 minutes early then
10:50:36 Thanks slunkad flwang
10:50:42 strigazi: i'm good.
10:50:58 me too, thanks!
10:51:06 #endmeeting
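
Editor's note on 10:07:19 (dashboard access via kubectl proxy over the API server): the access path under discussion, sketched below. It assumes the standard kubernetes-dashboard deployment in the kube-system namespace; the https: prefix in the service path applies to dashboard versions that serve TLS and may differ per cluster.

    # Open a local proxy to the cluster's API server (needs a valid kubeconfig).
    kubectl proxy --port=8001 &

    # Reach the dashboard through the API server's service proxy. This is the
    # hop that breaks when no network agent (flannel/calico) runs on the
    # master: the API server cannot route to the dashboard pod's overlay IP.
    curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/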
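
Editor's note on 10:08:39 (running flannel as a system container on the master): a minimal sketch of what that one-liner could look like on a Fedora Atomic host; the image reference is a placeholder, not the exact value Magnum uses.

    # Sketch, assuming a Fedora Atomic master with the flannel config already
    # in place: install flannel as a system container and start its unit.
    atomic install --system --name=flannel <registry>/flannel:<tag>
    systemctl start flannel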
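
Editor's note on the ipv6+heapster discussion (10:18:58 to 10:22:13): one way to "set the ipv4 to kubelet", as strigazi describes doing downstream, is kubelet's --node-ip flag. The flag exists in kubelet; the address and the config location below are illustrative assumptions only.

    # In the kubelet args file (e.g. /etc/kubernetes/kubelet on Fedora-based
    # images; the path varies by distro), pin the address the node advertises
    # to a specific IPv4, so heapster is not handed an IPv6 address taken from
    # the cloud provider / metadata server. 10.0.0.5 is a placeholder.
    KUBELET_ARGS="--node-ip=10.0.0.5"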
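
Editor's note on 10:40:19 (moving prometheus out of kube-system): a minimal sketch with kubectl. The 'monitoring' namespace name and the prometheus.yaml manifest are assumptions for illustration; nothing specific was decided in the meeting.

    # Keep monitoring components separate from the cluster system components
    # that live in kube-system.
    kubectl create namespace monitoring

    # Deploy prometheus into the dedicated namespace (manifest is hypothetical).
    kubectl apply -f prometheus.yaml --namespace=monitoring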