10:00:01 <strigazi> #startmeeting containers
10:00:02 <openstack> Meeting started Tue Mar 13 10:00:01 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:05 <openstack> The meeting name has been set to 'containers'
10:00:08 <strigazi> #topic Roll Call
10:00:22 <strigazi> o/
10:01:01 <flwang> o/
10:01:16 <slunkad> hi
10:01:31 <strigazi> Meeting agenda:
10:01:34 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-03-13_1000_UTC
10:02:58 <strigazi> We don't have action items or announcements to review, so let's move to:
10:03:01 <strigazi> #topic Blueprints/Bugs/Ideas
10:04:12 <strigazi> From my side, I had to report back on upgrades and the trust rotation issue, and to test the dashboard with calico and flannel.
10:05:03 <strigazi> I managed to work only on the dashboard last week and I have some findings:
10:06:10 <strigazi> The following input comes from our production deployment. We have been running Queens since Wednesday \o/
10:06:43 <slunkad> nice!
10:06:49 <flwang> congrats!
10:07:19 <strigazi> 1. to access the dashboard using kubectl proxy over the api server we need flannel or calico to be running on the master node(s)
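For reference, dashboard access through kubectl proxy typically looks like the sketch below; the service name, namespace and URL form are the upstream dashboard defaults and may differ per deployment:

    # start a local proxy to the API server (kubeconfig must already point at the cluster)
    kubectl proxy --port=8001
    # then open the dashboard through the API server's service proxy:
    # http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/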
10:08:39 <strigazi> Running flannel is just one line, since we run it as a system container and the config is already there. For calico, we will need kubelet back on the master, and probably kube-proxy as well.
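The flannel one-liner referred to here would be roughly the following; the image name, tag and unit name are placeholders, and the exact atomic invocation varies by Magnum release and Fedora Atomic version:

    # run flannel as a system container on the master, reusing the flannel config already on the node
    atomic install --system --name=flanneld docker.io/openstackmagnum/flannel:<tag>
    systemctl start flanneld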
10:08:54 <strigazi> 2. heapster and ipv6 :)
10:09:27 <flwang> strigazi: sorry, do you mean that with flannel we still need a fix to make the dashboard work?
10:09:46 <strigazi> to work with kubectl proxy, yes
10:10:18 <flwang> and the fix is running flannel on master?
10:10:27 <strigazi> yes
10:10:33 <flwang> strigazi: ok, thanks
10:10:39 <strigazi> unfortunately, yes
10:11:20 <flwang> strigazi: does that mean the flannel+dashboard never worked before?
10:11:34 <flwang> with kube proxy
10:11:35 <strigazi> flwang: not never
10:12:32 <strigazi> flwang: it used to work, one of the changes, f27 or 1.9.3 broke kubectl proxy
10:13:21 <flwang> ok
10:13:25 <strigazi> I can track it down
10:13:43 <flwang> strigazi: ok, you know my fix for calico is there
10:15:01 <strigazi> fyi, we had the dashboard with flannel working for at least two years
10:15:30 <flwang> strigazi: hah, that doesn't matter now ;0)
10:17:26 <strigazi> It does, I'll push a fix upstream
10:18:58 <strigazi> ipv6+heapster: I found out that kubernetes takes the node IPs from the cloud provider
10:20:51 <flwang> when i say 'it doesn't matter', i mean 'at least two years' doesn't matter, because it's broken now and we need to fix it asap
10:21:08 <strigazi> In kubelet we use all interfaces of the node, so you would expect it to take the IPs from the node itself; however, the IPs are advertised to heapster from the cloud provider and the metadata server.
10:21:40 <strigazi> And if you have IPv6 IPs, heapster will try to use those and will likely fail.
10:22:13 <strigazi> I don't know what the impact of this is on other clouds. Downstream we will set the IPv4 address in kubelet.
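A sketch of that downstream workaround: kubelet accepts a --node-ip flag that pins the address it registers, so forcing the node's IPv4 address there keeps heapster away from the IPv6 one. The file path and variable below follow the usual sysconfig layout and are assumptions:

    # /etc/kubernetes/kubelet (path varies by image); 10.0.0.5 is a placeholder for the node's IPv4 address
    KUBELET_ARGS="--node-ip=10.0.0.5 ${KUBELET_ARGS}"
    systemctl restart kubelet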
10:22:34 <strigazi> flwang: slunkad have you used kubernetes with ipv6?
10:22:45 <flwang> strigazi: no
10:22:46 <slunkad> strigazi: nope
10:23:09 <strigazi> nice
10:23:44 <strigazi> That's a good exercise that we will have to do at some point.
10:24:21 <strigazi> That's it from me, I'll carry my items for next week.
10:24:24 <slunkad> if we have no use case atm for ipv6 can't we use ipv4 upstream too?
10:24:51 <slunkad> for heapster by default I mean
10:25:18 <strigazi> slunkad: I think we should have more input on this
10:25:26 <slunkad> sure
10:26:01 <strigazi> slunkad: last week you said you are interested in trust rotation, do you have time to take it?
10:26:24 <slunkad> strigazi: yes I can
10:26:38 <slunkad> can you paste the link to the bug again please
10:26:47 <strigazi> #link https://bugs.launchpad.net/magnum/+bug/1752433
10:26:48 <openstack> Launchpad bug 1752433 in Magnum "trust invalid when user is disabled" [Undecided,New]
10:27:40 <strigazi> I'm adding action items for next week:
10:27:49 <strigazi> #action strigazi to report back on cluster upgrades
10:28:27 <strigazi> #action slunkad to report on "trust invalid when user is disabled" https://bugs.launchpad.net/magnum/+bug/1752433
10:28:28 <openstack> Launchpad bug 1752433 in Magnum "trust invalid when user is disabled" [Undecided,New] - Assigned to Sayali Lunkad (sayalilunkad)
10:29:01 <strigazi> #action strigazi to push a fix for flannel + dashboard over kubectl proxy
10:29:21 <strigazi> That's it from me
10:29:43 <slunkad> On my side, about the docs: I have added the networking stuff discussed at the PTG here https://review.openstack.org/#/c/552099. Feel free to leave comments on it. Also, for the restructuring, I think I will start by moving the terminology out of the user guide into its own guide
10:30:03 <strigazi> slunkad: cool
10:30:33 <strigazi> I'll take a look today
10:31:33 <strigazi> #action slunkad Factor out the terminology guid
10:31:36 <strigazi> #undo
10:31:37 <openstack> Removing item from minutes: #action slunkad Factor out the terminology guid
10:31:39 <strigazi> #action slunkad Factor out the terminology guide
10:32:53 <strigazi> flwang: you had to check cotyledon as a replacement and test calico. Any findings for the meeting?
10:33:41 <flwang> strigazi: no, sorry, i was working on the calico+proxy issue
10:33:58 <flwang> i will check cotyledon this week
10:34:10 <strigazi> flwang: do we need kube-proxy too in the master node?
10:34:35 <flwang> strigazi: no
10:34:38 <flwang> i tested
10:34:51 <strigazi> #action flwang "Investigate cotyledon as a replacement of oslo.service"
10:35:43 <strigazi> flwang: I'm not sure if we are going to need it in general though
10:37:05 <strigazi> A kubernetes node always has both services running
10:37:21 <flwang> strigazi: i will give it a double check
10:37:36 <flwang> but i don't think we really need it if it's not necessary
10:38:39 <flwang> strigazi: btw, i'm testing prometheus monitoring and ingress now
10:39:09 <strigazi> Can you check when we might need it apart from the calico+kubectl proxy use case?
10:39:42 <strigazi> flwang: prometheus or heapster+influx+grafana?
10:39:50 <flwang> prometheus
10:40:19 <strigazi> flwang: We might want to move it to a different namespace than kube-system
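One way to do that, sketched below; the "monitoring" namespace name and the manifest file name are only examples:

    # create a dedicated namespace and deploy the monitoring stack into it instead of kube-system
    kubectl create namespace monitoring
    kubectl apply -n monitoring -f prometheus.yaml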
10:41:08 <flwang> you mean when we need  kube-proxy on master?
10:41:21 <strigazi> flwang: yes
10:41:30 <flwang> now is there any preference in the market between prometheus and heapster+influx+grafana?
10:41:46 <flwang> strigazi: no problem, will do the investigation
10:43:00 <strigazi> flwang: prometheus can be used to monitor services, and we can offer it in the cluster so users can leverage and modify it.
10:43:13 <flwang> based on my understanding, we need kube-proxy on the master only if there is a pod on the master that needs to be accessed via kube proxy
10:45:23 <strigazi> heapster+influx+grafana offers more static content for utilization metrics. Prometheus has more capabilities. AFAIK prometheus is preferred for storing monitoring data for applications. heapster+influx+grafana is a proposed solution from the heapster team and it is complementary to kubernetes-dashboard
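For illustration, a minimal Prometheus configuration using Kubernetes service discovery looks roughly like this; authentication and relabelling rules are omitted and the job names are examples:

    # prometheus.yml
    global:
      scrape_interval: 30s
    scrape_configs:
      - job_name: kubernetes-nodes
        kubernetes_sd_configs:
          - role: node
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod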
10:48:28 <strigazi> I think we covered everything from the last meeting
10:48:33 <strigazi> #topic Open Discussion
10:48:34 <flwang> strigazi: thanks for clarification
10:48:59 <strigazi> Any other business?
10:50:19 <strigazi> We can wrap 10 minutes early then
10:50:36 <strigazi> Thanks slunkad flwang
10:50:42 <flwang> strigazi: i'm good.
10:50:58 <slunkad> me too, thanks!
10:51:06 <strigazi> #endmeeting