09:59:46 #startmeeting containers
09:59:48 Meeting started Tue Apr 24 09:59:46 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:51 The meeting name has been set to 'containers'
09:59:59 #topic Roll Call
10:00:48 o/
10:00:53 o/
10:01:27 rochapor1o: waves from Wellington
10:02:09 flwang1: waves back from gva :)
10:02:33 say hi to everyone
10:02:47 #topic Blueprints/Bugs/Ideas
10:03:12 From my side, I discovered this one, which I'm also responsible for:
10:03:40 #link https://bugs.launchpad.net/magnum/+bug/1766284
10:03:42 Launchpad bug 1766284 in Magnum rocky "k8s_fedora: Kubernetes dashboard service account must not have any permissions" [Critical,In progress] - Assigned to Spyros Trigazis (strigazi)
10:03:55 strigazi: yep, that's a good catch
10:04:01 The fix is easy, just create an admin user
10:04:17 and use its token
10:05:57 Apart from that, I don't have updates from last week, eventlet is still stuck and I haven't updated my other patches, will do this week
10:06:59 strigazi: finished?
10:07:02 Just some input for flwang1: 548139 looks good, I'm testing it today
10:07:14 flwang1: yes, not much this week from me
10:07:25 strigazi: ok, from my side
10:07:52 I finally made the calico-on-master patch work, i think it's ready to go
10:08:21 and I'm still working on the keystone integration, need some help with the atomic install
10:08:35 flwang1: are we sure that we don't need kube-proxy on master?
10:08:45 besides, I've started to work on the health monitoring
10:09:02 strigazi: i tested it, so far i haven't seen any issue
10:09:13 dashboard, dns, heapster, all work
10:10:28 btw, i will take vacation from Wed to next Mon, back next Tue, just FYI
10:10:52 do you have any input for the health monitor? Are you working on the status?
10:11:05 Hi! I just created a bug: https://bugs.launchpad.net/magnum/+bug/1766546
10:11:05 Launchpad bug 1766546 in Magnum "Multi-Master deployments for k8s driver use different service account keys" [Undecided,New]
10:11:16 strigazi: yep, i'm working on adding 2 new fields for the schema
10:11:24 I would like to discuss the way to resolve this
10:11:39 sfilatov: we're in a meeting, can we discuss offline?
10:11:44 sfilatov: give us a few mins; rochapor1o fyi 1766546
10:12:03 flwang1: I pinged him to discuss it in open discussion
10:12:19 strigazi: cool
10:12:29 that's all on my side, thanks
10:12:47 We can work around it using ca.key so each server has the same artifact
10:12:49 Are the two fields in the spec?
10:12:58 strigazi: yes
10:13:09 flwang1: perfect
10:13:10 as we discussed last week
10:13:15 good
10:13:48 My concern is that it's not safe to expose it and we could generate this pair separately
10:13:59 strigazi: i'm very excited to see magnum is going to have auto-healing
10:13:59 And not use ca.key for serviceaccount tokens
10:15:02 flwang1: :)
10:15:13 flwang1: that's all from you, right?
10:15:26 yes
10:15:31 i'm clear
10:15:38 sfilatov:
10:16:24 looking
10:16:40 do you have a concern about using the ca.key in general, or just for the serviceaccount key?
10:17:36 strigazi: serviceaccount key. My concern is we definitely don't need it for the serviceaccount key
10:17:53 strigazi: So if we can avoid exposing it, we'd better not to
10:18:25 sfilatov: you mean, not set it at all?
10:18:56 sfilatov: and let kubernetes generate one? Have you tried it?
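[Editor's note: a minimal, hypothetical sketch of the dashboard fix strigazi describes at 10:04 for bug 1766284, i.e. leave the dashboard's own service account without permissions, create a separate admin service account, and log in to the dashboard with its token. It assumes the official `kubernetes` Python client and a reachable kubeconfig; the "kubernetes-dashboard-admin" name is illustrative, not necessarily what the Magnum patch uses.]

```python
# Sketch only: create an admin ServiceAccount and bind it to cluster-admin,
# so its token can be used to log in to the Kubernetes dashboard.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

# The ServiceAccount whose token the operator will paste into the dashboard login.
sa = {"metadata": {"name": "kubernetes-dashboard-admin", "namespace": "kube-system"}}
core.create_namespaced_service_account(namespace="kube-system", body=sa)

# Grant it cluster-admin via a ClusterRoleBinding (expressed as a plain dict manifest).
binding = {
    "metadata": {"name": "kubernetes-dashboard-admin"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "cluster-admin",
    },
    "subjects": [
        {"kind": "ServiceAccount",
         "name": "kubernetes-dashboard-admin",
         "namespace": "kube-system"},
    ],
}
rbac.create_cluster_role_binding(body=binding)
```

On Kubernetes releases of that era, a token secret is created automatically for the service account in kube-system; reading that token is what "use its token" refers to.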
10:19:01 strigazi: Not pass it to users' VMs
10:19:01 flwang1: i've started looking at auto healing
10:19:17 you'll be at the summit? we can discuss there
10:20:32 sfilatov: I'm confused, the problem is that we need the same key. And you suggest to use what as a key for service account tokens?
10:20:54 sfilatov: the goal is to have a separate cert/key pair for the kube services, isn't it? we can store it in the cert store (barbican or the default db) and load it from there
10:21:34 rochapor1o: no, i won't be there because of limited budget ;(
10:21:58 strigazi: Yes we need the same key. I suggest generating it. ca.key is a security issue - whoever has it could authenticate in our k8s cluster
10:22:50 rochapor1o: yes, though I'm not sure we need to store it at all. We could generate it and pass it through heat parameters
10:23:30 we already use barbican to store the cluster secrets
10:23:40 you would need them if you decide to update the number of masters later
10:23:50 or for upgrades
10:24:11 rochapor1o: yes you are right
10:24:14 rochapor1o: the key will be available in heat as a parameter
10:24:40 it will work when scaling as well
10:24:45 any impact if there is no barbican?
10:25:35 secrets go to the magnum db if there's no barbican (the default)
10:26:14 sfilatov: What I don't get is why it is a security concern, since the key will only be on master nodes, and the master nodes already have the api server certs.
10:26:58 strigazi: I suppose the api keys are not as important as the ca key
10:27:09 strigazi: Though I might be wrong
10:27:25 rochapor1o: yep, i know. but if saving it in the magnum db, magnum can only save 1 ca file
10:27:50 strigazi: I see that anyone who has the ca.key can authenticate as any user in magnum
10:27:59 sfilatov: we can double check then whether the current certs on the master node(s) are similar security risks
10:28:51 sfilatov: yes, the ca.key can give access to everything
10:29:30 flwang1: i think the db uses the same secret naming convention. if we change to have a special cert/keypair for the api server which is shared among all masters, a change for barbican will also work for the db case
10:29:36 sfilatov: let's take this offline in the bug then
10:29:52 strigazi: +1
10:30:04 let's discuss this offline
10:30:31 sfilatov: can you have a look at whether the other certs are a security risk, I hope they are not.
10:31:02 strigazi: Can I try to implement this fix if we decide to generate a key? :)
10:31:18 strigazi: Yes, I will check the other certs
10:31:23 sfilatov: and if so, we can generate a key just for this purpose
10:31:54 sfilatov: maybe it can also be used for the cert manager?
10:32:17 sfilatov: if we decide to go this way, go for it
10:32:23 strigazi: do you mean controller manager?
10:32:44 Merged openstack/magnum master: Add and improve tests for certificate manager https://review.openstack.org/552244
10:32:53 sfilatov: no, this one https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
10:33:44 strigazi: should we move on?
10:33:50 yes
10:35:58 flwang1: Do you have anything else for the meeting?
10:36:44 strigazi: there are only 2 milestones left for Rocky
10:37:08 we should review our current pipeline to see where we should pay more attention
10:37:20 to make sure we can meet our goals for Rocky
10:38:42 We can try to aim for rocky-2
10:39:04 flwang1: Do you have standing reviews apart from calico?
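[Editor's note: a sketch, not Magnum's actual implementation, of the idea discussed between 10:20 and 10:32 for bug 1766546: generate one dedicated RSA key pair per cluster for signing and verifying service-account tokens instead of reusing ca.key, and hand the same pair to every master, e.g. as a heat parameter or via the cert store. It assumes the `cryptography` library; file destinations and flag usage are noted in comments and are illustrative.]

```python
# Sketch only: generate a cluster-wide RSA key pair dedicated to service-account tokens.
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                               backend=default_backend())

# Private half: would be fed to kube-controller-manager's
# --service-account-private-key-file on every master, so all masters sign
# tokens with the same key.
sa_private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption(),
)

# Public half: would be fed to kube-apiserver's --service-account-key-file
# on every master, so any apiserver can verify any token.
sa_public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
```

The point of the discussion is that this pair never needs to leave the masters and is separate from ca.key, so exposing it does not let anyone mint arbitrary client certificates for the cluster.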
10:39:39 i'm blocked by the atomic command for keystone
10:39:51 flwang1: ok
10:40:01 after finishing keystone, i can focus on the monitoring stuff
10:40:13 which i think is the precondition for auto-healing
10:40:44 strigazi: and i'm also keen to review the upgrade PoC patch
10:41:23 ok, let's aim for Thursday to test
10:41:48 strigazi: and i have this tiny patch https://review.openstack.org/562454
10:41:53 flwang1: can you DM the upgrade PoC patch to me? I'm also interested
10:41:54 renaming some scripts
10:42:21 sfilatov: haha, i'm waiting for the magic link from strigazi ;)
10:42:46 I'll put you both in the loop
10:42:47 flwang1: looking forward to it :)
10:43:17 thanks folks
10:43:56 sfilatov: let's push this guy ;)
10:44:11 push is good
10:45:18 strigazi: and it would be nice if you can update the spec to reflect the api changes (show available versions)
10:45:22 Renaming things is not exciting, but I see the point
10:45:38 strigazi: i know it's not exciting, so no rush
10:46:16 strigazi: btw, another thing i mentioned before is the fluentd support
10:46:57 i think it's a low-hanging fruit, but it's not my first priority
10:47:10 in Rocky, i'd like to help get the auto-healing and upgrade done
10:47:15 flwang1: yes, did you create a bp or bug?
10:47:23 strigazi: not yet
10:47:39 i'm happy to create one in case there is a new contributor interested in it
10:48:07 Create a bp then
10:48:27 I hope we can find someone
10:48:29 strigazi: no problem
10:50:12 flwang1: do you want to discuss the atomic issue? Is there anything else?
10:50:39 strigazi: yep, i prefer to discuss the atomic issue offline
10:50:54 i don't want to waste others' time
10:51:33 ok
10:52:08 let's wrap this up then
10:52:16 strigazi: here you go https://blueprints.launchpad.net/magnum/+spec/fluentd
10:53:52 flwang1: thanks, can you add some more info? Will this be self-contained in the cluster?
10:54:19 Will it accept an endpoint as a parameter to push to different places?
10:54:21 strigazi: sure, i will add more info later
10:55:06 based on my understanding of fluentd, it's more like a logging agent, so we will start it with a daemonset
10:55:17 to make sure it can run on each node
10:56:39 Where will the data be stored?
10:57:03 Elasticsearch
10:57:27 In a provided endpoint? or inside the cluster?
10:58:03 strigazi: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
10:58:14 strigazi: in cluster
10:58:21 ok, self-contained then
10:58:50 yes
10:58:55 We can also look into the option to give an external es endpoint
10:58:58 i'm also referring to this https://kubernetes.io/docs/concepts/cluster-administration/logging/
10:59:06 strigazi: absolutely
10:59:35 i can imagine cern has already got an elasticsearch setup
10:59:46 time is up, we can discuss in the channel
10:59:52 yes
10:59:53 sure
10:59:59 ending the meeting then, thanks :)
11:00:04 #endmeeting
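[Editor's note: a minimal sketch of the fluentd idea discussed at the end of the meeting: run fluentd as a DaemonSet so one logging agent lands on every node and ships logs to an in-cluster Elasticsearch. It assumes the official `kubernetes` Python client; the image tag, names, and mounts are illustrative placeholders, not what the Magnum blueprint will actually ship.]

```python
# Sketch only: deploy a fluentd logging agent on every node via a DaemonSet.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

daemonset = {
    "metadata": {"name": "fluentd", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"app": "fluentd"}},
        "template": {
            "metadata": {"labels": {"app": "fluentd"}},
            "spec": {
                "containers": [{
                    "name": "fluentd",
                    # Placeholder image; a real deployment would pin a specific
                    # fluentd-elasticsearch image and configure the ES endpoint.
                    "image": "fluent/fluentd-kubernetes-daemonset:elasticsearch",
                    # Mount the node's log directory so the agent can tail it.
                    "volumeMounts": [{"name": "varlog", "mountPath": "/var/log"}],
                }],
                "volumes": [{"name": "varlog",
                             "hostPath": {"path": "/var/log"}}],
            },
        },
    },
}
apps.create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
```

Whether Elasticsearch runs inside the cluster (self-contained, as agreed above) or at an external endpoint would only change the fluentd output configuration, not the DaemonSet pattern itself.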