10:02:01 #startmeeting containers
10:02:02 Meeting started Tue Jul 24 10:02:01 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:02:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:02:04 #topic Roll Call
10:02:05 The meeting name has been set to 'containers'
10:02:08 strigazi: if so, there are probably some regression issues
10:02:20 we may need to review the changes between 1.11.1 and 1.11.0
10:02:24 o/
10:02:28 o/
10:03:39 #topic Blueprints/Bugs/Ideas
10:03:42 strigazi: seems it is always only you and me in this meeting
10:04:06 #topic meeting time
10:04:07 maybe we can merge the two meetings
10:04:35 hi
10:05:01 I could do late night for me, early for you and normal for blizzard?
10:05:23 slunkad: hello
10:06:23 flwang1: 2100 UTC ?
10:07:11 strigazi: 2100 UTC means 10AM, IIRC
10:07:14 for NZ time
10:07:57 1400 for west coast
10:08:14 0900 for NZ
10:08:30 2300 Europe
10:09:53 flwang1: thoughts?
10:10:02 works for me
10:10:09 for europe I'm present in working hours
10:10:18 I can set up something like office hours
10:10:32 Tuesday mornings
10:10:32 strigazi: that would be nice
10:10:52 So if someone wants something they can find me for sure.
10:11:11 IMO, for our case we should push things in the ML
10:11:49 slunkad: what do you think about office hours?
10:11:54 totally agree, we can put more of our discussion in the ML
10:12:21 it is like a meeting but without minutes. it is still logged.
10:12:34 strigazi: what do you mean, like setting it up as your status on irc?
10:13:48 no, in the wiki page I'll post that this time and day someone will be on the channel
10:14:02 well for europe, me
10:14:08 ok ya that sounds good
10:15:21 strigazi: i can cover the NZ/AP time if it's helpful
10:15:52 ok then, Tuesdays at 1300 UTC europe, maybe east coast for me
10:16:06 flwang1: for you?
10:16:33 you can pick a time that you are online and it is daytime :)
10:16:43 strigazi: yep
10:16:52 does that mean the thursday meeting will not happen?
10:16:56 probably my Thursday morning
10:17:50 we can move the meeting on Tuesdays 2100 or 2200 UTC
10:18:50 2100 UTC is a go?
10:19:24 flwang1: ^^
10:19:35 strigazi: works for me
10:20:42 #agreed meeting moves to Tuesdays 2100 UTC
10:20:48 We can do it today
10:20:56 tomorrow for you flwang1
10:21:18 strigazi: sure
10:22:13 Next week I'll be on holidays. We can still have a meeting and flwang1 can chair it?
10:22:25 strigazi: no problem
10:22:46 i will call you if there is a question i can't answer
10:22:55 :)
10:23:24 office hours Tuesdays at 1300 UTC for me
10:24:15 flwang1: do you want to set office hours?
10:24:38 strigazi: let me check the UTC time of mine
10:26:33 Wed UTC 10:00PM - 11:00PM
10:26:56 pm?
10:27:51 PM means my AM, and it may work for others, like afternoon
10:27:54 i don't know
10:28:11 or i can put mine later
10:28:12 oh, so 2200 UTC, ok
10:28:29 sounds good
10:29:57 #agreed office hours for strigazi Tuesdays at 1300 UTC and Wednesdays 2200 UTC for flwang1
10:30:14 cool
10:30:35 #topic Blueprints/Bugs/Ideas
10:31:34 For me, I'll push to finish the upgrade API to have it in rocky, server and client by Friday. flwang1 I'll need your help for reviews.
10:31:58 strigazi: no problem, i'm keen to review it
10:33:05 The implementation will do in-place upgrades; I haven't managed to do the replace with draining.
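A minimal sketch of reproducing the v1.11.0 vs v1.11.1 comparison discussed above, using Magnum's kube_tag label to select the Kubernetes version on devstack; the image, network, and flavor names here are placeholders, not taken from the log:

    # Template pinned to the suspect release via the kube_tag label
    openstack coe cluster template create k8s-v1.11.1 \
        --coe kubernetes \
        --image fedora-atomic-latest \
        --external-network public \
        --master-flavor m1.small \
        --flavor m1.small \
        --labels kube_tag=v1.11.1

    # Small throwaway cluster; repeat with kube_tag=v1.11.0 to compare
    openstack coe cluster create test-v1.11.1 \
        --cluster-template k8s-v1.11.1 \
        --node-count 1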
10:34:12 And secondly, I'll investigate the issue with kube v1.11.1
10:34:41 v1.11.0 and v1.11.1 work at the CERN cloud and they pass the conformance tests.
10:35:08 v1.11.0 works on devstack, but v1.11.1 doesn't
10:35:30 There is an issue with RBAC or certs
10:35:51 I might try a devstack with designate enabled.
10:36:35 The only big difference is that in production we have DNS and authentication is done with the node names.
10:36:47 node name == hostname
10:37:02 kube node name == hostname == nova vm name
10:37:43 that's possible
10:37:54 what is possible?
10:38:04 i mean maybe related to DNS
10:38:25 that it is for the millionth time DNS? :)
10:38:42 no, maybe related to the hostname or something
10:38:50 http://i.imgur.com/eAwdKEC.png
10:39:00 I couldn't resist
10:39:45 I'm investigating, it is a good dive into how auth works in k8s
10:39:52 i like the painting, typical chinese painting
10:40:24 strigazi: yep, as for auth, did you ever put some effort into the best practices of k8s security?
10:40:38 since I didn't see there is a sig-security in the k8s community
10:40:51 so i'm wondering if there is a team caring about it
10:41:17 AFAIK we are doing the best possible. apart from selinux...
10:41:58 after adding calico we covered the policy part too.
10:41:58 selinux is another topic, i don't think we still need to disable it, right?
10:42:12 we could have a label
10:42:27 if selinux is on
10:42:46 users will need to modify their templates with the appropriate labeling
10:43:10 the security-context in the pod spec
10:44:27 https://kubernetes.io/docs/setup/independent/install-kubeadm/
10:44:32 Disabling SELinux by running setenforce 0 is required to allow containers to access the host filesystem, which is required by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
10:45:04 I have bypassed all the issues with selinux on but I'm not very confident in having it on
10:46:11 strigazi: ok, is kubeadm doing the same?
10:46:16 yeap
10:46:38 but it works with selinux on
10:47:04 we can follow up on this offline with help from #fedora
10:48:22 strigazi: ok, cool
10:48:30 To conclude, the clusters in queens and master are secure. After that, it is on the cluster admin to deploy apps securely.
10:49:37 that is it from me.
10:49:42 cool
10:49:53 I have a question: how do we cope with upgrades in case one of the k8s manifests needs to be changed?
10:50:32 For example when I upgraded k8s from 1.9.8 to 1.10 I faced an issue with the grafana manifest
10:50:53 the image version for grafana needs to be bumped
10:51:17 Will the new upgrade logic allow us to upgrade this version?
10:51:25 versions are easy to bump
10:51:48 wherever we have a label with the tag
10:51:55 strigazi: as long as we pass it as a label i think
10:52:15 the agent will run apply and bump it.
10:52:18 but for grafana, IIRC, we hardcode the version
10:52:32 for grafana yes
10:52:43 So upgrade should support versions?
10:52:53 i mean labels*
10:52:56 yes
10:53:04 since kube_tag is a label
10:53:36 the same applies for all
10:54:34 So the idea is to pass all the needed labels
10:54:42 openstack coe cluster upgrade \
10:54:42 --masters \
10:54:42 --rollback \
10:54:44 --batch-size \
10:54:46 --parameters key1=val1,...,keyN=valN
10:54:50 like in this spec?
10:54:55 sfilatov_: can you push a patch for the missing ones?
10:55:16 yes, I can do that
10:55:41 coredns and the monitoring ones
10:55:50 OK
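Given the spec quoted above, an upgrade that bumps the Kubernetes version together with an add-on might look like the following; the flags come from the spec under review, not a merged CLI, and grafana_tag is a hypothetical label (grafana's version was still hardcoded at the time):

    # Rolling upgrade, one node at a time, bumping two version labels
    openstack coe cluster upgrade mycluster \
        --batch-size 1 \
        --parameters kube_tag=v1.11.1,grafana_tag=5.1.3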
10:56:22 Also, could you tell me the status of cluster_healing and nodepools?
10:56:54 Looks like they are on hold
10:57:09 Are there any plans on getting back to them?
10:57:13 we can have the monitoring in (which is required for healing)
10:57:49 nodegroups won't make it in Rocky
10:58:15 for autohealing we can have it
10:58:32 didn't quite get it
10:58:44 we are talking about heat autohealing, right?
10:58:50 no
10:59:08 magnum will monitor the cluster nodes
10:59:33 is there a patch for this?
10:59:35 this is the 1st part, it will basically ask the api if the nodes are ok.
10:59:38 I haven't seen it
10:59:47 flwang1: has one
10:59:48 Kubernetes API?
10:59:51 yes
11:00:06 sfilatov_: wait a sec
11:00:19 https://review.openstack.org/570818
11:00:26 https://review.openstack.org/572897
11:00:27 I guess it's okay for the first part, but generally it would be better to get notifications from it
11:00:38 flwang1: thx!
11:00:53 strigazi: >nodegroups won't make it in Rocky
11:00:57 sfilatov_: no problem
11:00:57 sfilatov_: openstack notifications?
11:01:11 sfilatov_: user notifications?
11:01:27 these are veeery different things.
11:01:37 sfilatov_: i think magnum can send openstack notifications out for the status change
11:01:54 flwang1: ^^ this is easy and very doable
11:02:05 strigazi: yep
11:02:54 we have a lot of issues
11:02:55 sfilatov_: we have someone that just started working again on nodegroups but there is space for more people. In any case nodegroups will be in the next release (Stein)
11:03:04 sfilatov_: what issues?
11:03:12 when the K8s api is not available
11:03:16 for some reason
11:03:43 that's why I like it better when nodes send notifications
11:03:56 but I guess we can talk about it in the patch set
11:04:04 we can ask kubelet:
11:04:29 curl https://$KUBELET_IP:10250/healthz == 'ok'
11:04:36 yeah
11:04:43 polling
11:04:44 I'll check the patches then
11:04:47 not pushing
11:04:51 and will comment on it offline
11:05:04 ok
11:05:05 strigazi: And if you plan on adding push-upgrades
11:05:15 how do you implement master/minions?
11:05:33 I mean I thought you need some kind of nodepools for that
11:05:36 I'll ping you in the review.
11:05:44 thx
11:05:53 we have two now, master and minion
11:06:15 Let's wrap them. I'll send a summary in the ML and cc you
11:06:23 strigazi: thanks
11:06:24 We need to use the ML
11:06:29 thx
11:06:42 before closing
11:06:50 currently, i just finished the multi-region issue and am mainly focused on our magnum deployment
11:07:09 sfilatov_: flwang1 slunkad if you have time, test v1.11.0, it seems to work well
11:07:22 I'll continue on v1.11.1
11:07:27 strigazi: it's on my list now
11:07:30 flwang1: great news \o/
11:07:49 strigazi: yep, i'm so excited about that
11:08:10 we are going multi-region here too, we'll need to talk
11:08:40 let's wrap, we are 9 mins late
11:08:44 ok?
11:09:15 said once
11:09:43 said twice
11:09:59 Thanks for joining folks!
11:10:03 #endmeeting
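A footnote to the health-monitoring discussion: the kubelet healthz check mentioned at 11:04:29, expanded into a minimal polling sketch. The endpoint and port are taken from the log; the node IPs are placeholders, and -k is assumed because the kubelets serve self-signed or cluster-CA certificates:

    # Poll each kubelet's /healthz endpoint (port 10250) and report status
    for KUBELET_IP in 10.0.0.5 10.0.0.6; do    # placeholder node IPs
        STATUS=$(curl -sk "https://$KUBELET_IP:10250/healthz")
        if [ "$STATUS" = "ok" ]; then
            echo "$KUBELET_IP: healthy"
        else
            echo "$KUBELET_IP: unhealthy ($STATUS)"
        fi
    done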