10:02:01 <strigazi> #startmeeting containers
10:02:02 <openstack> Meeting started Tue Jul 24 10:02:01 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:02:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:02:04 <strigazi> #topic Roll Call
10:02:05 <openstack> The meeting name has been set to 'containers'
10:02:08 <flwang1> strigazi: if so, there are probably some regression issues
10:02:20 <flwang1> we may need to review the changes between 1.11.1 and 1.11.0
10:02:24 <flwang1> o/
10:02:28 <strigazi> o/
10:03:39 <strigazi> #topic Blueprints/Bugs/Ideas
10:03:42 <flwang1> strigazi: seems it's always only you and me in this meeting
10:04:06 <strigazi> #topic meeting time
10:04:07 <flwang1> maybe we can merge the two meetings
10:04:35 <slunkad> hi
10:05:01 <strigazi> I could do late night for me, early for you, and normal for blizzard?
10:05:23 <strigazi> slunkad: hello
10:06:23 <strigazi> flwang1: 2100 UTC ?
10:07:11 <flwang1> strigazi: 2100 UTC means 10AM, IIRC
10:07:14 <flwang1> for NZ time
10:07:57 <strigazi> 1400 for west coast
10:08:14 <strigazi> 0900 for NZ
10:08:30 <strigazi> 2300 Europe
10:09:53 <strigazi> flwang1: thoughts?
10:10:02 <flwang1> works for me
10:10:09 <strigazi> for europe I'm present in working hours
10:10:18 <strigazi> I can setup something like office hours
10:10:32 <strigazi> Tuesday mornings
10:10:32 <flwang1> strigazi: that would be nice
10:10:52 <strigazi> So if someone wants something they can find me for sure.
10:11:11 <strigazi> IMO, for our case we should push things to the ML
10:11:49 <strigazi> slunkad: what do you think about office hours?
10:11:54 <flwang1> totally agree, we can put more of our discussion in the ML
10:12:21 <strigazi> it is like a meeting but without minutes. it is still logged.
10:12:34 <slunkad> strigazi: what do you mean like setting it up as your status on irc?
10:13:48 <strigazi> no, on the wiki page I'll post that at this time and day someone will be on the channel
10:14:02 <strigazi> well for europe me
10:14:08 <slunkad> ok ya that sounds good
10:15:21 <flwang1> strigazi: i can cover the NZ/AP time if it's helpful
10:15:52 <strigazi> ok then, Tuesdays at 1300 UTC for me, europe and maybe east coast
10:16:06 <strigazi> flwang1: for you?
10:16:33 <strigazi> you can pick a time that you are online and it is daytime :)
10:16:43 <flwang1> strigazi: yep
10:16:52 <slunkad> does that mean the thursday meeting will not happen?
10:16:56 <flwang1> probably my Thursday morning
10:17:50 <strigazi> we can move the meeting on Tuesdays 2100 or 2200 UTC
10:18:50 <strigazi> 2100 UTC is a go?
10:19:24 <strigazi> flwang1: ^^
10:19:35 <flwang1> strigazi: works for me
10:20:42 <strigazi> #agreed meeting moves to Tuesdays 2100 UTC
10:20:48 <strigazi> We can do it today
10:20:56 <strigazi> tmr for you flwang1
10:21:18 <flwang1> strigazi: sure
10:22:13 <strigazi> Next week I'll be on holidays. We can still have a meeting and flwang1 chairs it?
10:22:25 <flwang1> strigazi: no problem
10:22:46 <flwang1> i will call you if there is a question i can't answer
10:22:55 <strigazi> :)
10:23:24 <strigazi> office hours Tuesdays at 1300 UTC for me
10:24:15 <strigazi> flwang1: do you want to set office hours?
10:24:38 <flwang1> strigazi: let me check the UTC time of mine
10:26:33 <flwang1> Wed UTC 10:00PM - 11:00PM
10:26:56 <strigazi> pm?
10:27:51 <flwang1> PM means my AM, and it may work for others, like afternoon
10:27:54 <flwang1> i don't know
10:28:11 <flwang1> or i can put mine later
10:28:12 <strigazi> oh, so 2200 UTC, ok
10:28:29 <strigazi> sounds good
10:29:57 <strigazi> #agreed office hours: Tuesdays at 1300 UTC for strigazi and Wednesdays at 2200 UTC for flwang1
10:30:14 <flwang1> cool
10:30:35 <strigazi> #topic Blueprints/Bugs/Ideas
10:31:34 <strigazi> For me, I'll push to finish the upgrade API to have it in Rocky, server and client by Friday. flwang1 I'll need your help for reviews.
10:31:58 <flwang1> strigazi: no problem, i'm keen to review it
10:33:05 <strigazi> The implementation will do in-place upgrades; I haven't managed to do the replace with draining.
10:34:12 <strigazi> And secondly, I'll investigate the issue with kube v1.11.1
10:34:41 <strigazi> v1.11.0 and v1.11.1 work at the CERN cloud and they pass the conformance tests.
10:35:08 <strigazi> v1.11.0 works on devstack, but v1.11.1 doesn't
10:35:30 <strigazi> There is an issue with RBAC or certs
10:35:51 <strigazi> I might try a devstack with designate enabled.
10:36:35 <strigazi> The only big difference is that in production we have DNS and authentication is done with the node names.
10:36:47 <strigazi> node name == hostname
10:37:02 <strigazi> kube node name == hostname == nova vm name
10:37:43 <flwang1> that's possible
10:37:54 <strigazi> what is possible?
10:38:04 <flwang1> i mean maybe related to DNS
10:38:25 <strigazi> that it is for the millionth time DNS? :)
10:38:42 <flwang1> no, maybe related to the hostname or something
10:38:50 <strigazi> http://i.imgur.com/eAwdKEC.png
10:39:00 <strigazi> I couldn't resist
10:39:45 <strigazi> I'm investigating, it is a good dive into how auth works in k8s
10:39:52 <flwang1> i like the painting, typical chinese painting
10:40:24 <flwang1> strigazi: yep, as for auth, did you ever put some effort on the best practice of k8s security?
10:40:38 <flwang1> since I didn't see there is a sig-security in k8s community
10:40:51 <flwang1> so i'm wondering if there is team caring about it
10:41:17 <strigazi> AFAIK we are doing the best possible. apart from selinux...
10:41:58 <strigazi> after adding calico we covered the policy part too.
10:41:58 <flwang1> selinux is another topic, i don't think we still need to disable it, right?
10:42:12 <strigazi> we could have a label
10:42:27 <strigazi> if selinux is on
10:42:46 <strigazi> users will need to modify their templates with the appropriate labeling
10:43:10 <strigazi> the security-context in the pod spec
10:44:27 <strigazi> https://kubernetes.io/docs/setup/independent/install-kubeadm/
10:44:32 <strigazi> Disabling SELinux by running setenforce 0 is required to allow containers to access the host filesystem, which is required by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
10:45:04 <strigazi> I have bypassed all the issues with selinux on, but I'm not very confident about having it on
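A minimal sketch of the pod-spec labeling discussed above, assuming the kubernetes Python client; the SELinux type and MCS level here are illustrative placeholders, not recommended values:

```python
# Sketch: a pod template with an explicit SELinux security-context, which
# is what users would need to add if SELinux stays enforcing.
from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="selinux-demo"),
    spec=client.V1PodSpec(
        security_context=client.V1PodSecurityContext(
            se_linux_options=client.V1SELinuxOptions(
                type="container_t",    # illustrative SELinux domain
                level="s0:c123,c456",  # illustrative MCS level
            )
        ),
        containers=[client.V1Container(name="app", image="nginx:1.25")],
    ),
)
```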
10:46:11 <flwang1> strigazi: ok, is kubeadm doing the same?
10:46:16 <strigazi> yeap
10:46:38 <strigazi> but it works with selinux on
10:47:04 <strigazi> we can follow this offline with help from #fedora
10:48:22 <flwang1> strigazi: ok, cool
10:48:30 <strigazi> To conclude, the clusters in queens and master are secure. After that, it is on the cluster admin to deploy apps securely.
10:49:37 <strigazi> that is it from me.
10:49:42 <flwang1> cool
10:49:53 <sfilatov_> I have a question: how do we cope with upgrades in case one of the k8s manifests needs to be changed?
10:50:32 <sfilatov_> For example when I upgraded k8s from 1.9.8 to 1.10 I faced an issue with grafana manifest
10:50:53 <sfilatov_> image version for grafana needs to be bumped
10:51:17 <sfilatov_> Will the new upgrade logic allow us to upgrade this version?
10:51:25 <strigazi> versions are easy to bump
10:51:48 <strigazi> wherever we have a label with the tag
10:51:55 <flwang1> strigazi: as long as we pass it as a label i think
10:52:15 <strigazi> the agent will run apply and bump it.
10:52:18 <flwang1> but for grafana, IIRC, we hardcode the version
10:52:32 <strigazi> for grafana yes
10:52:43 <sfilatov_> So upgrade should support versions?
10:52:53 <sfilatov_> i mean labels*
10:52:56 <strigazi> yes
10:53:04 <strigazi> since kube_tag is a label
10:53:36 <strigazi> the same applies for all
10:54:34 <sfilatov_> So the idea is to pass all the needed labels
10:54:42 <sfilatov_> openstack coe cluster upgrade <cluster name or id> \
10:54:42 <sfilatov_> --masters \
10:54:42 <sfilatov_> --rollback \
10:54:44 <sfilatov_> --batch-size <size> \
10:54:46 <sfilatov_> --parameters key1=val1,...,keyN=valN
10:54:50 <sfilatov_> like in this spec?
10:54:55 <strigazi> sfilatov_: can you push a patch for the missing ones?
10:55:16 <sfilatov_> yes, I can do that
10:55:41 <strigazi> coredns and the monitoring ones
10:55:50 <sfilatov_> OK
10:56:22 <sfilatov_> Also, could you tell us the status of cluster_healing and nodepools?
10:56:54 <sfilatov_> Looks like they are on hold
10:57:09 <sfilatov_> Are there any plans on getting back to them?
10:57:13 <strigazi> we can have the monitoring in (which is required for healing)
10:57:49 <strigazi> nodegroups won't make it in Rocky
10:58:15 <strigazi> for autohealing we can have it
10:58:32 <sfilatov_> didn't quite get it
10:58:44 <sfilatov_> we are talking about heat autohealing, right?
10:58:50 <strigazi> no
10:59:08 <strigazi> magnum will monitor the cluster nodes
10:59:33 <sfilatov_> is there a patch for this?
10:59:35 <strigazi> this is the 1st part, it will basically ask the api if the nodes are ok.
10:59:38 <sfilatov_> I haven't seen it
10:59:47 <strigazi> flwang1: has one
10:59:48 <sfilatov_> Kubernetes API?
10:59:51 <strigazi> yes
11:00:06 <flwang1> sfilatov_: wait a sec
11:00:19 <flwang1> https://review.openstack.org/570818
11:00:26 <flwang1> https://review.openstack.org/572897
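A hedged sketch of the "ask the API if the nodes are ok" check behind the patches above, assuming the kubernetes Python client; the actual code in those reviews may differ:

```python
# Sketch: list the cluster's nodes and report any whose Ready condition
# is missing or not True.
from kubernetes import client, config

def unhealthy_nodes():
    config.load_kube_config()  # or load_incluster_config() in-cluster
    bad = []
    for node in client.CoreV1Api().list_node().items:
        ready = next((c for c in (node.status.conditions or [])
                      if c.type == "Ready"), None)
        if ready is None or ready.status != "True":
            bad.append(node.metadata.name)
    return bad
```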
11:00:27 <sfilatov_> I guess it's okay for the first part, but generally would be better to get notifications from it
11:00:38 <sfilatov_> flwang1: thx!
11:00:53 <sfilatov_> strigazi: >nodegroups won't make it in Rocky
11:00:57 <flwang1> sfilatov_: no problem
11:00:57 <strigazi> sfilatov_: openstack notifications?
11:01:11 <strigazi> sfilatov_: user notifications?
11:01:27 <strigazi> these are veeery different things.
11:01:37 <flwang1> sfilatov_: i think magnum can send openstack notifications out for the status change
11:01:54 <strigazi> flwang1: ^^ this is easy and very doable
11:02:05 <flwang1> strigazi: yep
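A sketch of what emitting such a status-change notification could look like with oslo.messaging; the event type and payload keys are hypothetical, not magnum's actual schema:

```python
# Sketch: publish an openstack notification when cluster health changes.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_notification_transport(cfg.CONF)
notifier = oslo_messaging.Notifier(transport,
                                   publisher_id="magnum.conductor")

# hypothetical event type and payload, for illustration only
notifier.info({}, "magnum.cluster.health_status",
              {"cluster_id": "<uuid>", "status": "UNHEALTHY"})
```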
11:02:54 <sfilatov_> we have a lot of issues
11:02:55 <strigazi> sfilatov_: we have someone who just started working again on nodegroups, but there is space for more people. In any case nodegroups will be in the next release (Stein)
11:03:04 <strigazi> sfilatov_: what issues?
11:03:12 <sfilatov_> when K8s api is not available
11:03:16 <sfilatov_> for some reason
11:03:43 <sfilatov_> that's why I like it better when nodes send notifications
11:03:56 <sfilatov_> but I guess we can talk about it in patch set
11:04:04 <strigazi> we can ask kubelet
11:04:29 <strigazi> curl https://$KUBELET_IP:10250/healthz == 'ok'
11:04:36 <sfilatov_> yeah
11:04:43 <strigazi> polling
11:04:44 <sfilatov_> I'll check the patches then
11:04:47 <strigazi> not pushing
11:04:51 <sfilatov_> and will comment it offline
11:05:04 <strigazi> ok
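The curl check above, sketched as a polling helper in Python with requests; the client certificate and CA paths are placeholder assumptions:

```python
# Sketch: poll (not push) the kubelet healthz endpoint and treat a plain
# 'ok' body as healthy, as in the curl one-liner above.
import requests

def kubelet_healthy(kubelet_ip,
                    cert=("client.crt", "client.key"),  # placeholder paths
                    ca="ca.crt"):                       # placeholder path
    try:
        resp = requests.get(f"https://{kubelet_ip}:10250/healthz",
                            cert=cert, verify=ca, timeout=5)
        return resp.status_code == 200 and resp.text == "ok"
    except requests.RequestException:
        return False
```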
11:05:05 <sfilatov_> strigazi: And if you plan on adding push-upgrades
11:05:15 <sfilatov_> how do you implement master/minions?
11:05:33 <sfilatov_> I mean I thought you need some kind of nodepools for that
11:05:36 <strigazi> I'll ping you in the review.
11:05:44 <sfilatov_> thx
11:05:53 <strigazi> we have two now, master and minion
11:06:15 <strigazi> Let's wrap them. I'll send a summary in the ML and cc you
11:06:23 <flwang1> strigazi: thanks
11:06:24 <strigazi> We need to use the ML
11:06:29 <sfilatov_> thx
11:06:42 <strigazi> before closing
11:06:50 <flwang1> currently, i just finished the multi-region issue and am mainly focusing on our magnum deployment
11:07:09 <strigazi> sfilatov_ flwang1 slunkad: if you have time, test v1.11.0, it seems to work well
11:07:22 <strigazi> I'll continue on v1.11.1
11:07:27 <flwang1> strigazi: it's on my list now
11:07:30 <strigazi> flwang1: great news \o/
11:07:49 <flwang1> strigazi: yep, i'm so excited about that
11:08:10 <strigazi> we are going multiregion here too, we'll need to talk
11:08:40 <strigazi> let's wrap, we are 9 mins late
11:08:44 <strigazi> ok?
11:09:15 <strigazi> said once
11:09:43 <strigazi> said twice
11:09:59 <strigazi> Thanks for joining folks!
11:10:03 <strigazi> #endmeeting