10:00:17 <strigazi> #startmeeting containers
10:00:18 <openstack> Meeting started Tue Feb 13 10:00:17 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:21 <openstack> The meeting name has been set to 'containers'
10:00:24 <strigazi> #topic Roll Call
10:00:48 <flwang1> o/
10:00:59 <slunkad> hi
10:01:57 <strigazi> hello slunkad flwang1
10:02:08 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-02-13_1600_UTC
10:02:23 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-02-13_1000_UTC
10:02:26 <strigazi> new time
10:02:40 <strigazi> #topic Announcements
10:02:52 <strigazi> Branch stable/queens is cut
10:03:08 <strigazi> The final release is in two weeks
10:03:29 <flwang1> strigazi: do we still have chance to merge the calico driver ?
10:03:34 <strigazi> flwang1: yes
10:03:39 <flwang1> strigazi: cool
10:04:25 <strigazi> flwang1: we can add changes that don't change requirements and don't break the client, etc.
10:04:41 <flwang1> strigazi: got it
10:04:58 <strigazi> #topic Blueprints/Bugs/Ideas
10:06:04 <strigazi> Calico network driver for kubernetes https://review.openstack.org/#/c/540352/ flwang1 is there a verification example somewhere? We can add in the docs
10:06:32 <flwang1> strigazi: yep
10:06:52 <flwang1> just create a cluster template using 'calico' as the network driver
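For reference, a minimal sketch of that with the openstack CLI (the image, flavor and network names below are placeholders):

    openstack coe cluster template create k8s-calico \
        --coe kubernetes \
        --image fedora-atomic-27 \
        --external-network public \
        --flavor m1.small \
        --master-flavor m1.small \
        --network-driver calico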
10:07:19 <strigazi> cool, I'll test after the meeting. I mean test a network policy :)
10:07:30 <flwang1> strigazi: thanks a lot
10:07:52 <strigazi> we can check after
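To test a network policy as mentioned above, one option is a default-deny ingress policy; with calico it should actually block pod ingress, while a driver without policy support silently ignores it (namespace and policy name are arbitrary):

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: default
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    EOF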
10:07:59 <flwang1> strigazi: btw, we (catalyst cloud) are also upstreaming this one https://review.openstack.org/#/c/543265/
10:08:11 <flwang1> support using octavia as LB
10:08:36 <flwang1> given that neutron-lbaas is being deprecated, i think it makes a lot of sense
10:09:56 <strigazi> I thought that neutron lbaas was by default octavia v2
10:10:10 <armaan> flwang1: hi, will there be a migration path for moving lbaas v2 LBs to Octavia?
10:10:23 <strigazi> flwang1: Thanks, I'll take a look
10:10:29 <strigazi> armaan: it's the above patch
10:10:37 <strigazi> https://review.openstack.org/#/c/543265/
10:10:42 <strigazi> #link https://review.openstack.org/#/c/543265/
10:10:54 <flwang1> armaan: there is no migration path unfortunately in magnum
10:11:17 <flwang1> there is a new config option to indicate if you have octavia deployed
10:11:59 <armaan> flwang1: nice! last time the neutron team made a mess with lbaas v1 by providing no upgrade path; their response was that operators should delete the v1 lbs and create new ones, which was a nightmare
10:12:25 <strigazi> That's the closest possible option for a migration. New clusters will have octavia v2
10:12:36 <armaan> strigazi: thanks for sharing the link!
10:13:22 <strigazi> Next is, cluster federation
10:13:25 <armaan> okie, if we can migrate LBs outside of Magnum, then perhaps we can find a workaround for this.
10:13:52 <armaan> strigazi: Will it be possible to upgrade K8s in Queens?
10:14:21 <armaan> I mean live upgrading of K8s bits to a newer version
10:14:28 <strigazi> armaan: I'll try to add a first implementation
10:15:37 <strigazi> About cluster federation, https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/federation-api
10:15:47 <armaan> nice!
10:16:02 <armaan> btw what is your opinion on something like this https://review.openstack.org/#/c/459498/
10:16:03 <strigazi> The first patch is merged and the api layer is ready to take it in
10:16:54 <strigazi> armaan: we have kube_tag to set the tag for the kubernetes containers.
10:17:22 <strigazi> armaan: the api will still set the kube_tag  to a newer version
10:17:55 <armaan> Okay, i was not aware of this. Let me google it ...
10:19:08 <strigazi> flwang1: Is catalyst interested in cluster federation?
10:19:22 <flwang1> strigazi: not really at this moment
10:19:32 <flwang1> does GKE support that?
10:20:13 <strigazi> flwang1 it is possible to federate with a cluster in their cloud
10:20:59 <flwang1> if so, we may evaluate it later
10:21:03 <strigazi> flwang1: for marketing purposes I guess they don't offer it as an option
10:21:07 <flwang1> but not on our MVP list
10:22:34 <strigazi> ok, next
10:22:43 <strigazi> enable_drivers option https://review.openstack.org/#/c/541663
10:23:04 <strigazi> flwang1: we just need release notes, I will test it
10:23:28 <flwang1> strigazi: i can add a release note after the meeting
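A release note is normally added with reno; a rough sketch (the slug is just an example):

    # from the magnum repo root, with reno installed
    reno new enabled-drivers-option
    # then edit the generated releasenotes/notes/enabled-drivers-option-*.yaml
    # and add a short 'features' entry describing the new option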
10:24:09 <strigazi> flwang1: thanks
10:24:37 <strigazi> #topic Open Discussion
10:25:40 <strigazi> slunkad: flwang1 armaan do you want to discuss anything?
10:25:55 <slunkad> strigazi: do we know of any issues getting barbican to work as cert manager in magnum clusters?
10:26:16 <slunkad> I recently ran into a problem when trying it out
10:26:40 <strigazi> slunkad check your keystone_auth and keystone_authtoken sections in magnum.conf
10:27:10 <strigazi> slunkad: https://docs.openstack.org/magnum/latest/install/install-obs.html
10:27:18 <slunkad> I did, is there anything specific needed? the error I get is about domain not being found in project
10:27:20 <armaan> strigazi: In my testing setup, i observed that stable/pike is broken because of this https://bugs.launchpad.net/magnum/+bug/1744362
10:27:21 <strigazi> slunkad: keystone_authtoken
10:27:22 <openstack> Launchpad bug 1744362 in Magnum "heat-container-agent fails to communicate with Keystone." [Medium,Confirmed]
10:28:00 <slunkad> strigazi: is this also valid for newton?
10:28:06 <strigazi> slunkad: yes
10:28:14 <slunkad> strigazi: ok thanks will check then
10:28:30 <strigazi> slunkad: admin_user admin_password admin_tenant_name
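For context, those options sit in magnum.conf roughly like this on older releases (values are placeholders; newer releases use username/password/project_name plus auth_type instead of the admin_* options):

    [keystone_authtoken]
    # endpoint-related options (auth_uri / identity_uri / auth_url) vary by release
    admin_user = magnum
    admin_password = <service password>
    admin_tenant_name = service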
10:28:34 <flwang1> strigazi: is there any migration path from x509 to barbican?
10:28:40 <strigazi> flwang1: no
10:28:47 <slunkad> strigazi: yes
10:29:07 <flwang1> so if we deploy magnum firstly without barbican, how can we migrate to barbican later?
10:29:37 <strigazi> armaan: the bug is in heat :( AFAIK we can't do anything magnum-side
10:29:56 <strigazi> flwang1: there is no supported path to do that
10:30:06 <flwang1> strigazi: ok, i see
10:30:18 <strigazi> flwang1: you could import manually the certs, one by one
10:30:26 <strigazi> flwang1: in theory it can work
10:30:32 <flwang1> ok, i see.
10:30:54 <flwang1> i could upstream the 'magic script' later if we have to do that
10:31:23 <strigazi> take the ca from the db, use the trust_id and trustee user to import the certs to barbican
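A very rough sketch of that manual path, assuming the x509 cert manager keeps the PEM blobs in magnum's database (the table/column names and the trust-scoped auth below are assumptions, not a tested procedure):

    # pull the CA certificate for one cluster out of the magnum DB
    mysql magnum -N -e "SELECT certificate FROM x509keypair WHERE uuid='<ca_cert_ref>'" > ca.pem

    # store it in barbican as the cluster's trustee, scoped through the trust
    openstack --os-auth-url <keystone_url> \
        --os-username <trustee_user> --os-password <trustee_password> \
        --os-trust-id <trust_id> \
        secret store --name '<cluster_uuid>-ca' \
        --payload "$(cat ca.pem)" --payload-content-type text/plain

    # the cluster's cert references in the magnum DB would then need to be
    # repointed at the new barbican secret hrefs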
10:31:51 <flwang1> got it
10:32:07 <strigazi> armaan: exposing the admin or internal endpoint would work
10:32:28 <armaan> strigazi: thanks! I will take it to the #heat folks then
10:33:21 <strigazi> armaan: I don't know any security reason to not expose the admin and internal endpoints over ssl if you *already* expose the public one
10:33:57 <strigazi> armaan: most openstack services offer the same functionality with all endpoints
10:34:03 <armaan> ok, heat agent is currently disabled in our environment
10:34:39 <armaan> strigazi: AFAIK, we cannot expose our internal endpoints because of keystone
10:35:04 <strigazi> armaan: why not?
10:35:53 <armaan> no ssl on internal endpoints
10:35:53 <openstackgerrit> Ricardo Rocha proposed openstack/magnum master: [kubernetes] add ingress controller  https://review.openstack.org/528756
10:36:49 <strigazi> armaan: ok, I get this but you can do the same config for the internal one I guess
10:37:46 <strigazi> slunkad: Do you need anything else for the config?
10:38:12 <strigazi> slunkad: did you try again?
10:38:30 <slunkad> strigazi: I did but still seems to fail with the same error
10:38:49 <strigazi> and the trust section?
10:39:00 <strigazi> slunkad: does it have the required fields?
10:39:20 <armaan> strigazi: Yeah, we could do that. I will have a discussion about this with my team.
10:39:35 <slunkad> strigazi: seems so, let me try again
10:42:12 <openstackgerrit> Ricardo Rocha proposed openstack/magnum master: [k8s] allow enabling kubernetes cert manager api  https://review.openstack.org/529818
10:42:33 <strigazi> folks, is there anything else to discuss?
10:42:43 <fghaas> strigazi: yup :)
10:43:30 <fghaas> strigazi: I realize I might be barging in here so apologies for that, but armaan just asked me to join in here about https://review.openstack.org/#/c/459498/ (i.e. --coe-version vs. kube_tag)
10:44:53 <fghaas> If I understand you correctly, then you're suggesting for *operators* to select the kubernetes version they would like to enable users to deploy. That gerrit change, as I understand it, talks about enabling that choice for *users* though — a slightly different question.
10:45:34 <fghaas> So I guess our question is, do you have thoughts on pros and cons for enabling users to select a kubernetes release to deploy, via an API/CLI call?
10:46:36 <strigazi> fghaas: coe_version is meant to be used for the actual running version. kube_tag, on the other hand, is for the tag that the nodes are going to pull.
10:47:12 <strigazi> fghaas: users can select the version they want via the kube_tag label.
10:47:47 <strigazi> fghaas: users can use the cli eg with --labels kube_tag=v1.9.1
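Concretely, something like this on a cluster template (image, network and tag are just examples; where supported, the label can also be passed at cluster create time):

    openstack coe cluster template create k8s-v1-9-1 \
        --coe kubernetes \
        --image fedora-atomic-27 \
        --external-network public \
        --labels kube_tag=v1.9.1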
10:48:29 <fghaas> OK, perfect, and then `coe_version` (as a read-only attribute) would correctly return that?
10:48:51 <slunkad> strigazi: http://paste.openstack.org/show/670986/ fails with this config
10:48:53 <strigazi> fghaas: at the moment, it doesn't return the correct value
10:50:14 <strigazi> fghaas: since the heat-agent is added we can export the version from the node to heat and then to the cluster
10:50:37 <strigazi> fghaas: at an api level coe_version will be read-only. makes sense?
10:50:47 <strigazi> slunkad: [trust] section?
10:50:57 <fghaas> strigazi: I see; so that means we just need to educate users that only kubectl --version is authoritative. No worries, we can do that. And, using the --labels approach requires the heat agent. Did I parse that correctly?
10:51:33 <strigazi> fghaas yes, kubectl --version is the source of truth
10:51:52 <strigazi> fghaas: no, the heat-agent is not required for kube_tag
10:52:09 <strigazi> fghaas: kube_tag is used here:
10:52:37 <strigazi> fghaas: http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh#n8
10:52:56 <strigazi> fghaas: we're also missing an entry in the docs for it :(
10:53:03 <slunkad> strigazi: ah I think i might be missing the trustee_domain_admin_name, I'm using the trustee_domain_admin_id, checking now
10:53:59 <strigazi> fghaas: it's missing here http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/user/index.rst#n294
10:54:37 <fghaas> strigazi: Well *that* can be remedied. :)
10:55:18 <strigazi> slunkad: you need three entries in the trust section
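The three entries in question, roughly as in the install guide (values are placeholders; trustee_domain_id/trustee_domain_admin_id are the id-based alternatives slunkad mentions):

    [trust]
    trustee_domain_name = magnum
    trustee_domain_admin_name = magnum_domain_admin
    trustee_domain_admin_password = <password>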
10:55:28 <strigazi> fghaas: you work with armaan ?
10:55:40 <strigazi> fghaas: same team?
10:56:02 <fghaas> strigazi: no longer same team, but same company (since 2013 :) )
10:56:29 <strigazi> fghaas I meant same company, cool :)
10:57:29 <strigazi> time is almost up, we can wrap the meeting, if you need anything to be logged speak now :)
10:58:06 <strigazi> cool, thanks folks, see you next week
10:58:11 <strigazi> #endmeeting