17:00:44 <strigazi> #startmeeting containers
17:00:45 <openstack> Meeting started Thu Jun 28 17:00:44 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:46 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:48 <openstack> The meeting name has been set to 'containers'
17:00:53 <strigazi> #topic Roll Call
17:01:00 <strigazi> o/
17:01:00 <imdigitaljim> o/
17:01:14 <flwang1> o/
17:01:18 <colin-> hello
17:01:22 <jslater> o/
17:01:38 <colin-> \o
17:02:26 <strigazi> #topic Review Action Items
17:02:58 <strigazi> strigazi to summarize the meeting in a ML mail DONE
17:04:16 <strigazi> #link http://lists.openstack.org/pipermail/openstack-dev/2018-June/131769.html
17:05:08 <strigazi> I had two more, to create stories but I wanted to confirm before creating noise in the already heavy storyboard
17:05:34 <strigazi> We can discuss them in:
17:05:45 <strigazi> #topic Blueprints/Bugs/Ideas
17:06:15 <strigazi> The first one is pretty clear and it is about moving kubelet/proxy on all nodes.
17:06:46 <flwang1> s/moving/having ?
17:06:47 <strigazi> We had a quick chat with imdigitaljim. I propose to run the minion configuration on all nodes
17:06:59 <strigazi> yes s/moving/having/
17:07:09 <imdigitaljim> im gonna post a refactor draft here soon on the k8s bootstrapping files, please discuss changes on it. =) if we like it ill continue forward and validate/enhance it.
17:07:30 <imdigitaljim> quick fix minion configuration sounds good
17:07:40 <imdigitaljim> longterm needs to be cleaner
17:08:07 <strigazi> what does quickfix mean?
17:08:19 <strigazi> Which changes?
17:08:46 <flwang1> i would like to postpone the change to Stein
17:08:50 <flwang1> not in Rocky
17:09:10 <imdigitaljim> thats fine, its just for visibility, doesnt have to be accepted
17:09:11 <strigazi> flwang1: So only small patches?
17:09:40 <flwang1> it's milestone 3 now
17:09:55 <flwang1> i'm not sure if we have enough time to test every corner
17:10:30 <strigazi> ok
17:11:10 <imdigitaljim> adding the minion configuration to master should work though, lets try it out
17:12:40 <flwang1> i just wanna see a stable/rocky
17:12:48 <strigazi> The current situation where we configure kubelet on the master is a bit complicated. If the change is not big we can go for it
17:13:11 <flwang1> besides, k8s 1.11 just released, we may need to bump the k8s version in rocky
17:13:43 <strigazi> flwang1: this is irrelevant
17:13:59 <flwang1> strigazi: if there is a patch in next 2 weeks, i'm happy to review
17:14:24 <strigazi> changing the kube version is easy and we can't make decisions based on that
17:14:38 <flwang1> yep, 1.11 is not related, just thinking aloud so i don't forget
17:14:39 <strigazi> we did that in the past, config should be generic enough
17:15:37 <strigazi> ok, let's see a patch and we decide in the coming weeks.
17:15:38 <flwang1> Jul 23 - Jul 27 (R-5): Rocky-3 milestone
17:15:49 <strigazi> two weeks
17:15:50 <flwang1> we have about 4 weeks for R-3
17:16:22 <flwang1> or for the worst case, if we can have a patch in good shape by the end of july, i'm happy
17:16:42 <strigazi> The next change would be upgrading the heat templates to pike
17:17:47 <strigazi> how should we make this decision?
17:17:59 <strigazi> flwang1: imdigitaljim what heat version are you running?
17:18:20 <flwang1> strigazi: we will be on queens shortly
17:18:33 <strigazi> should we fork to _v2 drivers for this change?
17:19:07 <flwang1> strigazi: that would be better to physically isolate the changes
17:19:11 <flwang1> so yes
17:19:13 <imdigitaljim> we're still using the juno ones shipped with magnum
17:19:37 <imdigitaljim> a v2 would be good for stability/going forward as well
17:20:10 <strigazi> I'm in favor of v2 but keep the scripts common, thoughts?
17:20:59 <colin-> yeah that sounds reasonable
17:21:05 <flwang1> scripts is fine
17:21:24 <strigazi> ok then
17:21:25 <flwang1> and if we do that, i would suggest reconsidering the folder structure
17:21:45 <strigazi> flwang1: what do you mean?
17:22:02 <strigazi> not k8s_fedora_atomic_v2?
17:23:07 <flwang1> i'm happy with k8s_fedora_atomic_v2
17:23:26 <strigazi> ok, one moment
17:23:31 <cbrumm> would moving to a self hosted control plane be possible in a v2 driver?
17:23:48 <imdigitaljim> ^
17:24:05 <flwang1> i'm talking about magnum/drivers/common/templates/kubernetes/fragments
17:24:17 <flwang1> it's a bit weird for me though, i haven't got a better idea yet
17:24:31 <imdigitaljim> if you're not familiar with self-hosted
17:24:31 <imdigitaljim> https://coreos.com/blog/self-hosted-kubernetes.html
17:24:38 <imdigitaljim> heres a decent doc to explain
17:24:46 <imdigitaljim> kubernetes is moving towards self-hosted model
17:25:15 <imdigitaljim> but in other words your control plane containers run in k8s instead of as systemd
17:25:16 <strigazi> #link https://storyboard.openstack.org/#!/story/2002750
17:25:27 <imdigitaljim> just the kubelet would be systemd
17:25:57 <strigazi> we could do that, but from experience the benefit is not great
17:26:02 <flwang1> imdigitaljim: we did discuss that approach
17:26:40 <flwang1> and we even want to drop master nodes from the user's tenant, no matter whether they run in a vm or a container
17:27:01 <colin-> for my part i've enjoyed being able to interact with all components of the control plane via the API, much in the same pattern the project encourages elsewhere for self-managing resources
17:27:17 <flwang1> imdigitaljim: and i agree with strigazi, we need to understand the pros and cons
17:27:28 <strigazi> colin-: I agree on it, it is much easier
17:27:29 <strigazi> but
17:28:17 <strigazi> streamlining the process by which we deliver kubelet is more important, and if we do it for kubelet we can do it for all components
17:28:22 <imdigitaljim> control plane updates in place, improved HA, better introspection on the cluster,
17:28:30 <colin-> ok
17:28:58 <imdigitaljim> control plane access through k8s API
17:29:04 <cbrumm> I'm sure a doc could be put together to help weigh the idea
17:29:16 <imdigitaljim> #link https://coreos.com/blog/self-hosted-kubernetes.html
17:29:31 <strigazi> imdigitaljim: colin- cbrumm we had this actually before.
17:30:07 <strigazi> The reason we shifted away from it was that the way to deliver the same kubernetes software was fragmented.
17:31:27 <strigazi> I'm not against it in principle, but at the moment it would complicate things
17:31:32 <cbrumm> ok, I just figured that if a v2 was being considered it might be a good time to bring the control plane in line with kubernetes current recommendations. Use kubeadm etc
17:32:06 <cbrumm> but it's not small consideration for sure
17:32:18 <flwang1> those kinds of things really need specs
17:32:19 <strigazi> self-hosted kubernetes would certainly be done with kubeadm.
17:32:21 <imdigitaljim> i think it could cut our bootstrapping code into about 1/4 of what it is
17:32:23 <flwang1> we can't make decision here
17:32:45 <cbrumm> flwang1 agreed
17:32:51 <imdigitaljim> ^
17:34:03 <strigazi> so, we need a detailed proposal and target it to S
17:34:15 <cbrumm> that sounds good
17:35:25 <cbrumm> I can't commit that we'll get to that immediately, but if we can't then it's only our fault if it slips
17:35:53 <strigazi> #link https://storyboard.openstack.org/#!/story/2002751
17:36:13 <cbrumm> thanks strigazi
17:36:41 <openstackgerrit> Jim Bach proposed openstack/magnum master: Kubernetes Bootrapping IDEAS (not intended as PR)  https://review.openstack.org/578945
17:38:15 <strigazi> next, I have one thing we need for Rocky and discovered this week
17:39:15 <strigazi> While doing some benchmarks with spark and kubernetes, we found out that the way magnum interacts with heat actually hurts heat a lot.
17:39:24 <imdigitaljim> oh?
17:39:28 <imdigitaljim> what does it do?
17:39:48 <strigazi> In magnum, we don't have locking between the conductors
17:40:18 <strigazi> and when the conductor(s) try to sync the status with heat they do a stack get.
17:40:53 <strigazi> When a stack is CREATE|UPDATE_IN_PROGRESS we only need the status
17:41:08 <strigazi> not the outputs of the stack or any metadata.
17:41:27 <cbrumm> Yeah that sounds like overkill
17:41:33 <strigazi> For large stacks, evaluating the stack outputs is expensive
17:41:37 <colin-> have seen similar patterns in other projects
17:42:01 <colin-> is there a better query?
17:42:07 <strigazi> and when many processes hit the heat api and queue requests, rabbit gets overloaded and eventually heat goes down
17:42:25 <strigazi> The better query is a stack list with a filter on one id
17:42:25 <imdigitaljim> that sounds like a great patch to get done
17:42:26 <flwang1> strigazi: can we use stack list instead of get?
17:42:35 <flwang1> and then matching the id in magnum
17:42:54 <strigazi> filters={"id": some_stack_id}
17:43:08 <colin-> nice
17:43:24 <flwang1> strigazi: can the filters accept a list of ids?
17:43:29 <strigazi> a list takes a second, a get takes a minute for a cluster with 200 nodes
17:43:38 <strigazi> yeap tested it
17:43:41 <flwang1> but anyway, we should be able to do that
17:43:52 <flwang1> strigazi: cool, have a patch already?
17:43:57 <flwang1> i'm keen to review it
17:44:02 <strigazi> I'll push a patch I have it almost ready
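A minimal sketch of the cheaper status poll described above, assuming python-heatclient's stacks.list() filters kwarg; the session object and names here are illustrative, not the actual magnum patch:

    from heatclient import client as heat_client

    # 'sess' is an existing keystoneauth1 session (assumed to exist).
    heat = heat_client.Client('1', session=sess)

    def poll_stack_status(stack_id):
        # A filtered stack list returns only summary fields, so heat does not
        # resolve the stack outputs; a stack get on a large stack does, which
        # is what makes it slow.
        stacks = list(heat.stacks.list(filters={'id': stack_id}))
        return stacks[0].stack_status if stacks else None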
17:44:05 <imdigitaljim> another issue with heat/magnum cleanup: if the cloud-controller-manager allocates something like a loadbalancer and the cluster is deleted, the LB is dangling and not cleaned up because neither heat nor magnum knows about it as it stands
17:44:19 <strigazi> There is a patch in review
17:44:36 <strigazi> Can you dig it? I can send it to you too
17:45:18 <flwang1> strigazi: the heat issue? sure
17:45:30 <strigazi> flwang1: the lbaas clean up
17:45:50 <strigazi> for the heat issue I'll push it
17:45:55 <flwang1> in our testing, we can see the lb is deleted successfully
17:46:08 <imdigitaljim> is this allocated through cloud-controller-manager?
17:46:11 <strigazi> for lbaas (created by the cloud provider)
17:46:13 <flwang1> without cloud-controller-manager
17:46:14 <imdigitaljim> using kubernetes
17:46:20 <strigazi> there is a patch
17:46:20 <imdigitaljim> yeah the normal LB's allocated by heat are fine
17:46:29 <strigazi> give me a sec
17:46:35 <flwang1> we haven't tried that way
17:46:48 <flwang1> i'm happy to test and fix it
17:46:50 <imdigitaljim> in other words, kubernetes users need to use load balancers
17:46:57 <imdigitaljim> kubernetes talks to neutron to get a LB
17:46:58 <flwang1> imdigitaljim: i see
17:47:07 <flwang1> it's for service LB not masters LB
17:47:08 <strigazi> #link https://review.openstack.org/#/c/497144/
17:47:11 <imdigitaljim> neutron/octavia
17:47:14 <imdigitaljim> correct
17:47:33 <imdigitaljim> im not sure what all we can/should do to rectify it, we might want to setup a BP for that
17:47:47 <flwang1> strigazi: thanks, will dig it
17:48:11 <strigazi> it's the linked one, have a look
17:48:16 <imdigitaljim> oh yes
17:48:26 <imdigitaljim> strigazi: ill review this https://review.openstack.org/#/c/497144/3/magnum/drivers/heat/driver.py
17:48:38 <imdigitaljim> err https://review.openstack.org/#/c/497144/
17:48:48 <strigazi> imdigitaljim: thanks
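A rough sketch, using the kubernetes Python client, of one way to release load balancers created by the cloud provider before the cluster is torn down; this is illustrative only and not necessarily what the linked review implements:

    from kubernetes import client, config

    def delete_loadbalancer_services(kubeconfig_path):
        # Deleting Service objects of type LoadBalancer while the
        # cloud-controller-manager is still running lets it release the
        # Neutron/Octavia load balancers it allocated for them.
        config.load_kube_config(config_file=kubeconfig_path)
        v1 = client.CoreV1Api()
        for svc in v1.list_service_for_all_namespaces().items:
            if svc.spec.type == 'LoadBalancer':
                v1.delete_namespaced_service(svc.metadata.name,
                                             svc.metadata.namespace)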
17:48:55 <strigazi> and quickly one more
17:48:58 <imdigitaljim> great callout for it =)
17:49:10 <strigazi> The keypair issue:
17:49:47 <strigazi> #link https://storyboard.openstack.org/#!/story/2002648
17:50:19 <strigazi> if you offer magnum to others and they want you to do things for them it's pretty important
17:50:37 <strigazi> The idea is to create a keypair per stack
17:51:36 <strigazi> So when a client wants you to scale or fix their stack with stack update you should be able to do it.
17:52:25 <strigazi> I'll push a patch for it as well.
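One possible shape of the keypair-per-stack idea, sketched with openstacksdk; the helper name is hypothetical and the eventual patch may well create the keypair as a Heat resource instead:

    import openstack

    def keypair_for_stack(stack_name):
        # A keypair dedicated to the stack means scaling or fixing a client's
        # cluster with a stack update does not depend on the personal keypair
        # of the user who created it.
        conn = openstack.connect()  # credentials from env vars / clouds.yaml
        kp = conn.compute.create_keypair(name='%s-keypair' % stack_name)
        return kp.name, kp.private_key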
17:53:06 <strigazi> Apologies for this meeting folks, we can try to make the next one more structured.
17:54:05 <colin-> no complaints here :)
17:54:10 <imdigitaljim> no complaints here as well
17:54:51 <strigazi> :)
17:55:24 <strigazi> Do you need to bring up anything else?
17:56:03 <flwang1> i need testing from imdigitaljim for https://review.openstack.org/576029
17:56:23 <flwang1> or any other bless for this one
17:56:26 <cbrumm> Nope, I think we've covered everything
17:56:59 <imdigitaljim> flwang: im on it by the end of this week
17:57:11 <imdigitaljim> i should post about it early next week
17:57:13 <flwang1> imdigitaljim: cool, thanks
17:57:28 <cbrumm> Thank you guys for hosting this again. It's great to get some solid time with you both
17:57:51 <strigazi> Let's meet in 6 days and 23 hours again :)
17:57:59 <cbrumm> sounds like a plan
17:58:33 <flwang1> good to discuss with you guys, very productive
17:58:58 <strigazi> cbrumm: imdigitaljim colin- flwang1 jslater thanks for joining, see you soon
17:59:09 <colin-> ttyl
17:59:14 <flwang1> ttyl
17:59:17 <cbrumm> Have a good evening/day
17:59:28 <strigazi> cheers!
17:59:32 <strigazi> #endmeeting