10:00:03 <strigazi> #startmeeting containers
10:00:04 <openstack> Meeting started Tue May  8 10:00:03 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:07 <openstack> The meeting name has been set to 'containers'
10:00:13 <strigazi> #topic Roll Call
10:00:31 <flwang1> o/
10:00:34 <ricolin> o/
10:00:37 <strigazi> o/
10:01:22 <slunkad> hi
10:01:45 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-05-08_1000_UTC
10:02:07 <strigazi> #topic Blueprints/Bugs/Ideas
10:02:39 <strigazi> Already started with flwang1 in the channel: Multi-master clusters without an API LBaaS loadbalancer
10:03:16 <strigazi> The idea is:
10:03:52 <strigazi> Let users create HA clusters without paying/using a loadbalancer
10:04:18 <strigazi> swarm clusters don't need LB for the masters to join the cluster
10:04:22 <strigazi> and in kubernetes
10:04:46 <strigazi> you can point flannel to many etcd servers
10:05:04 <strigazi> and kubelet/proxy to many kubernetes api servers
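    (A minimal sketch of the idea, with illustrative hostnames rather than the actual Magnum templates: flanneld accepts a comma-separated --etcd-endpoints list, and each kubelet/kube-proxy kubeconfig can point straight at one of the masters, so no LB sits in front of the API.)

        # flanneld can be pointed at every etcd server directly (hostnames illustrative):
        #   flanneld --etcd-endpoints=http://master-0:2379,http://master-1:2379,http://master-2:2379
        # kubelet/kube-proxy kubeconfig pointing straight at one master, no LB in front:
        apiVersion: v1
        kind: Config
        clusters:
        - name: local
          cluster:
            server: https://master-0:6443
        contexts:
        - name: default
          context:
            cluster: local
            user: kubelet
        current-context: default
        users:
        - name: kubelet
          user:
            client-certificate: /etc/kubernetes/ssl/kubelet.crt
            client-key: /etc/kubernetes/ssl/kubelet.key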
10:06:03 <flwang1> strigazi: i got your point, but does that mean the user should be fully aware of what he is doing?
10:06:31 <flwang1> as i asked above, if the server he was using is dead, how can he know what to do next?
10:07:19 <strigazi> advanced users will know how to point to a different master.
10:07:22 <strigazi> but
10:07:42 <strigazi> for convenience we can integrate this with magnum's monitoring
10:08:11 <flwang1> strigazi: the monitoring for master is on my list, btw
10:08:38 <flwang1> if we all agree the user understands what he/she is doing, then that's ok
10:09:48 <strigazi> for example, gke is giving clusters that are not accessible from the outside
10:10:02 <strigazi> these clusters are cheaper since they don't have public ips
10:10:33 <flwang1> strigazi: that's right
10:10:49 <strigazi> the same could apply for LBs
10:10:51 <flwang1> if the cluster is just for dev/test, ci/cd, then that's true
10:12:15 <strigazi> we can start doing this as soon as we have master monitoring?
10:12:55 <flwang1> sounds like a plan
10:13:51 <strigazi> ok, next
10:13:57 <strigazi> test eventlet 0.23
10:14:26 <strigazi> swift already found a breaking change with the new eventlet version:
10:14:32 <strigazi> #link https://bugs.launchpad.net/swift/+bug/1769749
10:14:33 <openstack> Launchpad bug 1769749 in OpenStack Object Storage (swift) "eventlet 0.23.0 breaks swift's unicode handling" [Undecided,Confirmed]
10:14:51 <strigazi> 0.23 is the version that has the fix for the kubernetes client
10:15:57 <flwang1> does that mean we still can't use that version?
10:16:01 <strigazi> I'm having a look into this; it is absolutely necessary for implementing the monitoring we want while using oslo.service
10:16:37 <strigazi> I think we can't use it if it is not accepted in o/requirements
10:16:47 <strigazi> ricolin: isn't it like this?
10:17:30 <ricolin> yes
10:18:03 <ricolin> if it's in the global requirements
10:19:49 <strigazi> we need to find one more fix for eventlet I guess...
10:20:29 <flwang1> strigazi: back to the original plan?
10:20:59 <strigazi> I can have a quick look today and see if it can be fixed for swift
10:21:10 <flwang1> cool
10:21:45 <strigazi> If not, back to the original plan I guess
10:22:10 <flwang1> :(
10:22:49 <strigazi> Next, for upgrades, I'm having some issues with the rebuild of the master server, hence the no-LB multi-master proposal
10:24:26 <strigazi> I'll push an apiref patch today for upgrades too
10:25:44 <strigazi> For reference this is the example request: http://paste.openstack.org/show/720555/
10:26:30 <strigazi> flwang1:  http://paste.openstack.org/show/720557/
10:26:43 <flwang1> strigazi: cool
10:26:47 <flwang1> will review it
10:26:58 <strigazi> that is all from me
10:28:27 <flwang1> i'm still working/testing the k8s-keystone-auth
10:28:42 <flwang1> hopefully i can submit another patch set today or tomorrow
10:28:54 <flwang1> no progress for the monitoring stuff
10:30:24 <flwang1> that's all from me
10:30:59 <strigazi> thanks
10:31:22 <strigazi> slunkad: do you have something to discuss?
10:31:24 <slunkad> I am testing ricolin's patch today and will post a review soon, and then will get started on the magnum patch for trusts
10:31:34 <strigazi> slunkad: cool
10:31:56 <ricolin> slunkad, cool
10:32:14 <strigazi> ricolin: since you are here, I have a question when slunkad is done
10:32:24 <slunkad> I am done
10:32:32 <strigazi> :)
10:33:33 <strigazi> ricolin: is it possible to make an OS::Nova::Server resource depend on a SoftwareDeployment resource?
10:34:13 <strigazi> And that software deployment to be applied to the same OS::Nova::Server
10:34:25 <ricolin> not if the SoftwareDeployment contains that nova server's info
10:34:33 <ricolin> why do you need this?
10:35:11 <strigazi> I want to drain a Nova server first and then delete it or rebuild it
10:35:52 <strigazi> I could run the drain command on the master node though...
10:36:00 <strigazi> So this is possible right:
10:36:42 <strigazi> apply a SoftwareDeployment to novaA and then replace/rebuild novaB
10:36:46 <strigazi> ricolin: ^^
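    (A minimal Heat sketch of the ordering strigazi describes, with illustrative resource and parameter names rather than the actual Magnum templates: the SoftwareDeployment runs on the first server, and the second server depends_on it, so it is only replaced/rebuilt after the drain finishes.)

        resources:
          drain_config:
            type: OS::Heat::SoftwareConfig
            properties:
              group: script
              config: |
                #!/bin/bash
                # drain the node that is about to be rebuilt (node name illustrative)
                kubectl drain kube-minion-0 --ignore-daemonsets
          drain_deployment:
            type: OS::Heat::SoftwareDeployment
            properties:
              config: {get_resource: drain_config}
              server: {get_resource: nova_a}      # the SD is applied to novaA
              actions: [CREATE, UPDATE]
          nova_a:
            type: OS::Nova::Server
            properties:
              image: {get_param: image_a}
              flavor: {get_param: flavor}
              user_data_format: SOFTWARE_CONFIG   # required for SoftwareDeployment signalling
          nova_b:
            type: OS::Nova::Server
            depends_on: drain_deployment          # replaced/rebuilt only after the drain SD finishes
            properties:
              image: {get_param: image_b}
              flavor: {get_param: flavor}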
10:37:47 <ricolin> I see
10:39:54 <strigazi> I'll try it
10:40:02 <ricolin> strigazi, I wonder how you can rebuild before the SD is created?
10:40:22 <ricolin> Is all this in the same stack creation process?
10:40:33 <strigazi> yes
10:40:55 <ricolin> and how do you rebuild it?
10:41:14 <strigazi> Passing a new glance image
10:41:55 <ricolin> with update stack?
10:41:59 <strigazi> yes
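    (A minimal sketch of the rebuild-via-stack-update being discussed, parameter names illustrative: with image_update_policy set to REBUILD, a stack update that passes a new Glance image rebuilds the existing server in place instead of replacing it.)

        parameters:
          master_image:
            type: string
        resources:
          master:
            type: OS::Nova::Server
            properties:
              image: {get_param: master_image}
              flavor: m1.small                # flavor illustrative
              image_update_policy: REBUILD    # a new image on stack update triggers a nova rebuild

    Something like `openstack stack update --existing --parameter master_image=<new-image> <stack>` would then rebuild the server with the new image.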
10:42:52 <ricolin> you can update with a new SD to replace the old one
10:43:20 <ricolin> so after the server is updated, the new SD will do the software deploy again
10:44:21 <strigazi> I want to run the first SD before the server update
10:44:32 <openstackgerrit> Feilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ  https://review.openstack.org/561783
10:44:50 <strigazi> flwang1: ++
10:45:33 <ricolin> strigazi, you can create an SD for the server and, when it's complete, use the second SD to update again
10:45:51 <ricolin> strigazi, have to check if SD is rebuild-aware
10:46:18 <strigazi> flwang1: maybe we need to go for an SD here:
10:46:25 * ricolin kind of forgot about that part
10:46:39 <strigazi> flwang1: http://paste.openstack.org/show/720559/
10:46:55 <strigazi> ricolin: thanks, I'll check
10:48:49 <strigazi> flwang1: did you see the paste?
10:48:51 <flwang1> clicking
10:49:37 <flwang1> yes
10:49:40 <flwang1> i saw that
10:49:55 <flwang1> we are very close to the limit, do you mean that?
10:50:02 <strigazi> yes
10:50:19 <strigazi> And this is already after this patch:
10:50:27 <strigazi> https://review.openstack.org/#/c/566533/
10:52:04 <flwang1> yep, it's on my list
10:52:35 <flwang1> i will review it today or tomorrow
10:52:35 <strigazi> it was |             63756 | kube-calico-b3b6jkuxsinl-master-0  |
10:52:46 <flwang1> what's our target?
10:52:55 <strigazi> and it failed on my devstack
10:53:03 <strigazi> flwang1: what do you mean?
10:54:47 <flwang1> i mean, is the target 10000 or 0?
10:55:06 <flwang1> or whatever feasible number ;)
10:55:34 <strigazi> 0 we can't do, if we cut it in half it should be ok
10:55:59 <strigazi> similar to my patch: https://review.openstack.org/#/c/561858/
10:56:49 <strigazi> Time is running out
10:56:49 <flwang1> great, it would be cool if we can set a goal
10:57:19 <strigazi> I'll rebase my patch and see which number makes sense
10:58:14 <flwang1> strigazi: nice, i will review it
10:58:22 <flwang1> i think that's it from us
10:59:03 <strigazi> Let's wrap, we can continue in the channel
10:59:22 <strigazi> Thanks for joining folks!
10:59:27 <strigazi> #endmeeting