10:00:22 #startmeeting containers
10:00:23 Meeting started Tue Feb 6 10:00:22 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:27 The meeting name has been set to 'containers'
10:00:32 #topic Roll Call
10:00:37 o/
10:00:42 o/
10:00:43 o/
10:00:43 hi
10:01:37 Thanks for joining the meeting folks
10:01:41 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-02-06_1000_UTC
10:01:53 #topic Announcements
10:02:34 this week on Thursday we will branch, so push your changes; we can still push to the branch afterwards, but it is better to push this week
10:03:11 we can discuss this later in the meeting if you have questions
10:03:17 #topic Review Action Items
10:03:34 strigazi to look for a new meeting time APAC friendly [DONE]
10:03:48 strigazi: thanks
10:04:21 strigazi, I think it's better to send a mail to the ML to let everyone know we changed the time and location
10:04:23 flwang1: you are welcome, it is great to have you and ricolin here
10:04:38 I sent the email for the time last week
10:04:48 no. yesterday
10:05:07 We'll send one more for the time
10:05:07 strigazi, cool!
10:05:13 We'll send one more for the place
10:05:18 lol
10:05:50 #topic Blueprints/Bugs/Reviews/Ideas
10:06:30 Calico network driver for kubernetes: flwang1, do you want to say a few things about this feature for ricolin, slunkad and the meeting record? :)
10:06:51 strigazi: yes
10:07:04 Catalyst Cloud is trying to deploy magnum into production
10:07:23 but to get a production-ready k8s, we would like to have network policy support
10:07:34 which Flannel currently doesn't support
10:07:59 so we'd like to upstream the Calico driver to achieve that
10:08:23 https://review.openstack.org/540352 here is the patch I'm working on
10:08:56 #link https://review.openstack.org/540352
10:09:10 we'd like to get it in Queens if it's possible
10:09:42 flwang1: we will try to get it in
10:09:54 strigazi: thank you for all the support
10:10:00 flwang1: do you need help with the software deployment?
10:10:09 ricolin and I can help
10:10:24 strigazi, yes boss!
10:10:46 strigazi: I will push a patch tomorrow; please review it and feel free to post a patch set
10:11:03 flwang1: cool
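For context, a brief illustrative sketch of what the network policy support discussed above buys. This is not taken from the patch under review; the policy and namespace are hypothetical, and it assumes the `kubernetes` Python client against a cluster whose network driver enforces NetworkPolicy objects (Calico does, Flannel does not):

```python
from kubernetes import client, config

# Hypothetical standalone example: load credentials from the local
# kubeconfig (inside a pod, load_incluster_config() would be used instead).
config.load_kube_config()

# A default-deny ingress policy: the empty pod selector matches every pod
# in the namespace, and listing "Ingress" with no ingress rules denies all
# inbound pod traffic. With Calico this is enforced; with Flannel the
# object is accepted by the API server but has no effect.
default_deny = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),
        policy_types=["Ingress"],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=default_deny)
```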
10:11:29 btw, here are some slides for best practices on security for kubernetes
10:11:42 #link https://fosdem.org/2018/schedule/event/containers_kubernetes_security/
10:11:50 #link https://speakerdeck.com/ianlewis/kubernetes-security-best-practices
10:12:51 strigazi: thanks for sharing
10:12:53 the two above are from a talk at a conference in Europe this weekend describing best practices on Kubernetes security; it covers RBAC, Calico and more
10:13:29 I also found a slide deck from a conference yesterday, again on Kubernetes security; I found it useful
10:13:33 strigazi: great
10:13:39 #link https://docs.google.com/presentation/d/e/2PACX-1vQwQkF4MjGebZoWqBaJ1F_Nf3HSYS-tjX13JMND0aJ92dXw1flSwAgTIoekHumUuX7LAgBkv3rQS-qp/pub?start=false&loop=false&delayms=3000&slide=id.SLIDES_API1559816053_0
10:14:58 speaking of the heat-agent, ricolin, what about that patch for passing the public_url?
10:16:02 it was stuck on the releases job, but I hope I can get it out in the next two days
10:16:32 ricolin the patchset that is up now, is it working?
10:16:58 strigazi, which one do you mean?
10:17:37 I thought you pushed a patch already
10:17:50 strigazi, I should have, but not yet
10:17:58 ricolin :) ok
10:18:26 It would help if we could use a k8s gate upstream
10:18:44 ricolin: what do you mean?
10:18:55 I think our k8s job should work now
10:19:07 strigazi, maybe we can plan to discuss at the PTG what we can do with magnum-functional-k8s?
10:19:33 ricolin there is nothing we can do on openstack-infra
10:20:12 they give us only m1.large VMs, but most importantly they don't give us nested virtualization
10:21:19 A reasonable environment needs to have 15GB RAM, 8 or 16 cores, at least 100GB disk and *nested* virtualization
10:21:52 multinode would work as well but we still need nested virtualization
10:22:02 strigazi, does the current infra even have any environment that can do nested virtualization?
10:22:11 ricolin: no
10:22:17 I asked multiple times
10:22:26 okay :(
10:23:02 I tried to do something in centos-ci but I got stuck and didn't have more time for it
10:23:25 let's move on
10:23:33 kubernetes python client and eventlet incompatibility https://github.com/eventlet/eventlet/issues/147
10:23:48 I'm repeating this, and I will add it to the release notes
10:24:13 the periodic task to collect metrics from the cluster is broken
10:24:29 and it also breaks the task that syncs the cluster status with heat
10:24:56 flwang1: this is the task (the metrics one) that we want to use for cluster healing
10:25:14 strigazi: ok
10:25:35 But most importantly, it breaks the sync with heat, so clusters are stuck in CREATE_IN_PROGRESS forever
10:26:09 we will have a parameter to disable that task, so magnum will continue to work normally
10:26:45 is there a patch for this already?
10:27:00 slunkad: for the parameter, yes
10:27:18 strigazi: is it only impacting master branch? or Pike as well?
10:27:56 can we bump the eventlet version?
10:28:31 flwang1: master only. Eventlet hasn't fixed the problem yet
10:29:11 flwang1: by disabling the task, magnum works fine. At the moment you only lose the send to ceilometer metrics part
10:29:16 strigazi: ok. can't we bump (skip) that eventlet version in requirements?
10:29:29 strigazi: i see.
10:29:46 the problem is in kubernetes 4.0.0
10:29:48 can you paste the link of the patch disabling the task?
10:30:16 #link https://review.openstack.org/#/c/529098/
10:30:38 strigazi: cheers
10:31:37 flwang1: oslo.messaging depends on eventlet
10:31:57 flwang1: kubernetes depends on multiprocessing
10:32:11 eventlet and multiprocessing are incompatible
10:32:29 ok, thanks for the clarification
10:33:00 we can use python requests directly, but that way we rewrite the kubernetes client
10:33:38 or we can use kubectl, the binary, but that is even worse
10:34:01 strigazi: TBH, I don't think magnum has to send metrics to ceilometer
10:34:11 it doesn't
10:34:37 I mean the function shouldn't even be a part of magnum ;)
10:34:50 history
10:34:56 history
10:35:31 in any case, we will need the client at some point for cluster healing and monitoring from the magnum server to the clusters
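A minimal standalone sketch of the incompatibility described above, not taken from the linked eventlet issue: it assumes only `eventlet` and the stdlib `multiprocessing` module. Once eventlet monkey-patches the standard library, multiprocessing's pipes and locks run on green primitives, which is how a conductor importing both oslo.messaging (pulls in eventlet) and the kubernetes client (uses multiprocessing) ends up broken:

```python
import eventlet
eventlet.monkey_patch()  # patches os, socket, select, thread, time, ...

import multiprocessing


def worker(queue):
    queue.put("done")


if __name__ == "__main__":
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=worker, args=(queue,))
    proc.start()
    # Under a monkey-patched interpreter this join/get can hang or raise,
    # depending on the platform and the eventlet version in use.
    proc.join()
    print(queue.get_nowait())
```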
10:35:56 next,
10:36:31 Cluster Federation https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/federation-api I'm testing this and I'll try to get it in as an experimental API
10:36:35 yep, agree
10:37:09 and I'll try to do the same with cluster upgrades
10:37:09 +1 on that
10:37:25 I'll ping you for testing
10:37:36 for upgrades mostly
10:37:50 I'm trying to clean my patches
10:38:07 Finally, to move to f27:
10:38:23 there is a patch to run etcd in a container:
10:38:41 #link https://review.openstack.org/#/c/524116/
10:38:54 I'm also finishing the patch for flanneld
10:39:06 f27 doesn't have flanneld and etcd installed
10:39:49 do you have any questions?
10:41:01 ok, let's move on
10:41:05 #topic Open Discussion
10:41:39 I'd like to get your opinion on this bug https://bugs.launchpad.net/magnum/+bug/1746961
10:41:40 Launchpad bug 1746961 in Magnum "Support enabled drivers" [Undecided,New] - Assigned to Feilong Wang (flwang)
10:42:14 as I mentioned above, Catalyst Cloud would like to deploy magnum, but we just want to support k8s for now
10:42:36 so we don't want to receive any support ticket about any other driver
10:42:52 so do we have a way to do that now?
10:43:34 at the moment, you can update the magnum python egg and remove the entrypoints
10:43:41 wondering, will it work if we replace the template? :)
10:43:55 ricolin: which template?
10:44:44 does it make sense to have a config option that disables drivers?
10:44:53 strigazi: which is hard for us because we're building virtualenvs for different services
10:45:11 slunkad: that's the thing I'm trying to propose
10:45:27 strigazi, I mean remove the templates under each driver
10:45:43 ricolin that is also possible,
10:45:58 flwang1: we can do this as well
10:46:14 ricolin: but that way we'd also need to change the source code
10:46:21 have enable_drivers or use the definitions
10:46:28 flwang1, yes
10:46:31 strigazi: yep, https://github.com/openstack/magnum/blob/master/magnum/conf/cluster.py#L25
10:46:57 should we reuse the existing config option or create a new one?
10:48:06 flwang1 let's do new one
10:48:12 IMO better a new one if that's for disabling drivers
10:48:24 flwang1: enabled_definitions is there but unused anyway
10:48:31 you will need to keep it backward compatible
10:48:34 no, enable drivers
10:48:49 so are you all happy to have a config option for this use case?
10:48:56 ricolin it is silently ignored already
10:49:11 ricolin: since ocata
10:49:20 strigazi, okay, then it will be fine
10:49:29 flwang1: yes, we can add enable_drivers
10:49:35 +1
10:49:41 strigazi: cool, thanks
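A hypothetical sketch of what the option agreed on above could look like; the name `enabled_drivers`, the `[drivers]` group and the filtering helper are all assumptions, since the actual patch for bug 1746961 had not been written at this point. Magnum loads cluster drivers through stevedore entry points, so a config filter on entry point names is one plausible shape:

```python
from oslo_config import cfg
from stevedore import extension

drivers_group = cfg.OptGroup(name='drivers', title='Cluster driver options')

# Hypothetical option: an empty list would mean "all installed drivers
# are enabled", preserving today's behaviour for existing deployments.
enabled_drivers_opt = cfg.ListOpt(
    'enabled_drivers',
    default=[],
    help='Entry point names of the cluster drivers an operator wants to '
         'expose, e.g. k8s_fedora_atomic_v1. Drivers that are installed '
         'but not listed here would be rejected.')

CONF = cfg.CONF
CONF.register_group(drivers_group)
CONF.register_opt(enabled_drivers_opt, group=drivers_group)


def load_enabled_drivers():
    """Load cluster drivers from entry points, filtered by config."""
    mgr = extension.ExtensionManager(namespace='magnum.drivers',
                                     invoke_on_load=False)
    enabled = CONF.drivers.enabled_drivers
    return {ext.name: ext.plugin for ext in mgr
            if not enabled or ext.name in enabled}
```

Compared with removing entry points from the egg or deleting driver templates, a config option needs no repackaging per deployment, which matches the virtualenv concern raised in the discussion.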
10:51:17 strigazi: this is building locally now https://review.openstack.org/#/c/520063/13
10:51:42 but I need some +1s on https://review.openstack.org/#/c/539619/ so it gets merged and we can test it in the gate
10:52:16 slunkad ok
10:52:29 also, if you have some time, can you give this https://review.openstack.org/#/c/507522/ a code review?
10:52:50 slunkad: I'll have a look
10:53:01 strigazi: and ofc we still need the kubernetes elements in the image..
10:53:11 thanks!
10:53:19 slunkad I know :)
10:54:05 slunkad: if you have a working image we can take https://review.openstack.org/#/c/507522/; you will be the ones mostly using it initially
10:55:14 strigazi: I do have a SLES-based image which I use for testing but which I can't make public
10:55:23 ok
10:56:09 strigazi: not sure if everyone is comfortable merging it for now without the image..
10:57:16 it's a start, I think if we build an image even locally and publish it, we can start
10:57:17 slunkad: I'm sorry, but if that's the case, do we really need to move the driver into the main tree?
10:58:15 strigazi: ok then I will try that, thanks
10:58:40 flwang1: we are working on an openSUSE-based image so we will have an image soonish
10:58:44 flwang1: with a working image published we can start
10:59:06 flwang1: with zero options available publicly we can't
10:59:41 strigazi: thanks, that makes sense to me
10:59:54 the time is up folks, thanks for coming
11:00:00 #endmeeting