21:01:21 #startmeeting containers
21:01:22 Meeting started Tue Mar 12 21:01:21 2019 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:25 The meeting name has been set to 'containers'
21:01:29 #topic Roll Call
21:01:41 o/
21:01:47 o/
21:01:48 Hi
21:02:08 o/
21:02:08 hello alisanhaji imdigitaljim
21:02:17 hi jakeyip
21:02:23 hi strigazi
21:02:30 o/
21:02:53 I would like to talk about some evolutions to magnum, is there a slot for that during the meetings?
21:03:26 alisanhaji: sure, I'll ping you
21:03:36 strigazi: great, thanks!
21:03:47 flwang: ?
21:04:27 #topic Stories/Tasks
21:05:08 1. ttsiouts pushed a series of patches for nodegroups
21:05:10 https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:magnum_nodegroups
21:05:35 Please have a look, mostly the ones that don't say WIP
21:05:40 brtknr: ^^
21:06:31 I reviewed them in person but I'll leave comments in gerrit too. We need input from others too
21:06:53 Any questions or comments?
21:07:00 about nodegroups
21:07:42 strigazi: I'll try to push some more things by the end of the week
21:07:48 strigazi: I'm trying to find a story describing this... are you able to point me to one?
21:08:05 mostly regarding the driver
21:08:13 jakeyip: https://github.com/openstack/magnum-specs/blob/master/specs/stein/magnum-nodegroups.rst
21:08:23 thanks!
21:09:30 jakeyip: official source http://git.openstack.org/cgit/openstack/magnum-specs/tree/specs/stein/magnum-nodegroups.rst
21:09:46 for some reason I can't find it in http://specs.openstack.org/openstack/magnum-specs/
21:09:54 ttsiouts: thanks
21:10:22 ttsiouts: we might need to update some details in the spec? like min node-count?
21:10:40 strigazi: sure
21:10:48 sorry, i was in a short meeting
21:10:49 I see highly available magnum clusters in the text. our users want to be able to create a cluster with workers in different AZs, does this work allow us to achieve that?
21:11:07 yes, that is the main use case
21:11:45 that's great!
21:12:28 different AZs and gpu vs 'plain' vm are the most wanted for us too
21:13:17 any other questions about NGs?
21:13:33 thanks for the work ttsiouts, strigazi, much appreciated
21:14:00 ttsiouts++
21:14:07 next,
21:14:18 2. Support /actions/resize API https://review.openstack.org/#/c/638572/
21:14:50 I had a look and it works just fine. The only part we miss is allowing resize on UPDATE_FAILED.
21:15:30 if you can have a look at the patch, it would be great.
21:15:42 strigazi: does resize for a cluster in update_in_progress work for you?
21:15:56 flwang: right, that too :)
21:16:19 strigazi: i will propose a new patch set, thanks
21:16:32 flwang: no, it didn't work, the request goes in but doesn't do anything. (that was 24 hours ago)
21:16:53 strigazi: ok, i will test it again to figure it out
21:17:03 and i will leave a comment on the patch
21:17:34 flwang: you will also have a look at resize on UPDATE_FAILED?
21:17:56 we also need a client patch
21:18:28 strigazi: sure, i will
21:18:37 i will propose a patch for actions today
21:18:55 or this week at least
21:19:42 ttsiouts: flwang, will we use resize for NGs? What do you think? I think it makes sense.
21:20:04 yeah.. I think it does...
21:20:34 strigazi: that's on my to-do list, i will add a todo comment in the resize patch, i have already put a node group param there as a placeholder
21:21:31 we just need to make sure the API is what we want, and we can support NGs once that work lands
21:21:46 flwang: I will also try to review
21:21:57 ttsiouts: that will be great, thanks
21:22:57 cool
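
For illustration, a minimal sketch of what a call to the /actions/resize endpoint discussed above might look like. The request fields (node_count, nodes_to_remove, nodegroup), the microversion header, and the endpoint/port are assumptions taken from the review discussion, not a settled contract; the nodegroup field is the placeholder parameter mentioned in the meeting.

    # Hedged sketch of the proposed resize action (https://review.openstack.org/#/c/638572/).
    # Field names, header, endpoint and port are assumptions based on the review above.
    import requests

    MAGNUM_ENDPOINT = "https://magnum.example.com:9511/v1"  # hypothetical endpoint
    TOKEN = "gAAAAAB..."                                     # a valid Keystone token
    CLUSTER = "my-cluster"                                   # cluster name or UUID

    resp = requests.post(
        f"{MAGNUM_ENDPOINT}/clusters/{CLUSTER}/actions/resize",
        headers={
            "X-Auth-Token": TOKEN,
            "OpenStack-API-Version": "container-infra latest",  # assumed microversion header
            "Content-Type": "application/json",
        },
        json={
            "node_count": 5,        # desired worker count
            "nodes_to_remove": [],  # optional: specific nodes to drop when shrinking
            "nodegroup": None,      # placeholder nodegroup param mentioned above
        },
    )
    resp.raise_for_status()
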
21:23:23 3. (WIP) Add cluster upgrade to the API https://review.openstack.org/#/c/514959/
21:24:03 I tested it with the new PS, looks good, I'm using it for the driver implementation, thanks flwang
21:24:17 strigazi: cool
21:25:01 team, please help review the upgrade api https://review.openstack.org/#/c/514959/
21:25:18 that's one of the most important features we'd like to get in for Stein
21:26:32 any comments/questions about it?
21:27:00 the only tiny issue now is that if the user is using the default/built-in tag/version in the heat template, we can't get it to raise a 400 error if the user accidentally does a downgrade
21:27:35 but personally i think it's OK at this stage
21:27:41 +1
21:28:01 because generally the template is proposed by the cloud admin and they will be very careful when publishing a new template
21:28:51 (what do I, as a cloud admin, have to be careful about)?
21:29:05 If we find a way to separate labels that are tags from labels that are switches to enable/disable features, we are good
21:29:24 jakeyip: version compatibility
21:29:55 jakeyip: not making big jumps that will have a big diff between cluster templates
21:30:51 strigazi: you just reminded me of a feature, we should support enabling an addon after the cluster is created
21:31:31 strigazi: we should redefine our label naming convention
21:32:05 using a consistent pattern, which can help mitigate the issue you mentioned above
21:32:10 I'm looking at the story at https://storyboard.openstack.org/#!/story/2002210 but I'm getting a "Not Found" for the gerrit topic links e.g. https://review.openstack.org/#q,topic:bp/cluster-upgrades,n,z is it just me?
21:32:12 not difficult to do, with the upgrade patch it should be doable.
21:32:54 https://storyboard.openstack.org/#!/story/2002210
21:33:48 https://review.openstack.org/#/q/topic:cluster-upgrades+(status:open+OR+status:merged)
21:36:28 any more comments?
21:38:37 let's move to the next one
21:38:42 I'm ok for now, still have lots of questions about the different edge cases but I'll wait for it to land first
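
To make the downgrade concern above concrete, a minimal sketch of the kind of check the upgrade API could do when the Kubernetes version is carried as a kube_tag label. The label name, the use of packaging.version, and the mapping to an HTTP 400 are illustrative assumptions; the point is that when the version only exists as a default inside the Heat template, there is nothing to compare, so an accidental downgrade cannot be rejected.

    # Hedged sketch of a downgrade guard; assumes the version travels as a kube_tag label.
    from packaging import version

    def validate_upgrade(current_labels: dict, new_labels: dict) -> None:
        old_tag = current_labels.get("kube_tag")
        new_tag = new_labels.get("kube_tag")
        if old_tag is None or new_tag is None:
            # Version only lives as a heat-template default: nothing to compare,
            # so the API cannot turn an accidental downgrade into a 400 here.
            return
        if version.parse(new_tag.lstrip("v")) < version.parse(old_tag.lstrip("v")):
            raise ValueError("downgrade is not supported")  # the API would map this to HTTP 400

    # Example: upgrading from v1.13.4 to v1.11.6 would raise.
    validate_upgrade({"kube_tag": "v1.13.4"}, {"kube_tag": "v1.11.6"})
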
21:38:50 flwang: do you have something?
21:39:32 strigazi: a little bit. 1. I'm running for the Train cycle PTL
21:39:45 flwang++
21:40:56 thank you for the support :D and i do need your support for the following release
21:41:10 no problem
21:41:14 2. Stein will be released soon https://releases.openstack.org/stein/schedule.html
21:41:44 next week is RC1, we should be quick to get things done with good quality, which is a challenge
21:41:54 since we still have quite a lot on the table
21:42:25 ack
21:43:15 3. Fedora CoreOS
21:43:33 instead of upgrading to Fedora Atomic 29, we probably should go for Fedora CoreOS 29
21:43:38 strigazi: thoughts?
21:43:52 or do both for a while
21:44:27 strigazi: for Fedora CoreOS 29, do you think we need to change a lot if we copy the code from the fedora-atomic driver?
21:44:45 we should stick to one
21:45:09 now that the heat agent does a lot of things, it should be doable
21:45:23 we just need to move some things to ignition
21:45:40 cool
21:45:47 not sure with a timeline of one week, maybe two or three
21:45:51 I thought they are only merging in Fedora CoreOS 30? is there a fedora coreos 29 image already?
21:45:58 29 is beta
21:46:34 https://ci.centos.org/artifacts/fedora-coreos/prod/builds/latest/
21:47:37 I see. that's a good image for testing :)
21:47:51 that's all from me
21:48:08 yeap, if you want to give feedback this is the place or #fedora-coreos
21:48:24 https://github.com/coreos/fedora-coreos-tracker/
21:48:42 anyone testing that out yet?
21:49:09 I booted a couple of vms, but not much, didn't have time
21:49:19 we have users asking for coreos support, the current answer is to wait for fedora coreos 30
21:49:36 a few months left
21:50:15 since we are running out of time
21:50:25 alisanhaji: you wanted to bring something up?
21:51:01 yes
21:51:23 I am interested in having kuryr-kubernetes supported with Magnum
21:52:09 as a first step to getting the clusters deployed by magnum communicating with the OpenStack that created them
21:52:32 that is working already
21:52:36 the goal is to have containers and VMs communicating in the same networks
21:53:15 is kuryr a network driver already? I only see calico and flannel
21:53:26 alisanhaji: does kuryr require trunk ports?
21:53:44 no it is only calico and flannel at the moment
21:54:16 yes when using VMs, but you can also run kuryr without trunk ports
21:54:28 but it requires a neutron-agent on the k8s node
21:54:31 alisanhaji: do you want to propose a patch?
21:55:02 I was wondering if I needed to submit a blueprint or RFE in launchpad
21:55:25 alisanhaji: https://storyboard.openstack.org/#!/project/openstack/magnum
21:55:37 alisanhaji: storyboard
21:55:51 if the patch looks like this https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/calico-service.sh it should be easy
21:56:52 Thanks, I will see how kuryr can be deployed like this
21:57:01 are trunk ports a problem with magnum?
21:57:29 it might be complicated, it would be better if it can work without them
21:58:12 using trunk ports may need more changes to the code and heat templates, which is complicated and we'd like to avoid
21:58:20 I see, in this case a neutron-agent should be installed on the k8s nodes, and talk to neutron-server
21:58:28 ah ok
21:58:41 alisanhaji: an agent on the nodes works better for magnum
21:59:22 anything else? time is up?
21:59:25 Ok, thanks! And what about having OVN as a network driver, I am thinking about integrating it into Magnum too
22:00:10 alisanhaji: feel free to propose a story and spec
22:00:18 we can start to discuss from there
22:00:35 flwang: great!
22:00:54 alisanhaji: go for it, the network drivers are pretty well encapsulated
22:01:12 CNI is great
22:01:27 yes it is, thanks strigazi and flwang
22:02:05 thanks for joining the meeting everyone!
22:02:12 thanks strigazi, flwang!
22:02:18 see you around
22:02:20 jakeyip: cheers
22:02:36 strigazi: thank you
22:02:46 #endmeeting
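
On the kuryr-kubernetes item: the calico-service.sh fragment linked above essentially gates on the network_driver label, writes a Kubernetes manifest, and applies it on the master. The sketch below restates that pattern in Python purely for illustration; a real patch would be a shell fragment next to calico-service.sh, and the paths, label value, and manifest details here are assumptions rather than Magnum's actual values.

    # Hedged restatement of the "network driver fragment" pattern, for illustration only.
    import subprocess
    from pathlib import Path

    def deploy_network_addon(network_driver: str, manifest: str,
                             manifest_dir: str = "/srv/magnum/kubernetes/manifests") -> None:
        """Mirror the calico fragment: do nothing unless this driver was selected,
        then write the addon manifest and apply it with kubectl on the master."""
        if network_driver != "kuryr":   # assumed label value for a kuryr driver
            return
        path = Path(manifest_dir) / "kuryr.yaml"
        path.write_text(manifest)
        # In a real fragment, Neutron credentials/endpoints would be templated in
        # from Heat parameters before applying.
        subprocess.run(["kubectl", "apply", "-f", str(path)], check=True)

    # The manifest passed in would define the kuryr-controller Deployment and the
    # kuryr-cni DaemonSet, plus the neutron agent on each node when trunk ports
    # are not used, as mentioned in the meeting.
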