03:00:02 #startmeeting zun
03:00:03 Meeting started Tue Jun 27 03:00:02 2017 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:06 The meeting name has been set to 'zun'
03:00:07 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2017-06-27_0300_UTC Today's agenda
03:00:12 #topic Roll Call
03:00:37 Namrata
03:00:47 o/
03:01:19 Fengshengqin
03:01:27 o/
03:01:47 thanks for joining the meeting Namrata kevinz FengShengqin diga
03:02:13 pausing a few more seconds for potential attendees
03:02:14 Madhuri
03:02:19 hi mkrai
03:02:34 ok, let's get started
03:02:45 #topic Announcements
03:02:50 1. Welcome Zhou Shunli to the core team
03:02:55 #link http://lists.openstack.org/pipermail/openstack-dev/2017-June/118883.html
03:03:02 shunli
03:03:09 zsli_: hey
03:03:09 welcome
03:03:14 zsli_: welcome to the core team
03:03:14 Thanks all.
03:03:19 welcome!!
03:03:24 welcome
03:03:26 zsli_: congratulations
03:03:38 zsli_: congrats!
03:04:02 thanks all
03:04:05 zsli_ did a lot of coding for the scheduler; it is good to have shunli on the core team
03:04:16 again, welcome
03:04:21 2. New BPs
03:04:29 #link https://blueprints.launchpad.net/zun/+spec/auto-allocate-network Automatically allocate network if none available
03:04:35 #link https://blueprints.launchpad.net/zun/+spec/multiple-networks Support connecting containers to multiple networks
03:04:40 #link https://blueprints.launchpad.net/zun/+spec/zun-api-as-container Running zun-api in a container managed by Zun
03:04:45 #link https://blueprints.launchpad.net/zun/+spec/support-pcipassthroughfilter Support PciPassthroughFilter in Zun Scheduler
03:04:50 #link https://blueprints.launchpad.net/zun/+spec/container-pci-device-modeling Build a data model for PCI passthrough devices
03:04:54 #link https://blueprints.launchpad.net/zun/+spec/container-sr-iov-networking Support SR-IOV networking
03:05:16 Looks like a long list
03:05:25 i wanted to mention those bps because i want to make sure you think they are a good idea
03:05:41 it seems a lot of things are planned for this release
03:05:43 :)
03:05:44 yes, there are several bps created last week
03:06:04 it doesn't have to fit into this release
03:06:14 okay
03:06:18 I see many of them are useful in NFV use cases
03:06:26 valuable BPs
03:06:43 i would leave it as homework for everyone, so please feel free to comment on the bps' whiteboards if you have any comment
03:06:45 who is working on https://blueprints.launchpad.net/zun/+spec/container-sr-iov-networking ?
03:07:01 opposing points of view are welcome
03:07:23 diga: i believe lakerzhou is working on it
03:07:28 okay
03:07:49 would like to participate in it
03:07:54 diga: i think you could ping him if you are interested in working with him on this bp
03:08:00 hongbin: sure
03:08:23 ok, any other comments so far?
03:08:33 hongbin: about cinder integration, I have pushed the latest patch resolving all the comments & modifications
03:08:45 hongbin: https://review.openstack.org/#/c/429943/ did you get a chance to look at that?
03:08:50 dims: ack
03:09:01 dims: sorry, wrong person
03:09:04 diga: ack
03:09:11 :)
03:09:31 diga: it doesn't pass the gate though
03:09:55 hongbin: yes, I need your help on that
03:10:05 diga: making the patch pass the gate is your next step imo
03:10:17 diga: ok, ping me offline for that
03:10:18 hongbin: okay
03:10:22 hongbin: sure
03:10:38 #topic Review Action Items
03:10:44 1. lakerzhou create a bp for supporting SR-IOV usage (COMPLETE)
03:10:48 #link https://blueprints.launchpad.net/zun/+spec/support-pcipassthroughfilter
03:10:53 #link https://blueprints.launchpad.net/zun/+spec/container-pci-device-modeling
03:10:58 #link https://blueprints.launchpad.net/zun/+spec/container-sr-iov-networking
03:11:04 this concludes the action items
03:11:11 next topic
03:11:14 #topic Cinder integration
03:11:19 #link https://blueprints.launchpad.net/zun/+spec/direct-cinder-integration Direct Cinder integration
03:11:24 #link https://blueprints.launchpad.net/zun/+spec/cinder-zun-integration Cinder integration via Fuxi
03:11:34 i think diga just gave an update for the fuxi part
03:11:43 yeah
03:12:02 in addition, i am working on the cinder part, will continue to work on it this week
03:12:22 any question on this topic?
03:12:46 will go through the BP & spec
03:12:58 diga: ack
03:13:02 #topic Introduce container composition (kevinz)
03:13:12 kevinz: ^^
03:13:17 hi hongbin
03:13:40 This week I'm working on a WIP patch to support "zun capsule create"
03:14:22 Now the object and data models are finished, continuing to work on the capsule create process
03:14:43 I hope to push a WIP patch to gerrit this week
03:14:50 kevinz: sounds like good progress
03:15:39 hongbin: first I reuse zun-api and add a /capsules/ method
03:16:13 hongbin: Then after finishing zun capsule create, will move to zun-capsule-api
03:16:44 kevinz: got that, it sounds like a good approach for incremental improvement
03:17:37 hongbin: yes :-)
03:18:15 for others, a bit of context: we are going to have a new api called zun-capsule-api that runs in parallel with zun-api
03:18:40 why do we need a separate api?
03:18:48 the discussion started from the last meeting, and we think this would be the less confusing option from an end-user point of view
03:19:10 zsli_: because the "capsule" api would somewhat duplicate the "container" api
03:19:15 hongbin: I have some questions related to two api servers
03:19:27 ack
03:19:27 mkrai: go ahead
03:19:51 Both would be running, listening on different ports. Right?
03:20:04 mkrai: i think yes
03:20:38 If yes, users would have to know the ports where the apis are available
03:21:03 mkrai: one option is to advertise it in the keystone service catalog
03:21:22 Or is there another way so that users don't have to worry about which port they should send requests to?
03:22:23 hongbin: ack. Also, what are the other advantages of having two api servers?
03:22:34 the usual approach for service discovery is using the keystone service catalog; information like the ip address / port of each api service will be available there
03:23:09 hongbin: ack
03:23:13 +1 for what other advantages
03:23:40 how to manage resources for two apis?
03:24:19 for the question of advantages, i stated what i can think of, kevinz might want to add more
03:25:02 1. both "container" and "capsule" can create containers, users might find it confusing which one to use
03:25:21 hongbin: mkrai: what use cases are we going to achieve by introducing two api services for zun?
03:25:42 some want to use "container", others want to use "capsule", both are doing the same thing (create containers). then it looks confusing to have the two in the same api
03:26:08 diga: we treat the two apis just like "docker" and "swarm"
03:26:20 okay
03:26:46 2. it could make the "capsule" functionality an optional deployment for operators (those who just want "container" don't have to deploy the capsule part, saving operational effort)
03:27:19 kevinz: do you have more?
03:27:23 hongbin: okay
03:27:26 seems 2 makes sense
03:28:11 zsli_: ack
03:28:35 I have one: two apis will lower the load, avoiding a capsule/container bottleneck in the API server.
03:29:10 Sorry, got disconnected
03:29:26 kevinz: i think lowering the load of the api does not make sense.
03:29:57 i think the apis all run with many workers
03:31:50 zsli_: ack, it is just my immature thought :-)
03:31:56 ok, any other comments for this topic?
03:32:18 (good questions so far)
03:32:25 still wondering if it's a good idea to have two separate apis
03:32:31 hongbin: just 1 question - how many operators use capsule for container deployment?
03:33:31 zsli_: another option is to have them in the same process, but treat "capsule" as an api extension
03:34:01 zsli_: then have a config to disable/enable the extension (like neutron)
03:34:14 my concern is the same as shengqin's, there may be a resource sync problem with two apis.
03:35:20 zsli_: both apis are stateless i assume?
03:35:29 advantage #2 also makes sense, so it's a hard choice.
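[editor's note] The keystone service catalog approach discussed above can be sketched in a few lines of Python. This is an illustrative snippet, not Zun or Keystone code: `find_endpoint` is a hypothetical helper, and the service type `container-capsule`, the IPs, and the ports are made-up examples; only the catalog's shape follows the Keystone v3 token catalog format, where each service entry carries a type and a list of endpoints per interface.

```python
def find_endpoint(catalog, service_type, interface="public"):
    """Return the URL for a service type from a v3-style catalog, or None.

    Clients discover each API's address/port here instead of hard-coding
    ports, which is how two parallel API servers stay transparent to users.
    """
    for service in catalog:
        if service.get("type") != service_type:
            continue
        for ep in service.get("endpoints", []):
            if ep.get("interface") == interface:
                return ep["url"]
    return None


# Example catalog advertising two hypothetical Zun endpoints on different ports.
catalog = [
    {"type": "container", "name": "zun",
     "endpoints": [{"interface": "public", "url": "http://10.0.0.5:9517/v1"}]},
    {"type": "container-capsule", "name": "zun-capsule",
     "endpoints": [{"interface": "public", "url": "http://10.0.0.5:9518/v1"}]},
]

print(find_endpoint(catalog, "container"))          # http://10.0.0.5:9517/v1
print(find_endpoint(catalog, "container-capsule"))  # http://10.0.0.5:9518/v1
```

In a real deployment a client would obtain the catalog from keystone along with its token rather than building it by hand.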
03:36:29 zsli_: ok, we could discuss it further offline if you have a chance
03:36:54 sure
03:36:55 diga: for your question, i don't know yet
03:37:09 Please include me as well in the discussion
03:37:12 let me think about it for some time.
03:37:19 mkrai: sure
03:37:21 +1
03:37:25 hongbin: NP, will go through the capsule docs
03:37:48 ok, next topic
03:38:20 i have the nfv topic in the agenda, but lakerzhou is not here, so i propose to skip it
03:38:45 the next one is the retry filter
03:38:50 #topic Add Retry filter (Shunli)
03:38:55 #link https://review.openstack.org/#/c/476299/
03:39:18 zsli_: do you want to start to introduce this topic?
03:39:27 sure
03:39:45 the idea is simple: like nova, introduce retry to zun.
03:40:16 when one compute host fails, we can retry another host to boot the container.
03:41:02 there are some obstacles to implementing this BP.
03:41:49 the major concern i have is that nova is going to get rid of the retry filter; i am not sure if it still makes sense to implement it in zun
03:42:05 1. nova plans to move the resource claim to the scheduler; then retry will not be needed.
03:42:27 hongbin: is nova going to replace it with the placement api or something else?
03:42:35 2. zun does not have separate scheduler and conductor services to reschedule the request.
03:43:20 mkrai: yes, the nova-scheduler will be replaced by the placement api
03:44:09 I'm also not sure if we can work well without the retry filter
03:44:24 I think we can also reuse the placement api
03:44:51 would like to know your opinions.
03:45:13 Adding a scheduler service would be extra overhead imo
03:46:06 mkrai: ack.
03:46:31 without the scheduler service, all works well so far.
03:46:59 zsli_: do you think it is a good idea to move the claim to the scheduler as well (like what nova planned)?
03:47:01 seems there is no need to add a scheduler service now.
03:47:30 hongbin: yes
03:47:52 I need to investigate the nova implementation first.
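[editor's note] The retry behavior zsli_ describes above can be sketched as follows. This is a minimal illustration of the idea (when a boot on one compute host fails, try the next candidate host), not Nova's or Zun's actual implementation; `schedule_with_retry` and `fake_boot` are hypothetical names, and real schedulers would also feed the failed hosts back into the filters.

```python
def schedule_with_retry(hosts, boot, max_retries=3):
    """Try to boot on each candidate host, retrying the next one on failure.

    `hosts` is the filtered/weighed candidate list; `boot` raises on failure.
    Returns the host that succeeded, or raises after exhausting the retries.
    """
    failed = []
    for host in hosts[:max_retries + 1]:
        try:
            boot(host)
            return host
        except Exception:
            failed.append(host)  # remember the bad host and pick another
    raise RuntimeError("No valid host found; tried: %s" % failed)


# Usage: the first two hosts fail to boot, the third succeeds.
def fake_boot(host):
    if host in ("host1", "host2"):
        raise RuntimeError("boot failed on %s" % host)

print(schedule_with_retry(["host1", "host2", "host3"], fake_boot))  # host3
```

Moving the resource claim into the scheduler, as nova planned, removes the main source of such failures (races between concurrent claims on the same host), which is why the retry loop becomes unnecessary.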
03:47:53 zsli_: if this is the goal, then the retry filter doesn't make sense imo
03:48:26 hongbin (IRC): agree
03:48:30 zsli_: because we don't need to retry if we claimed on the scheduler
03:48:32 hongbin: ok.
03:49:08 hongbin: i will abandon the patch.
03:49:36 zsli_: ack, thanks for bringing up this topic, it is a good discussion
03:49:39 and investigate the nova implementation, to see if we can implement it like nova.
03:49:58 sounds good
03:50:26 any other comments on this topic?
03:50:59 ok, next one
03:51:07 #topic Add Zun Resources to Heat
03:51:13 #link https://blueprints.launchpad.net/heat/+spec/heat-plugin-zun
03:51:20 Namrata: there?
03:51:27 yes hongbin
03:51:34 thanks for updating the patch
03:51:44 Namrata: want to give a brief update for this topic?
03:51:48 Namrata: np
03:51:52 #link https://review.openstack.org/#/c/437810/
03:52:38 we have got +2 on the patch
03:52:47 waiting for approval
03:52:50 yes, two +2s so far
03:53:10 i guess it will be merged this week
03:53:20 thanks namrata for the great work.
03:53:28 thanks hongbin
03:53:36 thanks zsli_
03:53:37 I think this patch is a great feature for zun.
03:53:48 zsli_: +1
03:54:19 ok, let's get into open discussion
03:54:37 thanks Namrata for the great patch
03:54:38 #topic Open Discussion
03:54:52 anyone has a topic to bring up?
03:55:18 hongbin: does your kuryr work with zun now?
03:55:44 zsli_: i think yes
03:55:57 my devstack has been broken for some days, seems a kuryr-libnetwork bug.
03:56:09 No
03:56:09 will ping you offline about the problem
03:56:36 mkrai: your env doesn't work either?
03:56:56 It worked on the weekend
03:57:08 mkrai: ack
03:57:11 But I was facing some issues with docker
03:57:51 mkrai: zsli_: ok, let's work those out in the zun channel
03:58:09 all, thanks for joining the meeting
03:58:17 have a good day
03:58:18 my problem is that kuryr-libnetwork seems to fail adding tags to networks in neutron.
03:58:28 #endmeeting