03:00:09 #startmeeting zun
03:00:10 Meeting started Tue Dec 6 03:00:09 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:13 The meeting name has been set to 'zun'
03:00:16 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-12-06_0300_UTC Today's agenda
03:00:21 #topic Roll Call
03:00:23 Namrata
03:00:25 pradeep
03:00:26 kevinz
03:00:28 Madhuri Kumari
03:00:34 Wenzhi
03:00:37 shubham
03:01:15 thanks for joining the meeting Namrata pksingh kevinz mkrai_ Wenzhi shubhams_
03:01:23 o/
03:01:36 * hongbin is getting code today
03:01:46 s/code/cold/
03:01:56 take care
03:01:59 sudipto: hey
03:02:04 hi
03:02:07 take care hongbin
03:02:11 Wenzhi: thanks, not too bad
03:02:14 #topic Announcements
03:02:20 1. Zun is an official OpenStack project now!
03:02:26 #link https://review.openstack.org/#/c/402227/
03:02:40 great :)
03:02:41 Yay!!! Congrats to all of us.
03:02:44 thanks everyone for your hard work, this is a recognition of the whole team
03:02:44 \o/
03:02:46 :)
03:02:48 bravo
03:02:55 great
03:02:59 Great!!
03:03:02 awesome!
03:03:09 yeah
03:03:15 #topic Review Action Items
03:03:19 none
03:03:24 #topic Should we participate in the OpenStack Project Team Gathering?
03:03:29 #link https://www.openstack.org/ptg/
03:03:35 hi, hongbin, sorry I'm late
03:03:41 Yes we should
03:03:45 yanyanhu: hey, np. thanks for joining
03:04:03 I plan to go to the PTG
03:04:22 lakerzhou: ack
03:04:34 anyone else who will go there?
03:04:58 * hongbin should be able to attend but not sure 100%
03:05:05 not so sure
03:05:11 I can't say for sure now. Waiting for approval from my employer
03:05:33 need to talk to my employer
03:05:50 it looks like lakerzhou will go there; mkrai_, Namrata and I are not sure
03:06:00 pksingh: ack
03:06:24 ok, i asked this because the tc chair asked me whether zun will have a session at the ptg
03:06:53 from the current responses, it looks like only a few of us will go there
03:07:28 ok, let's table this topic until next week
03:07:50 #topic Kubernetes integration (shubhams)
03:07:57 #link https://blueprints.launchpad.net/zun/+spec/k8s-integration The BP
03:08:02 #link https://etherpad.openstack.org/p/zun-k8s-integration The etherpad
03:08:08 shubhams_: ^^
03:08:41 We discussed this last week and mostly agreed to have a small set of k8s features
03:08:53 That is: start from a pod-like feature in zun
03:09:13 It is mentioned as option #3 in the etherpad
03:09:40 On top of this, we propose option #5
03:09:52 Madhuri will explain this
03:10:01 mkrai_: ^^
03:10:26 I have tried to explain it in the etherpad.
03:10:43 It is basically exposing a single API endpoint for each COE
03:11:08 and then having sub-controllers to it for handling the resources related to that COE
03:11:32 By this, zun will not have many resources that we directly replicate from COEs
03:11:54 For an example: /v1/k8s/pod etc
03:12:01 then, zun will behave as a proxy to the different COEs?
03:12:09 Yes
03:12:22 this is basically an extension of option #1 (it seems)
03:12:32 The implementation at the compute side will be the same. It will act like a proxy
03:13:22 mkrai_: got that
03:13:31 all, thoughts on this?
03:13:48 sounds feasible
03:14:06 Madhuri and I had already discussed this, so we are ok on this point
03:14:12 the idea seems great, at least it does not disturb the existing k8s users
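[For illustration, a minimal pecan-style sketch of the sub-controller layout mkrai_ describes as option #5. All class, method and attribute names here are hypothetical, not Zun's actual code: each COE gets one parent endpoint, with that COE's resources as sub-controllers, so a request like GET /v1/k8s/pods is proxied to Kubernetes while the core resources stay COE-independent.]

    import pecan
    from pecan import rest


    class PodsController(rest.RestController):
        """Sub-controller for /v1/k8s/pods (illustrative name)."""

        @pecan.expose('json')
        def get_all(self):
            # A real implementation would proxy this call to the k8s API
            # server; a stub response keeps the sketch self-contained.
            return {'pods': []}


    class K8sController(object):
        """Parent endpoint grouping the Kubernetes-specific resources."""
        pods = PodsController()


    class V1Controller(object):
        """Root of the v1 API: core resources plus per-COE extensions."""
        # containers = ContainersController()  # the existing core resource
        k8s = K8sController()                  # the option #5 extension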
03:14:40 mkrai_, the segregation seems like a good proposal. What would happen to the default zun behaviour in this case?
03:14:51 i mean a command like "zun list"
03:14:59 how about having different API services for different COEs?
03:15:05 sudipto_: zun k8s pod-create
03:15:11 pksingh: interesting
03:15:16 mkrai_, that's for k8s, that i got.
03:15:27 the COE will be the parent command and we can have sub-commands related to the specific resource
03:15:38 so everything is a COE in zun?
03:16:34 sudipto_: Yes
03:16:55 pksingh: I am not sure, will it not add additional complexity?
03:17:05 Running multiple API services?
03:17:08 so there's no longer a command like zun list - it becomes zun default list or something?
03:17:30 mkrai_: users can run the service if they use k8s
03:17:35 then zun k8s list and so on...
03:17:48 mkrai_: otherwise they don't need to run it
03:17:54 sudipto_: Sorry I missed the docker part. It should remain as it is
03:18:30 pksingh: And what about the compute service?
03:19:04 ok that sounds fine. Because if you have created a pod from k8s and then do a docker ps today, it would show us the individual containers. So maybe zun list becomes directly mapped to docker ps
03:19:07 mkrai_: that can be the same i think
03:19:19 i think zun needs to have a core API that is independent of any COE, although we could have extension APIs that are compatible with different COEs
03:19:32 2 cents
03:20:16 sudipto_: Right
03:21:04 pksingh: In that case we will have to have a proxy API service. Right?
03:21:21 hongbin: every COE has different terms; won't it add overhead for existing COE users to learn the new things?
03:22:06 pksingh: yes, that is true, but i would argue that zun's users won't be users of a COE
03:22:16 pksingh: because COE users will use the native COE API
03:22:21 hongbin, +1
03:22:31 pksingh: zun's users are really the users of openstack imo
03:22:49 hongbin: +1, additionally with zun we try to give uniformity across COEs
03:23:02 hongbin: yes, i agree with you
03:23:02 the openstack users who are using "nova list", those are the potential zun users
03:23:15 agreed
03:23:23 agreed
03:23:47 if we agree on this point, then we need to have a core API that is independent of any COE, and friendly to openstack users
03:23:51 hongbin: but having our own implementation (the core API as you mentioned) involves huge effort, I think
03:24:18 what is this core API? another COE from openstack?
03:24:22 shubhams_: yes, might be
03:24:41 sudipto_: right now, the core API is /containers
03:24:57 hongbin, eventually - a group of containers i presume?
03:25:11 sudipto_: Right
03:25:14 sudipto_: yes
03:25:42 so, personally, i prefer option #3, adding pod to the core API
03:25:47 FWIW, that's the path to choose because eventually a k8s user may be more than OK with k8s itself.
03:25:52 however, option #5 could be an optional feature
03:26:30 sudipto_: true
03:26:46 but shubhams_ is right that it's a huge effort anyway...
03:27:15 another option is to keep /containers only
03:27:19 no pod
03:27:41 then, the grouping of containers needs to be achieved by a compose-like feature
03:27:59 Can an orchestration service like heat do the composition?
03:28:11 hongbin: this is the core API implementation only. Right?
03:28:23 sudipto_: yes, i think it can
03:28:32 mkrai_: yes
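[To make the split being debated concrete, a hedged client-side view: a COE-independent core API next to a per-COE extension. The host, port and paths below are made up for illustration.]

    import requests

    ZUN = 'http://zun-api.example.com:9517/v1'  # endpoint/port are illustrative

    # Core API, COE-independent -- what a plain "zun list" keeps talking to:
    containers = requests.get(ZUN + '/containers').json()

    # Per-COE extension (option #5) -- what a "zun k8s list"-style command
    # would hit; zun would proxy it to the Kubernetes API server:
    pods = requests.get(ZUN + '/k8s/pods').json()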
03:29:26 hongbin: What I think now is: we should first integrate k8s with zun (option #3 or option #5) as we have seen huge interest in k8s
03:29:43 And then later concentrate on/discuss the core API feature
03:29:51 These are two different things IMO
03:30:25 Or split the discussion into two topics, k8s and the core COE APIs
03:30:27 ok, then it seems we need to choose between #3 and #5
03:30:35 Yes
03:30:54 hongbin: mkrai_, sorry to ask this, but what benefit will users get if they access k8s via zun?
03:31:03 pksingh, stole my words
03:31:37 pksingh: i think they are interested in leveraging the advanced k8s features, e.g. the replication controller
03:31:47 I don't have an answer for it now. But maybe in the future we provide some openstack features like storage and networking to them
03:32:02 maybe in general, the self-healing feature
03:32:06 Now we are just behaving as a proxy
03:32:24 hongbin: can't we implement it in zun instead of providing it through k8s?
03:32:28 pksingh: sudipto_: I am not sure, but maybe the user who wants a container cluster but doesn't want to explore other options except openstack would use zun and its APIs
03:32:37 pksingh: yes, we could
03:33:07 pksingh: We can, and that is different from this k8s integration feature
03:33:27 i think when we are talking about k8s being managed by zun - we are talking about 3 abstractions: one by docker, one by kubernetes and one by zun
03:33:37 hongbin: I suggest creating a bp/etherpad for it and starting to discuss the core COE APIs in zun
03:33:50 mkrai_: ok, we could do that
03:34:06 #action hongbin create an etherpad to discuss the zun core api
03:34:14 hongbin: thanks
03:34:28 I am not sure I would go to zun to get my kubernetes cluster managed; if i come to openstack, maybe i will just use magnum
03:34:29 pksingh: to answer your question about why users want to use k8s via zun
03:34:42 hongbin: i was wondering why a user would use the k8s API through zun, it will add extra overhead
03:34:46 pksingh: another important reason is multi-tenancy, neutron, cinder, etc.
03:35:18 hongbin: ok
03:35:36 the key point is that users are looking for openstack integration for container technology
03:35:56 sudipto_: magnum doesn't have the k8s APIs.
03:36:08 How about splitting this bp itself: 1. integrate k8s 2. core API implementation, and working in parallel in 2 teams?
03:36:11 sudipto_: i agree with u
03:36:26 my point is - i wouldn't want a magnum or zun api for kubernetes, i will use the kubernetes api itself.
03:36:48 sudipto_: +1, same here
03:36:52 I don't necessarily buy into the cinder and neutron points either.
03:37:09 sudipto_: Though they can use the native kubectl, openstack users might want the native openstack CLIs for the same
03:37:26 sudipto_: then, do you think k8s integration is the right direction for zun?
03:37:43 i am not sure at this point, maybe we need some operator opinion here.
03:38:05 sudipto_: i see
03:38:26 sudipto_: Think about the case of the image driver here. We could have used docker pull directly, but adding the glance driver gave an option to store images inside openstack. Similarly, having k8s integrate with zun will give such benefits. Just a thought
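[A minimal sketch of the image-driver analogy shubhams_ draws: one interface, with "docker pull" and Glance as interchangeable backends. Class and method names are illustrative, not Zun's real driver API; the inspect_image call discussed later in this meeting would live behind the same interface.]

    import abc


    class ImageDriver(abc.ABC):
        """Illustrative image-driver interface."""

        @abc.abstractmethod
        def pull_image(self, context, repo, tag):
            """Make the image available on the container host."""

        @abc.abstractmethod
        def inspect_image(self, image):
            """Return the image's metadata."""


    class DockerHubDriver(ImageDriver):
        def pull_image(self, context, repo, tag):
            pass  # would ask the docker daemon to pull repo:tag

        def inspect_image(self, image):
            pass  # would query the docker daemon for the metadata


    class GlanceDriver(ImageDriver):
        def pull_image(self, context, repo, tag):
            pass  # would download from Glance, keeping images inside OpenStack

        def inspect_image(self, image):
            pass  # would read the metadata Glance stores for the image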
03:39:14 sudipto_: actually, back to the beginning when the zun project was founded
03:39:23 hongbin: :)
03:39:37 glance images are pretty inferior in terms of storing docker images if you just think about it.
03:39:42 but we need it.
03:39:43 sudipto_: the zun project was founded because other openstack projects (i.e. trove) need an API to talk to different COEs
03:40:16 yeah, which means you may be talking about removing the headache from the user that it's kubernetes or swarm or mesos underneath
03:40:21 sudipto_: from their point of view, it is hard for them to interface with different COEs, so they want a single API
03:40:32 but with the endpoint API we are saying: start becoming aware of kubernetes commands - aren't we?
03:41:17 maybe not
03:41:51 users should not be aware of k8s under the hood
03:41:57 like nova
03:42:01 hongbin, yup
03:42:06 that's what i am saying too.
03:42:11 like nova's users are not aware of the hypervisor
03:42:16 hongbin: i agree
03:42:19 but a command like zun k8s list --> it becomes k8s aware.
03:42:50 sudipto_: is it a good or bad idea?
03:43:06 hongbin, i mean, that is the opposite of our initial discussion.
03:43:24 ok
03:43:26 however, during the initial discussion, we couldn't find a way to bring all the COEs under one API (at least back then)
03:43:43 sudipto_: And that's the same today I guess
03:43:47 To me, an ML discussion sounds right.
03:43:55 ok
03:44:09 Every COE has a different implementation
03:44:11 #action hongbin start an ML thread to discuss the k8s integration bp
03:44:23 then, let's advance to the next topic
03:44:33 #topic Support interactive mode (kevinz)
03:44:38 #link https://blueprints.launchpad.net/zun/+spec/support-interactive-mode The BP
03:44:42 #link https://review.openstack.org/#/c/396841/ The design spec
03:44:47 kevinz: ^^
03:44:51 Hi
03:45:31 This week I plan to finish the "attach" workflow and the zun-api <-> docker-daemon workflow
03:47:00 Last week I tweaked the design spec and finished the pre-investigation of feasible methods
03:47:13 yes, the spec looks good
03:47:24 (to me)
03:47:37 haha
03:48:19 kevinz: i will take a look today
03:48:23 LGTM as well
03:48:57 Wenzhi: THX
03:49:52 kevinz: i think you can start the prototype (maybe after the spec is merged) to see if the implementation is feasible
03:50:05 pksingh: OK thanks
03:50:14 kevinz: feel free to upload a WIP patch to ping us for early reviews
03:50:40 hongbin: Yeah I will do that this week for the prototype
03:50:54 kevinz: thanks for your work on this feature, it will be a useful feature
03:51:01 kevinz: ack
03:51:20 ok, move on to the next topic
03:51:22 hongbin: my pleasure, thanks for your help
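[A minimal sketch of the "attach" flow kevinz describes, using docker-py's low-level client; the zun-side plumbing (how zun-api would relay the socket, e.g. over a websocket) is omitted, and the details here are an assumption, not the spec's final design.]

    import docker

    # docker-py's low-level client (named docker.Client in older releases).
    client = docker.APIClient(base_url='unix://var/run/docker.sock')

    # Interactive mode needs the container created with an open stdin and a TTY.
    container = client.create_container(
        image='ubuntu:16.04',
        command='/bin/bash',
        stdin_open=True,
        tty=True,
    )
    client.start(container)

    # attach_socket returns a raw socket that zun-api could relay between the
    # user's terminal and the docker daemon.
    sock = client.attach_socket(
        container,
        params={'stdin': 1, 'stdout': 1, 'stderr': 1, 'stream': 1},
    )
    # docker-py wraps the socket, hence the private _sock attribute here.
    sock._sock.sendall(b'echo hello\n')  # write to the container's stdin
    print(sock._sock.recv(4096))         # read what the container printed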
03:51:25 #topic Moving inspect image from containers driver to image driver
03:51:30 #link https://review.openstack.org/#/c/404546/
03:52:11 for this review, it looks like we have different opinions about the location of the inspect_image method
03:52:35 Namrata: do you want to drive this one?
03:52:44 yeah sure
03:53:15 in my opinion, if we can have a similar implementation for the glance driver, then it's fine
03:53:33 IMO inspect_image is specific to the image
03:53:42 pksingh: +1
03:53:45 so it should be under the image driver
03:54:09 sounds reasonable
03:54:33 pksingh: mkrai_: yeah, currently even the glance driver calls inspect_image from the docker driver
03:55:40 the present patch set looks good to me.
03:55:53 me too
03:56:06 plus in that case, if we have inspect_image for the glance driver then we probably need not run image_load before running inspect_image, as it will be handled by the driver itself. But for docker only, the current patch sounds good
03:56:19 i think it will fail if the image driver is glance, am i right?
03:56:44 Yes
03:56:55 will implement for glance also
03:57:32 Namrata: can we add both implementations and then merge?
03:57:32 Namrata: maybe you could combine the implementations of glance and docker hub into one patch
03:57:50 Namrata: then the reviewers can see the big picture clearly
03:58:04 pksingh: same advice :)
03:58:33 hongbin: :)
03:58:44 okay
03:58:48 will do that
03:58:53 Namrata: thx
03:58:55 #topic Open Discussion
03:59:03 Namrata: thnx
03:59:10 thanks
03:59:19 last minute
03:59:39 ok, thanks all for joining the meeting
03:59:42 #endmeeting