03:00:09 <hongbin> #startmeeting zun
03:00:10 <openstack> Meeting started Tue Dec  6 03:00:09 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:13 <openstack> The meeting name has been set to 'zun'
03:00:16 <hongbin> #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-12-06_0300_UTC Today's agenda
03:00:21 <hongbin> #topic Roll Call
03:00:23 <Namrata> Namrata
03:00:25 <pksingh> pradeep
03:00:26 <kevinz> kevinz
03:00:28 <mkrai_> Madhuri Kumari
03:00:34 <Wenzhi> Wenzhi
03:00:37 <shubhams_> shubham
03:01:15 <hongbin> thanks for joining the meeting Namrata pksingh kevinz mkrai_ Wenzhi shubhams_
03:01:23 <sudipto_> o/
03:01:36 * hongbin is getting a cold today
03:01:56 <Wenzhi> take care
03:01:59 <hongbin> sudipto: hey
03:02:04 <sudipto_> hi
03:02:07 <kevinz> take care hongbin
03:02:11 <hongbin> Wenzhi: thanks, not too bad
03:02:14 <hongbin> #topic Announcements
03:02:20 <hongbin> 1. Zun is an official OpenStack project now!
03:02:26 <hongbin> #link https://review.openstack.org/#/c/402227/
03:02:40 <pksingh> great :)
03:02:41 <mkrai_> Yay!!! Congrats to all of us.
03:02:44 <hongbin> thanks everyone for your hard work, this is a recognition of the whole team
03:02:44 <kevinz> \o/
03:02:46 <shubhams_> :)
03:02:48 <Wenzhi> bravo
03:02:55 <Namrata> great
03:02:59 <lakerzhou> Great!!
03:03:02 <sudipto_> awesome!
03:03:09 <hongbin> yeah
03:03:15 <hongbin> #topic Review Action Items
03:03:19 <hongbin> none
03:03:24 <hongbin> #topic Should we participate in the OpenStack Project Team Gathering?
03:03:29 <hongbin> #link https://www.openstack.org/ptg/
03:03:35 <yanyanhu> hi, hongbin, sorry I'm late
03:03:41 <mkrai_> Yes we should
03:03:45 <hongbin> yanyanhu: hey, np. thanks for joining
03:04:03 <lakerzhou> I plan to go to the PTG
03:04:22 <hongbin> lakerzhou: ack
03:04:34 <hongbin> anyone else planning to go?
03:04:58 * hongbin should be able to attend but not sure 100%
03:05:05 <Namrata> not so sure
03:05:11 <mkrai_> I can't confirm now. Waiting for approval from my employer
03:05:33 <pksingh> need to talk to my employer
03:05:50 <hongbin> it looks like lakerzhou will go there; mkrai_, Namrata, and I are not sure
03:06:00 <hongbin> pksingh: ack
03:06:24 <hongbin> ok, i asked this because the tc chair asked me if zun will have a session at the ptg
03:06:53 <hongbin> from the current response, it looks only a few of us will go there
03:07:28 <hongbin> ok, let's table this topic until next week
03:07:50 <hongbin> #topic Kubernetes integration (shubhams)
03:07:57 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/k8s-integration The BP
03:08:02 <hongbin> #link https://etherpad.openstack.org/p/zun-k8s-integration The etherpad
03:08:08 <hongbin> shubhams_: ^^
03:08:41 <shubhams_> We discussed this last week and mostly agreed to support a small set of k8s features
03:08:53 <shubhams_> That is: start with a pod-like feature in zun
03:09:13 <shubhams_> It is mentioned as option #3 in the etherpad
03:09:40 <shubhams_> On top of this, we propose option #5
03:09:52 <shubhams_> Madhuri will explain this
03:10:01 <hongbin> mkrai_: ^^
03:10:26 <mkrai_> I have tried to explain it in etherpad.
03:10:43 <mkrai_> It is basically exposing a single API endpoint for each COE
03:11:08 <mkrai_> and then having sub-controllers under it for handling the resources related to that COE
03:11:32 <mkrai_> This way zun will not have many resources that directly replicate those of the COEs
03:11:54 <mkrai_> For example: /v1/k8s/pod etc.
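As a rough illustration of this per-COE endpoint idea, here is a minimal sketch assuming a pecan-based API (which zun uses) and plain HTTP proxying to a kubernetes apiserver; the class names and the apiserver URL are hypothetical, not zun's actual code:

```python
# Sketch of option #5: a /v1/k8s/... endpoint whose sub-controllers
# proxy straight to the k8s apiserver. All names are hypothetical.
from pecan import expose, rest
import requests

K8S_API = 'http://127.0.0.1:8080/api/v1'  # hypothetical apiserver URL


class PodController(rest.RestController):
    """Sub-controller handling /v1/k8s/pod by proxying to k8s."""

    @expose('json')
    def get_all(self):
        # List pods by forwarding the call straight to the apiserver.
        return requests.get(K8S_API + '/pods').json()

    @expose('json')
    def post(self, **manifest):
        # Create a pod from a forwarded manifest; zun adds no
        # semantics of its own here, it acts as a pure proxy.
        resp = requests.post(K8S_API + '/namespaces/default/pods',
                             json=manifest)
        return resp.json()


class K8sController(object):
    """Parent controller: one endpoint per COE, e.g. /v1/k8s/..."""
    pod = PodController()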
03:12:01 <hongbin> then, zun will behave as a proxy to the different coes?
03:12:09 <mkrai_> Yes
03:12:22 <hongbin> this is basically an extension of option #1 (it seems)
03:12:32 <mkrai_> The implementation at the compute side will be the same. It will act like a proxy
03:13:22 <hongbin> mkrai_: get that
03:13:31 <hongbin> all, thoughts on this?
03:13:48 <Wenzhi> sounds feasible
03:14:06 <shubhams_> Madhuri and I have already discussed this, so we are ok on this point
03:14:12 <pksingh> the idea seems great, at least it does not disturb existing k8s users
03:14:40 <sudipto_> mkrai_, the segregation seems like a good proposal. What would happen to the default zun behaviour in this case?
03:14:51 <sudipto_> i mean a command like "zun list"
03:14:59 <pksingh> how about having different api service for different coes?
03:15:05 <mkrai_> sudipto_: zun k8s pod-create
03:15:11 <hongbin> pksingh: interesting
03:15:16 <sudipto_> mkrai_, that's for k8s, that i got.
03:15:27 <mkrai_> the COE will be the parent command and we can have sub commands related to the specific resource
03:15:38 <sudipto_> so everything is a COE in zun?
03:16:34 <mkrai_> sudipto_: Yes
03:16:55 <mkrai_> pksingh: I am not sure, won't it add additional complexity?
03:17:05 <mkrai_> Running multiple api services?
03:17:08 <sudipto_> so there's no longer a command like zun list - it becomes zun default list or something?
03:17:30 <pksingh> mkrai_: users can run the service if they use k8s
03:17:35 <sudipto_> then zun k8s list and so on...
03:17:48 <pksingh> mkrai_: otherwise they don't need to run it
03:17:54 <mkrai_> sudipto_: Sorry I missed the docker part. It should remain as it is
03:18:30 <mkrai_> pksingh: And what about compute service?
03:19:04 <sudipto_> ok, that sounds fine. Because if you have created a pod from k8s and then do a docker ps today, it would show the individual containers. So maybe zun list becomes directly mapped to docker ps
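To make the proposed command split concrete, here is a minimal argparse sketch of the client side; the subcommand names mirror the examples above ("zun list", "zun k8s pod-create"), and the whole layout is a hypothetical illustration, not the actual python-zunclient:

```python
# Hypothetical CLI layout: default (docker-backed) commands stay
# flat, COE-specific commands hang off a per-COE parent command.
import argparse

parser = argparse.ArgumentParser(prog='zun')
top = parser.add_subparsers(dest='command')

# Default commands keep their current shape, e.g. "zun list".
top.add_parser('list', help='List containers (maps to docker ps)')

# COE-specific commands: "zun k8s pod-create ..." etc.
k8s = top.add_parser('k8s', help='kubernetes-specific resources')
k8s_sub = k8s.add_subparsers(dest='k8s_command')
pod_create = k8s_sub.add_parser('pod-create', help='Create a pod')
pod_create.add_argument('--manifest', required=True)

args = parser.parse_args(['k8s', 'pod-create', '--manifest', 'pod.yaml'])
print(args.command, args.k8s_command, args.manifest)
```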
03:19:07 <pksingh> mkrai_: that can be same i think,
03:19:19 <hongbin> i think zun needs to have a core api that is independent of any coe, although we could have extension apis that are compatible with different coes
03:19:32 <hongbin> 2 cents
03:20:16 <mkrai_> sudipto_: Right
03:21:04 <mkrai_> pksingh: In that case we will have to have a proxy api service. Right?
03:21:21 <pksingh> hongbin: every coe has different terms; won't it add overhead for existing coe users to learn new things?
03:22:06 <hongbin> pksingh: yes, that is true, but i would argue that zun's users won't be coe users
03:22:16 <hongbin> pksingh: because coe users will use the native coe api
03:22:21 <sudipto_> hongbin, +1
03:22:31 <hongbin> pksingh: zun's users are really the users of openstack imo
03:22:49 <shubhams_> hongbin: +1, additionally with zun we try to give uniformity across coes
03:23:02 <pksingh> hongbin: yes, i agree with you
03:23:02 <hongbin> the openstack users who are using "nova list", those are the potential zun users
03:23:15 <Wenzhi> agreed
03:23:23 <kevinz> agreed
03:23:47 <hongbin> if we agree on this point, then we need to have a core api that is independent of any coe, and friendly to openstack users
03:23:51 <shubhams_> hongbin: but having our own implementation (core api as you mentioned) involves huge effort, I think
03:24:18 <sudipto_> what is this core api? another COE from openstack?
03:24:22 <hongbin> shubhams_: yes, might be
03:24:41 <hongbin> sudipto_: right now, the core api is /containers
03:24:57 <sudipto_> hongbin, eventually - a group of containers i presume?
03:25:11 <mkrai_> sudipto_: Right
03:25:14 <hongbin> sudipto_: yes
03:25:42 <hongbin> so, personally, i prefer option #3, adding pod to the core api
03:25:47 <sudipto_> FWIW, that's the path to choose because eventually a k8s user may be more than OK with k8s itself.
03:25:52 <hongbin> however, option #5 could be an optional feature
03:26:30 <hongbin> sudipto_: true
03:26:46 <sudipto_> but shubhams_ is right that it's a huge effort anyway...
03:27:15 <hongbin> another option is to keep /container only
03:27:19 <hongbin> no pod
03:27:41 <hongbin> then, the grouping of containers needs to be achieved by a compose-like feature
03:27:59 <sudipto_> Can an orchestration service like heat do the composition?
03:28:11 <mkrai_> hongbin: this is the core API implementation only. Right?
03:28:23 <hongbin> sudipto_: yes, i think it can
03:28:32 <hongbin> mkrai_: yes
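For reference, a pod-like grouping can be approximated on plain docker by sharing namespaces; below is a minimal docker-py sketch, assuming an infra container holds the shared network namespace (as k8s does with its pause container). The create_pod helper and its structure are hypothetical:

```python
# Sketch of a compose-like pod grouping on plain docker: one "infra"
# container owns the network namespace and the members join it.
import docker

client = docker.from_env()


def create_pod(name, images):
    # The infra container owns the shared network namespace,
    # mirroring what k8s does with its pause container.
    infra = client.containers.run('gcr.io/google_containers/pause',
                                  name=name + '-infra', detach=True)
    members = [
        client.containers.run(image,
                              name='%s-%d' % (name, i),
                              network_mode='container:' + infra.id,
                              detach=True)
        for i, image in enumerate(images)
    ]
    return infra, members
```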
03:29:26 <mkrai_> hongbin: What I think now is: we should first integrate k8s with zun (option #3 or option #5), as we have seen huge interest in k8s
03:29:43 <mkrai_> And then later concentrate/discuss on the core API feature
03:29:51 <mkrai_> These are two different things IMO
03:30:25 <mkrai_> Or split the discussion into two topics, k8s and core COE APIs
03:30:27 <hongbin> ok, then it seems we need to choose between #3 and #5
03:30:35 <mkrai_> Yes
03:30:54 <pksingh> hongbin: mkrai_, sorry to ask this, but what benefit will users get if they access k8s via zun?
03:31:03 <sudipto_> pksingh, stole my words
03:31:37 <hongbin> pksingh: i think they are interested in leveraging the advanced k8s features, e.g. the replication controller
03:31:47 <mkrai_> I don't have an answer for it now. But maybe in the future we will provide some openstack features like storage and networking to them
03:32:02 <hongbin> maybe in general, the self-healing feature
03:32:06 <mkrai_> Now we are just behaving as a proxy
03:32:24 <pksingh> hongbin: can't we implement it in zun instead of providing it through k8s?
03:32:28 <shubhams_> pksingh: sudipto_: I am not sure, but maybe users who want a container cluster but don't want to explore options outside openstack would use zun and its APIs
03:32:37 <hongbin> pksingh: yes, we could
03:33:07 <mkrai_> pksingh: We can and that is different from this k8s integration feature
03:33:27 <sudipto_> i think when we are talking about k8s being managed by zun - we are talking about 3 abstractions, one by docker, one by kubernetes and one by zun
03:33:37 <mkrai_> hongbin: I suggest to create a bp/etherpad for it and start discussing about core COE APIs in zun
03:33:50 <hongbin> mkrai_: ok, we could do that
03:34:06 <hongbin> #action hongbin create a etherpad to discuss the zun core api
03:34:14 <mkrai_> hongbin: thanks
03:34:28 <sudipto_> I am not sure if I will go to zun to get my kubernetes cluster managed; if i come to openstack, maybe i will just use magnum
03:34:29 <hongbin> pksingh: to answer your question about why users want to use k8s via zun
03:34:42 <pksingh> hongbin: i was wondering why a user would use the k8s api through zun, it will add extra overhead
03:34:46 <hongbin> pksingh: another important reason is multi-tenancy, neutron, cinder, etc.
03:35:18 <pksingh> hongbin: ok
03:35:36 <hongbin> the key point is that users are looking for openstack integration for container technology
03:35:56 <mkrai_> sudipto_: magnum doesn't have the k8s APIs.
03:36:08 <shubhams_> How about splitting this bp itself: 1. integrate k8s 2. core API implementation, and working in parallel in 2 teams?
03:36:11 <pksingh> sudipto_: i agree with u
03:36:26 <sudipto_> my point is - i wouldn't want a magnum or zun api for kubernetes, i will use the kubernetes api itself.
03:36:48 <pksingh> sudipto_: +1, same here
03:36:52 <sudipto_> I don't necessarily buy into cinder and neutron points either.
03:37:09 <mkrai_> sudipto_: Though they can use the native kubectl, openstack users might want the native openstack clis for the same
03:37:26 <hongbin> sudipto_: then, do you think if k8s integration is the right direction for zun?
03:37:43 <sudipto_> i am not sure at this point, maybe we need some operator opinion here.
03:38:05 <hongbin> sudipto_: i see
03:38:26 <shubhams_> sudipto_: Think about the case of the image driver here. We could have used docker pull directly, but adding the glance driver gave an option to store images inside openstack. Similarly, having k8s integrate with zun will give such benefits. Just a thought
03:39:14 <hongbin> sudipto_: actually, back to the beginning when the zun project was founded
03:39:23 <pksingh> hongbin: :)
03:39:37 <sudipto_> glance images are pretty inferior in terms of storing docker images if you just think about it.
03:39:42 <sudipto_> but we need it.
03:39:43 <hongbin> sudipto_: the zun project was founded because other openstack projects (e.g. trove) need an api to talk to different coes
03:40:16 <sudipto_> yeah, which means you may be talking about hiding from the user whether it's kubernetes or swarm or mesos underneath
03:40:21 <hongbin> sudipto_: from their point of view, it is hard for them to interface with different coes, so they want a single api
03:40:32 <sudipto_> but with the endpoint API, we are saying: start becoming aware of kubernetes commands - aren't we?
03:41:17 <hongbin> maybe no
03:41:51 <hongbin> users should not be aware of k8s under the hood
03:41:57 <hongbin> like nova
03:42:01 <sudipto_> hongbin, yup
03:42:06 <sudipto_> that's what i am saying too.
03:42:11 <hongbin> like nova's users are not aware of hypervisor
03:42:16 <pksingh> hongbin: i agree
03:42:19 <sudipto_> but a command like zun k8s list --> it becomes k8s aware.
03:42:50 <hongbin> sudipto_: is it a good or bad idea?
03:43:06 <sudipto_> hongbin, i mean, that is the opposite of our initial discussion.
03:43:24 <hongbin> ok
03:43:26 <sudipto_> however, during the initial discussion, we couldn't find a way to bring all the COEs under one API (at least back then)
03:43:43 <mkrai_> sudipto_: And that's the same today I guess
03:43:47 <sudipto_> To me an ML discussion sounds right?
03:43:55 <hongbin> ok
03:44:09 <mkrai_> Every COE has different implementation
03:44:11 <hongbin> #action hongbin start a ML to discuss the k8s integration bp
03:44:23 <hongbin> then, let's advance to the next topic
03:44:33 <hongbin> #topic Support interactive mode (kevinz)
03:44:38 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/support-interactive-mode The BP
03:44:42 <hongbin> #link https://review.openstack.org/#/c/396841/ The design spec
03:44:47 <hongbin> kevinz: ^^
03:44:51 <kevinz> Hi
03:45:31 <kevinz> This week I plan to finish the "attach" workflow and the zun-api <-> docker-daemon workflow
03:47:00 <kevinz> last week I tweaked the design spec and finished the pre-investigation of feasible methods
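For context, here is a bare-bones sketch of the attach step using docker-py's low-level API; the relay between zun-api and the user's terminal, resizing, and error handling are all omitted, and this is an assumption about the general mechanism rather than the spec's actual design:

```python
# Minimal attach sketch: create an interactive container, then grab
# the raw socket that carries its stdin/stdout.
import docker

# Low-level client; assumes the default local docker socket.
client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container('ubuntu', command='/bin/bash',
                                    stdin_open=True, tty=True)
client.start(container)

# attach_socket returns a bidirectional socket; zun-api would relay
# bytes between this socket and the user's terminal (e.g. over a
# websocket) to implement interactive mode.
sock = client.attach_socket(container,
                            params={'stdin': 1, 'stdout': 1,
                                    'stream': 1})
```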
03:47:13 <hongbin> yes, the spec looks good
03:47:24 <hongbin> (to me)
03:47:37 <kevinz> haha
03:48:19 <pksingh> kevinz: i will take a look today
03:48:23 <Wenzhi> LGTM as well
03:48:57 <kevinz> Wenzhi: THX
03:49:52 <hongbin> kevinz: i think you can start the prototype (maybe after the spec is merged) to see if the implementation is feasible
03:50:05 <kevinz> pksingh: OK thanks
03:50:14 <hongbin> kevinz: feel free to upload a wip patch to ping us for early reviews
03:50:40 <kevinz> hongbin: Yeah I will do this week for prototype
03:50:54 <hongbin> kevinz: thanks for your work on this, it would be a useful feature
03:51:01 <hongbin> kevinz: ack
03:51:20 <hongbin> ok, move on to next topic
03:51:22 <kevinz> hongbin: my pleasure, thanks for your help
03:51:25 <hongbin> #topic Moving inspect image from containers driver to image driver
03:51:30 <hongbin> #link https://review.openstack.org/#/c/404546/
03:52:11 <hongbin> for this review, it looks like we have different opinions about the location of the inspect_image method
03:52:35 <hongbin> Namrata: you want to drive this one?
03:52:44 <Namrata> yeah sure
03:53:15 <pksingh> in my opinion, if we can have a similar implementation for the glance driver, then it's fine
03:53:33 <Namrata> Imo inspect_image is specific to the image
03:53:42 <mkrai_> pksingh: +1
03:53:45 <Namrata> so it should be under image driver
03:54:09 <Wenzhi> sounds reasonable
03:54:33 <shubhams_> pksingh: mkrai_: yeah, currently even the glance driver calls inspect_image from the docker driver
03:55:40 <sudipto_> the present patchset looks good to me.
03:55:53 <mkrai_> me too
03:56:06 <shubhams_> plus in that case, if we have inspect_image for the glance driver, then we probably need not run image_load before running inspect_image, as it will be handled by the driver itself. But for docker only, the current patch sounds good
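A minimal sketch of the split being discussed, with inspect_image on the image driver and the glance variant loading the image into docker first so the same low-level inspect works for both; class and helper names are illustrative, not zun's actual code:

```python
# Hypothetical image-driver split: inspect_image lives on the image
# driver, and the glance driver handles image_load itself (the point
# raised above) before delegating to docker's inspect.
import docker


class DockerHubDriver(object):
    def __init__(self):
        self.docker = docker.APIClient()

    def inspect_image(self, image):
        # Image is pulled from docker hub, so the local docker
        # daemon can inspect it directly.
        return self.docker.inspect_image(image)


class GlanceDriver(DockerHubDriver):
    def inspect_image(self, image):
        # Load the glance-stored image into the local docker daemon
        # first, so callers need not run image_load themselves.
        data = self._download_from_glance(image)  # hypothetical helper
        self.docker.load_image(data)
        return self.docker.inspect_image(image)

    def _download_from_glance(self, image):
        raise NotImplementedError  # glance download elided
```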
03:56:19 <pksingh> i think it will fail if image driver is glance, am i right?
03:56:44 <Namrata> Yes
03:56:55 <Namrata> will implement for glance also
03:57:32 <pksingh> Namrata: can we add both implementations and then merge?
03:57:32 <hongbin> Namrata: maybe you could combine the glance and docker hub implementations into one patch
03:57:50 <hongbin> Namrata: then the reviewers can see the big picture clearly
03:58:04 <hongbin> pksingh: same advice :)
03:58:33 <pksingh> hongbin: :)
03:58:44 <Namrata> okay
03:58:48 <Namrata> will do that
03:58:53 <hongbin> Namrata: thx
03:58:55 <hongbin> #topic Open Discussion
03:59:03 <pksingh> Namrata: thnx
03:59:10 <Namrata> thanks
03:59:19 <hongbin> last minute
03:59:39 <hongbin> ok, all thanks for joining the meeting
03:59:42 <hongbin> #endmeeting