03:00:09 <hongbin> #startmeeting zun
03:00:10 <openstack> Meeting started Tue Jul  5 03:00:09 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:14 <openstack> The meeting name has been set to 'zun'
03:00:15 <hongbin> #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-07-05_0300_UTC Today's agenda
03:00:20 <hongbin> #topic Roll Call
03:00:25 <mkrai> Madhuri Kumari
03:00:26 <Wenzhi> Wenzhi Yu
03:00:30 <shubhams> Shubham sharma
03:00:46 <Vivek_> Vivek Jain
03:01:06 <sudipto> o/
03:01:33 <hongbin> Thanks for joining the meeting mkrai Wenzhi shubhams Vivek_ sudipto
03:02:07 <hongbin> #topic Announcements
03:02:13 <flwang1> o/
03:02:15 <hongbin> The python-zunclient repo was created!
03:02:20 <yanyanhu> hi
03:02:23 <hongbin> #link https://review.openstack.org/#/c/317699/
03:02:28 <hongbin> #link https://github.com/openstack/python-zunclient
03:02:51 <hongbin> Thanks mkrai for creating the repo
03:03:07 <mkrai> hongbin, :)
03:03:13 <hongbin> flwang1: yanyanhu hey
03:03:17 <yanyanhu> :)
03:03:34 <hongbin> Any question about the python-zunclient repo ?
03:03:40 <mkrai> service-list command is now supported
03:04:02 <mkrai> #link https://review.openstack.org/#/c/337360/
03:04:16 <hongbin> Great!
03:04:17 <Wenzhi> great! will try zunclient later
03:04:23 <shubhams> I will check and confirm
03:04:32 <mkrai> Thanks!
03:04:39 <hongbin> #topic Review Action Items
03:04:45 <hongbin> hongbin draft a spec for design option 1.1 (Work In Progress)
03:04:50 <hongbin> #link https://etherpad.openstack.org/p/zun-containers-service-design-spec
03:05:11 <hongbin> I will continue to work on the etherpad this week
03:05:21 <hongbin> You are welcome to collaborate :)
03:05:37 <yanyanhu> looks great
03:05:45 <hongbin> thanks
03:06:01 <hongbin> I will leave the etherpad offline
03:06:08 <hongbin> #topic Architecture design
03:06:15 <hongbin> #link https://etherpad.openstack.org/p/zun-architecture-decisions
03:06:53 <hongbin> At the last meeting, we decided to choose option #1.1
03:07:04 <hongbin> I want to confirm it again
03:07:14 <hongbin> Is everyone on the same page?
03:07:32 <yanyanhu> yes, I think so :)
03:07:41 <shubhams> Yes
03:07:49 <Wenzhi> +1
03:07:54 <mkrai> +1
03:07:57 <hongbin> Anyone want a clarification? Now is the time
03:08:25 <hongbin> OK. Looks like everyone agrees
03:08:36 <hongbin> Then, we will work according to the decision
03:08:41 <hongbin> #topic API design
03:08:46 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/api-design The BP
03:08:51 <hongbin> #link https://etherpad.openstack.org/p/zun-containers-service-api Etherpad
03:09:07 <hongbin> mkrai: I'll give the stage to you :)
03:09:14 <mkrai> hongbin, Thanks
03:09:33 <mkrai> I have started creating a spec for the zun API according to our design
03:09:43 <mkrai> #link https://etherpad.openstack.org/p/zun-containers-service-api-spec
03:09:56 <mkrai> I want everyone to please have a look
03:10:21 <mkrai> And feel free to add to it
03:10:48 <mkrai> According to our discussion, in our first implementation we will support docker and later COEs
03:10:49 <flwang1> mkrai: just had a quick glance
03:10:56 <flwang1> i think at the init stage
03:11:09 <flwang1> we don't have to care about the details of each api endpoint
03:11:21 <flwang1> we need to figure out the resources asap
03:11:28 <hongbin> Yes, agree
03:11:28 <flwang1> and define the details later
03:11:30 <mkrai> Yes, it will gradually evolve, flwang1
03:11:44 <mkrai> Ok that will be helpful for me too :)
03:12:01 <mkrai> I will add all the resources this week
03:12:38 <flwang1> since it's easy to define the actions if we can figure out the objects/resources
03:12:49 <mkrai> That's all from my side
03:12:55 <flwang1> and a map of the relationships between the resources would be lovely
03:13:04 <mkrai> Sure I will add that
03:13:04 <sudipto> yeah and given that we have two overlapping runtimes, i wanted to understand if everyone feels - we should look for an overlap - or should we be more like - building our stuff around one runtime - support it well - and then move to the other?
03:13:32 <hongbin> sudipto: good question
03:14:09 <mkrai> We should abstract container runtimes also
03:14:25 <flwang1> sudipto: define the two runtimes? you mean container and COE?
03:14:37 <mkrai> docker and rocket
03:14:42 <sudipto> flwang1, sorry should have been clearer. I meant docker and rkt
03:15:25 <sudipto> my point being, is there a need to build both of these API supports simultaneously? (to be clear, I have no bias towards either)
03:15:46 <mkrai> IMO it is not good to have different sets of APIs even for container runtimes.
03:15:59 <sudipto> it is not about two different set of APIs
03:16:18 <sudipto> think about libvirt being the driver nova was built on and later VMWare was added...if that serves as a good analogy
03:16:32 <Wenzhi> one set of API, two different drivers?
03:17:06 <sudipto> Wenzhi, yeah it will be one set of APIs only. However, you will probably have NotImplementedError() for the one that doesn't support it.
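A minimal sketch of the pattern sudipto describes here, assuming hypothetical class and method names rather than anything in Zun: one API surface shared by both runtimes, with operations outside the common subset raising NotImplementedError by default.

```python
# Sketch only: "one set of APIs, two drivers". All names are
# illustrative; this is not Zun's actual driver interface.
import abc


class ContainerDriver(abc.ABC):
    """The single API surface every runtime driver implements."""

    @abc.abstractmethod
    def start(self, container_id):
        """Start a container."""

    @abc.abstractmethod
    def stop(self, container_id):
        """Stop a container."""

    def attach(self, container_id):
        # Operations outside the common subset default to unsupported.
        raise NotImplementedError("attach is not supported by this driver")


class DockerDriver(ContainerDriver):
    def start(self, container_id):
        pass  # would call the Docker remote API

    def stop(self, container_id):
        pass  # would call the Docker remote API

    def attach(self, container_id):
        pass  # docker supports attach


class RktDriver(ContainerDriver):
    def start(self, container_id):
        pass  # would drive rkt

    def stop(self, container_id):
        pass  # would drive rkt

    # attach is inherited and raises NotImplementedError
```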
03:17:09 <hongbin> sudipto: you mean two sets of APIs for docker and rkt? or two sets of APIs for runtimes and COEs?
03:17:21 <sudipto> hongbin, two sets of APIs for runtimes and COEs...
03:17:28 <hongbin> sudipto: I see
03:17:47 <hongbin> sudipto: It looks like you are challenging design option 1.1 :)
03:17:58 <hongbin> sudipto: but let's discuss it
03:18:07 <sudipto> hongbin, no, i don't think i am challenging that :)
03:18:18 <sudipto> hongbin, i am just sticking to the API design discussion...
03:18:39 <sudipto> anyway, i think we can take it offline...
03:18:43 <hongbin> sudipto: ok. could you clarify ?
03:18:44 <sudipto> on the etherpad that is...
03:19:05 <sudipto> basically - should be building around OCI/CNCF?
03:19:10 <sudipto> *we
03:19:51 <hongbin> If collaboration between OCI/CNCF is possible, I am open to that
03:20:33 <sudipto> and my point in a more crude way was - if docker supports x y z APIs - should we be building the same x y z for rocket and throw a NotImplementedError()
03:20:52 <hongbin> That means we need to figure out something and propose it to OCI/CNCF? Or just implement whatever OCI/CNCF defines?
03:21:06 <sudipto> maybe mkrai is right - we should probably abstract it at a level where there is an overlap
03:21:22 <sudipto> my only concern is - we shouldn't end up doing things not so well for either runtime...that's all
03:21:33 <hongbin> Yes, that is one option
03:21:45 * eliqiao joins
03:21:50 <hongbin> eliqiao: hey
03:22:06 <sudipto> hongbin, i will take this on the etherpad and with you and mkrai later, don't want to derail the meeting over it :)
03:22:20 * eliqiao just arrived at hangzhou(for bug smash day)
03:22:24 <flwang1> focus on docker, my 0.02
03:22:28 <hongbin> sudipto: well, I don't have many items to discuss later :)
03:22:55 <sudipto> flwang1, yeah i think i meant something like that :) as in the focus bit :)
03:23:14 <sudipto> don't end up being a place where nothing works well...
03:23:32 <sudipto> Also, it's important to state which version of the remote APIs we will support.
03:23:46 <mkrai> Do we agree to concentrate on docker now?
03:24:01 <mkrai> sudipto, the latest one?
03:24:19 <sudipto> mkrai, yeah mostly, but in the document it would be good to cite the version in numbers rather than just "latest"
03:24:29 <sudipto> since the API updates are pretty frequent
03:24:52 <mkrai> Ok
03:25:08 <hongbin> well, I compared the APIs of docker and rkt
03:25:23 <hongbin> obviously, docker has double the API operations of rkt
03:25:35 <hongbin> However, the basics look the same
03:25:40 <mkrai> Agree hongbin
03:25:49 <hongbin> e.g. start, stop, logs, ....
03:25:54 <sudipto> hongbin, mkrai so we would look at mostly the basic operations?
03:26:03 <sudipto> i was more looking at a thing like docker attach for instance...
03:26:12 <sudipto> do you consider that basic or necessary?
03:26:56 <hongbin> This is a challenging question
03:26:57 <mkrai> The basic ones
03:27:07 <shubhams> I think that we should focus on the necessary ones
03:27:16 <flwang1> docker is the target, but we can implement it in several stages
03:27:41 <flwang1> that said
03:27:57 <mkrai> Yes the basic ones first and later the necessary ones
03:28:03 <flwang1> at the 1st stage, implement the basic (common) api between docker and rocket
03:28:05 <eliqiao> agreed.
03:28:12 <flwang1> and then adding more
03:28:20 <shubhams> agree
03:28:23 <flwang1> that's not a problem
03:28:39 <sudipto> sounds good!
03:28:45 <hongbin> wfm
03:28:48 <mkrai> +1
03:29:03 <mkrai> hongbin, please mark it as agreed
03:29:17 <hongbin> Everyone agree?
03:29:36 <namrata> agreed
03:29:50 * sudipto is worried for the future, but agrees in the present
03:29:51 <hongbin> #agreed Implement the basic (common) API between docker and rkt at the beginning, then add more later
03:29:57 <Wenzhi> so we agreed on one set of container runtime APIs and two different drivers (for docker and rkt)?
03:30:07 <hongbin> sudipto: We can discuss it further later
03:30:15 <sudipto> hongbin, yeah sounds good.
03:30:55 <hongbin> Wenzhi: One set of APIs for both docker and rkt
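One common way OpenStack projects wire up such a single-API/multi-driver split is through stevedore entry points; a sketch, where the namespace and driver names are assumptions, not Zun's actual configuration:

```python
# Sketch only: loading the configured runtime driver via stevedore,
# the plugin-loading library used across OpenStack. The entry-point
# namespace 'zun.container.driver' is hypothetical.
from stevedore import driver


def load_container_driver(name='docker'):
    # 'name' would come from configuration, e.g. 'docker' or 'rkt'.
    mgr = driver.DriverManager(
        namespace='zun.container.driver',
        name=name,
        invoke_on_load=True,
    )
    return mgr.driver


# The API layer then talks to whichever driver is configured:
#   runtime = load_container_driver('docker')
#   runtime.start(container_id)
```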
03:31:22 <hongbin> For the runtimes and COEs, we can discuss whether it is one set of APIs or two
03:31:44 <hongbin> sudipto: We have plenty of time, we can touch on this topic a bit here?
03:32:09 <mkrai> I think it will be very difficult to abstract COE and container runtime
03:32:10 <sudipto> hongbin, you mean the COE API design?
03:32:25 <hongbin> yes
03:32:52 <hongbin> sudipto: you seem to propose one set of APIs for COEs and runtimes?
03:33:06 <sudipto> hongbin, no no, i don't think the idea is to overlap them
03:34:04 <sudipto> flwang1, the other day there was a mention of the vmware apis in nova - i thought you meant something like managing a cluster with the same set of apis that a compute driver like libvirt would support?
03:34:31 <flwang1> maybe i missed something, but i think we're not going to create one set of APIs to fit both container and COE
03:35:06 <flwang1> IMHO, we do need a group/set of APIs for containers and one for COEs, and the hard part is not the container part
03:35:12 <mkrai> Yes flwang1
03:35:12 <flwang1> the hard part is the COE part
03:35:37 <hongbin> Yes, possibly need another etherpad for COE
03:35:38 <flwang1> because we're going to have a native COE, and the other COE support may come from magnum
03:36:08 <flwang1> so how to work out a group/set of APIs to make everybody happy is the hard part, does that make sense?
03:36:23 <flwang1> or am i making a mess of it ;)
03:36:26 <sudipto> yeah it is not just hard, it's like probably impossible.
03:36:27 <mkrai> Each COE has different resources, so a different set of APIs for each COE
03:36:46 <mkrai> sudipto, I agree with you
03:37:44 <mkrai> hongbin, I think it's better to create an etherpad for all COE APIs
03:37:54 <mkrai> And then try to extract what's common, if possible
03:37:54 <flwang1> hah, i think that's the interesting part of software design :)
03:38:00 <flwang1> since we have to compromise
03:38:06 <hongbin> #action hongbin create an etherpad for the COE API design
03:38:09 <Wenzhi> a different set of APIs for each COE is not beautiful, I think
03:38:10 <flwang1> or i would say balance
03:38:15 <sudipto> you will be biased in such a case, i think that's fine though ;)
03:38:34 <flwang1> figure out a balance point between the reality and the design
03:39:20 <hongbin> I think the hardest part is that almost everyone is using the native COE APIs, and how to convince them to use our API is the question
03:39:28 <sudipto> hongbin, ++
03:39:33 <sudipto> that's my primary concern
03:39:36 <hongbin> Frankly, I don't have an answer for that
03:40:03 <hongbin> Unless we make it much simpler, more high-level, or something
03:40:29 <shubhams> hongbin : it's only possible if we provide a wrapper over the existing COEs, and that wrapper should look the same for every COE to the end user/customer
03:40:43 <Wenzhi> unless we can unify them, I don't think people will use a cmd like zun k8s XXX
03:41:01 <shubhams> Wenzhi : yeah right
03:41:09 <flwang1> hongbin: i don't think that's really a problem IMHO
03:41:24 <hongbin> flwang1: why?
03:41:30 <flwang1> the main target user of Zun is the application developer, not the operator
03:41:48 <flwang1> do you remember our discussion about the use case of Zun?
03:41:57 <hongbin> yes
03:42:12 <flwang1> IMHO, it's like: As a user, i want to create a container
03:42:25 <flwang1> then i can manage its lifecycle with zun's api/client
03:42:43 <flwang1> if
03:42:59 <hongbin> As a user, I want to replicate my container to N
03:42:59 <flwang1> the user also wants to create containers based on a COE cluster
03:43:06 <hongbin> Another use case?
03:43:11 <flwang1> and the cloud provider didn't provide it
03:43:23 <flwang1> then he may have to play with the COE API
03:43:40 <flwang1> then that's the user's choice
03:43:50 <flwang1> if we're doing a bad job
03:44:06 <flwang1> then don't expect users to use zun's api to talk with the COE
03:44:24 <flwang1> i think that's fair enough
03:44:39 <hongbin> Yes, this area needs more thinking
03:44:51 <hongbin> I am still struggling to figure it out
03:44:59 <flwang1> imagine supermarket A has good prices and service, why would you choose B?
03:45:09 <mkrai> I think user would want to use Zun if they want to run COEs on container infrastructure
03:45:10 <flwang1> unless B has better price and service
03:45:52 <mkrai> Sorry *openstack infrastructure
03:46:46 <flwang1> mkrai: yep, the good gene of zun is that it's the native container API of openstack
03:47:02 <flwang1> it should know more about the infra than the COE itself
03:47:07 <hongbin> Yes, that is a point
03:47:08 <sudipto> mkrai, that's a weak case at the moment... since the direction is more towards making openstack an app on kubernetes :) If i read the SIG discussions right.
03:47:25 <flwang1> in other words, we can provide better integration with other services
03:47:28 <sudipto> however, that doesn't mean the other way around is not possible,
03:47:48 <sudipto> and it's probably good for people who have already invested heavily in openstack to try out containers as well.
03:48:42 <hongbin> How to make COE integrate well with OpenStack?
03:49:10 <hongbin> I think it is not our job to teach operators how to enable Kuryr in a COE, right?
03:49:40 <hongbin> If Kuryr/neutron is not configured in COE, Zun cannot use it
03:50:07 <hongbin> Same for Cinder, Keystone, ...
03:51:02 <hongbin> OK. Looks like we need to take the discussion to an etherpad
03:51:16 <hongbin> #topic Open Discussion
03:51:21 <mkrai> I have a concern on managing state of containers
03:51:47 <mkrai> Are we planning to manage it in Zun db?
03:52:09 <hongbin> mkrai: I think yes
03:52:22 <hongbin> mkrai: Although we need to reconsider the "db"
03:52:27 <mkrai> How do we sync it with the container runtime?
03:52:34 <hongbin> mkrai: I think etcd will be better
03:52:47 <yuanying> etcd +1
03:53:08 <hongbin> mkrai: I think it is similar to how Nova syncs with VMs
03:53:32 <sudipto> I certainly have a strong feeling that we shouldn't treat containers and VMs similarly.
03:53:33 <mkrai> Ok sounds good
03:53:42 <Wenzhi> we can sync them with a periodic task maybe
03:53:54 <sudipto> the state sync in VMs can be asynchronous - you might end up killing a container before that happens :)
03:54:05 <hongbin> sudipto: OK. I think it is like how kubelet syncs with containers :)
03:54:09 <sudipto> as in containers could be very short lived as well.
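A rough sketch of the periodic sync Wenzhi suggests, with sudipto's short-lived-container caveat in mind; it assumes the Docker SDK for Python, and state_store is a hypothetical object with put()/delete()/keys():

```python
# Rough sketch of a periodic state-sync task. Assumes the Docker SDK
# for Python; state_store is a hypothetical object with put(),
# delete(), and keys(). Not Zun's actual implementation.
import time

import docker


def sync_container_states(state_store, interval=10):
    client = docker.from_env()
    while True:
        seen = set()
        for container in client.containers.list(all=True):
            seen.add(container.id)
            # Treat what the runtime reports as the source of truth.
            state_store.put(container.id, container.status)
        # Containers that vanished between iterations (the short-lived
        # case raised above) are pruned from the store.
        for stale_id in state_store.keys() - seen:
            state_store.delete(stale_id)
        time.sleep(interval)
```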
03:54:22 <sudipto> hongbin, hah ok.
03:54:53 <sudipto> mkrai, were you inclined towards something like a distributed db? like yuanying pointed out?
03:55:04 <hongbin> but agreed, containers are started and stopped very frequently, so a db is not the right choice
03:55:42 <Wenzhi> sounds reasonable ^^
03:56:08 <mkrai> Yes
03:56:52 <hongbin> OK. Sounds like we agree?
03:57:25 <hongbin> silent....
03:57:46 <sudipto> agreed.
03:57:47 <yuanying> agree
03:57:53 <hongbin> :)
03:58:03 <mkrai> +1
03:58:09 <Wenzhi> +1 we need an implementation of the db operations for the etcd backend
03:58:13 <yuanying> And also, what about re-considering "rabbitMQ"?
03:58:24 <shubhams> +1
03:58:42 <hongbin> #agreed store container state in etcd instead of the db
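A minimal sketch of what the agreed etcd-backed state store could look like, assuming the etcd3 Python client and a hypothetical /containers/<id> key layout:

```python
# Minimal sketch of an etcd-backed container state store. Assumes the
# etcd3 Python client; the key layout is hypothetical, not Zun's schema.
import etcd3


class ContainerStateStore(object):
    PREFIX = '/containers/'  # hypothetical key namespace

    def __init__(self, host='127.0.0.1', port=2379):
        self.client = etcd3.client(host=host, port=port)

    def put(self, container_id, state):
        self.client.put(self.PREFIX + container_id, state)

    def get(self, container_id):
        value, _meta = self.client.get(self.PREFIX + container_id)
        return value.decode() if value else None

    def delete(self, container_id):
        self.client.delete(self.PREFIX + container_id)

    def keys(self):
        # get_prefix yields (value, metadata) pairs; the key is carried
        # on the metadata object.
        return {meta.key.decode()[len(self.PREFIX):]
                for _value, meta in self.client.get_prefix(self.PREFIX)}
```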
03:58:59 <hongbin> yuanying: Yes, I am thinking about the message queue as well
03:59:03 <hongbin> yuanying: what do you think?
03:59:05 <yuanying> taskflow also uses a KVS for message passing
03:59:37 <yuanying> https://wiki.openstack.org/wiki/TaskFlow
03:59:46 <yuanying> We can implement the conductor on a KVS
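For reference, a toy illustration of the TaskFlow model yuanying points to; the jobboard/conductor machinery that would actually sit on a KVS is omitted, and the tasks are purely illustrative:

```python
# Toy TaskFlow example: declare a flow and run it in-process. The
# conductor/jobboard layer (the part that would run on a KVS) is not
# shown; task names and the 'image' input are illustrative only.
from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class PullImage(task.Task):
    def execute(self, image):
        print('pulling %s' % image)


class StartContainer(task.Task):
    def execute(self, image):
        print('starting a container from %s' % image)


flow = linear_flow.Flow('create-container').add(
    PullImage(),
    StartContainer(),
)

# Inputs are injected by name from the store.
engines.run(flow, store={'image': 'cirros'})
```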
04:00:18 <hongbin> OK. Time is up. We can re-discuss the rabbitmq question on the ML or at the next meeting
04:00:27 <hongbin> All, thanks for joining the meeting
04:00:30 <mkrai> Thanks everyone
04:00:32 <hongbin> #endmeeting