03:00:12 #startmeeting zun
03:00:14 Meeting started Tue Jun 21 03:00:12 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:17 The meeting name has been set to 'zun'
03:00:22 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-06-21_0300_UTC Today's agenda
03:00:27 #topic Roll Call
03:00:36 Madhuri Kumari
03:00:39 Wenzhi Yu
03:00:46 Namrata Sitlani
03:00:49 o/
03:00:54 hi
03:00:55 ho
03:00:58 hi
03:01:10 hi
03:01:13 I'm ChangBo Guo (aka gcb) from EasyStack; I have been working on oslo and Nova, and would like to contribute to project Zun
03:01:24 welcome, gcb :)
03:01:26 Hello, everyone
03:01:31 gcb: hey
03:01:45 Thanks for joining the meeting mkrai Wenzhi namrata eliqiao gcb haiwei_ yanyanhu feisky flwang
03:02:01 Welcome gcb
03:02:29 #topic Announcements
03:02:29 hongbin: gcb and I were colleagues when we worked for IBM
03:02:40 flwang: I see
03:02:43 o/
03:02:50 I have no announcements
03:02:58 Any announcements from our team members?
03:03:27 #topic Review Action Items
03:03:29 welcome gcb
03:03:34 hongbin create an etherpad as a draft of Zun design spec (DONE)
03:03:35 flwang: I am also an IBMer, and you introduced OpenStack to me in 2013, haha :)
03:03:48 #link https://etherpad.openstack.org/p/zun-containers-service-design-spec
03:04:07 I will continue working on the etherpad above
03:04:26 Feel free to write down your notes and comments there
03:04:43 #topic Architecture design
03:04:51 #link https://etherpad.openstack.org/p/zun-architecture-decisions
03:05:18 sudipto created this etherpad to collaborate on the architecture design
03:05:37 I plan to spend about 10-15 minutes for everyone to comment on it
03:05:50 agree?
03:06:03 or we can do it as homework
03:06:30 hongbin: do you need us to input it now?
03:06:39 eliqiao: yes please
03:06:41 hongbin, I guess offline is better, since this would need people to think it through?
03:06:55 sudipto: ack
03:07:01 offline everyone?
03:07:12 or online for a few minutes?
03:07:53 offline is better
03:07:55 (no response, I guess folks are reading the etherpad...)
03:08:00 +1 for offline
03:08:11 offline
03:08:14 OK. Then let's advance topic
03:08:22 #topic API design
03:08:28 #link https://blueprints.launchpad.net/zun/+spec/api-design The BP
03:08:35 #link https://etherpad.openstack.org/p/containers-service-api 2013 Etherpad
03:08:41 #link https://etherpad.openstack.org/p/openstack-containers-service-api 2014 Etherpad
03:08:56 mkrai: could you lead the discussion of the API design?
03:09:09 Yes sure hongbin
03:09:42 As you can see, the page explains the endpoint for each resource in Zun
03:09:53 And the resource representation in Zun
03:10:26 We have collected samples from K8s, docker-compose, and Rocket
03:10:39 And tried to create one for us.
03:10:51 #link https://etherpad.openstack.org/p/zun-containers-service-api The latest etherpad
03:11:04 If everyone is ok with the representation, we can start creating a spec for it.
03:11:51 There's one more thing, atomic container creation - I guess we need to think about a group of containers too?
03:12:19 sudipto: That would be some advanced operation?
03:12:55 sudipto: I think we can start with the basics first; we are free to add support for groups of containers later
03:13:02 hongbin, ok
03:13:11 agree with mkrai, that looks like an advanced operation
03:13:14 +1 hongbin
03:13:54 Everyone agrees to submit a spec now?
03:14:15 Everyone can always write their input on ps
03:14:25 +1 for spec reviewing
03:14:33 yes, there should be a spec I think
03:14:33 mkrai: Here is what I think
03:14:40 +1
03:14:53 mkrai: We need to identify the list of container runtimes we want to support first
03:15:08 hongbin: ack
03:15:22 mkrai: Then, evaluate whether the APIs listed there are supported by each runtime or not
03:15:29 hongbin, +1
03:15:57 If some APIs are not supported well by most of the runtimes/COEs, then we need to remove them
03:16:17 mkrai: please consider using async API calls from the beginning.
03:16:27 eliqiao, +1
03:16:39 hongbin: Why can't we have them?
03:16:54 Calls will be made based on the type of container runtime
03:17:25 mkrai: No, my point is to identify which technology we want to support first
03:17:35 basically, I think we need to define how the APIs broadly apply to various runtimes. Like, let's say - what happens when you want to use the same APIs for a pod creation? Or for that matter a VM creation for Hyper kind of scenarios?
03:17:37 Ok. I will collect information on this too.
03:18:11 Here is the list in the etherpad
03:18:16 1. Raw docker
03:18:24 2. Kubernetes
03:18:35 3. Rocket
03:18:44 4. Hyper
03:18:59 Is everyone happy with this list?
03:19:04 Did anyone from Hyper join?
03:19:24 hongbin: K8s is a container management tool
03:19:31 Y, I'm from hyper
03:19:42 feisky, hello! Glad to have you here!
03:19:43 Hyper supports all the APIs listed.
03:19:46 mkrai: Yes, then we can debate that
03:19:53 So why are we counting it as a runtime tool?
03:20:13 Yes please hongbin. I would like to hear
03:20:42 I don't have a position on supporting k8s or not
03:20:55 But I want to have a discussion within the team
03:21:12 maybe just start from docker/lxc at the beginning?
03:21:18 That is, should Zun support container runtimes only, or consider k8s as well?
03:21:23 the most commonly used ones
03:21:36 IMO integrating with COEs is an advanced feature
03:21:47 +1
03:21:52 yeah, if zun has to support container runtimes only, then aren't we creating a wrapper around docker and eventually competing with the COEs?
03:21:54 Not similar to docker, hyper or rocket
03:22:30 agree with mkrai
03:22:40 sudipto: agree
03:22:45 sudipto: I am afraid we will end up with this
03:22:52 we will support k8s in the future with a driver/plugin, but maybe not now I think
03:22:57 we still haven't reached a conclusion on how to position Zun
03:23:16 is it another COE, or do we want to place it on top of Magnum?
03:23:17 agree yanyanhu
03:23:33 From my point of view, it is hard to compete with k8s; if we can avoid that, I prefer to avoid it
03:23:50 flwang, that's exactly my confusion too. I think we have to get that sorted first.
03:24:03 or if it's clearer to other folks, maybe we should just talk about it.
03:24:07 sudipto: we're on the same page :D
03:24:15 why does Zun compete with k8s?
03:24:18 talking with magnum/backend COEs is an option, but not the only way I think
03:24:34 not really
03:24:34 hongbin: I completely agree that we shouldn't compete with k8s or swarm or any COEs.
03:24:37 we must be careful when designing Zun to avoid duplication/competition with COEs/Nova
03:24:53 That is not our motive anyhow.
03:25:10 if Zun can talk with container runtimes directly
03:25:15 just like other COEs
03:25:45 then that means Zun has to do all the things Nova has done, if you want a production service, not a toy
03:25:56 that is what Zun is for, I think :)
03:26:09 IMO Zun should support all basic operations, and for advanced features it can depend on other COEs
03:26:10 a container management service in openstack
03:26:12 like AZ, aggregate, cell, server group; in other words, you have to manage hosts
03:26:20 mkrai, +1
03:26:47 flwang, exactly so - there's a lot of work which has already been done and we would be duplicating stuff... eventually that would probably lead to us falling short of others...
03:26:52 if we put it on Magnum, then the hosts are not provided by the cloud provider, but are the VMs within the user's tenant
03:27:24 just like AWS ECS did
03:27:47 I think we can borrow/leverage some existing design of nova to support those functionalities, like host management and scheduling. But as an individual container management service, Zun needs them
03:27:51 Ok so hongbin everyone, I think first we should make it clear within the team whether we want to compete with other COEs (including all advanced features) or not (integrate with COEs)
03:28:01 Only then can we decide our design
03:28:05 mkrai, +1
03:28:43 +1
03:28:49 I think the question is not only whether we should compete with k8s or not
03:28:49 I know we're trying to touch some shadowy corners, but we have to face it :D
03:29:14 The critical part is, if we compete with k8s, what are the points that differentiate Zun from k8s
03:29:16 then we go back to zun's use case
03:29:25 I think there's a lack of consensus here. I think creating a wrapper does mean competing with other COEs. That's an architectural decision. Creating the API would be an engineering challenge that should follow after that.
03:30:19 I don't think this is just about competing with other COEs
03:30:40 we are building a container management service in openstack
03:31:10 yanyanhu, and what are the greater implications of it? Containers are not like VMs, so you would most likely build a PaaS on top.
03:31:14 It will be very premature to decide now. My opinion is to start with basic operations, which we are going to support in any case.
03:31:20 Or do you want zun to become something like solum?
03:31:46 sudipto, I think that depends on how users use containers
03:31:57 maybe they want to use it as a VM, e.g. an lxc container
03:32:13 maybe they want to build a PaaS solution based on docker
03:32:24 seems if Zun does not compete with COEs/Nova, then the scope of Zun would be quite small
03:32:43 but maybe we shouldn't strictly limit our work scope to the PaaS layer
03:32:44 Yes, there is a tradeoff
03:33:10 Wenzhi, +1
03:33:14 yanyanhu, if you want to build a PaaS solution using docker, the customer might want to choose a far more matured COE like Kubernetes... unless we spend day and night and re-invent the same code in Python in OpenStack.
03:33:19 Wenzhi: that's not correct
03:33:32 +1 sudipto
03:33:46 sudipto, you're right. That is because there has been no native container management service in openstack for the last two years
03:33:53 we can do similar stuff to ECS, but let magnum do the under-the-hood things
03:34:03 so Kubernetes became the best choice for users
03:34:27 flwang: seems reasonable
03:34:47 flwang, I think the magnum integration is key.
03:35:01 sudipto: we're on the same page again :D
03:35:41 Yes, I think Magnum integration is the long-term direction
03:35:43 does it make sense if we design Zun as an OpenStack-native container orchestration engine, competing with k8s/swarm...?
03:35:44 flwang, :) I guess most of such points are the ones to discuss in the architecture decision link that hongbin pasted above.
03:36:02 if we build Zun upon existing COEs as just a wrapper, IMHO the meaning of this project is really limited
03:36:05 #link https://etherpad.openstack.org/p/zun-architecture-decisions
03:36:24 yanyanhu: good point
03:36:39 It's a trade-off really, whether we want to create just a wrapper or list down some value-adds that we think we can give.
03:37:08 * sudipto calls for a SWOT analysis. :)
03:37:32 Here is what I think. It might be OK to have a small overlap with COEs and Nova
03:37:47 But we need to find the points that we are strong at
03:38:09 sudipto, agree that we need a tradeoff there. I just feel using COEs as a backend is an option for Zun and we can support it. But we'd better not limit ourselves to it
03:38:17 which could address use cases that other COEs cannot address
03:38:27 yeah, the COE competition is alright, as long as we: 1. don't end up re-inventing everything; 2. can bring in strong value-adds from the openstack ecosystem.
03:38:30 hongbin, +1
03:38:39 especially in an openstack environment
03:38:45 what is our differentiation from other COEs?
03:38:59 hongbin: then how about making Zun a COE?
03:39:07 hongbin: +1
03:39:22 e.g. better integration with openstack SDN (neutron/kuryr)
03:39:24 Wenzhi: It's too early to say that :D
03:39:35 or openstack SDS
03:39:50 yes, are network and storage the selling points?
03:40:18 keystone also
03:40:44 yes
03:40:55 if we are going in the direction of COEs - then I guess there are many more things to consider - right from whether mysql is the way to go or not, and that's a rabbit hole.
03:41:09 so I guess let's debate on the etherpad offline?
03:41:30 yes, offline debate is better
03:41:38 ok
03:41:43 only 20 minutes left for the meeting...
03:41:46 Etherpad link?
03:41:56 #link https://etherpad.openstack.org/p/zun-architecture-decisions
03:42:05 Thanks hongbin
03:42:36 mkrai: Do you have anything else for the API design, or shall we advance the topic?
03:42:50 No that's all.
03:42:58 Very good discussion
03:43:23 mkrai: Thanks Madhuri for leading the API design efforts
03:43:27 #topic Nova integration
03:43:39 #link https://blueprints.launchpad.net/zun/+spec/nova-integration The BP
03:43:46 This is the last item on the agenda
03:44:22 is anyone here working on the nova integration?
03:44:40 I guess Aditi is not here
03:44:46 like it or hate it, I guess it might be a good starting point.
03:45:06 I like it sudipto personally :)
03:45:10 sudipto: Yes, do you have an idea of how to implement it?
03:45:34 hongbin: I think we just need to follow how nova integrates with Ironic
03:46:04 flwang: OK, if we follow the nova-ironic integration, here is what it will look like
03:46:05 hongbin, again that boils down to what we want to achieve. I thought we wanted something like nova-docker here?
03:46:24 1. There is a Zun virt driver for nova
03:46:32 I want to work on it
03:46:35 2. The Zun virt driver will talk to the Zun REST API
03:47:07 3. The Zun service will do the real work of processing the request
03:47:30 namrata: sure, please feel free to ping the BP owner
03:47:44 hongbin: I just have one concern
03:47:58 hongbin, waiting for the 4th point....
03:47:59 :)
03:48:10 I am not sure whether my understanding is correct or not. Does Ironic use the Nova scheduler?
03:48:12 sudipto: I finished my points :)
03:48:22 mkrai: yes
03:48:32 mkrai: Here is how it works
03:48:32 So what about our scheduler?
03:48:35 mkrai, it does, but it wants to split out.
03:48:53 hongbin, so the virt_driver is very analogous to the nova-docker driver?
03:48:58 mkrai: then we don't need a scheduler ourselves.
03:48:59 I guess nova will poll Ironic to get a list of hosts
03:49:08 We will also have a scheduler, so the implementation will be different
03:49:33 eliqiao: Is it?
03:49:39 btw nova has some lxc support as well.
03:49:44 sudipto: Not really, the Zun virt driver might be just a proxy to the Zun REST API
03:50:10 hongbin, ok
03:50:24 eliqiao: No, we still need a scheduler
03:50:31 hongbin: for what?
03:50:38 eliqiao: IMO we need a scheduler
03:50:49 eliqiao: Here is my understanding
03:51:09 eliqiao: nova schedules the request to a nova-compute
03:51:26 eliqiao: the nova-compute forwards it to the virt driver
03:51:29 hongbin: oh, I get it
03:51:34 hongbin: I see.
03:51:55 #topic Open Discussion
03:52:13 hongbin: Sorry I didn't get it, where does our scheduler come in?
03:52:30 mkrai, hongbin I too am a bit confused now.
03:52:40 well I might be wrong
03:52:51 when you say zun virt_driver - I guess you mean a compute driver?
03:53:09 mkrai: nova-api -> nova-cond -> nova-scheduler -> nova-cpu -> zun-virt -> zun-api -> zun-cond -> zun-scheduler -> zun-agent
03:53:16 The Zun scheduler is for scheduling requests from the Zun API to a host
03:53:31 eliqiao: I guess you are correct
03:53:37 what is the use of the host selected by nova?
03:53:43 so long a call stack :(
03:53:47 I understand the flow eliqiao
03:54:05 that does not look right
03:54:17 Agree sudipto
03:54:34 once you have reached nova-cpu - why do you want to go back to the API layer?
03:54:54 as for the flow you mentioned above, we need to figure out which hosts are managed by nova and which by zun
03:55:14 sudipto: that's what ironic does
03:55:29 nova-cpu only does the request forwarding.
03:55:39 basically my understanding is there are two levels of scheduling
03:55:52 eliqiao, ah now I get the point.
03:56:05 actually, the 1st (nova-scheduler) doesn't help at all.
03:56:18 eliqiao, however, ironic does baremetal - which is OK to be slow - will it be alright to do this for a container?
03:56:33 sudipto: I don't think so
03:56:59 We should have a different implementation for containers
03:57:17 However, we cannot change nova code, that is the limitation
03:57:18 I guess this is also an etherpad discussion then?
03:57:22 * sudipto observes the clock.
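[Editor's note: the "proxy" integration pattern discussed above (a Zun virt driver for nova that only forwards requests to the Zun REST API, where Zun's own scheduler and agent do the real work) can be sketched in a few lines of Python. This is a minimal illustration only; every class and method name below is hypothetical, and it does not reproduce Nova's actual virt driver interface or Zun's real client library.]

```python
class FakeZunClient:
    """Stand-in for a real Zun REST client (a real one would do HTTP calls)."""

    def __init__(self):
        self.containers = {}

    def create_container(self, name, image):
        # In a real deployment this would POST to the Zun API; the Zun
        # scheduler would then pick a host and the Zun agent would start
        # the container. Here we just record it in memory.
        self.containers[name] = {"image": image, "status": "Running"}
        return self.containers[name]

    def delete_container(self, name):
        self.containers.pop(name, None)


class ZunProxyDriver:
    """Hypothetical Nova virt driver that is 'just a proxy' to Zun.

    Mirrors points 1-3 from the meeting: the driver itself does no
    container management; it forwards every call to the Zun service.
    """

    def __init__(self, zun_client):
        self.zun = zun_client

    def spawn(self, instance_name, image_ref):
        # Point 2: the Zun virt driver talks to the Zun REST API.
        return self.zun.create_container(instance_name, image_ref)

    def destroy(self, instance_name):
        self.zun.delete_container(instance_name)


driver = ZunProxyDriver(FakeZunClient())
container = driver.spawn("web-1", "cirros:latest")
print(container["status"])  # -> Running
```

[The double scheduling pass debated above shows up naturally in this shape: Nova's scheduler has already chosen a nova-compute host before `spawn` runs, yet the Zun service behind `create_container` would schedule again, which is why the flow was questioned for latency-sensitive containers.]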
03:57:38 And then it also depends on the nova community for acceptance of a different implementation
03:57:42 ok, I will create another etherpad later
03:57:44 time is running out...
03:57:46 +1 sudipto
03:58:20 ok I think we had a good open discussion today. I think defining the project scope - and the right scope - is tough, but at the same time important to have a strong base.
03:58:31 but I am sure we will get there!
03:58:40 +1 sudipto
03:58:59 Yes, the homework for everyone is to review the etherpads above
03:59:03 debating is not a bad thing :)
03:59:12 It is always helpful
03:59:13 yes
03:59:23 Thanks everyone for joining
03:59:25 All, thanks for joining the meeting
03:59:29 Thanks
03:59:33 thanks
03:59:34 #endmeeting