03:00:04 #startmeeting zun
03:00:04 Meeting started Tue Jun 14 03:00:04 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:07 The meeting name has been set to 'zun'
03:00:11 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-06-14_0300_UTC Today's agenda
03:00:16 o/
03:00:17 #topic Roll Call
03:00:22 Madhuri Kumari
03:00:23 o/
03:00:24 o/
03:00:29 Wenzhi Yu
03:00:38 o/
03:00:46 hi
03:01:02 Thanks for joining the meeting eliqiao mkrai sudipto Wenzhi Namrata haiwei_
03:01:17 Pause a few seconds for future participants
03:02:01 hi
03:02:28 Qiming: hey
03:02:29 hi, sorry I'm late
03:02:33 NP
03:02:35 Hi
03:02:39 #topic Announcements
03:02:39 hi..
03:02:44 Eli Qiao is now a Zun core!
03:02:51 :)
03:02:53 thanks hongbin and team.
03:03:05 Thanks Eli for your contribution and commitment
03:03:08 welcome Eli Qiao
03:03:09 Great addition
03:03:17 Welcome eliqiao
03:03:29 con :)
03:03:29 For others, if you want to join the core team, please feel free to ping me
03:03:34 thx all.
03:03:43 The standard will be similar to the recently added core
03:03:56 #topic Review Action Items
03:04:03 1. hongbin submit a request to rename the project (Done by Eli Qiao)
03:04:09 #link https://review.openstack.org/#/c/326306/
03:04:14 #link https://review.openstack.org/#/c/329247/
03:04:29 Eli proposed two patches to rename the project
03:04:44 The first patch renames the IRC channel and Launchpad
03:04:57 The second patch renames the project in Gerrit and git
03:05:07 It looks like the second patch will take a while to land
03:05:15 we need to wait for the infra team's schedule to change Gerrit and git
03:05:17 Hopefully, the first patch will land soon
03:05:23 yes
03:05:44 I was told that we should expect months for the next rename
03:05:51 hi, xiucai is just acting as an auditor now :), maybe a core someday.
03:06:19 We will talk about the transition period during these few months
03:06:36 xiucai: Hey, welcome to the team meeting
03:06:41 2. hongbin create a bp for glance integration (DONE)
03:06:47 #link https://blueprints.launchpad.net/zun/+spec/glance-integration
03:07:01 Any comment for the review action item?
03:07:23 #topic Project rename procedure
03:07:31 #link https://review.openstack.org/#/c/326306/ Rename request for infra team
03:07:48 Let's discuss what we should do for the transition period
03:07:54 hi, hongbin, so the job on the gate side may not work during this migration?
03:08:00 which is from right now to the next rename
03:08:14 yanyanhu: It looks like the gate is working fine
03:08:22 hongbin, nice
03:08:28 I saw the gate job passed
03:08:49 so we just need to revise the gate job template after the Gerrit name is changed
03:09:05 o/
03:09:10 sorry, in another meeting
03:09:22 hongbin: one more thing, the dsvm job only sets up services; there are no test cases yet.
03:09:42 hongbin: so we'd better manually check the devstack logs/screen logs of the services
03:09:44 yanyanhu: maybe. Yes
03:09:51 flwang: hey. NP
03:09:55 hongbin: I just enabled the zun-compute service
03:10:06 #link https://review.openstack.org/328854
03:10:10 hi, eliqiao, have the post/pre_test hooks been set up?
03:10:11 eliqiao: ack
03:10:22 yanyanhu: not yet right now.
03:10:26 if so, it's easy to add real test cases
03:10:29 I see
03:11:05 Then, I guess here is the plan
03:11:12 yanyanhu: I will try to see if I can enable the post hook, but if someone is interested in it, I am glad to help
03:11:45 eliqiao, thanks :)
03:12:16 Plan: 1. start using [zun] for email, wiki, launchpad
03:12:23 what is zun-compute?
03:12:28 what is zun-conductor?
03:12:45 Yes, let's discuss the architecture
03:12:49 it's easy, we just need to add them to the gate template. And the first version of the test_hooks can be empty scripts
03:13:16 yes, I have a few doubts on the architecture..
03:13:39 OK. Before we talk about the architecture, anything else for the renaming period?
03:13:44 the design looks like Nova's design?
03:13:48 https://review.openstack.org/#/c/317699/
03:14:00 hongbin: We should add the zunclient also
03:14:10 #link https://review.openstack.org/#/c/317699/
03:14:20 mkrai: I think we should land that patch right now (not wait for the rename)
03:14:31 Ok I will do that
03:14:33 Then we have a client to work with
03:14:41 mkrai: thx
03:14:57 Any other comments for the renaming?
03:15:30 OK. Let's discuss the architecture
03:15:46 #topic Zun architecture
03:16:09 From my understanding, the architecture is a copy of Nova's. Correct?
03:16:32 Yes it seems so
03:16:50 I could see some patches that copied the exact code from nova, including the objects
03:16:52 But we need to look at our requirements also before copying
03:16:59 yes..
03:17:06 agree
03:17:17 Nova's architecture is not necessarily a fit for us
03:17:36 Because, as said many times, the container lifecycle is different from VMs'
03:17:50 yeah - was the same thought in my head too.
03:18:06 So do we need a compute service?
03:18:12 versioned objects are very helpful I think :) we should support them if possible, although it takes some effort to set up
03:18:29 yanyanhu: Sure, we need objects
03:18:45 mkrai: yeah, we need to discuss the compute service
03:19:02 yanyanhu, agreed, before we write the objects though, we need to discuss the data model and use cases in detail IMO
03:19:24 sudipto, yes, agree
03:19:43 what about writing some specs and then starting to code
03:19:45 It seems that the API layer in Nova is kinda similar to what we would want to do...
03:19:45 The thing is, how do we connect to the HOST from zun-conductor?
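The versioned-object pattern yanyanhu raises above is what Nova and Magnum get from oslo.versionedobjects: each data object carries a version tag so services at different versions can still exchange it over RPC. As a dependency-free illustration of that idea only (the field names and `zun_object.*` keys below are hypothetical, not taken from any Zun patch):

```python
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class Container:
    """Hypothetical minimal container data model, for illustration."""

    # Class attribute, not a field: bumped whenever the schema changes,
    # which is what lets mixed-version services negotiate over RPC.
    VERSION = '1.0'

    uuid: str = ''
    name: Optional[str] = None
    image: str = ''
    status: str = 'Creating'

    def obj_to_primitive(self):
        """Serialize together with the version tag, mimicking (very
        loosely) what oslo.versionedobjects provides for real."""
        return {'zun_object.version': self.VERSION,
                'zun_object.data': asdict(self)}
```

A real implementation would subclass `oslo_versionedobjects.base.VersionedObject` instead, which also brings change tracking and backport support.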
03:19:48 speaking of objects and the data model, I have a patch intended to add a container object https://review.openstack.org/#/c/328726/
03:19:51 about whether we need zun-compute, I think that depends on what responsibility we want the conductor to take
03:20:08 eliqiao, bingo, I have the same question
03:20:45 I think we need a sort-of local agent
03:21:03 In nova, it is nova-compute; in AWS, it is called ecs-agent or something
03:21:04 in nova, the conductor is a bridge between compute and the API?
03:21:19 should we go through REST (for example, we can use docker-py to connect to a remote docker daemon)? but I think we may need to control NIC/storage later ..
03:21:20 yanyanhu, it's between the compute and the DB.
03:21:25 ah, right
03:21:37 Basically, nova-conductor is a proxy to the DB
03:21:42 the conductor seems to be a bridge between compute and the DB, yanyanhu?
03:21:50 the conductor is for upgrades and DB access, and it's also the task flow manager.
03:22:09 eliqiao: We would need docker on the host to connect
03:22:09 yes, basically a bridge between the compute agent and the core service
03:22:26 which talks with the DB directly
03:22:26 Wenzhi, thanks for the patch (https://review.openstack.org/#/c/328726/) but it is really too big to review; it is combining a lot of things into a single patch
03:22:42 actually nova-conductor also handles some build/rebuild tasks
03:23:04 and also calls nova-scheduler to filter out the dest compute host
03:23:04 mkrai: but if we have zun-compute, the connection between the conductor and the host is: the conductor talks to compute, and compute talks to the docker daemon locally.
03:23:24 I do believe there's a need for an agent, that is probably going to have stevedore plugins loaded - based on the driver (backend) used.
03:23:35 Qiming: sorry, I will split it into several small patches
03:23:49 It seems we are coding first, and then discussing the design
03:23:57 haiwei_, +1
03:24:01 haiwei_, :)
03:24:02 eliqiao: Ack.
Later we might need to talk to openstack services also to manage things. So we would need a compute service
03:24:05 why don't we have the conductor talk to the local daemon directly?
03:24:06 sudipto: agree, I think zun-compute (the agent) could be an optional component (driver specific)
03:24:09 is it right?
03:24:13 mkrai, agree
03:24:27 Qiming: good question. What are the pros and cons?
03:24:52 Qiming: Later we might need to talk to openstack services also to manage things. So we would need a compute service
03:25:00 Qiming: do you mean we run a conductor on each host?
03:25:38 we can have a single conductor (conceptually single ...) to talk to the container daemons on all nodes
03:26:10 Qiming: what is the advantage?
03:26:17 have that single conductor speak different dialects
03:26:48 Qiming, then you would complicate the conductor IMO
03:27:00 per my understanding, if we want to use kuryr, we need to install kuryr and a neutron agent on each host, right?
03:27:01 one of the key values of zun, as I see it, is that it can provide an LCD among all container backends
03:27:42 Qiming: could you tell us what LCD stands for?
03:27:47 Qiming, what would be the LCD in this case?
03:27:53 Lowest common denominator
03:28:01 yes
03:28:41 Thanks hongbin, good to know that.
03:28:58 Qiming: but both Nova and AWS have local agents. If Zun doesn't have one, I guess we will face limitations performing local operations
03:29:18 from a deployer/user's perspective, each deployed component needs to be managed
03:29:21 Qiming: e.g. setting up the network, storage, image, file system etc.
03:29:38 ok
03:29:40 hongbin: Agree
03:29:53 hongbin: agree
03:30:06 hongbin: yes, I have the same question on network, storage etc. @Qiming how to manage them locally?
03:30:34 I'm leaning more towards remote management
03:30:54 the ansible way of managing things, instead of the chef/puppet way
03:31:06 Like a lean agent, and a heavy conductor?
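The "agentless" option Qiming describes above - one conductor talking to the container daemons on all nodes - can be sketched with the docker-py connection eliqiao mentions. This is an illustrative sketch only: the `AgentlessConductor` name is made up, and it assumes each node's Docker daemon listens on plain TCP port 2375 (a production setup would use TLS on 2376). The actual docker-py call is kept in a comment so the sketch has no external dependency.

```python
DOCKER_TCP_PORT = 2375  # conventional plain-HTTP Docker daemon port


def daemon_url(host, port=DOCKER_TCP_PORT):
    """Build the base_url docker-py expects for a remote daemon."""
    return 'tcp://{}:{}'.format(host, port)


class AgentlessConductor:
    """Hypothetical single conductor that drives remote daemons
    directly, instead of relying on a per-host zun-compute agent."""

    def __init__(self, hosts):
        self.hosts = list(hosts)

    def endpoints(self):
        # With docker-py this would become, per host:
        #   docker.APIClient(base_url=daemon_url(host))
        # and the conductor would speak a different "dialect" per
        # container backend, as suggested in the discussion.
        return [daemon_url(h) for h in self.hosts]
```

The trade-off raised in the meeting applies directly: this keeps the nodes agent-free, but host-local work (NIC, storage, filesystem setup) has no obvious place to run.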
03:31:33 in an ideal setup, we don't need agents at all, if possible
03:32:19 Yes, maybe
03:32:25 just some ideas for the team to consider
03:32:52 maybe eventually we will need agents installed, configured and maintained on each underlying node
03:33:14 when we are there, we will know it
03:33:14 hmm... I've got a question: if we leverage kuryr, we will install kuryr and a neutron agent on the host... in this case, it's not good.
03:33:58 Qiming, it's a good thought but I guess there are some unavoidable cases. For example - how do you configure the bridges?
03:34:03 eliqiao, installation is easy, but configuration and setup...
03:34:03 eliqiao, you mean having zun install kuryr?
03:34:53 If we follow Nova's pattern, operators install Kuryr and the Kuryr agents
03:35:14 my knowledge of kuryr is very very limited, but can we offload that setup to kuryr?
03:35:18 And the Kuryr agents need to be installed on each host
03:35:27 sure
03:35:29 then we talk to Kuryr?
03:35:46 For the networking part, I guess that will work
03:36:08 Qiming: I am not sure, just thinking about your view of "agent" - too many agents being installed on the host.
03:36:29 yes, that is what I am really worried about
03:36:57 hongbin: Kuryr is only a connecting bridge between neutron and docker; it seems zun shouldn't talk with it at all.
03:37:01 From the API per se - the request sent is more like "I want the container to be deployed" - that's one REST call... and then the local agent is responsible for orchestrating that whole request by making several local calls to the local daemon/ovs/xyz and fulfilling that request. So I see a benefit there.
03:37:45 * sudipto states the benefits of a local agent
03:38:37 em, the benefit sounds real
03:39:04 * eliqiao has the same concerns as sudipto, but yes, an agent brings maintenance effort.
03:40:31 Qiming: have you changed your point of view about the local agent, or does this need to be discussed further?
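The local-agent flow sudipto describes - one "deploy the container" REST call to the API, then the agent on the target host fanning it out into several local calls - can be sketched as below. The step names and the driver interface are hypothetical; as noted earlier in the meeting, a real agent would load the backend driver as a stevedore plugin.

```python
class LocalAgent:
    """Hypothetical per-host agent: receives one deploy request and
    orchestrates the local operations needed to fulfil it."""

    # Illustrative steps; the real set depends on the backend.
    DEPLOY_STEPS = ('pull_image', 'setup_network', 'create', 'start')

    def __init__(self, driver):
        self.driver = driver  # backend plugin, e.g. a docker wrapper

    def deploy(self, spec):
        # Each step is a call the agent makes locally on the host
        # (daemon, ovs, storage, ...), hidden behind one API request.
        for step in self.DEPLOY_STEPS:
            getattr(self.driver, step)(spec)
        return 'Running'


class RecordingDriver:
    """Dummy backend that just records the calls, for illustration."""

    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        # Any step name becomes a recorder; only used for this sketch.
        return lambda spec: self.calls.append(name)
```

This is the benefit sudipto points at: the API stays a thin REST surface while host-local complexity lives in the agent - at the cost of one more component to install and maintain per node, which is exactly Qiming's concern.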
03:40:49 I'll keep the viewpoint to myself
03:41:10 ok
03:41:11 do not want to block the team from progressing
03:41:30 Qiming, it's a very good thought IMHO, and we could think about this as we go maybe.
03:41:43 Qiming: If the team wants to implement a local agent, what would you suggest about the implementation?
03:41:53 my suggestion is that we focus on the things we MUST do in this cycle
03:42:29 agree
03:42:37 agreed.
03:42:40 +1
03:42:56 some basic data models, a generic, minimal API design/implementation, with test cases covered
03:43:20 agreed
03:43:22 Qiming: I am +1 on the basic stuff.
03:44:04 I would suggest implementing the local agent as light as possible; if possible, containerize the local agent
03:44:17 For this, reference the AWS agent implementation
03:44:20 it's better to list out the basic tasks somewhere
03:44:42 #link https://github.com/aws/amazon-ecs-agent
03:45:17 +1 haiwei_
03:45:39 OK, want an etherpad for that?
03:46:22 hongbin, an etherpad would be nice I feel.
03:46:24 Yes, would be great
03:46:39 #link https://etherpad.openstack.org/p/zun-basic-tasks
03:46:49 We have 15 minutes left
03:47:00 10 minutes to work on it maybe
03:47:06 5 minutes open discussion
03:47:13 any updates on the API work now?
03:47:35 I have posted a patch for the Container API controller
03:47:39 But that needs an update
03:47:52 thx, mkrai
03:48:16 #link https://review.openstack.org/#/c/328444/
03:49:15 hoho, 1718 lines added
03:49:26 need two days to go through them, :)
03:49:48 :)
03:50:13 It seems almost all the code is copied from old magnum
03:50:47 Yes yuanying
03:50:55 Need to remove the wsme code
03:50:57 Yes, it looks like most folks are from Magnum
03:51:05 :)
03:51:12 hongbin: :)
03:51:23 I don't like start/stop.../action_controller.. personally
03:51:33 so we learned something about magnum as well :P
03:52:05 One thing is - just making zun a service in openstack that manages containers - needs further detailing w.r.t. USPs IMHO. As in, are we targeting just existing OpenStack environments, or are we also saying that this provides something unique?
03:52:25 yuanying: Please comment on the patch. I will update accordingly
03:52:34 Thanks for your input yuanying
03:52:36 sudipto: good point
03:52:38 At this point in time, we are in a rush to replicate another openstack service, I feel.
03:53:00 * Qiming shares the same feeling
03:53:08 Yes, I am thinking we need a spec to clarify the overall design and roadmap
03:53:18 +1 hongbin
03:53:43 agree hongbin
03:53:50 I am interested in working on the spec. But would need support from all
03:54:01 I can work on that, but I need a few more meetings to discuss with you to drive consensus on the overall design first
03:54:23 sure, we could put the spec in an etherpad
03:54:29 Then, everyone can contribute
03:54:45 It will be huge
03:54:45 #topic Open Discussion
03:55:15 #action hongbin create an etherpad as a draft of the Zun design spec
03:56:00 I would urge everyone to put your thoughts on that etherpad - and come up with ideas that could be unique to zun (not about how similar it is to nova/docker or xyz)
03:56:45 agree
03:57:30 OK. Anything else to discuss?
03:57:54 If not, let's wrap up a bit early
03:58:13 All, thanks for joining the meeting
03:58:21 Thanks everyone
03:58:25 Hope to see you all in the next meeting
03:58:28 #endmeeting