03:00:30 #startmeeting zun
03:00:31 Meeting started Tue Aug 2 03:00:30 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:34 The meeting name has been set to 'zun'
03:00:38 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-08-02_0300_UTC Today's agenda
03:00:40 o/
03:00:43 #topic Roll Call
03:00:47 Madhuri Kumari
03:00:58 Namrata
03:01:14 hi, sorry, I'm late
03:01:17 o/
03:01:18 Wenzhi Yu
03:01:39 Dilip
03:01:52 Thanks for joining the meeting eliqiao mkrai Namrata_ yanyanhu sudipto Wenzhi itzdilip
03:01:56 #topic Announcements
03:02:04 I have no announcement
03:02:19 Do any of you have one?
03:02:34 #topic Review Action Items
03:02:37 None
03:02:42 #topic Runtimes API design (mkrai)
03:02:49 mkrai: ^^
03:03:07 I have implemented the APIs using pecan
03:03:16 Uploaded the patch yesterday
03:03:27 Currently some of the APIs are not running
03:03:40 Waiting for the RPC layer to test them
03:04:05 I see hongbin has already uploaded a patch for the compute agent
03:04:13 yes
03:04:13 https://review.openstack.org/#/c/328444/ mkrai?
03:04:27 I will test using it this week
03:04:30 Yes eliqiao
03:04:30 Hi o/
03:04:59 I would like the team to review the patch once I set it as ready for review
03:05:10 That's all I have.
03:05:19 Anyone have any questions?
03:05:19 Thanks mkrai
03:05:20 We may use zun to test a swarm bay (a bay created by Magnum), mkrai
03:05:44 eliqiao, Yes we can
03:06:24 It depends on the implementation of the compute APIs now
03:06:25 eliqiao: Or you can simply run a docker daemon on localhost and test against it
03:06:47 okay
03:06:51 hongbin: sure.
03:07:14 Also, there is a change in the API endpoints
03:07:23 which I will update in the spec etherpad
03:07:41 The API endpoint should be a configurable string
03:07:58 That is for all the action APIs
03:08:10 ok
03:08:22 #topic Nova integration (Namrata)
03:08:50 Namrata_: any update from you?
03:09:15 as per the discussion with hongbin
03:09:40 That is, using the nova scheduler with nova integration
03:09:49 hongbin, sudipto, and I had some discussion last week, which we recorded in an etherpad
03:09:52 #link https://etherpad.openstack.org/p/zun-containers-nova-integration
03:10:01 and, if without nova, zun's own scheduler
03:10:10 can we ask the team for their opinion?
03:11:19 Namrata_: I think you can relay your inputs to the etherpad above
03:11:32 Okay hongbin
03:11:35 * hongbin is reading the etherpad
03:12:22 Namrata_, In that case, does it mean our Zun scheduler should be optional?
03:12:48 yes
03:13:23 Hmm, that means we expect host information in the APIs
03:13:33 True that
03:13:41 Yes, Zun should be able to return a list of hosts
03:13:48 Not sure about this implementation
03:14:09 mkrai: That is exactly how Ironic is integrated with Nova
03:14:31 mkrai: Ironic returns a list of nodes for the nova scheduler to pick from
03:14:40 hongbin, you mean providing the host list to nova and letting it choose one?
03:14:51 yanyanhu: yes
03:14:52 But ironic doesn't have a scheduler. Right?
03:15:02 mkrai: yes
03:15:08 then maybe we need to enroll the nodes manually
03:15:41 Yes, all nodes should be enrolled at Zun
03:17:04 does anyone know how vcenter is integrated with nova?
03:17:19 ok, but how does Nova decide the destination node?
03:17:36 yanyanhu: no idea from me
03:17:38 which is a similar case I guess?
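For context on the Ironic-style integration discussed above (Zun returning a list of hosts for the Nova scheduler to pick from), a minimal sketch might look like the following; ZunDriver, zun_client, and the host attributes are hypothetical names for illustration, not actual Zun or Nova code.

```python
# A minimal sketch of the Ironic-style pattern discussed above: the
# Nova virt driver enumerates every enrolled Zun host as a schedulable
# node, and the Nova scheduler picks the destination with its usual
# filters and weighers. All names here are illustrative assumptions.

class ZunDriver(object):
    """Hypothetical Nova virt driver backed by Zun container hosts."""

    def __init__(self, zun_client):
        self.zun_client = zun_client  # assumed client for the Zun API

    def get_available_nodes(self, refresh=False):
        # One node per enrolled container host, mirroring how the
        # Ironic driver enumerates bare-metal nodes for Nova.
        return [host.uuid for host in self.zun_client.hosts.list()]

    def get_available_resource(self, nodename):
        # Report capacity so Nova's filters and weighers can evaluate
        # the host; the keys follow the Nova virt driver contract.
        host = self.zun_client.hosts.get(nodename)
        return {
            'vcpus': host.cpus,
            'memory_mb': host.mem_total,
            'local_gb': host.disk_total,
            'vcpus_used': host.cpus_used,
            'memory_mb_used': host.mem_used,
            'local_gb_used': host.disk_used,
            'hypervisor_type': 'zun',
            'hypervisor_version': 1,
            'hypervisor_hostname': nodename,
        }
```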
03:17:49 Wenzhi, based on filters
03:18:24 Wenzhi: nova retrieves the list of hosts, then uses filters and weighers to choose one
03:18:26 mkrai: yes, but currently nova does not have any filters for the container case, right?
03:18:36 yanyanhu: +1
03:18:54 some reference: http://docs.openstack.org/kilo/config-reference/content/vmware.html
03:19:02 Yes Wenzhi
03:19:08 Wenzhi: a container is represented as a Nova instance under nova integration
03:19:10 in that case, nova-compute is just a gateway; vcenter decides which host the vm will run on.
03:19:32 But some general filters would work for us too
03:19:41 eliqiao, What is vcenter?
03:19:53 eliqiao, right, that's why I feel it could be a similar case :)
03:20:12 "The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters."
03:20:15 mkrai: just explaining how the vmware driver works, from yanyanhu's reference
03:20:26 hongbin: if so, it seems we need to introduce something like a container flavor
03:20:42 Wenzhi: yes
03:20:44 Wenzhi, yes I guess so
03:21:01 yanyanhu: the case of vCenter is strange; it allows one compute node to front multiple hosts.
03:21:15 for libvirt it is one compute node per host
03:21:23 eliqiao: like powervc, haha
03:21:56 eliqiao: In Ironic, there can be multiple hosts as well
03:22:44 In the case of vcenter, basically, the nova scheduler is disabled?
03:23:00 no; with respect to nova-compute, vcenter is considered a host
03:23:12 hongbin: not sure, but I guess so
03:23:28 vcenter itself is a cloud
03:23:48 eliqiao, yes; just considering that the use case of containers is very different from vms (at least in Zun's scope), I guess this could be a feasible way to integrate Zun with nova?
03:23:48 sounds like we need more investigation in this area
03:24:13 just providing limited support for provisioning containers through nova
03:24:28 That will work as well
03:25:25 Let's start with a very simple implementation like Ironic's, and later we can of course enhance it as we evolve
03:25:27 If we integrate with Nova, things get messy; if we don't, we lose the Nova entry point ...
03:25:41 maybe the nova-zun-driver can just pass the nova command through to zun and let zun do the whole work
03:26:06 Wenzhi, yes, that's what I'm thinking
03:26:26 That is the proxy approach; basically, bypass the nova scheduler
03:26:42 yes
03:26:52 yes, Zun has its own scheduling strategy, based on 'container' use cases
03:27:12 ok
03:27:20 hongbin: just heard from a colleague at the nova mid-cycle that the nova scheduler will be split out within one or two releases.
03:27:33 eliqiao, good news :)
03:27:38 great
03:28:36 in the long run, we as one community should converge on one scheduler as much as we can; we are not supposed to reinvent everything
03:28:56 Qiming: agree
03:29:30 for short-term and mid-term experimentation, it might be okay to run a simple, stupid scheduler algorithm, but the interface to that to-be-split scheduler should be clear
03:30:01 Qiming: ++
03:30:17 +1
03:31:00 Namrata_: do you have enough information to proceed?
03:31:11 Namrata_: or anything that needs to be clarified?
03:32:14 Hongbin, I will investigate it more
03:32:21 or maybe start with code
03:33:00 Namrata_: sure.
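The "proxy" approach raised above, where the driver passes the request through to Zun and Zun's own scheduler places the container, could look roughly like this sketch; the zun_client calls and the flavor-to-container mapping (the "container flavor" idea) are assumptions for illustration, not actual Zun or Nova code.

```python
# A rough sketch of the proxy approach discussed above: the driver
# hands the whole request to Zun and bypasses Nova's placement, letting
# Zun's own scheduler choose the host. All names are illustrative
# assumptions.

class ZunProxyDriver(object):
    """Hypothetical driver variant that delegates placement to Zun."""

    def __init__(self, zun_client):
        self.zun_client = zun_client  # assumed Zun API client

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        # Translate the Nova flavor into container resource limits (a
        # "container flavor") and forward the request; no destination
        # host is passed, so Zun schedules the container itself.
        self.zun_client.containers.create(
            name=instance.display_name,
            image=image_meta.name,
            cpu=instance.flavor.vcpus,
            memory=instance.flavor.memory_mb,
        )
```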
you can start with code, or starting with a spec is also fine
03:33:20 okay hongbin
03:33:34 Thanks Namrata_
03:33:38 Namrata_: I think you need to start with a spec; nova demands that
03:34:04 Wenzhi: We started the work totally outside of Nova, I think
03:34:19 Although starting with a spec is a good idea
03:34:27 It will not be in the nova tree. Do we still need it?
03:34:44 We can push it to the nova tree later
03:35:00 but I am not sure if the Nova team will accept it
03:35:15 If not, then the driver needs to stay in our tree
03:35:24 I have the same concern ^^
03:35:56 I will submit the spec in zun
03:36:14 no harm in having a try :) after we are ready
03:36:22 yes
03:36:25 agreed
03:36:39 OK, advancing to the next topic
03:36:47 #topic Integrate with Mesos scheduler (sudipto)
03:37:27 sudipto: could you outline the idea of integrating with the mesos scheduler that you mentioned before?
03:37:43 let zun sit on top of mesos? :)
03:37:45 hongbin, ok - so my thought was to have a way to run zun as a framework on top of mesos.
03:38:04 keep in mind, this is just one way
03:38:18 there is going to be a z-sched, as you guys have been discussing so far.
03:38:32 but i'd like that to be pluggable, so that it can be overridden by a mesos scheduler.
03:39:23 the job of creating the container will also be done using mesos, via its executor.
03:39:30 +1 sudipto. Schedulers should be pluggable so that we can integrate with any runtime's scheduler easily
03:40:18 this would allow you to run zun with other container frameworks. presently the ones integrated with mesos as frameworks are kubernetes and marathon
03:41:01 so keep the interfaces clean and pluggable - that should do.
03:41:24 unless there are thoughts/comments - i am done hongbin :)
03:42:08 sudipto: I guess mesos is not just a scheduler
03:42:17 sudipto: it also manages hosts?
03:42:27 hongbin, it does
03:42:29 hongbin, it has to run a slave on each of the hosts.
03:42:38 it is a resource pool management service IMO
03:43:11 and frameworks on top of it provide the strategy
03:43:12 Yes, then with this idea, the mesos slave will replace the compute agent
03:43:19 to manage the lifecycle of tasks
03:43:34 yanyanhu: what do you think about this idea?
03:43:46 hongbin, interesting idea :)
03:44:01 just not sure whether we should depend deeply on mesos
03:44:05 yanyanhu: you stole my comment :)
03:44:12 hongbin, :)
03:44:14 (interesting idea)
03:44:27 another concern about mesos is that it's C++ based
03:44:41 yes
03:44:51 Qiming: what do you think?
03:44:54 could that be a problem for dependency building?
03:44:54 so the workflow happens like this: there's a mesos master that talks to the mesos slaves. The mesos master gets a list of offers and presents it to the framework. The list of offers is nothing but a set of resources for a task deployment, and then the framework makes the decision of where to schedule it.
03:45:03 i would suggest keeping this optional.
03:45:15 sudipto, just like how k8s on mesos works :)
03:45:32 yanyanhu, yeah precisely.
03:46:15 Yes, it should be optional at the beginning
03:46:19 mesos allows you to simultaneously run two or more frameworks on the same set of hosts.
03:46:25 that's the value add.
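The pluggable scheduler interface suggested above might be shaped like the following sketch, with the built-in strategy swappable for a Mesos-backed framework; all class and attribute names are illustrative assumptions, not actual Zun code.

```python
# A minimal sketch of a pluggable scheduler interface, as suggested in
# the discussion above. Names are assumptions for illustration.
import abc


class ContainerScheduler(abc.ABC):
    """Base class a Zun scheduler plugin would implement."""

    @abc.abstractmethod
    def select_host(self, context, container, hosts):
        """Return the host the container should be placed on."""


class SimpleScheduler(ContainerScheduler):
    # Deliberately naive default: first host with enough free memory.
    def select_host(self, context, container, hosts):
        for host in hosts:
            if host.mem_free >= container.memory:
                return host
        raise RuntimeError('no valid host found')


class MesosScheduler(ContainerScheduler):
    # A Mesos-backed plugin would not pick from `hosts` directly; it
    # would register as a framework and accept resource offers from
    # the Mesos master, as described in the discussion above.
    def select_host(self, context, container, hosts):
        raise NotImplementedError('delegated to the Mesos framework')
```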
03:46:55 I think another value add is that we don't need to re-implement resource management from scratch
03:47:11 well, that depends on whether you treat it as mandatory or optional
03:47:18 however, since it is optional, we still need our own resource management
03:47:25 yes
03:47:52 A drawback, as yanyanhu mentioned, is that it is C++
03:48:01 no, that is not a drawback
03:48:06 however, it could be solved by containerizing the mesos master and slave
03:48:06 it has python bindings.
03:48:16 yes
03:48:25 so why is C++ a drawback then? :)
03:48:43 the mesos core is written in C++ - we don't have to touch that code.
03:48:52 I guess packaging will be more difficult
03:49:17 There is a big discussion about whether openstack should allow Golang
03:49:36 and the packaging team strongly disagrees with the proposal
03:49:41 same for C++
03:49:51 i don't think we will package mesos with openstack?
03:49:54 or intend to.
03:50:25 no, if it is containerized
03:50:43 OH, I see what you mean
03:50:57 No, it is packaged by the OS distro
03:51:16 so that looks fine
03:51:28 i mean this could be something very similar to this:
03:51:33 #link: https://github.com/docker/swarm/blob/master/cluster/mesos/README.md
03:52:09 yes
03:52:17 but this is definitely not a priority, i would say.
03:52:46 I guess the network and storage pieces need to be investigated
03:52:46 more focus at the moment should be on the API design and the other pieces that would make zun stand on its own feet first.
03:53:00 yes
03:53:12 #topic Open Discussion
03:53:26 Hi. I am Janki, new to Zun. I am looking to contribute to Zun.
03:53:47 welcome to the meeting, janki
03:53:55 Not quite sure where to start, as all the BPs are taken and those not taken are all long-term
03:54:05 so are we looking at timelines yet to achieve things? I am guessing I'd be able to meet you guys in Barcelona. But should we aim for a PoC before that?
03:54:05 hongbin: Thank you :)
03:54:23 janki, don't worry about the BPs yet. Start with reviews if you like.
03:54:35 Yes, at least docker support, sudipto :)
03:54:37 sudipto: Will do
03:54:38 sudipto: PoC?
03:54:48 hongbin, i mean something to work with or demo with.
03:55:00 i'll change my words - a working project :)
03:55:06 mkrai, yeah agreed.
03:55:24 yes, if we can make that, that would be great
03:55:35 Janki, there is lots of work to be done in Zun
03:55:50 Attend the meetings and you will get an idea of which things you can work on
03:55:56 mkrai: I would be happy to work on those.
03:56:17 mkrai: Sure, will attend
03:56:25 cool :)
03:56:37 like, for instance, can we say that by the next meeting we will have the API design finalised and mostly up for review?
03:56:53 Hopefully yes, sudipto :)
03:57:10 mkrai, i am happy to help!
03:57:25 hongbin, Can I use the compute code now?
03:57:28 Is it ready?
03:57:40 Thanks sudipto
03:57:42 mkrai: I haven't tested it yet :)
03:57:58 mkrai: but everyone is free to revise it
03:58:07 Ok, I will give it a try
03:58:14 Sure
03:58:17 There will be a lot of bugs though :)
03:58:22 mkrai, i think the API code can move ahead with stubs for now?
03:58:42 The current one just bypasses the RPC calls
03:58:43 are we going to be using the docker remote APIs?
03:59:02 anyway, i think we are at the end of the hour. I will catch up on the zun channel.
03:59:11 We will use the compute RPC APIs
03:59:16 Sure
03:59:23 Thanks all!
03:59:35 All, thanks for joining the meeting
03:59:40 Thanks..
03:59:44 The next meeting is next week at the same time
03:59:49 #endmeeting
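As a postscript on the closing exchange about using the compute RPC APIs rather than calling the docker remote API from the API service, a hedged sketch of that call path using oslo.messaging (the usual OpenStack RPC library) might look like this; the topic, method name, and payload are assumptions for illustration, not the actual Zun interface.

```python
# A hedged sketch of the compute RPC path mentioned at the end of the
# meeting: the API layer casts to the compute agent over oslo.messaging
# instead of talking to the Docker daemon directly. Topic, method name,
# and payload are illustrative assumptions.
import oslo_messaging as messaging
from oslo_config import cfg


class ComputeAPI(object):
    """Client-side RPC API the Zun API service might use."""

    def __init__(self):
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic='zun-compute', version='1.0')
        self._client = messaging.RPCClient(transport, target)

    def container_create(self, context, container):
        # Asynchronous cast: the compute agent on the selected host
        # receives the message and drives the local Docker daemon
        # through its remote API.
        self._client.cast(context, 'container_create',
                          container=container)
```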