03:00:30 <hongbin> #startmeeting zun
03:00:31 <openstack> Meeting started Tue Aug  2 03:00:30 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:34 <openstack> The meeting name has been set to 'zun'
03:00:38 <hongbin> #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-08-02_0300_UTC Today's agenda
03:00:40 <eliqiao> o/
03:00:43 <hongbin> #topic Roll Call
03:00:47 <mkrai> Madhuri Kumari
03:00:58 <Namrata_> Namrata
03:01:14 <yanyanhu> hi, sorry, I'm late
03:01:17 <sudipto> o/
03:01:18 <Wenzhi> Wenzhi Yu
03:01:39 <itzdilip> Dilip
03:01:52 <hongbin> Thanks for joining the meeting eliqiao mkrai Namrata_ yanyanhu sudipto Wenzhi itzdilip
03:01:56 <hongbin> #topic Announcements
03:02:04 <hongbin> I have no announcement
03:02:19 <hongbin> Do any of you have one?
03:02:34 <hongbin> #topic Review Action Items
03:02:37 <hongbin> None
03:02:42 <hongbin> #topic Runtimes API design (mkrai)
03:02:49 <hongbin> mkrai: ^^
03:03:07 <mkrai> I have implemented the APIs using pecan
03:03:16 <mkrai> Uploaded the patch yesterday
03:03:27 <mkrai> Currently some of the APIs are not running
03:03:40 <mkrai> Waiting for the RPCs to test it
03:04:05 <mkrai> I see hongbin has already uploaded a patch for the compute agent
03:04:13 <hongbin> yes
03:04:13 <eliqiao> https://review.openstack.org/#/c/328444/ mkrai?
03:04:27 <mkrai> I will test using it this week
03:04:30 <mkrai> Yes eliqiao
03:04:30 <shubhams> Hi o/
03:04:59 <mkrai> I would like the team to review the patch once I set it as ready for review
03:05:10 <mkrai> That's all I have.
03:05:19 <mkrai> Anyone have any question?
03:05:19 <hongbin> Thanks mkrai
03:05:20 <eliqiao> We may use zun to test a swarm bay (a bay created by Magnum), mkrai
03:05:44 <mkrai> eliqiao,  Yes we can
03:06:24 <mkrai> It depends on the implementation of compute apis now
03:06:25 <hongbin> eliqiao: Or you can simply run a docker daemon at localhost and test it
03:06:47 <eliqiao> okay
03:06:51 <eliqiao> hongbin: sure.
03:07:14 <mkrai> Also there is a change in the API endpoints
03:07:23 <mkrai> which I will update in the spec etherpad
03:07:41 <hongbin> The API endpoint should be a configurable string
03:07:58 <mkrai> That is for all the action APIs
03:08:10 <hongbin> ok
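For reference, a minimal sketch of the kind of pecan REST controller mkrai describes. The class and method bodies below are hypothetical, not the actual patch under review; the real code would call the compute RPC layer instead of returning stubs.

    # Minimal pecan controller sketch (hypothetical names, stubbed bodies).
    import pecan
    from pecan import expose, rest


    class ContainersController(rest.RestController):
        """Handles requests to a /containers endpoint."""

        @expose('json')
        def get_all(self):
            # A real implementation would list containers via the RPC/DB layer.
            return {'containers': []}

        @expose('json')
        def get_one(self, container_id):
            # A real implementation would look up the container record.
            return {'container': {'id': container_id}}

        @expose('json')
        def post(self, **container):
            # A real implementation would hand off to the compute agent over RPC.
            pecan.response.status = 202
            return {'container': container}


    class RootController(object):
        containers = ContainersController()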
03:08:22 <hongbin> #topic Nova integration (Namrata)
03:08:50 <hongbin> Namrata_: any update from you?
03:09:15 <Namrata_> as per my discussion with hongbin
03:09:40 <Namrata_> the plan is to use the nova scheduler for the nova integration
03:09:49 <mkrai> hongbin, sudipto, and I had some discussion last week, which we recorded in an etherpad
03:09:52 <mkrai> #link https://etherpad.openstack.org/p/zun-containers-nova-integration
03:10:01 <Namrata_> and, if running without nova, Zun's own scheduler
03:10:10 <Namrata_> can we ask the team for their opinion?
03:11:19 <hongbin> Namrata_: I think you can relay your inputs to the etherpad above
03:11:32 <Namrata_> Okay hongbin
03:11:35 * hongbin is reading the etherpad
03:12:22 <mkrai> Namrata_, In that case does it mean our Zun scheduler should be optional?
03:12:48 <Namrata_> yes
03:13:23 <mkrai> Hmm, that means we expect host information in the APIs
03:13:33 <Namrata_> True that
03:13:41 <hongbin> Yes, Zun should be able to return a list of hosts
03:13:48 <mkrai> Not sure about this implementation
03:14:09 <hongbin> mkrai: That is exactly how Ironic is integrated with Nova
03:14:31 <hongbin> mkrai: Ironic returns a list of nodes for the nova scheduler to pick from
03:14:40 <yanyanhu> hongbin, you mean providing a host list to nova and letting it choose one?
03:14:51 <hongbin> yanyanhu: yes
03:14:52 <mkrai> But ironic doesn't have a scheduler. Right?
03:15:02 <hongbin> mkrai: yes
03:15:08 <Wenzhi> then maybe we need to enroll the nodes manually
03:15:41 <hongbin> Yes, all nodes should be enrolled with Zun
03:17:04 <yanyanhu> anyone know how vCenter is integrated with nova?
03:17:19 <Wenzhi> ok, but how does Nova decide the dest node?
03:17:36 <hongbin> yanyanhu: no idea from me
03:17:38 <yanyanhu> which is a similar case I guess?
03:17:49 <mkrai> Wenzhi, based on filters
03:18:24 <hongbin> Wenzhi: nova retrieves the list of hosts and uses filters and weighers to choose a host
03:18:26 <Wenzhi> mkrai: yes, but currently nova does not have any filters for the container case, right?
03:18:36 <eliqiao> yanyanhu: +1
03:18:54 <yanyanhu> some reference: http://docs.openstack.org/kilo/config-reference/content/vmware.html
03:19:02 <mkrai> Yes Wenzhi
03:19:08 <hongbin> Wenzhi: a container is represented as a Nova instance in the nova integration case
03:19:10 <eliqiao> in that case, nova-compute is just a gateway; vCenter decides which host a VM will run on.
03:19:32 <mkrai> But some general filters would work for us too
03:19:41 <mkrai> eliqiao, What is vcenter?
03:19:53 <yanyanhu> eliqiao,  right, that's why I feel it could be a similar case :)
03:20:12 <yanyanhu> "The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters."
03:20:15 <eliqiao> mkrai: just talking about how the vmware driver works, per the link yanyanhu shared
03:20:26 <Wenzhi> hongbin: if so, seems we need to introduce something like container flavor
03:20:42 <hongbin> Wenzhi: yes
03:20:44 <yanyanhu> Wenzhi, yes I guess so
03:21:01 <eliqiao> yanyanhu: the case of vCenter is strange; it allows one compute node to manage multiple hosts.
03:21:15 <eliqiao> for libvirt, it is one compute node per host
03:21:23 <Wenzhi> eliqiao: like powervc, haha
03:21:56 <hongbin> eliqiao: In Ironic, there could be multiple hosts as well
03:22:44 <hongbin> In the case of vCenter, basically, the nova scheduler is disabled?
03:23:00 <itzdilip> no, with respect to compute, vCenter is considered as the host
03:23:12 <Wenzhi> hongbin: not sure but I guess so
03:23:28 <Wenzhi> vcenter itself is a cloud
03:23:48 <yanyanhu> eliqiao, yes; considering that the use case of containers is very different from VMs (at least in Zun's scope), I guess this could be a feasible way to integrate Zun with nova?
03:23:48 <hongbin> sounds like we need more investigation in this area
03:24:13 <yanyanhu> just providing limited support for provisioning container through nova
03:24:28 <hongbin> That will work as well
03:25:25 <mkrai> Let's start with a very simple implementation like Ironic's, and later we can of course enhance it as we evolve
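To illustrate the Ironic-style approach being discussed (Zun enrolls hosts, nova filters them), a hypothetical nova scheduler filter might look like the sketch below. The 'container_host' capability key and the class name are made up for illustration, and the host_passes signature differs slightly across nova releases (filter_properties vs. a RequestSpec object).

    # Hypothetical scheduler filter; not an existing nova or Zun filter.
    from nova.scheduler import filters


    class ContainerHostFilter(filters.BaseHostFilter):
        """Pass only hosts that Zun has enrolled as container-capable."""

        def host_passes(self, host_state, spec_obj):
            # 'container_host' would be a stat published by the Zun compute
            # agent; the key name is an assumption for this sketch.
            return bool(host_state.stats.get('container_host', False))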
03:25:27 <eliqiao> If we integrate with Nova, things get messy; if not, we lose the Nova entry point ...
03:25:41 <Wenzhi> maybe a nova-zun driver can just pass the nova command to zun and let zun do the whole work
03:26:06 <yanyanhu> Wenzhi, yes, that's what I'm thinking
03:26:26 <hongbin> That is the proxy approach, basically, bypass the nova scheduler
03:26:42 <Wenzhi> yes
03:26:52 <yanyanhu> yes, Zun has its own scheduling strategy, based on 'container' use cases
03:27:12 <hongbin> ok
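A rough sketch of the proxy approach Wenzhi and yanyanhu describe: a nova virt driver that simply forwards container operations to Zun, leaving placement to Zun's own scheduler. Every name below is hypothetical (no Zun client library existed at this point), and the ComputeDriver method signatures vary between nova releases.

    # Hypothetical proxy virt driver; signatures and client calls are sketches.
    from nova.virt import driver


    def get_zun_client():
        """Placeholder for whatever client library Zun ends up providing."""
        raise NotImplementedError('hypothetical Zun client')


    class ZunProxyDriver(driver.ComputeDriver):

        def __init__(self, virtapi):
            super(ZunProxyDriver, self).__init__(virtapi)
            self.zun = get_zun_client()

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # Hand the whole request to Zun; Zun decides the target host.
            self.zun.containers.create(name=instance.uuid,
                                       image=image_meta.name)

        def destroy(self, context, instance, network_info,
                    block_device_info=None, destroy_disks=True):
            self.zun.containers.delete(instance.uuid)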
03:27:20 <eliqiao> hongbin: just heard from a colleague at the nova mid-cycle that the nova scheduler will be split out in one or two releases.
03:27:33 <yanyanhu> eliqiao, good news :)
03:27:38 <hongbin> great
03:28:36 <Qiming> in the long run, we as one community should converge on one scheduler as much as we can; we are not supposed to reinvent everything
03:28:56 <hongbin> Qiming: agree
03:29:30 <Qiming> for the short term and mid-term experimentation, it might be okay to run a simple stupid scheduler algorithm, but the interface to that to-be-split scheduler should be clear
03:30:01 <eliqiao> Qiming: ++
03:30:17 <yanyanhu> +1
03:31:00 <hongbin> Namrata_: do you have enough information to proceed?
03:31:11 <hongbin> Namrata_: or anything that needs to be clarified?
03:32:14 <Namrata_> Hongbin I will investigate more on it
03:32:21 <Namrata_> or maybe start with code
03:33:00 <hongbin> Namrata_: sure. You can start with code, or starting with a spec is also fine
03:33:20 <Namrata_> okay hongbin
03:33:34 <hongbin> Thanks Namrata_
03:33:38 <Wenzhi> Namrata_: I think you need to start with a spec, nova demands that
03:34:04 <hongbin> Wenzhi: We started the work entirely outside of the Nova tree, I think
03:34:19 <hongbin> Although starting with a spec is a good idea
03:34:27 <Namrata_> It will not be in the nova tree. Do we still need it?
03:34:44 <hongbin> We can push it to nova tree later
03:35:00 <hongbin> but I am not sure if the Nova team will accept it
03:35:15 <hongbin> If not, then the driver needs to stay at our tree
03:35:24 <Wenzhi> I have the same concern ^^
03:35:56 <Namrata_> I will submit spec in zun
03:36:14 <yanyanhu> no harm in having a try :) once we are ready
03:36:22 <hongbin> yes
03:36:25 <Wenzhi> agreed
03:36:39 <hongbin> OK. Advancing to the next topic
03:36:47 <hongbin> #topic Integrate with Mesos scheduler (sudipto)
03:37:27 <hongbin> sudipto: could you outline the idea of integrating with the mesos scheduler that you mentioned before?
03:37:43 <yanyanhu> let zun sit on top of mesos? :)
03:37:45 <sudipto> hongbin, ok - so my thought was to have a way to run zun as a framework on top of mesos.
03:38:04 <sudipto> keep in mind, this is just one way
03:38:18 <sudipto> there is going to be a z-sched as you guys are discussing so far.
03:38:32 <sudipto> but i'd like that to be pluggable. so that it can be overridden by a mesos scheduler.
03:39:23 <sudipto> the job of creating containers will also be done using mesos, via its executor.
03:39:30 <mkrai> +1 sudipto. Schedulers should be pluggable so that we can integrate with any runtime's scheduler easily
03:40:18 <sudipto> this would allow you to run zun with other container frameworks. presently the frameworks integrated with mesos are kubernetes and marathon
03:41:01 <sudipto> so keep the interfaces clean and pluggable - that should do.
03:41:24 <sudipto> unless there are thoughts/comments - i am done hongbin  :)
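One common OpenStack pattern for the pluggable scheduler sudipto asks for is a config option resolved through stevedore entry points. The sketch below uses assumed names (the 'zun.scheduler.driver' namespace and the option name are not real), just to show the shape of the pattern:

    # Pluggable scheduler loading sketch; namespace and option are assumptions.
    from oslo_config import cfg
    from stevedore import driver

    scheduler_opts = [
        cfg.StrOpt('scheduler_driver',
                   default='simple',
                   help='Scheduler driver to use, e.g. simple or mesos.'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(scheduler_opts)


    def load_scheduler():
        # Entry points under the (made-up) 'zun.scheduler.driver' namespace
        # would map names like 'simple' or 'mesos' to scheduler classes.
        mgr = driver.DriverManager(namespace='zun.scheduler.driver',
                                   name=CONF.scheduler_driver,
                                   invoke_on_load=True)
        return mgr.driver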
03:42:08 <hongbin> sudipto: I guess mesos is not just a scheduler
03:42:17 <hongbin> sudipto: it also manages hosts?
03:42:27 <yanyanhu> hongbin, it does
03:42:29 <sudipto> hongbin, it has to run a slave on each of the hosts.
03:42:38 <yanyanhu> it is a resource pool management service IMO
03:43:11 <yanyanhu> and frameworks upon it provide strategy
03:43:12 <hongbin> Yes, then with this idea, the mesos slave will replace the compute agent
03:43:19 <yanyanhu> to manage lifecycle of tasks
03:43:34 <hongbin> yanyanhu: what do you think about this idea?
03:43:46 <yanyanhu> hongbin, interesting idea :)
03:44:01 <yanyanhu> just not sure whether we should depend deeply on mesos
03:44:05 <hongbin> yanyanhu: you steal my comments :)
03:44:12 <yanyanhu> hongbin, :)
03:44:14 <hongbin> (interesting idea)
03:44:27 <yanyanhu> another concern about mesos is it's C++ based
03:44:41 <hongbin> yes
03:44:51 <hongbin> Qiming: what do you think?
03:44:54 <yanyanhu> could that be a problem for dependency building?
03:44:54 <sudipto> so the workflow happens like this: there's a mesos master that talks to the mesos slaves. The mesos master gets a list of offers and presents it to the framework. The list of offers is nothing but a set of resources for a task deployment, and then the framework makes the decision of where to schedule it.
03:45:03 <sudipto> i would suggest keep this as optional.
03:45:15 <yanyanhu> sudipto,  just like how k8s on mesos works :)
03:45:32 <sudipto> yanyanhu, yeah precisely.
03:46:15 <hongbin> Yes, it should be optional at the beginning
03:46:19 <sudipto> mesos allows you to simultaneously run two frameworks on the same set of hosts.
03:46:25 <sudipto> that's the value add.
03:46:42 <sudipto> s/two/two or more
03:46:55 <hongbin> I think another value add is that we don't need to re-implement resource management from scratch
03:47:11 <sudipto> well that depends on whether you want to treat it as mandatory or optional
03:47:18 <hongbin> however, if it is optional, we still need our own resource management
03:47:25 <hongbin> yes
03:47:52 <hongbin> A drawback is, as yanyanhu mentioned, that it is C++
03:48:01 <sudipto> no that is not a drawback
03:48:06 <hongbin> however, it could be solved by containerizing the mesos master and slave
03:48:06 <sudipto> it has the python bindings.
03:48:16 <hongbin> yes
03:48:25 <sudipto> so why is C++ a drawback then? :)
03:48:43 <sudipto> the mesos core is written in C++ - we don't have to touch that code.
03:48:52 <hongbin> I guess packaging will be more difficult
03:49:17 <hongbin> There is a big discussion about whether openstack should allow Golang
03:49:36 <hongbin> and the packaging team has a strong disagreement with the proposal
03:49:41 <hongbin> same for c++
03:49:51 <sudipto> i don't think we will package mesos with openstack ?
03:49:54 <sudipto> or intend to.
03:50:25 <hongbin> no, if it is containerized
03:50:43 <hongbin> OH, I see what you mean
03:50:57 <hongbin> No, it is packaged by the OS distro
03:51:16 <hongbin> so that looks fine
03:51:28 <sudipto> i mean this could be something  very similar to this :
03:51:33 <sudipto> #link: https://github.com/docker/swarm/blob/master/cluster/mesos/README.md
03:52:09 <hongbin> yes
03:52:17 <sudipto> but this is definitely not a priority i would say.
03:52:46 <hongbin> I guess the network and storage need to be investigated
03:52:46 <sudipto> more focus at the moment should be on the API design and the other pieces that would make zun stand on its own feet first.
03:53:00 <hongbin> yes
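A heavily condensed sketch of the offer/launch workflow sudipto outlines, written against the old mesos.interface Python bindings. The framework and task names, image, and master address are made up; a real framework would match each pending container's resource request against the offers instead of grabbing the first one.

    # Condensed Mesos framework sketch; names and values are illustrative only.
    from mesos.interface import Scheduler, mesos_pb2
    from mesos.native import MesosSchedulerDriver


    class ZunMesosScheduler(Scheduler):

        def resourceOffers(self, driver, offers):
            for offer in offers:
                # Build a Docker task and launch it on the offered slave.
                task = mesos_pb2.TaskInfo()
                task.task_id.value = 'zun-demo-1'
                task.slave_id.value = offer.slave_id.value
                task.name = 'zun-demo-container'
                task.container.type = mesos_pb2.ContainerInfo.DOCKER
                task.container.docker.image = 'nginx'
                task.command.shell = False
                cpus = task.resources.add()
                cpus.name = 'cpus'
                cpus.type = mesos_pb2.Value.SCALAR
                cpus.scalar.value = 0.1
                mem = task.resources.add()
                mem.name = 'mem'
                mem.type = mesos_pb2.Value.SCALAR
                mem.scalar.value = 64
                driver.launchTasks(offer.id, [task])
                return


    if __name__ == '__main__':
        framework = mesos_pb2.FrameworkInfo(user='', name='zun-mesos-sketch')
        MesosSchedulerDriver(ZunMesosScheduler(), framework,
                             'zk://127.0.0.1:2181/mesos').run()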
03:53:12 <hongbin> #topic Open Discussion
03:53:26 <janki> Hi. I am Janki, new to Zun. I am looking for contributing to Zun.
03:53:47 <hongbin> welcome to join the meeting janki
03:54:05 <janki> Not quite sure where to start, as all the BPs are taken and those not taken are all long-term
03:54:05 <sudipto> so are we looking at timelines yet to achieve things? I am guessing i'd be able to meet you guys at Barcelona. But should we aim for a PoC before that?
03:54:05 <janki> hongbin: Thank you :)
03:54:23 <sudipto> janki, don't worry about the BPs yet. Start with the reviews if you like it.
03:54:35 <mkrai> Yes, at least docker support, sudipto :)
03:54:37 <janki> sudipto: Will do
03:54:38 <hongbin> sudipto: PoC?
03:54:48 <sudipto> hongbin, i mean something to work with or demo with.
03:55:00 <sudipto> i change my words - a working project :)
03:55:06 <sudipto> mkrai, yeah agreed.
03:55:24 <hongbin> yes, if we can make that, that is great
03:55:35 <mkrai> Janki, there is a lot of work to be done in Zun
03:55:50 <mkrai> Attend the meetings and you will get an idea of which things you can work on
03:55:56 <janki> mkrai: I would be happy to work on those.
03:56:17 <janki> mkrai: Sure, will attend
03:56:25 <mkrai> cool :)
03:56:37 <sudipto> like for instance, can we say that during the next meeting we can have the API design finalised and mostly up for review?
03:56:53 <mkrai> Hopefully yes sudipto :)
03:57:10 <sudipto> mkrai, i am happy to help!
03:57:25 <mkrai> hongbin, Can I use the compute code now?
03:57:28 <mkrai> Is it ready?
03:57:40 <mkrai> Thanks sudipto
03:57:42 <hongbin> mkrai: I haven't tested it yet :)
03:57:58 <hongbin> mkrai: but everyone is free to revise it
03:58:07 <mkrai> Ok I will try once
03:58:14 <mkrai> Sure
03:58:17 <hongbin> There will be a lot of bugs though :)
03:58:22 <sudipto> mkrai, i think the API code can move with stubs for now?
03:58:42 <mkrai> The current one is just bypassing the RPC calls
03:58:43 <sudipto> are we going to be using the docker remote apis ?
03:59:02 <sudipto> anyway i think we are on the brink of the hour. I will catch you on the zun channel.
03:59:11 <mkrai> We will use the compute rpc apis
03:59:16 <mkrai> Sure
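For local testing against a docker daemon (as hongbin suggested earlier), the docker remote API calls the compute agent would eventually issue can be tried directly with the docker-py 1.x client; the image and names below are only examples.

    # docker-py 1.x example against a local daemon; image and name are examples.
    from docker import Client

    cli = Client(base_url='unix://var/run/docker.sock')
    container = cli.create_container(image='busybox', command='sleep 60',
                                     name='zun-test')
    cli.start(container=container['Id'])
    print(cli.containers(filters={'name': 'zun-test'}))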
03:59:23 <mkrai> Thanks all!
03:59:35 <hongbin> All, thanks for joining the meeting
03:59:40 <Namrata_> Thanks..
03:59:44 <hongbin> Next meeting is next week at the same time
03:59:49 <hongbin> #endmeeting