03:00:02 <hongbin> #startmeeting zun
03:00:03 <openstack> Meeting started Tue Jan 17 03:00:02 2017 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:06 <openstack> The meeting name has been set to 'zun'
03:00:08 <hongbin> #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2017-01-17_0300_UTC Today's agenda
03:00:13 <hongbin> #topic Roll Call
03:00:17 <pksingh> Pradeep
03:00:20 <diga> o/
03:00:24 <mkrai> Madhuri Kumari
03:00:27 <Namrata> Namrata
03:00:37 <lakerzhou> lakerzhou
03:00:48 <sudipto_> o/
03:00:59 <kevinz> kevinz
03:01:04 <hongbin> thanks for joining the meeting pksingh diga mkrai Namrata lakerzhou sudipto_ kevinz
03:01:12 <hongbin> #topic Announcements
03:01:18 <hongbin> i have no announcement
03:01:22 <hongbin> anyone else has?
03:01:40 <hongbin> #topic Review Action Items
03:01:42 <hongbin> none
03:01:49 <hongbin> #topic Cinder integration (diga)
03:01:54 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/cinder-zun-integration The BP
03:01:59 <hongbin> #link https://review.openstack.org/#/c/417747/ The design spec
03:02:02 <hongbin> diga: ^^
03:02:13 <diga> hongbin: Yes
03:02:22 <diga> hongbin: I saw your comments
03:02:47 <diga> I think the current approach looks okay to me
03:02:56 <hongbin> ok
03:03:34 <diga> I agreed that we should create a separate volume table for this implementation
03:03:54 <hongbin> diga: ack
03:04:23 <diga> hongbin: I have studied this, and that is how I came up with this approach
03:04:33 <hongbin> diga: sure
03:04:38 <hongbin> diga: most of my comments are asking for clarification
03:04:46 <diga> hongbin: if we do it this way, then later on, we can extend this to multiple drivers
03:04:50 <diga> hongbin: yes
03:04:59 <hongbin> diga: ok
03:05:33 <hongbin> diga: then, i would look forward to your revision to address them
03:05:35 <diga> hongbin: I will revisit the spec, will reply to your comments
03:05:39 <diga> hongbin: yes
03:05:54 <diga> hongbin: I will update it in next one hr
03:05:58 <hongbin> diga: thanks
03:06:20 <diga> hongbin: welcome!
03:06:20 <hongbin> for others, any comment about the cinder integration spec?
03:06:51 <pksingh> i agree that there should be no hard dependency on any projects
03:07:09 <pksingh> we should design in that way
03:07:32 <mkrai> The driver based implementation is preferable
03:07:40 <diga> yes, that's the approach I am taking in this spec
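[editor's note: a minimal sketch of the driver-based volume design discussed above, in which Cinder is just one pluggable backend behind a common interface. All class and method names here are illustrative assumptions, not the actual Zun API.]

```python
from abc import ABC, abstractmethod


class VolumeDriver(ABC):
    """Hypothetical common interface for Zun volume backends.

    The point of the design above: Zun code talks only to this
    interface, so Cinder is one driver and others can be added later
    without a hard dependency on any one project.
    """

    @abstractmethod
    def attach(self, volume_id, container_id):
        """Attach a volume to a container."""

    @abstractmethod
    def detach(self, volume_id, container_id):
        """Detach a volume from a container."""


class CinderDriver(VolumeDriver):
    """Illustrative Cinder-backed driver (stubbed out)."""

    def attach(self, volume_id, container_id):
        # Real code would call the Cinder API here.
        return f"cinder attach {volume_id} -> {container_id}"

    def detach(self, volume_id, container_id):
        return f"cinder detach {volume_id} <- {container_id}"
```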
03:07:54 <hongbin> pksingh: mkrai +1
03:08:46 <hongbin> ok, next topic
03:08:52 <hongbin> #topic Support interactive mode (kevinz)
03:08:57 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/support-interactive-mode The BP
03:09:02 <hongbin> #link https://review.openstack.org/#/c/396841/ The design spec
03:09:12 <hongbin> kevinz: ^^
03:09:18 <kevinz> Hi
03:10:09 <kevinz> I plan to use CLI->API->COMPUTE->Docker Daemon to implement the container tty resize function
03:10:18 <kevinz> on the server side
03:10:31 <hongbin> great
03:11:14 <kevinz> Also, the websocket link needs the docker version
03:11:26 <hongbin> i see
03:11:43 <kevinz> on the compute node. Do we already have it? If not I can add a func to get it
03:12:06 <mkrai> Yes it is already there
03:12:24 <mkrai> we do have a conf for it
03:12:46 <hongbin> i think the problem is how to expose the version via REST API?
03:13:45 <kevinz> The API just gets the docker version from the compute node and then generates the websocket link for the CLI
03:14:03 <hongbin> i see
03:14:06 <kevinz> hongbin: Yeah
03:15:20 <hongbin> kevinz: if i understand correctly, zun needs to have an admin api to return the link for cli to do interactive operations?
03:16:11 <hongbin> kevinz: however, the websocket link is generic? or it is runtime-specific?
03:16:12 <kevinz> Yes, exactly
03:17:17 <kevinz> Yes, the websocket API needs the docker version, docker daemon IP and port
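[editor's note: an illustrative sketch (not the actual Zun code) of how the API side could assemble the websocket link from the three pieces kevinz lists — docker API version, daemon IP, and port. The endpoint shape follows the Docker Engine API's versioned attach-over-websocket path; the function name is an assumption.]

```python
def build_attach_url(host_ip, host_port, api_version, container_id):
    """Build a docker attach-over-websocket URL for one container.

    Docker's Engine API exposes a versioned endpoint of the form
    /v<version>/containers/<id>/attach/ws, which is why the API
    needs the daemon's version before it can hand a link to the CLI.
    """
    return (
        f"ws://{host_ip}:{host_port}/v{api_version}"
        f"/containers/{container_id}/attach/ws"
        "?logs=0&stream=1&stdin=1&stdout=1&stderr=1"
    )
```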
03:17:43 <hongbin> kevinz: ok
03:18:03 <hongbin> kevinz: feel free to go ahead and submit a patch for review
03:18:16 <kevinz> OK, I see
03:18:20 <kevinz> Thanks hongbin
03:18:44 <hongbin> any other question for kevinz ?
03:18:58 <kevinz> No more :-)
03:19:23 <hongbin> thanks kevinz
03:19:27 <hongbin> next one
03:19:29 <hongbin> #topic Make Zunclient an OpenStackClient plugin (Namrata)
03:19:34 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/zun-osc-plugin The BP
03:19:38 <hongbin> Namrata: ^^
03:19:42 <Namrata> hi
03:20:21 <Namrata> the blueprint is completed
03:20:30 <hongbin> Namrata: awesome
03:20:33 <Namrata> as for now, as discussed earlier
03:20:43 <Namrata> thanks hongbin
03:20:52 <hongbin> Namrata: thanks for the great work
03:21:18 <pksingh> Namrata: great work :)
03:21:26 <Namrata> thanks pksingh
03:21:36 <hongbin> Namrata: mind marking this bp as implemented? https://blueprints.launchpad.net/zun/+spec/zun-osc-plugin
03:21:50 <Namrata> yeah sure
03:21:55 <hongbin> Namrata: thanks
03:22:10 <hongbin> any other comment for the osc bp?
03:22:39 <hongbin> ok, next one
03:22:42 <hongbin> #topic How to expose CPU configurations for containers
03:22:50 <hongbin> #link https://review.openstack.org/#/c/418675/ A proposal to add cpushare to container
03:22:55 <hongbin> #link https://review.openstack.org/#/c/418175/ A proposal to change description of cpu parameter
03:23:26 <hongbin> i will try to summarize the discussion, sudipto_ feel free to chime in if you have any comment
03:23:39 <sudipto_> sure
03:23:53 <hongbin> we have been discussing how to expose the cpu constraints of the container via the zun api
03:24:00 <sudipto_> I have been struggling with time management this past week, but hopefully this week would be better.
03:24:24 <hongbin> currently, we are exposing cpu constraints as the number of cores
03:24:34 <hongbin> that is vcpu (same as nova)
03:24:54 <hongbin> however, there are several alternatives proposed
03:25:20 <hongbin> for example, exposing the cpushare parameter instead
03:25:40 <sudipto_> hongbin, i have my doubts over what we are calling a vcpu right now.
03:25:59 <hongbin> sudipto_: i guess it is the number of virtual cores
03:26:00 <pksingh> hongbin: number of cores or relative number of cpu cycles, i am not sure whether they are the same or different
03:26:20 <hongbin> pksingh: i see, i am not sure either
03:26:31 <sudipto_> hongbin, number of virtual cores have no significance unless you can map them to cores on the system...which we aren't doing.
03:26:51 <hongbin> sudipto_: i see
03:27:06 <hongbin> then, let's discuss. what is the best way to do this
03:27:27 <sudipto_> hongbin, the last time we discussed, the proposal was to bring out cpu policies
03:27:35 <pksingh> we are using cpu-quota, and docker describes it as 'cpu-quota - Microseconds of CPU time that the container can get in a CPU period'
03:28:07 <sudipto_> pksingh, yup, that's my point.
03:28:36 <sudipto_> so if you define a CPU period of 10 ms. Then a cpu-quota will define, how many ms - your container can execute in that period.
03:28:51 <pksingh> sudipto_: +1
03:29:07 <sudipto_> so it kinda boils down to a shares concept
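[editor's note: a quick numeric illustration of sudipto_'s example above — with a CPU period of 10 ms (10000 us), the cpu-quota caps how many microseconds the container may run in each period, which works out to a fraction of one CPU (or more than one CPU's worth on a multi-core host). The helper name is illustrative.]

```python
def cpu_fraction(quota_us, period_us=10_000):
    """CPU share implied by a cgroup cpu-quota within a cpu-period.

    quota/period is the fraction of one CPU the container may use;
    values above 1.0 mean more than one core's worth of CPU time.
    """
    return quota_us / period_us


# quota of 5000us in a 10000us period -> half a CPU
assert cpu_fraction(5_000) == 0.5
# quota of 20000us in a 10000us period -> two full CPUs' worth
assert cpu_fraction(20_000) == 2.0
```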
03:29:51 <hongbin> i have looked at how k8s handles cpu before
03:30:27 <hongbin> if i remember correctly, they used cpu quota for max cpu allocation, cpu share for required cpu allocation
03:31:36 <hongbin> however, k8s is using a different cpu unit (not vcpu)
03:32:18 <hongbin> sudipto_: pksingh what are your opinions of the ideal solution?
03:32:25 <sudipto_> hongbin, so let's talk in terms of physical cores on the system, if there are 5 cores in the system...what's a cpu period/quota for this system?
03:32:44 <sudipto_> hongbin, i plan to get some clarity on this today
03:32:56 <pksingh> i was thinking exposing these things can make the scheduling job complex
03:33:19 <sudipto_> hongbin, we don't expose these, we expose policies.
03:33:25 <sudipto_> pksingh, ^
03:33:38 <sudipto_> policies being - shared/dedicated/strict
03:33:52 <pksingh> means it would be configurable?
03:33:57 <sudipto_> where shared means the default case, where we are operating right now.
03:34:04 <sudipto_> yeah configurable as a part of zun run command
03:34:33 <hongbin> sudipto_: i am fine with the policy things
03:34:41 <sudipto_> dedicated means, the zun backend code will give you dedicated cpu cores to run on. While strict means, there's a one on one mapping of cores to containers.
03:34:52 <hongbin> sudipto_: however, that is about cpu pinning, but less about cpu allocation?
03:35:06 <sudipto_> hongbin, agreed.
03:35:31 <sudipto_> hongbin, do you know how k8s does cpu allocation? (shares is one way)
03:35:36 <hongbin> sudipto_: ok, it seems you proposed to expose "number of physical cores" + policy?
03:36:16 <sudipto_> hongbin, after the discussion with you and pksingh i feel it's a good idea to not expose the cores, but just the policy to the end user.
03:36:25 <pksingh> i was thinking about a public cloud, would it be better to expose this?
03:37:13 <sudipto_> pksingh, public cloud with openstack? very few :) but that's beyond the point.
03:37:39 <sudipto_> i agree with you that we should not be exposing cores to the end users, hence the policies.
03:37:52 <hongbin> i think zun would be mainly targeted for private cloud (since container on public cloud has isolation problem)
03:38:10 <sudipto_> Now why policies? Because there's a need for running NFV based workloads to have dedicated resources.
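[editor's note: a rough sketch of how the shared/dedicated policies described above could translate into container cpuset settings — "shared" leaves placement to the kernel scheduler (the default case), "dedicated" pins the container to reserved cores, which is what NFV workloads want. The function name, dict keys, and policy strings are assumptions for illustration, not Zun's actual API.]

```python
def policy_to_host_config(policy, reserved_cores=None):
    """Map a hypothetical zun cpu policy to docker-style cpuset args.

    "shared"    -> no pinning; container floats across all cores.
    "dedicated" -> pin to a set of cores reserved by the backend,
                   e.g. docker's --cpuset-cpus "2,3".
    """
    if policy == "shared":
        return {}
    if policy == "dedicated":
        if not reserved_cores:
            raise ValueError("dedicated policy needs reserved cores")
        return {"cpuset_cpus": ",".join(str(c) for c in reserved_cores)}
    raise ValueError(f"unknown policy: {policy}")
```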
03:38:37 <pksingh> hongbin: +1
03:38:45 <hongbin> sudipto_: if we expose policies, we need to expose the number of cores as well?
03:38:57 <pksingh> sudipto_: can we run them on a different set of compute nodes?
03:39:10 <sudipto_> hongbin, not to the user necessarily right?
03:39:11 <hongbin> sudipto_: for example, if we have a policy "dedicated", then how many cores are dedicated?
03:39:21 <sudipto_> hongbin, o yeah, for that yes.
03:39:30 <sudipto_> i thought you mean the actual numbers on the system
03:39:37 <sudipto_> pksingh, meaning?
03:39:50 <lakerzhou> policy is related to cpu pining support only
03:39:57 <sudipto_> lakerzhou, yeah
03:40:13 <pksingh> sudipto_: we have some nodes in the system which are dedicated for this dedicated policy
03:40:29 <pksingh> sudipto_: we will always allot that container to that set of nodes
03:41:02 <lakerzhou> NFV applications usually require a certain # of cores (dedicated)
03:41:10 <sudipto_> pksingh, that does sound like the availability zones concept, but yes you need to do that.
03:41:37 <sudipto_> pksingh, someone in nova had proposed a way to overcome this by creating host capabilities. I will share that spec with you once i find it.
03:42:32 <sudipto_> lakerzhou, +1
03:42:38 <pksingh> sudipto_: ok
03:42:41 <hongbin> ok, if we want to expose # of cores, how to do that?
03:43:00 <bkero> $(nproc)
03:43:41 <hongbin> bkero: yes, it seems that is the command to get the number of processors
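[editor's note: `$(nproc)` reports the number of processors available to the calling process; the closest Python equivalents are below. Note both count logical CPUs (hyperthreads), not physical cores — which is exactly lakerzhou's vcore-vs-physical-core point.]

```python
import os

# All logical CPUs on the host:
logical_cpus = os.cpu_count()

# On Linux, the subset of CPUs this process is actually allowed to run
# on (what nproc reports under taskset/cgroup restrictions); fall back
# to the host count elsewhere:
if hasattr(os, "sched_getaffinity"):
    usable_cpus = len(os.sched_getaffinity(0))
else:
    usable_cpus = logical_cpus

assert 1 <= usable_cpus <= logical_cpus
```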
03:43:52 <lakerzhou> Also in nova, the # of cores are vcores, not physical cores
03:43:55 <sudipto_> hongbin, that boils down - to if we can expose something in the form of a vcpu for a container
03:44:16 <hongbin> sudipto_: what do you think about that?
03:44:27 <sudipto_> lakerzhou, the virtual cores, give you the idea of how many physical cores you would need for a dedicated use case.
03:44:49 <sudipto_> hongbin, i will get back on this by today, if that's ok.
03:44:59 <hongbin> sudipto_: ok, sure
03:45:11 <hongbin> perhaps, we could table this discussion to next week
03:45:26 <hongbin> then, all of us can study more about this area
03:45:38 <sudipto_> yeah
03:45:48 <hongbin> any last minute comment before advancing topic?
03:46:06 <hongbin> #topic Discuss BPs that are pending approval
03:46:13 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/support-port-bindings Support container port mapping
03:46:49 <hongbin> kevinz: i saw you proposed this bp, want to drive this one?
03:46:59 <kevinz> hongbin: YEAH
03:47:17 <kevinz> I think we can add a port binding to the container at create time
03:47:24 <sudipto_> hongbin, do we also keep track of allocated ports? I am guessing we should?
03:47:45 <hongbin> sudipto_: i am not sure
03:48:08 <sudipto_> hongbin, otherwise, two containers can potentially overlap on the same port?
03:48:15 <sudipto_> same host port for example.
03:48:20 <hongbin> sudipto_: yes, that is a problem
03:48:42 <hongbin> sudipto_: however, you could use the -P option and let docker pick a port for you
03:49:19 <sudipto_> hongbin, kevin has put a -p with the zun command line...which i think is legit because docker might not just be the driver of the future...
03:49:57 <hongbin> sudipto_: +1
03:50:59 <kevinz> sudipto_:+1
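[editor's note: a hypothetical sketch of the bookkeeping sudipto_ raises above — if Zun exposes `-p`-style host-port bindings itself, something host-side must track which ports are taken so two containers cannot claim the same host port, and a `-P`-style request just picks any free one. Class and method names are illustrative, not actual Zun code; the default range is the usual ephemeral-port range.]

```python
class HostPortAllocator:
    """Track host-port bindings on one compute host."""

    def __init__(self, port_range=range(49152, 65536)):
        self._free = set(port_range)
        self._used = {}  # host_port -> container_id

    def bind(self, container_id, host_port=None):
        """Reserve a host port; pick any free one if none is given."""
        if host_port is None:
            # -P style: let the allocator choose (lowest free port).
            host_port = min(self._free)
        if host_port not in self._free:
            raise ValueError(f"host port {host_port} already allocated")
        self._free.remove(host_port)
        self._used[host_port] = container_id
        return host_port

    def release(self, host_port):
        """Return a port to the free pool when its container goes away."""
        self._used.pop(host_port, None)
        self._free.add(host_port)
```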
03:51:03 <sudipto_> hongbin, this too points to some kind of a host inventory, we need to build in zun
03:51:29 <hongbin> sudipto_: yes, that is true
03:51:30 <sudipto_> the cpu one would need that too, and so will many other host capabilities.
03:51:55 <hongbin> sudipto_: if we follow the openstack deployment, host will be under a management network, which is different from the tenant network
03:52:15 <hongbin> port mapping in this case means to expose a container to a management network....
03:52:43 <sudipto_> hongbin, good point...
03:52:45 <hongbin> i am not sure if this makes sense, however, if containers are running on a vm, this makes perfect sense
03:52:58 <sudipto_> hongbin, yup, thats a very valid point.
03:53:11 <sudipto_> another thing to brainstorm about :)
03:53:13 <pksingh> hongbin: +1
03:53:47 <hongbin> then, how to deal with this bp, table it? drop it? keep it?
03:54:30 <sudipto_> hongbin, come back do some research next week?
03:54:32 <hongbin> kevinz: what do you think?
03:54:49 <hongbin> sudipto_: ok, sure
03:54:55 <pksingh> yes that would be better
03:54:57 <hongbin> table this one
03:55:04 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/support-zun-copy Support zun copy
03:55:19 <kevinz> hongbin: +1for sudipto_
03:55:22 <hongbin> how about this one? a good/bad idea?
03:55:48 <pksingh> i think this is good
03:56:01 <hongbin> pksingh: ack
03:56:09 <sudipto_> is this docker cp?
03:56:16 <pksingh> i think k8s also supports this
03:56:21 <hongbin> sudipto_: i guess it is
03:56:29 <pksingh> yes sudipto_
03:56:34 <sudipto_> yeah, doesn't seem to harm. at all.
03:56:48 <hongbin> ok, i will approve it if there is no further objection
03:57:03 <hongbin> next one
03:57:08 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/kuryr-integration Kuryr integration
03:57:41 <hongbin> i am proposing to use kuryr for our native docker driver
03:57:49 <sudipto_> hongbin, +1
03:58:00 <hongbin> perhaps just an investigation for now, to see if this is possible
03:58:30 <pksingh> yes that would be good
03:58:34 <lakerzhou> hongbin, +1
03:58:40 <hongbin> currently, our nova driver has neutron integration (via nova capability), the native docker driver doesn't have any neutron integration yet
03:58:48 <pksingh> but it does not support multitenancy right?
03:59:11 <hongbin> pksingh: i hope it does
03:59:15 <hongbin> pksingh: will figure it out
03:59:26 <pksingh> hongbin: ok sure
03:59:29 <hongbin> sorry, run out of time
03:59:35 <hongbin> #topic Open Discussion
04:00:02 <hongbin> it looks like most of us agreed on the kuryr integration bp, then i will approve it
04:00:09 <hongbin> all, thanks for joining the meeting
04:00:12 <pksingh> sure
04:00:12 <hongbin> #endmeeting