16:01:58 <adrian_otto> #startmeeting containers
16:01:59 <openstack> Meeting started Tue Jan 26 16:01:58 2016 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:02 <openstack> The meeting name has been set to 'containers'
16:02:06 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-01-26_1600_UTC Our Agenda
16:02:11 <adrian_otto> #topic Roll Call
16:02:13 <adrian_otto> Adrian Otto
16:02:19 <muralia1> murali allada
16:02:22 <hongbin> o/
16:02:23 <rods> o/
16:02:24 <rpothier> o/
16:02:24 <Tango> Ton Ngo
16:02:26 <wanghua> o/
16:02:27 <dane_leblanc> o/
16:02:27 <HimanshuGarg> o/
16:02:28 <madhuri> o/
16:02:28 <houming> o/
16:02:32 <suro-patz> o/
16:02:34 <juggler> o/
16:02:55 <eghobo> o/
16:03:19 <adrian_otto> hello muralia1 hongbin rods rpothier Tango wanghua dane_leblanc HimanshuGarg madhuri houming suro-patz juggler and eghobo
16:03:41 <adrian_otto> #topic Announcements
16:03:54 <adrian_otto> 1) Midcycle
16:04:17 <adrian_otto> Our midcycle will be Feb 18+19 in the SF Bay area (Sunnyvale planned)
16:04:33 <adrian_otto> the exact address is pending confirmation
16:05:16 <adrian_otto> 2) On Feb 18 there will be an OpenStack Meetup in Sunnyvale, and I will be presenting Magnum. It would be great for those of you attending the midcycle to join us. It begins at 7:00 PM.
16:05:41 <adrian_otto> Any announcements from team members?
16:05:55 <muralia1> do you have a link to the meetup, adrian, so we can RSVP?
16:06:48 <adrian_otto> #link http://www.meetup.com/openstack/events/224950334/ OpenStack Meetup - Magnum
16:07:23 <adrian_otto> the group has expressed considerable excitement about this topic, so it should be fun.
16:07:57 <adrian_otto> ok, let's proceed through our agenda
16:08:01 <adrian_otto> #topic Review Action Items
16:08:03 <adrian_otto> (none)
16:08:11 <adrian_otto> Magnum UI Subteam Update (bradjones)
16:08:15 <adrian_otto> bradjones: yt?
16:08:19 <wanghua> I have one
16:08:29 <adrian_otto> wanghua: proceed
16:08:37 <wanghua> https://review.openstack.org/#/c/268852/
16:08:42 <wanghua> spec for trust
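For context, the spec under review proposes having Magnum use Keystone trusts so services in a bay can act on the user's behalf. A minimal sketch of creating a trust with python-keystoneclient (endpoint, IDs, and role names here are illustrative, not taken from the spec):

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# Authenticate as the bay owner (the trustor).
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='bay-owner', password='secret',
                   project_name='demo',
                   user_domain_id='default', project_domain_id='default')
ks = client.Client(session=session.Session(auth=auth))

# Delegate a limited set of roles to the Magnum service user (the trustee),
# which can then impersonate the trustor when the bay calls other services.
trust = ks.trusts.create(trustor_user='<bay-owner-id>',
                         trustee_user='<magnum-service-user-id>',
                         project='<project-id>',
                         role_names=['Member'],
                         impersonation=True)
print(trust.id)
```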
16:09:31 <adrian_otto> ok, that fits in this topic, so let's advance to it:
16:09:35 <adrian_otto> #topic Essential Blueprint Updates
16:09:57 <adrian_otto> wanghua: do you have a question for us to discuss in the context of our meeting today?
16:10:44 <wanghua> I want to get feedback from you all, and get the spec merged soon
16:11:04 <wanghua> Some features need this
16:11:16 <wanghua> And they are blocked by this feature
16:11:46 <adrian_otto> ok, so you are asking for further discussion on the review
16:11:58 <wanghua> yes
16:12:11 <Tango> wanghua: I think the spec has been improving steadily, looks pretty close to what we need.
16:12:14 <adrian_otto> ok, and is there any apparent obstacle that we can knock down right now?
16:13:09 <Tango> Should we make it a goal to finalize and merge this week?
16:13:18 <hongbin> Tango: +1
16:13:32 <wanghua> I think the spec is ok now. If there is no objection, we can merge it this week
16:14:16 <hongbin> Several essential BPs/bugs need this feature, so it is a high priority
16:14:23 <adrian_otto> #action adrian_otto to champion consensus and merge of https://review.openstack.org/268852
16:14:36 <adrian_otto> let's look at the next
16:14:57 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:15:12 <adrian_otto> Dimitry is not present today. Are there other team members who can comment on this?
16:16:14 <adrian_otto> ok, let's advance
16:16:31 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-troubleshooting-guide (Tango)
16:16:43 <adrian_otto> and related:
16:16:49 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/user-guide (Tango)
16:17:02 <Tango> We have 2 sections merged for the troubleshooting guide, thanks everyone for the reviews
16:17:08 <Tango> 1 section under review
16:17:08 <adrian_otto> whoot!
16:17:19 <Tango> and 2 sections under review for the guide
16:17:48 <Tango> Please jump in and write; the team will help with the review and add anything missing
16:18:20 <Tango> The sections on TLS and Mesos just need some refactoring of the existing docs
16:18:20 <adrian_otto> ok
16:19:08 <adrian_otto> Tango: any more remarks before looking at the next BP?
16:19:17 <Tango> That's all for now
16:19:22 <adrian_otto> great
16:19:59 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/resource-quota (vilobhmm11)
16:20:13 <vilobhmm11> adrian_otto : spec is under review https://review.openstack.org/#/c/266662/7
16:20:55 <adrian_otto> ok, I am updating the BP status to Needs Code Review
16:21:00 <vilobhmm11> got good feedback, but a few discussion points that were captured in http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html are being raised again and again
16:21:32 <vilobhmm11> as in, why are we not using the quota infrastructure provided by IaaS-layer components like Nova
16:22:05 <vilobhmm11> which I guess has already been discussed, and we came to the conclusion to impose quota/rate limiting for containers only in Mitaka
16:22:46 <adrian_otto> for bays and baymodels
16:22:59 <Tango> vilobhmm11: Hopefully a clear explanation in the spec will serve as reference and lay the question to rest
16:23:20 <adrian_otto> Tango: yes, indeed
16:23:29 <vilobhmm11> adrian_otto : bays are composed of nodes, which involve resources from the IaaS layer
16:23:38 <vilobhmm11> IMHO
16:24:02 <eghobo> vilobhmm11: also keep in mind that if you are targeting just containers, you need to parse Kubernetes requests
16:24:22 <vilobhmm11> Tango : I think it's mentioned pretty clearly in the spec; it's just that reviewers who were not initially involved in the discussion here, or who don't want to go through http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html in detail, would raise such questions
16:24:43 <adrian_otto> perhaps condensing that into a summary may help
16:24:50 <vilobhmm11> adrian_otto : sure
16:25:27 <coreyob> as a part of that summary, can you make sure to address the use case for containers?
16:25:37 <coreyob> I think that is the biggest part that I'm missing for quotas
16:25:51 <coreyob> why an operator would want to limit the number of containers
16:26:16 <coreyob> especially since it isn't a real limit, since the user can avoid it by using the native api
16:26:17 <adrian_otto> it might be wise for us to produce a disposition on native API use vs. the containers resource in Magnum bays.
16:26:28 <adrian_otto> perhaps on the wiki so it can be referenced
16:26:45 <adrian_otto> I'm willing to draft that
16:27:28 <vilobhmm11> coreyob : we wanted to limit the rate of creation
16:27:37 <vilobhmm11> and hence are imposing a quota on them
16:27:49 <adrian_otto> #action adrian_otto to produce a wiki page that explains Magnum's ability to support native API access and wrapped (limited) access to that functionality through our containers resource, with pointers to points of debate.
16:28:22 <coreyob> vilobhmm11 why limit containers? they're just processes? why would an operator want to limit the number of processes that a user runs?
16:28:40 <eghobo> vilobhmm11: what is your target COE? at least for first version?
16:28:42 <adrian_otto> there may also be room for other tools, such as a transparent proxy for various native API tools that implements rate controls
16:29:00 <vilobhmm11> coreyob : what other resources do you have in mind that should be limited or have quotas imposed on them?
16:29:17 <adrian_otto> coreyob: it depends on the provider's billing model
16:29:22 <vilobhmm11> eghobo : swarm
16:29:29 <vilobhmm11> thats mentioned clearly in the spec
16:29:34 <coreyob> vilobhmm11 imo, the only things that operators would want to limit are "real" resources: things based on hardware limitations
16:29:40 <adrian_otto> if they have decided to make a container entity billable, then having a limit (basically a credit line) starts to make sense
16:30:20 <vilobhmm11> +1
16:30:23 <coreyob> adrian_otto is that an actual use case or an imagined one? is there someone in the community that actually wants to bill based on the number of processes a user is running?
16:30:31 <adrian_otto> coreyob: that POV is logical if you assume that the provider will only bill based on the IaaS resources that compose the bay.
16:30:44 <hongbin> Another reason is to ensure the stability of the COE (by limiting the number of processes running on it)
16:31:11 <coreyob> hongbin it doesn't change that because a single container can also run 1000 processes or you can have 1000 containers running 1 process
16:31:16 <adrian_otto> coreyob: until recently that was one of Rackspace's use cases
16:31:20 <hongbin> It is not good to have too many processes running on a COE
16:32:20 <adrian_otto> coreyob: but a container could have a non-user imposed memory limit, or other cgroup characteristic that limits capacity consumption.
16:32:26 <coreyob> adrian_otto so there isn't a real community member that actually expects to bill based on number of containers
16:32:35 <coreyob> right and in that case we're using the nova quotas
16:32:58 <coreyob> because number of containers doesn't have anything to do with memory
16:33:18 <eghobo> coreyob: +1
16:33:23 <vilobhmm11> coreyob : you don't want a malicious user to come and consume as many resources as they want from a bay... rate limiting will help here.
16:33:51 <adrian_otto> the intent was to emit the rpc usage message when containers start and stop, and let providers make their own choices about what's billable. I don't ask Magnum users about their product plans.
16:33:59 <hongbin> coreyob: but each container will consume resources in general case
16:34:05 <coreyob> vilobhmm11 even with a quota limit of only 1 container, you can spin up 1 million processes
16:34:23 <coreyob> hongbin yes but the resources that operators care about are memory and cpu, and those are nova concepts
16:34:24 <adrian_otto> coreyob: no, that's actually not true
16:34:41 <adrian_otto> there is an effective minimum size of a process that works out to about 6mb
16:35:03 <adrian_otto> so if there's a memory cgroup in use, that is actually an effective limit on process count
16:35:11 <hongbin> coreyob: the COE admin would care about memory and cpu on a bay
16:35:18 <coreyob> adrian_otto you're talking about memory though. limiting containers as a method of limiting memory is just re-implementing memory quotas in a bad way
16:35:56 <adrian_otto> uh, that's the basic memory limit feature in the kernel.
16:36:23 <adrian_otto> it's a cgroup parameter for memory
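To illustrate the kernel mechanism adrian_otto is describing, a minimal sketch using the cgroup v1 memory controller (the group name is hypothetical; requires root):

```python
import os

CGROUP = '/sys/fs/cgroup/memory/demo-container'  # hypothetical group name
os.makedirs(CGROUP, exist_ok=True)

# Cap the group at 2 GiB. Given a practical floor of roughly 6 MB per
# process, this also bounds how many processes can run in the group.
with open(os.path.join(CGROUP, 'memory.limit_in_bytes'), 'w') as f:
    f.write(str(2 * 1024 ** 3))

# Move the current process into the group; the limit then applies to it
# and to all of its children.
with open(os.path.join(CGROUP, 'tasks'), 'w') as f:
    f.write(str(os.getpid()))
```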
16:36:52 <eghobo> adrian_otto: just keep in mind that all COEs support CPU/memory limits/quotas, but only Kubernetes supports a limit on the # of pods
16:37:15 <coreyob> adrian_otto are you talking about the COE limiting the memory of a process?
16:37:18 <coreyob> of a container?
16:37:24 <coreyob> because users set that not operators
16:37:26 <coreyob> right?
16:38:06 <adrian_otto> users can set it, but it's also possible to set up the containers endpoint to force a value that is not user specified
16:38:26 <coreyob> adrian_otto how is that possible if the users are root on the nodes via the COE native API?
16:38:37 <adrian_otto> so if you ask for 20GB, and it's set up to allow up to 2GB, you end up with the lower limit
16:38:47 <adrian_otto> with native api this does not apply
16:38:55 <eghobo> adrian_otto: could you elaborate? do you mean default quota?
16:39:01 <adrian_otto> this applies for service providers who don't want to give native api access
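A sketch (not actual Magnum code) of the clamping behavior adrian_otto describes, where the containers endpoint enforces an operator-set ceiling on requested memory; MAX_MEMORY_MB is a hypothetical operator setting:

```python
MAX_MEMORY_MB = 2048  # hypothetical operator-configured ceiling (2 GB)

def effective_memory_limit(requested_mb=None):
    """Return the memory limit actually applied to a container."""
    if requested_mb is None:
        return MAX_MEMORY_MB  # force a default when nothing is specified
    # A 20 GB request against a 2 GB ceiling ends up with the lower limit.
    return min(requested_mb, MAX_MEMORY_MB)

assert effective_memory_limit(20 * 1024) == 2048
assert effective_memory_limit(512) == 512
```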
16:39:26 <coreyob> adrian_otto so as long as magnum has a native api available for bays, then quotas for number of containers is pointless right?
16:40:13 <adrian_otto> coreyob: it would not make sense to offer both. You'd choose one approach or the other depending on your interests.
16:40:36 <adrian_otto> for example, Rackspace only offers native API access with Carina
16:40:48 <adrian_otto> so we are not concerned with container quotas
16:41:45 <coreyob> adrian_otto let's say you weren't going to offer native api as a part of magnum. is there a way to even do that? how does that work with ssh keys on the nodes since users will still have root via that?
16:41:49 <hongbin> Providers who offer magnum API access will be concerned with quotas, I guess
16:42:29 <coreyob> don't get me wrong, I get quotas for bays at least, since a bay corresponds to a "real" thing like an IP address
16:42:44 <adrian_otto> hongbin: Do any of your downstream consumers have plans to use the containers resource?
16:43:06 <hongbin> adrian_otto: no. We are in the requirements-gathering phase
16:43:27 <adrian_otto> so it might be something we could table, and revisit at a later time.
16:44:33 <adrian_otto> if nobody cares, then we could drop it from the wish list
16:44:56 <vilobhmm11> adrian_otto : at least downstream at my workplace we have this requirement, and we would prefer to impose quota at the container level
16:45:09 <coreyob> vilobhmm11 are you planning on somehow blocking the native api?
16:45:09 <vilobhmm11> we are talking about memory, cpu, etc. resources on the bay … also we will take into consideration the resources (# of processes) created via the native api vs the magnum api
16:45:22 <adrian_otto> for historical context, the OpenStack community asked us to "make containers a first class resource in OpenStack", which is what we did.
16:45:43 <vilobhmm11> coreyob : nope, we will take into consideration resources created via both the native and the magnum api
16:45:47 <coreyob> vilobhmm11 are you unable to limit cpu/memory based on nova quotas?
16:46:13 <vilobhmm11> coreyob : nope, but nova quotas are more for compute
16:46:29 <adrian_otto> coreyob: that's handled by flavor definition
16:46:31 <vilobhmm11> what we are talking about is quota imposed by COE admin
16:46:56 <adrian_otto> so yes, you can limit the total instance memory as a quota. We do that in our cloud.
16:47:19 <coreyob> so vilobhmm11 are you talking about limiting the memory a particular container can consume?
16:47:29 <adrian_otto> so if you are at your limit, and you have a giant instance, you can kill off that instance and make two more that are half as big each
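For reference, a sketch of the Nova-side limit adrian_otto refers to, capping a tenant's total instance RAM via python-novaclient (credentials and IDs illustrative):

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='admin', password='secret', project_name='admin',
                   user_domain_id='default', project_domain_id='default')
nova = client.Client('2', session=session.Session(auth=auth))

# Cap the tenant at 64 GB of total instance RAM; a user at the limit can
# kill off a giant instance and boot two half-sized ones in its place.
nova.quotas.update('<tenant-id>', ram=65536)
```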
16:48:26 <vilobhmm11> coreyob : for now, in Mitaka, we're just going to start with the number of containers created
16:48:43 <vilobhmm11> but if need arises we can even do that if we have a valid use case
16:48:45 <coreyob> vilobhmm11 so your use case is actually about limiting memory and cpu per container, and not about the number of containers
16:49:29 <adrian_otto> #topic Open Discussion
16:49:33 <vilobhmm11> coreyob : the initial goal was to see how introducing quotas would play out in magnum
16:49:53 <vilobhmm11> so we picked containers as our initial resource
16:50:19 <coreyob> this is my essential problem with the spec. containers doesn't have a real use case other than "let's make quotas for things"
16:50:37 <vilobhmm11> and decided to rate limit the # of containers created; with the quota infrastructure that will be developed as part of https://review.openstack.org/#/c/266662/7 we can impose quotas on other resources as well
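A rough sketch of the per-tenant check such a quota implies for the containers resource (class and method names here are hypothetical, not taken from the spec under review):

```python
class QuotaExceeded(Exception):
    """Raised when a tenant attempts to exceed its container quota."""


def check_container_quota(db, project_id):
    """Call before creating a container; raise if the tenant is at its limit."""
    limit = db.get_quota(project_id, resource='containers')  # per-tenant limit
    in_use = db.count_containers(project_id)                 # current usage
    if in_use + 1 > limit:
        raise QuotaExceeded('project %s is at its container quota (%d/%d)'
                            % (project_id, in_use, limit))
```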
16:50:43 <coreyob> i think we should change that spec to target bays or maybe bay-nodes first
16:51:03 <coreyob> something that has a real use case
16:52:15 <suro-patz> folks, https://review.openstack.org/#/c/267134/ is up for review against BPs https://blueprints.launchpad.net/magnum/+spec/async-container-operations and https://blueprints.launchpad.net/magnum/+spec/async-rpc-api - I would appreciate your review and feedback
16:53:09 <Tango> suro-patz: Which commands now work asynch?
16:53:14 <vilobhmm11> adrian_otto, hongbin : ^^ thoughts? I think continuing with the containers use case still makes sense… if we want to cover any other resource, we should decide now
16:53:43 <suro-patz> with this, the creation of 10 containers in succession on a swarm COE on devstack (4 vCPU, 24G RAM, 60G HDD) takes ~15s, compared to ~150s in sync mode
16:53:46 <adrian_otto> vilobhmm11: bay and baymodel should be the first resources for quotas
16:54:04 <adrian_otto> followed by containers if any true use cases are identified for it
16:54:19 <suro-patz> Tango: container-create/delete/start/stop/pause/unpause/reboot
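The async behavior presumably comes from using non-blocking RPC for these operations. A minimal oslo.messaging sketch of the distinction (topic and method names illustrative, not from the patch under review):

```python
from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='magnum-conductor')  # illustrative topic
rpc = messaging.RPCClient(transport, target)
ctxt = {}  # request context, simplified

# Synchronous: blocks until the conductor finishes the operation.
# result = rpc.call(ctxt, 'container_create', name='c1')

# Asynchronous: enqueue the operation and return immediately, which is why
# 10 successive creates drop from ~150s to ~15s.
rpc.cast(ctxt, 'container_create', name='c1')
```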
16:54:29 <adrian_otto> suro-patz: I love it!!
16:54:31 <hongbin> vilobhmm11: I think container is a good candidate from my side
16:54:32 <coreyob> adrian_otto can we limit it to just bays? or do you have a use case for baymodels too? baymodels are just rows in a database, they're not a "real" resource
16:54:35 <vilobhmm11> adrian_otto : sure
16:55:09 <adrian_otto> coreyob: operational experience has taught us that no resource should be unlimited
16:55:11 <hongbin> vilobhmm11: IMO, if we want to limit containers, it should be per-bay basis
16:55:11 <suro-patz> adrian_otto: thanks!
16:55:26 <adrian_otto> even if the limit is really high, there needs to be an upper boundary that can be adjusted.
16:55:31 <Tango> suro-patz: +1
16:55:53 <juggler> Newly released amusing/informative video about Git here: https://youtu.be/jr4zQc3g1Ts
16:56:05 <coreyob> adrian_otto that sounds more like a configuration option for magnum as a whole (just so that the database doesn't break) but not a tenant specific limit
16:56:11 <adrian_otto> coreyob: this point of view assumes that any user may be hostile.
16:56:39 <adrian_otto> global limits suck, but are better than nothing
16:56:46 <adrian_otto> this was covered in the email discussion
16:57:20 <adrian_otto> limits per tenant are much better, especially when the value can be adjusted without any system-level reconfiguration
16:57:23 <vilobhmm11> coreyob : would recommend to go through http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html
16:57:53 <adrian_otto> ok, we are approaching the end of our scheduled time for today's meeting
16:58:41 <adrian_otto> Our next meeting is on Groundhog Day 2016-02-02 at 1600 UTC
16:58:51 <adrian_otto> any final remarks before we wrap up?
16:59:11 <juggler> what if the meeting keeps repeating? :)
16:59:24 <adrian_otto> that would suck
16:59:36 <adrian_otto> have a great day everyone, thanks for attending!
16:59:37 <juggler> lol
16:59:40 <adrian_otto> #endmeeting