16:01:58 #startmeeting containers
16:01:59 Meeting started Tue Jan 26 16:01:58 2016 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:02 The meeting name has been set to 'containers'
16:02:06 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-01-26_1600_UTC Our Agenda
16:02:11 #topic Roll Call
16:02:13 Adrian Otto
16:02:19 murali allada
16:02:22 o/
16:02:23 o/
16:02:24 o/
16:02:24 Ton Ngo
16:02:26 o/
16:02:27 o/
16:02:27 o/
16:02:28 o/
16:02:28 o/
16:02:32 o/
16:02:34 o/
16:02:55 o/
16:03:19 hello muralia1 hongbin rods rpothier Tango wanghua dane_leblanc HimanshuGarg madhuri houming suro-patz juggler and eghobo
16:03:41 #topic Announcements
16:03:54 1) Midcycle
16:04:17 Our midcycle will be Feb 18+19 in the SF Bay area (Sunnyvale planned)
16:04:33 the exact address is pending confirmation
16:05:16 2) On Feb 18 there will be an OpenStack Meetup in Sunnyvale, and I will be presenting Magnum. It would be great for those of you attending the midcycle to join us. It begins at 7:00 PM.
16:05:41 Any announcements from team members?
16:05:55 do you have a link to the meetup adrian, so we can rsvp
16:06:48 #link http://www.meetup.com/openstack/events/224950334/ OpenStack Meetup - Magnum
16:07:23 the group has expressed considerable excitement about this topic, so it should be fun.
16:07:57 ok, let's proceed through our agenda
16:08:01 #topic Review Action Items
16:08:03 (none)
16:08:11 Magnum UI Subteam Update (bradjones)
16:08:15 bradjones: yt?
16:08:19 I have one
16:08:29 wanghua: proceed
16:08:37 https://review.openstack.org/#/c/268852/
16:08:42 spec for trust
16:09:31 ok, that fits in this topic, so let's advance to it:
16:09:35 #topic Essential Blueprint Updates
16:09:57 wanghua: do you have a question for us to discuss in the context of our meeting today?
16:10:44 I want to get the feedback from us all, and get the spec merged soon
16:11:04 Some features need this
16:11:16 And they are blocked by this feature
16:11:46 ok, so you are asking for further discussion on the review
16:11:58 yes
16:12:11 wanghua: I think the spec has been improving steadily, looks pretty close to what we need.
16:12:14 ok, and is there any apparent obstacle that we can knock down right now?
16:13:09 Should we make it a goal to finalize and merge this week?
16:13:18 Tango: +1
16:13:32 I think the spec is ok now. If there is no objection, we can merge it this week
16:14:16 Several essential BPs/bugs need this feature, so it is a high priority
16:14:23 #action adrian_otto to champion consensus and merge of https://review.openstack.org/268852
16:14:36 let's look at the next
16:14:57 #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:15:12 Dimitry is not present today. Are there other team members who can comment on this?
16:16:14 ok, let's advance
16:16:31 #link https://blueprints.launchpad.net/magnum/+spec/magnum-troubleshooting-guide (Tango)
16:16:43 and related:
16:16:49 #link https://blueprints.launchpad.net/magnum/+spec/user-guide (Tango)
16:17:02 We have 2 sections merged for the troubleshooting guide, thanks everyone for the reviews
16:17:08 1 section under review
16:17:08 whoot!
16:17:19 and 2 sections under review for the guide
16:17:48 Please jump in and write, the team will help with the review and add anything missing
16:18:20 The sections on TLS and Mesos just need some refactoring from the existing docs
16:18:20 ok
16:19:08 Tango: any more remarks before looking at the next BP?
16:19:17 That's all for now
16:19:22 great
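
[Editor's note: for readers unfamiliar with the trust spec discussed above (https://review.openstack.org/#/c/268852/), a Keystone trust lets one user (the trustor) delegate roles to another (the trustee) so a service can act on the user's behalf without holding their credentials. A minimal sketch using python-keystoneclient follows; all endpoint values, names, and IDs are illustrative, not taken from the spec.]

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    # Authenticate as the delegating user (values are hypothetical).
    auth = v3.Password(auth_url="http://controller:5000/v3",
                       username="demo", password="secret",
                       project_name="demo",
                       user_domain_id="default",
                       project_domain_id="default")
    keystone = client.Client(session=session.Session(auth=auth))

    # Delegate the 'member' role on the project to a trustee user,
    # allowing it to impersonate the trustor for scoped operations.
    trust = keystone.trusts.create(
        trustor_user="<demo-user-id>",       # the delegating user
        trustee_user="<magnum-trustee-id>",  # acts on the trustor's behalf
        project="<project-id>",
        role_names=["member"],
        impersonation=True)
    print(trust.id)  # handed to the trustee for trust-scoped authentication
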
16:19:59 #link https://blueprints.launchpad.net/magnum/+spec/resource-quota (vilobhmm11)
16:20:13 adrian_otto : spec is under review https://review.openstack.org/#/c/266662/7
16:20:55 ok, I am updating the BP status to Needs Code Review
16:21:00 got good feedback, but a few discussions that were captured as part of http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html are being raised again and again
16:21:32 as in why we are not using the quota infrastructure provided by different components of the IaaS layer, like Nova
16:22:05 which I guess has already been discussed, and we came to a conclusion to impose quota/rate limiting for containers only in Mitaka
16:22:46 for bays and baymodels
16:22:59 vilobhmm11: Hopefully a clear explanation in the spec will serve as reference and lay the question to rest
16:23:20 Tango: yes, indeed
16:23:29 adrian_otto : bays are composed of nodes which involve resources from the IaaS layer
16:23:38 IMHO
16:24:02 vilobhmm11: also keep in mind if you are targeting just containers, you need to parse Kubernetes requests
16:24:22 Tango : I think it's mentioned pretty clearly in the spec, just that reviewers who were initially not involved in the discussion here, or don't want to go through http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html in detail, would want to raise such questions
16:24:43 perhaps condensing that into a summary may help
16:24:50 adrian_otto : sure
16:25:27 as a part of that summary, can you make sure to address the use case for containers
16:25:37 I think that is the biggest part that I'm missing for quotas
16:25:51 why an operator would want to limit the number of containers
16:26:16 especially since it isn't a real limit, since the user can avoid it by using the native api
16:26:17 it might be wise for us to produce a disposition on native API use vs. the containers resource in Magnum bays.
16:26:28 perhaps on the wiki so it can be referenced
16:26:45 I'm willing to draft that
16:27:28 coreyob : wanted to limit the rate of creation
16:27:37 and hence imposing quota on them
16:27:49 #action adrian_otto to produce a wiki page that explains Magnum's ability to support native API access and wrapped (limited) access to that functionality through our containers resource, with pointers to points of debate.
16:28:22 vilobhmm11 why limit containers? they're just processes? why would an operator want to limit the number of processes that a user runs?
16:28:40 vilobhmm11: what is your target COE? at least for the first version?
16:28:42 there may also be room for other tools, such as a transparent proxy for various native API tools that implements rate controls
16:29:00 coreyob : what other resources do you have in mind that should be limited or have quota imposed on them?
16:29:17 coreyob: it depends on the provider's billing model
16:29:22 eghobo : swarm
16:29:29 that's mentioned clearly in the spec
16:29:34 vilobhmm11 imo, the only things that operators would want to limit are "real" resources, things based in hardware limitations
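
[Editor's note: the debate above concerns a per-tenant quota on container count. A hypothetical sketch of such a check follows; this is an illustration only, not the design from the spec under review (https://review.openstack.org/#/c/266662/7).]

    class QuotaExceeded(Exception):
        """Raised when a create request would push a tenant past its limit."""

    def check_container_quota(current_count, limit, requested=1):
        # Reject the request before it ever reaches the bay's COE.
        if current_count + requested > limit:
            raise QuotaExceeded(
                "request would use %d of %d allowed containers"
                % (current_count + requested, limit))

    check_container_quota(current_count=98, limit=100, requested=1)   # passes
    # check_container_quota(current_count=100, limit=100)             # raises
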
16:29:40 if they have decided to make a container entity billable, then having a limit (basically a credit line) starts to make sense
16:30:20 +1
16:30:23 adrian_otto is that an actual use case or an imagined one? is there someone in the community that actually wants to bill based on the number of processes a user is running?
16:30:31 coreyob: that POV is logical if you assume that the provider will only bill based on the IaaS resources that compose the bay.
16:30:44 Another reason is to ensure the stability of the COE (by limiting the number of processes running on it)
16:31:11 hongbin it doesn't change that, because a single container can also run 1000 processes, or you can have 1000 containers running 1 process
16:31:16 coreyob: until recently that was one of Rackspace's use cases
16:31:20 It is not good to have too many processes running on a COE
16:32:20 coreyob: but a container could have a non-user-imposed memory limit, or another cgroup characteristic that limits capacity consumption.
16:32:26 adrian_otto so there isn't a real community member that actually expects to bill based on the number of containers
16:32:35 right, and in that case we're using the nova quotas
16:32:58 because the number of containers doesn't have anything to do with memory
16:33:18 coreyob: +1
16:33:23 coreyob : you don't want a malicious user to come and consume resources (as many as they want) from a bay. rate limiting will help here.
16:33:51 the intent was to emit the rpc usage message when containers start and stop, and let providers make their own choices about what's billable. I don't ask Magnum users about their product plans.
16:33:59 coreyob: but each container will consume resources in the general case
16:34:05 vilobhmm11 even if you have only a 1-container limit in the quota, you can spin up 1 million processes
16:34:23 hongbin yes, but the resources that operators care about are memory and cpu, and those are nova concepts
16:34:24 coreyob: no, that's actually not true
16:34:41 there is an effective minimum size of a process that works out to about 6 MB
16:35:03 so if there's a memory cgroup in use, that is actually an effective limit on process count
16:35:11 coreyob: the COE admin would care about memory and cpu on a bay
16:35:18 adrian_otto you're talking about memory though. limiting containers as a method of limiting memory is just re-implementing memory quotas in a bad way
16:35:56 uh, that's the basic memory limit feature in the kernel.
16:36:23 it's a cgroup parameter for memory
16:36:52 adrian_otto: just keep in mind that all COEs support CPU/memory limits/quotas, but only Kubernetes supports a limit on the # of pods
16:37:15 adrian_otto are you talking about the COE limiting the memory of a process?
16:37:18 of a container?
16:37:24 because users set that, not operators
16:37:26 right?
16:38:06 users can set it, but it's also possible to set up the containers endpoint to force a value that is not user specified
16:38:26 adrian_otto how is that possible if the users are root on the nodes via the COE native API?
16:38:37 so if you ask for 20GB, and it's set up to allow up to 2GB, you end up with the lower limit
16:38:47 with the native api this does not apply
16:38:55 adrian_otto: could you elaborate? do you mean a default quota?
16:39:01 this applies for service providers who don't want to give native api access
16:39:26 adrian_otto so as long as magnum has a native api available for bays, then quotas for the number of containers are pointless, right?
16:40:13 coreyob: it would not make sense to offer both. You'd choose one approach or the other depending on your interests.
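
[Editor's note: the exchange above refers to the kernel's memory cgroup limit, which Docker exposes as a per-container memory cap, and to an endpoint clamping user requests to an operator-set ceiling. A sketch with the Docker SDK for Python follows; the image, sizes, and clamping policy are illustrative assumptions, not anything specified in the meeting.]

    import docker

    client = docker.from_env()

    OPERATOR_CAP = 2 * 1024**3  # hypothetical 2 GB ceiling set by the provider

    def clamp(requested_bytes):
        # "if you ask for 20GB, and it's set up to allow up to 2GB,
        # you end up with the lower limit"
        return min(requested_bytes, OPERATOR_CAP)

    # With a memory cgroup in place, process count is effectively bounded
    # too: at ~6 MB minimum per process, 2 GB caps out near ~340 processes.
    container = client.containers.run(
        "busybox", "sleep 3600", detach=True,
        mem_limit=clamp(20 * 1024**3))  # user asked for 20 GB, gets 2 GB
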
16:40:36 for example, Rackspace only offers native API access with Carina
16:40:48 so we are not concerned with container quotas
16:41:45 adrian_otto let's say you weren't going to offer the native api as a part of magnum. is there a way to even do that? how does that work with ssh keys on the nodes, since users will still have root via that?
16:41:49 Providers who offer Magnum API access will be concerned with quotas, I guess
16:42:29 don't get me wrong, I get quotas for bays at least, since a bay corresponds to a "real" thing like an ip address
16:42:44 hongbin: Do any of your downstream consumers have plans to use the containers resource?
16:43:06 adrian_otto: no. We are in the requirement-gathering phase
16:43:27 so it might be something we could table, and revisit at a later time.
16:44:33 if nobody cares, then we could drop it from the wish list
16:44:56 adrian_otto : at least downstream at my workplace we have this requirement, and would prefer to impose quota at the container level
16:45:09 vilobhmm11 are you planning on somehow blocking the native api?
16:45:09 we are talking about memory, cpu, etc. resources on the bay … also we will take into consideration the resources (# of processes) created by the native api vs the magnum api
16:45:22 for historical context, the OpenStack community asked us to "make containers a first class resource in OpenStack", which is what we did.
16:45:43 coreyob : nope, we will take into consideration resources created via both the native as well as the magnum api
16:45:47 vilobhmm11 are you unable to limit cpu/memory based on nova quotas?
16:46:13 coreyob : nope, but nova quotas are more for compute
16:46:29 coreyob: that's handled by the flavor definition
16:46:31 what we are talking about is quota imposed by the COE admin
16:46:56 so yes, you can limit the total instance memory as a quota. We do that in our cloud.
16:47:19 so vilobhmm11 are you talking about limiting the memory a particular container can consume?
16:47:29 so if you are at your limit, and you have a giant instance, you can kill off that instance and make two more that are half as big each
16:48:26 coreyob : for now, in Mitaka, just going to start with the number of containers to be created
16:48:43 but if the need arises we can even do that, if we have a valid use case
16:48:45 vilobhmm11 so your use case is actually about limiting memory and cpu per container, and not about the number of containers
16:49:29 #topic Open Discussion
16:49:33 coreyob : the initial goal was to see how introducing quota will play out in magnum
16:49:53 so we picked containers as our initial resource
16:50:19 this is my essential problem with the spec. containers don't have a real use case other than "let's make quotas for things"
16:50:37 and decided to rate limit on the # of containers to be created; with the quota infrastructure that will be developed as part of https://review.openstack.org/#/c/266662/7 we can impose quota on other resources as well
16:50:43 i think we should change that spec to target bays or maybe bay-nodes first
16:51:03 something that has a real use case
16:52:15 folks, https://review.openstack.org/#/c/267134/ is up for review against BPs https://blueprints.launchpad.net/magnum/+spec/async-container-operations and https://blueprints.launchpad.net/magnum/+spec/async-rpc-api - will appreciate your review and feedback
16:53:09 suro-patz: Which commands now work async?
16:53:14 adrian_otto, hongbin : ^^ thoughts ? I think still continuing with the containers use case makes sense… if we want to cover any other resource we should decide now
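
[Editor's note: the async-container-operations work mentioned above hinges on the difference between a blocking RPC call and a fire-and-forget cast. A minimal sketch of that general distinction using oslo.messaging follows; the topic name, method name, and payload are illustrative assumptions, not the actual patch (https://review.openstack.org/#/c/267134/).]

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_rpc_transport(cfg.CONF)
    target = messaging.Target(topic='magnum-conductor')
    client = messaging.RPCClient(transport, target)

    ctxt = {}  # request context
    payload = {"name": "c1", "image": "busybox"}

    # Synchronous: the caller blocks until the conductor returns a result.
    result = client.call(ctxt, 'container_create', container=payload)

    # Asynchronous: returns immediately; the conductor processes the message
    # in the background, which is where a 15s-vs-150s gap can come from.
    client.cast(ctxt, 'container_create', container=payload)
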
16:53:43 with this, creation of 10 containers in succession on a swarm COE on a devstack (4 vcpu, 24G RAM, 60G HDD) takes 15s, compared to ~150s in sync mode
16:53:46 vilobhmm11: bay and baymodel should be the first resources for quotas
16:54:04 followed by containers, if any true use cases are identified for it
16:54:19 Tango: container-create/delete/start/stop/pause/unpause/reboot
16:54:29 suro-patz: I love it!!
16:54:31 vilobhmm11: I think container is a good candidate from my side
16:54:32 adrian_otto can we limit it to just bays? or do you have a use case for baymodels too? baymodels are just rows in a database, they're not a "real" resource
16:54:35 adrian_otto : sure
16:55:09 coreyob: operational experience has taught us that no resource should be unlimited
16:55:11 vilobhmm11: IMO, if we want to limit containers, it should be on a per-bay basis
16:55:11 adrian_otto: thanks!
16:55:26 even if the limit is really high, there needs to be an upper boundary that can be adjusted.
16:55:31 suro-patz: +1
16:55:53 Newly released amusing/informative video about Git here: https://youtu.be/jr4zQc3g1Ts
16:56:05 adrian_otto that sounds more like a configuration option for magnum as a whole (just so that the database doesn't break), but not a tenant-specific limit
16:56:11 coreyob: this point of view assumes that any user may be hostile.
16:56:39 global limits suck, but are better than nothing
16:56:46 this was covered in the email discussion
16:57:20 limits per tenant are much better, especially where the value can be adjusted without any system-level reconfiguration
16:57:23 coreyob : would recommend going through http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html
16:57:53 ok, we are approaching the end of our scheduled time for today's meeting
16:58:41 Our next meeting is on Groundhog Day, 2016-02-02 at 1600 UTC
16:58:51 any final remarks before we wrap up?
16:59:11 what if the meeting keeps repeating? :)
16:59:24 that would suck
16:59:36 have a great day everyone, thanks for attending!
16:59:37 lol
16:59:40 #endmeeting
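
[Editor's note: a closing illustration of the point made at 16:55-16:57, that per-tenant limits adjustable at runtime beat a single global configuration value. All names and numbers here are hypothetical.]

    GLOBAL_DEFAULT_LIMITS = {"containers": 1000}  # "even if the limit is really high"

    tenant_overrides = {}  # imagine this table backed by the database

    def effective_limit(tenant_id, resource):
        # Per-tenant value when set; otherwise the global default applies.
        return tenant_overrides.get(tenant_id, {}).get(
            resource, GLOBAL_DEFAULT_LIMITS[resource])

    # An operator raises one tenant's limit with no service reconfiguration:
    tenant_overrides["tenant-a"] = {"containers": 5000}
    assert effective_limit("tenant-a", "containers") == 5000
    assert effective_limit("tenant-b", "containers") == 1000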