22:00:42 #startmeeting containers
22:00:42 Meeting started Tue Jun 2 22:00:42 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:45 The meeting name has been set to 'containers'
22:00:58 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-06-02_2200_UTC Our Agenda
22:01:01 #topic Roll Call
22:01:05 Adrian Otto
22:01:05 Andrew Melton
22:01:12 Rob Pothier
22:01:13 Perry Rivera
22:01:14 OTSUKA, Motohiro
22:01:14 Ton Ngo
22:01:15 Fawad Khaliq
22:01:16 Bich Le
22:01:17 o/
22:01:20 Egor Guz
22:01:24 Tom Cammann
22:01:31 o/
22:01:36 Dane LeBlanc
22:01:37 Thomas Maddox
22:01:47 Madhuri Kumari
22:01:53 Antoni Segura
22:01:58 Martin Falatic
22:02:19 hello apmelton rpothier juggler yuanying-alt Tango|2 fawadkhaliq bich_le_ hongbin eghobo tcammann mfalatic dane_leblanc thomasem madhuri_ apuimedo mfalatic
22:02:30 I think that sets the record for the longest hello reply
22:02:43 adrian_otto: :)
22:02:46 !
22:02:50 o/
22:02:54 Hello
22:03:08 hello jay-lau-513
22:03:11 Brenden Blanco
22:03:26 adrian_otto hi
22:03:36 Hi all
22:03:36 hi brendenblanco
22:04:30 hi tobe
22:04:37 #topic Announcements
22:04:56 1) I will be on a vacation day next week (camping, limited email probably)
22:05:16 so I will be seeking an alternate chair for next week at 1600 UTC
22:05:41 please PRIVMSG me for that, and we'll mark the agenda with the name of the pro-tem chair
22:06:05 2) We now have a low-hanging-fruit tag referenced on our contributing page
22:06:11 #topic https://wiki.openstack.org/wiki/Magnum/Contributing Contributing to Magnum
22:06:25 cool
22:06:30 #link https://bugs.launchpad.net/magnum/+bugs?field.tag=low-hanging-fruit Low Hanging Fruit
22:06:36 so if you are a new contributor, check that
22:06:41 Have a great time adrian_otto :)
22:06:44 I'll do my best to try to keep that fed
22:06:50 thanks tobe
22:07:09 also, I linked the "7 Habits" talk from the summit on the Contributing page
22:07:21 so if you missed that, I suggest you check that out.
22:07:34 ^^very informative
22:07:47 o/
22:07:58 o/
22:08:04 that concludes our prepared announcements. Other announcements from team members today?
22:08:32 remember that milestone 11 is set for June 25
22:08:36 survey announcement returns were low... about 3 respondents. would you like to hear feedback so far?
22:08:55 juggler: let's revisit that in Open Discussion
22:08:59 you will be first up
22:09:14 Thanks adrian_otto
22:09:25 np!
22:09:28 #topic Review Action Items
22:09:32 (none)
22:09:36 I have one
22:09:40 #link https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type
22:09:42 #topic Blueprint/Task Review
22:09:52 hongbin, proceed.
22:10:15 There are two questions
22:10:27 1. Which OS for hosting mesos
22:10:36 2. Which Marathon object to manage
22:10:53 what is normally used, or does it express any preference at all?
22:11:09 For #1, there are several choices: Ubuntu, CentOS, CoreOS
22:11:19 hogepodge: we use Ubuntu and CentOS
22:11:34 I don't think a build for CoreOS exists
22:12:01 I don't think CoreOS makes sense, unless it's actually a container image that gets run
22:12:02 hongbin Ubuntu and CentOS might be better
22:12:04 We prefer CentOS for production, and Ubuntu for development
22:12:12 it was discussed on the mailing list but I don't remember any actions from it
22:12:15 the mesosphere also have some examples about that
22:12:26 CoreOS works by using a container image for mesos
22:12:46 that approach should work for all three then
22:13:09 really, we would like to run Mesos in a container?
22:13:10 and would be consistent with how we deploy Swarm bays
22:13:40 the Mesos container would have access to the Docker daemon on the host
22:13:50 so containers would not necessarily need to be nested
22:14:12 That is true
22:14:14 i think mesos needs to be installed the same way as kub
22:14:24 adrian_otto The problem is that we need to hardcode the docker server host into the heat template
22:14:46 may I please see by raise of hands (o/) who is running Mesos today?
22:14:52 so we'd pass the docker socket to the mesos container?
22:14:58 +1
22:15:05 let's at least get all the input from those users and make something initially that works for them
22:15:06 o/
22:15:10 and we can iterate on that
22:15:12 o/
22:15:15 o/
22:15:48 o/ (not in production. only in lab)
22:16:17 ok, so jay-lau-513 eghobo and hongbin and amit213 can the three of you discuss this in #openstack-containers when we adjourn and decide on what our bay templates should do to begin with
22:16:27 we can have alternate templates in our contrib directory
22:16:46 so maybe you pick a CentOS or Ubuntu one first, and then add others in contrib
22:16:59 and allow operators to give us feedback on which should be the default
22:17:06 fair?
22:17:16 +1
22:17:21 +1
22:17:25 sure
22:17:35 yep, we don't use atomic for example because of a lack of tools
22:17:36 +1
22:17:58 ok, cool, so hongbin, on to question #2
22:18:10 +1
22:18:12 There are two options for #2
22:18:18 "Which marathon object to manage"
22:18:28 1. Import Marathon App to Magnum
22:18:37 hongbin I saw your option in the ML, sorry that I forgot to reply
22:18:46 2. Implement Magnum container by using Marathon app
22:19:01 jay-lau-513: NP
22:19:08 hongbin: could you elaborate on what you mean by Marathon object?
22:19:16 A Marathon app is like a container
22:19:25 but with more features
22:19:42 such as scheduling constraints, the number of replicas
22:19:55 not sure I understand, Marathon is a framework with a web ui
22:20:00 eghobo: kubernetes has pods. Mesos has other objects that we may or may not want to model in Magnum
22:20:09 hongbin: could it be implemented as a container like the swarm conductor uses
22:20:20 just with extra, mesos-only, params?
22:20:20 eghobo: yes, it has a REST API and UI
22:20:44 adrian_otto: mesos has only application
22:20:57 hongbin the pre-condition is that you may want to install Marathon first if you want to use its api
22:21:12 eghobo: ok, got it. (I do not pretend to be a Mesos expert)
22:21:23 apmelton: That would be possible.
22:21:40 I know we're trying to expose scheduling for swarm as well
22:21:52 perhaps you can work with diga to abstract it enough that it works for both
22:22:06 I think we can start an app through Marathon the same way as Kub
22:22:26 jay-lau-513: Yes, we install Marathon first
22:23:24 maybe the right approach here is to have a bay template for Mesos+Marathon, and alternate templates in contrib, and don't add any new resource types in Magnum at all for Mesos
22:23:29 I'm sorry. Have we decided to just support Marathon?
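The "Marathon app as a container with extra, mesos-only params" idea discussed above can be sketched roughly as follows. The field names mirror Marathon's public app-definition schema, but the helper and payload are illustrative only, not Magnum code, and the endpoint mentioned in the comment is hypothetical wiring.

```python
# Illustrative sketch: a Marathon "app" is roughly a container definition
# plus extra scheduling fields (instances, constraints). Field names follow
# Marathon's /v2/apps schema; treat this payload as an example only.

def marathon_app(app_id, image, mem_mb=256, cpus=0.5, instances=1,
                 constraints=None):
    """Build a Marathon app definition for a Docker container."""
    return {
        "id": app_id,
        "container": {
            "type": "DOCKER",
            "docker": {"image": image, "network": "BRIDGE"},
        },
        "cpus": cpus,
        "mem": mem_mb,
        "instances": instances,            # number of replicas
        "constraints": constraints or [],  # e.g. [["hostname", "UNIQUE"]]
    }

app = marathon_app("redis-demo", "redis:latest", instances=3,
                   constraints=[["hostname", "UNIQUE"]])
# A conductor could POST this JSON to the bay's Marathon endpoint,
# e.g. http://<marathon-host>:8080/v2/apps (hypothetical wiring).
```

This is the shape of the "extra params" apmelton asks about: the container part looks like any Docker container, while instances and constraints are the Marathon-only scheduling additions.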
22:23:41 tobe: no, there are just multiple choices
22:23:55 we will have docs that show how to specify an alternate template
22:23:58 adrian_otto: +1 for Marathon
22:24:00 Yes, that's what I mean
22:24:06 tobe: We decided to let magnum talk to marathon
22:24:08 so you can get different arrangements of Mesos
22:24:41 adrian_otto: OK
22:24:43 adrian_otto Coming to my original question: how do we enable end users to use this bay via the magnum API?
22:25:17 jay-lau-513: I think we'd prefer users to use native clients
22:25:24 jay-lau-513: I'm suggesting that to begin with, as a first iteration, we only support magnum baymodel-create and magnum bay-create for mesos bay types
22:25:33 and then use native clients past that point
22:25:41 adrian_otto apmelton fair enough
22:25:52 sounds good
22:25:54 Marathon is most used to schedule containers upon mesos. It's reasonable to support it.
22:25:55 ok, i see. thanks
22:26:00 and as the use cases become clear to us, then follow up on that to add additional Mesos resources as needed (application, whatever)
22:26:14 ok
22:26:23 But there are some other frameworks that can schedule containers upon mesos. We should be extensible for them
22:26:37 and base that on input we get from those using the Mesos bays
22:26:49 tobe, yes that is our intent
22:27:24 hongbin, was that the input you were hoping for?
22:27:32 yes
22:27:34 Currently we don't have a use case for the mesos bay type, right?
22:27:47 we have not documented that in the Magnum spec yet
22:28:08 yuanying-alt: we should discuss adding that in a way we can all agree on
22:28:12 we have people running mesos and they want to have magnum manage the clusters, that sounds like a use case
22:28:15 to me at least
22:28:42 yes, magnum would be handy for scaling out the bays that run the mesos clusters
22:29:14 apmelton: we have a similar use case - playground for dev teams
22:29:38 this might be getting a bit off topic, but have we discussed mechanisms to actually gather metrics on the hosts to use for scaling?
22:30:59 apmelton: no, that's not something we have spent much time exploring yet
22:31:10 alright, I was just wondering
22:31:17 we did talk about using a scaling group so a bay scale up/down would have a webhook for each
22:31:37 so we could have any external control system call those webhooks
22:31:54 but the metrics will be unique to each type of bay
22:32:09 would they though?
22:32:19 total sum of -m values is one obvious one
22:32:24 at this point, the underlying thing spawning the containers is docker
22:32:37 well, with k8s and swarm, yes
22:33:24 It might make sense to contribute a spec for that
22:33:24 with mesos you can go with just cgroups
22:33:38 so we could collaborate on the plan a bit
22:33:40 adrian_otto: agreed there, there's going to be lots of options
22:34:18 ideally to break down the approach into achievable steps toward the autoscaling bay we want
22:34:51 it would be awesome if I could demo an autoscaling bay (by sum of -m values) in Tokyo
22:35:33 so that once I fall below a threshold of available memory in the bay, the scale up webhook is called
22:35:40 how far off is Tokyo?
22:35:51 late october
22:35:52 6 months. Last week of October.
22:36:34 K8s doesn't support autoscaling, right?
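The trigger adrian_otto describes for the Tokyo demo, scale up once available memory in the bay (capacity minus the sum of the docker -m values) falls below a threshold, reduces to a check like this sketch. The function name and numbers are made up; a real implementation would follow the check by calling the bay's Heat scaling-group webhook, as discussed above.

```python
# Sketch of the "sum of -m values" trigger: scale the bay up when memory
# still available for new containers drops below a threshold. Names and
# numbers are hypothetical, not Magnum code.

def needs_scale_up(bay_capacity_mb, container_mem_limits_mb, threshold_mb):
    """True when memory still available in the bay is below the threshold."""
    reserved = sum(container_mem_limits_mb)   # total of docker -m values
    available = bay_capacity_mb - reserved
    return available < threshold_mb

# 8 GiB bay, three containers reserving 2 GiB each, keep >= 1 GiB free:
print(needs_scale_up(8192, [2048, 2048, 2048], 1024))        # -> False
# A fourth 2 GiB container leaves 0 MiB free, under the 1 GiB threshold:
print(needs_scale_up(8192, [2048, 2048, 2048, 2048], 1024))  # -> True
```

An external control system could run this check periodically and hit the scale-up webhook when it returns True, matching the "any external control system call those webhooks" design above.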
22:36:58 ah
22:36:59 tobe: I'm not sure anything supports auto scaling of the underlying infrastructure at this point
22:37:12 that's kinda one of the big adds for magnum
22:37:33 you can add/remove resources to a kub cluster and it should handle it
22:37:35 right, apmelton
22:37:49 k8s plans to support autoscaling pods
22:37:52 #link https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md
22:38:11 If the underlying has implemented it, we can re-use its api
22:38:24 Thanks hongbin
22:38:49 lets take that for homework, and get our autoscaling bay blueprint updated
22:38:59 I wanted to touch on one of our ML threads:
22:39:02 http://lists.openstack.org/pipermail/openstack-dev/2015-June/065261.html
22:39:18 And magnum has the unified interfaces for all these frameworks
22:39:19 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/065261.html Should Bay/Baymodel name be a required option when creating a Bay/Baymodel
22:40:00 I feel name should not be a required parameter
22:40:05 perhaps we can explore that question a bit here
22:40:11 Thanks adrian_otto
22:40:14 +1 for required
22:40:21 both madhuri_ and I were opposed to requiring names
22:40:38 the argument for names is that human usability goes up
22:40:39 adrian_otto: what's the alternative?
22:40:48 name is more readable than UUID
22:40:50 It makes bay-create too hard
22:41:23 implementation of this would essentially require magnum to raise an exception if you attempt a magnum baymodel-create or magnum bay-create with no name assigned
22:41:26 So it depends on the user; if he doesn't pass a name
22:41:33 He will have to use the uuid
22:41:56 a required name with a uniqueness requirement is just a user-supplied uuid
22:42:02 but then when he wants to operate a bay, he needs to retrieve the uuid first
22:42:06 bay create needs to be hard ;) because you are building a cluster which potentially will stay for a long time
22:42:32 this will bring trouble and be time consuming
22:42:46 I think we should go with the nova model, where names aren't unique, and if there are more than one, we require the use of a uuid
22:43:05 take nova as an example, I prefer to use the name to delete a VM and not the uuid, as I do not want to retrieve the UUID every time
22:43:05 jay-lau-513: you get the uuid in the response to the bay-create
22:43:12 would having uuid being the default be troublesome? that is, it's used implicitly unless a uuid is specified?
22:43:14 and I cannot remember it
22:43:38 apmelton: In case of multiple bays with the same name, the user will have to use the uuid
22:43:40 if you create a docker container and do not specify a name, one is generated for you
22:43:44 apmelton: I think we should consider unique name per tenant
22:43:46 but names are not required
22:43:57 adrian_otto yes, but I may not be able to remember it ;-(
22:43:58 however names are enforced to be unique by docker
22:44:04 but docker will generate them ;)
22:44:11 That will also be similar to the case of not passing a name while creating a bay
22:44:14 madhuri_ right, need uuid for such a case
22:44:19 so maybe we should have Magnum follow that same pattern
22:44:30 auto-generated names if the client does not supply one
22:44:36 I think fewer mandatory things is more user friendly.
22:44:43 for BayModel and Bay resources
22:44:51 User can always provide a name
22:44:55 + Tango|2
22:44:57 but error on name duplicates
22:45:06 +1 for Tango|2
22:45:11 adrian_otto the auto generated names are similar to UUIDs, difficult to remember
22:45:11 or maybe a prefixed name, e.g. start with something the user can then pattern-search for?
22:45:29 adrian_otto: do you mean the docker or nova pattern?
22:45:40 e.g. foo_38bc
22:46:27 adrian_otto: I am not sure whether this would be a good idea or not. Can we allow users to update the name in magnum?
22:46:39 things that are often created/destroyed should not mandate a name, e.g. docker - but for things which are supposed to live longer, we should better have the name mandatory - viz. bay/baymodel
22:46:53 So that the user can update the name at a later time also
22:47:02 eghobo: docker pattern, but similar to nova
22:47:11 madhuri_ imho, the name should not be updated
22:47:21 madhuri_: sure, they can do a bay update replace name=whatever
22:47:25 what?
22:47:29 jay-lau-513: why?
22:47:36 So we have the option then
22:47:48 If the user doesn't pass a name, later he can update it
22:47:59 if he doesn't want to use the uuid
22:48:13 adrian_otto Are there any use cases where the name needs to be updated?
22:48:27 I found one now jay-lau-513
22:48:35 what if I name it something stupid by mistake
22:48:45 and I have production stuff in that bay that I don't want to kill
22:48:50 Add a name later when not given at bay-create
22:48:52 adrian_otto I see
22:48:54 I might use bay update to fix that
22:49:05 Now I'm confused: are we using uuid and name, or just a unique name?
22:49:28 this sounds more like a use-case for "description" than name
22:49:32 We should go back and understand the intended use of the name. Is it for display purposes, or id purposes?
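The docker pattern adrian_otto is proposing, auto-generate a human-readable name when the client does not supply one, is simple to mimic. In this sketch the word lists are illustrative stand-ins, not docker's actual lists, and the helpers are not Magnum code.

```python
import random

# Mimic docker's adjective_noun auto-naming for bays/baymodels created
# without a name. Word lists below are illustrative stand-ins.
ADJECTIVES = ["brave", "calm", "eager", "happy", "quiet"]
NOUNS = ["bay", "cluster", "harbor", "reef", "cove"]

def generate_name(rng=random):
    return "{}_{}".format(rng.choice(ADJECTIVES), rng.choice(NOUNS))

def ensure_name(requested, existing, rng=random):
    """Return the requested name, or an auto-generated one not in use."""
    if requested:
        return requested
    name = generate_name(rng)
    while name in existing:          # enforce uniqueness, as docker does
        name = generate_name(rng)
    return name

print(ensure_name("prod", set()))          # -> prod
print(ensure_name(None, {"brave_bay"}))    # e.g. 'calm_reef'
```

Generated names like calm_reef address jay-lau-513's "difficult to remember" concern better than raw UUIDs, while keeping the name optional at create time.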
22:49:40 tobe: Name need not be unique
22:49:42 tobe: I'm suggesting that if the user does not supply a name, we generate a human readable name, and that we enforce names to be unique
22:50:15 +1
22:50:18 the rationale there is if you have the same name for multiple bays, you still have to fall back to using uuids to address them anyway
22:50:22 adrian_otto: I don't think that's a good idea
22:50:29 the uniqueness on name
22:50:30 apmelton: ok, why?
22:50:45 if you assign a name to a bay SUPER_BAY
22:50:47 adrian_otto: Just like how docker works, right? +1 for this
22:51:01 IMHO, if a name is a unique property, then it should not also be mutable... use an optional description field instead
22:51:02 and then you go to act on SUPER_BAY and you get an exception because there are two named that
22:51:15 that's worse than just using a UUID to act on them to begin with
22:51:33 adrian_otto: +1
22:51:33 tobe: yes, that is how docker works.
22:51:38 let's say at some point we implement sharing of a bay between tenants
22:51:51 I don't agree on this
22:51:54 so you go to share your TEST_BAY with a tenant, and they already have one named that
22:52:15 apmelton: I need to think on that
22:52:18 When we already have uuid as a unique property, then why do we want to make name also a unique property?
22:52:32 we are approaching our end time, and I want to get to Open Discussion
22:52:41 adrian_otto: +1 uniqueness. I think people who use docker will find it intuitive to use
22:52:42 let's keep the discussion open on the ML
22:52:42 so before we're done with blueprints
22:52:52 I need to push this out to l-2 https://blueprints.launchpad.net/magnum/+spec/async-container-operations
22:52:53 apmelton +1, different tenants can have same-name bays
22:53:03 and seek consensus there after we have thought through these other use cases
22:53:38 I'm hoping that I can finish up what's blocking me from magnum stuff by the end of next week so I can pick this up ASAP https://blueprints.launchpad.net/magnum/+spec/secure-docker
22:53:38 we will defer other work item discussion to our next meeting, or to #openstack-containers
22:54:03 apmelton: anything magnum team members can do to help you?
22:54:22 #topic Open Discussion
22:54:29 juggler: tell us about the survey
22:54:40 adrian_otto: when I get some time, I'll break async-container-ops into some pieces people can pick up
22:54:43 initial survey results here: http://paste.openstack.org/show/257490/
22:54:54 apmelton: sounds great, tx!
22:55:08 rolled-up
22:55:25 juggler: maybe we should feature the survey on our project wiki page?
22:56:07 too few respondents to put weight on the results.
22:56:09 +1
22:56:26 I'll try to figure out placement on the page
22:57:47 ok, good discussion. Let's keep advancing on each of these tough issues.
22:57:52 we have 3 minutes remaining.
22:57:59 any other topics that might fit?
22:58:30 Our next team meeting will be on 2015-06-09 at 1600 UTC, chair TBD.
22:58:38 thanks everyone for attending today!!
22:58:42 Cheers!
22:58:45 thx
22:58:47 have a good evening/morning/day everyone!
22:58:55 bug 1451678.. running into a snag during git review. let me know if you have a few mins to help. thanks!
22:58:55 bug 1451678 in Magnum trunk "Add link to dev-manual-devstack.rst into document dev-quickstart.rst" [High,Triaged] https://launchpad.net/bugs/1451678 - Assigned to P Rivera (juggler)
22:59:09 thanks all
22:59:13 Thanks, see you next time.
22:59:15 Thanks all
22:59:25 #endmeeting
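The nova model apmelton argued for during the name debate, names need not be unique, and an ambiguous name forces the caller to fall back to a UUID, reduces to a lookup like the following sketch. The dict schema and helper are hypothetical, not Magnum's actual data model.

```python
import uuid

# Sketch of nova-style resolution: try UUID first, then name; a name
# shared by several bays is ambiguous and the caller must use the UUID.

class AmbiguousName(Exception):
    pass

def find_bay(bays, name_or_uuid):
    """bays: list of dicts with 'uuid' and 'name' keys (illustrative)."""
    for bay in bays:
        if bay["uuid"] == name_or_uuid:
            return bay
    matches = [b for b in bays if b["name"] == name_or_uuid]
    if len(matches) == 1:
        return matches[0]
    if len(matches) > 1:
        raise AmbiguousName("multiple bays named %r; use a UUID"
                            % name_or_uuid)
    raise KeyError(name_or_uuid)

bays = [{"uuid": str(uuid.uuid4()), "name": "test_bay"},
        {"uuid": str(uuid.uuid4()), "name": "test_bay"},
        {"uuid": str(uuid.uuid4()), "name": "prod_bay"}]
print(find_bay(bays, "prod_bay")["name"])   # -> prod_bay
# find_bay(bays, "test_bay") would raise AmbiguousName here
```

This keeps madhuri_'s point that the UUID stays the only truly unique property, while still letting names serve the "more readable than UUID" role in the common case.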