22:01:04 <adrian_otto> #startmeeting containers
22:01:05 <openstack> Meeting started Tue Sep  9 22:01:04 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:01:09 <openstack> The meeting name has been set to 'containers'
22:01:27 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
22:01:31 <adrian_otto> #topic Roll Call
22:01:33 <adrian_otto> Adrian Otto
22:01:58 <iqbalmohomed> Iqbal Mohomed, IBM Research
22:02:15 <adrian_otto> hi iqbalmohomed
22:02:21 <iqbalmohomed> Hello
22:02:40 <funzo> Chris Alfonso
22:03:27 <adrian_otto> hi funzo
22:03:42 <funzo> hi!
22:03:48 <adrian_otto> long time no see
22:03:58 <funzo> yeah, I missed last week
22:04:04 <adrian_otto> will you make it to the Paris Summit event in November?
22:04:10 <funzo> I will not be there
22:04:17 <adrian_otto> bummer
22:04:20 <funzo> although I'd like to :)
22:04:48 <funzo> Ian Main will make it from RHT though
22:05:01 <adrian_otto> ok
22:05:27 <adrian_otto> well, we are pretty light on attendees today
22:05:53 <iqbalmohomed> Getting in line for the iWatch perhaps :-p
22:05:58 <funzo> ha!
22:06:00 <adrian_otto> heh
22:06:33 <adrian_otto> ok, well diga sent his regrets, but he did produce a little bit of code
22:07:01 <iqbalmohomed> Yup ... very good to see it
22:07:01 <adrian_otto> so let's quickly step through the agenda items together, and with any luck we will end early today
22:07:20 <adrian_otto> #topic Announcements
22:07:41 <adrian_otto> first is that I will be absent next week, as our meeting conflicts with the OpenStack Silicon Valley event
22:07:47 <funzo> as will I.
22:07:53 <funzo> except I'll be in Jamaica
22:08:29 <adrian_otto> ok, so unless I hear a ground swell between now and then, I will cancel that meeting, and we will continue again on 2014-09-23
22:09:14 <adrian_otto> if the meeting gets scheduled for 9/16, I will arrange for the chair to email the openstack-dev@lists.openstack.org ML with the [containers] topic tag
22:09:23 <adrian_otto> so that's how you will learn about it
22:09:26 <iqbalmohomed> sounds good
22:09:39 <adrian_otto> any other announcements?
22:09:54 <adrian_otto> #topic Review Action Items
22:10:01 <adrian_otto> I took two:
22:10:08 <adrian_otto> adrian_otto to coordinate a follow-up about Gantt, to help the containers team understand its readiness plans, and how they may be applied in our work.
22:10:45 <adrian_otto> this is in progress. I am awaiting a response from bauzas, who I understand is leading the Gantt effort
22:11:14 <adrian_otto> I will update you again after that discussion happens
22:11:26 <adrian_otto> we might just rely on an ML thread to resolve that
22:11:36 <adrian_otto> so I will carry that action item forward
22:11:48 <adrian_otto> #action adrian_otto to coordinate a follow-up about Gantt, to help the containers team understand its readiness plans, and how they may be applied in our work.
22:12:04 <adrian_otto> next action item:
22:12:06 <adrian_otto> adrian_otto to make a backlog of open design topics, and put them on our meeting schedule for discussion
22:12:16 <adrian_otto> that is now available. Feel free to edit it:
22:12:25 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers Containers Team Meeting Page
22:12:32 <adrian_otto> there is now a topics backlog section
22:12:51 <adrian_otto> now on to the API, and progress toward that
22:12:55 <adrian_otto> #topic Containers Service (Magnum) API
22:13:02 <adrian_otto> #link https://github.com/digambar15/magnum.git Initial Source code from diga
22:13:15 <adrian_otto> there's not a lot in there yet
22:13:30 <adrian_otto> and it's set up as v2 of an API rather than v1, which we will arrange to adjust
22:14:17 <adrian_otto> but once we get an openstack repo opened and a gerrit review queue for it, we can begin to submit specs against that as code with docstrings
22:14:18 <iqbalmohomed> Is pecan used by any other openstack project? Just curious
22:14:34 <adrian_otto> so we can debate the merits of each aspect of the API implementation before we code against it
22:14:51 <adrian_otto> iqbalmohomed: yes. It is used by Ceilometer and by Solum
22:14:59 <iqbalmohomed> ah cool
22:15:06 <adrian_otto> many of the early projects did not follow a design pattern
22:15:33 <adrian_otto> and that was earmarked as a maintainability problem, so new projects are encouraged to use pecan/wsme for WSGI
22:16:03 <adrian_otto> before that recommendation, basically every project did something different when it came to WSGI
22:16:32 <adrian_otto> for a control plane API, the WSGI choice really does not matter much
22:16:55 <adrian_otto> for data plane APIs, performance arguments come into consideration
22:17:08 <iqbalmohomed> cool ... thx
22:17:24 <adrian_otto> ok, there is more backstory if you take any deeper interest in that subject.
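For readers unfamiliar with the pecan/wsme pattern mentioned above, a minimal sketch of a REST controller in that style follows. It is illustrative only: the Container resource, its fields, and the route are hypothetical and are not taken from diga's repository.

    # Minimal pecan/WSME controller sketch; resource and field names are
    # hypothetical, shown only to illustrate the recommended WSGI pattern.
    from pecan import rest
    from wsme import types as wtypes
    import wsmeext.pecan as wsme_pecan


    class Container(wtypes.Base):
        """WSME type describing a container resource (illustrative)."""
        id = wtypes.text
        name = wtypes.text


    class ContainersController(rest.RestController):
        """Would handle /v1/containers in a pecan application."""

        @wsme_pecan.wsexpose([Container])
        def get_all(self):
            # A real implementation would query the backend for containers.
            return []

        @wsme_pecan.wsexpose(Container, body=Container, status_code=201)
        def post(self, container):
            # A real implementation would hand the request to a conductor.
            return container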
22:18:01 <adrian_otto> funzo: any questions on the approach for the API, or what has been proposed for it?
22:18:40 <adrian_otto> ok, that brings us to open discussion
22:18:44 <adrian_otto> #topic Open Discussion
22:18:56 <iqbalmohomed> i had a question ...
22:19:06 <adrian_otto> iqbalmohomed:  sure, what's up?
22:19:09 <iqbalmohomed> we have talked about kubernetes in the past
22:19:17 <adrian_otto> yes.
22:19:28 <funzo> adrian_otto: I haven't reviewed yet - but I will
22:19:39 <adrian_otto> funzo: tx!
22:19:54 <iqbalmohomed> curious about similarities/differences ... i  can see some major benefits in the proposed container service ... stuff kubernetes doesn't do at all
22:20:23 <iqbalmohomed> but feel there are similarities that we can take advantage of in terms of designs, etc. any thoughts on this
22:20:46 <adrian_otto> iqbalmohomed: great question. This is something I have been drawing block diagrams for, to help my colleagues rationalize where these different things fit.
22:21:10 <adrian_otto> my hope is that we will refine something to bring up for discussion at our next meeting
22:21:41 <adrian_otto> the good news is that in my mind, there is a clear way that both kubernetes and Magnum can be used in harmony along with OpenStack
22:22:08 <adrian_otto> although there are some areas of overlap, that's only by necessity
22:22:14 <iqbalmohomed> that is good to hear
22:22:30 <iqbalmohomed> the kubernetes pod concept .. that seems like a good idea
22:22:31 <adrian_otto> kubernetes has a concept of a pool manager, and Rackspace has written code to extend it to use Heat
22:22:54 <iqbalmohomed> we have talked about the "instance" managed by the container service as well
22:23:05 <adrian_otto> as it does not make sense to have multiple sources of truth about the raw capacity on which containers are placed.
22:23:30 <iqbalmohomed> is this opensource? I saw a rackspace provider to kubernetes ... is this what u refer to?
22:23:33 <adrian_otto> yes, so our intent is to leverage a combination of heat and Nova to deal with that
22:23:53 <adrian_otto> iqbalmohomed: my understanding is that those exist as pull requests against the kubernetes project
22:23:59 <iqbalmohomed> ah ok
22:24:10 <adrian_otto> I don't have links to share, but I could ask for them
22:24:33 <iqbalmohomed> if u could share, would be appreciated
22:25:10 <iqbalmohomed> but the big picture is that kubernetes is a higher level thing ... container service aims to be in the middle of that and lower level things like Nova ... is that right?
22:25:26 <adrian_otto> the key difference between a system like mesos or kubernetes, and Magnum is that Magnum is focused on solving "give me a container"
22:26:07 <adrian_otto> and kubernetes and mesos are more concerned with "deploy and control my service/application" with the assumption that a microservices architecture is in play
22:26:47 <adrian_otto> they work by having an agent on each host communicate back to the pool manager
22:26:57 <adrian_otto> and that's how inventory is tracked
22:27:15 <adrian_otto> I should say inventory of capacity (hosts that can run containers)
22:27:18 <iqbalmohomed> so magnum isn't going to be in the business of restarting failed containers, for eg?
22:27:36 <adrian_otto> and that "membership" is what kubernetes tracks in etcd
22:27:52 <adrian_otto> iqbalmohomed: no, that's not a goal
22:27:57 <iqbalmohomed> cool
22:28:10 <adrian_otto> whereas that's in scope for kubernetes, or even potentially Heat.
22:28:37 <adrian_otto> for example, deciding how many containers should be added to deal with a sudden increase in work
22:28:44 <iqbalmohomed> yup .. makes sense
22:28:55 <adrian_otto> currently kubernetes does not solve for that, but I'm reasonably sure that's an ambition
22:29:13 <adrian_otto> whereas autoscaling and auto healing are not contemplated for Magnum
22:29:52 <iqbalmohomed> right ... in Openstack, heat handles those concerns
22:29:53 <adrian_otto> instead, we want a standardized, modular interface where various container technologies can be attached, and an API that allows a variety of tools to be integrated on the top
22:30:24 <adrian_otto> so if you really wanted kubernetes + openstack, one option would be: kubernetes + swarmd + magnum + openstack
22:31:10 <iqbalmohomed> what about the mapping of ports/ips to containers ..
22:31:24 <adrian_otto> where kubernetes does the service level management, swarmd provides a docker API, magnum provides a container-centric API to swarmd, and openstack (nova) provides raw capacity as instances of any type (vm, bare metal, container)
22:31:58 <adrian_otto> ok, so mappings of ports would be passed through magnum to the host level systems
22:32:16 <adrian_otto> let's say for sake of discussion that I have the above setup
22:32:29 <adrian_otto> and I use nova-docker as my virt driver in nova
22:32:37 <adrian_otto> so I end up doing nested containers
22:33:01 <iqbalmohomed> yup
22:33:24 <adrian_otto> my port mappings are requested when my swarmd interacts with Magnum, and are passed through to the nova-docker virt driver, which accomplishes the mapping using (probably) a call to neutron
22:33:41 <adrian_otto> or by invoking docker locally on the host
22:34:11 <adrian_otto> Magnum would cache the port arguments requested
22:34:28 <adrian_otto> so it would be able to return them when you do an inspection on a container
22:34:51 <adrian_otto> sound sensible?
22:34:52 <iqbalmohomed> so who decides the port mappings in your example? swarmd?
22:35:23 <adrian_otto> good question, the port configurations are expressed by the consumer of the Magnum API
22:35:49 <adrian_otto> so in my example, yes, that's swarmd, and it would be responding to requests from kubernetes, which would need to specify them
22:35:52 <iqbalmohomed> got it ... so up to that consumer to make good decisions
22:36:00 <adrian_otto> yes
22:36:16 <iqbalmohomed> cool ... makes sense .. thanks for this!
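Since Magnum had no working API at this point, the following is a purely hypothetical sketch of the port-mapping pass-through just discussed; the endpoint, payload fields, and token are invented for illustration only.

    # Hypothetical container-create request whose port mappings Magnum would
    # cache and pass through to the nova-docker virt driver; the endpoint and
    # payload shape are illustrative assumptions, not a real Magnum API.
    import json

    import requests

    headers = {"Content-Type": "application/json",
               "X-Auth-Token": "USER_OR_TRUST_TOKEN"}
    create_req = {
        "name": "web-1",
        "image": "nginx",
        "ports": [{"container_port": 80, "host_port": 8080}],
    }
    resp = requests.post("http://magnum.example.com/v1/containers",
                         data=json.dumps(create_req), headers=headers)

    # Inspecting the container later would return the cached port arguments.
    container = resp.json()
    inspect = requests.get("http://magnum.example.com/v1/containers/%s"
                           % container["id"], headers=headers)
    print(inspect.json().get("ports"))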
22:36:29 <adrian_otto> this approach strikes a balance between simplicity (of the system) and freedom (to make the system do exotic things)
22:36:53 <adrian_otto> hopefully Magnum is only somewhat opinionated
22:37:06 <iqbalmohomed> I like the layering/separation of concerns here
22:37:18 <adrian_otto> because if it gets too magical… you get the point
22:37:31 <iqbalmohomed> rails :-p
22:37:36 <adrian_otto> now this is a contrast to Solum
22:37:52 <adrian_otto> where Solum will generate heat templates and will determine the mappings for you
22:38:11 <iqbalmohomed> yup ... makes sense
22:38:18 <adrian_otto> whereas Magnum will allow you to specify an additional level of detail that it happily passes through to the lower layers
22:38:36 <iqbalmohomed> a silly question just to confirm ..
22:38:39 <adrian_otto> Magnum makes no assumptions about what you are trying to deploy
22:39:07 <iqbalmohomed> for the instance on which magnum deploys containers ... all containers belong to this tenant, right?
22:39:14 <adrian_otto> (of course speaking in the future perspective, since it does nothing yet)
22:39:57 <adrian_otto> okay, I will be careful about how I answer that
22:40:24 <adrian_otto> we have contemplated two ways Magnum will interact with instances
22:40:33 <adrian_otto> 1) It creates them as-needed
22:40:50 <adrian_otto> 2) The caller specifies what instance_id to create a container on
22:41:15 <adrian_otto> in the case of #1, they will be created using the credentials (trust token) provided by the caller
22:41:27 <adrian_otto> so the tenant will match the user of the Magnum API
22:41:42 <adrian_otto> unless you have done something clever with the trust tokens to get a different outcome
22:41:50 <adrian_otto> does this make sense so far?
22:42:07 <iqbalmohomed> yup
22:42:18 <adrian_otto> ok, so in the case of #2, we apply some logic
22:42:50 <adrian_otto> we check to see if the resource referenced by the instance_id in the request belongs to the tenant currently acting on the Magnum API
22:42:53 <adrian_otto> -or-
22:43:05 <adrian_otto> there is a trust that allows access to another tenant
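A hypothetical sketch of the case #2 check described above; Magnum's authorization logic did not exist yet, so the function and attribute names are illustrative assumptions.

    # Illustrative-only authorization check: allow use of an instance if it
    # belongs to the caller's tenant, or if a keystone trust delegates access
    # to the owning tenant.
    def may_use_instance(context, instance):
        if instance.tenant_id == context.tenant_id:
            return True
        # context.trust would carry a trust-scoped token obtained from
        # keystone that names the instance owner's project.
        if context.trust is not None and \
                context.trust.project_id == instance.tenant_id:
            return True
        return False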
22:43:13 <iqbalmohomed> ic
22:43:37 <iqbalmohomed> so one can use trusts to achieve some interesting scenarios
22:44:00 <adrian_otto> yes, it is possible to make a trust to delegate rights to another tenant
22:44:23 <adrian_otto> and if you have set that up in keystone before you made a call, then it could be possible for you to act across tenants if you had a reason to do that
22:44:26 <iqbalmohomed> cool ... thanks for this discussion ... very useful info
22:44:30 <adrian_otto> but it would be properly access controlled
22:44:41 <adrian_otto> we would use python-keystoneclient to access those features
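As a sketch of how such a delegation might be set up with python-keystoneclient's v3 trusts API (the IDs and role names below are placeholders):

    # Creating a keystone trust so that a trustee (e.g. the Magnum service
    # user) can act on the trustor's project; all IDs here are placeholders.
    from keystoneclient.v3 import client

    keystone = client.Client(token='USER_TOKEN',
                             endpoint='http://keystone.example.com:5000/v3')

    trust = keystone.trusts.create(
        trustor_user='TRUSTOR_USER_ID',   # the user delegating rights
        trustee_user='TRUSTEE_USER_ID',   # the user acting on their behalf
        project='PROJECT_ID',             # the tenant the trust is scoped to
        role_names=['Member'],
        impersonation=True)
    print(trust.id)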
22:45:10 <adrian_otto> ok, any other questions?
22:45:19 <iqbalmohomed> i'm sated for the day :)
22:45:45 <adrian_otto> heh, okay cool
22:46:02 <adrian_otto> any other lurkers who would like to be recorded in attendance before we wrap up for today?
22:46:43 <adrian_otto> ok, thanks!
22:46:51 <adrian_otto> I'll see you here again in two weeks' time
22:46:57 <iqbalmohomed> thanks .. bye
22:46:59 <adrian_otto> #endmeeting