21:00:15 <devkulkarni> #startmeeting Solum Team Meeting
21:00:17 <openstack> Meeting started Tue Mar  3 21:00:15 2015 UTC and is due to finish in 60 minutes.  The chair is devkulkarni. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:20 <openstack> The meeting name has been set to 'solum_team_meeting'
21:00:31 <devkulkarni> #topic Roll Call
21:00:37 <devkulkarni> Devdatta Kulkarni
21:00:44 <akshayc> Akshay Chhajed
21:00:50 <james_li> james li
21:00:55 <ravips> Ravi Sankar Penta
21:00:56 <mkam> Melissa Kam
21:01:16 <datsun180b> Ed Cranford
21:01:31 <devkulkarni> hi akshayc, ravips, james_li, mkam, datsun180b
21:01:40 <devkulkarni> thanks for joining
21:01:42 <akshayc> hi devkulkarni
21:01:45 <ravips> hi dev
21:01:46 <devkulkarni> here is our agenda for today:
21:01:54 <devkulkarni> #link https://wiki.openstack.org/wiki/Meetings/Solum
21:02:06 <devkulkarni> there is only one item under 3-3
21:02:16 <devkulkarni> but we have not completed items from 2-24
21:02:28 <devkulkarni> so we can continue where we left off last time
21:02:33 <datsun180b> makes sense to start from where we left off
21:03:09 <devkulkarni> so last week I think james_li informed us about the features of custom port support and robustness improvements
21:03:52 <devkulkarni> today I would like to start with giving some updates about changes to the deployment
21:04:03 <devkulkarni> (devkulkarni): Supporting different backends for deployment
21:04:31 <devkulkarni> So till now, we were only supporting deployment of docker containers via heat+nova-docker driver
21:05:01 <akshayc> ok
21:05:04 <devkulkarni> we thought: what if we don't have the nova-docker driver available in our deployment? what could we do?
21:05:18 <devkulkarni> the solution was to consider alternative deployment strategies.
21:05:41 <devkulkarni> the one that became immediately apparent was to consider something like CoreOS and deploy DUs on it
21:05:56 <devkulkarni> PaulCzar had started on that effort sometime back
21:06:17 <devkulkarni> We continued that patch forward and took it to completion
21:06:25 <devkulkarni> basic idea is that
21:06:48 <devkulkarni> Heat will deploy a CoreOS VM and pass the location of the DU via user-data section of the template
21:07:10 <devkulkarni> the location
21:07:26 <devkulkarni> could be either docker registry based, or swift based
21:07:48 <devkulkarni> for swift, we are generating a tempURL with configurable expiry time
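[A minimal sketch of the Swift TempURL generation devkulkarni describes, using the standard TempURL signing scheme (HMAC-SHA1 over method, expiry, and object path). The function and variable names are illustrative, not Solum's actual deployer code.]

```python
import hmac
import time
from hashlib import sha1


def make_temp_url(key, method, path, expiry_seconds):
    """Build a Swift TempURL query string for a DU object.

    `path` is the object path, e.g. /v1/AUTH_<tenant>/<container>/<du>.
    `expiry_seconds` is the configurable expiry window mentioned above.
    """
    expires = int(time.time()) + expiry_seconds
    # Swift's TempURL middleware signs "<METHOD>\n<expires>\n<path>"
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)
```

The resulting URL lets the booted VM download the DU from Swift without credentials, until the expiry passes.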
21:08:21 <devkulkarni> so we construct the template in our deployer, set the image-id to use as the CoreOS VM image id, and then create the heat stack
21:08:39 <devkulkarni> Heat spins up the VM, and then executes the user-data section
21:09:04 <devkulkarni> which consists of: downloading the DU, importing it into the local docker registry, and then executing 'docker run'
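[The deployer flow described above, sketched as code. This is a hypothetical illustration, not Solum's actual deployer: the resource names, flavor, and the exact user-data commands (wget/docker load vs. a registry pull) are assumptions.]

```python
def build_du_template(coreos_image_id, du_url, du_name):
    """Assemble a HOT template that boots a CoreOS VM whose
    user-data downloads the DU, imports it, and runs it."""
    user_data = '\n'.join([
        '#!/bin/sh',
        # fetch the DU from its location (tempURL or registry)
        'wget -O /tmp/du.tar "%s"' % du_url,
        # import it into the local docker daemon
        'docker load -i /tmp/du.tar',
        # and run it
        'docker run -d %s' % du_name,
    ])
    return {
        'heat_template_version': '2013-05-23',
        'resources': {
            'du_server': {
                'type': 'OS::Nova::Server',
                'properties': {
                    'image': coreos_image_id,
                    'flavor': 'm1.small',
                    'user_data': user_data,
                },
            },
        },
    }
```

The deployer would serialize this template and create the Heat stack from it; Heat boots the VM and cloud-init executes the user-data script.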
21:09:32 <devkulkarni> currently you may not be able to try this feature in our vagrant setup
21:09:42 <devkulkarni> I am still working on it
21:10:15 <devkulkarni> we need to register the coreos vm in glance as part of spinning up the Vagrant devstack environment
21:10:31 <devkulkarni> I have that patch submitted
21:10:57 <devkulkarni> but currently our vagrant deployments have not been working for some reason
21:11:18 <devkulkarni> I am getting 'No valid host found' error (even after increasing memory to 10 G)
21:12:14 <devkulkarni> any questions/comments so far? I would be happy to walk you through the relevant code changes if anyone is interested
21:12:43 <ravips> no valid host error from heat?
21:12:51 <devkulkarni> yeah..
21:12:57 <james_li> after you upload a  coreos image to glance, can you do nova boot?
21:13:12 <devkulkarni> yes, I can do a nova boot
21:13:39 <james_li> ok, can you boot a coreos Vm via heat?
21:13:51 <devkulkarni> a related problem is that injecting keys via nova boot does not seem to work for the coreos image that I am using
21:14:11 <devkulkarni> james_li: good question.
21:14:31 <james_li> the only reason we use coreos is that it already has docker
21:14:44 <james_li> people can use any ubuntu image and install docker on it
21:14:58 <james_li> so we are not limited to coreOS
21:15:08 <devkulkarni> On vagrant I think I am not able to boot the Coreos vm via heat. It gives the 'no valid host found' error, which is thrown by nova
21:15:13 <devkulkarni> james_li: you are right
21:15:20 <gpilz> gilbert pilz is here
21:15:36 <ravips> devkulkarni: did we check heat event list
21:16:07 <devkulkarni> james_li: also the fact that we are not using 'units' as defined in coreos terminology is all the more reason for us not to get constrained to coreos
21:16:32 <devkulkarni> ravips: yes, I did. I looked at heat's service screens
21:16:49 <devkulkarni> I was able to trace the 'no valid host found' error from heat back to nova
21:17:34 <devkulkarni> james_li: however, for our Vagrant setup, I think the coreos image is good as it already has docker on it. We don't need to build or maintain it.
21:17:45 <devkulkarni> I mean an image that has docker on it
21:18:18 <devkulkarni> james_li: here is the image that we are using: http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
21:18:46 <devkulkarni> https://github.com/stackforge/solum/blob/master/contrib/devstack/lib/solum
21:18:55 <devkulkarni> check solum_install_core_os function from there
21:19:49 <devkulkarni> anyways. ping me on solum if you want to discuss/find out more about deployments
21:19:59 <ravips> devkulkarni: not the heat service screen, heat event-list shows all stages for the given stack and event-show can give you more insights
21:20:09 <devkulkarni> ravips: oh okay.
21:20:17 <devkulkarni> did not know about that.
21:20:37 <devkulkarni> I will try heat event-list and see what events are being emitted by heat
21:20:42 <ravips> ravips: learned that from angus when i ran into some issue
21:20:42 <devkulkarni> heat event-list
21:20:45 <james_li> probably we need to implement a similar thing for solum :)
21:20:55 <james_li> solum event-list
21:20:55 <devkulkarni> james_li: +10
21:21:10 <ravips> +1
21:21:47 <devkulkarni> ravips: thanks for suggesting those two commands. will definitely try them next time I am debugging the Vagrant setup
21:22:09 <devkulkarni> Ok. lets move on to the next topic. It is from datsun180b
21:22:12 <devkulkarni> (datsun180b): CLI changes (introduction of the new 'app' namespace)
21:22:29 <devkulkarni> datsun180b: floor is yours
21:22:58 <datsun180b> You may have noticed these 'app' commands in the CLI. They're the forerunners of a number of changes I'm looking to make to Solum
21:23:53 <datsun180b> Namely, streamlining the user experience of dealing with apps via assemblies and plans. The end goal is to expose just 'apps' in the CLI and API, and leave the fate of plans and assemblies and such as internal details
21:24:26 <devkulkarni> #link: https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/NewCLI
21:24:57 <devkulkarni> we had discussed this in one of our recent meetings
21:25:32 <devkulkarni> datsun180b: so, how far have you progressed on this change?
21:25:56 <datsun180b> At present, the CLI presents apps as gloms of apps and plans
21:26:17 <datsun180b> I'd like to, eventually, make app _the_ resource of interaction for Solum
21:26:53 <devkulkarni> You mean on the API side? Makes sense to have one-to-one mapping from CLI to API
21:27:13 <james_li> +1
21:27:23 <datsun180b> but this recent change does something drastic: instead of 409ing on 'plan delete <occupied plan>', it goes ahead and takes care of the assemblies (and stacks/components and logs and such) and deletes the plan
21:27:48 <devkulkarni> which recent change?
21:28:07 <datsun180b> #link https://review.openstack.org/#/c/160501/
21:28:43 <gpilz> I think that stopping and deleting a running application because you've deleted the blueprint for that application (the plan) seems a little drastic
21:28:44 <ravips> datsun180b: new cli is awesome, don't we still want to keep plan cli (at least for admins)? I'm thinking the case where vendor wants to distribute some app templates that users can readily use instead of writing custom plan files
21:28:56 <gpilz> it would be like burning my house to the ground because I shredded the blueprint
21:29:04 <gpilz> I don't know why you would do that
21:29:24 <datsun180b> gpilz: we won't always destroy them, current thinking is instead to stop them and maintain the assembly rows as a history of actions upon an app
21:29:49 <datsun180b> we just don't have an "assembly stop" mechanism
21:29:51 <devkulkarni> gpilz: also, there will be 'app stop' command coming soon
21:30:13 <gpilz> ed: that seems to be the real issue
21:30:19 <datsun180b> further, there's the matter of associating an app with up to one deployed-and-running instance of your code
21:31:18 <devkulkarni> ravips: in the use case that you described, how will having the plan cli help?
21:31:21 <datsun180b> ravips: i imagine the plan and assembly commands will be removed in the future
21:32:13 <datsun180b> registering an app will be a matter of "app create", not "plan create", and building & deploying an app will be a matter of "app build" or "app deploy", not "assembly create"
21:32:42 <ravips> devkulkarni: admins can register plans using plan cli and user can use plan id in the app cli to reference existing plans (need some auth checks)
21:32:43 <datsun180b> that a plan or assembly is created or destroyed is not a concerning matter for a user, ideally
21:32:54 <gpilz> +1
21:33:23 <datsun180b> ravips: there's also the matter of separating registered plans from provided templates
21:33:25 <devkulkarni> ravips: that could be done by introducing 'app register'
21:35:09 <ravips> devkulkarni: register app will create an assembly?
21:35:15 <datsun180b> no, an app
21:35:19 <devkulkarni> ravips: no
21:35:39 <datsun180b> assemblies are instances of building/built/deployed user code
21:35:42 <ravips> what's the difference between assembly and an app?
21:36:01 <devkulkarni> I am saying that the use case you described (someone registers an app template/plan file and others use it) can be achieved by something like 'app register'. we don't specifically need the plan cli for that
21:36:20 <datsun180b> an app includes the registration, overhead like heat components, logfiles, user secrets, and activity history
21:36:27 <devkulkarni> ravips: what datsun180b mentioned above
21:36:36 <datsun180b> by "includes" i mean "will ideally include"
21:36:40 <ravips> okay, app deploy will create a running instance
21:36:48 <devkulkarni> ravips: yep
21:36:49 <datsun180b> (building it if need be)
21:37:13 <devkulkarni> ravips: you might want to take a look at my comment on this: #link https://review.openstack.org/#/c/160501/
21:37:19 <ravips> ah, perfect
21:38:32 <datsun180b> so this is all to say: lacking an app resource, a user deleting an app means destroying every aspect of it, no questions asked
21:39:14 <devkulkarni> datsun180b: even when we have app resource,
21:39:14 <datsun180b> if that means stopping and possibly removing assemblies, that's what it means
21:39:33 <devkulkarni> wouldn't deleting the app amount to deleting all aspects of it. no?
21:39:43 <gpilz> ummm
21:39:46 <devkulkarni> may be we can take query params at that point
21:39:57 <datsun180b> what i mean is, since we have no app but a plan, deleting a plan should mean the same thing
21:39:58 <devkulkarni> and selectively delete some or other things
21:40:14 <devkulkarni> datsun180b: ok, makes sense.
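[The cascading-delete behavior from the review under discussion, sketched as a toy in-memory model. This is purely illustrative; the class and method names are assumptions, not Solum's actual API or database code.]

```python
class Deployer(object):
    """Toy model: deleting an occupied plan tears down its
    assemblies (stacks, logs, etc.) instead of returning 409."""

    def __init__(self):
        self.plans = {}        # plan_id -> plan record
        self.assemblies = {}   # assembly_id -> owning plan_id

    def delete_plan(self, plan_id):
        # find every assembly created from this plan
        doomed = [a for a, p in self.assemblies.items() if p == plan_id]
        # delete them first (in the real system: heat stack,
        # components, logs), then remove the plan itself
        for assembly_id in doomed:
            self._delete_assembly(assembly_id)
        del self.plans[plan_id]

    def _delete_assembly(self, assembly_id):
        del self.assemblies[assembly_id]
```

The ordering matters because of the foreign-key reference from assemblies to plans mentioned later in the discussion: the plan row cannot go away while assemblies still point at it.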
21:40:18 <gpilz> the reason CAMP doesn't say you must delete the plan when you delete the app
21:40:48 <gpilz> is that the user may have spent some considerable time and effort uploading various artifacts to the platform (datasets, etc.)
21:41:00 <datsun180b> of course, the registration information is important
21:41:23 <gpilz> and, if they wanted to start *another* app (that was very much like the app they just deleted but maybe slightly different)
21:41:30 <datsun180b> so there's another matter of our db's foreign-key reference from assemblies to plans
21:41:37 <devkulkarni> I remember that the CLI makes that distinction between 'delete' and 'destroy'
21:41:38 <gpilz> we didn't want to force the user to have to re-upload all that information
21:41:48 <devkulkarni> sure
21:41:51 <datsun180b> that's the underlying reason--when an assembly is updated, it has to make reference to plan data
21:41:55 <gpilz> however, it doesn't seem like Solum is that worried about this case
21:42:09 <gpilz> all of the stuff I've seen just uses reference to git
21:42:10 <devkulkarni> from user point of view though, there is no 'plan'. there is just an 'app'
21:42:42 <datsun180b> but with the future app model, assemblies won't be updated--each new app deploy would create a new app from the current registered plan
21:42:56 <devkulkarni> datsun180b: minor correction
21:43:03 <datsun180b> go ahead
21:43:04 <gpilz> ed: that makes sense
21:43:09 <devkulkarni> each new app deploy would create a new assembly
21:43:18 <datsun180b> thanks yes
21:43:28 <datsun180b> that's what i mean, promise
21:43:44 <devkulkarni> gpilz: point about not deleting the registration information is valid
21:43:53 <datsun180b> it's the same app, it may be a different plan, but it's a new assembly
21:44:04 <devkulkarni> I think in the cli that distinction is made by specifying 'app delete' and 'app destroy'
21:44:31 <gpilz> that makes sense
21:44:35 <devkulkarni> at least at one point that was the case.
21:44:49 <gpilz> i'm not sure about conflating "apps" and "plans"
21:44:58 <ravips> devkulkarni: can you elaborate distinction between app delete/destroy
21:44:58 <datsun180b> i never signed off on destroy
21:45:03 <gpilz> a plan is not an app - it is information about how to create an app
21:45:05 <devkulkarni> ravips: sure
21:45:18 <datsun180b> gpilz: completely correct about plans and apps
21:45:32 <datsun180b> an app will own one or more plans for versioning
21:45:54 <gpilz> can two apps reference the same plan?
21:45:57 <datsun180b> and that app will own one or more assemblies, up to one of which is deployed unless i'm wrong
21:46:11 <devkulkarni> datsun180b: that is correct
21:46:29 <datsun180b> gpilz: i wouldn't think so
21:47:05 <gpilz> i hear you saying that, to support versioning, you need a higher level resource, "app", which references one or more assemblies
21:47:14 <devkulkarni> gpilz: what do you mean? an 'app' includes the following: a template describing the relevant things like git url + some workflow defn + various instantiations of the associated workflows
21:47:15 <gpilz> each assembly is a different version of the app?
21:47:51 <devkulkarni> gpilz: an assembly is instantiation of different workflows associated with an app
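[The app/plan/assembly relationship as described in this exchange, condensed into a rough sketch. Class and attribute names are assumptions, not the actual Solum schema; it only encodes the invariants stated above: an app owns one or more plans and assemblies, at most one assembly is deployed, and old assemblies are kept as history.]

```python
class App(object):
    """An app owns plans (versions of registration info) and
    assemblies (instantiations of its workflows)."""

    def __init__(self, name):
        self.name = name
        self.plans = []        # registered templates / versions
        self.assemblies = []   # history of build/deploy workflows

    def deploy(self, plan):
        # each deploy creates a new assembly from the current plan;
        # any previously deployed assembly is stopped, not destroyed,
        # so its row survives as a record of actions on the app
        for a in self.assemblies:
            if a['status'] == 'DEPLOYED':
                a['status'] = 'STOPPED'
        assembly = {'plan': plan, 'status': 'DEPLOYED'}
        self.assemblies.append(assembly)
        return assembly
```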
21:48:07 <gpilz> why don't you model those workflows as components?
21:48:28 <devkulkarni> how will that help?
21:48:46 <gpilz> it seems to me like you are inventing a higher-level collection resource
21:49:18 <gpilz> because you've ignored the fact that the resource you've been working with, "assembly", is actually a collection of components
21:49:23 <gpilz> maybe i'm wrong
21:49:52 <datsun180b> i've been working on removing some of those things from an assembly, like the triggers
21:50:00 <devkulkarni> gpilz: we are starting from the point of view of what did currently an assembly represent in Solum's current architecture
21:50:08 <devkulkarni> datsun180b: +1
21:50:10 <datsun180b> and shortly, heat components for example
21:50:40 <datsun180b> so if an app owns an LB, we can swap out the assembly and point the LB to a new web head without the IP of the app changing
21:50:53 <devkulkarni> gpilz: this #link https://review.openstack.org/#/c/160501/ has more explanation of the rationale
21:51:01 <datsun180b> for example
21:51:12 <datsun180b> oh we are nearly out of time
21:52:38 <datsun180b> So I propose we get a solid write-up of what this app resource entails in terms of changes to Solum
21:53:00 <devkulkarni> datsun180b: +1
21:53:04 <akshayc> +1
21:53:19 <datsun180b> We have the NewCLI doc in the wiki that we can use to drive some design
21:53:19 <gpilz> +1
21:53:22 <ravips> +1
21:53:44 <devkulkarni> cool.
21:53:48 <devkulkarni> we are almost out of time.
21:53:56 <devkulkarni> lets just jump to open discussion
21:54:03 <devkulkarni> #topic Open discussion
21:54:04 <datsun180b> devkulkarni, you're the chair, can you task me with that?
21:54:57 <devkulkarni> #agreed datsun180b will document the changes to Solum due to the introduction of the app resource
21:55:13 <datsun180b> that ought to show up on the record
21:55:23 <devkulkarni> #action datsun180b will document the changes to Solum due to the introduction of the app resource
21:55:55 <devkulkarni> akshayc: did you see james_li's patch?
21:56:03 <akshayc> yep
21:56:11 <devkulkarni> james_li: mind updating folks about your patch
21:56:44 <james_li> converting bash to python requires installing docker-py
21:57:01 <james_li> which is not in the global requirements of openstack
21:58:07 <james_li> so in the light of akshayc's infra patch I submitted another patch so that we can enable docker-py just by making a local change
21:58:19 <james_li> https://review.openstack.org/#/c/160041/
21:58:26 <devkulkarni> datsun180b had a comment on it
21:58:38 <akshayc> cool…. any picks for python git library?
21:59:21 <datsun180b> akshayc: no clue, pick one and go nuts
21:59:24 <james_li> akshayc: took a quick look at that, but it seems there is not yet a stable py-git that we can use
21:59:44 <akshayc> ok…. so use shell?
21:59:51 <devkulkarni> it's about time.
21:59:55 <akshayc> ok
22:00:01 <datsun180b> back to #solum, thanks everyone
22:00:06 <devkulkarni> I meant we are at the end of our meeting
22:00:09 <devkulkarni> thanks
22:00:14 <devkulkarni> #endmeeting