20:00:30 <asalkeld> #startmeeting heat
20:00:31 <openstack> Meeting started Wed Mar 11 20:00:30 2015 UTC and is due to finish in 60 minutes.  The chair is asalkeld. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:34 <openstack> The meeting name has been set to 'heat'
20:00:48 <skraynev_> hehe today it works ;)
20:00:55 <zaneb> \o/
20:01:24 <spzala> Hi
20:01:25 <stevebaker> /o\
20:01:28 <shardy> o/
20:01:48 <BillArnold> hi
20:01:59 <KarolynChambers> hi
20:02:00 <pas-ha> o/
20:02:07 <asalkeld> #topic Adding items to the agenda
20:02:29 <asalkeld> so far we have #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda
20:02:30 <skraynev_> I have added one
20:02:40 <Tango|2> Hi
20:03:00 <asalkeld> hi Tango|2
20:03:38 <asalkeld> nothing else?
20:04:03 <asalkeld> #topic run up to kilo release
20:04:15 <asalkeld> #link https://wiki.openstack.org/wiki/Kilo_Release_Schedule
20:04:29 <asalkeld> (March 19 cut-off for features), let's run through what we can get in.
20:04:39 <asalkeld> that's one week
20:04:45 <asalkeld> afaiks
20:05:00 <skraynev_> time for hard work
20:05:04 <asalkeld> and we got a *lot*  to get in
20:05:17 <asalkeld> #link https://launchpad.net/heat/+milestone/kilo-3
20:05:55 <asalkeld> has anyone seen new progress on tags and breakpoints?
20:05:56 <stevebaker> hey, everything in code review
20:06:14 <shardy> asalkeld: yes, I've been testing the latest breakpoint patches today
20:06:18 <zaneb> asalkeld: good progress on breakpoints (hooks)
20:06:22 <shardy> the heat one is very close IMO
20:06:23 <zaneb> I'm reviewing at the moment
20:06:24 <asalkeld> ok, cool
20:06:42 <shardy> heatclient needs a bit more work, I assume shadower is on that already
20:06:45 <zaneb> shardy: agree. I think it should land this week
20:06:53 <inc0> any outstanding patches which will affect db schema?
20:07:02 <zaneb> (the Heat one that is, not the heatclient one)
20:07:16 <mspreitz> hi
20:07:30 <asalkeld> inc0: there are still ~ 3 or 4 convergence patches, tho' smaller
20:07:41 <zaneb> inc0: pretty sure there are, yes. but all the ones I know are for convergence
20:08:23 <asalkeld> we still have convergence, oslo versioned objects, keystone resources, tags and breakpoints
20:08:34 <asalkeld> stevebaker: you about?
20:08:40 <stevebaker> yes
20:08:46 <asalkeld> what's the state of your signalling?
20:08:56 <asalkeld> https://blueprints.launchpad.net/heat/+spec/software-config-swift-signal
20:08:56 <stevebaker> keystone resources are in contrib, so surely they are not subject to feature freeze
20:09:22 <asalkeld> stevebaker: as long as they don't affect the string freeze
20:09:26 <skraynev_> asalkeld: also Mistral resources
20:09:39 <asalkeld> skraynev_: yeah phew that's right
20:09:59 <stevebaker> asalkeld: hmm, is i18n applied to contrib? It probably shouldn't be
20:10:13 <asalkeld> stevebaker: honestly not sure
20:10:40 <asalkeld> stevebaker: is software-config-swift-signal done?
20:11:03 <stevebaker> asalkeld: everything in heat has landed for software-config-trigger and swift signalling. Review on this related heatclient change is needed though https://review.openstack.org/#/c/160240/
20:11:35 <asalkeld> ok, that's right
20:11:57 <asalkeld> say, do we officially use heatclient blueprints?
20:12:01 <asalkeld> there are some
20:12:19 <asalkeld> they look forgotten
20:12:23 <asalkeld> https://blueprints.launchpad.net/python-heatclient
20:12:30 <stevebaker> occasionally. I'd rather use feature bugs
20:12:53 <asalkeld> yeah, i'd rather not use them
20:13:31 <asalkeld> so zaneb have you looked at the convergence reviews today?
20:13:40 <asalkeld> any major issues?
20:13:42 <zaneb> not today
20:14:13 <asalkeld> 6am for me, just got up
20:14:13 <stevebaker> Is there a way of knowing which are the highest priority convergence reviews required, other than blueprint priority?
20:14:18 <zaneb> I don't think it matters whether those patches land before FF
20:14:34 <asalkeld> schema changes
20:14:54 <asalkeld> zaneb: there is a string freeze
20:14:55 <skraynev_> zaneb: could you please take a look at my answers / questions on https://review.openstack.org/#/c/161306/
20:15:07 <skraynev_> if you have a minute ;)
20:15:12 <asalkeld> so we *could* commit with commented-out logs :-O
20:15:16 <zaneb> asalkeld: yeah, we may have to stop landing stuff between FF and RC
20:15:22 <inc0> I've taken a look now - I can't see anything affecting my patches, if I haven't missed anything
20:15:43 <asalkeld> inc0: you should be able to get stack and template in
20:15:47 <zaneb> but it's not like convergence is going to be a working thing in Kilo either way
20:15:52 <shardy> zaneb: you mean it's too late to land anything amounting to useful, so we may as well wait until we branch?
20:16:01 <shardy> ok, cool
20:16:02 <zaneb> yes
20:16:04 <asalkeld> there is some more resource stuff
20:16:10 <inc0> asalkeld, it's already ready... before k3 I'd love to have Resource as well, as it's a critical object
20:16:31 <zaneb> I mean, if we can land stuff safely, then by all means land it (even during FF)
20:17:15 <inc0> I think if we are careful, given the resource and stack objects land, we should be good with database compatibility for the k->l upgrade
20:17:18 <asalkeld> zaneb: you're not worried about string freeze?
20:17:40 <asalkeld> i guess bugs change strings
20:17:44 <zaneb> asalkeld: we should take that into account when landing stuff
20:17:45 <shardy> IMO we shouldn't be landing convergence rearchitecting during FF
20:18:08 <zaneb> some things should definitely not land after FF/SF and before the branch
20:18:33 <asalkeld> only minor things
20:18:34 <zaneb> other things probably can, so long as they are not called anywhere
20:18:45 <zaneb> don't add strings
20:18:48 <zaneb> &c.
20:18:57 <asalkeld> i am ok with that
20:19:14 <zaneb> no db migrations during that period, obvs
20:19:20 <asalkeld> we won't be too long out of business
20:19:24 <zaneb> (but new tables might be ok)
20:19:38 <shardy> Ok, providing we're not making structural changes to existing code then fair enough
20:20:25 <asalkeld> we all good with this topic?
20:20:33 <zaneb> +1
20:20:37 <shardy> +1
20:20:43 <asalkeld> #topic Thoughts on spec for balancing scaling groups across AZs
20:20:51 <asalkeld> #link https://review.openstack.org/#/c/105907/
20:21:13 <KarolynChambers> Hi, we have added comments to address some of the questions with this, so wondering about people's thoughts
20:21:27 <mspreitz> not some, all
20:21:54 <asalkeld> KarolynChambers: it's been a while since i looked at that
20:22:25 <KarolynChambers> wanted to get it on the radar again, so to speak
20:23:09 <stevebaker> KarolynChambers, mspreitz: Would this mean implementing an az scheduler in heat?
20:23:29 <mspreitz> that's a pretty heavyweight wording for what we have in mind
20:23:36 <zaneb> all of my context on this got paged out :/
20:23:37 <mspreitz> we propose a simple counting scheme
20:23:51 <inc0> stevebaker, or using gantt :) but I'm more concerned about having to implement logic which will ensure that the volume will be in the same az as the instance... for example
20:23:57 <asalkeld> mspreitz:  KarolynChambers it might be better to raise after branching when we can give it proper time
20:24:00 <inc0> and all that stuff
20:24:11 <asalkeld> we are all super focused on kilo
20:24:25 <mspreitz> inc0: deliberately not making Heat into holistic scheduler here
20:24:25 <asalkeld> it's not long away
20:24:53 <zaneb> inc0: there is a way to do that, you use get_attr on the server to supply the AZ to the volume
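A minimal HOT sketch of the get_attr approach zaneb describes, keeping a volume in the same AZ as its server; the attribute path used for the server's AZ is an assumption and may differ between releases:

    resources:
      server:
        type: OS::Nova::Server
        properties:
          image: my-image     # placeholder image/flavor names
          flavor: m1.small
      volume:
        type: OS::Cinder::Volume
        properties:
          size: 1
          # assumed attribute path exposing the AZ nova actually chose
          availability_zone: { get_attr: [server, show, 'OS-EXT-AZ:availability_zone'] }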
20:25:18 <mspreitz> zaneb: problem is not really that complex. Scaling group has homogeneous members
20:25:47 <mspreitz> all we propose is that heat counts members in each az
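The counting scheme mspreitz outlines could be as small as the sketch below (an illustration, not the proposed patch): count existing members per AZ and place the next member in the least-populated zone; scale-down would symmetrically remove from the most-populated one.

    import collections

    def pick_az(azs, member_azs):
        """Return the least-populated AZ for the next scaling group member."""
        counts = collections.Counter(member_azs)
        # Tie-break on declaration order so placement stays deterministic.
        return min(azs, key=lambda az: (counts[az], azs.index(az)))

    # pick_az(['az1', 'az2', 'az3'], ['az1', 'az2']) -> 'az3'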
20:26:13 <pas-ha> mspreitz, I wonder what's the plan when a given AZ refuses an instance (no valid host found)?
20:26:27 <inc0> well, sure, its nothing that can't be solved.
20:26:28 <stevebaker> mspreitz: but heat is making placement decisions based on some algorithm (albeit a simple one). We've been deferring getting into this area until gantt is available to consume
20:26:46 <inc0> pas-ha, I guess CREATE_ERROR -> same as now if nova throws
20:26:51 <mspreitz> we can not defer to gantt, it has no interface for choosing which group member to remove
20:26:57 <mspreitz> on scale down
20:27:14 <mspreitz> inc0: yes, proposal here is very simple
20:27:26 <pas-ha> inc0, but then the point of balancing is fading
20:27:29 <mspreitz> well, proposal is vague on that point, initial impl is simple
20:27:48 <mspreitz> proposal is vague, allows impl to try again elsewhere
20:27:49 <shardy> Does anyone know if e.g Neutron has full support for Availability Zones?
20:28:00 <asalkeld> not sure shardy
20:28:04 <inc0> I don't think so
20:28:05 <mspreitz> shardy: not relevant
20:28:15 <inc0> network node is still az-agnostic
20:28:26 <mspreitz> we are just looking for a way for heat to make an AZ choice that eventually gets to whatever the template author cares to apply it to
20:28:48 <inc0> but I agree with mspreitz - it will provide some measure of safety without too much problems I guess
20:28:56 <shardy> mspreitz: so if you can't have per-AZ l3 networks, what happens when an AZ containing the Neutron services goes down?
20:28:56 <inc0> if we keep this thing naive
20:29:30 <inc0> shardy, is that our problem or nova's? :)
20:29:34 <shardy> adding just instance support for the AWS resource kinda makes sense, but folks want to scale stacks containing more than just instances with the native resources
20:29:36 <inc0> or neutrons
20:30:05 <mspreitz> shardy: when an AZ goes down it is entirely unavailable of course, when the impl graduates to handling refusals it will naturally cover that
20:30:12 <shardy> inc0: I guess I'm just asking if the feature will be crippled due to being nova-centric, that's all
20:30:24 <mspreitz> Nova is not the only thing that knows AZ
20:30:28 <mspreitz> Cinder does too
20:30:36 <mspreitz> but that is not critical here
20:30:49 <mspreitz> the idea is that template author applies this only where relevant
20:30:58 <shardy> mspreitz: Ok, I'm just trying to understand the gaps, given that we don't place any restrictions on what resources can be scaled out
20:31:16 <mspreitz> shardy: OK, let me do case analysis
20:31:17 <stevebaker> I guess that is up to the user
20:31:34 <zaneb> shardy: I don't see how that's our problem
20:31:40 <mspreitz> case 1: scaling group of OS::Nova::Server or Cinder volume --- clear to all, I suppose
20:32:00 <mspreitz> case 2: scaling group of atomic resource that does not support AZ - user does not ask for balancing across AZ
20:32:23 <mspreitz> case 3: scaling group of stack - template author propagates AZ param to relevant resources, recurse on the case analysis
20:33:14 <inc0> well, I for one think this is a nice feature to be added, but it's Liberty anyway
20:33:19 <shardy> mspreitz: I guess the question is, do we bear any responsibility to communicate to the user if the scaled unit is only partially balanced over AZs
20:33:44 <shardy> maybe we don't care, and we just document known limitations
20:33:48 <mspreitz> The user can already query that info, albeit not very conveniently
20:33:57 <mspreitz> adding convenient query would be a nice add
20:34:53 <pas-ha> as far as I understood, the user chooses herself what resources in a nested stack to pass the az param to, so the user is aware of what is balanced
20:35:03 <mspreitz> right
20:35:10 <mspreitz> but question is how does user know results
20:35:26 <asalkeld> ok, mspreitz it looks like people don't have a big problem with the spec
20:35:37 <asalkeld> just minor questions
20:35:44 <pas-ha> an implicit output, mapping resource names to az's
20:35:55 <mspreitz> On querying results, I think that can/should be addressed as a later add
20:36:03 <asalkeld> can we handle as normal? and attack it in L?
20:36:05 <pas-ha> like OS::StackID
20:36:09 <mspreitz> OK, actually, we could do that via attribute
20:36:34 <mspreitz> asalkeld: pls clarify what "it"
20:36:43 <asalkeld> the blueprint
20:36:52 <pas-ha> spec and implementation
20:36:53 <asalkeld> (as in do it)
20:37:19 <mspreitz> Karolyn is closer to the driving need now, I think
20:37:29 <mspreitz> Karolyn, delay to L OK?
20:37:34 <asalkeld> mspreitz: you could also add to https://etherpad.openstack.org/p/liberty-heat-sessions
20:37:38 <mspreitz> IMHO scheduler hints are higher priority
20:37:57 <stevebaker> KarolynChambers: I've +1ed, but would like an evaluation of gantt in the Alternatives section
20:38:07 <inc0> mspreitz, it would be hard for it to land in kilo as there is 1 week left and the bp isn't even merged ;)
20:38:24 <KarolynChambers> stevebaker: okay, I will look at gantt
20:38:30 <mspreitz> stevebaker: there *is* an evaluation in the Alternatives of deferring to another scheduler
20:38:34 <asalkeld> lets move on
20:38:42 <inc0> also, clarification of how to handle resource groups with az-agnostic resources would be nice
20:38:54 <mspreitz> Gantt has no interface for choosing which to delete on scale down
20:39:04 <mspreitz> inc0: that is not a problem
20:39:06 <stevebaker> mspreitz: yet
20:39:47 <mspreitz> If Gantt *did* have an interface for choosing which to delete, Heat would need to invoke it.... individuals are identified in the generated template
20:40:00 <asalkeld> mspreitz: what stevebaker is suggesting is: should we work with gantt to add some of that functionality there
20:40:16 <asalkeld> and look into that
20:40:29 <asalkeld> how would that work, would it be better
20:40:38 <mspreitz> asalkeld: that does not get this issue entirely out of Heat, the generated template identifies individuals
20:40:42 <asalkeld> could others, not using heat benefit
20:41:05 <asalkeld> mspreitz: totally we would still need some work in heat
20:41:21 <mspreitz> OK, so we are on to L for this.
20:41:27 <asalkeld> #topic Documentation options for stack lifecycle scheduler hints
20:41:36 <asalkeld> #link https://review.openstack.org/#/c/130294/
20:41:51 <stevebaker> yes, that is what I am suggesting. Or use gantt for initial placement, and logic in heat for removal. I just want to see that analysis. The Alternatives just talks about nova. My idealised notion of gantt is that it is a completely generic service for putting $things in $places
20:41:52 <KarolynChambers> i think people were okay with the code but asalkeld you had a comment in the spec about documentation being needed
20:42:22 <asalkeld> KarolynChambers: looking - it's been a while
20:42:30 <KarolynChambers> what are the options for documentation? more documentation in the spec? or is there some other place to document?
20:43:09 <asalkeld> heat/doc/source is one option
20:43:44 <KarolynChambers> do people have a preference for where you'd like to see it?
20:43:52 <asalkeld> KarolynChambers: i am concerned with hooks as 1. devs need to know they are there
20:44:02 <KarolynChambers> understand
20:44:03 <asalkeld> so they don't regress
20:44:14 <asalkeld> and a user might want to use them too
20:44:22 <asalkeld> so what is it for and how do i use it
20:44:36 <pas-ha> and how to set it up
20:44:41 <asalkeld> yip
20:44:58 <KarolynChambers> agreed
20:45:11 <mspreitz> well, the usage we have in mind is not something users would be doing
20:45:15 <mspreitz> it is for operators
20:45:29 <asalkeld> mspreitz: i more mean operators
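For operators, the setup side of the documentation would likely amount to one switch in heat.conf; the option and hint names below are the editor's recollection of the feature under review and should be checked against the merged spec:

    [DEFAULT]
    # When enabled, heat passes stack identity (e.g. heat_stack_id,
    # heat_stack_name, heat_resource_name) to the nova scheduler as hints,
    # which operator-written scheduler filters can then act on.
    stack_scheduler_hints = True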
20:45:46 <asalkeld> KarolynChambers: you happy with that?
20:45:59 <asalkeld> anyone got better idea for docs?
20:46:21 <skraynev_> nope
20:46:31 <asalkeld> KarolynChambers: topic done?
20:46:34 <KarolynChambers> yes
20:46:37 <KarolynChambers> thank you
20:46:44 <asalkeld> np
20:46:51 <asalkeld> #topic Work to get WSGI services runnable inside Apache/nginx
20:46:55 <pas-ha> let's have them at least somewhere first, then decide what is the best place for them
20:46:56 <skraynev_> http://lists.openstack.org/pipermail/openstack-dev/2015-February/057359.html
20:47:29 <skraynev_> currently keystone and other services want to deprecate eventlet and use apache for these purposes
20:47:39 <skraynev_> do we want to be in this stream?
20:47:44 <inc0> uhh...
20:47:49 <skraynev_> and do the same thing in L ?
20:47:57 <asalkeld> skraynev_: sure
20:48:04 <asalkeld> it's quite easy
20:48:13 <inc0> this increases complexity of installation - adds a new component
20:48:13 <stevebaker> for heat-api*, sure
20:48:20 <zaneb> this only applies to the API though, of course
20:48:25 <pas-ha> yes, api only
20:48:30 <inc0> but I guess if ks will do it anyway...
20:48:48 <asalkeld> inc0: we can have it as an option for starters
20:49:01 <zaneb> +1
20:49:11 <stevebaker> imagine a cloud with every service on a single IP, port 80
20:49:13 <asalkeld> and still have a binary for things like devstack
20:49:22 <pas-ha> or a fancy checker that there is a suitable backend available :)
20:49:29 <skraynev_> stevebaker: dreams :)
20:49:36 <stevebaker> well, 443
20:49:46 <skraynev_> dreams -> reality
20:50:12 <asalkeld> skraynev_: seems like a small spec
20:50:18 <pas-ha> +1
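A minimal sketch of what running heat-api under Apache/mod_wsgi might look like, following the keystone pattern referenced in the thread; the WSGI script path and process settings are assumptions, not an agreed layout:

    Listen 8004
    <VirtualHost *:8004>
        WSGIDaemonProcess heat-api processes=4 threads=1 user=heat
        WSGIProcessGroup heat-api
        WSGIScriptAlias / /var/www/cgi-bin/heat/api
        WSGIApplicationGroup %{GLOBAL}
        ErrorLog /var/log/httpd/heat_api_error.log
    </VirtualHost>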
20:50:25 <asalkeld> #topic open discussion
20:50:26 <skraynev_> ok, so if all agree with this I will add a bp and spec for L.
20:50:27 <inc0> how does this work with the asyncio idea?
20:50:44 <asalkeld> inc0: hope that doesn't happen :-O
20:50:54 <stevebaker> inc0: that is something we can consider for heat-engine
20:51:15 <inc0> stevebaker, do we really want to support 2 types of runtime?;)
20:51:27 <asalkeld> inc0: that's the idea
20:51:29 <stevebaker> inc0: I said consider, not adopt
20:51:30 <inc0> but meh, apache will be easy enough
20:51:45 <inc0> if it helps, why not
20:51:50 <asalkeld> the api uses apache, engines/workers use "something"
20:52:00 <pas-ha> not sure why, as devstack would have apache available anyway for keystone
20:52:08 <asalkeld> eventlet/fork/threads
20:52:15 <stevebaker> workers
20:52:19 <pas-ha> and it already has it for horizon
20:52:40 <inc0> my point is rather...why do it?;)
20:52:55 <zaneb> inc0: read the thread
20:53:13 <asalkeld> inc0: it also gives operators some options
20:53:16 <asalkeld> and easy to do
20:53:18 <pas-ha> apache/nginx is a proven production-grade concurrent web server, better than eventlet-driven python
20:53:26 <stevebaker> eventlet has issues, and at least for APIs moving to web servers is an easy solution
20:53:58 <inc0> fair enough
20:54:10 <inc0> btw, open topic: heat memory consumption
20:54:24 <inc0> shardy mentioned it today, might be good to take a look
20:54:34 <asalkeld> #link https://etherpad.openstack.org/p/liberty-heat-sessions
20:54:36 <asalkeld> ?
20:54:51 <asalkeld> are we using more memory than normal?
20:54:57 <shardy> yeah, everyone - I did some testing today of a TripleO seed with multiple workers
20:55:12 <shardy> heat-engine with 4 workers peaked at over 2G memory usage :-O
20:55:21 <shardy> worse, even, than nova-api
20:55:40 <zaneb> that's... a lot
20:55:44 <asalkeld> shardy: i suspect root_stack is somewhat to blame
20:55:56 <shardy> if anyone has existing thoughts on steps to improve that, it'd be good to start enumerating them as bugs and working through
20:55:59 <inc0> yeah...and template validation
20:56:08 <asalkeld> we need to kill root_stack
20:56:32 <shardy> asalkeld: yeah, I'm sure you're right
20:56:47 <inc0> or try to offload something to db
20:57:02 <asalkeld> that will load every stack in every stack (if that makes sense)
20:57:14 <shardy> Shall I just raise a "heat-engine is a memory hog" bug, and we can capture ideas there on how to fix it?
20:57:32 <shardy> also, does Rally help with quantifying such things, or only performance?
20:57:37 <asalkeld> shardy: as a task to make specific bugs
20:57:38 <inc0> or lets talk in Vancouver?
20:58:00 <asalkeld> inc0: it's really an investigation, isn't it
20:58:20 <shardy> inc0: I'm not sure it really warrants summit time, we just need to do the analysis and fix the code ;)
20:58:20 <asalkeld> someone needs to figure out what all the problems are
20:58:52 <asalkeld> loading templates and files into mem
20:58:55 <inc0> profiler to the rescue ;)
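As a first data point for that investigation, peak RSS per engine worker can be logged with nothing but the stdlib; where to hook this into heat-engine is left open, and this is a sketch rather than a profiling plan:

    import resource

    def log_peak_rss(logger):
        # ru_maxrss is reported in kilobytes on Linux.
        peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        logger.info("peak RSS: %.1f MB", peak_kb / 1024.0)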
20:59:14 <asalkeld> 2 mins ...
21:00:12 <asalkeld> ok, that's mostly it. Thanks all!
21:00:17 <skraynev_> bb
21:00:18 <asalkeld> #endmeeting