12:00:19 <zaneb> #startmeeting heat
12:00:20 <openstack> Meeting started Wed Sep 17 12:00:19 2014 UTC and is due to finish in 60 minutes.  The chair is zaneb. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
12:00:24 <openstack> The meeting name has been set to 'heat'
12:00:34 <zaneb> #topic roll call
12:00:37 <asalkeld> o/
12:00:52 <mspreitz> aloha
12:00:52 <ryansb> \o
12:01:12 <inc0> \o/
12:01:57 <zaneb> hmm, small crowd today
12:02:05 <BillArnold_> hi
12:02:31 <asalkeld> skraynev, ...
12:03:13 <ryansb> zaneb: jpeeler is supposed to be in today, not sure where he is atm though
12:03:17 <pas-ha> o/
12:03:31 <asalkeld> look, rent-a-crowd ;)
12:03:47 <tspatzier> hi
12:03:55 <asalkeld> hi tspatzier
12:03:55 <zaneb> shardy?
12:04:18 <zaneb> ok, let's get started
12:04:23 <pas-ha> skraynev is on vacation
12:04:34 <zaneb> #topic Review action items from last meeting
12:04:40 <zaneb> pretty sure there weren't any
12:05:04 <zaneb> #link http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-09-10-20.02.html
12:05:07 <asalkeld> s/heat/ansible :-O
12:05:30 <zaneb> ok, there were reviews for FFE blueprints
12:05:39 <zaneb> which I think all got reviewed or bumped
12:05:51 <zaneb> #topic Adding items to the agenda
12:06:12 <mspreitz> HARestarter
12:06:18 <mspreitz> "provider resource"
12:06:35 <asalkeld> mspreitz, you don't love my docs;)
12:06:47 <mspreitz> actually I quite appreciate that something is being done
12:06:54 <ryansb> SupportStatus (from ML discussion)
12:06:57 <mspreitz> I am just confused by one term
12:07:00 <zaneb> mspreitz: are those two different topics?
12:07:02 <mspreitz> yes
12:07:10 * zaneb has not caught up on review comments
12:07:35 <mspreitz> ryansb: what is the ML subject line?
12:07:36 <zaneb> ok, could be a longer meeting than I thought
12:08:02 <ryansb> mspreitz: "Defining what is a SupportStatus version"
12:08:05 <ryansb> #link http://lists.openstack.org/pipermail/openstack-dev/2014-September/045038.html
12:08:12 <asalkeld> note I added this: https://etherpad.openstack.org/p/kilo-heat-summit-topics
12:09:04 <zaneb> asalkeld: good job, thanks
12:09:15 <zaneb> we'll make a PTL out of you yet
12:09:31 <zaneb> ;)
12:09:32 <inc0> I might have 2 words to add about better HA;)
12:09:39 <zaneb> ok
12:09:43 <zaneb> #topic Review priorities & release status
12:09:58 <zaneb> #link https://launchpad.net/heat/+milestone/juno-rc1
12:09:58 <asalkeld> inc0, high availability?
12:10:21 <zaneb> so we got 5 FFEs merged
12:10:24 <inc0> asalkeld, yes, I'm getting email to openstack-dev ready
12:10:34 <zaneb> I think 2 low-priority ones got bumped
12:11:11 <zaneb> so at this stage of the release cycle, effort should be focussed on bug fixing and reviews of bug fixes
12:11:37 <asalkeld> i started looking at : https://bugs.launchpad.net/heat/+bug/1319813
12:11:38 <uvirtbot> Launchpad bug 1319813 in heat "no event recorded for INIT_* resources" [Medium,Triaged]
12:11:42 <zaneb> also, python-heatclient reviews for all of these features that were added. there are a bunch
12:11:45 <asalkeld> but got into the weeds
12:12:12 <zaneb> #action review python-heatclient support for merged features as a priority
12:12:53 <asalkeld> ryansb, we need a query for rc1 bugs/bp
12:12:55 <asalkeld> :)
12:13:12 <zaneb> asalkeld: yeah, Gerrit can't do that
12:13:28 <zaneb> queries are very simplistic :/
12:13:41 <zaneb> that's about all I have to say on this topic...
12:13:44 <asalkeld> wouldn't it be nice if gerrit could get the bug priority and target
12:14:11 <zaneb> #topic HARestarter
12:14:26 <ryansb> asalkeld: I can try, not sure it's possible
12:14:27 <pas-ha> asalkeld, that's probably the Storyboard you are dreaming about
12:14:53 <zaneb> so HARestarter is an evolutionary dead-end and I proposed a patch to deprecate it
12:15:01 <asalkeld> pas-ha, not sure
12:15:13 <zaneb> because I am constantly seeing people completely misunderstand what it is and what it does
12:15:18 <zaneb> hi, inc0 ;)
12:15:36 <zaneb> but I gather mspreitz objects
12:15:37 <inc0> zaneb, hello, I hope you don't hate me just yet;)
12:15:38 <asalkeld> zaneb, so the new version would just delete the resource and convergence would fix it?
12:16:32 <zaneb> asalkeld: I'm not sure that we need a "new version" for that
12:16:46 <mspreitz> can someone explain why HARestarter will not be supportable in the future?
12:16:57 <zaneb> but yes, once we have convergence just deleting the offending resource would be infinitely preferable to what we do now
12:17:26 <zaneb> mspreitz: resources are supposed to be independent
12:17:27 <asalkeld> mspreitz, it's not very generic, and does not handle dependent resources
12:17:41 <asalkeld> so restart deletes the server
12:17:49 <zaneb> mspreitz: that resource operates on its containing stack
12:18:14 <asalkeld> and doesn't know what to do with volumes/networks
12:18:16 <mspreitz> I understand that the name is misleading.  That is easily fixed by a gradual transition to a new name.
12:18:42 <asalkeld> maybe do something like the autoscaling
12:19:03 <asalkeld> and make the item of monitoring/restarting a nested stack
12:19:15 <zaneb> tbh I don't feel like there is a lot more to discuss here
12:19:21 <zaneb> deprecation is not removal
12:19:35 <zaneb> it's an indicator to users of our future plans
12:19:49 <inc0> I am preparing an email describing how we could use convergence to fix stateful vms, which cannot simply be deleted and recreated
12:19:52 <zaneb> which have been discussed to death on many, many occasions
12:20:05 <mspreitz> Will we give users a transition period, during which both HARestarter and whatever we prefer to replace it with are available?
12:21:20 <zaneb> mspreitz: fair question. I don't think we have decided
12:21:26 <asalkeld> mspreitz, anything more?
12:21:26 <asalkeld> next(topic)
12:21:31 <mspreitz> yes more
12:21:39 <zaneb> I don't think there will be a 'replacement' as such
12:21:41 <mspreitz> It is pretty tough on users if there is no transition period
12:22:05 <zaneb> it will become irrelevant with convergence
12:22:10 <mspreitz> Users need a way to do HA
12:22:18 <zaneb> mspreitz: well, it's even tougher if we don't warn them in advance
12:22:20 <mspreitz> that is, *replace* a user-chosen set of things
12:22:25 <zaneb> so that's why we're deprecating
12:22:30 <mspreitz> I am fine with warning
12:22:42 <mspreitz> but we need to give users and/or providers something they can do
12:22:45 <zaneb> the time removal is actually proposed would be the time to object, if there are still people relying on it
12:23:10 <mspreitz> If the whole idea of *replacement* is a dead-end, I want to start preparing now
12:23:40 <mspreitz> But there needs to be an effective way to do HA, now and in the future
12:23:45 <zaneb> mspreitz: I think inc0 has some ideas for you, but that's another topic
12:23:48 <mspreitz> I can do HA now with HARestarter, that is the only way now
12:24:09 <mspreitz> Maybe we should hear that first
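For context on the deprecation mechanics under discussion: Heat resource plugins carry a support_status attribute, and deprecation is expressed by setting it to DEPRECATED rather than by deleting the plugin. A minimal sketch, assuming the heat.engine.support module of this era; the class body and message are illustrative, not zaneb's actual patch:

    # Sketch: deprecating a resource plugin without removing it.
    # heat.engine.support and its DEPRECATED constant are real API;
    # the class and message below are illustrative only.
    from heat.engine import resource, support

    class Restarter(resource.Resource):
        # Users referencing this type get a deprecation warning, but
        # their templates keep working until removal is proposed.
        support_status = support.SupportStatus(
            status=support.DEPRECATED,
            message='HARestarter operates on its containing stack and '
                    'does not handle dependent resources; convergence '
                    'is expected to supersede it.')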
12:24:18 <zaneb> #topic Nova HA
12:24:25 <mspreitz> a "warning" that simply says you will not be able to do something in the future is not very good
12:24:25 <zaneb> inc0: go
12:25:07 <inc0> well I want to get a discussion running about joining convergence and nova service_group and evacuate
12:25:26 <inc0> I will send email to openstack-dev about that soon describing detailed idea
12:25:29 <asalkeld> sorry mspreitz my wifi went weird and i didn't see any output for 5 mins
12:25:33 <mspreitz> service_group or server_group?
12:25:42 <inc0> service_group ofc, thanks
12:25:57 <mspreitz> asalkeld: not much, really
12:26:18 <inc0> so the flow would be: convergence observes a given resource, for example a vm
12:26:20 <mspreitz> inc0: is there a short version for here?
12:26:37 <inc0> host with this vm dies, nova posts notification
12:26:45 <zaneb> asalkeld: http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-09-17-12.00.log.txt has the scrollback if you need it
12:26:53 <therve> Is it really service group? What's that?
12:27:07 <inc0> convergence gets this notif and restarts vm on different host
12:27:07 <asalkeld> zaneb, ta
12:27:10 <inc0> using shared storage
12:27:33 <inc0> so in effect this will look like normal restart from vm perspective
12:28:01 <inc0> this won't cover things like kernel panics in vm, but will cover hardware failure
12:28:03 <zaneb> inc0: is the shared storage really something that Heat should know about? isn't that Nova's job to handle?
12:28:25 <inc0> zaneb, nova will; heat will just send an api call to evacuate
12:28:37 <mspreitz> why were server groups mentioned?
12:28:38 <asalkeld> this is the difference between active and passive monitoring
12:28:42 <inc0> equivalent to nova evacuate instance --on-shared-storage
12:28:43 <mspreitz> I mean, they were not mentioned in the outline
12:28:48 <asalkeld> very different things
12:28:49 <inc0> if the client chooses to use that option
12:29:24 <inc0> mspreitz, service_group, nova uses zookeeper to monitor health of hosts
12:29:46 <inc0> and convergence will want to know when host dies because that might affect stack
12:29:52 <mspreitz> sorry I misread earlier response
12:30:04 <therve> inc0, Does it really need to involve heat?
12:30:15 <zaneb> inc0: I don't want to know when a host dies
12:30:18 <inc0> therve, I think yes, not on low level
12:30:22 <zaneb> I want to know when my VM dies
12:30:49 <inc0> zaneb, ok, but it would be a more effective way to connect one with the other
12:31:06 <mspreitz> If host dies, nova knows that VM died
12:31:14 <mspreitz> Nova should emit notification of both, right?
12:31:14 <zaneb> right, but Nova should do that connecting
12:31:35 <therve> +1
12:31:50 <inc0> sure, but we want to know that the vm needs to be restarted
12:31:54 <inc0> one way or another
12:31:55 <zaneb> Nova is an abstraction layer for hosts
12:31:59 <Qiming_> mspreitz, Nova cannot do that; current VM lifecycle events are emitted from nova-compute, which goes away with the host
12:32:06 <zaneb> nothing outside Nova should know that hosts exist
12:32:24 <mspreitz> Qiming_: sounds like a bug in Nova
12:32:26 <asalkeld> conductor could emit that event
12:32:33 <mspreitz> Nova should emit the notifications about Nova resources
12:32:43 <asalkeld> yip
12:32:46 <zaneb> +1
12:32:53 <inc0> right now nova doesn't do notifs about host health
12:33:02 <mspreitz> we don't care about host health
12:33:12 <mspreitz> we care about virtual resources that Heat created
12:33:12 <inc0> so since we'll need to add them anyway, we can append info about the affected resources
12:33:14 <therve> I'm not even sure why we talk about that. Is there anything actionable?
12:33:21 <inc0> and that is what convergence will parse
12:33:41 <mspreitz> Heat does not necessarily even know the host for everything it creates
12:33:43 <inc0> anyway, however we do that, convergence will trigger nova action
12:33:44 <Qiming_> asalkeld, I hope nova-conductor will emit host failure events, along with a list of VMs that were running on that failed host, but the thing is not there yet
12:33:49 <mspreitz> note that servers are not the only things with hosts
12:34:04 <asalkeld> Qiming_, sure - nova bug
12:34:14 <mspreitz> and Cinder
12:34:23 <asalkeld> we need to come up with a design for future guest HA
12:34:26 <mspreitz> and Neutron?
12:34:42 <zaneb> so to summarise this discussion...
12:34:51 <asalkeld> probably doesn't need to be now
12:34:59 <zaneb> in the future Nova may emit more useful notifications than it does now
12:35:11 <zaneb> we should plan on listening for them.
12:35:18 <inc0> and act on them
12:35:36 <ryansb> so that doesn't really change plans, does it? I feel like that's in scope for convergence already
12:35:37 <mspreitz> zaneb: we should actively lobby for more useful notifications from Nova, Cinder, whatever else is relevant
12:35:38 <inc0> that's what convergence is supposed to do, right?
12:35:59 <Qiming_> if only Nova provides sufficient support for host/vm failure detection and recovery, Heat only needs to decide what additional options should be exposed to end users
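To make inc0's proposed flow concrete: on a host-failure notification, the reacting service would call Nova's evacuate API, which is what `nova evacuate instance --on-shared-storage` does on the CLI. A rough sketch with python-novaclient; the notification payload shape and handler are assumptions for illustration, and only the client calls are real API:

    # Sketch: react to a (hypothetical) host-failure notification by
    # evacuating each affected VM onto another host via shared storage.
    from novaclient import client as nova_client

    # Auth values are placeholders; '2' selects the v2 compute API.
    nova = nova_client.Client('2', 'user', 'password', 'project',
                              'http://keystone:5000/v2.0')

    def handle_host_failure(notification):
        # Assumed payload: Nova (e.g. nova-conductor, per Qiming_'s
        # suggestion) lists the servers that ran on the failed host.
        for server_id in notification['affected_servers']:
            # Rebuild the server elsewhere, reusing its disk on shared
            # storage instead of rebuilding from the image. Depending
            # on the client/API version, a target host may need to be
            # supplied explicitly instead of None.
            nova.servers.evacuate(server_id, host=None,
                                  on_shared_storage=True)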
12:36:03 <zaneb> #topic SupportStatus versions
12:36:07 <asalkeld> inc0, at least give tools to the user to be able to do that
12:36:08 <inc0> ryansb, yeah, but what is beyond the scope of convergence is configurable actions
12:36:18 <zaneb> #link http://lists.openstack.org/pipermail/openstack-dev/2014-September/045038.html
12:36:29 <mspreitz> Now that inc0 has spoken, I'd like to return to HARestarter
12:36:45 <therve> I think there is another topic
12:36:47 <zaneb> we have moved on to SupportStatus
12:36:49 <ryansb> inc0: ah, thank you.
12:37:02 <zaneb> was there a conclusion to this thread?
12:37:08 <ryansb> zaneb: no
12:37:16 <zaneb> I thought we were going with Gauvain's proposal
12:37:30 <asalkeld> +1 to the git tag
12:37:35 <asalkeld> whatever that is
12:38:07 <ryansb> zaneb: It didn't look like we reached a conclusion, which is why I brought it up.
12:38:08 <asalkeld> not sure the standard lib thing made sense
12:38:25 <zaneb> asalkeld: yeah, and SpamapS ended up agreeing
12:38:52 <asalkeld> 2014.X
12:38:55 <Qiming_> git tag is not consistent at the moment, it is mixing 2014.2.b3 with juno-3, IIRC
12:39:06 <zaneb> ryansb: which part is unresolved?
12:39:33 <zaneb> Qiming_: well... it's always going to be a _future_ git tag, right
12:39:56 <zaneb> so if you added a resource last week, you'd say it's available from 2014.2
12:40:01 <ryansb> zaneb: I saw 2 proposals, one for deployers to host documentation on their deployment
12:40:16 <Qiming_> zaneb, agreed
12:40:17 <mspreitz> Can a commit have more than one tag?  How about a series of tags that are about the supported resources?
12:40:21 <ryansb> and one for having our up-to-date CI-deployed docs include a supported-since tag
12:40:58 <zaneb> ryansb: those ideas are not in conflict :)
12:41:21 <asalkeld> yeah they get the version from the same place
12:41:37 <ryansb> Fair enough.
12:41:41 <asalkeld> it's only a question of 2014.x or juno/icehouse
12:41:56 <asalkeld> and it seemed most were +1 for 2014
12:42:07 <mspreitz> git has something that increases in a linear way
12:42:35 <zaneb> asalkeld: that's true, but I imagine that deployers should just strip out the version when they generate their own docs. their users only care about what their cloud currently supports
12:42:40 <mspreitz> but I do not know if it is exposed as something that is syntactically linearly increasing
12:43:34 <asalkeld> i don't think this is a big issue
12:43:46 <zaneb> here's my position: the docs.o.o docs are for the OpenStack project. The OpenStack project does releases every 6 months. Therefore it should document the release in which stuff was added
12:44:07 <zaneb> if individual users have different requirements, they can generate their own docs from the same source
12:44:16 <asalkeld> sure
12:44:24 <therve> +1
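For reference, the convention being settled here is what resource plugins already express in code: a SupportStatus carrying the first coordinated release version the resource ships in. A minimal sketch, assuming heat.engine.support; the resource class itself is illustrative:

    # Sketch: tagging a new resource with the release it first ships
    # in. Per the discussion, the version is the future coordinated
    # release tag (e.g. 2014.2 for Juno), not a codename.
    from heat.engine import resource, support

    class ExampleResource(resource.Resource):
        support_status = support.SupportStatus(version='2014.2')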
12:45:00 <zaneb> #topic Critical issues sync
12:45:12 <zaneb> thanks to the folks who fixed the gate
12:45:27 <zaneb> we need to keep a closer eye on the requirements sync, apparently
12:45:39 <zaneb> any other critical issues?
12:45:42 <therve> What happened?
12:45:47 <pas-ha> zaneb, you mean sync it faster?
12:46:13 <asalkeld> requirements got to the 55th review i think
12:46:18 <zaneb> pas-ha: yeah, I mean actually review the auto-proposed patch :)
12:46:27 <therve> AH because of the config change, yeah it took some time
12:46:38 <therve> Should we move out the generated config?
12:46:39 <pas-ha> therve, not only
12:47:10 <pas-ha> there was a version mismatch in requirements between oslo.db and heat
12:47:28 * zaneb had no idea the patch was there
12:47:45 <zaneb> ryansb: maybe that should be at the top of our dashboard :)
12:48:00 <therve> pas-ha, Isn't that the point?
12:48:26 <pas-ha> yes, but oslo.db released before we synced
12:48:26 <ryansb> dashboard law #451 all dashboards eventually encompass everything, such that you require a dashboard-dashboard.
12:48:35 <ryansb> zaneb: I'll look into it.
12:48:58 <zaneb> ryansb: "OpenStack Proposal Bot" is the one to look for
12:49:05 <therve> We should look into an incubator sync maybe before release too
12:49:25 <zaneb> therve: that's probably wise
12:49:51 <therve> I'll do it
12:50:08 <zaneb> #action therve sync oslo incubator
12:50:46 <zaneb> ok, looking at https://launchpad.net/heat/+milestone/juno-rc1 we are not in terrible shape
12:51:03 <zaneb> please review bug fixes
12:51:36 <asalkeld> ok
12:51:39 <zaneb> #topic Open Discussion
12:51:51 <zaneb> I think we got to all the proposed topics?
12:51:55 <therve> https://review.openstack.org/92124 should be ready if the window is still open :)
12:52:01 <mspreitz> didn't finish one
12:52:09 <asalkeld> provider templates
12:52:18 <mspreitz> right
12:52:34 <zaneb> therve: it's not, sorry :( I had to bump your bp
12:52:43 <therve> zaneb, Oh okay :/
12:52:54 <mspreitz> zaneb: missed one topic
12:52:59 <mspreitz> as well as not finishing another
12:53:15 <zaneb> therve: yesterday was pretty much the deadline
12:53:15 <asalkeld> mspreitz, ...
12:53:24 <therve> Damn
12:53:37 <zaneb> mspreitz: oh, sorry. let's do that one now then
12:53:39 <mspreitz> topic missed: what does the term "provider resource" mean
12:53:50 <mspreitz> as far as I can tell, it means "nested stack"
12:54:19 <mspreitz> that is, when documenting nested stacks, why use the term "provider resource"?
12:54:26 <zaneb> afaik it doesn't mean anything
12:54:46 <asalkeld> https://wiki.openstack.org/wiki/Heat/Providers
12:54:47 <zaneb> a provider stack is a nested stack generated from a provider template
12:54:50 <mspreitz> zaneb: so we should just say "nested stack"?
12:54:55 <Qiming_> nested stack is an implementation detail, IMO
12:55:23 <zaneb> you can have a provider _for_ a resource
12:55:31 <asalkeld> https://review.openstack.org/#/c/121741/6/doc/hot-guide/source/composition.rst
12:55:51 <asalkeld> line 20 ^
12:55:58 <zaneb> jasond wrote a glossary of this stuff somewhere
12:56:01 <mspreitz> ah, that wiki page helps explain
12:56:02 <zaneb> it should be in the docs
12:56:54 <zaneb> #link http://docs.openstack.org/developer/heat/glossary.html
12:56:58 <mspreitz> frankly I think using the term "provider" here is just mystifying, and could be entirely dropped.  If it is used, it needs explanation
12:57:14 <mspreitz> it is just a bit of detail about nested stacks
12:57:24 <zaneb> Provider resource
12:57:24 <zaneb> A resource implemented by a provider template. The parent resource’s properties become the nested stack’s parameters. See What are “Providers”? (OpenStack Wiki).
12:57:26 <asalkeld> mspreitz, i thought i added some nice text to help
12:57:54 <mspreitz> yes, that glossary explains
12:58:05 <mspreitz> until now, I thought "provider" was shorthand for "cloud provider"
12:58:10 <zaneb> mspreitz: not all nested stacks are providers
12:58:16 <mspreitz> asalkeld: perhaps I missed your latest update
12:58:37 <asalkeld> "
12:58:39 <asalkeld> A note on the terminology:
12:58:39 <asalkeld> "provider" does not refer to the "provider of the cloud" but to the
12:58:39 <asalkeld> fact that a user can "provide" their own resource types. The term
12:58:39 <asalkeld> is historical but the reader could think of these in terms of
12:58:39 <asalkeld> "custom" or "template" resource types.
12:58:40 <asalkeld> "
12:58:45 <mspreitz> I remain mystified about why it is important to make this distinction.  Why not just speak of nested stacks?
12:59:07 <asalkeld> shrug
12:59:13 <mspreitz> Yes, "custom" would be much better!
12:59:15 <zaneb> mspreitz: an OS::Heat::Stack resource is also a nested stack, but not a provider
12:59:19 * Qiming_ sighs...
12:59:36 * asalkeld need to sleep
12:59:40 <mspreitz> asalkeld: yes, I missed your latest update.  That helps a lot!
12:59:40 <zaneb> also, nested stacks are an implementation detail, not something the user should be concerned with
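To make the glossary entry concrete: a "provider template" is a template the user maps over a resource type via the environment's resource_registry; every resource of that type is then realized as a nested stack whose parameters are fed by the resource's properties. A minimal sketch using python-heatclient, with an illustrative type name, file name, and endpoint:

    # Sketch: a custom type backed by a user-"provided" template.
    from heatclient import client as heat_client

    template = '''
    heat_template_version: 2013-05-23
    resources:
      my_server:
        type: My::Custom::Server    # resolved via the registry below
        properties:
          flavor: m1.small          # becomes a parameter of my_server.yaml
    '''

    environment = {
        'resource_registry': {
            # my_server.yaml is the "provider template"; its parameters
            # must match the properties of My::Custom::Server.
            'My::Custom::Server': 'my_server.yaml',
        }
    }

    heat = heat_client.Client('1', endpoint='http://heat:8004/v1/TENANT',
                              token='TOKEN')
    heat.stacks.create(stack_name='demo', template=template,
                       environment=environment,
                       files={'my_server.yaml': open('my_server.yaml').read()})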
13:00:01 <zaneb> ok, time is up
13:00:04 <zaneb> #endmeeting