20:00:40 <zaneb> #startmeeting heat
20:00:41 <openstack> Meeting started Wed May  7 20:00:40 2014 UTC and is due to finish in 60 minutes.  The chair is zaneb. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:44 <openstack> The meeting name has been set to 'heat'
20:00:57 <SpamapS> ahoy!
20:01:03 <zaneb> why is my window really narrow?
20:01:03 <tspatzier> hi all
20:01:06 <pas-ha> hi
20:01:09 <vijendar> hi
20:01:15 <rpothier_> hi
20:01:25 <shardy> o/
20:01:31 <jasond> o/
20:01:57 <wirehead_> o/
20:02:09 <zaneb> might be a short one this week
20:02:17 <stevebaker> \o
20:02:22 * wirehead_ queues up lolcats
20:02:25 <zaneb> #topic Review last meeting's actions
20:02:34 <zaneb> #link http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-04-23-20.01.html
20:02:39 <zaneb> there weren't any!
20:02:43 <zaneb> nnnnnnnnnext!
20:02:56 <zaneb> #topic Adding items to the agenda
20:03:06 <zaneb> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282014-05-07_2000_UTC.29
20:03:15 <zaneb> anybody?
20:03:20 <zaneb> anybody at all?
20:03:33 <stevebaker> zaneb: client request retry policy
20:03:38 <SpamapS> oo
20:03:41 <zaneb> k
20:03:41 <SpamapS> good one
20:04:03 <SpamapS> lowered in priority as we move toward convergence tho. :)
20:04:46 <mordred> SpamapS: you're moving toward convergence
20:04:57 <SpamapS> I'm converging on convergence.
20:05:07 <zaneb> SpamapS: for the record I still think your schedule for that is hopelessly optimistic
20:05:15 <shardy> stevebaker: fwiw I think that's something we should be driving into the clients, ideally via common library code
20:05:16 <wirehead_> Well, that can be a problem.  If we're going to start de-prioritizing things like client request retries, we'd better make damn sure we get convergence merged expeditiously.
20:05:26 <SpamapS> zaneb: me too, but I've got High Apple Pie up in the SKY hopes.
20:05:34 <zaneb> #topic client request retry policy
20:05:36 <shardy> stevebaker: as opposed to something we want to handle explicitly in heat
20:05:41 <stevebaker> right..
20:05:44 <zaneb> since it appears we already started :D
20:06:17 <SpamapS> wirehead_: everybody get out and push, right? :)
20:06:28 <zaneb> SpamapS: ok, but it's not clear to me how unrealistic hopes are helpful to good planning
20:06:41 <SpamapS> So I don't mean to be flippant.
20:06:51 <stevebaker> clients.py is ... primitive.  I'm thinking that a client lib should be contributed via a plugin, which matches recent plugins-all-the-things activity
20:07:03 <SpamapS> _I_ want a long term solution, but I will certainly chase short term patches for extremely important issues.
20:07:20 <zaneb> SpamapS: right, we should do both
20:07:38 <stevebaker> a client plugin can provide standard handling of things like NotFound ignoring
20:07:50 <shardy> +1 on anything which avoids while loops in every resource, or heat-specific wrappers for every client
20:07:51 <zaneb> stevebaker: ++
20:09:03 <SpamapS> The question is, what now is the better course.. driving retry into all other client libs.. or doing while loops until we get to convergence, which is effectively going to be a massive retry loop. :)
20:09:04 <stevebaker> and eventually could provide a way of invoking client calls with a particular retry policy. Policies could be things like tolerate NotFound, retry on 500, fetch a new token and retry if token expires etc
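(For illustration: a minimal sketch of the kind of retry policy stevebaker describes, written as a plain Python helper. The helper name, parameters, and back-off behaviour are assumptions for this sketch, not the blueprint's actual API.)

    import time


    def call_with_retry(call, retries=3, delay=2,
                        retry_on=(Exception,), ignore=()):
        """Invoke a zero-argument callable with a simple retry policy.

        'retry_on' lists exception types worth retrying (e.g. a 500 from
        the service); anything in 'ignore' (e.g. a client's NotFound) is
        swallowed and None is returned instead.  Hypothetical helper, not
        the client-plugin blueprint's actual API.
        """
        for attempt in range(retries):
            try:
                return call()
            except ignore:
                return None
            except retry_on:
                if attempt == retries - 1:
                    raise
                time.sleep(delay * (attempt + 1))  # simple linear back-off

    # e.g. call_with_retry(lambda: nova.servers.get(server_id),
    #                      ignore=(nova_exceptions.NotFound,))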
20:09:33 <shardy> SpamapS: can't we drive the retry via oslo, that's where all the base client stuff lives right?
20:09:45 <SpamapS> shardy: I haven't been following that work.
20:09:49 <SpamapS> so I am not sure.
20:09:56 <stevebaker> SpamapS: I don't think every retry scenario could be handled by client libs
20:10:04 <zaneb> SpamapS: it's not clear to me that retries would be a bad thing even in the convergence model
20:10:41 <stevebaker> anywhoo, I just wanted to start the topic, to see if I should flesh out a clients plugin blueprint to present at http://summit.openstack.org/cfp/details/428
20:10:41 <shardy> stevebaker: I'm just arguing against fixing it in a way which all of openstack won't benefit from
20:10:48 <zaneb> SpamapS: the point of convergence is that you can deal with it when things fail. It's still always better if they don't fail.
20:10:51 <wirehead_> Just a data point — in our production experience, we see a LOT more errors of the indeterminate kind than of the simple easy-to-retry kind
20:10:52 <shardy> retrying failed API calls is hardly specific to heat
20:11:21 <wirehead_> e.g. "Hey, we got a 500 from Nova and this means that we may or may not be getting a server"
20:11:31 <stevebaker> shardy: heat is different to other client users. Horizon has a monkey who can click the button again if a request fails.
20:11:32 <SpamapS> zaneb: they would certainly not be a bad thing. The model makes failures less costly is all.
20:11:55 <zaneb> SpamapS: cool, we agree then :)
20:12:27 <SpamapS> wirehead_: right, undefined states are a bug in Nova and most are impossible to deal with automatically no matter what model we use.
20:12:28 <shardy> stevebaker: lol ;)
20:12:52 <shardy> stevebaker: I still think it's a common pattern which should be solved in a common way, once
20:13:01 <wirehead_> SpamapS: well, not entirely.  For example, you can add metadata to the server and wait-and-see if the server eventually comes up with the metadata you set.
20:13:17 <stevebaker> eventually client retry policies could be configured via heat.conf, to tailor for snowflake failure modes of particular clouds
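(If those policies did become configurable via heat.conf, the oslo.config side might look roughly like this; the option names and group are invented for illustration and are not real heat settings.)

    from oslo.config import cfg  # heat's configuration library at the time

    # Hypothetical options: invented names, not actual heat.conf settings.
    client_retry_opts = [
        cfg.IntOpt('client_retry_limit', default=2,
                   help='Number of times to retry failed client calls.'),
        cfg.FloatOpt('client_retry_interval', default=1.0,
                     help='Seconds to wait between retries.'),
    ]

    cfg.CONF.register_opts(client_retry_opts, group='client_retries')

This would correspond to a [client_retries] section in heat.conf, so operators could tune it per cloud.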
20:13:19 <zaneb> stevebaker: what about moving to python-openstackclient as our client lib?
20:13:21 <SpamapS> wirehead_: Most of those are solvable and are more often retryable if the API gives us a user defined guaranteed-to-be-unique-ID.
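(To make the wait-and-see idea concrete: a sketch that tags the create request with a unique metadata marker and then polls for it after an ambiguous failure. It assumes a novaclient-style interface; the marker key and helper name are invented for illustration.)

    import time
    import uuid


    def find_server_by_marker(client, marker, timeout=120, interval=10):
        """Poll the server list for a unique metadata marker.

        Used after an indeterminate error (e.g. a 500 on create) to learn
        whether the server actually came up.  Sketch only.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            for server in client.servers.list():
                if server.metadata.get('heat-marker') == marker:
                    return server
            time.sleep(interval)
        return None

    marker = str(uuid.uuid4())  # a guaranteed-unique ID we control
    # client.servers.create(name, image, flavor, meta={'heat-marker': marker})
    # ...on an ambiguous error:
    # server = find_server_by_marker(client, marker)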
20:13:48 <zaneb> don't know if that has a more consistent interface?
20:13:59 <stevebaker> zaneb: I think that is a common cli which just depends on all the client libs
20:14:00 <shardy> zaneb: we could consider that for the v2 API, like keystone has, but it's a fairly user-hostile move IMO
20:14:13 <SpamapS> zaneb: that would be pretty sane. But what we saw 2 months ago was that it was still immature.
20:14:28 <shardy> also there is no support for heat atm (or last time I checked there wasn't) so could be significant work
20:14:43 <stevebaker> someone should totally add that
20:15:08 <zaneb> action, stevebaker to totally add that
20:15:21 <shardy> stevebaker: I meant to look into it but $other_stuff has got in the way
20:15:39 <stevebaker> shardy: I agree that some retry logic should be pushed into client libs, but heat will have to do something for some transient errors
20:16:48 <stevebaker> anyway, I'll describe my solution during http://summit.openstack.org/cfp/details/428 and even if we don't do the retry policy, client plugins will still add some value
20:17:24 <shardy> stevebaker: I'd still like clarification of why heat requires special handling of such errors, vs users getting annoyed when spurious failures happen via horizon or any other service driving other APIs
20:18:10 <shardy> stevebaker: not necessarily right now, it can wait for the beer track next week ;)
20:18:18 <stevebaker> shardy: because tripleo don't want to brick their cloud due to one transient error
20:18:18 <wirehead_> mmmm... beer track
20:18:34 <wirehead_> Also, autoscale :)
20:18:36 <pas-ha> shardy: one thing might be autoscaling, when resources are scaled up and down constantly
20:18:47 <pas-ha> wirehead_: stolen
20:19:10 <shardy> stevebaker: so they won't, we'll implement convergence and retry from failed states in heat, and client retry-n-times loops in some client code (hopefully common and not all-the-*fooclients)
20:19:17 <stevebaker> pas-ha: are you suggesting a *rum* track at summit ?!
20:19:24 <wirehead_> YARR!
20:19:50 <pas-ha> stevebaker: lol :)
20:19:58 <SpamapS> I think in Hotlanta they drink Hennessey.
20:20:27 <shardy> pas-ha: I still don't get the specialness, you retry until it works, or give up after $n attempts, possibly with some multiplier on the delay between attempts or something
20:20:33 <wirehead_> I move that the Triple-O team brings cases of Belgian Trippels.
20:20:55 <wirehead_> But seriously, most of what triple-o needs, autoscale needs just as bad if not worse.
20:21:09 <SpamapS> which errors are getting special handling?
20:21:28 <zaneb> SpamapS: none, at the moment
20:21:51 <zaneb> SpamapS: shardy's question is, why is this only Heat's problem?
20:21:56 <stevebaker> shardy: I'm probably thinking more of client calls with no side-effects, say a server get should retry a couple of times and ignore any kind of error, then just give up and it's no big deal to use the stale server object
20:21:58 <shardy> well keystone user deletes used to, then I removed it and have not yet reinstated it
20:22:52 <shardy> stevebaker: Ok, got it, so that's kinda additional error-path tolerance on top of the retry
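(A sketch of that tolerant, side-effect-free read: retry a couple of times, swallow any error, and fall back to the object we already have. The helper is illustrative, not heat code.)

    import logging

    LOG = logging.getLogger(__name__)


    def refresh_or_stale(fetch, stale, attempts=2):
        """Try a side-effect-free fetch (e.g. a server GET) a few times;
        if every attempt fails, log it and return the stale copy."""
        for _ in range(attempts):
            try:
                return fetch()
            except Exception as exc:
                LOG.debug('ignoring error while refreshing: %s', exc)
        return stale

    # e.g. server = refresh_or_stale(lambda: nova.servers.get(server.id), server)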
20:23:30 <stevebaker> shardy: as an aside, KeystoneClientV3 could become one of these client plugins, but it would be registered as 'auth' rather than 'keystone'
20:24:01 <shardy> zaneb: exactly, stuff randomly failing is a bad experience regardless of what user you are
20:24:20 <shardy> stevebaker: Ok, cool, well the keystoneclient wrapper is pluggable already
20:24:30 <shardy> do we really want to abstract every client like that?
20:24:57 <shardy> the heat_keystoneclient stuff atm is a bit of a mess and not really something we want to proliferate, IMO
20:25:13 * shardy holds his hands up as the reason it's in a mess
20:25:14 <wirehead_> I feel like these conversations are what leads people to write articles like "Why I don't use any API wrapper libraries ever"
20:25:59 <stevebaker> shardy: it's not abstracting it, there will still be direct access to the client lib, but the client plugin can provide extra behaviours like consistent error handling, and invoking client calls with a particular retry policy
20:26:30 <shardy> stevebaker: Ok, but I'd still prefer us to get what logic we can into the clients
20:26:36 <stevebaker> it's not a wrapper, honest! ;)
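(Roughly the shape such a client plugin could take, per the description above: direct access to the underlying client plus a couple of shared behaviours. Class and method names are guesses ahead of the blueprint, not an agreed interface.)

    import contextlib


    class ClientPlugin(object):
        """Possible base class for per-service client plugins (sketch).

        Subclasses create the real client lib instance and map their
        service-specific exceptions onto common behaviours.
        """

        def __init__(self, context):
            self.context = context
            self._client = None

        def client(self):
            if self._client is None:
                self._client = self._create()
            return self._client

        def _create(self):
            raise NotImplementedError

        def is_not_found(self, exc):
            """Subclasses say which exceptions mean 'already gone'."""
            return False

        @contextlib.contextmanager
        def ignore_not_found(self):
            """Swallow NotFound-style errors, e.g. around deletes."""
            try:
                yield
            except Exception as exc:
                if not self.is_not_found(exc):
                    raise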
20:29:11 <zaneb> ok, it seems like we're ready to move on
20:29:11 <stevebaker> funnily enough, horizon *does* have a wrapper for every client lib
20:29:23 <wirehead_> Yeah, I was starting to wonder about that.
20:29:29 <shardy> stevebaker: even more reason not to invent another one IMO
20:29:54 <SpamapS> ok
20:30:00 <SpamapS> one last thought..
20:30:16 <SpamapS> it's Heat's problem because Heat's whole reason for being is automation.
20:30:58 <SpamapS> so if Heat _can_ automate a solution to an obvious problem, it should.
20:31:05 <shardy> SpamapS: I'm not saying don't solve it, I'm just arguing that the code which solves it should go somewhere that benefits everyone, not just heat
20:31:05 <sdake> o/ sorry late :)
20:31:47 <stevebaker> shardy: I agree, but there would be _some_ behaviours that would be considered too high-level to belong in a client lib
20:31:50 <SpamapS> shardy: Agree that there are many, even most, cases where the client library should contain the recovery code.
20:32:12 <zaneb> action items on this?
20:32:16 <shardy> SpamapS, stevebaker: cool, I think we're pretty much in agreement then
20:32:29 <stevebaker> zaneb: me write a blueprint
20:32:38 <stevebaker> zaneb: for client plugins
20:32:50 <SpamapS> me choose color for bikeshed
20:33:02 <zaneb> #action stevebaker to write a blueprint for client plugins
20:33:10 <zaneb> #topic Design Summit preparation
20:33:23 <wirehead_> yay!
20:33:29 <zaneb> I created etherpads for all of the sessions that didn't have one already
20:33:39 <zaneb> #link https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#Heat
20:34:24 <zaneb> if folks could go there and add in detail on what they want to discuss, that would be helpful preparation
20:34:41 <shardy> Oh randallburt isn't around, I was going to see if he wanted me to help add detail for the v2 API one
20:34:55 <wirehead_> I am wondering if we should create a fresh etherpad for Scaling, Robustness, and Convergence.
20:35:01 <shardy> I may add some bullet points there anyway
20:35:08 <andrew_plunk> shardy I would assume he would
20:35:19 <zaneb> also, we'll need somebody to kick off the discussion in each session, introduce the topic and what the issues are &c.
20:36:06 <SpamapS> Let's see if Ludacris is available
20:36:16 <zaneb> and if you are the proposer of a session, you are on the hook for finding someone to do that
20:36:31 * stevebaker volunteers for https://etherpad.openstack.org/p/juno-summit-heat-sw-orch
20:36:34 <shardy> zaneb: I can co-introduce the v2 API one with randall, provided he's cool with that
20:36:51 <zaneb> shardy: cool, I think he mentioned already that he was
20:37:32 <zaneb> I don't need to collect names, just wanted to make session proposers aware that it's on them ;)
20:37:35 <shardy> zaneb: likewise the auth-model part of the dev/ops session
20:37:56 <wirehead_> zaneb: so, the merged sessions that became scaling, robustness, and convergence?
20:37:57 <tspatzier> stevebaker: I collected a summary of recent ML discussions on that topic. That gives us more detail on some points for sw-orch and lets us cross out other points already
20:38:13 <tspatzier> stevebaker: ... and I am fine if you run this :-)
20:38:14 <zaneb> wirehead_: yes
20:38:28 <wirehead_> Who counts as the owner?
20:38:29 <stevebaker> tspatzier: were there some items we could take off your original list based on that thread?
20:38:43 <zaneb> wirehead_: lifeless
20:39:00 <SpamapS> The really important duty that isn't stated there is that the "leader" of a session's chief goal is to continuously drive the discussion toward the resulting actions of the session.
20:39:01 <tspatzier> stevebaker: yes, based on your comments to the ML post, I think some items are answered
20:39:05 <zaneb> wirehead_: but if you have something to contribute, bring it
20:39:26 <stevebaker> tspatzier: ok, do you want to go ahead and butcher that etherpad?
20:39:49 <zaneb> SpamapS: I thought you didn't want to have "leaders"?
20:39:57 <SpamapS> zaneb: I do, but they've been distorted into presenters.
20:40:09 <SpamapS> zaneb: the point is that nobody should be standing in front of the crowd, talking at them.
20:40:10 <zaneb> ok, we definitely don't want that
20:40:10 <shardy> SpamapS: +1, and I think we should ban any presentations other than 5mins to give context at the start
20:40:16 <stevebaker> it's more like a chair to keep us on track
20:40:27 <tspatzier> stevebaker: I can do it. Maybe let me send you the summary I've got and if you are ok with it I can copy the contents to the etherpad and do some formatting
20:40:35 <wirehead_> Yeah.  Servant-to-the-crowds and passer-of-the-mic
20:41:11 <stevebaker> tspatzier: or, we could use the collaborative content tool to collaborate on the content ;)
20:41:47 <zaneb> we'll also need volunteers for the note-taking, but we can do that on an ad-hoc basis on the day
20:42:22 <tspatzier> stevebaker: why not :-) I'll just copy my refined collection to the end, we'll hack on it and then remove the old stuff
20:42:34 <stevebaker> tspatzier: +1
20:43:33 <zaneb> PSA: shardy and I are giving a talk on Monday morning, right after the keynotes
20:43:40 <zaneb> #link http://openstacksummitmay2014atlanta.sched.org/event/19c53bb2ba181cd835e24db871612090
20:43:47 <therve> zaneb, Do you want someone to put a name in the etherpad as the "leader" ?
20:44:02 <zaneb> it would be great to get as many heat people there as possible to answer questions afterwards
20:44:09 <shardy> Yeah, come and laugh at my attempt to do a live demo :)
20:44:10 <wirehead_> how about we call them "toastmaster" instead of "leader"?
20:44:24 <zaneb> therve: if you like, but not necessary
20:44:25 <wirehead_> Only 50% kidding about that
20:45:15 <zaneb> looking for the other Heat talks...
20:45:26 * zaneb curses the Demo Theater
20:45:34 <shardy> stevebaker: you have one on software config don't you?
20:45:47 <stevebaker> http://openstacksummitmay2014atlanta.sched.org/event/dae9536f6bb9ad61b3b2ccf39a18515f
20:45:51 <stevebaker> #link http://openstacksummitmay2014atlanta.sched.org/event/dae9536f6bb9ad61b3b2ccf39a18515f
20:45:56 <tspatzier> stevebaker: I copied my content. Feel free to give it a first pass while I go to sleep after this meeting.
20:46:05 <stevebaker> tspatzier: ok
20:46:20 <zaneb> #link http://openstacksummitmay2014atlanta.sched.org/event/18a9671f96918adb3d3dbbd35c981338
20:46:37 <zaneb> that ^ is randall and kebray's talk
20:46:53 <stevebaker> also I'm on this panel, I'm not sure if it overlaps with our last design session http://openstacksummitmay2014atlanta.sched.org/event/36c43c8b4701c057e2f6aa55c4a53c1c#.U2qbp3V52jo
20:47:27 <zaneb> cool, didn't even know about that one
20:47:45 <stevebaker> hopefully it's not one of the bloodsport panels
20:47:57 <zaneb> hopefully it is :D
20:48:00 <SpamapS> You mean hopefully it _IS_
20:48:13 <SpamapS> 4 men enter, 1 man leaves... In a huff.
20:48:24 <stevebaker> IMAGE BASED DEPLOYMENTS BITCHES!!!!
20:48:38 <zaneb> debating TOSCA with Georgy...
20:48:47 * zaneb makes a note to bring popcorn
20:48:48 <stevebaker> what could possibly go wrong
20:48:59 <tspatzier> we are running a special session on heat-translator on Thursday afternoon: http://openstacksummitmay2014atlanta.sched.org/event/c94698b4ea2287eccff8fb743a358d8c#.U2qcFi9QwuE
20:49:40 <tspatzier> I shared that on the ML already. Not purely Heat, but *very* closely related. Would be great to see many of you there.
20:51:16 <zaneb> thanks tspatzier, that's going to be a really interesting one I think
20:51:31 <zaneb> #topic Open Discussion
20:51:41 <zaneb> anything else?
20:52:10 <wirehead_> I'm assuming we'll see y'all at the summit?
20:52:11 <tspatzier> zaneb: yep, I hope so. We are also putting together an etherpad with an agenda draft. I will send a link tomorrow so people can provide input
20:52:53 <zaneb> wirehead_: nope, we've decided to go transparent
20:53:10 <wirehead_> So I'll walk through the hall and get slapped.  A lot.
20:53:11 <wirehead_> Cool.
20:53:28 <stevebaker> I'm coming in disguise
20:53:29 <zaneb> oh, one reminder
20:53:45 <zaneb> we will have a project pod at the summit
20:53:52 <SpamapS> Any chance we will gather for an informal toast to Orchestration and the end of the magnificent reign of the long line of Steves?
20:53:55 <zaneb> #link https://www.youtube.com/watch?v=WLsjlmrQ6Mw
20:54:21 <zaneb> SpamapS: I certainly hope so
20:54:22 <stevebaker> I'll drink to that
20:54:40 <shardy> +1 :)
20:54:58 <zaneb> SpamapS: you meant 'reign of terror', right?
20:55:33 <zaneb> so I'm not sure about the pod details
20:55:41 <zaneb> we may be sharing with another project
20:55:47 <zaneb> but basically it's a big table
20:55:58 <zaneb> and there will be a flipchart we can use for scheduling
20:56:09 <wirehead_> We'll start acting like the other team.  Kinda like pod people.
20:56:23 <SpamapS> as long as it isn't just a shared iPod.. I'm not putting randallburt's earbuds in my ear.... ;)
20:56:27 <zaneb> so take advantage of it if you need to continue to discuss something outside the design sessions
20:56:47 <wirehead_> I'll run your code, but I won't wear your earbuds, bro.
20:57:11 <stevebaker> I want to bend some ears about how servers in isolated neutron networks can communicate with heat
20:57:19 <SpamapS> stevebaker: yes
20:57:43 <SpamapS> and with that.. I must reboot
20:57:48 * SpamapS o/'s early
20:58:47 <zaneb> time to wrap up anyway
20:58:52 <zaneb> #endmeeting