20:01:14 <zaneb> #startmeeting heat
20:01:15 <openstack> Meeting started Wed Apr 23 20:01:14 2014 UTC and is due to finish in 60 minutes.  The chair is zaneb. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:18 <openstack> The meeting name has been set to 'heat'
20:01:30 <zaneb> #topic roll call
20:01:31 <radix> o/
20:01:32 <bgorski> o/
20:01:34 <randallburt> o/
20:01:34 <stevebaker> \o
20:01:35 <tspatzier> hi all
20:01:36 <wirehead_> o/
20:01:40 <shardy> o/
20:01:42 <vijendar> hi
20:01:44 <jasond> o/
20:01:45 <jpeeler> hi
20:01:48 <erecio> o/
20:01:48 <sanjay_sohoni> o/
20:01:48 <kebray> o/
20:02:11 <pas-ha> o/
20:02:13 <rakhmerov> hey
20:02:30 <zaneb> rakhmerov: welcome :)
20:02:42 <rakhmerov> tnx!
20:02:45 <zaneb> good turnout today :)
20:03:05 <sdake> wrong channel :) o/
20:03:10 <zaneb> #topic  Review last meeting's actions
20:03:23 <rakhmerov> :)
20:03:24 <zaneb> #link http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-04-17-00.02.html
20:03:37 <zaneb> zaneb to add deprecation of RouterGateway to Release Notes
20:03:44 <zaneb> stevebaker: I did that
20:03:55 <stevebaker> ta
20:03:57 <zaneb> zaneb to propose options for alternate meeting times on Mailing List
20:04:02 <zaneb> I also did that
20:04:07 <zaneb> just now :D
20:04:18 <tspatzier> I saw it :-)
20:04:26 <zaneb> of course none of the relevant people are here, so not much point discussing it
20:04:37 <zaneb> reply on the ML if you have a preference
20:04:52 <zaneb> * everyone to submit design summit session proposals by the end of the week
20:04:56 <lifeless> o/
20:04:59 <zaneb> did everyone do that?
20:05:06 <zaneb> well done everyone
20:05:13 <radix> :)
20:05:22 <zaneb> #topic Adding items to the agenda
20:05:32 <zaneb> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282014-04-23_2000_UTC.29
20:05:49 <zaneb> any additional items?
20:06:29 <zaneb> I'll take that as a no
20:06:55 <zaneb> #topic Design Summit planning
20:07:06 <zaneb> so I had an idea of how to handle this
20:07:21 <zaneb> ...and then lifeless proposed another session
20:07:35 <skraynev> hello all, sorry for being late
20:07:46 <wirehead_> heh
20:08:09 <zaneb> so, there is an etherpad up to collaborate on this
20:08:09 <wirehead_> so, action item... hassle lifeless?
20:08:16 <zaneb> #link https://etherpad.openstack.org/p/heat-juno-design-summit-schedule
20:08:46 <zaneb> only randallburt, therve and asalkeld have commented on it so far
20:09:41 <zaneb> radix, lifeless: both of your sessions touch on convergence and healing
20:09:58 <radix> yeah, it's something that a lot of people care about :)
20:10:06 <zaneb> I'm not sure whether to merge those into one session or schedule them back-to-back
20:10:11 <zaneb> thoughts?
20:10:15 <stevebaker> its a topic which almost deserves 2 sessions
20:10:17 <randallburt> btw, shardy, I only added the v2 api session so it got on the list; I'd like to collaborate with you on that or let you run it if you prefer
20:10:37 <radix> zaneb: I think it's more important to give thorough focus to convergence + healing, informed by use cases such as autoscaling and tripleO
20:10:40 <shardy> randallburt: sure sounds good
20:10:51 <sdake> merge makes more sense
20:10:59 <shardy> randallburt: not had time to look at that lately so will be good to kick-start the effort again ;)
20:11:24 <stevebaker> zaneb: A session isn't really needed for http://summit.openstack.org/cfp/details/388 if we're only doing the reorg
20:11:36 <randallburt> shardy:  k
20:11:48 <zaneb> stevebaker: I totally plan to reject that one ;)
20:12:34 <radix> zaneb: though I won't say anything about the tripleO session, I am pretty far removed from that topic so maybe it should still have its own
20:12:42 * stevebaker goes to get some Rope
20:12:42 <zaneb> fwiw I don't think it's necessary in general for the person who proposed the session to run it
20:12:48 <stevebaker> by which I mean https://github.com/python-rope/rope
20:12:59 <rakhmerov> do you guys have an item to discuss things like integration with taskflow/mistral/whatever it is at the summit?
20:13:14 <zaneb> the person who proposed the session will be accountable for finding _someone_ to run it though ;)
20:13:14 <sdake> there is a separate session that ttx setup that is approved
20:13:25 <stevebaker> tspatzier: I'm getting to replying to your software-config email. Do you mind if I run the session?
20:13:28 <zaneb> rakhmerov: yes, it's in the cross-project track
20:13:30 <sdake> for cross project integration with emerging stackforge projects
20:13:52 <ruhe> this one http://summit.openstack.org/cfp/details/1
20:14:00 <sdake> yes #1 - must be important :)
20:14:02 <radix> rakhmerov: there's a session proposed for the cross-project sessions: http://summit.openstack.org/cfp/details/335
20:14:04 <stevebaker> first post!
20:14:05 <tspatzier> stevebaker: sounds good; would be happy to drive this together with you
20:14:17 <radix> rakhmerov: though maybe that's not exactly what you're getting at
20:14:32 <zaneb> wow, we got 2 time slots
20:14:49 <radix> huh?
20:15:05 <rakhmerov> no, I think that's fine. Basically I'm here to figure out how we can communicate at the summit
20:15:16 <zaneb> radix: 1 and 335
20:15:21 <radix> oh
20:15:34 <ruhe> and they're already scheduled: http://junodesignsummit.sched.org/
20:16:23 <zaneb> yup, as usual russellb is making the rest of us look bad with his ruthless efficiency
20:16:34 <SpamapS> o/
20:17:08 <zaneb> shardy: how important is the auth model thing, and could we combine it with the operator session?
20:17:14 <SpamapS> Where did this idea that any one person should run a session come from?
20:17:34 <SpamapS> If you have something to say, you come to the front and say it.
20:17:41 <shardy> zaneb: probably, I just wanted to see if there is still a need for discussion on that topic
20:17:44 <wirehead_> run = facilitate.
20:18:00 <zaneb> SpamapS: I agree, but somebody has to start :)
20:18:02 <SpamapS> wirehead_: disagree. We always designate somebody to take notes, but that is not the same as facilitating.
20:18:23 <SpamapS> zaneb: Sure, whoever proposes sessions will likely be the first with anything to say. :)
20:18:23 <randallburt> taking notes != facilitating
20:18:32 <SpamapS> I just don't like the sessions where one person "takes the lead"
20:18:33 <shardy> zaneb: feel free to drop or combine as needed
20:18:36 <stevebaker> thats another job
20:18:42 <randallburt> is moderator a better word? discussion conductor?
20:18:46 <SpamapS> I prefer that the idea dominates.
20:18:59 <zaneb> shardy: ok, thanks. could you leave a comment to that effect?
20:19:08 <SpamapS> anyway, it's a format issue
20:19:26 <SpamapS> Anybody should feel empowered to take the floor. All should be respectful of whoever has the floor. That is all.
20:19:51 <shardy> zaneb: sure
20:19:52 <zaneb> SpamapS: ++
20:20:17 <SpamapS> That said, I want stevebaker to run the software-config session.
20:20:18 <SpamapS> ;-)
20:20:26 <zaneb> lol
20:20:36 <tspatzier> :-)
20:20:40 <radix> so he can teach us how it works? :)
20:20:42 <stevebaker> lol
20:20:45 <zaneb> anybody know anything about #15?
20:20:49 <sdake> need m0ar youtube
20:20:57 <stevebaker> also, come to my main conference talk
20:21:05 <radix> zaneb: the Swift one?
20:21:18 <zaneb> http://summit.openstack.org/cfp/details/387
20:21:21 <SpamapS> #15 on what list?
20:21:23 <zaneb> that one ^
20:21:29 <tspatzier> stevebaker: when is it?
20:21:52 <sdake> zaneb smells like workflow
20:21:56 <radix> "holistic scheduling" sounds like mspatzier would know something
20:21:57 <randallburt> zaneb:  I think that's the internal service version of the lifecycle callbacks.
20:22:13 <tspatzier> radix: you mean mspreitzer I guess
20:22:13 <SpamapS> radix: mspreitzer
20:22:19 <radix> er, yeah, mspreitzer
20:22:21 <radix> sorry :)
20:22:26 <SpamapS> Holistic scheduling isn't workflow.
20:22:27 <randallburt> notifying OpenStack in a way that an operator or other OpenStack services can respond to
20:22:28 <zaneb> btw, good news. we will have a table assigned to us for discussions throughout the conference
20:22:37 <radix> nice
20:22:39 <SpamapS> It is actually quite a refined form of orchestration.
20:22:46 <zaneb> not as good as a meeting room, but considerably better than nothing
20:22:47 <randallburt> zaneb:  nice
20:22:51 <stevebaker> tspatzier: 3:40, Tuesday
20:22:54 <stevebaker> #link http://openstacksummitmay2014atlanta.sched.org/event/dae9536f6bb9ad61b3b2ccf39a18515f#.U1ghBXWSzKQ
20:23:00 <tspatzier> thanks stevebaker
20:23:04 <SpamapS> Declare the things you want, and the engine deploys them optimally.
20:23:11 <lifeless> radix: so perhaps you me and SpamapS hangout and do joint prep work for one session ?
20:23:37 <lifeless> radix: which is your session proposal ?
20:23:38 <SpamapS> (I also think it is _WAY_ over reaching for what Heat and OpenStack are capable of at this time)
20:23:43 <radix> lifeless: are you talking about the convergence thing?
20:23:49 <tspatzier> yes, this session is related to a patch that got recently posted by Bill Arnold, he also opened a BP after sdake requested it on the review
20:23:53 <zaneb> lifeless: http://summit.openstack.org/cfp/details/144
20:24:11 <radix> lifeless: I'm suggesting that convergence + healing have its own session, but I didn't say that tripleo shouldn't have its own
20:24:23 <sdake> well as I recall after looking at the review tspatzier, I was certain there would be popcorn on that idea - so may be best to have a session on it
20:24:39 <sdake> btw, I did remove the -2 :)
20:24:46 <SpamapS> radix: we're suggesting that convergence and healing are paramount to operating an OpenStack cloud with Heat.
20:24:54 <zaneb> SpamapS: for a moment there I thought you were talking about lifeless's proposal
20:24:57 <zaneb> ;)
20:25:00 <tspatzier> agree sdake, this is a bigger topic to be sorted out
20:25:05 <lifeless> radix: I think your session and the one I've proposed overlap a lot
20:25:08 <randallburt> sdake:  I'm tempted to add my own -2 then.
20:25:09 <radix> spamaps: ok, if that's the main point of your proposal then that sounds like it should all just be merged together
20:25:21 <lifeless> radix: I'd hate to have the same discussion twice
20:25:23 <sdake> I -2 because there was no blueprint and its popcorn material
20:25:24 <radix> lifeless: I would suggest to get randallburt in that hangouts too
20:25:27 <SpamapS> zaneb: I do think holistic scheduling will be far simpler once we've moved to distributed state machine and convergence.
20:25:35 <sdake> think its worth having a design discussion around however
20:25:51 <randallburt> radix:  sure, I'm game
20:25:57 <sdake> it might be a good idea, I just dont know what the idea is in detail :)
20:26:06 <lifeless> radix: we have a specific design in mind (of course) - and HP is ponying up people to implement it (its somewhat bold in nature)
20:26:16 <radix> interesting, ok :)
20:27:00 <lifeless> radix: but we're also interested in scale
20:27:08 <lifeless> radix: e.g. we want to deploy 10K node clusters
20:27:26 <radix> thus, no polling statuses every 30 seconds ;-)
20:27:27 <lifeless> radix: (or more - think moonshot / seamicro style rack densities)
20:27:52 <lifeless> radix: indeed, so http://summit.openstack.org/cfp/details/428
20:28:02 <SpamapS> longpoll would be fine. ;)
20:28:05 <zaneb> this is definitely something not contemplated when we started heat :D
20:28:16 <shardy> lol :)
20:28:47 <sdake> pretty soon the heat of heat will be generating sea salt!
20:28:49 <lifeless> we can put > 1K machines in a rack...
20:29:03 <radix> lifeless: so I'm pretty busy the rest of the day, can you suggest a time for us to get together? (randallburt and I are in the same timezone, CST)
20:29:04 <lifeless> its scary density
20:29:43 <zaneb> ok, so we're supposed to have a schedule by Friday(!)
20:29:47 <lifeless> radix: I'm free for the next 4 hours; SpamapS ?
20:29:55 <randallburt> lifeless:  is this yet another thing pinned to taskflow or is that being used as a generic "distributed asynchronous workers"
20:29:57 <SpamapS> I have 2.5 hours
20:30:06 <zaneb> radix, lifeless: sounds like y'all should get together and let me know what you decide
20:30:11 <SpamapS> randallburt: the latter
20:30:23 <randallburt> SpamapS:  oh good.
20:30:36 <SpamapS> randallburt: but given taskflow's origins and future... no sense using something else.
20:30:44 <zaneb> #topic Heat/Mistral collaboration
20:30:52 <zaneb> rakhmerov: go :)
20:30:53 <lifeless> randallburt: taskflow as a design paradigm is uninteresting in the heat context - but 'run this bit of code with guaranteed completion semantics somewhere arbitrary' is very important - and joshua and I did a mini-deep-dive on that in sunnyvale a couple months ago
20:30:58 <rakhmerov> yep
20:30:58 <randallburt> SpamapS:  I remain to be convinced.
20:31:03 <wirehead_> radix: feel free to reschedule planned meetings if necessary. :)
20:31:11 <radix> wirehead_: ok :)
20:31:17 <lifeless> randallburt: and so rather than reinvent things I'd like to use a couple of key slices from within taskflow
20:31:22 <radix> lifeless, SpamapS: ok, I have time after this then :)
20:31:25 <rakhmerov> ok, so for now I have a couple of questions
20:31:27 <radix> randallburt: ?
20:31:33 <radix> randallburt: are you free after this meeting?
20:31:51 <rakhmerov> 1. Are you guys still interested in discussion about integrating with something like Mistral?
20:32:12 <rakhmerov> 2. If yes, how could we plan this collaboration further?
20:32:12 <kebray> radix, he's free for 15-20 minutes, then I'm going to steal him.
20:32:17 <zaneb> rakhmerov: define 'integrating'
20:32:19 <randallburt> radix:  sorry, not for another few hours
20:32:45 <zaneb> guys, let's take the post-meeting scheduling to #heat ;)
20:33:11 <rakhmerov> well, in HK we briefly touched on the idea that you might want to use Mistral, once it gets mature, for internal Heat process automation
20:33:17 <rakhmerov> using workflow engine
20:33:31 <wirehead_> So, for reasonable auto scaling workflows, we need scheduling, perhaps like Mistral provides, to trigger scaling events.
20:33:36 <zaneb> ok
20:33:48 <rakhmerov> I recognize it may sound tricky because it wasn't clear what it was going to be and so on and so forth
20:33:51 <SpamapS> rakhmerov: so we're talking about doing away with the large-scale workflow-ish parts of Heat and focusing on a convergence model.
20:33:54 <zaneb> so the options there as I see them are to use (1) taskflow, or (2) mistral
20:34:07 <rakhmerov> ok
20:34:10 <SpamapS> rakhmerov: we'll still have workflow needs..
20:34:17 <shardy> wirehead_: aren't scaling events triggered by alarms?
20:34:23 <randallburt> zaneb:  I'm not entirely sure (1) is a realistic near-term option
20:34:25 <rakhmerov> good to hear that
20:34:32 <SpamapS> rakhmerov: but I think they'll be entirely internal and thus will likely just leverage taskflow.
20:34:44 <wirehead_> shardy: "Hey, we've got a PR blast going out at 9 AM.  Let's make sure we have enough servers up"
20:34:56 <zaneb> I was personally never convinced that using an externally-facing API (mistral) would be better for the job than a library (taskflow)
20:35:05 <shardy> wirehead_: Ok, the cron-aas use-case, got it
20:35:07 <randallburt> zaneb:  agreed there as well
20:35:10 <rakhmerov> ok, I see :) So getting back to quetion #2
20:35:30 <zaneb> some people were convinced though, so they should speak up ;)
20:35:31 <SpamapS> wirehead_: wouldn't you just use mistral, and schedule a workflow which pokes more servers into your autoscaling group?
20:35:49 <rakhmerov> do you think it makes sense to discuss this topic separately in more detail?
20:35:52 <SpamapS> wirehead_: Heat wouldn't need any special Mistral sauce for that use case.
20:36:09 <rakhmerov> mostly, I'd like to get more details on how exactly you see using taskflow/mistral
20:36:30 <zaneb> rakhmerov: so there's 3 possible integration points
20:36:32 <SpamapS> zaneb: My thinking about using Mistral was that we'd move to a distributed workflow model instead of having the engines be directly responsible for managing the workflow.
20:36:49 <rakhmerov> the point is: I am not sure everyone clearly understands the key differences between taskflow and mistral
20:36:52 <SpamapS> zaneb: I'm now convinced that we should do away with the large scale workflows, which would mean Mistral would be overkill. :)
20:37:02 <zaneb> #1 is what we discussed just now. I think we're leaning toward taskflow for that
20:37:27 <zaneb> #2 is implementing Mistral resources in Heat. I think we would probably do that
20:37:33 <shardy> rakhmerov: I think that is true, I certainly don't
20:38:32 <rakhmerov> ok, so given that we don't have much time right now (not a suitable format) I'd suggest we have a separate discussion/meeting about that.
20:38:41 <radix> I want to see some Mistral resources in heat, but for pretty boring reasons (just to expose scheduling stuff)
20:39:15 <shardy> rakhmerov: maybe a ML post with a summary and links to most appropriate docs?
20:39:16 <zaneb> #3 is integration points for external workflows within the Heat workflow
20:39:24 <rakhmerov> we finally got a lot implemented and are about to share the results with the community, so we can explain in detail
20:39:28 <SpamapS> rakhmerov: I think #1 is dependent on the outcome of our design sessions on the convergence shift, and so can't really be discussed seriously right now. #2 is a no brainer.. we should have resources for all OpenStack API's.
20:39:49 <shardy> rakhmerov: I think most folks understand the goals in a broad sense, but not the details, current status or roadmap
20:40:07 <rakhmerov> SpamapS: agree that it can't be discussed seriously now
20:40:22 <SpamapS> zaneb: I think #3 is something we can do so generically that Mistral doesn't need special sauce.
20:40:38 <rakhmerov> ok, so let me use ML to plan further activities on that
20:40:59 <rakhmerov> so we decide where and how Mistral might be useful for you
20:41:29 <zaneb> SpamapS: agree, but there's also the question of making a higher-level tool like Murano that combines declarative (HOT) and workflow (mistral) elements of an application
20:41:54 <SpamapS> zaneb: Let's call that one Sauron.
20:41:55 <zaneb> rakhmerov: +1, mailing list is the best place to have that discussion imo
20:42:32 <zaneb> all right, I think we're good to move on?
20:42:39 <SpamapS> yesplz
20:42:43 <rakhmerov> zaneb: ok, sure. Just wanted to have a brief "live contact" with you guys and see if I can help with this
20:43:04 <zaneb> rakhmerov: np, thanks for coming along :)
20:43:08 <zaneb> #topic Pending heat-templates patches for software config
20:43:16 <zaneb> stevebaker: I assume this is you
20:43:18 <tspatzier> so i added this one
20:43:22 <zaneb> oh, ok
20:43:46 <tspatzier> I realised that there are a lot of good changes pending in heat-templates for the software config stuff
20:43:56 <tspatzier> all gated basically by one change: https://review.openstack.org/#/c/79758
20:44:11 <shardy> I reviewed all of these IIRC and my comments have not been addressed
20:44:12 <stevebaker> I need to refresh those changes based on feedback, but once that is done some reviews would be good, since software-config doesn't do much without them ;)
20:44:19 <shardy> most of them look good to go
20:44:41 <stevebaker> I've been distracted by other things
20:45:14 <tspatzier> stevebaker: yeah, that's what I thought. If you like I can take this one and post a new patch set. but wanted to talk to you first.
20:45:41 <tspatzier> This one and then the next dependent one ... and after that a lot of stuff can land. most of it already approved.
20:46:44 <stevebaker> tspatzier: I'll let you know. If I don't get to them today I won't be able to until Monday
20:46:52 <zaneb> tspatzier: be aware that you'd probably have to resubmit all of them
20:46:59 <bgorski> has anyone tested these patches according to the readmes? I was trying to do that and I had some issues with pulling the deployments list
20:47:02 <zaneb> otherwise they will have outdated dependencies
20:47:19 <tspatzier> zaneb: good to know
20:47:51 <stevebaker> also there have been some fixes in os-*-config which needed to be released. Images you build won't work without those fixes
20:48:01 <tspatzier> bgorski: I did test them. And they seem to work pretty well. Except for the one issue on the first in the chain.
20:48:07 <stevebaker> SpamapS: I haven't checked if you did that yet
20:48:21 <SpamapS> stevebaker: _just_ now
20:48:26 <stevebaker> \o/
20:48:49 <stevebaker> tspatzier: I'll ping you if a handover is needed
20:48:51 <SpamapS> stevebaker: it was 5 minutes ago, so it might even be on pypi
20:49:05 <zaneb> #topic Free-for-all
20:49:09 <SpamapS> https://pypi.python.org/pypi/os-collect-config/0.1.16 indeed it is
20:49:40 <tspatzier> stevebaker: sounds good. just wanted to make sure we get all this really great stuff in which is depending on the first patch.
20:50:35 <zaneb> I'm not opposed to wrapping up 10 minutes early, for the record ;)
20:51:01 <shardy> If anyone wants to take a look, we have a nasty VolumeAttachment regression:
20:51:04 <wirehead_> zaneb: well, we could look at lolcats in lieu of wrapping up 10 minutes early.
20:51:05 <shardy> https://review.openstack.org/#/c/89796/
20:51:51 <shardy> evidently the cinder volume attachment API doesn't actually, uh, attach the volume
20:52:00 <stevebaker> shardy: any evidence that doesn't re-introduce the race?
20:52:09 <shardy> just FYI if anyone feels like taking a look
20:52:28 <shardy> stevebaker: I'm not 100% sure but I tested locally and it worked for attach and detach/delete
20:52:47 <pas-ha> shardy: looks like this depends on the actual cinder driver
20:52:48 <shardy> stevebaker: the commit message *claims* it's considered the race, but yes that is a concern
20:52:55 <stevebaker> shardy: ok, could you reproduce the race before?
20:53:07 <shardy> reviews welcome, but volume attachments just silently break on current master
20:53:14 <shardy> so would be good to fix :)
20:53:32 <shardy> FYI I'm working on a tempest test which would catch any future regression like this
20:53:33 * SpamapS eyes tempest
20:53:39 * pas-ha facepalms himself
20:53:45 <randallburt> shardy:  odd,  do you know how long ago the original change landed?
20:53:56 <shardy> SpamapS: patch should be up for review tomorrow ;)
20:54:01 <SpamapS> shardy: my hero!
20:54:22 <shardy> I thought my test was broken, then I realized it was *actually* broken
20:54:52 <lifeless> SpamapS: have we talked the scaling bug ?
20:54:54 <wirehead_> <3
20:55:03 <shardy> randallburt: https://review.openstack.org/#/c/86638/
20:55:08 <lifeless> SpamapS: cause stevebaker's patch set didn't make enough difference
20:55:22 <shardy> that broke it two days ago AFAICT
20:55:35 <lifeless> stevebaker: do you think running multiple engines would give enough headroom? We're testing on 24 core machines ....
20:55:43 <randallburt> shardy:  cool, got it. I wondered why we weren't seeing it on our cloud. We're a few days behind master so we missed the breakage
20:56:05 <shardy> randallburt: ah, cool
20:56:57 <stevebaker> lifeless: that might help. I've since added to the patchset but am still working on an optimisation which would make the biggest difference to the number of queries per stack load
20:57:45 <stevebaker> randallburt: because you don't have provisioned servers polling heat every 30 seconds, because you have no mechanism for those servers to authenticate ;)
20:59:18 <lifeless> stevebaker: so rackspace have no dynamic config via heat?
20:59:26 <lifeless> stevebaker: that seems to rather devalue it :)
20:59:42 <zaneb> stevebaker: randallburt was talking about the cinder bug
20:59:47 <lifeless> oh!
20:59:58 <lifeless> but actually that was a question we wanted to ask
20:59:59 <stevebaker> lifeless: currently rackspace chef config is triggered by sshing *from* heat-engine
21:00:12 <lifeless> has rackspace seen the same performance/scale concerns we have?
21:00:25 <zaneb> time's up
21:00:26 <lifeless> stevebaker: to end user instances?
21:00:34 <zaneb> -> #heat
21:00:39 <zaneb> #endmeeting