20:00:24 #startmeeting heat
20:00:25 Meeting started Wed Aug 14 20:00:24 2013 UTC and is due to finish in 60 minutes. The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:28 The meeting name has been set to 'heat'
20:00:35 #topic rollcall
20:00:38 hello hello
20:00:39 o/
20:00:41 o/
20:01:03 o/
20:01:07 howdy y'all
20:01:20 \o
20:01:22 Hello
20:01:34 hello
20:02:09 Hi
20:02:18 \o
20:02:21 hi
20:02:27 hi
20:02:38 SpamapS, therve around?
20:03:02 Ok hi all, let's get started
20:03:11 I think therve's on vacation
20:03:25 radix: Ok, cool, thanks
20:03:43 #topic Review last week's actions
20:03:45 tomorrow is a holiday in most of Europe
20:04:01 #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-08-07-20.00.html
20:04:16 hi all
20:04:20 zaneb: it is? doh, not in the UK! ;(
20:04:35 Don't see any actions, anyone have anything to raise from last week?
20:04:36 shardy: only in the Catholic parts ;)
20:04:45 shardy: yes, mission statement
20:05:04 are we still busy with that?
20:05:06 #link https://etherpad.openstack.org/heat-mission
20:05:11 wow, overcooking it
20:05:24 imo doesn't need to be this complicated ;)
20:05:27 asalkeld: yeah, agreed
20:06:25 zaneb: what did you want to raise, other than we still need to put it somewhere?
20:06:30 not suggesting that we need to discuss it again here, just that it's time to post it
20:06:38 huh. I think I lagged out for a bit
20:07:02 zaneb: OK, I was going to previously but the discussion was still in progress, so I left it, and I was out last week
20:07:09 #action shardy to post mission statement
20:07:17 stick a fork in it, it's done
20:07:29 Ok, anything else from last week?
20:08:04 nope
20:08:05 I was wondering about the status of the "heat config" bug, but I can figure that out in #heat later
20:08:15 #topic Reminder re Havana_Release_Schedule FeatureProposalFreeze
20:08:42 So the feature proposal freeze was agreed as Aug 23
20:08:47 #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
20:09:10 So we've all got just over a week to get our stuff posted for review, then a couple of weeks to get it reviewed and merged
20:09:20 I'm waiting on various tempest/devstack/dib things, so unless somebody wants a hand with something I might take a look at Horizon
20:09:55 Anything which won't be posted by the 23rd will be bumped to Icehouse, so speak now if that's likely ;)
20:10:19 which brings us to:
20:10:26 #topic h3 blueprint status
20:10:27 when you say posted, do you mean patches or BPs?
20:10:50 #link https://launchpad.net/heat/+milestone/havana-3
20:11:39 asalkeld: It's patches that must be posted AIUI, as in propose the change
20:11:40 I'll try to keep up on reviews a bit every day
20:11:49 way too late for new BPs now IMO
20:13:19 we've got 28 BPs, about half done - I'm hoping heat-trusts will land but it's gated on keystoneclient
20:13:29 sdake: will the nova native resource BP land?
20:13:46 hopefully - been busy with openshifty stuff
20:13:48 #link https://blueprints.launchpad.net/heat/+spec/native-nova-instance
20:14:08 sdake_: I'm happy to help with that if you like
20:14:12 #link https://blueprints.launchpad.net/heat/+spec/heat-trusts
20:14:29 Yeah, it'd be best if that doesn't get deferred IMO
20:14:30 stevebaker: atm i have to abandon my openshift work to make the 23rd deadline for the above blueprint
20:14:38 i'll point you at my wip for rhel dib support
20:14:56 sdake: Happy to reassign to stevebaker then?
20:14:58 not so interested in that ;)
20:15:00 i've got a few patches merged already
20:15:09 either way works for me
20:15:18 what do you prefer, stevebaker?
20:15:23 both need to get done soon
20:15:33 i wouldn't mind working on native instance
20:15:44 ok, assign it to yourself then
20:15:50 ok
20:16:11 does anyone know what else needs to happen for https://blueprints.launchpad.net/heat/+spec/instance-resize-update-stack ? I see that it had a patchset merged
20:16:22 tspatzier: some stuff has landed for hot-specification, how much more is planned?
20:16:34 tspatzier: It's still "Started" atm
20:16:50 #link https://blueprints.launchpad.net/heat/+spec/hot-specification
20:17:51 radix: IIRC therve posted a patch and it's marked as Implemented, so done?
20:17:57 shardy: Seems like tspatzier is not here.. I will follow up with him
20:18:07 well, the bp isn't marked as complete. I guess I'll just ask him when he gets back
20:18:10 spzala: OK, thanks
20:18:14 shardy, sorry, was distracted for a moment
20:18:23 shardy: no problem!
20:18:26 does anyone need the rest api for this? https://blueprints.launchpad.net/heat/+spec/provider-upload
20:18:30 radix: It's marked as Implemented?
20:18:34 #link https://blueprints.launchpad.net/heat/+spec/instance-resize-update-stack
20:18:45 Completed by
20:18:46 Thomas Herve on 2013-08-06
20:18:46 oh.
20:18:56 shardy, so for hot-specification I guess there won't be much more in havana, since we will only document what is implemented.
20:18:58 shardy: ok, never mind, I assumed everything in the list that I was looking at was still open
20:19:07 tspatzier: oh, you are here :), cool.
20:19:27 asalkeld: I don't think we should have a ReST API for that
20:19:38 mark it as done?
20:19:48 tspatzier: Ok, if you're happy, please move it to Implemented so we know there aren't more patches pending
20:19:48 shardy, so if you are ok with the direction that hot-specification is only the initial sub-set of HOT we did in havana, we could close it
20:19:57 sure, will do
20:20:15 tspatzier: Ok, sounds good
20:20:34 asalkeld: it would be cool if we had the thing that will allow the client to automatically stick relevant files in the files section... dunno if that should go in that blueprint
20:21:11 that already happens for the env registry stuff
20:21:19 asalkeld: Yeah, IIRC the API thing was a historical idea?
20:21:21 zaneb, yeah maybe I can do that - just forever reworking cw stuff
20:22:01 Anyone else need work, or need to get rid of work for h3, before we move on?
20:22:28 stevebaker: this stuff: http://lists.openstack.org/pipermail/openstack-dev/2013-May/009551.html
20:23:43 ja?
20:23:45 ah
20:23:49 #topic Open discussion
20:24:06 Anyone have anything to mention?
20:24:16 stevebaker: genau
20:24:24 shardy: probably need to chat about Policy now returning a url
20:24:48 I chatted to zaneb about it and that seemed the best solution
20:25:08 https://review.openstack.org/#/c/41855/
20:25:13 asalkeld: Yeah, so both you and therve have posted patches with compatibility stuff, which pollutes the AWS resources with native stuff, or vice-versa
20:25:20 are we happy this is the way to do?
20:25:24 s/do/go?
20:25:43 considering when we release it in Havana, we'll probably be stuck maintaining it..
20:25:50 the problem is when you compose the alarm with a template
20:26:08 you can't reference the policy from the nested stack
20:26:16 so the url solves that
20:26:45 asalkeld: Ok, and I guess because it's just the result of the Ref, it's not actually a template-level incompatibility with cfn
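To make asalkeld's point concrete: an alarm composed as a nested stack cannot use an intrinsic Ref to reach a ScalingPolicy defined in the parent stack, but a URL-valued Ref can be handed across the stack boundary as an ordinary string parameter. Below is a minimal sketch of that shape, as Python dicts standing in for CFN-style JSON; all resource and parameter names are hypothetical, and this illustrates the idea rather than the actual contents of review 41855:

```python
# Hypothetical CFN-style templates showing why a URL-valued Ref helps.
# The nested alarm stack cannot do {"Ref": "ScaleUpPolicy"} across the
# stack boundary, but it can accept the policy's URL as a string parameter.
parent_template = {
    "Resources": {
        "ScaleUpPolicy": {
            "Type": "AWS::AutoScaling::ScalingPolicy",
            "Properties": {
                "AutoScalingGroupName": {"Ref": "MyGroup"},
                "AdjustmentType": "ChangeInCapacity",
                "ScalingAdjustment": "1",
            },
        },
        "AlarmStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "http://example.com/alarm.template",
                # If Ref on the policy resolves to a webhook URL, it crosses
                # the nested-stack boundary as an opaque string.
                "Parameters": {"AlarmActionURL": {"Ref": "ScaleUpPolicy"}},
            },
        },
    },
}

nested_alarm_template = {
    "Parameters": {"AlarmActionURL": {"Type": "String"}},
    "Resources": {
        "CPUAlarm": {
            "Type": "AWS::CloudWatch::Alarm",
            "Properties": {
                "MetricName": "CPUUtilization",
                "ComparisonOperator": "GreaterThanThreshold",
                "Threshold": "50",
                # The alarm signals the policy via its URL, not via Ref.
                "AlarmActions": [{"Ref": "AlarmActionURL"}],
            },
        },
    },
}
```

This also matches shardy's observation above: because only the *result* of the Ref changes, templates themselves stay cfn-shaped.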
20:27:33 funzo had a question for asalkeld about alarms wrt the reworked cloudwatch stuff and openshift
20:27:33 If it's been discussed already then fair enough, I just saw it and wanted a sanity-check discussion ;)
20:27:45 I like the cw as a template, but if you guys are very against that I can make a plugin for the ceilometer CW
20:28:05 I don't hate this idea
20:28:26 scaling policy seems like one of the less important Refs to maintain consistency with
20:29:09 hopefully we will have a better autoscaling story soon
20:29:22 asalkeld: Where I was coming from is just that ideally we want to continue moving in a direction which decouples us from the AWSisms, rather than baking an unholy mixture of AWS-compatible and native resources ;)
20:29:34 yeah, whatever happened to the separate autoscaling api?
20:29:57 zaneb: gradually getting there :)
20:30:04 well, we need a native autoscaling resource
20:30:19 I'm also working vaguely around it, hence my work on InstanceGroup and tinkering with ResourceGroup
20:30:20 cool, radix getting there
20:30:24 sdake: is this the forum to discuss what openshift would like to be able to do with autoscaling?
20:30:42 asalkeld: Yeah, we need native all-the-things, but those should be clean native implementations, eventually with the AWS-compatible stuff built on top?
20:30:44 funzo: i think it's appropriate to ask if what you want can be serviced by asalkeld's new work
20:30:46 given the 7-week freeze I've decided to refocus a bit, hopefully we can get things rolling quickly after unfreeze
20:30:50 funzo, chat to radix
20:31:01 (offline)
20:31:06 asalkeld: ok
20:31:07 radix: cool :)
20:31:40 radix, we are all looking forward to the new autoscaling :)
20:31:44 yay :)
20:31:44 asalkeld: the issue is around alarms, not around autoscaling
20:31:55 regarding the multi-engine bug https://bugs.launchpad.net/heat/+bug/1211276, if anybody has feedback about https://bugs.launchpad.net/heat/+bug/1211276/comments/8 please let me know
20:32:05 Launchpad bug 1211276 in heat "can't cancel wedged stack-create" [High,Confirmed]
20:32:20 zaneb: wanna make sure we have all of your use cases so we don't take it in the wrong direction
20:32:42 asalkeld: i believe funzo wants to pass a parameter to an alarm
20:32:54 i put my recollection of the Portland channel discussion in there
20:32:56 funzo: want to give us a summary of your issue, or maybe ping a mail to the list where we can discuss it?
20:33:04 zaneb, asalkeld: we're probably going to implement Heat resources for Otter, which will give us a better idea of how the native autoscaling API will work
20:33:06 better than individual discussions IMO
20:33:19 shardy: sure
20:33:38 there was one other thing... let me think...
20:33:52 oh, right, InstanceGroup. we need a way to control it from the API
20:34:07 shardy: I wrote up a document of what I would like to be able to do from openshift here: https://github.com/openshift/openshift-pep/blob/master/openshift-pep-007.md
20:35:08 has there been any thought on the ability to patch subsections of the template? If we have an external autoscaling service that's controlling an InstanceGroup in Heat, we need to change the "Size" of the InstanceGroup from that autoscaling service
20:35:18 literally just that one property on that one resource
20:35:35 radix: You just do a stack update
20:35:45 shardy: radix: it would require being able to invoke a scale-up/scale-down using a tool from within the openshift infrastructure. that call would need to be able to pass parameters to specify user data
20:35:53 shardy: right, but download/change/update can lead to consistency errors
20:35:58 shardy: but then you have to send the whole template
20:36:00 I'll follow up with you guys offline
20:36:06 asalkeld: right, exactly.
20:36:27 radix: seems like a good change
20:36:29 what happens if two actors are doing fetch/change/update, too?
20:36:39 they can stomp on each other
20:36:48 funzo: thanks for the info, will read and we can have a followup discussion
20:37:28 in general IMO UpdateStack should be something that users do when they want to change something manually in their stack, but the size of an InstanceGroup has a different feel to it. is that not right?
20:37:38 I mean, InstanceGroup's "Size" property
20:37:45 asalkeld: I thought we'd had this discussion recently, where we decided that allowing per-resource stack creation (or update) was a bad-idea (tm)
20:37:48 but I can also imagine a PATCH for a stack
20:38:09 shardy: that's just an update
20:38:11 which would be isomorphic to stack-update
20:38:19 Ok, so just update, not a piecemeal create
20:38:24 right, exactly
20:38:31 I brought up "resource-create" a while ago and that was shot down
20:38:33 this is subtly different :)
20:38:38 so it's an update, but Heat does the remixing of the template itself
20:38:49 ?
20:38:53 zaneb: yes
20:39:11 imagine "replace *this* element in the template with *this* template snippet"
20:39:15 I'm not sure what real advantage it has, other than a tiny simplification to some dict mangling on the user side
20:39:24 scoping all the way down to individual properties
20:39:27 lots of data
20:39:28 shardy: well, like I said, consistency
20:39:35 lots of data too, but consistency moreso
20:40:01 if you have two actors downloading/changing/updating two different parts of the stack, they can stomp on each other
20:40:10 radix: It's much easier to keep a consistent stack definition if you have one template, rather than something in a random state after a bazillion resource updates over time
20:40:13 I don't think anything else in Heat is consistent
20:40:25 haha
20:40:29 if you kill your nova servers, heat won't know about it
20:40:35 solution: don't do that
20:40:39 sorted.
20:40:49 radix: use git for your templates
20:40:55 right now an error is raised if you try to update while a stack update is not complete, I believe
20:40:57 right, but this use case is intrinsically dynamic
20:41:06 remember, autoscale service controlling an InstanceGroup
20:41:15 this isn't something that a user is doing manually
20:41:33 radix: you are talking about read/modify/update... but why read?
20:41:39 well, patch is a well-known/used way of doing an update - don't see the problem with it
20:41:48 zaneb: how do you know what to send to update?
the AS service knows what outcome it wants, just generate the right template and update
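As a concrete picture of the read/modify/update cycle being debated here: below is a sketch of what an external service has to do today to change just the InstanceGroup's Size, written against Heat's v1 ReST endpoints (GET the stack template, PUT the whole stack back). The endpoint base, token, stack name, and resource name are all hypothetical placeholders, and the race radix describes is the window between the GET and the PUT:

```python
# A sketch (assumed names/auth; endpoint shapes per Heat's v1 ReST API) of
# what "download/change/update" means for an external autoscaling service.
import requests

HEAT = "http://heat.example.com:8004/v1/TENANT_ID"  # hypothetical endpoint
HEADERS = {"X-Auth-Token": "TOKEN"}                 # hypothetical token

def set_group_size(stack_name, stack_id, new_size):
    # 1. Read: fetch the current template for the *whole* stack.
    resp = requests.get(
        "%s/stacks/%s/%s/template" % (HEAT, stack_name, stack_id),
        headers=HEADERS)
    template = resp.json()

    # 2. Modify: change one property on one resource ("MyGroup" is a
    #    hypothetical InstanceGroup); everything else is resent unchanged.
    template["Resources"]["MyGroup"]["Properties"]["Size"] = str(new_size)

    # 3. Update: PUT the entire template back. Anything another actor
    #    changed in the stack between steps 1 and 3 is silently
    #    overwritten -- the "two actors stomping on each other" problem.
    requests.put("%s/stacks/%s/%s" % (HEAT, stack_name, stack_id),
                 headers=HEADERS,
                 json={"template": template, "parameters": {}})
```

A PATCH-style update (or a dedicated autoscaling API) would let the caller send only the new Size, avoiding both the full-template payload asalkeld objects to and the overwrite window.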
20:41:55 radix: but you're not changing the dynamic behaviour, just the definition used to perform it, or the limits
20:42:03 zaneb: oh, I think I haven't explained it properly
20:42:37 zaneb: so, look at InstanceGroup right now. all you have to do is change the "Size" Property and it adjusts the underlying nested stack as appropriate
20:42:40 radix: I think you're talking about having all of the state in Heat and none in AS, even though it's AS that's calling Heat
20:42:40 radix: maybe do a wiki page, and start a ML thread?
20:42:54 it's actually already there in the wiki page :)
20:43:18 link?
20:43:25 https://wiki.openstack.org/wiki/Heat/AutoScaling
20:43:34 #link https://wiki.openstack.org/wiki/Heat/AutoScaling
20:43:51 it doesn't actually propose the specifics of how AS controls the InstanceGroup, but it mentions that it needs to be solved
20:44:24 I can start a mailing list thread
20:45:22 radix: I've said this before, but I think we just need a policy resource which is generic and has hooks to call out to (or be signalled by) a policy-calculation service
20:45:49 i.e. something which sits between ceilometer and the InstanceGroup
20:45:59 ah, actually it says something about some webhook or whatever, but I think that can change :)
20:46:06 * stevebaker has to go
20:46:16 shardy: hm. I don't think I've ever heard the idea of a "policy resource"
20:46:24 I have heard the phrase "policy service" but it has never been clear to me
20:46:34 you mean like AWS::AutoScaling::ScalingPolicy?
20:46:52 well, you always need a resource for the template to use
20:46:54 I understand ScalingPolicy :)
20:47:41 I *guess* I can imagine the autoscaling service directly doing an UpdateStack on the *nested* stack
20:47:50 zaneb: Yeah, but the whole premise of this AS service thing seems to be that folks want to plug something other than that in
20:47:54 but so far I had been imagining it would just update the "Size" of the InstanceGroup in the *parent* stack
20:48:22 radix: ooooooh. that's crazy ;)
20:48:28 zaneb: which one?
20:48:35 parent stack
20:48:44 I'm still not that clear what this "autoscaling service" will actually do, which ceilometer and heat don't already do
20:48:51 "autoscaling service directly doing an UpdateStack on the *nested* stack" is what I was thinking
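For the parent-vs-nested distinction above: the user-facing (parent) template holds the InstanceGroup resource, and Heat expands it into a generated nested stack containing one instance per unit of Size. A rough sketch of the two layers, with hypothetical names and an approximated (not verbatim) generated template:

```python
# What the user writes in the parent stack: one group resource whose
# "Size" is the single knob an autoscaling service would want to turn.
parent_template = {
    "Resources": {
        "MyGroup": {
            "Type": "OS::Heat::InstanceGroup",
            "Properties": {
                "LaunchConfigurationName": {"Ref": "MyLaunchConfig"},
                "AvailabilityZones": ["nova"],
                "Size": "2",
            },
        },
    },
}

# Roughly what Heat generates behind MyGroup: a nested stack with one
# instance resource per unit of Size. Changing Size in the parent makes
# Heat remix this nested template; zaneb's alternative is for the AS
# service to do an UpdateStack on this nested stack directly.
generated_nested_template = {
    "Resources": {
        "MyGroup-0": {"Type": "AWS::EC2::Instance", "Properties": {}},
        "MyGroup-1": {"Type": "AWS::EC2::Instance", "Properties": {}},
    },
}
```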
20:48:54 shardy: a couple of things
20:49:13 shardy: cool new stuff
20:49:18 :)
20:49:28 shardy: first, it's just a place to plug in more types of policies. scheduling-based ones, integration with other monitoring systems, etc.
20:49:30 $shiny_stuff
20:49:32 like what funzo needs
20:49:44 shardy: it can also provide an API similar to the Amazon Auto Scaling API
20:49:45 shardy: it will be an actual resource that you can reference from anywhere (incl. a different template), rather than something buried behind a heat plugin
20:50:34 +1
20:50:43 zaneb: Having some new *resource* I understand, a whole new service/project, not so much atm
20:51:09 I don't care whether it's in a new service/project
20:51:31 I think it needs a separate endpoint so we're not precluded from deciding that it does need to be one later
20:51:48 shardy: the *main* reason to make it a separate service is to provide an isolated autoscale API, which indeed is not really that important to a Heat purist's goals. Otherwise all the other functionality (scheduling, webhook support for arbitrary other custom monitoring systems to use) can be built directly into Heat
20:51:56 zaneb: +1
20:52:02 well, people keep saying "autoscaling service", hence my question about what that service will actually do
20:52:10 scale
20:52:12 automatically.
20:52:27 shardy: autoscaling API is a better name than service
20:52:35 lifeless: i think the gap in communication is that heat already does scale automatically
20:52:37 agreed
20:52:43 (with zaneb)
20:52:47 sdake: I know, I was being /totally/ unhelpful :)
20:53:03 even though I've been saying "autoscaling service" :)
20:53:23 shardy: so imagine how CFN probably gets along with AWS AS. it's basically the same thing
20:53:42 when we have an API we can decide whether it makes sense for it to be a separate service. But we need an API IMO.
20:53:46 AS can be used separately, but there's also probably some glue in AS to play well with CFN
20:54:14 zaneb: Ok, cool, I guess I'd rather just see us make our existing stuff more flexible, e.g. native resources with more generic interfaces, but if an API is something people we need (rather than controlling via stack update, which can be done right now), then fair enough
20:54:37 s/people we/people think we/
20:55:17 Ok, let's follow up on the ML. anything else for the last 5 mins?
20:55:30 shardy, I'd guess that if people are using the as api, then they are not using heat
20:55:38 shardy: I will try to put together another mailing list post that just lays out the next design issue in a clear way, and puts it in the context of the whole expected design
20:55:44 (specifically, how the separate AS API will talk back to Heat to get it to do stuff)
20:55:47 but that is still a valid use case
20:55:49 asalkeld: why would they want to do that ;p
20:55:56 I know, I know, it's terrible :)
20:56:12 cos not everyone wants to use heat?
20:56:20 blasphemy
20:56:22 need to not divide developer resources
20:56:25 ;)
20:56:25 asalkeld: Yeah, joking ;)
20:56:58 more than that, they might want to e.g. use nested stacks and share launch configs or whatever across stacks
20:57:13 and right now they can't, because it has to be in the same stack as the group
20:57:49 zaneb: hmm, but don't we allow arbitrary inclusion of templates now anyway?
20:57:51 or will we?
20:57:55 we can fix that either by implementing a separate api, or by doing the lookup internally and trying not to screw up any of the security stuff
20:58:02 I know what I would vote for
20:58:08 yeah, that sounds confusing
20:58:19 o/ (sorry late.. had conflicting things)
20:58:31 SpamapS: lol, 2 mins left ;)
20:58:35 just in time for the end
20:58:44 SpamapS: you missed out on so much!
20:58:52 yeah
20:58:56 wanted to talk about the event table
20:59:00 but we can wait till next week
20:59:03 spamaps: I see how it is, heat is #2 in your book :)
20:59:15 sdake: #2? oh yeah, right.. number _two_
20:59:23 Ok, time's up, thanks all
20:59:31 #endmeeting