20:01:07 #startmeeting heat
20:01:08 Meeting started Wed Mar 20 20:01:07 2013 UTC. The chair is sdake_. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:12 The meeting name has been set to 'heat'
20:01:48 #topic rollcall
20:01:58 hidey ho
20:02:01 o hai!
20:02:03 o/
20:02:07 o/
20:02:10 hi
20:02:41 spamaps around?
20:03:21 sdake_: I got an email from him a few minutes ago -- he should be on his way
20:03:27 #topic summit session review
20:03:38 thanks oubiwann
20:03:50 sdake_: yes here
20:04:02 * SpamapS keeps getting distracted by his heat doing awesome things actually
20:04:04 we have 6 topics for summit for the Heat track
20:04:13 that's a good sign ;)
20:04:18 http://summit.openstack.org/cfp/topic/8
20:04:19 I'll add one more
20:04:29 we have 7 slots so sounds like we are full :)
20:04:40 it is open until the end of March
20:05:08 our schedule is 4/15 from 9:50 until 17:20
20:05:12 (8 is nova)
20:05:34 did you paste that link right?
20:05:36 asalkeld not on my ui ;)
20:05:59 VM Ensembles (Group Scheduling) - API
20:06:15 asalkeld: doesn't work for me either
20:06:17 odd
20:06:27 ok, well let's go through them individually then
20:06:40 #link http://summit.openstack.org/cfp/details/148
20:06:58 oo, tosca
20:07:15 looking for completeness, relevancy, whether it should be bounced to a different track
20:07:17 Interesting, who is "openiduser38"?
20:07:25 I saw the BP for this, seems to be driven by IBM?
20:07:52 I'd like to know if they're proposing resources to implement, or just saying the feature would be nice
20:07:58 The blueprint dev is Thomas Spatzier
20:08:11 I looked at the spec today and it's so complex I got a bit scared
20:08:37 looks like a snafu with the summit.openstack.org openid import
20:08:38 Ok that's cool. HP is invested in TOSCA... so it might be something that HP folks would be interested in contributing to as well.
20:08:39 well maybe he can explain it at summit
20:09:06 #action sdake to track down tosca openid session lead corrected in summit.openstack.org
20:09:14 here is my take
20:09:16 worth listening
20:09:22 asalkeld: sure, I just wondered if there was IRC discussion I missed while on PTO
20:09:27 rather than passing judgment now
20:09:38 shardy new material
20:09:52 for those folks attending summit have a look at the tosca spec
20:10:15 Tosca could be interesting, or could be painful, we just don't know yet
20:10:15 make sure you have a very large coffee ready ;)
20:10:25 sdake_: right, I don't think it would be all that disruptive to Heat as a whole too... so it's worth hearing from people who are willing to step up to doing it.
20:10:28 ya don't forget your monster energy drink
20:10:47 hah
20:11:20 If anything, it would lay groundwork for pluggable parsers, which is probably a good thing.
20:11:29 so I'll sort out that problem and put this on the short list of things since it seems like it could have a big impact on heat for users and devs as well
20:11:30 SpamapS: +1
20:11:46 that may be a relatively big refactoring exercise tho
20:12:10 well I'd prefer not to rewrite heat to bring in tosca
20:12:18 these are all questions we can address at summit
20:12:26 right, but it may be refactoring heat to allow others to plug in tosca/camp/juju/etc.
20:12:36 yea
20:12:45 #link http://summit.openstack.org/cfp/details/136
20:13:15 appears relevant
20:13:52 I would challenge the assumption that converting AWS::CloudFormation::Init to cloud-init would be less code than doing the few basic operations it is capable of.
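[Editor's note] The challenge above can be made concrete. The following is a minimal, hypothetical sketch (not Heat code) of what translating the basic operations of an AWS::CloudFormation::Init config block into cloud-init's cloud-config keys might look like; it shows both why the translation looks small and where it breaks down (versions, commands, service ordering have no clean equivalent):

```python
# Hypothetical sketch: map the simplest cfn-init "config" keys
# (packages, files, services) onto cloud-init cloud-config keys.
# cfn-init features like sources, commands with env/cwd, and
# service dependency ordering have no direct equivalent here.

def cfn_init_to_cloud_config(config):
    """Map a cfn-init config dict to a cloud-config dict (partial)."""
    cc = {}
    pkgs = config.get("packages", {})
    if "yum" in pkgs or "apt" in pkgs:
        # cloud-init takes a flat package list; pinned versions are dropped
        cc["packages"] = sorted(set(pkgs.get("yum", {})) |
                                set(pkgs.get("apt", {})))
    files = config.get("files", {})
    if files:
        cc["write_files"] = [
            {"path": path,
             "content": spec.get("content", ""),
             "permissions": spec.get("mode", "0644")}
            for path, spec in sorted(files.items())
        ]
    services = config.get("services", {}).get("sysvinit", {})
    enabled = [n for n, s in sorted(services.items()) if s.get("enabled")]
    if enabled:
        # no first-class "enable service" key; shell out instead
        cc["runcmd"] = [["chkconfig", name, "on"] for name in enabled]
    return cc
```

The untranslatable remainder is exactly the "preliminary analysis" asked for in the discussion that follows.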
20:13:56 asalkeld if you could put together some thoughts in etherpad.openstack.org that might get us rolling
20:14:07 sure
20:14:21 Would like to see some preliminary analysis of the features in cloud-init as compared to cfn-init's features
20:14:24 cool, well challenge it at the summit ;)
20:14:37 asalkeld: interesting - atm cloud-init has no capability to read CloudFormation resource metadata AFAIK?
20:14:44 so it can't replace cfn-hup?
20:14:54 shardy: that would be relatively easy actually
20:15:03 It can only read ec2 metadata IIRC
20:15:22 the problem is how heavy-handed it is, as it expects to be the initializer of the system, not the ongoing configurator.
20:15:41 SpamapS: Sure, I'm just pointing out that the capability does not yet exist in upstream cloud-init, and we don't know if they would be willing to merge it
20:15:56 shardy the capability is in 0.7
20:16:06 but for cfn-hup, requires more investigation
20:16:10 anyway, would like to see the analysis, I won't reject the session outright because I have not done the comparison.
20:16:20 sdake_: what capability is in 0.7?
20:16:29 everything init needs
20:16:39 It can't read resource metadata via DescribeStackResource
20:17:05 shardy you can put whatever you want in the metadata (something cloud-init can expect)
20:17:14 right that part is really, really easy (adding your own cloud data source).
20:17:48 I think we are busy doing the session?
20:17:57 * sdake_ points at asalkeld
20:17:58 asalkeld: but the instance metadata is not the same as the *resource* metadata, which is what cfn-hup reads, in both heat and CFN, but I suppose we can change that
20:18:14 My personal preference is for cfn-init to stay as-is, and I think the effort to convert to cloud-init will be quite large compared with keeping it as-is. But again... worth somebody looking into it if they are interested in doing that conversion work. :)
20:18:45 #action asalkeld to write up convert cfn config into etherpad for further discussion at summit
20:18:53 sure
20:19:06 #link http://summit.openstack.org/cfp/details/44
20:19:28 looks relevant
20:19:49 one thing I'd like thought about spamaps is the difference between scheduling with coroutines vs threads
20:20:05 I think we've all agreed, this is doable in the near term. We should bring our ideas and some thoughts on how difficult they will be to implement.
20:20:07 spamaps can you make an etherpad on the subject
20:20:35 sdake_: sure thing. Not entirely sure where to go to do that.
20:21:20 #link https://etherpad.openstack.org/heat-concurrent-resource-scheduling
20:21:31 ah cool thanks :)
20:21:47 asalkeld if you would follow the same convention might find that helpful
20:21:55 sure
20:22:07 #link http://summit.openstack.org/cfp/edit/86
20:22:53 http://summit.openstack.org/cfp/details/86
20:23:04 edit gives me forbidden?
20:23:05 horrible system
20:23:13 are you logged in?
20:23:23 I believe only the owner can do edit
20:23:33 ya shardy made the session
20:23:41 oh wrong link ..
20:23:43 sec
20:23:45 I am logged in, and I created this session, so not sure what the problem is
20:24:01 #link http://summit.openstack.org/cfp/details/88
20:24:07 let's try that one :)
20:24:38 I've raised blueprints for the aws resources that are backed by openstack resources
20:24:41 maybe merge with the tosca one
20:24:52 no please do not merge w/ the tosca
20:25:07 link cut and paste failing me today..
20:25:20 #link http://summit.openstack.org/cfp/details/86
20:25:26 heat credentials management
20:25:29 this is a critical item and I want to make sure we are all fully understood on the end goal, and the forces driving it.
20:25:39 spamaps let's address that next.. :)
20:25:46 yes back to 86 :)
20:26:17 So I'm hoping to get some input from all on the way forward with this, and in particular get some keystone guys involved
20:26:19 +1 for 86, I have some ideas and would love to share and hear where others think we are
20:26:29 shardy would you fill out an etherpad on the subject
20:26:29 as I'm pretty sure we still need more new keystone features
20:26:30 re 86, if we move to trusts, would that cause issues if we also replace heat-cfntools with aws-cfn-bootstrap?
20:27:00 stevebaker: That's what I'm referring to, we need the ability to create ec2 keypairs from trust tokens
20:27:00 not sure
20:27:03 keystone needs a "sudo" but not sure best how to handle that
20:27:08 which keystone cannot currently do
20:27:16 howdy zaneb
20:27:25 trusts == sudo
20:27:25 hey, sorry
20:27:25 just going through summit sessions now
20:27:39 so the answer is, yes it would cause issues, so we can't use trusts in-instance until that is figured out
20:27:41 lost track of the time
20:27:59 shardy mind filling in an etherpad on the topic to kick things off?
20:28:13 sdake_: sure will do (be tomorrow now)
20:28:33 #link https://etherpad.openstack.org/heat-credentials-management
20:28:48 #action shardy to fill in heat credentials management etherpad
20:29:11 #link http://summit.openstack.org/cfp/details/88
20:29:38 Isn't this just a resource naming issue?
20:29:39 abstracting aws out of heat
20:30:03 i am not entirely convinced - some openstack resources may have unique properties that aren't handled
20:30:04 could these blueprints be attached to the session? https://blueprints.launchpad.net/heat/+spec/native-cinder-volume https://blueprints.launchpad.net/heat/+spec/native-nova-instance
20:30:07 We previously discussed having a config file with name mappings, so the code doesn't need to have AWS resource names in it?
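[Editor's note] The name-mapping idea raised above (a config file so that AWS resource names never appear in the code) could look like a small alias registry. A hypothetical sketch, not actual Heat code; the type names used are illustrative:

```python
# Hypothetical sketch of the mapping idea discussed above: native
# resource classes are registered once under native names, and
# AWS-compatible type names are pure aliases loaded from config,
# so no AWS string needs to appear in the resource code itself.

class ResourceRegistry(object):
    def __init__(self):
        self._classes = {}   # native type name -> resource class
        self._aliases = {}   # alias type name  -> native type name

    def register(self, type_name, cls):
        self._classes[type_name] = cls

    def load_aliases(self, mapping):
        """mapping, e.g. {"AWS::EC2::Instance": "OS::Nova::Server"},
        would be read from a deployer-editable config file."""
        self._aliases.update(mapping)

    def resolve(self, type_name):
        native = self._aliases.get(type_name, type_name)
        return self._classes[native]
```

As noted in the discussion, this handles naming only; AWS-specific property schemas would still need per-resource handling.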
20:30:34 http://summit.openstack.org/cfp/details/171
20:30:44 just added that one ^
20:30:56 still need to fill it in a bit
20:30:58 going back to 88 ...
20:30:59 that still leaves properties schema that are aws specific. I'd rather see native resources that are thin wrappers over openstack APIs
20:31:03 It's not just resource naming
20:31:05 it is wikis
20:31:05 seems like people are interested to discuss, so definitely recommend a session at summit on the topic
20:31:06 and documentation
20:31:25 yes it impacts everything we do as a complete project in openstack
20:31:40 The concern is simple. OpenStack consumers do not want to drive users of Heat and OpenStack to AWS CloudFormation which has far more capabilities.
20:31:44 as an aside, I have heard significant feedback that there is interest in this particular issue
20:32:09 stevebaker: but still have the AWS resource types, subclassed from the native ones I guess?
20:32:09 spamaps could you fill out an etherpad on the topic?
20:32:26 sdake_: yes I'll create one now
20:32:43 https://etherpad.openstack.org/heat-abstracting-aws-out
20:33:10 #action spamaps to fill out abstracting aws out of heat to kick off discussion
20:33:38 #link http://summit.openstack.org/cfp/details/78
20:33:39 SpamapS: who are OpenStack "consumers" in this context?
20:33:45 Rolling updates and instance specific metadata
20:35:28 seems fine
20:35:42 ya bit of two topics in one but seems ok
20:35:49 http://summit.openstack.org/cfp/details/172
20:35:54 https://blueprints.launchpad.net/heat/+spec/heat-autoscaling
20:36:01 zaneb: deployers would have been a better word :)
20:36:08 oh, snap -- sorry guys
20:36:10 wrong window
20:36:18 ninja topic ;)
20:36:37 The reason rolling updates and instance specific metadata are together is that instance specific metadata is needed for rolling updates to work.
20:36:38 totally :-/
20:36:54 i see
20:37:11 I want to get consensus on the need for both, at the same time.
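[Editor's note] The rolling-update idea above can be sketched to show why it implies per-batch (hence instance-specific) handling. This is a hypothetical illustration of the batching logic only; all names are made up and no Heat internals are implied:

```python
# Hypothetical sketch of a rolling update: replace the members of
# an instance group a batch at a time, so that only batch_size
# members are ever out of service at once.

def rolling_update(instances, replace_one, batch_size=1):
    """Replace every instance, at most batch_size at a time.

    replace_one(instance) -> new instance, called per member.
    Returns the new membership list in the original order.
    """
    updated = []
    for start in range(0, len(instances), batch_size):
        batch = instances[start:start + batch_size]
        # a real engine would replace each batch concurrently and
        # health-check it before starting the next batch; it needs
        # per-instance metadata to know which members are in-flight
        updated.extend(replace_one(inst) for inst in batch)
    return updated
```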
20:37:29 SpamapS, you might want to mention that it is instance metadata for instancegroups
20:37:36 ok well same drill - same format re etherpad
20:37:56 oubiwann: autoscaling has been in Heat since... forever?
20:38:02 asalkeld: right.
20:38:45 #action spamaps to create etherpad for specific metadata for rolling updates
20:39:07 ok looks like we had some late entrants - let's review those real quick:
20:39:23 #link http://summit.openstack.org/cfp/details/171
20:39:45 definitely need to solve this problem
20:39:57 asalkeld mind putting together an etherpad
20:40:04 (added 5 mins ago - so a bit lite)
20:40:06 sure
20:40:17 not late, schedule ends on the 30th, but late for the agenda ;)
20:40:25 (lite)
20:40:42 I think that's one where people should come with ideas in hand.
20:40:48 #action asalkeld to create etherpad for multiple heat engines in one openstack deployment
20:41:02 spamaps we have our original architectural ideas
20:41:07 those either need validating or reworking
20:41:08 all sessions should be with "ideas in hand"
20:41:10 I won't be there, so I'll dump my idea in the etherpad
20:41:15 sdake_: cool
20:41:26 stevebaker which session type?
20:41:44 multiple heat-engines
20:41:51 cool
20:41:57 #link http://summit.openstack.org/cfp/details/172
20:42:00 ok last one, :)
20:42:04 no it's not
20:42:15 oubiwann: ^^
20:42:21 I think we have all of that?
20:42:25 I spoke with oubiwann at Pycon briefly about this
20:42:26 oubiwann we have that
20:42:31 I think the idea is to have it without heat
20:42:39 lol
20:42:41 yes, that is next on the agenda
20:42:43 "Autoscaling for Heat"
20:42:49 "Proposed by oubiwann in topic Heat"
20:43:11 oubiwann perhaps a better subject would be "decomposing autoscaling from heat"
20:43:15 he got a call, let me go track him down
20:43:17 I added it to the meeting agenda because I wasn't sure we'd get a submission during session review :)
20:43:35 I can answer questiongs regarding oubiwann's autoscaling project
20:43:39 so you want a new project?
20:43:50 questions* even.
20:43:55 I think the question is, why would this be outside of Heat as a project?
20:44:17 #topic autoscaling decomposition
20:44:19 I understand that it might be its own API and not want to drag heat's template language along.
20:44:22 (this was in the agenda btw)
20:44:37 in AWS, Autoscaling is a feature of EC2
20:44:39 But that could still live inside heat as a project and be quite happy.
20:44:49 so arguably it should be provided by Nova
20:44:52 zaneb autoscale is a separate api in aws
20:44:54 SpamapS, you need the launchconfig
20:45:03 SpamapS: what do we gain by doing this though?
20:45:06 oh, my bad
20:45:14 The launch config for AS is a nova launch config.
20:45:20 shardy: I don't know, that's why oubiwann is proposing. :)
20:45:22 with a load balancer
20:45:40 We could implement the API and leave the AS logic in the engine if people want the AWS separate-API for AS
20:45:59 To me, it's declarative vs. imperative all over again.
20:46:06 the rationale behind this seems to be autoscaling as a unique service, without the orchestration features which heat provides
20:46:23 little history
20:46:28 And no matter how awesome your declarative API is (heat templates), people will want an imperative way to operate it.
20:46:37 well cloud watch is an issue
20:46:49 when we started heat, we had two huge dependencies that we needed solving - 1 was cloudwatch 2 was autoscaling
20:46:56 we merged them into one code base
20:47:01 we need monitoring (been added to ceilometer)
20:47:13 SpamapS: sounds like you're proposing a new service for openstack (which heat could use instead of an internal implementation)
20:47:17 a tad early for this imo
20:47:18 *I* am not
20:47:30 The API we have for Autoscaling entirely abstracts monitoring out of the system.
20:47:32 or a new service in heat
20:47:33 oubiwann is proposing
20:47:34 but we don't have the monitoring infrastructure etc as asalkeld points out
20:47:37 I am merely introducing oubiwann to you guys :)
20:48:06 SpamapS: ok, noted, sorry ;)
20:48:11 fsargent, but you still need an implementation there
20:48:13 Since oubiwann is busy, can we move on and come back, or is this the last item we have for today?
20:48:13 fsargent, oubiwann will you both be at summit?
20:48:18 The Autoscaling API will set up webhooks per policy, that monitoring will hit.
20:48:20 #link http://summit.openstack.org/cfp/details/90
20:48:27 sdake_: Yup.
20:48:44 stevebaker that is a horizon topic
20:48:54 here is my take
20:48:55 but it's heat related
20:49:06 fsargent: It sounds like you guys already did this, without looking at whether or not it was doable in Heat itself.
20:49:10 which, IMO, it is.
20:49:16 worth having a design session, willing to put it into the heat track - since atm heat does the autoscaling around here ;)
20:49:25 If it's already done, where is the code?
20:50:13 maybe someone is finding a link?
20:50:14 We're working on something.
20:50:28 on github?
20:50:41 Yes, but it's in a private repo currently.
20:50:46 asalkeld: +1 :)
20:50:46 urg
20:50:49 fsargent: Having a summit discussion about your yet-to-be-unveiled solution will not be worthwhile IMHO
20:51:14 Understood.
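[Editor's note] The webhook-per-policy design mentioned above ("The Autoscaling API will set up webhooks per policy, that monitoring will hit") can be sketched in a few lines. This is a hypothetical illustration of the decoupling, with made-up names and URL scheme, not code from any of the projects discussed:

```python
# Hypothetical sketch of webhook-per-policy autoscaling: each
# scaling policy owns an opaque webhook URL; whichever monitoring
# system fires the webhook needs no knowledge of scaling at all.

import uuid


class Group(object):
    """Stand-in for an instance group with a current size."""
    def __init__(self, size):
        self.size = size


class ScalingPolicy(object):
    def __init__(self, group, adjustment):
        self.group = group            # group to resize
        self.adjustment = adjustment  # e.g. +1 to scale up, -1 down
        self.webhook_id = uuid.uuid4().hex  # opaque capability token

    def url(self, base="https://autoscale.example.com"):
        # the only thing monitoring ever sees
        return "%s/v1/webhooks/%s" % (base, self.webhook_id)

    def execute(self):
        # called when monitoring POSTs to the webhook URL
        self.group.size = max(0, self.group.size + self.adjustment)
```

Because the contract is just "hit this URL", the monitoring side could be CloudWatch, Ceilometer, or anything else, which is the abstraction claimed in the discussion.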
20:51:19 yea don't be afraid - make it public
20:51:33 we can provide feedback
20:51:44 Or, don't, and just plug the same API into heat.
20:52:17 multiple projects does make deployment more of a pain
20:52:41 our job isn't to worry about deployment, our job is to worry about making openstack spectacular and well decomposed
20:53:03 well, my job is to worry about deployment, but my job in heat isn't .. ;)
20:53:11 precisely ;)
20:53:22 but if someone wants autoscale but not heat, then this might make it easier
20:53:34 I won't go on about how we're intending to submit autoscaling, but it is modular and cross functional.
20:53:41 that said, IMO, heat is the place to do this, as it is the thing that operates on the other services from an automation point of view.
20:53:50 heat can call it, and it'll run everything in that environment, or outside of it.
20:54:16 and autoscaling, cloudwatch, etc, can just be implemented as api calls.
20:54:18 yes the feature sounds interesting - but tbh the approach is wrong - ping me after the meeting for some advice
20:54:27 Will do sdake_ thanks.
20:54:31 let's move on
20:54:33 6 min
20:54:42 well wanted to get through some blueprints today
20:54:55 but we can hardly get started in our time allotted
20:54:59 heh, not a chance
20:55:04 so I'll switch to open topics at the moment
20:55:09 #topic open topics
20:55:35 well done on rc1 everyone
20:55:45 stevebaker took my line ;)
20:55:49 \o/
20:56:04 So, first off, everyone is doing a spectacular job
20:56:15 we have had a big impact in OpenStack over the last year
20:56:35 We started in March 2012 - look what we accomplished - full integration with a great feature set that makes OpenStack better
20:56:56 high high performance team - keep up the good work ;)
20:57:37 ^5 to all, humbled to have joined such a fine tribe. :)
20:57:38 SpamapS: Error: "5" is not a valid command.
20:57:48 ^help
20:57:50 sdake_: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
20:58:00 have to probe that later ;)
20:58:17 ok thanks all
20:58:18 anything else?
20:58:24 nope
20:58:43 one last thing
20:58:48 if you're presenting a session
20:58:57 please send me the days you have booked in your schedule for Monday
20:59:03 rather, times on Monday
20:59:11 so that I don't double-book a session
20:59:29 thanks!
20:59:31 #endmeeting