00:00:03 #startmeeting heat
00:00:04 Meeting started Thu Mar 6 00:00:03 2014 UTC and is due to finish in 60 minutes. The chair is stevebaker. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
00:00:07 The meeting name has been set to 'heat'
00:00:10 #topic rollcall
00:00:19 o/
00:00:37 o/
00:01:33 spzala: hey, mind if we add the template conversion tool to the agenda?
00:01:52 stevebaker: not at all :-)
00:01:56 o/
00:02:09 I'm awake
00:02:53 o/
00:02:54 hi
00:03:02 liang: hi!
00:03:46 stevebaker, also wouldn't mind gauging people's enthusiasm for an image building plugin
00:04:06 stevebaker, hi ;) I got messed up with the schedule last time after coming back from holiday.
00:04:08 SpamapS: about?
00:04:26 #topic Review last meeting's actions
00:04:33 asalkeld: I've added that
00:04:39 cool
00:04:52 no actions
00:05:02 #topic Adding items to the agenda
00:05:11 anything else to add to https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282014-3-6_0000_UTC.29 ?
00:06:34 #topic Feature Freeze Extensions
00:06:59 icehouse-3 has been branched; this is what we got in. Well done everybody
00:07:01 #link https://launchpad.net/heat/+milestone/icehouse-3
00:07:08 o/
00:07:34 I see this critical didn't make it in: https://bugs.launchpad.net/bugs/1287980
00:07:59 ttx: any chance https://bugs.launchpad.net/bugs/1287980 can be added to i-3 once it lands?
00:09:17 There are 4 blueprints which didn't land in time. I'd like to go through each one to check whether we should be going for an FFE for all of them
00:09:24 #link https://launchpad.net/heat/+milestone/icehouse-rc1
00:09:38 first up is https://blueprints.launchpad.net/heat/+spec/function-plugins
00:09:53 gate is running on those right now
00:09:55 zaneb: there seems to be a bit in the merge queue
00:10:22 zaneb: there are other un-approved changes in that series. Would you say they are part of that blueprint?
00:10:38 let's say that they are
00:10:41 zaneb: which seems to have morphed into an allthethings-plugins ;)
00:10:54 indeed
00:11:12 the function thing is only really interesting in the context of allthethings
00:11:34 there doesn't seem much left; I assume there are no objections to us committing to review these in the next hours/days?
00:11:57 there are only 2 that haven't been approved
00:12:06 #link https://review.openstack.org/#/c/77754/
00:12:19 #link https://review.openstack.org/#/c/77755/
00:12:43 #agreed https://blueprints.launchpad.net/heat/+spec/function-plugins reviews will be complete in hours or days
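For anyone who hasn't followed the series: function-plugins makes intrinsic functions pluggable classes that resolve themselves. A toy sketch of the plugin shape, assuming the heat.engine.function module as it stood at the time; the example function itself is invented, not part of the merged series:

    # A made-up {str_lower: <string>} intrinsic function, to show the
    # shape of a function plugin: subclass Function, implement result().
    from heat.engine import function


    class StrLower(function.Function):
        """Resolve {str_lower: <string>} to the lower-cased string."""

        def result(self):
            # Arguments may themselves contain functions (e.g. get_param),
            # so resolve them before transforming the value.
            resolved = function.resolve(self.args)
            if not isinstance(resolved, basestring):
                raise TypeError('str_lower: argument must be a string')
            return resolved.lower()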
00:12:51 https://blueprints.launchpad.net/heat/+spec/instance-users
00:13:51 * radix arrives
00:13:54 so this is high risk and has already broken things downstream, but it delivers a critical feature, so it seems worth the pain
00:14:55 anyone object to giving these reviews a high priority over the next days? https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bug/1089261,n,z
00:15:43 +1
00:15:45 I'll take that as a no
00:16:20 #agreed https://blueprints.launchpad.net/heat/+spec/instance-users to be reviewed over the next days
00:16:26 https://blueprints.launchpad.net/heat/+spec/hot-software-config
00:16:52 remaining reviews are here: https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/hot-software-config,n,z
00:17:29 this is all shiny, new and self-contained, so I doubt anybody else has tried this stuff yet
00:18:03 I expect there will be bug fixes throughout feature freeze
00:18:40 it's not going to hurt anyone who isn't specifically using it, I assume
00:19:10 zaneb: correct, minor chance of regression in OS::Nova::Server
00:19:50 pffft
00:20:29 that is cool. I took a look at that, but haven't had time to really look into the detail
00:20:45 #agreed https://blueprints.launchpad.net/heat/+spec/hot-software-config to be reviewed over the next days
00:20:56 and finally https://blueprints.launchpad.net/heat/+spec/oslo-messaging
00:21:43 the nature of the beast: it's mostly in one uber-commit https://review.openstack.org/#/c/72798/
00:22:37 yikes, it would take an hour just to review that commit message
00:22:51 ;-)
00:23:01 I did that as a pointer to how to actually review the change ;)
00:23:02 probably a medium risk of regressions
00:23:54 this is the one blueprint that ttx is reluctant to give an FFE on, so all I'm asking for here is to get some commitment to giving it some reviews
00:24:26 sdake: is this change something that has immediate benefit, or is it just long-term architectural tidy-up?
00:24:48 I sent a message to stevebaker with the advantages
00:24:51 I'll cut and paste, sec
00:24:58 sdake: race you
00:25:28 1. oslo-incubator/rpc is only maintained on a best-effort basis. No new features go into oslo-incubator/rpc.
00:25:29 2. oslo.messaging has far superior test coverage vs oslo-incubator. Over time new test cases will go into oslo.messaging but no new test cases will go into oslo-incubator/rpc.
00:25:29 3. I believe, but am not certain, that oslo-incubator/rpc -> oslo.messaging (the actual oslo part, not our implementation specifically) may break on-wire compatibility over AMQP. If we are going to break the on-wire compatibility, it is better to do it now since heat is not in wide deployment, and I expect at least for RHT customers, Juno Heat will be widely deployed. If we wait until Juno, RHT customers will be negatively impacted.
00:25:32 http://paste.openstack.org/show/72731/
00:25:32 4. oslo.messaging has a far larger community of developers that support and maintain the code than oslo-incubator/rpc. The oslo-incubator crew seem uninterested in maintaining oslo-incubator/rpc.
00:25:35 5. oslo-incubator/notifications is incompatible with oslo.messaging/notifications on the wire (I am certain of this) - creating further upgrade problems later on
00:25:38 6. oslo.messaging allows easy migration to new messaging systems by the addition of new transport drivers to oslo.messaging. This is not possible with oslo-incubator/rpc since new driver development only happens in oslo.messaging.
00:26:11 oslo.messaging doesn't spew deprecated amqp messages into the log either.
00:26:30 personally I think it's worth getting this in early in rc1
00:26:37 messaging could support trollius
00:26:45 help move to py3
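For reference, the shape of the API the port moves Heat onto, per oslo.messaging as of roughly version 1.3; the topic, server and endpoint names below are invented for illustration, not Heat's actual ones:

    from oslo.config import cfg
    from oslo import messaging


    class EngineEndpoint(object):
        """Public methods on endpoint objects become callable RPC methods."""

        def list_stacks(self, ctxt):
            return []


    # The transport (e.g. rabbit) is configured via the usual
    # oslo.config options.
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='engine', server='engine-host-1')
    server = messaging.get_rpc_server(transport, target, [EngineEndpoint()],
                                      executor='blocking')
    server.start()
    server.wait()

The calling side builds a messaging.RPCClient on the same target and invokes client.call(ctxt, 'list_stacks').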
00:27:11 i'm almost ready to go
00:27:21 sdake: how are those last tests going?
00:27:24 working on the config generator atm (which is broken in master)
00:27:30 after that, I'm going to tackle the test cases
00:27:39 the config generator is causing the gate to fail, so I need to sort that out first
00:27:49 sorry to interrupt, but I've gotta take off; I just want to mention I'm testing the new AS resources a bit harder over the next week (and thanks for all the reviews)
00:27:53 * stevebaker sharpens sdake's yak shavers
00:27:55 stevebaker: I'm remaining sceptical until I have reviewed the megapatch
00:28:10 zaneb feel free to review now
00:28:13 I'd appreciate it
00:28:24 yeah, on my list for tomorrow
00:28:25 there are about 7 tests commented out which clearly need addressing
00:28:32 other than that, I think it's ready to rock
00:28:33 zaneb: Yep, I won't say we'll land it, I'll say we'll review it
00:29:24 * radix steps out
00:29:31 #agreed https://blueprints.launchpad.net/heat/+spec/oslo-messaging will be reviewed over the next days
00:29:36 radix: hai bai
00:29:53 #topic update failure recovery
00:30:30 SpamapS: can you give us an update on your journal change?
00:31:24 Yes
00:31:37 so I have a set of changes in progress to record which resources have been processed in an update...
00:31:49 this is an attempt at a non-invasive resumable update
00:32:10 it is at the point where it records the things that have been done properly in a table..
00:32:26 but it still does not use that information yet to fast-forward through the update to the actual failed spot
00:32:47 I have been distracted with the tripleo sprint.. where I've spent much of the time dealing with the recent change to heat to require keystone domains.
00:32:59 SpamapS: how is this journal table different to the events table?
00:33:01 which will hopefully get through the gate in the next 30 minutes
00:33:37 zaneb: it is cleared after a successful update, and it is built to have lookups done by resource, which the events table is not built for
00:34:06 interesting
00:34:09 zaneb: it is entirely possible events could be used
00:34:36 zaneb: I made a shallow attempt at that before pushing toward a specific journal table.
00:36:03 the intention of this is to allow fixing the "can't resume stacks" bug without major surgery, so that we can reasonably merge this post-freeze..
00:36:19 SpamapS: so we're in feature freeze now. Could you justify this change as being a bug fix?
00:36:23 long term we should do the right thing and record the actual state as we do an update, and resume fully.
00:36:39 stevebaker: IMO it is a bug fix, but I can accept that others will argue it is a feature.
00:37:00 I don't actually care, I just want to use Heat for managing large stacks over long periods of time without having to run a fork.
00:37:41 yes, I'd like to avoid that too
00:38:04 SpamapS: so you're saying just the journal stuff is to be merged now?
00:38:19 and the rest after the Juno thaw?
00:38:25 or am I misreading?
00:39:12 just the journal and the resume using the journal
00:39:39 ah, ok
00:39:41 but then in Juno we rip that out and replace it with a full stateful update and resume.
00:39:53 understood
00:39:57 because this carries problems
00:40:18 such as the fact that it won't really work if your new template doesn't have the same resources _before_ the fail.
00:40:34 because we're just going to skip the things we've done.. we're not going to have something to diff with
00:40:47 but I can accept that, as that will still be useful for recovering updates.
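A purely illustrative guess at the shape of such a journal table, based only on what SpamapS describes above (keyed by stack and resource, cleared on a successful update); this is not taken from the actual patch:

    import sqlalchemy as sa

    journal = sa.Table(
        'stack_update_journal', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('stack_id', sa.String(36), nullable=False, index=True),
        # lookups happen per resource, which the events table
        # is not built for
        sa.Column('resource_name', sa.String(255), nullable=False,
                  index=True),
        sa.Column('action', sa.String(36)),   # e.g. UPDATE
        sa.Column('status', sa.String(36)),   # e.g. COMPLETE
        sa.Column('updated_at', sa.DateTime),
    )

On a resumed update the engine would skip any resource already journalled COMPLETE; on a successful update all rows for the stack are deleted.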
00:41:58 anyway, I haven't finished the code yet..
00:42:02 but I expect it by Friday
00:42:18 I'm happy to treat this as a bug fix
00:42:59 my other missing feature is graceful updates using a pre-action-notification-thing .. but I can reasonably do that in a plugin until we get a good implementation in Heat
00:43:29 SpamapS: can you elaborate on that a bit?
00:44:02 in tripleo, we want to migrate instances off compute nodes before we rebuild them
00:44:26 I can just write an OS::Nova::Server subclass plugin that does that
00:44:42 until we have something more awesomer
00:45:03 SpamapS: you can have a SoftwareDeployment which runs its config on DELETE (or any other combination of actions)
00:45:27 so that handles server quiescing, stevebaker?
00:45:50 sdake: that was the intent
00:46:49 anyway, 14 minutes left
00:46:57 #topic TOSCA HOT conversion tool
00:47:01 stevebaker: DELETE is not the problem, rebuild is.
00:47:37 spzala: could you post a couple of links related to the tool
00:47:46 Sure
00:47:52 SpamapS, does that also mean we can do some customized work after an instance is deleted from an instance group?
00:47:57 stackforge project is created and approved - https://github.com/stackforge/heat-translator
00:48:01 SpamapS: ah, I understand. let me think about that one
00:48:08 liang: yes, that is part of it too.
00:48:18 and https://github.com/spzala/heat-translator
00:48:39 spzala that will only be able to handle a subset of tosca, correct?
00:48:40 spzala: it would be nice if you could use https://github.com/openstack-dev/cookiecutter to build a standard project layout
00:48:54 per the next step in the new stackforge project instructions, I still need to work on setting up initial review requests and I will be doing it soon. Code not yet ready for review. (first code drop is there on modelling but it's a work in progress)
00:49:07 sdake: for now, yes that's correct
00:49:19 spzala go ahead and get your work in progress into the queue so people can look at it
00:49:29 stevebaker: oh.. sorry, didn't know about it. Yup, will do.
00:49:32 otherwise one big code drop = suboptimal :)
00:49:36 nice spzala
00:49:56 SpamapS, That's great. thanks!
00:49:58 asalkeld: thanks! :-)
00:50:03 sdake: :-) will do
00:50:04 I'd like this tool to eventually not just do TOSCA<->HOT
00:50:24 stevebaker: agree, that's how I'm trying to design it.
00:50:35 a real basic cfn->HOT would be nice
00:50:44 sdake: the tosca yaml spec is in progress but getting close to being finalized as the first working draft, so it's not a blocker now.
00:50:46 spzala: have you been following the Murano incubation thread on openstack-dev?
00:50:49 really interesting
00:50:54 ooo, yeah, and remove cfn from heat
00:51:15 zaneb: have you been doing some tosca stuff?
00:51:18 is it going to have api endpoints too?
00:51:30 zaneb: yes, I am. I did have some discussion with tspatzier and I did see a few of his replies on that thread
00:51:45 asalkeld: :-)
00:51:54 asalkeld: at this point it's just a cli and lib. Maybe eventually we can put it in a heat service
00:51:59 stevebaker: I joined the TOSCA TC. Mostly just keeping lines of communication open at this stage
00:52:00 asalkeld: no api endpoint planned for now
00:52:15 k
00:52:24 zaneb: you've come a long way ;)
00:52:37 zaneb: yup it was great to see you on those calls
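On the quiescing tangent above: the SoftwareDeployment-on-DELETE pattern stevebaker suggests would look roughly like this in a template. Resource and property names follow the hot-software-config series under review, so they are worth re-checking against what finally merges, and the script body is a placeholder:

    heat_template_version: 2013-05-23

    resources:
      compute_server:
        type: OS::Nova::Server
        properties:
          image: fedora-20
          flavor: m1.large
          user_data_format: SOFTWARE_CONFIG

      quiesce_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/sh
            # site-specific quiescing goes here, e.g. live-migrating
            # guests away before this node goes down

      quiesce_deployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: quiesce_config}
          server: {get_resource: compute_server}
          # run the config only for the DELETE action, not CREATE/UPDATE
          actions: [DELETE]

As noted above, though, DELETE is not the hard case here; rebuild is.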
00:52:41 so are we putting murano in heat-translator?
00:52:47 * asalkeld ducks
00:52:56 zaneb: everyone on the TC liked it too
00:52:57 * stevebaker kicks asalkeld's ankles
00:53:10 asalkeld: I think we should :)
00:53:15 asalkeld: lol
00:53:59 saying that, we could put a part of solum there
00:54:05 kinda
00:54:24 anyway, that's all I wanted to say, except that we should eventually consider bringing heat-translator into openstack and heat-core instead of stackforge
00:54:40 stevebaker: thanks!
00:54:49 #topic image building plugin
00:54:50 in general I think we should make a mega openstack program with murano, heat and solum
00:55:02 asalkeld: go!
00:55:04 asalkeld++
00:55:15 ok, so in solum we are making an image building service
00:55:16 might as well throw in workflow at the same time
00:55:23 what would it do?
00:55:36 would people be interested in a heat plugin
00:55:51 so it could expose a tripleo element
00:56:06 and you post a base image glance id
00:56:13 asalkeld: that has always sounded like a workflow thing, not a Heat thing to me
00:56:14 and the link to a git repo
00:56:26 asalkeld: so it would spin up a compute instance and run diskimage-builder in it, then post to glance?
00:56:34 massive is a good word for it.. :)
00:56:46 stevebaker, something like that
00:57:06 so we need to convert app code into images
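Nothing like this exists yet; to make the proposal concrete, a template using such a plugin might look like this, where every type, property and attribute name below is invented:

    heat_template_version: 2013-05-23

    parameters:
      base_image_id:
        type: string

    resources:
      app_image:
        type: Solum::ImageBuilder        # hypothetical resource type
        properties:
          base_image: {get_param: base_image_id}
          git_repo: https://git.example.com/myapp.git
          elements: [my-element]         # tripleo-style image elements

    outputs:
      image_id:
        # hypothetical attribute: the glance id of the built image
        value: {get_attr: [app_image, image_id]}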
00:57:20 asalkeld: how about it goes into contrib at first?
00:57:26 the question is: does anyone else need such a thing
00:57:30 seems like what is needed is an actual image building service
00:57:42 that is what we are doing
00:57:43 rather than heat dealing with that - we end up in the same situation as we have with autoscaling atm
00:57:56 sdake, ^
00:58:14 asalkeld so you mean a solum plugin to said image building service
00:58:25 yeah
00:58:28 sdake: maybe we should be the home of that service though. nobody else wants it
00:58:32 seems fine to me, we put all kinds of shit in contrib
00:58:44 sounds like solum wants it
00:58:45 we have a workflow library (taskflow), a workflow service (mistral), a workflow orchestration thing (murano)... let's not put workflow resources into Heat. It just fragments things further
00:59:19 zaneb, it's a resource to customize an image
00:59:27 sounds like aeolus
00:59:32 a better version of software config
00:59:35 :)
00:59:44 asalkeld: customising an image is an action, not a thing
00:59:58 so is installing software
01:00:02 your point?
01:00:07 agree
01:00:12 disagree
01:00:13 we have some workflow features in heat already
01:00:22 * stevebaker will continue the meeting until someone kicks us out
01:00:25 a software deployment declares deployed software
01:00:29 we don't have a software installer resource. we have a software config resource
01:00:43 first would be an action, second is a thing
01:01:07 zaneb, the resource is the element / url to that
01:01:16 no difference
01:01:22 I don't think we got super far with the actual image customization
01:01:36 asalkeld: my point is that there are logical ways to accomplish this in both Mistral and Murano
01:01:51 er, no
01:01:54 shoehorning it into Heat is IMHO a mistake
01:01:59 * mspreitz has to go but hopes Zane talks sense into the rest of you
01:02:10 we're clearly desperately missing a workflow service
01:02:12 ok, my suggestion was not to put the service in heat
01:02:12 http://blog.aeolusproject.org/building-openstack-images-with-image-factory/
01:02:24 stevebaker: seems like this will be a looooong meeting :-)
01:02:31 IMO we need to get our story straight around workflows across all of OpenStack
01:02:35 only if you want a plugin
01:02:41 alright, let's move to #heat
01:02:45 zaneb: agreed
01:02:46 #endmeeting