00:00:35 <zaneb> #startmeeting heat
00:00:36 <openstack> Meeting started Thu Feb  6 00:00:35 2014 UTC and is due to finish in 60 minutes.  The chair is zaneb. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:00:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
00:00:39 <openstack> The meeting name has been set to 'heat'
00:01:03 <zaneb> #topic Roll call
00:01:07 <randallburt> o/
00:01:08 <kanabuchi> hello
00:02:03 <zaneb> slow day today
00:02:06 <randallburt> yup
00:02:17 <nanjj> hello
00:02:21 <tango|2> hello
00:05:00 <zaneb> ok, I think it's going to be a quick one today
00:05:15 <zaneb> stevebaker is away, so I volunteered last week to chair
00:05:28 <zaneb> sdake and jpeeler also said they couldn't make it today
00:05:41 <zaneb> #topic Review last meeting's actions
00:05:44 <radix> hello
00:05:52 <zaneb> #link http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-01-29-20.00.html
00:05:58 <zaneb> radix: o/ :)
00:06:01 <radix> :-)
00:06:07 <zaneb> there weren't any!
00:06:09 <zaneb> next :)
00:06:19 <zaneb> #topic Adding items to the agenda
00:06:25 <zaneb> anybody?
00:06:52 <randallburt> zaneb:  current agenda link?
00:07:01 <zaneb> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda
00:07:38 <zaneb> ok, next :)
00:07:47 <zaneb> #topic Default Resource Names
00:07:54 <zaneb> radix: is this you?
00:07:59 <randallburt> ok so, kebray asked me to represent him here
00:08:01 <radix> ummm not me
00:08:06 <radix> ok :)
00:08:07 <randallburt> zaneb:  it was kebray
00:08:11 <zaneb> ok, cool
00:08:20 <zaneb> randallburt: you have the floor :)
00:08:32 <randallburt> basically he is asking that we use stuff like OS::Compute::Server rather than the "internal" project names
00:08:58 <randallburt> basically, use the type of service from the catalog and OpenStack documentation in place of the project name
00:08:59 <zaneb> I like that idea
00:09:12 <radix> should we support both? (I guess we need to for backwards compatibility at least)
00:09:22 <zaneb> radix: yes
00:09:25 <randallburt> cool. based on my understanding, he'd add something in the default environment to alias like we did with Quantum
00:09:47 <radix> should those aliases be in the default environment, or should it be in the resource mappings in the code?
00:09:48 <zaneb> tbh it's also trivial to do it in code
00:09:52 <radix> :)
00:10:04 <randallburt> yeah, I'm not personally fussed either way
00:10:16 <zaneb> I actually would have suggested s/Nova/Compute/ a lot earlier...
00:10:21 <nanjj> To give one example, how would we name 'OS::Docker::Container'?
00:10:39 <radix> nanjj: docker isn't really related here, it's not an OpenStack program
00:10:40 <zaneb> except that I was saving those names for when we found out that our resource models were crap, and we wanted to redo them ;)
00:10:43 <radix> er, I mean it's not *in* an OpenStack program
00:11:03 <randallburt> so I'll let him know to raise bp's if needed. perhaps a ML thread?
00:11:19 <randallburt> zaneb:  so cagey ;)
00:11:30 <radix> heh heh
00:11:37 <zaneb> +1, only because turnout is so low at this meeting
00:11:52 <zaneb> (for ML post, that is)
00:12:01 <randallburt> zaneb:  agreed
00:12:30 <zaneb> ok, anything else on this topic?
00:12:38 <randallburt> nope
00:12:42 <zaneb> cool
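(For reference, a minimal sketch of the aliasing approach discussed above, assuming Heat's per-module resource_mapping() hook, the same mechanism resource plugins use to register type names; the catalog-style name OS::Compute::Server and the import path are illustrative only. The equivalent could also be done with a resource_registry entry in the default environment, as was done for the Quantum rename.)

    # Hypothetical sketch only: register a catalog-style alias for the
    # existing server resource class, assuming the Icehouse-era module
    # layout (heat/engine/resources/server.py).
    from heat.engine.resources.server import Server


    def resource_mapping():
        # Both names resolve to the same implementation, so existing
        # templates using OS::Nova::Server keep working while new
        # templates can use the service-catalog style name.
        return {
            'OS::Nova::Server': Server,
            'OS::Compute::Server': Server,
        }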
00:12:49 <zaneb> #topic Discuss status of x-auth-trust bp
00:12:53 <zaneb> I added this one
00:13:03 <zaneb> but I have actually already checked with shardy
00:13:15 <zaneb> so there's not actually anything to discuss :)
00:13:26 <zaneb> #topic Scrub the blueprints list for Icehouse
00:13:29 <randallburt> does it support v2 keystone? ;)
00:13:34 * randallburt ducks
00:13:49 <zaneb> randallburt: don't go there :D
00:13:54 <zaneb> #link https://launchpad.net/heat/+milestone/icehouse-3
00:14:13 <zaneb> so I already bumped a few bps to next
00:14:29 <zaneb> radix: I bumped a couple of the autoscaling API ones, I know
00:14:36 <randallburt> are we having a "cut-off date"? for example, IIRC, glance is saying in by the 17th or it's not going to make it.
00:14:55 <radix> ok, looking
00:15:00 <zaneb> randallburt: I believe we're settling on the same feature proposal date as other projects
00:15:04 <randallburt> k
00:15:26 <radix> wait, did you bump any? it looks the same to me
00:16:17 <zaneb> radix: I definitely did, because you only have 3 targeted for i-3 in that list linked above now
00:16:30 <zaneb> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
00:16:56 <zaneb> so patches for features should be submitted for review by 18 Feb
00:17:08 <zaneb> #link https://wiki.openstack.org/wiki/FeatureProposalFreeze
00:17:33 <zaneb> and they need to be merged by Feature Freeze on the 6th of March
00:17:34 <asalkeld> o/
00:17:43 <zaneb> #link https://wiki.openstack.org/wiki/FeatureFreeze
00:17:49 <zaneb> asalkeld: o/
00:18:01 <randallburt> zaneb:  did you bump my stack-update/resource status bug?
00:18:02 <asalkeld> sorry was on the phone
00:18:17 <andrew_plunk> o/
00:18:35 <zaneb> randallburt: I haven't bumped any bugs as far as I recall
00:18:37 <radix> ok
00:18:50 <randallburt> hrm. k, I'll check in a bit.
00:18:52 <radix> zaneb: oh, I was confused
00:18:53 <radix> zaneb: yeah, ok
00:18:57 <zaneb> but it was yesterday already that I was messing with these, so it's hard to say ;)
00:19:25 <randallburt> zaneb:  nevermind. found it.
00:19:27 <radix> yeah, some of that stuff needs to be moved into future
00:19:28 <randallburt> all is well
00:19:41 <zaneb> the bottom line is that there is less than 2 weeks left to propose patches for new features
00:19:46 <radix> as-engine-db, at least
00:20:03 <zaneb> so it's time to start aggressively bumping blueprints to next if you don't think they're going to make it
00:20:17 <radix> I really doubt intermediate resources are going to be done by I at this point, but I do plan on starting on them
00:20:17 <randallburt> seems there are too many there without names attached, IMO
00:20:38 <zaneb> #action everybody to scrub their assigned blueprints for icehouse-3
00:20:46 <radix> alright, doing that now
00:21:04 <zaneb> randallburt: I only see two unassigned bps
00:21:32 <zaneb> they both look to be on stevebaker's patch, but I assume he left them unassigned in case someone else wanted to pick them up
00:21:38 <randallburt> oh, I was counting bugs too
00:21:44 <zaneb> I think it's likely we'll want to bump both of those
00:21:51 <randallburt> probably
00:21:56 <zaneb> but I'll leave it to stevebaker when he gets back
00:22:34 <zaneb> #topic Open Discussion
00:22:39 <kanabuchi> zaneb: I'd like to discuss this bp: https://blueprints.launchpad.net/heat/+spec/router-properties-object
00:22:55 <zaneb> kanabuchi: ok, go ahead
00:23:06 <kanabuchi> I wrote down my opinion on the bp
00:23:32 <kanabuchi> I think ExtraRoutes is a really important function for supporting physical hardware
00:23:41 <zaneb> just reading it now
00:23:44 <kanabuchi> ok
00:25:34 <zaneb> so the issue for me is that it's really hard for Heat to model things that are completely outside of OpenStack and Heat's data model
00:26:31 <randallburt> kanabuchi:  does Neutron have an actual API endpoint for CRUD on extra routes? can I get a list of them from neutron and edit each one?
00:26:31 <kanabuchi> Yes, real hardware resources can't be modelled in Heat right now
00:26:56 <randallburt> or is this strictly operational in nature?
00:27:26 <zaneb> kanabuchi: you seem to be saying that extra routes are needed for operators (i.e. admins)... it's not clear what that implies for Heat, which is user-facing
00:28:12 <kanabuchi> I'm not sure about Neutron's API design; extra routes have to be updated via the router right now.
00:28:55 <kanabuchi> zaneb: Yes, ExtraRoute needs to provide an option for operators
00:29:48 * radix is still scrubbing BPs
00:29:56 <kanabuchi> zaneb: My picture of the use case is
00:30:41 <kanabuchi> zaneb: when the operator wants to use hardware network devices, for example VPN routes, L3 routers, or other devices...
00:31:27 <kanabuchi> zaneb: that physical hardware can't be modelled in Heat at present, right?
00:31:58 <zaneb> Heat is primarily a service for users; I don't think we should have resources in the tree that (1) are only for operators, and (2) don't actually work for orchestration
00:32:22 <zaneb> operators, unlike users, have the flexibility to install their own plugins
00:32:58 <zaneb> so if you wanted to put the proposed patch in /contrib, I would be OK with that
00:33:26 <zaneb> if it's going to be user-facing, we need to figure out a different model IMO
00:33:35 <asalkeld> +1
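(For concreteness, a rough sketch of what a /contrib plugin along the lines zaneb suggests might look like, assuming the Icehouse-era NeutronResource base class, its self.neutron() client accessor, and python-neutronclient's show_router()/update_router() calls; the property names and route handling are illustrative, not the actual proposed patch.)

    # Hypothetical /contrib sketch, not the proposed patch: adds one static
    # route to an existing Neutron router via the router's 'routes' attribute.
    from heat.engine.resources.neutron import neutron


    class ExtraRoute(neutron.NeutronResource):
        # Old-style (dict) property schema, as used by resources of this era.
        properties_schema = {
            'router_id': {'Type': 'String', 'Required': True},
            'destination': {'Type': 'String', 'Required': True},
            'nexthop': {'Type': 'String', 'Required': True},
        }

        def handle_create(self):
            client = self.neutron()
            router_id = self.properties['router_id']
            routes = client.show_router(router_id)['router'].get('routes', [])
            routes.append({'destination': self.properties['destination'],
                           'nexthop': self.properties['nexthop']})
            client.update_router(router_id, {'router': {'routes': routes}})

        def handle_delete(self):
            # Deletion would filter this route back out of the router's
            # route list; omitted here for brevity.
            pass


    def resource_mapping():
        return {'OS::Neutron::ExtraRoute': ExtraRoute}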
00:35:37 <zaneb> ok, anything else on this or any other topic?
00:36:01 <radix> nothing from me
00:36:10 <kanabuchi> zaneb: please let's continue this discussion, thanks
00:36:31 <kanabuchi> oh, not today
00:36:33 <asalkeld> https://review.openstack.org/#/c/71199/
00:36:47 <asalkeld> (easier contib setup in devstack)
00:36:55 <randallburt> nope
00:36:56 <zaneb> #action zaneb add summary of this discussion to the router-properties-object blueprint
00:36:57 <asalkeld> if anyone is interested
00:38:58 <randallburt> cool asalkeld. minor -1 but lgtm
00:39:01 <zaneb> asalkeld: how does that fit in with https://review.openstack.org/#/c/68751/ ?
00:39:36 <zaneb> actually, that's the wrong patch
00:39:54 <asalkeld> zaneb, I know there is a rename
00:39:56 <randallburt> zaneb:  yeah, stuff got moved around a lot in those patches, but should be fixable in the devstack patch.
00:40:01 <zaneb> https://review.openstack.org/#/c/68746/8
00:40:09 <zaneb> that one ^
00:40:12 <asalkeld> that's why I put Richard on the review
00:40:49 <zaneb> ok, cool
00:41:03 * zaneb goes back to ignoring it ;)
00:41:17 <asalkeld> zaneb,  that's for loading even when it won't work
00:41:23 <asalkeld> I want the plugin to work
00:41:31 <zaneb> yep, you're right, different thing
00:41:33 <asalkeld> :-O
00:41:53 <asalkeld> I'll start using the docker plugin in anger soon
00:42:28 <zaneb> asalkeld: so you're working on Solum stuff now then?
00:42:43 <asalkeld> yeah mostly
00:42:55 <asalkeld> gotta make it do something
00:43:09 <zaneb> cool, sounds like a good project for you
00:43:12 <tango|2> Can I ask about the Update Failure Recovery bp?
00:43:21 <zaneb> tango|2: you may
00:44:07 <tango|2> I am working on the bp for troubleshooting, got a dependency on the Update Failure Recovery.
00:44:17 <tango|2> what's the outlook for this bp?
00:44:48 <zaneb> chances are nil for icehouse :(
00:44:58 <zaneb> I already bumped it to next
00:45:29 <tango|2> Is this hard to do? say for someone new like me, maybe with some guidance?
00:46:30 <zaneb> tango|2: it's just about the hardest possible task I can imagine taking on
00:47:36 <tango|2> ok, that's good to know, so I won't make a fool of myself :)
00:47:54 <zaneb> tango|2: I'm not seeing a dependency in the troubleshooting-low-level-control blueprint
00:48:07 <zaneb> #link https://blueprints.launchpad.net/heat/+spec/troubleshooting-low-level-control
00:48:19 <tango|2> It's for continuing a failed stack after an update
00:49:26 <zaneb> if you think it requires another blueprint, please add it as a dependency down at the bottom there
00:49:49 <tango|2> ok, I will add the dependency.
00:49:53 <zaneb> but reading through the description, it's not clear to me that it does
00:50:05 <radix> fwiw pretty much everything depends on it IMO :)
00:51:01 <zaneb> my plan is to use the 2+ months between feature freeze and summit to work on stuff for Juno
00:51:13 <tango|2> if a create stack fails, and the user wants to debug, fix, then continue the stack, I think we can handle that in the bp
00:51:16 <zaneb> that way I might have a chance of getting something done in the next cycle
00:51:31 <zaneb> in Icehouse I have got nothing done except emails :(
00:52:27 <tango|2> ok I will just handle debugging failed stack-create for now, deferring the failed stack-update till Juno
00:52:48 <zaneb> tango|2: it definitely seems less useful without being able to continue, but it doesn't seem like you couldn't write a significant portion of the code without that
00:53:08 <zaneb> ok, cool
00:53:52 <zaneb> remember, you have <2 weeks left to submit patches if you want them to land for Icehouse
00:54:06 <zaneb> ok, shall we wrap this one up?
00:54:13 <tango|2> sounds good
00:54:56 <zaneb> this meeting didn't go as quickly as it was looking like it would after all :D
00:55:06 <nanjj> :-)
00:55:21 <radix> hehe
00:55:27 <zaneb> thanks everyone, see you next time and/or in #heat
00:55:27 <kanabuchi> :)
00:55:31 <kanabuchi> bye
00:55:35 <zaneb> #endmeeting