20:00:46 <hub_cap> #startmeeting trove
20:00:47 <grapex> o/
20:00:48 <openstack> Meeting started Wed Oct  2 20:00:46 2013 UTC and is due to finish in 60 minutes.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:52 <openstack> The meeting name has been set to 'trove'
20:00:57 <robertmy_> o/
20:00:59 <datsun180b> just in time
20:01:02 <cp16net> o^/
20:01:06 <robertmy_> nick robertmyers
20:01:06 <pdmars_> o/
20:01:07 <redthrux> o/
20:01:08 <kevinconway> \0/
20:01:09 <robertmy_> ha
20:01:11 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:01:16 <hub_cap> nice robertmy_
20:01:18 <dmakogon_ipod> o/
20:01:22 <robertmyers> o/
20:01:23 <hub_cap> at least u didnt show your password like grapex did once
20:01:31 <yogesh> hi
20:01:32 <datsun180b> hunter2
20:01:51 <esmute> hai
20:01:54 <grapex> hub_cap: The worst part was that it was 12345
20:01:54 <hub_cap> #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-09-25-20.03.html
20:01:55 <kevinconway> i'm going to hunter2 you up datsun180b
20:01:55 <mattgriffin> hi
20:02:07 <hub_cap> thats my luggage password grapex
20:02:34 <hub_cap> so is slick not around vipul?
20:02:58 <hub_cap> i didnt do nuttun wrt launchpad + perms
20:03:07 <hub_cap> #action SlickNik, hub_cap to check with other teams to set groups permissions correctly on LaunchPad
20:03:12 <hub_cap> i think what we need is a -bugs team
20:03:22 <vipul> o/
20:03:26 <hub_cap> ok moving on
20:03:26 <SlickNik> here
20:03:29 <hub_cap> ok
20:03:38 <hub_cap> SlickNik: did u do anything wrt the LP stuff?
20:03:41 <dmakogon_ipod> hub_cap: -contributors team
20:04:00 <SlickNik> hub_cap: nope, haven't had a chance.
20:04:34 <hub_cap> moving on
20:04:46 <hub_cap> #topic rolling back resources
20:04:49 <hub_cap> so
20:04:59 <hub_cap> historically we have not rolled back resources
20:05:04 <hub_cap> and we have let a delete call clean things up
20:05:18 <dmakogon_ipod> i'm here
20:05:31 <hub_cap> dmakogon_ipod: has suggested cleaning up some of the resources when a failure happens
20:05:36 <dmakogon_ipod> main idea for rolling back is to avoid quota exceedance
20:05:46 <isviridov_> why some, but not all?
20:05:46 <hub_cap> well i have a Q
20:05:57 <hub_cap> do the items you roll back have quotas associated with them?
20:06:07 <esmute> yes
20:06:09 <redthrux> yeah - i was going to say do you have a lower quota for security groups than instances
20:06:10 <dmakogon_ipod> isviridov: one component per review
20:06:22 <dmakogon_ipod> hub_cap: no
20:06:38 <hub_cap> ok so then the quota exceedance does not matter here right?
20:06:50 <dmakogon_ipod> the security groups quota is controlled by nova
20:07:11 <hub_cap> ok so we are talking about the 3rd party quotas, and im sure DNS will be the same
20:07:17 <redthrux> so i've got a couple of issues:  1) delete instance call when dns support is enabled causes instance delete to fail if dns entry doesn't exist
20:07:19 <dmakogon_ipod> nova quota exceedance does matter, but that one is out of trove's scope
20:07:30 <hub_cap> but our main quota is instance, right?
20:07:34 <dmakogon_ipod> redthrux: this is a bug
20:07:46 <hub_cap> if you have an instance, in failed or building state
20:07:46 <dmakogon_ipod> hub_cap: yes
20:07:47 <hub_cap> its a part of  your quotas
20:07:56 <esmute> so when an instance errors in the models (either because of sec-group or DNS), a record in the DB is created
20:07:57 <hub_cap> you get 1 less instance you can provision
20:08:07 <redthrux> +1 hub_cap, esmute
20:08:09 <dmakogon_ipod> https://review.openstack.org/#/c/45723/ - although take a look at it
20:08:19 <redthrux> and we're only rolling back if prepare fails?
20:08:20 <grapex> redthrux: It seems like we could handle that by not failing the delete if an associated resource can't be found (i.e. was already deleted or orphaned)
20:08:20 <hub_cap> right esmute so when you do a POST /instances, no matter the outcome of it
20:08:22 <esmute> when the exception is raised, the quota engine rolls back the quota... back to what it originally was
20:08:29 <esmute> but then the record is still there
20:08:42 <cp16net> +1 grapex
20:08:46 <hub_cap> esmute: so if an instance is in FAILED status, it's not a part of the quotas?
20:08:48 <esmute> so when the user tries to delete that instance, the quota will falsely decrease
20:08:58 <hub_cap> ok thats a bug esmute
20:09:10 <redthrux> grapex - yes - basically i'm saying this has to be addressed as a prerequisite to cleaning up
20:09:13 <hub_cap> ok im really confused
20:09:16 <hub_cap> let me reiterate
20:09:20 <hub_cap> with what should happen
20:09:24 <hub_cap> 1) an instance is created
20:09:25 <esmute> so what the rollback is trying to do is also to rollback the db record crated
20:09:27 <esmute> created*
20:09:29 <hub_cap> 2) the quota is increased
20:09:33 <hub_cap> 3) a failure occurs
20:09:39 <hub_cap> 4) the quotas remain the same
20:09:47 <hub_cap> 5) a user deletes
20:09:55 <hub_cap> 6) the quota goes back down by 1
20:09:56 <esmute> 1) instance is created
20:10:03 <esmute> 2) quota is increased
20:10:17 <esmute> 3) failure occurs in the models (different if it occurs in TM)
20:10:33 <esmute> 4) quota catches the exception and rolls back the quota to its original value
20:10:49 <grapex> esmute: What is the distinction for 3? By models do you mean if it fails in the api daemon?
20:10:49 <esmute> 5) the user sees the instance still there (because it was not rolled back)
20:10:54 <esmute> 6) user does delete
20:11:04 <hub_cap> esmute: this is not a conversation about the bug
20:11:07 <esmute> 7) quota usages decreases (falsely)
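A minimal pseudocode-style sketch of the create-path quota handling esmute walks through above; quota_engine, DBInstance, and task_api are stand-in names for illustration, not Trove's actual quota API:

```python
# Stand-in names throughout -- this only illustrates the flow described above,
# not Trove's real quota engine.
def create_instance(context, body):
    reservation = quota_engine.reserve(context, instances=1)    # step 2: quota is increased
    try:
        db_record = DBInstance.create(context, body)             # the record is persisted
        task_api.create_instance(context, db_record.id)          # hand off to the taskmanager
    except Exception:
        # step 4: a failure in the API/models rolls the reservation back,
        # but the DB record stays visible to the user (step 5), so a later
        # delete decrements usage a second time (step 7).
        quota_engine.rollback(context, reservation)
        raise
    quota_engine.commit(context, reservation)                    # once the TM owns it, usage is committed
```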
20:11:07 <SlickNik> esmute: I think what hub_cap is saying is that if the instance is around (even in FAILED state), the quota shouldn't be rolled back.
20:11:09 <hub_cap> that you have found / fixed
20:11:12 <redthrux> it's about resources
20:11:17 <redthrux> esmute ^^
20:11:25 <redthrux> so - things like dns fall under this
20:11:31 <dmakogon_ipod> esmute: the current trove state is that a user cannot delete a stuck instance
20:11:33 <hub_cap> lets not worry about that right now
20:11:47 <esmute> grapex: Once the request goes to the TM, the quota usage is committed...
20:11:52 <redthrux> stuck how, dmakogon_ipod?
20:12:00 <esmute> but if it fails in the API, the quota is rolled back
20:12:05 <dmakogon_ipod> BUILDING status on poll_until
20:12:10 <hub_cap> we should fix a stuck instance
20:12:16 <hub_cap> rather than roll back _some_ of the resources
20:12:22 <hub_cap> that will lead to more confusion
20:12:25 <redthrux> okay - dmakogon_ipod - that means the prepare call failed
20:12:31 <dmakogon_ipod> hub_cap: how ??
20:12:42 <SlickNik> So, the quota should really be tied to the resource.
20:12:47 <hub_cap> do you know the reason for it getting stuck?
20:12:51 <hub_cap> dmakogon_ipod: ^^
20:12:53 <esmute> what we can do is mark the instance as FAILED or ERROR but do not re-raise the error
20:12:54 <grapex> hub_cap: Agreed. If we roll back resources because something failed we'll end up duplicating resource-deletion logic that already exists in the delete call
20:13:00 <esmute> otherwise the quota will roll back
20:13:01 <redthrux> we actually shouldn't wait for instances to come out of BUILD status with a timeout
20:13:03 <dmakogon_ipod> redthrux: it means that instance cannot be repaired
20:13:06 <SlickNik> If the resource still exists in the DB, the quota should be +1
20:13:24 <hub_cap> ok we are rabbit holing
20:13:27 <hub_cap> *holeing
20:13:32 <redthrux> i don't think rolling back is smart period
20:13:35 <hub_cap> this is not helping the decison
20:13:40 <kevinconway> *wholing?
20:13:48 <hub_cap> kevinconway: 1) users, 2) punch
20:14:01 <hub_cap> its a matter of do we roll back resources on a poll_until timeout
20:14:04 <redthrux> i'd rather us reorder things - dns before prepare - and if prepare fails, then we mark it as failed
20:14:05 <hub_cap> right?
20:14:06 <cweid> 3) profit.
20:14:10 <hub_cap> lol
20:14:13 <grapex> esmute: So if the quota is falsely updated when it fails in the api daemon, i.e. before the taskmanager, I think the resolution is to make the failed state happening in the api daemon something which is shown to the user as FAILED but maybe is a different state (has a different description)
20:14:14 <SlickNik> cweid: lol
20:14:29 <hub_cap> clear ;
20:14:37 <redthrux> i actually loathe that a slow prepare call - that eventually finishes - can cause an instance to be in failed status
20:14:39 <hub_cap> 1:14 PM hub_cap its a matter of do we roll back resources on a poll_until timeout
20:14:40 <grapex> Actually, let's talk about the bug esmute brought up after we finish talking about dmakogon_ipod's topic.
20:14:44 <dmakogon_ipod> as a customer i don't know anything about low-level deployment, so if it cannot become active it is broken, and the customer should delete it, but they can't
20:14:45 <redthrux> this happens - i've seen it with my own eyes
20:14:55 <hub_cap> grapex: sure we can add it to the end of the agenda
20:14:58 <esmute> grapex: agree. But if we do that, we can't re-raise the error.. otherwise the quota will roll back to what it was before
20:15:06 <hub_cap> lets not talk about esmute's bug
20:15:09 <hub_cap> its not the topic
20:15:10 <redthrux> i'd like us to understand the poll_until is for USAGE -
20:15:10 <hub_cap> period
20:15:25 <hub_cap> 1:14 PM hub_cap its a matter of do we roll back resources on a poll_until timeout
20:15:28 <hub_cap> 1:14 PM hub_cap its a matter of do we roll back resources on a poll_until timeout
20:15:31 <hub_cap> 1:14 PM hub_cap its a matter of do we roll back resources on a poll_until timeout
20:15:34 <hub_cap> lets talk about this
20:15:37 <hub_cap> and this only
20:15:37 <esmute> hub_cap: The fix for this rollback affects my bug :P
20:15:42 <robertmyers> lets cut out the poll until
20:15:47 <vipul> what's the definition of rollback?
20:15:52 <vipul> delete everything?
20:15:58 <hub_cap> #link https://review.openstack.org/#/c/45708/
20:15:59 <vipul> or mark something in a terminal status
20:16:01 <dmakogon_ipod> vipul: no
20:16:07 <hub_cap> no vipul, to remove security groups
20:16:19 <dmakogon_ipod> vipul: we suppose to leave instance and nothing else
20:16:20 <hub_cap> #link https://review.openstack.org/#/c/45708/29/trove/taskmanager/models.py
20:16:20 <SlickNik> vipul: delete associated artifacts.
20:16:25 <hub_cap> everyone look @ that
20:16:39 <amcrn> a consistent view of the world, in my opinion, is that if a virtual/physical asset is still provisioned (whether it's active or failed), the quota should not be rolled back. An important addendum is that a user/admin should be able to delete an instance in BUILD/ERROR to subtract the current quota usage. So, in short, we should not (imho) rollback resources on a poll_until timeout.
20:16:43 <SlickNik> where associated artifacts = security groups.
20:17:07 <hub_cap> amcrn: thats what ive been trying to say
20:17:10 <vipul> I don't agree with removing/deleting resources explicitly on timeout
20:17:11 <SlickNik> FWIW: I'm of the view that we should not roll back.
20:17:12 <hub_cap> that was my 6 step program
20:17:15 <vipul> i do agree with marking it as deleted
20:17:17 <amcrn> hub_cap: I'm agreeing with you :)
20:17:18 <dmakogon_ipod> amcrn: that is why i suggested to update status on poll_until timeout
20:17:20 <vipul> err.. error
20:17:34 <grapex> amcrn: +1
20:17:36 <SlickNik> explicit is better than implicit in this case.
20:17:43 <vipul> because the idea is that when the user issues a delete, it will remove the associated resources
20:17:51 <SlickNik> Because something failed when the user wasn't looking.
20:17:53 <hub_cap> https://gist.github.com/hub-cap/6799894
20:17:56 <cp16net> amcrn: +1
20:18:01 <dmakogon_ipod> vipul: if that's so, you would get a quota exceeded exception on the next provisioning
20:18:03 <hub_cap> look @ that gist
20:18:14 <vipul> that's fine.. you need to delete
20:18:18 <grapex> hub_cap: That's esmute's issue
20:18:18 <redthrux> dmakogon_ipod: then you can call delete
20:18:19 <redthrux> yes
20:18:22 <grapex> Do we want to talk about that now?
20:18:32 <juice> i think manual delete/cleanup of resources is fine if you are talking one or two errors but this does not scale
20:18:32 <vipul> if you have instances in Error, those count against quota
20:18:40 <hub_cap> dmakogon_ipod: only if you have misconfigured quotas in nova would you have quota issues
20:18:41 <esmute> hub_cap: what do you mean by "4) the quotas remain the same"?
20:18:45 <juice> we are provisioning a whole system not parts
20:19:04 <juice> i see it as a complete transaction either completely done or completely undone
20:19:06 <esmute> 4) quota rolls back
20:19:06 <hub_cap> you prov a resource, it is a hit to quotas, period
20:19:11 <hub_cap> no
20:19:16 <hub_cap> they should not
20:19:17 <dmakogon_ipod> hub_cap: suppose we have fewer sec. groups than VMs in the quota
20:19:19 <grapex> I feel like the real problem here is dmakogon_ipod has encountered a case where the delete call is unable to fully work. We need to fix that case.
20:19:19 <hub_cap> if a user deletes, then quotas roll back
20:19:29 <hub_cap> dmakogon_ipod: then you will hit the issue even w/o rollbacks
20:19:34 <hub_cap> 10 instances, 8 secgroups
20:19:38 <esmute> ok..that is what grapex suggested
20:19:39 <hub_cap> even w/ perfect instance provisioning
20:19:46 <hub_cap> you will get 8 instances
20:19:50 <hub_cap> and 2 failures
20:19:54 <grapex> juice: I think if we want to switch to using transactions, maybe we (wait for it everyone) wait for Heat. :)
20:20:19 <dmakogon_ipod> hub_cap: no, nova assigns the default sec. group to the instance
20:20:26 <juice> if heat addresses this issue then we should not build our own
20:20:48 <vipul> it's a workflow type of scenario.. you have a distributed loosely coupled system.. you can't implement transactions unless you implement workflow
20:20:49 <hub_cap> s/heat/lets not talk about this/
20:20:54 <redthrux> lol
20:20:57 <kevinconway> grapex: turn up the HEAT!
20:21:02 <isviridov_> yep, heat has a parameter to rollback or not
20:21:10 <hub_cap> hey lets not talk heat
20:21:13 <grapex> kevinconway: The heat is on.
20:21:15 <redthrux> so - wait - I think the consensus is to say "don't roll back parts of an instance"
20:21:16 <hub_cap> dmakogon_ipod: tell me what you mean
20:21:21 <grapex> kevinconway: It's on the street.
20:21:30 <hub_cap> well everyone agrees w/ that but dmakogon_ipod, rev
20:21:33 <hub_cap> *redthrux
20:21:39 <hub_cap> and id like to get his opinion on it
20:21:46 <hub_cap> dmakogon_ipod: explain the scenario plz
20:22:02 <dmakogon_ipod> hub_cap: if you cannot create a new security group, then None is passed, and nova will assign the default security group to the instance and that's it
20:22:21 <hub_cap> we do not check that a secgrp is honored by nova?
20:22:26 <hub_cap> thats a bug
20:23:02 <dmakogon_ipod> but, default sec. gr is shared
20:23:23 <dmakogon_ipod> you cannot add identical rules to it
20:23:35 <hub_cap> i understand that
20:23:48 <hub_cap> but do we not check that the secgrp we created is honored
20:23:54 <dmakogon_ipod> the current workflow is missing checks for group/rule creation
20:24:50 <hub_cap> so i understand what dmakogon_ipod is saying, but a rollback would not change the scenario
20:24:53 <hub_cap> right?
20:25:01 <hub_cap> misconfiguration can be the cause too
20:25:09 <dmakogon_ipod> yes
20:25:12 <redthrux> that's what it sounds like
20:25:12 <hub_cap> so im not sure that rolling back will "fix" this
20:25:28 <hub_cap> and it does leave things in a different state between nova and trove
20:25:32 <dmakogon_ipod> ok, then we should update the status
20:26:10 <dmakogon_ipod> instance and task status, to let the user delete instances with BUILDING/FAILED/ERROR statuses
20:26:35 <hub_cap> yes definitely dmakogon_ipod
20:26:40 <hub_cap> if we dont, we have a bug
20:26:51 <hub_cap> users should be able to delete failed instances
20:27:03 <hub_cap> and instances should go failed if they are broken (fail timeout)
20:27:03 <dmakogon_ipod> hub_cap: but i'm still proposing deleting components that are not controlled by the trove quota
20:27:17 <hub_cap> the delete will do that dmakogon_ipod
20:27:33 <redthrux> right - the instance delete call will do that.
20:28:00 <dmakogon_ipod> what would it do if there is no specific component ?
20:28:06 <redthrux> and - why roll back anything - people running the infra will want to investigate what's going on with a delete
20:28:07 <dmakogon_ipod> we already heard about dns
20:28:17 <Key5_> can we implement an API call like "refresh all"?
20:28:20 <hub_cap> we would not fail if it does not exist
20:28:36 <hub_cap> but there is no reason to "roll back"
20:28:50 <dmakogon_ipod> hub_cap: even if support is turned on ?
20:29:08 <hub_cap> if you try to delete dns, for example, and it does not exist properly
20:29:12 <hub_cap> because it failed
20:29:18 <dmakogon_ipod> hub_cap: then how does it fail with DNS ?
20:29:19 <hub_cap> then it should just skip it and finish the delete
20:29:25 <hub_cap> its a bug dmakogon_ipod
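A small sketch of the "skip it and finish the delete" behaviour agreed on here; dns_api, secgroup_api, NotFound, and LOG are stand-ins, not Trove's real module names:

```python
# Stand-in names; this only illustrates tolerating already-missing artifacts
# during an instance delete instead of failing the whole operation.
def delete_associated_resources(instance):
    cleanups = [("dns entry", lambda: dns_api.delete_entry(instance)),
                ("security group", lambda: secgroup_api.delete_for_instance(instance))]
    for name, cleanup in cleanups:
        try:
            cleanup()
        except NotFound:
            # the resource was never created (e.g. provisioning failed) or is
            # already gone; skip it and keep finishing the delete
            LOG.info("no %s found for instance %s, skipping", name, instance.id)
```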
20:29:44 <dmakogon_ipod> hub_cap: ok
20:29:44 <redthrux> +1 hub_cap
20:30:06 <redthrux> i filed it
20:30:09 <hub_cap> core team, do we have consensus? we should move on. so far i have 1) we have a bug in delete logic, 2) we will not rollback on create
20:30:18 <hub_cap> are we good? ready to move on?
20:30:29 <SlickNik> I'm good.
20:30:31 <redthrux> here's the bug: https://bugs.launchpad.net/trove/+bug/1233852
20:30:36 <hub_cap> we have lots of stuff to do
20:30:39 <dmakogon_ipod> hub_cap: i'll fix my review with status update tomorrow
20:30:42 <hub_cap> <3 dmakogon_ipod
20:30:56 <hub_cap> #topic Cloud-init service extensions
20:31:07 <dmakogon_ipod> so, security group workflow update would be abandoned
20:31:14 <dmakogon_ipod> ahhh
20:31:18 <dmakogon_ipod> my topic again
20:31:30 <esmute> guys, can you have that in writing somewhere?
20:31:41 <dmakogon_ipod> #link https://gist.github.com/crazymac/6791694
20:31:46 <hub_cap> esmute: it is in writing, this is logged :)
20:31:55 <vipul> hub_cap: I would want to mark it as 'error' though
20:32:07 <hub_cap> yes vipul i think dmakogon_ipod will do that
20:32:08 <esmute> dmakogon_ipod: are you abandoning that fix?
20:32:12 <amcrn> to elaborate on esmute's point, can we get a table of scenarios with desired end states?
20:32:33 <dmakogon_ipod> vipul hub_cap: error for which status ?
20:32:39 <dmakogon_ipod> instance or service ?
20:32:53 <vipul> instance status
20:32:58 <dmakogon_ipod> ok
20:33:02 <dmakogon_ipod> got it
20:33:13 <dmakogon_ipod> now it's another topic
20:33:35 <dmakogon_ipod> updating cloud-init before passing it into userdata
20:33:50 <dmakogon_ipod> my idea is described in gist
20:33:59 <dmakogon_ipod> please, take a look
20:34:08 <hub_cap> yes i think that this is ok
20:34:13 <hub_cap> im fine w/ it
20:34:19 <hub_cap> but we need to really focus on heat support too
20:34:24 <dmakogon_ipod> yes
20:34:40 <dmakogon_ipod> definitely
20:34:46 <hub_cap> and when we do
20:34:53 <hub_cap> itll be easy to shift to it
20:35:07 <dmakogon_ipod> i already marked it as a TODO
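For context, a minimal illustration of handing a cloud-init script to nova as userdata (python-novaclient's servers.create accepts a userdata argument); the on-disk template path and the value substitution step are assumptions based on the gist discussion, not the final design:

```python
# Only servers.create(..., userdata=...) is real python-novaclient API;
# the rest is an assumed shape for a per-service cloud-init template.
def boot_with_cloud_init(nova, name, image, flavor, cloud_init_path, substitutions):
    with open(cloud_init_path) as f:
        userdata = f.read() % substitutions        # e.g. fill in datastore packages/config
    return nova.servers.create(name=name, image=image, flavor=flavor,
                               userdata=userdata)
```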
20:36:02 <hub_cap> ok great
20:36:16 <hub_cap> i have no issues w/ it, so we good to move on?
20:37:08 <hub_cap> #topic Configuration + service type
20:37:27 <dmakogon_ipod> ashestakov ?
20:37:34 <hub_cap> hey guys i will say thx to dmakogon_ipod and isviridov_ for putting their names on their topics
20:37:37 <hub_cap> very smart
20:37:47 <hub_cap> dmakogon_ipod:  this must be from last wk?
20:37:58 <dmakogon_ipod> maybe
20:38:10 <isviridov_> hub_cap, yep. i've removed one. Please refresh
20:38:11 <dmakogon_ipod> someone forgot to update it, right ?)))
20:38:22 <hub_cap> andrey is not around ya?
20:38:38 <dmakogon_ipod> seems like yes
20:38:38 <hub_cap> isviridov_ k
20:38:46 <hub_cap> #action moving on :)
20:38:53 <hub_cap> yogesh: what should i call your next topic?
20:38:56 <cp16net> sounds good
20:39:01 <cp16net> lots of this is on the ML
20:39:02 <hub_cap> #topic service registration
20:39:03 <cp16net> read it there
20:39:09 <hub_cap> #link https://review.openstack.org/#/c/41055/
20:39:13 <yogesh> yup...
20:39:16 <dmakogon_ipod> is it updated ?
20:39:47 <dmakogon_ipod> #link https://gist.github.com/crazymac/6784871
20:39:49 <yogesh> dmakogon_ipod: please update the gist per the latest decision
20:40:11 <yogesh> or is it already... :-)
20:40:25 <hub_cap> ok do we need to talk about this?
20:40:30 <yogesh> we are good...
20:40:34 <hub_cap> ok cool
20:40:40 <dmakogon_ipod> yogesh: there was a typo )))
20:40:49 <yogesh> yeah...
20:40:53 <hub_cap> #topic trove-conductor
20:40:54 <yogesh> but the intent is clear..
20:40:58 <hub_cap> datsun180b: go go go
20:40:58 <konetzed> yea!
20:41:02 <kevinconway> datsun180b: update?
20:41:02 <datsun180b> hello
20:41:29 <kevinconway> datsun180b: conductor code?
20:41:30 <datsun180b> i'm sorting out problems with restart and unit tests but conductor at the moment successfully intercepts mysql status updates
20:41:49 <hub_cap> horray
20:41:49 <datsun180b> it's more than a trove review, there's also a devstack review linked in the agenda.
20:41:54 <amcrn> nice
20:41:59 <dmakogon_ipod> yogesh: done, gist updated
20:42:02 <kevinconway> can you link?
20:42:07 <datsun180b> one moment
20:42:16 <yogesh> dmakogon_ipod: cool thanks
20:42:23 <datsun180b> #link https://review.openstack.org/#/c/45116/
20:42:29 <datsun180b> #link https://review.openstack.org/#/c/49237/
20:42:38 <kevinconway> datsun180b: do you have links?
20:42:49 <hub_cap> datsun180b: anything else to say on teh subject?
20:43:10 <datsun180b> at the moment no, but i'd appreciate eyeballs and advice on the code i've shared so far
20:43:37 <datsun180b> that's pretty much it for me
20:43:49 <hub_cap> moving on, great work datsun180b
20:44:00 <hub_cap> #topic trove-heat
20:44:05 <hub_cap> yogesh: go
20:44:05 <yogesh> hub_cap: i listed some points from a trove/heat integration perspective... https://gist.github.com/mehrayogesh/6798720
20:44:42 <kevinconway> can you folks making gists make sure to put in line breaks so everything fits in the frame?
20:45:23 <yogesh> point 1 and 2 can be skipped...
20:45:27 <hub_cap> kevinconway: +++++++++++++
20:45:41 <yogesh> as hardening is anyway in progress and heat events are not supported as of now...
20:45:50 <yogesh> polling is the only way for checking the stack status in heat..
20:46:10 <hub_cap> im very happy to hear that heat support is going to be fixed up
20:46:39 <yogesh> hub_cap: on its way...
20:46:40 <yogesh> :-)
20:47:14 <yogesh> point 3: the template is in code but should be configurable....is there a reason for that?....
20:47:38 <hub_cap> template should not be in code
20:47:41 <hub_cap> well
20:47:42 <yogesh> sure...
20:47:46 <hub_cap> the template should be "built"
20:47:56 <hub_cap> some things, like user data, should be configurable
20:48:21 <hub_cap> but other things, like the instance itself in the yaml, can be generated
20:48:32 <hub_cap> it will make building a multi node stack easier
20:48:34 <yogesh> yes agreed...
20:48:54 <dmakogon_ipod> yogesh hub_cap: could we externalize the template out of the taskmanager ?
20:49:04 <hub_cap> dmakogon_ipod: thats the plan i think
20:49:11 <hub_cap> i think yogesh asked about it
20:49:12 <yogesh> it'll be externalized..
20:49:22 <dmakogon_ipod> and store it like cloud-init script
20:49:26 <yogesh> but then we won't have a completely cooked template...
20:49:33 <yogesh> it'll be created dynamically
20:49:39 <hub_cap> right yogesh
20:49:39 <dmakogon_ipod> ok
20:49:46 <hub_cap> and some parts will be config'd
20:49:48 <hub_cap> like user data
20:49:51 <hub_cap> and some will just be generated
20:49:59 <yogesh> hub_cap: absolutely....
20:50:01 <hub_cap> perfect
20:50:07 <kevinconway> are you going to generate your template using template templates?
20:50:08 <isviridov_> what template parts are you going to generate?
20:50:08 <yogesh> point 4.
20:50:10 <dmakogon_ipod> i'd like to take a look at the mechanism of dynamic creation of the heat template
20:50:30 <yogesh> i'll keep updating the GIST and mark it off to you guys..
20:50:46 <hub_cap> isviridov_ the part that defines the # of instances, so u can generate a stack w/ > 1 instance
20:50:49 <hub_cap> for clustering
20:51:03 <hub_cap> dmakogon_ipod: once yogesh publishes the review you will be abel to :)
20:51:11 <yogesh> the specific user scripts will be configurable...
20:51:12 <kevinconway> can't you just roll a jinja2 template for the HEAT templates?
20:51:15 <Key5_> do we want to use HEAT HOT DSL?
20:51:15 <isviridov_> number of instances can be parametrized
20:51:24 <kevinconway> it supports IF and ELSE and all that
20:51:24 <jdbarry> is trove/heat integration spec'd in a blueprint?
20:51:31 <hub_cap> Key5_: if it impls everything that the cfn does
20:51:53 <dmakogon_ipod> does heat support it now ?
20:52:04 <hub_cap> kevinconway: right, im not sure we are defining the _how_ of template generation
20:52:15 <yogesh> the design of dynamic template creation can be a separate topic
20:52:35 <dmakogon_ipod> hub_cap: yogesh: let's take a weed and think about it
20:52:38 <kevinconway> i was just curious about the term "dynamic template generation"
20:52:50 <isviridov_> kevinconway, +1
20:52:55 <dmakogon_ipod> kevinconway +1
20:52:55 <hub_cap> HAH dmakogon_ipod
20:53:05 <cweid> i AM DOWN
20:53:07 <imsplitbit> I think he meant "week"
20:53:08 <cweid> oops caps
20:53:14 <yogesh> dynamic, in the sense that there is not a precooked yaml/template file which gets loaded into heat
20:53:15 <dmakogon_ipod> iyes
20:53:16 <dmakogon_ipod> yes
20:53:26 <hub_cap> cweid: too bad u cant specify font size in irc
20:53:30 <yogesh> template generation will be abstracted...
20:53:42 <kevinconway> i guess my confusion was with template
20:53:53 <kevinconway> i think of template like jinja2 template, but it means something else in HEAT
20:54:00 <yogesh> yup...
20:54:11 <dmakogon_ipod> i think we could have a minimally working template and then extend it for the needs of each service
20:54:12 <yogesh> dynamic heat template generation, i should say
20:54:16 <hub_cap> so topic 4
20:54:17 <kevinconway> so when you say dynamic template you mean parameterizing your HEAT templates
20:54:27 <kevinconway> yeah ok, thanks for the clarification
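One possible reading of "dynamic heat template generation", using jinja2 as kevinconway suggests; the resource layout, HOT version, and variable names below are assumptions for illustration, not the template Trove actually ships:

```python
from jinja2 import Template

# Assumed shape: the per-instance resources are generated (so a stack can hold
# more than one instance for clustering) while user_data stays configurable.
HEAT_TEMPLATE = Template("""\
heat_template_version: 2013-05-23
resources:
{% for i in range(instance_count) %}
  trove_instance_{{ i }}:
    type: OS::Nova::Server
    properties:
      flavor: {{ flavor }}
      image: {{ image_id }}
      user_data: |
{{ user_data | indent(8, true) }}
{% endfor %}
""")

def build_stack_template(instance_count, flavor, image_id, user_data):
    return HEAT_TEMPLATE.render(instance_count=instance_count, flavor=flavor,
                                image_id=image_id, user_data=user_data)
```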
20:54:35 <yogesh> point 4...
20:54:38 <hub_cap> yogesh: lets talk about pt 4
20:54:46 <yogesh> for multi instance templates...
20:55:04 <yogesh> mapping it back into the instance list in trove...
20:55:18 <hub_cap> ok i think we need to know that its > 1 instance
20:55:25 <yogesh> yeah...
20:55:25 <hub_cap> so we will have to put something in the db
20:55:29 <yogesh> yup
20:55:34 <hub_cap> but im not sure i want to call them separate instances
20:55:40 <hub_cap> ^ ^ people working on clustering api will kill me
20:55:42 <yogesh> and relating it to point 5
20:55:58 <yogesh> do we need an instance-group abstraction in trove
20:56:02 <yogesh> :-D
20:56:11 <hub_cap> amcrn: do u remember when i asked you about making a cluster an instance a few wks ago? ;)
20:56:14 <yogesh> hub_cap: i kinda agree... :-)
20:56:20 <hub_cap> the topic has come back up, and i like it
20:56:44 <yogesh> instance_group classification in trove would be nice...
20:56:56 <hub_cap> yes yogesh we will need to define something like this for clustering
20:56:56 <yogesh> make the pipeline all aligned...
20:57:03 <yogesh> awesome..
20:57:52 <isviridov_> do we have a BP for heat integration?
20:57:57 <dmakogon_ipod> instance group - it is like cluster but in theory ?
20:58:01 <yogesh> in addition, do we think that there is any part which may be missing and needs to be taken care of...just wanted to have a trove/heat task list...from multi instance / clustering perspective
20:58:16 <hub_cap> ok 3 min left, here is what weve come up with
20:58:18 <amcrn> correct, yes
20:58:21 <hub_cap> an instance is an instance is a cluster
20:58:34 <hub_cap> amcrn: hehe its come back up ;)
20:59:00 <yogesh> conceptually, whether multiple instances will always be spun up for clustering...
20:59:09 <yogesh> thinking aloud...
20:59:31 <amcrn> :|
21:00:09 <hub_cap> ok time to end
21:00:12 <hub_cap> #endmeeting