20:00:46 <hub_cap> #startmeeting trove
20:00:47 <openstack> Meeting started Wed Oct  9 20:00:46 2013 UTC and is due to finish in 60 minutes.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:47 <datsun180b> Oh, I think it's time
20:00:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:50 <openstack> The meeting name has been set to 'trove'
20:00:59 <cp16net> o^/
20:01:00 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:01:02 <kevinconway> \\o//
20:01:07 <imsplitbit> o/
20:01:07 <datsun180b> present
20:01:08 <mattgriffin> o/
20:01:08 <isviridov> Hi all
20:01:09 <SlickNik> here
20:01:13 <pdmars_> o/
20:01:14 <SlickNik> Hello all
20:01:18 <KennethWilke> howdy
20:01:19 <kevinconway> \\\o///
20:01:21 <cweidenkeller> oi
20:01:26 <grapex> o/
20:01:44 <hub_cap> #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-10-02-20.00.html
20:01:46 <juice> o/
20:01:53 <hub_cap> #topic update to action items
20:01:58 <hub_cap> god SlickNik this is embarrassing
20:02:10 <cp16net> lol
20:02:14 <hub_cap> have u done any work w/ LP for group stuff?
20:02:24 <SlickNik> ugh, I keep forgetting about this. I'm gonna do it right now. :)
20:02:39 <hub_cap> #action SlickNik to check with other teams to set groups permissions correctly on LaunchPad
20:02:46 * hub_cap is removed
20:02:47 <hub_cap> ;)
20:02:51 <SlickNik> very clever.
20:02:55 <SlickNik> ;)
20:03:00 <hub_cap> now its your fault if its not done
20:03:03 <hub_cap> hahahahah victory
20:03:11 <SlickNik> heh
20:03:13 <hub_cap> #topic escaping dots in urls
20:03:13 <dmakogon_> will users be able to retrigger their builds?
20:03:16 <hub_cap> datsun180b: go
20:03:23 <datsun180b> okay
20:03:32 <hub_cap> dmakogon_: it has to do with that and modifying bugs / blueprints
20:03:38 <datsun180b> so there's a problem when issuing requests that end in a field that is dotted
20:03:49 <datsun180b> user calls, except create, for example
20:04:30 <datsun180b> so in case a username/hostname contains a dot, the route mapper has a habit of eating the final extension, thinking it's a file extension
20:04:48 <hub_cap> can u give an example?
20:04:56 <isviridov> shouldn't the user encode the url himself?
20:05:01 <datsun180b> this can be a problem when issuing a call like GET /users/foo.bar, because the response ends up being a 404: 'user foo DNE'
20:05:23 <vipul> o/
20:05:24 <datsun180b> so to get around this we can escape the dots as %2e: "GET /users/foo%2ebar"
20:05:37 <dmakogon> ok with that
20:05:45 <datsun180b> but to issue this call manually, it must be encoded a SECOND time: GET /users/foo%252ebar
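A minimal sketch of the double-encoding workaround datsun180b describes, for a client built by hand rather than python-troveclient; the endpoint, tenant, and instance values below are placeholders, not real paths.

```python
# Illustrative only, not a python-troveclient feature: escape the dot so the
# routes mapper can't mistake ".bar" for a format extension, then
# percent-encode again so the "%" survives the server-side URL decode.
from urllib.parse import quote

username = "foo.bar"
escaped_once = username.replace(".", "%2e")    # foo%2ebar
escaped_twice = quote(escaped_once, safe="")   # foo%252ebar

# Hypothetical endpoint; TENANT and INSTANCE are placeholders.
url = "https://trove.example.com/v1.0/TENANT/instances/INSTANCE/users/" + escaped_twice
print(url)  # ...users/foo%252ebar
```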
20:06:12 <isviridov> datsun180b:  is it python client issue?
20:06:23 <datsun180b> This is an issue with the way we use route mapper
20:06:34 <hub_cap> well the issue is _cuz_ we use route.mapper
20:06:46 <datsun180b> This has been a pain point for Rax users, even ones using curl or similar clients to issue requests
20:07:09 <dmakogon> how are we able to fix that?
20:07:22 <isviridov> It should be done on client side  http://www.w3schools.com/tags/ref_urlencode.asp
20:07:32 <datsun180b> One way would be to inspect the request object to look for the 'missing' extension
20:07:53 <datsun180b> but that would require a whitelist of expected suffixes and limit possible usernames
20:08:07 <datsun180b> another approach would be to use unique ids for users that do not contain the username or host
20:08:18 <hub_cap> ive ruled on this in the past fwiw, and i think it should not be something we whitelist. its up to the user to deal with
20:08:24 <kevinconway> datsun180b: who uses curl for our api?
20:08:28 <kevinconway> that sounds painful
20:08:34 <hub_cap> lol kevinconway no rules against it
20:08:43 <hub_cap> i write all my webapps in bash
20:08:53 <dmakogon> hah
20:08:54 <hub_cap> bash on bails?
20:08:56 <dmakogon> nice
20:08:59 * KennethWilke dies
20:09:02 <grapex> kevinconway: Curl is the preferred choice for people who want to show examples proving they actually wrote a REST API and not something that requires a special client. :)
20:09:03 <NehaV> but there needs to be a sophisticated way to handle it for customers
20:09:06 <datsun180b> well i'm having a hard time speaking for all of our users
20:09:27 <isviridov> so it is your job to encode urls properly, right?
20:09:33 <hub_cap> ++
20:09:40 <datsun180b> i'll reiterate another approach: to not refer to users by their name or host in request URL
20:09:47 <hub_cap> lets just say its the job of the consumer. i rule on it.
20:09:50 <datsun180b> completely sidestep the issue
20:10:00 <hub_cap> datsun180b: i agree w/ that somewhat, but the mysql api allows for it
20:10:04 <hub_cap> and how else can we delete users
20:10:06 <hub_cap> :/
20:10:14 <datsun180b> delete users/user-uuid
20:10:14 <hub_cap> if they are their own resource
20:10:15 <NehaV> currently it is not clear how customers with a . name can easily handle it
20:10:23 <datsun180b> that would be a v2 change i'm guessing
20:11:02 <datsun180b> hub_cap: final word?
20:11:06 <hub_cap> NehaV: we should document that
20:11:11 <isviridov> it is ugly, we don't ask google to add extra logic to make incorrect requests working
20:11:14 <dmakogon> so, make a conclusion
20:11:20 <hub_cap> but we are not going to sacrifice functionality of the app for this
20:11:21 <kevinconway> datsun180b: can i take my 50/50
20:11:43 <hub_cap> obvi dmakogon wants us to move on
20:11:57 <hub_cap> ;)
20:11:58 <isviridov> it is standard, what we are talking about
20:12:04 <hub_cap> ++
20:12:05 <dmakogon> no, Ed said - "final word" )))
20:12:10 <kevinconway> isviridov: it's not standard to double url encode your input
20:12:12 <NehaV> documenting is fine but we are not giving a good experience. this is a common use case to have . in username
20:12:24 <datsun180b> especially in ipv4 addresses
20:12:33 <hub_cap> NehaV: the other solutions limit the app behavior
20:12:49 <hub_cap> you have to blacklist particular users that are allowed in mysql, artificially limiting behavior
20:13:07 <hub_cap> so we either accept that you have to encode, or limit our api
20:13:36 <vipul> why not just patch routes?
20:13:39 <isviridov> kevinconway:  yes, that is why there should be no additional logic on the server side to recognize wrong encoding
20:13:47 <vipul> does it give us the ability to get the extension?
20:13:52 <hub_cap> vipul: i did a while ago, for escaping dots
20:13:54 <vipul> if so.. we could write some hack that concats
20:14:10 <isviridov> what about url fixing on reverse proxy side if it exists?
20:14:11 <vipul> we don't have any use case in our api for extensions anyway
20:14:47 <kevinconway> i think we need a new daemon to translate user names to the right format
20:14:47 <hub_cap> vipul: https://github.com/bbangert/routes/commit/1c5da13b9fc227323f7023327c8c97e9f446353f
20:14:50 <datsun180b> for example if i wanted to name a user foobar.json it would be ambiguous
20:15:03 <vipul> we allow .json in the url? is it meaningful?
20:15:08 <hub_cap> yes we do
20:15:11 <datsun180b> vipul: apparently it is
20:15:14 <hub_cap> its a format string, try it out
20:15:20 <vipul> that's what the accept is for
20:15:22 <NehaV> hub_cap - isn't there a better way to fix it rather than just documenting and asking users to add escaping
20:15:25 <hub_cap> .xml and .json
20:15:31 <cp16net> it can override the accept headers
20:15:50 <hub_cap> NehaV: not one that doesnt break the overall functionality of the app for _anything_ that requires dots in urls
20:16:13 <isviridov> hub_cap:  +1
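As an aside on the .json/.xml point above, a rough sketch of the two request styles being contrasted; the base URL, token, and paths are assumptions for illustration only.

```python
# Rough sketch (placeholders throughout): a ".json"/".xml" suffix acts as a
# format string and overrides the Accept header, which is why a user literally
# named "foobar.json" would be ambiguous.
import requests

BASE = "https://trove.example.com/v1.0/TENANT"
HEADERS = {"X-Auth-Token": "TOKEN"}

# Content negotiation via the Accept header.
r1 = requests.get(BASE + "/instances",
                  headers=dict(HEADERS, Accept="application/json"))

# Format suffix on the path; per the discussion, this wins over Accept.
r2 = requests.get(BASE + "/instances.json",
                  headers=dict(HEADERS, Accept="application/xml"))
```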
20:16:17 <datsun180b> well i'll note that we've exhausted 15 minutes of our weekly meeting about this. this discussion may need to continue somewhere or somewhen else with respect to the other issues we have to discuss
20:16:28 <kevinconway> to the mailing list!
20:16:42 <datsun180b> that's actually a good idea
20:16:44 <dmakogon> ML or die
20:16:55 <hub_cap> ML or die
20:16:58 <cp16net> +1
20:17:02 <kevinconway> ML or pie
20:17:06 <hub_cap> ill take pie
20:17:08 <hub_cap> moving on
20:17:10 <datsun180b> but i'm not in control of this meeting, i just technically have the floor. hub_cap, up to you to move the topic if you like
20:17:27 <hub_cap> #topic Provisioning post ACTIVE
20:17:55 <hub_cap> vipul: this is u ya?
20:17:57 <vipul> This is me.. comment on redthrux's patch for moving DNS provisioning
20:18:05 <hub_cap> hey put your name on the things u add
20:18:06 <vipul> things like DNS.. floating IPs.. security groups
20:18:18 <vipul> aren't needed unless the instance is usable
20:18:23 <vipul> which means ACTIVE
20:18:29 <vipul> so why not provision them after that
20:18:38 <dmakogon> about that, we cannot add any resource creation after poll_until cuz it would break or limit the heat workflow
20:18:47 <grapex> vipul: I think the issue is they actually are needed
20:19:01 <vipul> grapex: they are needed even if the instance never goes active?
20:19:02 <dmakogon> cuz heat would cover every resource ever imagined
20:19:10 <grapex> If there is no DNS, the instance can't actually be used.
20:19:19 <hub_cap> i believe firmly that if the resources we are supposed to create do not get created, its FAILED
20:19:25 <dmakogon> grapex: why is that ?
20:19:33 <vipul> Sure.. which is why if DNS fails.. then we dont' consider it active
20:19:34 <hub_cap> if we need a sec grp or dns, and the instance comes online but dns failed, mark it as FAILED
20:19:38 <vipul> regardless of whether mysql came up
20:19:40 <hub_cap> correct vipul
20:19:43 <isviridov> do we have any cases when the user fixes provisioning parts? Like re-adding a floating IP?
20:19:45 <hub_cap> FAIL
20:19:47 <grapex> vipul: dmakogon: Do you mean, why provision that stuff if the instance itself won't provision? Why not put it at the end?
20:19:51 <hub_cap> isviridov: no point in that
20:19:55 <hub_cap> just delete/recreate
20:20:02 <grapex> I agree we should put it at the end- maybe
20:20:12 <grapex> I just think the status should only be ACTIVE if *all* resources have provisioned
20:20:17 <hub_cap> ++
20:20:21 <vipul> sure.. agreed
20:20:23 <isviridov> so, let us think about it as an atomic thing
20:20:27 <grapex> vipul: So maybe we agree on the point that the resource prov order should be changed
20:20:52 <grapex> Currently, the status of ACTIVE comes back when the server and volume are ACTIVE and the guest status is also ACTIVE
20:20:59 <dmakogon> server -> DB -> sec.gr -> fl.ip -> dns
20:21:13 <hub_cap> well
20:21:20 <hub_cap> server+db+sec+ip+dns
20:21:23 <grapex> If we want to move the provisioning of other stuff until after the server provisions, then we'd need to essentially set the active status in the trove database at the end of the task manager provisioning routine
20:21:24 <hub_cap> = ACTIVE
20:21:38 <dmakogon> this would all work with nova in the background
20:21:39 <SlickNik> I agree on the ACTIVE part (i.e. an instance should go ACTIVE iff none of the component parts fail provisioning), still thinking about the order part.
20:21:49 <hub_cap> so the status is async
20:21:56 <hub_cap> its sent back from teh guest
20:22:12 <isviridov> order is defined by dependencies, but result is single
20:22:13 <hub_cap> so i dont think we need to necessarily wait for that to prov other resources
20:22:16 <hub_cap> we can always clean up
20:22:19 <grapex> hub_cap: Right now the active status checks the server and volume STATUS, plus the service (guest) status, and if they're all ACTIVE it shows the user ACTIVE
20:22:22 <grapex> Maybe what we do is
20:22:41 <grapex> in the taskmanager, we wait until everything provisions, and then we save something to the trove DB as a marker saying we believe all resources provisioned
20:22:57 <grapex> then the ACTIVE status is whether the Trove taskmanager thinks it finished plus if the service status is ACTIVE
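A small sketch of the aggregation rule grapex is proposing, with invented state names; this is not Trove code, just the "ACTIVE only if everything provisioned" idea written out.

```python
# Hypothetical illustration of the proposal, not actual Trove logic.
def user_facing_status(provisioning_state, guest_status):
    # provisioning_state: marker the taskmanager saves once server, volume,
    #   sec group, floating IP and DNS are done ("PENDING"/"COMPLETE"/"FAILED")
    # guest_status: what the guest agent reports ("BUILD"/"ACTIVE"/"FAILED")
    if provisioning_state == "FAILED" or guest_status == "FAILED":
        return "FAILED"   # e.g. DNS or sec-group creation failed
    if provisioning_state == "COMPLETE" and guest_status == "ACTIVE":
        return "ACTIVE"
    return "BUILD"
```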
20:23:03 <dmakogon> hub_cap: vipul: grapex: how this would work with heat provisioning
20:23:03 <vipul> couldn't we use task state for that
20:23:05 <dmakogon> ??
20:23:13 <grapex> vipul: Maybe
20:23:28 <vipul> so maybe we actually make use of that when aggregating the end user status
20:23:30 <grapex> dmakogon: Not sure, but I think if we changed that it would be closer to what we need for Heat
20:23:47 <vipul> taskmanager goes to PROVISION COMPLETE or whatever -> implies ACTIVE
20:24:06 <grapex> vipul: Or maybe it should be set to NONE-
20:24:17 <grapex> that actually would be more consistent with the other action code
20:24:26 <dmakogon> hub_cap: vipul: grapex: when we are using heat, all resources are already created with the stack, so we cannot manipulate and control creation of any instance-related resources !
20:24:27 <grapex> vipul: But I agree with the idea
20:24:37 <hub_cap> +1 dmakogon
20:24:43 <hub_cap> lets not change it
20:24:48 <isviridov> +1
20:24:49 <SlickNik> dmakogon: Ideally heat would be able to provision all this for us, so we wouldn't have to piecemeal provision an instance.
20:24:59 <dmakogon> hub_cap: totally agree with you
20:25:03 <vipul> can't you do things with HEAT that wait for conditions to be met
20:25:08 <grapex> dmakogon: The idea isn't that we're dramatically changing it, we're just moving stuff around a bit
20:25:43 <grapex> Btw, I only think this is necessary if we care about the order of provisioning
20:25:44 <dmakogon> grapex: it would break the whole provisioning workflow
20:25:50 <grapex> I personally don't know if its worth doing now
20:26:08 <dmakogon> i'm suggesting to leave it as it is
20:26:17 <hub_cap> vipul: WaitConditions
20:26:22 <vipul> For now, we could let it go to ACTIVE.. and if DNS proviisioning fails, mark as FAILED
20:26:27 <grapex> dmakogon: I honestly agree. Vipul, what was a big motivation to change the order?
20:26:30 <hub_cap> lets solve redthrux's meeting
20:26:32 <hub_cap> LOL
20:26:32 <vipul> I don't think the whole status reporting needs to be solved
20:26:32 <hub_cap> issue
20:26:42 <grapex> vipul: I don't think that would work for DNS
20:26:48 <hub_cap> i dont think the whole thing needs to be redone
20:26:51 <grapex> So redthrux had to miss the meeting due to a doctor appointment
20:26:55 <grapex> I talked to him a bit
20:26:59 <grapex> Here is his real problem
20:27:01 <hub_cap> lets make sure that we mark a status to failed for dns if it fails
20:27:09 <grapex> When DNS fails, we set it to a task status that means that DNS fails-
20:27:15 <dmakogon> hub_cap: vipul: grapex: forget about DNS, it also would be a part of heat resources
20:27:31 <grapex> the problem is though, in this state a Delete operation is not allowed. That's the bug we need to fix- it should be possible to delete such instances.
20:27:38 <dmakogon> hub_cap: vipul: grapex: dns is not a critical resource
20:27:56 <vipul> heat = magic wand
20:27:58 <grapex> dmakogon: It's only critical if we want users to be able to use it to log into their databases.
20:28:11 <SlickNik> lol @ vipul
20:28:12 <grapex> vipul: Lol
20:28:13 <hub_cap> dmakogon: thats not correct. it is critical
20:28:15 <hub_cap> if you need dns
20:28:18 <dmakogon> hub_cap: vipul: grapex: mgmt allows re-creating the dns record, or we could do that by hand
20:28:20 <hub_cap> and its part of your workflow
20:28:22 <hub_cap> no
20:28:24 <hub_cap> no no no
20:28:30 <hub_cap> if dns fails your instance fails
20:28:31 <hub_cap> period
20:28:37 <hub_cap> delete/recreate
20:28:43 <hub_cap> and fix your crappy dns system ;)
20:28:44 <grapex> How about we make an action item to allow DELETE to work on instances in the BUILDING_ERROR_DNS state?
20:28:54 <vipul> So in the current patch, we tell taskmanager to provision the instance, and immediately go create DNS
20:28:56 <isviridov> grapex:  let us fix the bug with deleting, instead of introducing new ones
20:29:02 <SlickNik> dmakogon: I don't think any of trove's customers need to or want to involve themselves with managing DNS :)
20:29:02 <dmakogon> grepex +1
20:29:02 <hub_cap> ++
20:29:26 <SlickNik> grapex: agreed
20:29:38 <dmakogon> SlickNik: maybe so, so i'm ok with new DNS status
20:29:48 <dmakogon> like grapex said
20:29:59 <dmakogon> could we talk about security groups review ?
20:30:00 <grapex> #action Fix bug with DELETE to allow instances in BUILDING_ERROR_DNS state to be deleted.
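A tiny sketch of what that action item amounts to, using invented names rather than Trove's real validation code.

```python
# Hypothetical sketch only: allow DELETE for instances stuck in DNS-error
# build states instead of rejecting the request as "instance busy".
DELETABLE_ERROR_STATES = {"BUILDING_ERROR_DNS", "FAILED"}

def validate_delete(status):
    if status == "ACTIVE" or status in DELETABLE_ERROR_STATES:
        return  # let the delete proceed
    raise ValueError("cannot delete instance in state %s" % status)
```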
20:30:11 <vipul> that works.. I still think we are unnecessarily provisioning things unless we wait for the Guest to come active
20:30:29 <dmakogon> almost everyone has already seen it, so i want to make a conclusion
20:30:30 <grapex> vipul: I agree
20:30:43 <grapex> vipul: it just seems like no one wants to fix it right now if Heat is coming
20:30:51 <hub_cap> ok moving on?
20:30:58 <imsplitbit> yes pls
20:31:00 <grapex> vipul: I think it would be possible in a minimal way, but it sounds like there's not much support... maybe we can talk later.
20:31:02 <vipul> yep.. revisit after HEAT
20:31:14 <SlickNik> sounds good
20:31:21 <hub_cap> #topic Provision database in specific tenant
20:31:26 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/dedicated-tenant-db-provisioning
20:31:29 <hub_cap> isviridov: yer up
20:31:49 <isviridov> so, idea is to make Trove real DBaaS out of the box
20:31:50 <hub_cap> can u explain what this means?
20:32:29 <isviridov> Now we are creating the db in the target tenant's quota, but we already have our own quota management
20:32:48 <isviridov> why not create all instances in e.g. a trove tenant?
20:33:08 <vipul> it's possible.. just need a custom remote.py
20:33:10 <hub_cap> im confused, so youre saying submit resources on behalf of a single user?
20:33:14 <grapex> isviridov: What is the difference between "trove tenant" and "target tenant quota"?
20:33:25 <vipul> one super-tenant?
20:33:29 <vipul> that holds all instances
20:33:30 <hub_cap> so nova resources are not shown as the users instances?
20:33:37 <isviridov> so, the user doesn't care about his quota and doesn't see the instances that actually belong to trove
20:33:39 <hub_cap> like a shadow user so to speak
20:34:02 <vipul> sounds like it
20:34:06 <amcrn> isviridov: because as a provider, if your deployment currently assigns a tenant per user, you now have no way of restricting resources on a developer/project basis?
20:34:16 <dmakogon> it means that trove should own a personal tenant in which it rules all the stuff
20:34:18 <hub_cap> hi amcrn!
20:34:21 <amcrn> hi :)
20:34:31 <hub_cap> thats not how openstack works though
20:34:43 <hub_cap> what problem are you trying to solve?
20:34:47 <amcrn> right, so I'm asking what's the problem he's trying to solve
20:34:53 <amcrn> jinx :P
20:34:57 <SlickNik> You'd have the extra overhead of doing _all_ of the quota management yourself.
20:35:02 <hub_cap> if you need managed resources in nova
20:35:10 <hub_cap> then we should fix nova to allow it
20:35:39 <hub_cap> ive spoken w/ the nova team about this (a long time ago)
20:35:49 <isviridov> hub_cap:  it is an idea to hide all trove resource management from the user
20:35:52 <hub_cap> to provision resources that maybe a user cant see when they list nova instances, or at least cant use them
20:36:03 <hub_cap> like the problem is a user sees nova resources
20:36:07 <hub_cap> and can, for instance, change ram
20:36:12 <vipul> yea if you're running Trove against a public nova endpoint, i see the issue..
20:36:23 <hub_cap> yes this is not a trove problem
20:36:23 <vipul> you have Trove instances littered in your acct along with compute instances
20:36:34 <hub_cap> its a resource viewing / accessing issue in nova
20:36:49 <hub_cap> lets talk to nova to see if they would still allow "shadow"/"managed" users
20:37:00 <isviridov> 1 sec
20:37:22 <vipul> btw isviridov.. if you _really_ want to do this.. just put in a different remote.py :)
20:38:00 <isviridov> look you have got me vipul
20:38:17 <isviridov> just create all instances in the trove tenant, not the user's one
20:38:39 <isviridov> and handle all quota inside that tenant
20:39:10 <isviridov> so, user uses it as pure dbaas
20:39:24 <hub_cap> isviridov: its pure dbaas w/ multiple tenants too
20:39:25 <dmakogon> so, all auth stuff could happen under the hood of trove api
20:39:26 <hub_cap> for the record
20:39:52 <vipul> upstream trove supports a deployment that creates instances in the user's tenant...
20:40:04 <vipul> you could always change that behavior.. which is the reason why we make it pluggable
20:40:07 <hub_cap> fwiw, this should not be fixed in trove... there is no need to have a single global tenant id i think
20:40:16 <vipul> not in upstream
20:40:19 <hub_cap> vipul: and not contribute it upstream
20:40:22 <hub_cap> vipul: ++
20:40:55 <hub_cap> managed instances will be the best approach in nova
20:40:56 <dmakogon> hub_cap: vipul: is it possible to make it configurable
20:40:57 <dmakogon> ?
20:41:04 <hub_cap> i dont think its necessary..
20:41:10 <hub_cap> all of openstack behaves one way
20:41:15 <hub_cap> we should stay standard
20:41:24 <isviridov> anyhow we don't give the user access to db instances
20:41:35 <isviridov> why should he see and manage all those resources?
20:41:46 <hub_cap> again this goes back to "what is the problem you are trying to solve"
20:41:55 <hub_cap> if the problem is users can see your nova resources
20:42:04 <vipul> dmakogon: it is today
20:42:06 <hub_cap> then the problem is that we need to fix it in nova
20:42:32 <dmakogon> vipul: what ?
20:43:01 <vipul> dmakogon: It is already configurable.. how you decide to spin up nova resources
20:43:01 <rnirmal> and other projects as well.. since the same applies for cinder or designate for example
20:43:08 <hub_cap> ++
20:43:26 <rnirmal> also there's a caveat with using a single tenant when you want to create tenant networks
20:43:42 <rnirmal> say for private tenant network for a cluster
20:43:42 <dmakogon> vipul: i mean to make it configurable: resources per user tenant or resources per trove tenant
20:43:55 <hub_cap> rnirmal: ++
20:44:09 <hub_cap> it will be a problem
20:44:22 <vipul> dmakogon: Yes that's all driven by whether you pass down the auth-token to Nova or obtain a new one for the shadow tenant
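For what vipul is describing, a rough sketch of a swapped-in client factory that authenticates as a shared service tenant instead of passing the caller's token through; the credentials, the factory name, and the novaclient constructor arguments are assumptions, not upstream Trove code.

```python
# Hypothetical deployment-specific remote.py replacement: every Nova resource
# is created in one shared "trove" tenant, so end users never see the
# instances in their own account.
from novaclient.v1_1 import client as nova_client

SERVICE_USER = "trove"       # placeholder service credentials
SERVICE_PASSWORD = "secret"
SERVICE_TENANT = "trove"
AUTH_URL = "http://keystone.example.com:5000/v2.0"

def create_nova_client(context):
    # Deliberately ignore context.auth_token / context.tenant and obtain a
    # token for the shadow tenant instead.
    return nova_client.Client(SERVICE_USER, SERVICE_PASSWORD, SERVICE_TENANT,
                              auth_url=AUTH_URL)
```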
20:44:35 <hub_cap> we should move on
20:44:37 <hub_cap> there are more things
20:44:38 <vipul> yes
20:44:43 <isviridov> let us
20:44:51 <dmakogon> vipul: oh, i got it
20:44:53 <SlickNik> yup
20:45:31 <hub_cap> #topic Provisioning several db processes on the same cluster
20:45:38 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/shared-cluster-db-provisioning
20:46:13 <hub_cap> ok im not a fan of this one either
20:46:16 <hub_cap> :)
20:46:16 <hub_cap> weve talked about it in the past
20:46:31 <isviridov> It is about cloud utilization. The idea is to host several database daemons on the same cluster
20:46:32 <hub_cap> if you have too many spare cycles on your nodes, your nodes are too big
20:46:35 <isviridov> )
20:46:44 <hub_cap> shrink the nodes if you have extra utilization
20:46:47 <hub_cap> and prov a new cluster
20:46:48 <imsplitbit> hub_cap: +10000000
20:47:02 <hub_cap> upgrades, guest status, and many other things would not be easy
20:47:08 <isviridov> Not extra, but idle
20:47:12 <rnirmal> and create more vms where necessary.. or containers..they are cheap
20:47:18 <hub_cap> ++++++++++ rnirmal
20:47:19 <SlickNik> I think this is a bad idea in general.
20:47:25 <kevinconway> so you want to have one instance run multiple guests and multiple db engines?
20:47:26 <hub_cap> cloud is cheap
20:47:35 <hub_cap> kevinconway: thats what he was proposing
20:47:39 <amcrn> i could see a situation once baremetal is supported, that you'd want multiple processes (say Redis) on a single machine to avoid wasting hardware, but other than that, i'm with hub_cap/rnirmal/etc.
20:48:04 <hub_cap> what is active? active_cassandra, active_mongo, some status thereof?
20:48:14 <hub_cap> my mongo is up but my cassandra is down
20:48:14 <hub_cap> what does that mean
20:48:19 <hub_cap> or 2 clusters
20:48:29 <imsplitbit> yeah we've definitely talked a lot about this in the past and the general consensus has always been one container does one thing
20:48:43 <kevinconway> does one user access all the engines?
20:48:44 <juice> amcrn: even in that case wouldn't it be better to use an lxc driver to partition the instances and resources?
20:49:00 <isviridov> kevinconway:  different
20:49:05 <amcrn> juice: fair enough
20:49:12 <grapex> isviridov: I feel like for flexibility, this *should* have been built into Trove. But as you can see no one likes it. :)
20:49:28 <juice> amcrn: assuming lxc driver works with nova :P
20:49:31 <SlickNik> juice / amcrn: yup, even if baremetal were supported, I'd push for having these agents run in separate containers on the bare metal node so some isolation exists.
20:49:34 * hub_cap spills some coffee over grapex's keyboard
20:49:44 <isviridov> looks like it will be in the future or substituted with containers like docker
20:49:55 <isviridov> thanks, let's move on
20:50:10 <hub_cap> #topic Auto-recovery of node in replica/cluster
20:50:18 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/auto-recovery-of-cluster-replica
20:50:21 <hub_cap> now i like this!
20:50:29 <isviridov> Finally ^)
20:50:34 <hub_cap> hahah
20:51:06 <vipul> what types of metrics woudl we push to ceilometer
20:51:14 <dmakogon> yes, as we discussed earlier, autorecovery/failover is one of the goals for Icehouse
20:51:31 <vipul> and how does ceilometer notify Trove to do something
20:51:41 <dmakogon> vipul: specific database related metrics
20:51:46 <isviridov> vipul:  it depends on the type of cluster and is specific to the db
20:51:52 <cp16net> Bender: C'mon, it's just like making love. Y'know, left, down, rotate sixty-two degrees, engage rotors...
20:52:08 <hub_cap> i agree w cp16net
20:52:13 <dmakogon> cp16net: nice one)))
20:52:13 <vipul> lol
20:52:14 <isviridov> vipul:  ceilometer has an alarms mechanism which is used in HEAT for autoscaling
20:52:16 <datsun180b> sounds like someone else needs coffee on their keyboard
20:52:24 * hub_cap throws a random number of Ninja Turtles at cp16net
20:52:36 <juice> D20
20:52:47 <hub_cap> ok srsly though, i think that we can discuss the approach offline. i do like the idea though
20:53:01 <dmakogon> hub_cap +1
20:53:01 <kevinconway> ML is nigh
20:53:05 <vipul> how about doing this for a single instance?
20:53:09 <hub_cap> yes ML++
20:53:13 <vipul> why does it need to wait for clusters
20:53:34 <hub_cap> lets table this
20:53:38 <hub_cap> i really want to discuss the next one
20:53:38 <dmakogon> vipul: as a first step - maybe, but what about down-time and data consistency?
20:53:44 <SlickNik> I like the idea as well. But I think a prereq for it would be to have some sort of cluster / replica implemented no?
20:53:58 <vipul> it's down anyway.. if it needs to be recovered
20:54:10 <isviridov> from backup f.e.
20:54:19 <dmakogon> recovery = spinning up a new instance
20:54:21 <imsplitbit> btw I would like to open the trove channel for replication/cluster api discussion after this meeting and tomorrow morning when everyone gets in
20:54:28 <isviridov> the criteria should be defined for single instance
20:54:50 <isviridov> imsplitbit:  would love to join
20:55:02 <SlickNik> vipul: I like the idea, but I'm not sure what the recovery would be in case a single instance is down?
20:55:14 <vipul> restore from last known backup
20:55:18 <vipul> which is all you can do
20:55:22 <isviridov> yes
20:55:25 <dmakogon> SlickNik : provisioning new one
20:55:46 <vipul> i'm just saying there is opportunity to prove this with a single instance.. before getting crazy with clusters
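A sketch of the single-instance version vipul is suggesting, with invented inputs; the heartbeat check stands in for whatever ceilometer alarm or guest heartbeat would really drive this.

```python
# Hypothetical decision logic only, not a real Trove component: if a lone
# instance stops reporting a heartbeat for too long, the only recovery is to
# provision a replacement and restore it from the last known backup.
import time

HEARTBEAT_TIMEOUT = 300  # seconds without a guest heartbeat before acting

def recovery_action(last_heartbeat, last_backup_id, now=None):
    now = now if now is not None else time.time()
    if now - last_heartbeat < HEARTBEAT_TIMEOUT:
        return None                          # healthy, nothing to do
    if last_backup_id is None:
        return ("alert_operator",)           # nothing to restore from
    return ("provision_replacement", last_backup_id)
```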
20:55:46 <hub_cap> heyo
20:55:51 <hub_cap> #table it
20:55:52 <dmakogon> to substitute
20:55:55 <hub_cap> #move on
20:55:57 <vipul> fine!
20:55:59 <hub_cap> :P
20:56:03 <hub_cap> #topic Lunchpad meetings
20:56:06 <SlickNik> I like it, but table for later.
20:56:09 <hub_cap> crap i menat to fix that typo
20:56:13 <hub_cap> *meant
20:56:16 <isviridov> The last from my side
20:56:18 <SlickNik> heh
20:56:31 <hub_cap> #link https://launchpad.net/sprints
20:56:37 <hub_cap> did the openstack bot die?
20:56:41 <dmakogon> good idea
20:56:42 <isviridov> We are in different timezones, let us arrange topic meetings
20:56:56 <imsplitbit> oh noes
20:56:58 <SlickNik> looks like it
20:57:00 <imsplitbit> no bot?!
20:57:01 <dmakogon> scheduling is always good way
20:57:08 <isviridov> hub_cap:  how can it help with scheduling?
20:57:21 <vipul> what is this link for
20:57:30 <vipul> are we having a gathering?
20:57:31 <hub_cap> im not sure. im curious to know if we need it
20:57:42 <isviridov> vipul:  for creating public visible meeting
20:57:59 <hub_cap> so like this meeting?
20:58:01 <dmakogon> hub_cap: yes, we could plan discussions
20:58:03 <kevinconway> isviridov: other than this one?
20:58:14 <SlickNik> What's wrong with #openstack-meeting-alt for that?
20:58:16 <amcrn> "Launchpad can help you organize your developer sprints, summits and gatherings. Register the meeting here, then you can invite people to nominate blueprints for discussion at the event. The meeting drivers control the agenda, but everyone can see what's proposed and what's been accepted."
20:58:18 <isviridov> kevinconway:  yes, but by the topic
20:58:19 <dmakogon> monday = cluster API , tue = failover
20:58:26 <amcrn> I imagine it's a replacement for the wiki you're using
20:58:32 <kevinconway> oh no… meetings every day?
20:58:36 <amcrn> (that holds the topics, last weeks notes, etc.)
20:58:46 <isviridov> amcrn:  forget about description, it is just a tool
20:58:50 <hub_cap> how is this different from just being in #openstack-trove ?
20:58:59 <hub_cap> and saying "lets discuss tomorrow at 10am"
20:59:00 <SlickNik> here's an example of a sprint: https://launchpad.net/sprints/uds-1311
20:59:02 <dmakogon> kevinconway: if you want to be involved everywhere
20:59:13 <isviridov> hub_cap:  we can come together and discuss specific topic
20:59:20 <vipul> i hate launchpad..
20:59:29 <kevinconway> this is a ML topic
20:59:29 <vipul> so the less we do the better IMHO :P
20:59:43 <hub_cap> isviridov: how do we not do that _in_ the room?
20:59:45 <hub_cap> does it allow for async?
20:59:53 <dmakogon> vipul made a good conclusion for this topic: "I have LP"
21:00:08 <hub_cap> if it requires us to be somewhere, virtually, at the same time
21:00:09 <vipul> s/have/hate
21:00:16 <hub_cap> i dont see what it buys us over just being in the channel
21:00:21 <SlickNik> So like amcrn said, it would only be a replacement for the meeting agenda page on the wiki, if anything…
21:00:22 <hub_cap> and for async, we have the ML
21:00:29 <fungi> ahh, yep, openstack (the meetbot) is missing his ops hat. fixing
21:00:34 <hub_cap> <3 fungi
21:00:37 <hub_cap> ill endmeeting after
21:00:40 <SlickNik> thanks fungi!
21:00:53 <hub_cap> #endmeeting