20:00:15 #startmeeting trove
20:00:16 Meeting started Wed Jul 17 20:00:15 2013 UTC. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:19 The meeting name has been set to 'trove'
20:00:21 o/
20:00:22 o/
20:00:23 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:00:28 o7
20:00:35 &o
20:00:42 o/
20:00:58 \o/
20:01:12 crap i put a bad link on the wiki :p
20:01:22 #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-10-20.00.html
20:01:30 #topic action items
20:01:49 not many AI's. SlickNik is not around?
20:01:58 vipul: get a chance to do any more wikifying?
20:02:11 hub_cap: No, didnt spend any time on this one
20:02:17 likely an ongoing thing
20:02:28 o/
20:02:31 o/
20:02:31 kk, lets action item it again
20:02:32 o/
20:02:40 #action Vipul to continue to update reddwarf -> trove
20:02:40 SlickNik: hey, yer up. initial stab @ dev docs
20:02:40 o/
20:02:42 i saw something
20:02:47 can u link the review?
20:02:57 yeah, one sec.
20:03:20 #link https://review.openstack.org/#/c/37379/
20:03:31 SWEET
20:03:40 good work.
20:03:46 anything else to add wrt the action items?
20:03:57 I've taken the initial info from the wiki and the trove-integration README.
20:04:14 SlickNik: Nice!
20:04:36 Once that's approved, I can turn on the CI-doc job that builds it.
20:04:40 :)
20:04:40 thanks SlickNik
20:04:49 lets get that done then!!! ;)
20:04:55 And then I need to contact annegentle to add the link to the openstack site.
20:05:13 okey moving on then?
20:05:16 yup.
20:05:23 #topic h2 milestone released
20:05:27 #link https://github.com/openstack/trove/tree/milestone-proposed
20:05:28 WOO
20:05:33 WOO
20:05:34 they will cut it i think, thursday?
20:05:39 w00t!
20:05:44 \o/
20:05:51 #lnk http://tarballs.openstack.org/trove/
20:05:53 doh
20:05:55 #link http://tarballs.openstack.org/trove/
20:05:58 there we are
20:06:04 woah look at that
20:06:18 Did you see all those issues marked as Released by Thierry C?
20:06:25 yes i did
20:06:26 yup :)
20:06:28 cuz i get ALL of them ;)
20:06:38 we can move critical bugs back to h2 if we need to
20:06:41 i suspect we wont
20:06:49 since no one is really gonna deploy it
20:07:01 its more just to get us understanding how things work around here
20:07:06 I don't know of any critical bugs, atm.
20:07:10 Aight enough w/ the glass clinking, time to move on
20:07:20 feel free to view the links
20:07:26 #link https://wiki.openstack.org/wiki/GerritJenkinsGithub#Authoring_Changes_for_milestone-proposed
20:07:31 #link https://wiki.openstack.org/wiki/PTLguide#Backporting_fixes_to_milestone-proposed_.28Wednesday.2FThursday.29
20:07:35 if u want to know more about the process
20:07:46 #topic Restart mysql
20:07:55 doh forgot the word test
20:07:57 #link https://github.com/openstack/trove/blob/master/trove/tests/api/instances_actions.py#L256-262
20:08:04 lets spend a bit of time discussing the validity of this
20:08:10 and then spend the rest of the time on replication
20:08:30 SlickNik: all u
20:08:48 So, I agree with grapex that we need a test to validate what the guest agent behavior is when mysql is down.
20:09:14 But I think that that's exactly what the mysql stop tests are doing.
20:09:37 link?
20:09:41 #link https://github.com/openstack/trove/blob/master/trove/tests/api/instances_actions.py#L320-L324
20:10:11 SlickNik: The only major difference is that explicitly tells the guest to stop MySQL, versus letting the status thread do its thing
20:10:40 as in, we are testing the periodic task does its job?
20:10:44 right but isn't the status thread still the thing that's updating status
20:10:51 it's just a different way of stopping mysql
20:11:02 one is explictly other is by messing up logfiles
20:11:20 vipul: True, but the stop rpc call also updates the DB when its finished
20:11:32 and that ib_logfile behavior is very deliberately for mysql, right?
20:12:05 SlickNik: Can you give another summary of the issue the test itself is having?
20:12:23 Isn't it that MySQL actually can't start up again when the test tries to restart it?
20:12:36 grapex: when we corrupt the logfiles, mysql doesn't come up.
20:12:48 the upstart scripts keep trying to respawn mysql since it can't come up.
20:13:01 SlickNik: Does the reference guest not delete those iblogfiles?
20:13:17 i think the tests do
20:13:30 that sounds right
20:13:33 correct
20:13:45 grapex: not delete; but mess up so that they are zeroed out.
20:13:56 so teh difference is
20:14:06 1 test stops mysql, the other kills it behind the scenes
20:14:07 Now since upstart is trying to bring mysql up, it has a lock on the logfiles.
20:14:16 the latter test waits for the periodic task to signal its broken
20:14:23 the former test updates the db as part of the stop
20:14:24 ya?
20:14:24 So Sneaky Pete actually wipes the ib logfiles. Maybe that's something the reference guest should do?
20:14:41 It does it as part of the restart command
20:14:57 lets first try to figure out if the tests are truly different
20:15:10 and then once we agree it needs to stay (if it does) we can think about solutions
20:15:11 Well that one also makes sure the iblogfiles are wiped
20:15:30 grapex: won't that mean mysql can start again?
20:15:40 vipul: Yes.
20:15:43 So there's also the point that this test takes about ~4-5 mins.
20:15:55 then this test will fail, because the test expects that it cannot start
20:16:50 So one question is that do we think that this 1 scenario (which isn't all that different from the stop tests) warrants an extra addition of ~4-5 minutes on every test run?
20:17:06 if it tests something different i think its warranted
20:17:09 (in parens) = my opinion
20:17:26 is exactly the same != isint all that different
20:17:29 are they testing different things?
20:17:35 I'm sorry, I misspoke about wiping the iblogfiles - that happens on resizes and other cases, not for restart
20:17:36 thats what i want us to agree upon
20:17:54 well, in either case we are testing for a broken connection.
20:18:00 are we?
20:18:06 And mysql not running is causing the broken connection.
20:18:07 SlickNik: I disagree
20:18:14 i thought the screw_up_mysql tests that the periodic task updates the db properly
20:18:24 and the explicit stop tests that the stop updates the db synchronously
20:18:25 I think also whether a test has value is a different question from whether we want to run it every single time as part of CI if it's impeding people
20:18:28 is that not the case?
20:18:41 grapex: what code path does the restart tests hit that the resize tests don't also hit?
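
The distinction being debated above, boiled down to a minimal, self-contained sketch. This is illustrative pseudocode of the two update paths, not the actual Trove guest or test code; all names below are made up.

    import time


    class GuestAgentSketch(object):
        """Toy model of the two status-update paths under discussion."""

        def __init__(self, db_record):
            self.db = db_record        # stand-in for the instance row in the DB
            self.mysql_running = True

        def stop_db(self):
            # Path exercised by the stop tests: the stop RPC itself stops MySQL
            # and writes the resulting status back to the DB synchronously.
            self.mysql_running = False
            self.db["task_status"] = "NONE"
            self.db["status"] = "SHUTDOWN"

        def kill_mysql_behind_the_scenes(self):
            # Path exercised by the unsuccessful-restart test: MySQL dies without
            # the guest being told (the test zeroes out the ib_logfiles), so the
            # DB is only updated once the periodic status thread notices.
            self.mysql_running = False

        def periodic_status_check(self):
            # Simplified stand-in for the status thread: no task in flight plus
            # an unreachable MySQL gets recorded as SHUTDOWN.
            if self.db.get("task_status") == "NONE" and not self.mysql_running:
                self.db["status"] = "SHUTDOWN"


    if __name__ == "__main__":
        record = {"task_status": "NONE", "status": "ACTIVE"}
        guest = GuestAgentSketch(record)
        guest.kill_mysql_behind_the_scenes()
        while record["status"] != "SHUTDOWN":   # the restart test polls like this
            guest.periodic_status_check()
            time.sleep(1)
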
20:18:53 do*
20:19:03 restart truly makes sure the status thread sees MySQL die and updates appropriately
20:19:16 so the stop_db code seems to set that state = None
20:19:19 self.instance.update_db(task_status=inst_models.InstanceTasks.NONE)
20:19:22 correct
20:19:24 stop is actually stopping it, so it updates the database as part of that RPC code path, not the thread
20:19:27 Which means the status thread will set it to shutdown
20:19:55 sure but it does taht based on different circonstances vipul
20:20:10 1) it checks the task is NONE vs 2) it cant talk to mysql, right?
20:20:34 it checks the status is shutdown and can't talk to mysql
20:20:52 ok
20:20:59 does the other tests update the task to none?
20:21:03 *test
20:21:13 restart also sets it to None
20:21:15 Also keep in mind, the Sneaky Pete tests actually sets the status to stop as part of that RPC call. If you're saying the reference guest doesn't, I'm not sure why it wouldn't
20:21:44 ok weve got 4 more min on this and im gonna call it for now
20:21:46 as undecided
20:21:51 id like to discuss replication
20:22:00 +1 on that
20:22:07 lol imsplitbit
20:22:12 Well, I want to suggest something
20:22:18 sure
20:22:21 youve got a few min
20:22:23 hub_cap: i will gladly accept a gist link of the chat
20:22:25 go!
20:22:25 If this test is really being a bother, lets just take it out of the "blackbox" group but keep it in the code.
20:22:46 KennethWilke: its logged, you can see it on http://eavesdrop.openstack.org/meetings/trove/
20:22:48 We run it at Rackspace all the time and I find it useful. It could still be run nightly or something.
20:22:52 hub_cap: ty
20:23:08 grapex: I'd totally be fine with that.
20:23:09 (nightly for the completely public Ubuntu / KVM / Reference guest code)
20:23:27 ya but id argue we shouldnt remove it till we do the nightly different tests
20:23:32 SlickNik: Maybe the solution is to create a second group called "nightly" which just has more test groups added to it
20:23:35 hub_cap: Seconded.
20:23:36 if it in fact does test somethign different
20:23:45 +2
20:24:16 hub_cap: +1
20:24:17 +1
20:24:18 i vote keep it, even if it means moving it
20:24:26 which im still not sure it _does_ test something different at this point
20:24:32 but lets move on
20:24:39 i think we have a reasonable consensus to keep it but move it
20:24:54 i don't want your goddamn lettuce
20:25:08 moving on?
20:25:09 hub_cap: Are you talking to a rabbit?
20:25:09 need some more research to verify that it is indeed different
20:25:12 can we just action it?
20:25:13 yes vipul
20:25:16 go head
20:25:40 #action SlickNik, vipul to compare Stop test and Unsuccessful Restart tests to identify differences
20:25:44 grapex: no. google it
20:25:53 ok. repl time
20:25:59 #replication :o
20:26:01 hub_cap / grapex: I'll move it for now so that we don't keep hitting it on rdjenkins. I'll also look to see if we can fix the test so we don't run into the upstart issue (also the research that vipul actioned).
20:26:02 lol
20:26:12 #topic replication :o
20:26:21 let me relink
20:26:21 +1 SlickNik cuz we will have to deal w/ fedora soon too ;)
20:26:25 plz do
20:26:32 #link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
20:26:41 #link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API-Using-Instances
20:26:49 hub_cap: go!
20:26:56 thanks guys, on to replication!
20:27:08 #all i wanted was a cheeseburger
20:27:15 ok SO
20:27:22 weve gone back and forth on this topic for a while now
20:27:29 #all I got was a lousy T-shirt?
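
For reference, the "nightly" group suggested in the test discussion above could look roughly like this, assuming the proboscis runner that trove-integration already uses to define its test groups. The group names below are illustrative, not the real ones.

    from proboscis import register

    # Fast groups that every gate run keeps exercising.
    BLACKBOX_GROUPS = [
        "dbaas.api.instances.actions.stop",
    ]

    # Everything in blackbox plus the slow (~4-5 min) unsuccessful-restart test.
    NIGHTLY_GROUPS = BLACKBOX_GROUPS + [
        "dbaas.api.instances.actions.restart.unsuccessful",
    ]

    register(groups=["blackbox"], depends_on_groups=BLACKBOX_GROUPS)
    register(groups=["nightly"], depends_on_groups=NIGHTLY_GROUPS)
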
20:27:35 lol
20:27:47 2 schools, /instances has ALL instances, some error conditions on things like resize
20:28:06 or /instances and /clusters, and things move from /instances to /clusters when they get promoted
20:28:16 and cluster nodes are never a part of /instances
20:28:18 no demorris :) we can decide w/o him
20:28:23 HA nice
20:28:29 he should be here
20:28:31 vipul: u wish
20:28:34 just to stir the pot
20:28:36 :)
20:28:36 boo
20:28:38 NICE
20:28:40 lol, he's lurking.
20:28:45 always
20:29:07 ok i was of the opinion that when we promote to /clusters we move the instance there
20:29:12 so u can do things u shouldnt do on it
20:29:17 as in, if its a slave, u shouldnt add a user
20:29:23 or u shodlnt delete a master
20:29:42 but after thinking about it for a while, there arent a LOT of failure cases for modifying /instances
20:29:50 the only one i can think of is deleting a master
20:30:18 u shoudl be able to add a RO user to a slave, u shoudl be able to resize a slave to something that might not be ok for the cluster
20:30:26 the permutations for what u shouldnt be able to do are NOT small
20:30:35 and are different for differetn cases of a cluster
20:30:41 and different types of clusters
20:30:55 hell they are probably close to infinite given differente circonstances
20:31:13 so id rather keep things in /instances, and just limit very few cases for modifying an instance in a cluster
20:31:33 if we find something that should _never_ be done, then so be it, we add it as a failure case in /instances
20:31:42 so.. would /cluster has the same set of operations that /instances has (create user, add db, etc)
20:31:47 no
20:31:55 it would be helper for doing things to an entire cluster
20:31:58 and thats it
20:32:11 create/resize/delete
20:32:12 add db/create user
20:32:18 we cant really define how a user will use a slave
20:32:29 but they may not always be slaves right
20:32:31 i had a extra db on slaves on some of my setups w/ urchin a while ago
20:32:38 and different users
20:32:56 you may have a galera cluster.. wehre the users / schemas will all be replicated across
20:33:00 no matter which one you write to
20:33:02 yes
20:33:09 so given that case it doesnt matter where u write it
20:33:15 so then we cant restrict it
20:33:20 so why not write it to /cluster.. why do they have to pick one
20:33:22 or shouldn't
20:33:23 there is no "master master" there
20:33:34 because i want to add a RO user to slave 1
20:33:36 how do i do that
20:33:40 vipul: I think there is a good case to add some helper things to /cluster
20:33:55 but it isn't needed to implement a cluster and support it
20:33:58 So what's the ultimate reason for not doing this on the cluster but doing it on the individual instances?
20:34:08 Duplication of code?
20:34:14 duplication of schema
20:34:20 and complication ot end user
20:34:35 1/2 my instances in one /path and 1/2 in /another seems very unintuitive
20:34:43 agreed
20:34:48 i have a bunch of /instances, period
20:34:54 at the end of the day that what they are anywya
20:35:09 and vipul im not tryign to define what we can and cant do on /clusters
20:35:16 im tryin to get consensus on where /instances live
20:35:19 It seems like as we do auto-failover, etc.. we'd want to abstract the actual 'type' of instance away from the user.. so the user only sees a db as a single endpoint
20:35:47 in a cluster.. you could see a single endpoint that's load balalcned also
20:35:48 vipul: if we do that then you have to separate replication from clustering
20:35:54 because they aren't the same
20:36:00 :o
20:36:03 yet they share alot of functionality
20:36:29 but is it that different? if we promote a slave to a master on behalf of the user.. and spin up a new slave for them
20:36:35 we will still ahve list /clusters
20:36:40 and u can show a single endpoint
20:36:47 from the user's perpective it doesn't matter if it's a multi-master or single master/slave
20:36:57 fwiw tho all clsutering apis dont use single endpoint
20:37:08 agreed, we can't yet
20:37:09 i believe tungsten uses its own internal code to determine where to write to
20:37:14 I've got a question as the infamous No-NoSQL guy.
20:37:15 imsplitbit: as i am catching up on this feature those links of the API and API with instances are the 2 proposed plans we are debating?
20:37:16 in its connector api
20:37:38 vipul: but you're assuming use, what if I have a db on one instance and I want to keep a spare copy of it warm on another host but also want to use that host as a db server for a complete different dataset?
20:37:40 but again, if you list /clusters we can provide a single endpoint
20:37:47 cp16net: yes
20:37:54 hub_cap: exactly
20:37:56 but if you _want_ you can enact on a slave / "other master" in /instances
20:38:06 im saying dont remove them from /instances
20:38:11 if the cluster type supports a single endpoint then /clusters should return that information
20:38:12 we can still totally do what vipul wants in /clusters
20:38:18 you are essentially paying for every /instance
20:38:21 so we shoudl show them
20:38:28 even if u ahve auto failover
20:38:33 u buy 2 or 3 or X instances
20:38:39 and use one ip
20:38:52 yea i think the instnace info should be visible.. but at some point in the future.. we may have a single dns entry returned or something
20:38:55 i would separate out billing from it though
20:38:56 if i was paying for 9 instances in a auto failover cluster, id like to see them all in /instances
20:39:12 vipul: and that will be returned with the cluster ref
20:39:15 demorris: there is no billing in it, just providing a point from a customer point of view
20:39:18 if applicable
20:39:24 vipul: we can do that, now even if applic... grr dsal
20:39:31 hub_cap: k
20:39:33 just say what i was gonna say why dont ya
20:39:43 i got a can of these baked beans too
20:39:49 vipul: why couldnt you create a single dns entery returned for the cluster but still have dns for each instance like it is now?
20:40:01 id want that
20:40:09 cuz if i had to connect to instance X to clean it up manually
20:40:12 id want to be able to
20:40:19 konetzed: I would think most people would
20:40:21 konetzed: I guess you could.. but then the customer would end up breaking if they happened ot use one of the instance entries
20:40:26 like auto-failover is ont working, let me get on node X to prmote it
20:40:38 vipul: you can only protect stupid so much
20:40:43 HAH
20:40:43 :D
20:40:47 hah
20:40:51 ya none yall proteced fro me
20:40:54 this is really a question of how much do we hide from the user, so even if they are stupid they can use it
20:40:59 why u think they moved me to cali
20:41:13 sure vipul and i think we could concede on some of that
20:41:20 tahts not set in stone
20:41:27 +1
20:41:29 we could even rev teh api a bit when we have > 1 cluster
20:41:30 SHIT
20:41:32 im out of power
20:41:32 Okay, so I guess it depends on what we're shooting for here.
20:41:40 dude
20:41:44 hub_cap: FAIL
20:41:44 sweet found a plug
20:41:45 vipul: i think you will find enough arguments for each way
20:41:53 agreed
20:42:05 well there's 2 types of users right? power users and button pushers
20:42:13 you need to find enough to facilitate both
20:42:16 yes
20:42:24 or provide a RBAC solution
20:42:31 that allows the installer to decide
20:42:39 If we're looking for a managed DB solution here that exposes a simple clustering API to the user, then I think that is probably better served by having a single endpoint for it.
20:43:04 i think we are looking to provide a service that is extensible enough to do that
20:43:09 _or_ allow the user access to all
20:43:10 frankly
20:43:20 we WILL NEVER be able to provide a fully turnkey solution
20:43:29 otherwise someone else woudlve
20:43:32 mysql is a tricky beast
20:43:34 SlickNik: no one is arguing against providing a single endpoint for users who want one
20:43:52 we will always need to provide a way for a user or operator to get to any instance
20:43:55 one thing to keep in mind is the more that we hide, the less the user can faak us up.. like break our ability to auto-failover
20:43:55 But if we're talking about letting users do things like have an instance as part of a cluster, as well as able to connect to the db directly, there's no way of getting away from a complex clustering API with actions spread across /instances and /clusters
20:44:07 actions yes SlickNik
20:44:13 but entities, no
20:44:18 thats the first line of agreement
20:44:23 as long as we are all on the same page tehre
20:44:30 it makes the api closer to concrete
20:44:43 im sure we can, eventually, hide instances if we want to
20:44:50 shown_to_user=False
20:44:53 easy as pie
20:44:59 or at least not allow them to operate on them
20:45:02 lets solve the easy solution first
20:45:07 sure vipul
20:45:09 I always go back to some of this being up to the provider / operator of Trove and separating that out from what the API supports
20:45:10 managed vms
20:45:17 i was just going to say sounds like were going down a rabbit hole
20:45:17 we need that anyway for nova
20:45:29 why can't each cluster type have a policy that dictates what can and cannot be done to the cluster or instances themselves
20:45:34 cuz they can just muck w/ them in nova if you are using their user to prov instances ;)
20:45:39 yes demorris RBAC
20:45:40 demorris: +1
20:45:47 if my policy says, individual operations are not support on /instacnes, then you don't allow it
20:45:47 i said that like ~5 min ago
20:45:58 it really is a deployment type of decision it seems
20:46:00 SlickNik: having a single endpoint might restrict users from building a system that reads from all nodes and only writes to one.
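
To make the position being argued for above concrete, here is a rough sketch of the resource shapes as plain Python literals. Field names such as cluster_id, cluster_role, and endpoint are hypothetical illustrations, not the schemas in the two wiki proposals linked earlier.

    # GET /instances -- every node the tenant owns is listed, clustered or not.
    instances = [
        {"id": "inst-1", "flavor": "m1.medium",
         "cluster_id": "clu-1", "cluster_role": "master"},
        {"id": "inst-2", "flavor": "m1.medium",
         "cluster_id": "clu-1", "cluster_role": "slave"},
        {"id": "inst-3", "flavor": "m1.small", "cluster_id": None},  # standalone
    ]

    # GET /clusters/clu-1 -- a helper view over the same instances; it can carry
    # a single endpoint when the cluster type supports one, without hiding the
    # member instances from /instances.
    cluster = {
        "id": "clu-1",
        "type": "mysql-master-slave",
        "instances": ["inst-1", "inst-2"],
        "endpoint": "clu-1.db.example.com",
    }
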
20:46:01 lets just solve the easy solution first tho
20:46:05 we are getting out of hand
20:46:10 hub_cap: you know I can't follow every message in here…brain won't allow it :)
20:46:10 we need to solve master/slave
20:46:15 before we get to magical clsutering
20:46:22 demorris: transplant ;)
20:46:36 hub_cap: is master/slave /cluster then?
20:46:39 we understand the set of actions in /clusters can grow
20:46:42 thats fine
20:46:44 yes
20:46:50 ok
20:46:53 but both isntances are avail via /instances
20:47:02 I don't like the use of the word clusters for replication because it implies too much
20:47:03 and u can resize the slave down via /instances/id/resize
20:47:08 but we can't think of a better term for it
20:47:16 * hub_cap shreds imsplitbit with a suspicious knife
20:47:21 * hub_cap boxes imsplitbit with an authentic cup
20:47:21 * hub_cap slaps imsplitbit around with a tiny and bloodstained penguin
20:47:23 * hub_cap belts imsplitbit with a medium sized donkey
20:47:26 * hub_cap tortures imsplitbit with a real shelf
20:47:32 :)
20:47:40 I won't give up that fight
20:48:01 but I acknowledge that it doesn't need to be fought right now
20:48:12 even though cluster is overloaded, it does fit even if it's master/slave
20:48:14 imo
20:48:18 does what i say make sense vipul?
20:48:22 create master slave via /cluster
20:48:31 resize both nodes cuz youre on oprah, /cluster/id/resize
20:48:33 yep, makese sense
20:48:42 resize indiv node cuz youre cheap /instance/id/resize
20:49:15 create db on slave cuz u need a local store for some operation on an application /instance/id/db
20:50:01 what about create db/user on master? does that go through /instance/id or /cluster/id?
20:50:02 if u want to create it on all of the, create it on the master ;)
20:50:18 u _know_ u have a master, why not let the user just do that
20:50:27 this only applies for master/slave
20:50:44 hub_cap: I think that is the least prescriptive approach
20:50:47 for what its worth
20:50:58 right, but we should allow it to be created on the /cluster as well
20:51:00 /clusters/id/resize is NOT going to be easy
20:51:07 i have 9 instances
20:51:09 3 failed
20:51:11 1 is now broken
20:51:17 the master just went down
20:51:17 So is there a difference between create db on master vs create db on cluster?
20:51:19 fix it so it never fails
20:51:19 what do i do
20:51:32 konetzed: youre out yo mind
20:51:41 i.e. if I do /instance/id/db CREATE, it is a local instance that will not get replicated?
20:51:45 hub_cap: the hp ppl didnt know that already
20:51:47 on the master
20:51:56 hub_cap: but that same scenario would exist if you did a single instance resize... where that one failed
20:52:02 now the user is stuck..
20:52:06 cuz they have to fix it
20:52:14 where as in /cluster/resize we'd fix it
20:52:18 right but thats up to you to control vipul
20:52:27 think about the permutations there vipul
20:52:28 SlickNik: i think user adds on the master would be replicated
20:52:34 lets at least defer it
20:52:41 till we see some real world scenarios
20:52:51 id prever "acting" on clusters to come later
20:52:56 konetzed: what about db adds?
20:52:56 because its /hard/
20:53:16 imsplitbit: arnt all crud operations done on the master sent to slaves?
20:53:20 resizing a cluster sounds like it might easier to migrate the data to a new cluster..
20:53:41 :P esp
20:53:41 konetzed: yes
20:53:43 rather than trying to resize each individual node if that's what we are talking about :)
20:53:49 esp: that could be one way to do it..
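
One concrete version of the "RO user on a slave" case that keeps coming up in the discussion. This is plain MySQL administration rather than anything Trove-specific; the connection details are made up, and the PyMySQL client is assumed purely for illustration.

    import pymysql

    # Connect straight to the slave: exactly the kind of per-instance operation
    # that only works if the node stays reachable via /instances.
    conn = pymysql.connect(host="slave-1.example.com",
                           user="admin", password="secret")
    try:
        with conn.cursor() as cur:
            # A reporting user that can read replicated data but never write.
            cur.execute("CREATE USER 'report'@'%' IDENTIFIED BY 'report-pass'")
            cur.execute("GRANT SELECT ON orders.* TO 'report'@'%'")
        conn.commit()
    finally:
        conn.close()

    # "replicate only certain dbs" from the discussion maps to the stock MySQL
    # replication filters configured on the slave, e.g. in my.cnf:
    #   replicate-do-db     = orders
    #   replicate-ignore-db = slave_local_scratch
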
20:53:56 create db will go to a slave if issued on a master
20:54:00 esp: maybe so but if the dataset is 500GB that may not be true
20:54:05 imsplitbit: you can choose to replicate only certain dbs if you so desire
20:54:05 imsplitbit: so to answer SlickNik's question user and db adds all get replicated
20:54:13 if you asked me to individually resize a 9 node cluster I would scream at you.
20:54:33 esp: even if 90% of the time it failed for you if u did /cluster/id/resize
20:54:39 esp: agreed which is why we would want to support doing a cluster resize
20:54:42 taht means you would have to issue it 9 times anwyay
20:54:47 and if one failed to upgrade
20:54:50 but hub_cap's point is it's not gonna be easy
20:54:51 then u gotta downgrade the others
20:54:52 imsplitbit: I gotcha, doesn't cover all cases.
20:54:52 double downtime
20:54:57 right
20:54:59 imsplitbit: so why should we allow extraneous dbs (outside the cluster) to be created on slaves but not on master?
20:55:03 lets defer "Actions" to /clusters
20:55:06 to get _something_ done
20:55:13 to summarize
20:55:15 we have 5 min
20:55:22 instances are all in /instances
20:55:23 SlickNik: because it's a mistake to assume what a user will want to do
20:55:27 i think we need to get past resizes failing, because that has nothing to do with clusters
20:55:27 u can enact on them indiv
20:55:30 SlickNik: good point.. is this a valid use case even? i'm no DBA.. but why would you do that
20:55:37 ok maybe no summary............
20:55:48 * hub_cap waits for the fire to calm down b4 going on
20:55:51 do DBAs create dbs on slaves...
20:55:56 why not vipul
20:56:02 vipul: I have configured db setups for very large corporations in our intensive department and I can say it happens often
20:56:07 yes
20:56:08 because at any time, you'd promote that
20:56:09 i have done it
20:56:16 not necessarily vipul
20:56:19 and you'd need to do it again on the new slave
20:56:25 read slaves are not 100% promotion material
20:56:29 theya re sometimes to _juist_ read
20:56:45 you may just have a slave to run backups on
20:56:45 we cant guaranteee everyone will use it the same way
20:56:48 yea I get that.. but they are reading master data
20:56:52 hence the need to _not_ be perscriptive
20:56:59 ya and could be 10 minutes behind vipul
20:57:09 ok lets chill it out
20:57:12 let me summarize
20:57:12 demorris: then the additional dbs you created are also backed up..
20:57:14 we have 3 min
20:57:19 lol hub_cap
20:57:24 or ill just decide w/o anyone elses input
20:57:29 ill be the DTL
20:57:29 hub_cap: you need a timer bot :)
20:57:33 u can decide what the D means
20:57:41 Guido van hub_cap
20:57:44 summary
20:57:44 if we have backup slaves.. should those additional DBs/Users be backed up?
20:57:54 lets take indiv questions offline vipul plz
20:58:02 sorry :)
20:58:03 here is the first cut of the api
20:58:19 instances are in /instances, all of them, all visible, all actions can happen to them
20:58:35 /clusters is used for create/delete only as a helper api
20:58:42 that will be V1 of clusters
20:58:45 hub_cap: and also some atomic actions
20:58:48 as we decide we need more stuff, we will add it
20:58:50 hub_cap: I'm bought on the idea of instance stuff going in /instances. But does the instance still contain cluster data now?
20:59:01 this magic "attributes" addition?
20:59:06 yes kevinconway it will have to, we can decide that stuff later
20:59:14 there will be some indication
20:59:41 once we have a need for more operations, be them atomic or acting upon many instances, we will add to /clusters
20:59:43 hub_cap: when did we drop having actions on clusters?
20:59:49 otherwise we will be coding this forever
20:59:54 kevinconway: It's necessary if you want to have any sort of ruleset dictating what is possible on the instance vs on the cluster.
20:59:57 demorris: i made an executive decison for R1
20:59:58 V1
21:00:01 we can always add them
21:00:05 but if they suck we cant remove them
21:00:15 lets just get something up
21:00:17 and working
21:00:19 no actions!!!!
21:00:21 hub_cap: I would vote for V1 to have at least atomic actions - add nodes, resize flavors, resize storage…in that they happen to the whole cluster
21:00:24 SlickNik: you can mark an instance as part of a cluster without modifying the instance resource though
21:00:26 it seems like it's easier to have a /clusters API that's completely isolated from /instances.. if we remove the 'promote' existing instance requirement
21:00:38 demorris: we had a whole conversation about problem permutations
21:00:46 goto #openstack-trove to contineu
21:00:48 #endmeeting
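
A minimal client-side sketch of the V1 surface summarized at the end of the meeting. Endpoint paths and payload fields are illustrative guesses layered on the existing instance API; the /clusters payload in particular is not a final design, which is exactly what the wiki proposals are still working out.

    import requests

    BASE = "https://trove.example.com/v1.0/TENANT_ID"
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # /clusters as a helper: create a master/slave pair in one call.
    requests.post(BASE + "/clusters", headers=HEADERS, json={
        "cluster": {"name": "orders", "type": "mysql-master-slave",
                    "flavorRef": "7", "volume": {"size": 10}}})

    # Every node still shows up, and is billed and visible, under /instances...
    requests.get(BASE + "/instances", headers=HEADERS)

    # ...and can still be acted on individually, e.g. resize just the slave.
    requests.post(BASE + "/instances/inst-2/action", headers=HEADERS,
                  json={"resize": {"flavorRef": "8"}})
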