Wednesday, 2013-07-17

*** sarob has quit IRC00:02
*** sarob_ has quit IRC00:03
*** sarob_ has joined #openstack-meeting-alt00:03
*** sarob_ has quit IRC00:04
*** vkmc has joined #openstack-meeting-alt00:07
*** colinmcnamara1 has quit IRC00:11
*** IlyaE has quit IRC00:13
*** Riddhi has quit IRC00:15
*** jrodom has joined #openstack-meeting-alt00:16
*** colinmcnamara has joined #openstack-meeting-alt00:24
*** markwash has joined #openstack-meeting-alt00:27
*** jrodom has quit IRC00:50
*** colinmcnamara has quit IRC00:52
*** akuznetsov has quit IRC00:53
*** jrodom has joined #openstack-meeting-alt00:56
*** jrodom has quit IRC01:00
*** jrodom has joined #openstack-meeting-alt01:02
*** jodom has joined #openstack-meeting-alt01:10
*** IlyaE has joined #openstack-meeting-alt01:13
*** jrodom has quit IRC01:14
*** qwerty_nor has quit IRC01:14
*** qwerty_nor has joined #openstack-meeting-alt01:36
*** cp16net is now known as cp16net|away01:39
*** cp16net|away is now known as cp16net01:40
*** IlyaE has quit IRC01:42
*** markmcclain has joined #openstack-meeting-alt02:01
*** jodom has quit IRC02:13
*** demorris has joined #openstack-meeting-alt02:36
*** vkmc has quit IRC02:37
*** IlyaE has joined #openstack-meeting-alt02:40
*** IlyaE has quit IRC02:40
*** IlyaE has joined #openstack-meeting-alt02:57
*** lastidiot1 has joined #openstack-meeting-alt03:05
*** lastidiot has quit IRC03:07
*** mtreinish has joined #openstack-meeting-alt03:14
*** demorris has quit IRC03:18
*** dhellmann_ has quit IRC03:18
*** SergeyLukjanov has joined #openstack-meeting-alt03:18
*** mtreinish_ has joined #openstack-meeting-alt03:19
*** jrodom has joined #openstack-meeting-alt03:21
*** mtreinish has quit IRC03:21
*** mtreinish_ is now known as mtreinish03:21
*** dhellmann has joined #openstack-meeting-alt03:21
*** jrodom has quit IRC03:22
*** bdpayne has quit IRC03:26
*** dhellmann is now known as dhellmann_03:28
*** colinmcnamara has joined #openstack-meeting-alt03:40
*** IlyaE has quit IRC03:45
*** lastidiot1 has quit IRC03:57
*** lastidiot has joined #openstack-meeting-alt03:58
*** IlyaE has joined #openstack-meeting-alt03:58
*** bdpayne has joined #openstack-meeting-alt03:59
*** bdpayne has quit IRC04:01
*** lastidiot has quit IRC04:07
*** IlyaE has quit IRC04:07
*** markmcclain has quit IRC04:14
*** qwerty_nor has quit IRC04:18
*** IlyaE has joined #openstack-meeting-alt04:26
*** IlyaE has quit IRC04:26
*** IlyaE has joined #openstack-meeting-alt04:29
*** IlyaE has quit IRC04:29
*** colinmcnamara has quit IRC04:31
*** colinmcnamara has joined #openstack-meeting-alt04:37
*** colinmcnamara1 has joined #openstack-meeting-alt04:40
*** colinmcnamara has quit IRC04:42
*** stanlagun has quit IRC04:45
*** colinmcnamara1 has quit IRC04:45
*** stanlagun has joined #openstack-meeting-alt04:46
*** colinmcnamara has joined #openstack-meeting-alt04:54
*** SergeyLukjanov has quit IRC05:08
*** IlyaE has joined #openstack-meeting-alt05:28
*** colinmcnamara has quit IRC05:58
*** IlyaE has quit IRC06:09
*** yidclare has joined #openstack-meeting-alt06:17
*** yidclare has left #openstack-meeting-alt06:17
*** abaron has joined #openstack-meeting-alt06:38
*** markwash has quit IRC06:51
*** markwash has joined #openstack-meeting-alt07:14
*** SergeyLukjanov has joined #openstack-meeting-alt07:58
*** dosaboy has joined #openstack-meeting-alt08:20
*** SergeyLukjanov has quit IRC08:24
*** abaron has quit IRC08:41
*** abaron has joined #openstack-meeting-alt08:54
*** SergeyLukjanov has joined #openstack-meeting-alt09:01
*** SergeyLukjanov has quit IRC09:51
*** SergeyLukjanov has joined #openstack-meeting-alt10:06
*** SergeyLukjanov has quit IRC10:29
*** SergeyLukjanov has joined #openstack-meeting-alt10:30
*** SergeyLukjanov has quit IRC10:32
*** SergeyLukjanov has joined #openstack-meeting-alt10:33
*** SergeyLukjanov has quit IRC10:36
*** pcm__ has joined #openstack-meeting-alt10:38
*** pcm_ has joined #openstack-meeting-alt10:39
*** demorris has joined #openstack-meeting-alt10:52
*** markwash has quit IRC11:01
*** gals has joined #openstack-meeting-alt11:13
*** gals has quit IRC11:15
*** gals has joined #openstack-meeting-alt11:16
*** pcm_ has quit IRC11:30
*** vkmc has joined #openstack-meeting-alt11:41
*** vkmc has joined #openstack-meeting-alt11:41
*** KillTheCat has joined #openstack-meeting-alt11:49
*** KillTheCat has left #openstack-meeting-alt11:51
*** SergeyLukjanov has joined #openstack-meeting-alt11:56
*** demorris has quit IRC11:57
*** pcm_ has joined #openstack-meeting-alt12:04
*** pcm_ has quit IRC12:04
*** abaron has quit IRC12:05
*** abaron has joined #openstack-meeting-alt12:16
*** djohnstone has joined #openstack-meeting-alt12:16
*** pdmars has joined #openstack-meeting-alt12:34
*** pcm__ has joined #openstack-meeting-alt12:44
*** sballe has joined #openstack-meeting-alt12:51
*** lastidiot has joined #openstack-meeting-alt12:55
*** pcm__ has quit IRC13:01
*** lastidiot has quit IRC13:03
*** pcm_ has joined #openstack-meeting-alt13:13
*** kevinconway has joined #openstack-meeting-alt13:13
*** pcm_ has quit IRC13:13
*** pcm_ has joined #openstack-meeting-alt13:14
*** jergerber has joined #openstack-meeting-alt13:15
*** jergerber has quit IRC13:15
*** sballe has quit IRC13:34
*** cp16net is now known as cp16net|away13:49
*** lastidiot has joined #openstack-meeting-alt14:14
*** SergeyLukjanov has quit IRC14:15
*** SergeyLukjanov has joined #openstack-meeting-alt14:22
*** jcru has joined #openstack-meeting-alt14:22
*** rnirmal has joined #openstack-meeting-alt14:25
*** IlyaE has joined #openstack-meeting-alt14:28
*** dosaboy has quit IRC14:29
*** Riddhi has joined #openstack-meeting-alt14:31
*** dosaboy has joined #openstack-meeting-alt14:34
*** lastidiot has quit IRC14:45
*** sballe has joined #openstack-meeting-alt14:53
*** abaron has quit IRC14:57
*** pcm_ has left #openstack-meeting-alt14:59
*** esp has joined #openstack-meeting-alt15:01
*** bdpayne has joined #openstack-meeting-alt15:13
*** dhellmann has joined #openstack-meeting-alt15:15
*** tanisdl has joined #openstack-meeting-alt15:19
*** megan_w has joined #openstack-meeting-alt15:20
*** demorris has joined #openstack-meeting-alt15:24
*** akuznetsov has joined #openstack-meeting-alt15:26
*** esp has quit IRC15:29
*** akuznetsov has quit IRC15:35
*** Riddhi has quit IRC15:36
*** cp16net|away is now known as cp16net15:39
*** Riddhi has joined #openstack-meeting-alt15:40
*** IlyaE has quit IRC15:41
*** ruhe has joined #openstack-meeting-alt16:18
*** lastidiot has joined #openstack-meeting-alt16:27
*** markwash has joined #openstack-meeting-alt16:36
*** SergeyLukjanov has quit IRC16:37
*** ruhe has quit IRC16:39
*** SergeyLukjanov has joined #openstack-meeting-alt16:40
*** SergeyLukjanov_ has joined #openstack-meeting-alt16:40
*** SergeyLukjanov_ has quit IRC16:42
*** SergeyLukjanov_ has joined #openstack-meeting-alt16:42
*** SergeyLukjanov_ has quit IRC16:42
*** SergeyLukjanov_ has joined #openstack-meeting-alt16:42
*** SergeyLukjanov_ has quit IRC16:43
*** SergeyLukjanov_ has joined #openstack-meeting-alt16:44
*** SergeyLukjanov has quit IRC16:45
*** lastidiot has quit IRC16:46
*** SergeyLukjanov has joined #openstack-meeting-alt16:49
*** SergeyLukjanov_ has quit IRC16:53
*** qwerty_nor has joined #openstack-meeting-alt16:56
*** SergeyLukjanov_ has joined #openstack-meeting-alt17:00
*** pcm__ has joined #openstack-meeting-alt17:00
*** pcm__ has quit IRC17:00
*** pcm_ has joined #openstack-meeting-alt17:01
*** SergeyLukjanov has quit IRC17:04
*** SergeyLukjanov_ has quit IRC17:05
*** SergeyLukjanov has joined #openstack-meeting-alt17:05
*** lastidiot has joined #openstack-meeting-alt17:07
*** dosaboy__ has joined #openstack-meeting-alt17:08
*** SergeyLukjanov_ has joined #openstack-meeting-alt17:09
*** SergeyLu_ has joined #openstack-meeting-alt17:09
*** SergeyLukjanov has quit IRC17:12
*** dosaboy has quit IRC17:12
*** dosaboy__ is now known as dosaboy17:18
*** Riddhi has quit IRC17:23
*** cp16net is now known as cp16net|away17:38
*** tanisdl has quit IRC17:38
*** markwash_ has joined #openstack-meeting-alt17:44
*** cp16net|away is now known as cp16net17:44
*** markwash has quit IRC17:45
*** markwash_ is now known as markwash17:45
*** markmcclain has joined #openstack-meeting-alt17:49
*** Riddhi has joined #openstack-meeting-alt17:51
*** rnirmal has quit IRC17:59
*** esp has joined #openstack-meeting-alt18:01
*** esp has left #openstack-meeting-alt18:02
*** dosaboy has quit IRC18:10
*** markwash has quit IRC18:21
*** yidclare has joined #openstack-meeting-alt18:21
*** megan_w has quit IRC18:29
*** markwash has joined #openstack-meeting-alt18:34
*** rnirmal has joined #openstack-meeting-alt18:41
*** amytron has joined #openstack-meeting-alt18:44
*** amytron has quit IRC18:45
*** amytron has joined #openstack-meeting-alt18:45
*** markmcclain has quit IRC18:46
*** tanisdl has joined #openstack-meeting-alt18:49
*** sarob has joined #openstack-meeting-alt18:54
*** dhellmann_ is now known as dhellmann18:54
*** dhellmann is now known as dhellmann_18:55
*** dhellmann_ is now known as dhellmann18:58
*** megan_w has joined #openstack-meeting-alt18:58
*** IlyaE has joined #openstack-meeting-alt19:08
*** akuznetsov has joined #openstack-meeting-alt19:11
*** jergerber has joined #openstack-meeting-alt19:28
*** melodous has joined #openstack-meeting-alt19:35
*** melodous has quit IRC19:39
*** datsun180b has joined #openstack-meeting-alt19:45
*** saurabhs has joined #openstack-meeting-alt19:46
*** esp has joined #openstack-meeting-alt19:50
*** konetzed has joined #openstack-meeting-alt19:56
*** imsplitbit has joined #openstack-meeting-alt19:58
*** hub_cap has joined #openstack-meeting-alt19:58
konetzedhub_cap: hi19:58
hub_capheyo19:59
datsun180bhello hello19:59
esphello19:59
hub_cap#startmeeting trove20:00
openstackMeeting started Wed Jul 17 20:00:15 2013 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: trove)"20:00
openstackThe meeting name has been set to 'trove'20:00
vipulo/20:00
djohnstoneo/20:00
hub_cap#link https://wiki.openstack.org/wiki/Meetings/TroveMeeting20:00
datsun180bo720:00
hub_cap&o20:00
juiceo/20:00
kevinconway\o/20:00
hub_capcrap i put a bad link on the wiki :p20:01
hub_cap#link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-10-20.00.html20:01
hub_cap#topic action items20:01
*** openstack changes topic to "action items (Meeting topic: trove)"20:01
*** kgriffs has joined #openstack-meeting-alt20:01
hub_capnot many AI's. SlickNik is not around?20:01
hub_capvipul: get a chance to do any more wikifying?20:01
*** SlickNik has joined #openstack-meeting-alt20:02
vipulhub_cap: No, didnt spend any time on this one20:02
vipullikely an ongoing thing20:02
*** grapex has joined #openstack-meeting-alt20:02
SlickNiko/20:02
grapexo/20:02
hub_capkk, lets action item it again20:02
imsplitbito/20:02
vipul#action Vipul to continue to update reddwarf -> trove20:02
hub_capSlickNik: hey, yer up. initial stab @ dev docs20:02
pdmarso/20:02
hub_capi saw something20:02
hub_capcan u link the review?20:02
SlickNikyeah, one sec.20:02
SlickNik#link https://review.openstack.org/#/c/37379/20:03
hub_capSWEET20:03
hub_capgood work.20:03
hub_capanything else to add wrt the action items?20:03
SlickNikI've taken the initial info from the wiki and the trove-integration README.20:03
grapexSlickNik: Nice!20:04
SlickNikOnce that's approved, I can turn on the CI-doc job that builds it.20:04
hub_cap:)20:04
vipulthanks SlickNik20:04
hub_caplets get that done then!!! ;)20:04
SlickNikAnd then I need to contact annegentle to add the link to the openstack site.20:04
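(A minimal sketch of building those dev docs locally, assuming a standard OpenStack Sphinx layout with sources under doc/source; the paths and settings below are illustrative assumptions, not taken from the review above.)

    # Sketch only: build the HTML dev docs roughly the way a CI doc job would.
    # doc/source is an assumed location; adjust to wherever the review puts conf.py.
    import sys
    from sphinx.application import Sphinx

    app = Sphinx(
        srcdir="doc/source",
        confdir="doc/source",
        outdir="doc/build/html",
        doctreedir="doc/build/doctrees",
        buildername="html",
        status=sys.stdout,
        warning=sys.stderr,
    )
    app.build()
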
hub_capokey moving on then?20:05
SlickNikyup.20:05
hub_cap#topic h2 milestone released20:05
*** openstack changes topic to "h2 milestone released (Meeting topic: trove)"20:05
hub_cap#link https://github.com/openstack/trove/tree/milestone-proposed20:05
hub_capWOO20:05
datsun180bWOO20:05
hub_capthey will cut it i think, thursday?20:05
SlickNikw00t!20:05
konetzed\o/20:05
hub_cap#lnk http://tarballs.openstack.org/trove/20:05
hub_capdoh20:05
hub_cap#link http://tarballs.openstack.org/trove/20:05
hub_capthere we are20:05
vipulwoah look at that20:06
datsun180bDid you see all those issues marked as Released by Thierry C?20:06
hub_capyes i did20:06
SlickNikyup :)20:06
*** plomakin has quit IRC20:06
hub_capcuz i get ALL of them ;)20:06
hub_capwe can move critical bugs back to h2 if we need to20:06
hub_capi suspect we wont20:06
hub_capsince no one is really gonna deploy it20:06
hub_capits more just to get us understanding how things work around here20:07
SlickNikI don't know of any critical bugs, atm.20:07
hub_capAight enough w/ the glass clinking, time to move on20:07
hub_capfeel free to view the links20:07
hub_cap#link https://wiki.openstack.org/wiki/GerritJenkinsGithub#Authoring_Changes_for_milestone-proposed20:07
hub_cap#link https://wiki.openstack.org/wiki/PTLguide#Backporting_fixes_to_milestone-proposed_.28Wednesday.2FThursday.2920:07
*** lastidiot has quit IRC20:07
hub_capif u want to know more about the process20:07
hub_cap#topic Restart mysql20:07
*** openstack changes topic to "Restart mysql (Meeting topic: trove)"20:07
hub_capdoh forgot the word test20:07
hub_cap#link https://github.com/openstack/trove/blob/master/trove/tests/api/instances_actions.py#L256-26220:07
hub_caplets spend a bit of time discussing the validity of this20:08
hub_capand then spend the rest of the time on replication20:08
hub_capSlickNik: all u20:08
SlickNikSo, I agree with grapex that we need a test to validate what the guest agent behavior is when mysql is down.20:08
SlickNikBut I think that that's exactly what the mysql stop tests are doing.20:09
hub_caplink?20:09
vipul#link https://github.com/openstack/trove/blob/master/trove/tests/api/instances_actions.py#L320-L32420:09
grapexSlickNik: The only major difference is that explicitly tells the guest to stop MySQL, versus letting the status thread do its thing20:10
hub_capas in, we are testing the periodic task does its job?20:10
vipulright but isn't the status thread still the thing that's updating status20:10
vipulit's just a different way of stopping mysql20:10
vipulone is explictly other is by messing up logfiles20:11
grapexvipul: True, but the stop rpc call also updates the DB when its finished20:11
*** cp16net is now known as cp16net|away20:11
datsun180band that ib_logfile behavior is very deliberately for mysql, right?20:11
*** plomakin has joined #openstack-meeting-alt20:11
*** SergeyLukjanov_ has quit IRC20:11
*** cp16net|away is now known as cp16net20:12
grapexSlickNik: Can you give another summary of the issue the test itself is having?20:12
grapexIsn't it that MySQL actually can't start up again when the test tries to restart it?20:12
SlickNikgrapex: when we corrupt the logfiles, mysql doesn't come up.20:12
SlickNikthe upstart scripts keep trying to respawn mysql since it can't come up.20:12
grapexSlickNik: Does the reference guest not delete those iblogfiles?20:13
vipuli think the tests do20:13
datsun180bthat sounds right20:13
hub_capcorrect20:13
SlickNikgrapex: not delete; but mess up so that they are zeroed out.20:13
hub_capso the difference is20:13
hub_cap1 test stops mysql, the other kills it behind the scenes20:14
SlickNikNow since upstart is trying to bring mysql up, it has a lock on the logfiles.20:14
hub_capthe latter test waits for the periodic task to signal its broken20:14
hub_capthe former test updates the db as part of the stop20:14
hub_capya?20:14
grapexSo Sneaky Pete actually wipes the ib logfiles. Maybe that's something the reference guest should do?20:14
grapexIt does it as part of the restart command20:14
hub_caplets first try to figure out if the tests are truly different20:14
hub_capand then once we agree it needs to stay (if it does) we can think about solutions20:15
grapexWell that one also makes sure the iblogfiles are wiped20:15
vipulgrapex: won't that mean mysql can start again?20:15
grapexvipul: Yes.20:15
SlickNikSo there's also the point that this test takes about ~4-5 mins.20:15
vipulthen this test will fail, because the test expects that it cannot start20:15
*** jrodom has joined #openstack-meeting-alt20:16
SlickNikSo one question is that do we think that this 1 scenario (which isn't all that different from the stop tests) warrants an extra addition of ~4-5 minutes on every test run?20:16
hub_capif it tests something different i think its warranted20:17
SlickNik(in parens) = my opinion20:17
hub_cap"is exactly the same" != "isnt all that different"20:17
hub_capare they testing different things?20:17
grapexI'm sorry, I misspoke about wiping the iblogfiles - that happens on resizes and other cases, not for restart20:17
hub_capthats what i want us to agree upon20:17
SlickNikwell, in either case we are testing for a broken connection.20:17
hub_capare we?20:18
SlickNikAnd mysql not running is causing the broken connection.20:18
grapexSlickNik: I disagree20:18
hub_capi thought the screw_up_mysql tests that the periodic task updates the db properly20:18
hub_capand the explicit stop tests that the stop updates the db synchronously20:18
grapexI think also whether a test has value is a different question from whether we want to run it every single time as part of CI if it's impeding people20:18
hub_capis that not the case?20:18
SlickNikgrapex: what code path does the restart tests hit that the resize tests don't also hit?20:18
SlickNikdo*20:18
grapexrestart truly makes sure the status thread sees MySQL die and updates appropriately20:19
vipulso the stop_db code seems to set that state = None20:19
vipul            self.instance.update_db(task_status=inst_models.InstanceTasks.NONE)20:19
hub_capcorrect20:19
grapexstop is actually stopping it, so it updates the database as part of that RPC code path, not the thread20:19
vipulWhich means the status thread will set it to shutdown20:19
hub_capsure but it does that based on different circumstances vipul20:19
hub_cap1) it checks the task is NONE vs 2) it cant talk to mysql, right?20:20
vipulit checks the status is shutdown and can't talk to mysql20:20
hub_capok20:20
hub_capdoes the other tests update the task to none?20:20
hub_cap*test20:21
*** KennethWilke has joined #openstack-meeting-alt20:21
vipulrestart also sets it to None20:21
grapexAlso keep in mind, the Sneaky Pete tests actually sets the status to stop as part of that RPC call. If you're saying the reference guest doesn't, I'm not sure why it wouldn't20:21
hub_capok weve got 4 more min on this and im gonna call it for now20:21
hub_capas undecided20:21
hub_capid like to discuss replication20:21
imsplitbit+1 on that20:22
hub_caplol imsplitbit20:22
grapexWell, I want to suggest something20:22
hub_capsure20:22
hub_capyouve got a few min20:22
KennethWilkehub_cap: i will gladly accept a gist link of the chat20:22
hub_capgo!20:22
grapexIf this test is really being a bother, lets just take it out of the "blackbox" group but keep it in the code.20:22
hub_capKennethWilke: its logged, you can see it on http://eavesdrop.openstack.org/meetings/trove/20:22
grapexWe run it at Rackspace all the time and I find it useful. It could still be run nightly or something.20:22
KennethWilkehub_cap: ty20:22
SlickNikgrapex: I'd totally be fine with that.20:23
grapex(nightly for the completely public Ubuntu / KVM / Reference guest code)20:23
hub_capya but id argue we shouldnt remove it till we do the nightly different tests20:23
grapexSlickNik: Maybe the solution is to create a second group called "nightly" which just has more test groups added to it20:23
grapexhub_cap: Seconded.20:23
hub_capif it in fact does test something different20:23
juice+220:23
grapexhub_cap: +120:24
cp16net+120:24
datsun180bi vote keep it, even if it means moving it20:24
hub_capwhich im still not sure it _does_ test something different at this point20:24
hub_capbut lets move on20:24
hub_capi think we have a reasonable consensus to keep it but move it20:24
*** arborism has joined #openstack-meeting-alt20:24
hub_capi don't want your goddamn lettuce20:24
hub_capmoving on?20:25
grapexhub_cap: Are you talking to a rabbit?20:25
vipulneed some more research to verify that it is indeed different20:25
vipulcan we just action it?20:25
hub_capyes vipul20:25
hub_capgo head20:25
vipul#action SlickNik, vipul to compare Stop test and Unsuccessful Restart tests to identify differences20:25
hub_capgrapex: no. google it20:25
hub_capok. repl time20:25
hub_cap#replication :o20:25
SlickNikhub_cap / grapex: I'll move it for now so that we don't keep hitting it on rdjenkins. I'll also look to see if we can fix the test so we don't run into the upstart issue (also the research that vipul actioned).20:26
hub_caplol20:26
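(A runnable toy sketch of the distinction debated above; every class and function in it is an illustrative stand-in, not the actual trove test or guest-agent code. The point: the explicit stop path records its own state change via the RPC handler, while the corrupted-ib_logfile path relies entirely on the periodic status thread to notice.)

    # Toy model of the two paths -- all names here are assumptions.
    class FakeInstance(object):
        def __init__(self):
            self.task_status = "RESTART"      # stand-in for InstanceTasks
            self.service_status = "RUNNING"   # stand-in for ServiceStatuses
            self.mysql_reachable = True

        def update_db(self, task_status):
            self.task_status = task_status

    def explicit_stop(instance):
        # Stop test: the guest RPC call stops mysqld and, when done, clears the
        # task itself (cf. the update_db(task_status=NONE) line quoted above).
        instance.mysql_reachable = False
        instance.update_db(task_status="NONE")

    def corrupt_ib_logfiles(instance):
        # Unsuccessful-restart test: mysqld can no longer start (upstart keeps
        # respawning it), and nothing updates the DB -- detection is left
        # entirely to the periodic status thread.
        instance.mysql_reachable = False

    def status_thread_tick(instance):
        # Rough stand-in for the guest's periodic status update.
        if not instance.mysql_reachable:
            instance.service_status = "SHUTDOWN"

    for scenario in (explicit_stop, corrupt_ib_logfiles):
        inst = FakeInstance()
        scenario(inst)
        status_thread_tick(inst)
        print(scenario.__name__, inst.task_status, inst.service_status)

(Moving the slow case out of the default run, as grapex suggests, would then just be a matter of tagging it with a separate test group -- e.g. a "nightly" group alongside or instead of "blackbox" -- assuming the proboscis groups are wired up that way in the test runner.)
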
hub_cap#topic replication :o20:26
*** openstack changes topic to "replication :o (Meeting topic: trove)"20:26
imsplitbitlet me relink20:26
hub_cap+1 SlickNik cuz we will have to deal w/ fedora soon too ;)20:26
hub_capplz do20:26
imsplitbit#link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API20:26
imsplitbit#link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API-Using-Instances20:26
imsplitbithub_cap: go!20:26
SlickNikthanks guys, on to replication!20:26
hub_cap#all i wanted was a cheeseburger20:27
hub_capok SO20:27
hub_capweve gone back and forth on this topic for a while now20:27
SlickNik#all I got was a lousy T-shirt?20:27
hub_caplol20:27
hub_cap2 schools, /instances has ALL instances, some error conditions on things like resize20:27
hub_capor /instances and /clusters, and things move from /instances to /clusters when they get promoted20:28
hub_capand cluster nodes are never a part of /instances20:28
vipulno demorris :) we can decide w/o him20:28
hub_capHA nice20:28
imsplitbithe should be here20:28
demorrisvipul: u wish20:28
imsplitbitjust to stir the pot20:28
imsplitbit:)20:28
vipulboo20:28
hub_capNICE20:28
SlickNiklol, he's lurking.20:28
demorrisalways20:28
hub_capok i was of the opinion that when we promote to /clusters we move the instance there20:29
hub_capso u can do things u shouldnt do on it20:29
hub_capas in, if its a slave, u shouldnt add a user20:29
hub_capor u shouldnt delete a master20:29
hub_capbut after thinking about it for a while, there arent a LOT of failure cases for modifying /instances20:29
hub_capthe only one i can think of is deleting a master20:29
hub_capu should be able to add a RO user to a slave, u should be able to resize a slave to something that might not be ok for the cluster20:30
hub_capthe permutations for what u shouldnt be able to do are NOT small20:30
hub_capand are different for different cases of a cluster20:30
hub_capand different types of clusters20:30
hub_caphell they are probably close to infinite given different circumstances20:30
hub_capso id rather keep things in /instances, and just limit very few cases for modifying an instance in a cluster20:31
hub_capif we find something that should _never_ be done, then so be it, we add it as a failure case in /instances20:31
vipulso.. would /cluster has the same set of operations that /instances has (create user, add db, etc)20:31
hub_capno20:31
hub_capit would be helper for doing things to an entire cluster20:31
hub_capand thats it20:31
imsplitbitcreate/resize/delete20:32
hub_capadd db/create user20:32
hub_capwe cant really define how a user will use a slave20:32
vipulbut they may not always be slaves right20:32
hub_capi had a extra db on slaves on some of my setups w/ urchin a while ago20:32
hub_capand different users20:32
vipulyou may have a galera cluster.. wehre the users / schemas will all be replicated across20:32
vipulno matter which one you write to20:33
hub_capyes20:33
hub_capso given that case it doesnt matter where u write it20:33
hub_capso then we cant restrict it20:33
vipulso why not write it to /cluster.. why do they have to pick one20:33
imsplitbitor shouldn't20:33
hub_capthere is no "master master" there20:33
hub_capbecause i want to add a RO user to slave 120:33
hub_caphow do i do that20:33
imsplitbitvipul: I think there is a good case to add some helper things to /cluster20:33
imsplitbitbut it isn't needed to implement a cluster and support it20:33
SlickNikSo what's the ultimate reason for not doing this on the cluster but doing it on the individual instances?20:33
SlickNikDuplication of code?20:34
hub_capduplication of schema20:34
hub_capand complication ot end user20:34
hub_cap1/2 my instances in one /path and 1/2 in /another seems very unintuitive20:34
imsplitbitagreed20:34
hub_capi have a bunch of /instances, period20:34
hub_capat the end of the day thats what they are anyway20:34
hub_capand vipul im not trying to define what we can and cant do on /clusters20:35
hub_capim tryin to get consensus on where /instances live20:35
vipulIt seems like as we do auto-failover, etc.. we'd want to abstract the actual 'type' of instance away from the user.. so the user only sees a db as a single endpoint20:35
*** abaron has joined #openstack-meeting-alt20:35
vipulin a cluster.. you could see a single endpoint that's load balalcned also20:35
imsplitbitvipul: if we do that then you have to separate replication from clustering20:35
imsplitbitbecause they aren't the same20:35
hub_cap:o20:36
imsplitbityet they share alot of functionality20:36
vipulbut is it that different?  if we promote a slave to a master on behalf of the user.. and spin up a new slave for them20:36
hub_capwe will still have list /clusters20:36
hub_capand u can show a single endpoint20:36
vipulfrom the user's perpective it doesn't matter if it's a multi-master or single master/slave20:36
hub_capfwiw tho all clsutering apis dont use single endpoint20:36
vipulagreed, we can't yet20:37
hub_capi believe tungsten uses its own internal code to determine where to write to20:37
grapexI've got a question as the infamous No-NoSQL guy.20:37
cp16netimsplitbit: as i am catching up on this feature those links of the API and API with instances are the 2 proposed plans we are debating?20:37
hub_capin its connector api20:37
imsplitbitvipul: but you're assuming use, what if I have a db on one instance and I want to keep a spare copy of it warm on another host but also want to use that host as a db server for a complete different dataset?20:37
hub_capbut again, if you list /clusters we can provide a single endpoint20:37
imsplitbitcp16net:  yes20:37
imsplitbithub_cap: exactly20:37
hub_capbut if you _want_ you can enact on a slave / "other master" in /instances20:37
hub_capim saying dont remove them from /instances20:38
imsplitbitif the cluster type supports a single endpoint then /clusters should return that information20:38
hub_capwe can still totally do what vipul wants in /clusters20:38
hub_capyou are essentially paying for every /instance20:38
hub_capso we should show them20:38
hub_capeven if u have auto failover20:38
hub_capu buy 2 or 3 or X instances20:38
hub_capand use one ip20:38
vipulyea i think the instnace info should be visible.. but at some point in the future.. we may have a single dns entry returned or something20:38
demorrisi would separate out billing from it though20:38
hub_capif i was paying for 9 instances in a auto failover cluster, id like to see them all in /instances20:38
imsplitbitvipul: and that will be returned with the cluster ref20:39
hub_capdemorris: there is no billing in it, just providing a point from a customer point of view20:39
imsplitbitif applicable20:39
hub_capvipul: we can do that, now even if applic... grr dsal20:39
demorrishub_cap: k20:39
hub_capjust say what i was gonna say why dont ya20:39
hub_capi got a can of these baked beans too20:39
konetzedvipul: why couldnt you create a single dns entery returned for the cluster but still have dns for each instance like it is now?20:39
hub_capid want that20:40
hub_capcuz if i had to connect to instance X to clean it up manually20:40
hub_capid want to be able to20:40
imsplitbitkonetzed: I would think most people would20:40
vipulkonetzed: I guess you could.. but then the customer would end up breaking if they happened to use one of the instance entries20:40
hub_caplike auto-failover is not working, let me get on node X to promote it20:40
konetzedvipul: you can only protect stupid so much20:40
hub_capHAH20:40
konetzed:D20:40
cp16nethah20:40
hub_capya none of yall protected from me20:40
vipulthis is really a question of how much do we hide from the user, so even if they are stupid they can use it20:40
hub_capwhy u think they moved me to cali20:40
hub_capsure vipul and i think we could concede on some of that20:41
hub_capthats not set in stone20:41
konetzed+120:41
hub_capwe could even rev the api a bit when we have > 1 cluster20:41
hub_capSHIT20:41
hub_capim out of power20:41
SlickNikOkay, so I guess it depends on what we're shooting for here.20:41
imsplitbitdude20:41
imsplitbithub_cap: FAIL20:41
hub_capsweet found a plug20:41
konetzedvipul: i think you will find enough arguments for each way20:41
vipulagreed20:41
imsplitbitwell there's 2 types of users right?  power users and button pushers20:42
imsplitbityou need to find enough to facilitate both20:42
hub_capyes20:42
hub_capor provide a RBAC solution20:42
hub_capthat allows the installer to decide20:42
SlickNikIf we're looking for a managed DB solution here that exposes a simple clustering API to the user, then I think that is probably better served by having a single endpoint for it.20:42
hub_capi think we are looking to provide a service that is extensible enough to do that20:43
hub_cap_or_ allow the user access to all20:43
hub_capfrankly20:43
hub_capwe WILL NEVER be able to provide a fully turnkey solution20:43
hub_capotherwise someone else wouldve20:43
hub_capmysql is a tricky beast20:43
imsplitbitSlickNik: no one is arguing against providing a single endpoint for users who want one20:43
hub_capwe will always need to provide a way for a user or operator to get to any instance20:43
vipulone thing to keep in mind is the more that we hide, the less the user can faak us up.. like break our ability to auto-failover20:43
SlickNikBut if we're talking about letting users do things like have an instance as part of a cluster, as well as able to connect to the db directly, there's no way of getting away from a complex clustering API with actions spread across /instances and /clusters20:43
hub_capactions yes SlickNik20:44
hub_capbut entities, no20:44
hub_capthats the first line of agreement20:44
hub_capas long as we are all on the same page there20:44
hub_capit makes the api closer to concrete20:44
hub_capim sure we can, eventually, hide instances if we want to20:44
hub_capshown_to_user=False20:44
hub_capeasy as pie20:44
vipulor at least not allow them to operate on them20:44
hub_caplets solve the easy solution first20:45
hub_capsure vipul20:45
demorrisI always go back to some of this being up to the provider / operator of Trove and separating that out from what the API supports20:45
hub_capmanaged vms20:45
konetzedi was just going to say sounds like were going down a rabbit hole20:45
hub_capwe need that anyway for nova20:45
demorriswhy can't each cluster type have a policy that dictates what can and cannot be done to the cluster or instances themselves20:45
*** sarob has quit IRC20:45
hub_capcuz they can just muck w/ them in nova if you are using their user to prov instances ;)20:45
hub_capyes demorris RBAC20:45
vipuldemorris: +120:45
demorrisif my policy says, individual operations are not supported on /instances, then you don't allow it20:45
hub_capi said that like ~5 min ago20:45
vipulit really is a deployment type of decision it seems20:45
*** sarob has joined #openstack-meeting-alt20:45
espSlickNik: having a single endpoint might restrict users from building a system that reads from all nodes and only writes to one.20:46
hub_caplets just solve the easy solution first tho20:46
hub_capwe are getting out of hand20:46
demorrishub_cap: you know I can't follow every message in here…brain won't allow it :)20:46
hub_capwe need to solve master/slave20:46
hub_capbefore we get to magical clsutering20:46
hub_capdemorris: transplant ;)20:46
vipulhub_cap: is master/slave /cluster then?20:46
hub_capwe understand the set of actions in /clusters can grow20:46
hub_capthats fine20:46
hub_capyes20:46
vipulok20:46
hub_capbut both instances are avail via /instances20:46
imsplitbitI don't like the use of the word clusters for replication because it implies too much20:47
hub_capand u can resize the slave down via /instances/id/resize20:47
imsplitbitbut we can't think of a better term for it20:47
* hub_cap shreds imsplitbit with a suspicious knife20:47
* hub_cap boxes imsplitbit with an authentic cup20:47
* hub_cap slaps imsplitbit around with a tiny and bloodstained penguin20:47
* hub_cap belts imsplitbit with a medium sized donkey20:47
* hub_cap tortures imsplitbit with a real shelf20:47
imsplitbit:)20:47
imsplitbitI won't give up that fight20:47
imsplitbitbut I acknowledge that it doesn't need to be fought right now20:48
vipuleven though cluster is overloaded, it does fit even if it's master/slave20:48
vipulimo20:48
hub_capdoes what i say make sense vipul?20:48
hub_capcreate master slave via /cluster20:48
hub_capresize both nodes cuz youre on oprah, /cluster/id/resize20:48
vipulyep, makese sense20:48
hub_capresize indiv node cuz youre cheap /instance/id/resize20:48
*** kevinconway has quit IRC20:49
hub_capcreate db on slave cuz u need a local store for some operation on an application /instance/id/db20:49
*** kevinconway has joined #openstack-meeting-alt20:50
SlickNikwhat about create db/user on master? does that go through /instance/id or /cluster/id?20:50
hub_capif u want to create it on all of them, create it on the master ;)20:50
hub_capu _know_ u have a master, why not let the user just do that20:50
hub_capthis only applies for master/slave20:50
*** sarob has quit IRC20:50
imsplitbithub_cap: I think that is the least prescriptive approach20:50
hub_capfor what its worth20:50
vipulright, but we should allow it to be created on the /cluster as well20:50
hub_cap /clusters/id/resize is NOT going to be easy20:51
hub_capi have 9 instances20:51
hub_cap3 failed20:51
hub_cap1 is now broken20:51
hub_capthe master just went down20:51
SlickNikSo is there a difference between create db on master vs create db on cluster?20:51
konetzedfix it so it never fails20:51
hub_capwhat do i do20:51
hub_capkonetzed: youre out yo mind20:51
SlickNiki.e. if I do /instance/id/db CREATE, it is a local instance that will not get replicated?20:51
konetzedhub_cap: the hp ppl didnt know that already20:51
SlickNikon the master20:51
vipulhub_cap: but that same scenario would exist if you did a single instance resize... where that one failed20:51
vipulnow the user is stuck..20:52
vipulcuz they have to fix it20:52
vipulwhere as in /cluster/resize we'd fix it20:52
hub_capright but thats up to you to control vipul20:52
hub_capthink about the permutations there vipul20:52
konetzedSlickNik: i think user adds on the master would be replicated20:52
hub_caplets at least defer it20:52
hub_captill we see some real world scenarios20:52
hub_capid prever "acting" on clusters to come later20:52
SlickNikkonetzed: what about db adds?20:52
hub_capbecause its /hard/20:52
konetzedimsplitbit: arnt all crud operations done on the master sent to slaves?20:53
espresizing a cluster sounds like it might easier to migrate the data to a new cluster..20:53
hub_cap:P esp20:53
imsplitbitkonetzed: yes20:53
esprather than trying to resize each individual node if that's what we are talking about :)20:53
vipulesp: that could be one way to do it..20:53
hub_capcreate db will go to a slave if issued on a master20:53
imsplitbitesp: maybe so but if the dataset is 500GB that may not be true20:54
SlickNikimsplitbit: you can choose to replicate only certain dbs if you so desire20:54
konetzedimsplitbit: so to answer SlickNik's question user and db adds all get replicated20:54
espif you asked me to individually resize a 9 node cluster I would scream at you.20:54
hub_capesp: even if 90% of the time it failed for you if u did /cluster/id/resize20:54
imsplitbitesp: agreed which is why we would want to support doing a cluster resize20:54
hub_captaht means you would have to issue it 9 times anwyay20:54
hub_capand if one failed to upgrade20:54
imsplitbitbut hub_cap's point is it's not gonna be easy20:54
hub_capthen u gotta downgrade the others20:54
espimsplitbit: I gotcha, doesn't cover all cases.20:54
hub_capdouble downtime20:54
hub_capright20:54
SlickNikimsplitbit: so why should we allow extraneous dbs (outside the cluster) to be created on slaves but not on master?20:54
hub_caplets defer "Actions" to /clusters20:55
hub_capto get _something_ done20:55
hub_capto summarize20:55
hub_capwe have 5 min20:55
hub_capinstances are all in /instances20:55
imsplitbitSlickNik: because it's a mistake to assume what a user will want to do20:55
konetzedi think we need to get past resizes failing, because that has nothing to do with clusters20:55
hub_capu can enact on them indiv20:55
vipulSlickNik: good point.. is this a valid use case even?  i'm no DBA.. but why would you do that20:55
hub_capok maybe no summary............20:55
* hub_cap waits for the fire to calm down b4 going on20:55
vipuldo DBAs create dbs on slaves...20:55
hub_capwhy not vipul20:55
imsplitbitvipul: I have configured db setups for very large corporations in our intensive department and I can say it happens often20:56
hub_capyes20:56
vipulbecause at any time, you'd promote that20:56
hub_capi have done it20:56
hub_capnot necessarily vipul20:56
vipuland you'd need to do it again on the new slave20:56
hub_capread slaves are not 100% promotion material20:56
hub_captheya re sometimes to _juist_ read20:56
demorrisyou may just have a slave to run backups on20:56
hub_capwe cant guaranteee everyone will use it the same way20:56
vipulyea I get that.. but they are reading master data20:56
hub_caphence the need to _not_ be perscriptive20:56
hub_capya and could be 10 minutes behind vipul20:56
hub_capok lets chill it out20:57
hub_caplet me summarize20:57
vipuldemorris: then the additional dbs you created are also backed up..20:57
hub_capwe have 3 min20:57
vipullol hub_cap20:57
hub_capor ill just decide w/o anyone elses input20:57
hub_capill be the DTL20:57
SlickNikhub_cap: you need a timer bot :)20:57
*** Riddhi has quit IRC20:57
hub_capu can decide what the D means20:57
SlickNikGuido van hub_cap20:57
hub_capsummary20:57
vipulif we have backup slaves.. should those additional DBs/Users be backed  up?20:57
hub_caplets take indiv questions offline vipul plz20:57
vipulsorry :)20:58
hub_caphere is the first cut of the api20:58
hub_capinstances are in /instances, all of them, all visible, all actions can happen to them20:58
hub_cap /clusters is used for create/delete only as a helper api20:58
hub_capthat will be V1 of clusters20:58
demorrishub_cap: and also some atomic actions20:58
hub_capas we decide we need more stuff, we will add it20:58
kevinconwayhub_cap: I'm bought on the idea of instance stuff going in /instances. But does the instance still contain cluster data now?20:58
kevinconwaythis magic "attributes" addition?20:59
hub_capyes kevinconway it will have to, we can decide that stuff later20:59
hub_capthere will be some indication20:59
hub_caponce we have a need for more operations, be them atomic or acting upon many instances, we will add to /clusters20:59
demorrishub_cap: when did we drop having actions on clusters?20:59
hub_capotherwise we will be coding this forever20:59
SlickNikkevinconway: It's necessary if you want to have any sort of ruleset dictating what is possible on the instance vs on the cluster.20:59
hub_capdemorris: i made an executive decison for R120:59
hub_capV120:59
hub_capwe can always add them21:00
hub_capbut if they suck we cant remove them21:00
hub_caplets just get something up21:00
hub_capand working21:00
imsplitbitno actions!!!!21:00
demorrishub_cap: I would vote for V1 to have at least atomic actions - add nodes, resize flavors, resize storage…in that they happen to the whole cluster21:00
kevinconwaySlickNik: you can mark an instance as part of a cluster without modifying the instance resource though21:00
vipulit seems like it's easier to have a /clusters API that's completely isolated from /instances.. if we remove the 'promote' existing instance requirement21:00
hub_capdemorris: we had a whole conversation about problem permutations21:00
hub_capgoto #openstack-trove to contineu21:00
hub_cap#endmeeting21:00
*** openstack changes topic to "OpenStack meetings (alternate)"21:00
vipulboo21:00
openstackMeeting ended Wed Jul 17 21:00:48 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
demorrishub_cap: i am sure you did: :)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-17-20.00.html21:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-17-20.00.txt21:00
openstackLog:            http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-17-20.00.log.html21:00
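(A sketch of the V1 shape hub_cap summarized before the meeting closed: every node stays visible and actionable under /instances, with /clusters acting as a thin create/delete helper. The endpoint root, payload fields and IDs below are illustrative assumptions, not a published spec.)

    # Illustrative client calls against the proposed split (names assumed).
    import requests

    BASE = "https://trove.example.com/v1.0/TENANT_ID"   # hypothetical endpoint
    HEADERS = {"X-Auth-Token": "AUTH_TOKEN"}

    # Helper API: create a master/slave pair in one call.
    requests.post(
        BASE + "/clusters",
        headers=HEADERS,
        json={"cluster": {"name": "prod-db",
                          "type": "mysql-master-slave",
                          "flavorRef": "7",
                          "volume": {"size": 10},
                          "nodes": 2}},
    )

    # Every node is still an ordinary, visible instance...
    requests.get(BASE + "/instances", headers=HEADERS)

    # ...so per-node actions stay on /instances: e.g. an extra read-only user
    # on one slave,
    slave_id = "SLAVE_UUID"   # hypothetical
    requests.post(
        BASE + "/instances/" + slave_id + "/users",
        headers=HEADERS,
        json={"users": [{"name": "reporting", "password": "secret",
                         "databases": [{"name": "urchin"}]}]},
    )

    # ...or resizing a single slave down without touching the rest of the cluster.
    requests.post(
        BASE + "/instances/" + slave_id + "/action",
        headers=HEADERS,
        json={"resize": {"flavorRef": "2"}},
    )
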
espthx!21:01
SlickNikkevinconway: Good point, but it would suck if the user is not given an indication of this when he does an instance show / GET21:01
imsplitbitkevinconway: but if you have that information why not display it or provide it back so that the end user can build interesting tools by consuming that information21:01
SlickNikthanks all.21:01
SlickNikgood discussion21:01
SlickNikgreat points all around.21:01
imsplitbit#agree21:01
grapexSlickNik: Thanks again for the docs, I look forward to that. It's going to be really nice.21:01
*** SergeyLu_ has quit IRC21:02
*** kgriffs has left #openstack-meeting-alt21:02
*** konetzed has left #openstack-meeting-alt21:02
SlickNikgrapex: No problem! Thanks for the discussion around the restart tests. I'll be back with more info, and hopefully we can fix the test and not pull it out.21:02
grapexAgreed21:02
grapexSee you!21:02
*** saurabhs has left #openstack-meeting-alt21:03
*** sarob has joined #openstack-meeting-alt21:03
*** imsplitbit has left #openstack-meeting-alt21:04
*** jrodom has quit IRC21:06
*** pdmars has quit IRC21:11
*** djohnstone has quit IRC21:11
*** lastidiot has joined #openstack-meeting-alt21:12
*** cp16net is now known as cp16net|away21:14
*** cp16net|away is now known as cp16net21:16
*** pcm_ has quit IRC21:19
*** amytron has quit IRC21:22
*** pcm_ has joined #openstack-meeting-alt21:29
*** pcm_ has quit IRC21:29
*** pcm_ has joined #openstack-meeting-alt21:30
*** akuznetsov has quit IRC21:31
*** lastidiot has quit IRC21:32
*** abaron has quit IRC21:33
*** akuznetsov has joined #openstack-meeting-alt21:35
*** dhellmann is now known as dhellmann_21:40
*** Riddhi has joined #openstack-meeting-alt21:45
*** megan_w has quit IRC21:47
*** cp16net is now known as cp16net|away21:55
*** cp16net|away is now known as cp16net21:56
*** lastidiot has joined #openstack-meeting-alt21:58
*** jergerber has quit IRC22:00
*** dosaboy_ has quit IRC22:03
*** dosaboy has joined #openstack-meeting-alt22:04
*** pcm_ has quit IRC22:07
*** datsun180b has quit IRC22:12
*** jrodom has joined #openstack-meeting-alt22:16
*** IlyaE has quit IRC22:17
*** jrodom has quit IRC22:19
*** vipul is now known as vipul-away22:27
*** sarob has quit IRC22:32
*** sarob has joined #openstack-meeting-alt22:32
*** sballe has quit IRC22:35
*** sarob has quit IRC22:37
*** rnirmal has quit IRC22:43
*** KennethWilke has quit IRC22:44
*** vipul-away is now known as vipul22:47
*** dhellmann has joined #openstack-meeting-alt22:47
*** lastidiot has quit IRC22:47
*** dhellmann_ has quit IRC22:47
*** gals has quit IRC22:47
*** lastidiot has joined #openstack-meeting-alt22:47
*** gals has joined #openstack-meeting-alt22:48
*** IlyaE has joined #openstack-meeting-alt22:53
*** demorris has quit IRC22:55
*** sarob has joined #openstack-meeting-alt22:56
*** tanisdl has quit IRC23:04
*** EmilienM has quit IRC23:08
*** tanisdl has joined #openstack-meeting-alt23:11
*** EmilienM has joined #openstack-meeting-alt23:15
*** markwash has quit IRC23:17
*** megan_w has joined #openstack-meeting-alt23:25
*** lastidiot has quit IRC23:30
*** megan_w has quit IRC23:37
*** megan_w has joined #openstack-meeting-alt23:37
*** megan_w has quit IRC23:42
*** jcru has quit IRC23:47
*** Riddhi has quit IRC23:56
*** sarob_ has joined #openstack-meeting-alt23:56
*** grapex has quit IRC23:57
*** sarob has quit IRC23:59
