20:02:19 #startmeeting trove
20:02:19 Meeting started Wed Nov 13 20:02:19 2013 UTC and is due to finish in 60 minutes. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:23 The meeting name has been set to 'trove'
20:02:43 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:02:48 o^/
20:02:51 o/
20:03:05 o/
20:03:18 o/
20:03:19 so i suspect a good bit of trovesters might miss this week's meeting heh
20:03:28 0?
20:03:42 esmute: u should get that checked out
20:03:42 o/
20:03:43 o/
20:03:51 o7
20:03:55 >o< meow
20:04:00 lol
20:04:03 so, lets talk
20:04:09 o/
20:04:10 #topic summit
20:04:53 so i think the summit went well. we didn't have a ton of visitors but we did have good participation from the heat crew and the testing crew during those meetings... as well as the guest talk
20:05:04 it was very good to have some real track time this time around
20:05:24 we discussed clustering a lot and there is still a bunch of ideas that have not coalesced, i feel
20:05:28 a big difference from the last conference in terms of interest
20:05:35 yes juice i think so
20:05:44 also a lot of terrible beer
20:05:51 adrian otto seemed pretty adamant on keeping it as it is
20:05:57 * CaptTofu lurks
20:06:05 juice: the beer?
20:06:13 hi CaptTofu
20:06:16 keep it terrible
20:06:17 yes kevinconway the beer
20:06:23 hey man! How was Vegas?
20:06:30 offline ;)
20:06:34 :)
20:06:49 yeah capttofu this is a serious meeting :)
20:06:53 We do need to close on that soon though hub_cap
20:07:02 :)
20:07:02 vipul: the beer?
20:07:05 so i think we need to start some ML threads on the difference between clustering and replication
20:07:19 and using service_types instead of cluster_types
20:07:23 kevinconway: i know what's on your mind :)
20:07:43 and really deciding on if trove is 1) a diy replication mechanism, or 2) a set of patterns you can deploy
20:07:57 that is a good way to slice it
20:08:03 hub_cap: service_types instead of cluster_types?
20:08:15 #action hub_cap to start a really long never ending ML thread for clustering and replication
20:08:27 can't we just offer an api where people log into the database and set up their own replication?
20:08:32 that sounds the easiest
20:08:33 yogesh: right, the thought being "multi-master" as a service_type or "master-slave-slave" or whatever
20:08:45 kevinconway: lol im pretty sure we offer that today ;)
20:08:53 o/
20:09:04 so lets not discuss this for a full hr today
20:09:10 but instead send out some ML stuff
20:09:21 hub_cap: wouldn't "cluster-type" be more specific and make more sense...
20:09:25 multi-master means no more than two masters and N slaves. avoid ring replication like the plague.
20:09:27 anything else notable from the trove side of things?
20:09:42 so lets not discuss this for a full hr today
20:09:44 but instead send out some ML stuff
20:09:45 sure
20:09:49 hehe
20:09:50 agreed
20:09:54 CaptTofu: i fully expect u to reply
20:10:01 yogesh: u too
20:10:10 CaptTofu: wouldn't that be Bi-Master?
20:10:12 can someone kickstart it by summarizing the discussion points that happened in HK?
20:10:17 (on the mailing list of course)
20:10:22 kevinconway: dual-master.
20:10:28 yes amcrn i made it an action item for me
20:10:34 excellente', thanks
20:10:37 np
20:10:38 makes sense
20:10:52 thats gonna cause some contevercy but.... whats new
20:11:06 *controversy
20:11:14 CaptTofu: exactly, not really multi-master.
lets just talk about this for an hour
20:11:23 kevinconway: i will kick u
20:11:24 ;)
20:11:29 lol
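To make the "service_types instead of cluster_types" idea above concrete: the thought is that a deployer ships named replication patterns ("master-slave-slave", "multi-master") and a user simply asks for one by name. A minimal sketch of what such a create request could look like, assuming a hypothetical service_type field -- this is illustrative only, not the Trove API of the time:

    # Hypothetical create payload: the user names a replication pattern
    # ("service_type") instead of describing a bespoke cluster layout.
    # Field names beyond the standard instance fields are assumptions.
    import json

    create_request = {
        "instance": {
            "name": "my-replicated-mysql",
            "flavorRef": "7",
            "volume": {"size": 2},
            # one of the deployer-defined patterns discussed above
            "service_type": "master-slave-slave",
        }
    }
    print(json.dumps(create_request, indent=2))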
20:11:38 the other big controversial item was the guest i think
20:11:45 who was it?
20:11:50 * CaptTofu stays stoic
20:11:52 :-)
20:11:58 bah-dum-chs
20:12:02 some people said the guest should be as dumb as pushing ssh commands, effectively, to it
20:12:13 kevinconway: ill give u that one, it was pretty funny
20:12:28 so i think the "guest" world in the openstack ecosystem is also all over the place
20:12:30 yeah that wo; sudo rm -rf /n't backfire
20:12:49 lol datsun180b
20:13:02 isn't the guest one of the differentiating factors for trove?
20:13:04 ok so do any of the other people who were at the summit have stuff to add?
20:13:15 i.e. the guest does SOMETHING SPECIFIC AND USEFUL
20:13:19 redthrux: well... other projects are using guests too
20:13:21 mirantis put up their murano agent for discussion and suggested we take a look at that
20:13:34 savanna and murano and heat all have agents
20:13:39 not really, except that no major decision was reached on anything
20:13:45 right
20:13:53 juice: isn't that a cookie… no wait that's milano
20:13:55 I can't recall Clint's proposal other than keeping the agent simple
20:13:56 well the testing stuff went well
20:14:01 that was decided upon
20:14:01 right - I understand that - so what's the argument - we should be using a guest agent? or everyone using the SAME guest agent?
20:14:12 that was the question redthrux
20:14:17 lots of people writing agents.. why not standardize
20:14:27 gotcha - it sounded like everyone has their own requirements though
20:14:46 redthrux: the guest just needs to be infinitely configurable to specific needs
20:14:46 seems an agent could be extendable and pluggable
20:14:46 but there is common ground between them so standardize some of it i guess?
20:14:50 the murano agent is universal because it pulls from a repo of commands which can do any task. the agent just sits there and listens for tasks
20:14:53 the outcome of testing is that the "reddwarf" job can become gating once we move it into the devstack-gate job, but itll only gate for trove
20:15:00 like some kind of language that people can describe logic in
20:15:09 do any of the other agents have a fully developed upgrade/broadcast story?
20:15:10 oops ok I guess we have moved on to tempest testing
20:15:11 whereas the tempest tests will be run for all teams, and, nova, for instance, cant break trove
20:15:26 kevinconway: are you talking about Go again?
20:15:44 :D
20:15:48 in theory you never need upgrades if you make it dumb enough :D
20:15:54 i think Go is a great implementation of a universal guest agent
20:16:05 you just describe what you want and it compiles it and makes it happen
20:16:06 * hub_cap moves on
20:16:07 like magic
20:16:15 so +1 on making the "bones" pluggable
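The "dumb guest" being debated above boils down to an agent with no datastore knowledge at all: it sits on a queue and executes whatever it is handed, murano-style. A rough sketch of that shape, with the transport stubbed out -- the queue mechanics, payload format, and shelling out are all assumptions for illustration, not the murano or trove agent:

    # Minimal "universal" agent loop: all intelligence lives server-side,
    # the guest just listens for tasks and executes them.
    import json
    import subprocess
    import time

    def fetch_task():
        """Stand-in for a message-bus consumer; returns a task dict or None."""
        return None

    def run_task(task):
        # e.g. {"command": "apt-get install -y mysql-server"}
        return subprocess.run(task["command"], shell=True,
                              capture_output=True, text=True)

    while True:
        task = fetch_task()
        if task is not None:
            result = run_task(task)
            print(json.dumps({"rc": result.returncode, "out": result.stdout}))
        time.sleep(1)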
20:16:21 so lets discuss this broken gate
20:16:25 #topic broken gate
20:16:26 lets do
20:16:32 there is a devstack review in flight to fix the gate
20:16:32 * hub_cap is not happy that its been broken for so long
20:16:39 #link https://review.openstack.org/#/c/55992/
20:16:45 ok. what do we need to do to get this fast tracked?
20:16:56 some +2's
20:17:04 that would fix it overall
20:17:05 but...
20:17:09 why isn't it affect other teams :s
20:17:16 god my grammer sucks today
20:17:17 i made a fix that would help us in themean time
20:17:20 #link https://review.openstack.org/#/c/56273/
20:17:35 and speling
20:17:44 that review should pass here in a min and it just turns off the vnc proxy service
20:17:48 I think we should fix it on our end if possible
20:17:58 cp16net: +1
20:18:09 awesome. lets make sure this passes and ill +2/approve it
20:18:13 arg...
20:18:14 ok so the real question
20:18:22 why did it take 2 wks to get merged in?
20:18:22 looks like the second review i made is goofed
20:18:41 hub_cap: i think its because nobody was around
20:18:43 ok cp16net u can fix it in a bit, but i think its the right track for now
20:18:48 it started failing the friday everyone left
20:18:57 cp16net, yeah
20:19:08 my review was the first one which failed
20:19:14 it has been multiple issues tho
20:19:26 seems like that is usually the case tho.. when everyone leaves things get broken
20:19:28 jenkins started failing too
20:19:30 cp16net: lol, i guess ill buy that since i was in HK.. i just assumed others wouldve tried to push fixes while the 6 of us were at the summit ;)
20:19:51 we tried
20:19:54 ok good
20:19:57 thats all i wanted to hear
20:20:01 we just pretend we work in minecraft
20:20:05 HAH
20:20:07 when you guys leave we unload the chunks
20:20:07 i tried to make reviews that would resolve what i could fix
20:20:09 lol
20:20:19 cool. lets make sure cp16net's review gets fixed/merged today
20:20:34 good work team! moving on
20:20:35 hub_cap, agreed
20:20:39 yeah install works but kick-start is failing currently
20:20:53 #topic Roadmap for IceHouse-1
20:21:04 denis_makogon: do explain "Documenting mechanism" plz
20:21:05 that is mine
20:21:29 all teams should put a real milestone on their BPs
20:21:45 to make the I1 feature-list a bit cleaner
20:21:50 yes i agree
20:21:53 as well as bugs
20:22:02 yes
20:22:06 so far i think me and the mirantis guys are the only ones who put that in ;)
20:22:13 reliably ^ ^
20:22:22 if u dont plan on working on it, make it "trove next"
20:22:29 what is the icehouse 1 date
20:22:37 then we could list all features with help of launchpad
20:22:38 is that the default?
20:22:44 * hub_cap goes to launchpad for dates
20:22:51 vipul, end of december
20:22:54 #link http://launchpad.net/trove/+milestone/icehouse-1
20:22:58 wait, did we find out what "Documenting mechanism" was?
20:23:00 2013-12-05
20:23:08 kevinconway: i think its your mouse and fingers
20:23:28 launchpad is the answer kevinconway
20:23:44 kevinconway, with correct milestones we could go to launchpad and then list all features for I1
20:23:45 when is icehouse-3?
20:23:55 roughly before the next summit?
20:24:02 that's like in 3 weeks!
20:24:05 juice: that info is in launchpad
20:24:09 http://launchpad.net/trove/+milestone/icehouse-3
20:24:14 denis_makogon: ok so when i make bugs or BP's i should just let you know so you can set all that up right?
20:24:26 kevinconway: no sir
20:24:42 so fwiw, i have to do _all_ this manually as we get close to a milestone
20:24:58 that sounds terrible
20:25:07 im sure some of u remember me asking "did u do this during this timeperiod" during the last round
20:25:08 hub_cap, does we have roadmap for IceHouse release ?
20:25:10 and looking thru git history
20:25:14 *do
20:25:25 denis_makogon: there is no real roadmap other than what companies pledge to do
20:25:38 so i cant really set a roadmap... i can say that id prefer to see X over Y first
20:25:53 if only there was some kind of machine that could automate tasks related to managing information....
20:25:56 hub_cap, i know, but we could do such a thing
20:26:27 again this just goes back to what you originally said. as companies set blueprints, set the milestone they expect them to be done
20:26:31 thats the roadmap
20:26:46 hub_cap, ok
20:26:55 i have my roadmap... thats to get testing integration :)
20:27:09 but yes i think this is something that we need to do cuz it kills me
20:27:14 literally!
20:27:15 hub_cap, got you
20:27:20 thank you for bringing it up denis_makogon
20:27:41 hub_cap, thanks
20:27:52 what it also means is that we need to kick back blueprints that dont have info like this
20:28:08 it would be ideal to have some shared goals
20:28:08 bugs are slightly different because i dont think that people are necessarily working on bugs if they report them
20:28:17 ie. replication, testing (tempest)
20:28:28 juice, yes
20:28:35 those being our do-or-die type of features
20:28:39 ya juice let me touch base w/ some of the other PTL's to see how they handle this
20:28:40 hub_cap: do we assume that the creator of a blueprint is working on it?
20:28:48 +1
20:28:51 kevinconway: for the most part i would assume that
20:28:51 then the rest is up to individual teams
20:28:56 like we should have some clustering implemented in I
20:28:57 companies, individuals
20:29:01 or separate the guest agent, etc
20:29:14 #action hub_cap to talk to PTL's about what a community roadmap is and how they handle it
20:29:37 ok so time to move on? we good w this subject based on what we now know?
20:29:45 and ill report back w/ more info next wk
20:29:52 it would also be beneficial to have some overarching direction so that we all pull in the same direction
20:30:12 sort of a release kick off if you will
20:30:40 maybe we can do that in a week or two when we sort out the features and goals
20:31:47 so, i think we could move on to the next topuc
20:31:52 *topic
20:32:10 wow oops
20:32:11 hub_cap is posting in openstack-trove
20:32:13 hahahah
20:32:14 HAHAHA
20:32:21 HAHAHAAHAH
20:32:26 #topic Multi-Datastore/Versioning-Support
20:32:30 ashestakov_: thats you
20:32:33 hi all
20:32:35 * juice goes to read openstack-trove
20:32:41 want to discuss a few points
20:32:45 juice, come back
20:32:58 ashestakov_, go
20:33:02 first, datastore_engine should be renamed to datastore_manager
20:33:11 engine is confusing
20:33:15 i think thats fair
20:33:19 thanks denis_makogon
20:33:24 engine is usually a mysql thing (thats what i think of at least)
20:33:33 so +1 to renaming that
20:33:38 agreed with ashestakov_
20:33:39 engine_manager :)
20:33:47 is this part of the datastore_type api?
20:33:50 is datastore_manager ok?
20:33:51 wait thats a conductor
20:33:53 vipul, yes
20:33:59 choo choo
20:34:07 what's wrong with 'type'?
20:34:36 ok i thought this was internal code... what exactly are we talking about ashestakov_? can u link something?
20:34:54 if we change the api again i think ashestakov_ is going to pull his hair out
20:35:24 its not api, its just an option name
20:35:38 but it should be renamed in a few places
20:35:53 ok
20:36:04 i think, in general, engine is a bad name if we are > just mysql
20:36:10 "engine" is really confusing
20:36:22 vroom
20:36:23 i am confused as to what we are renaming
20:36:27 lets decide what is better
20:36:30 ashestakov_, are there only renamings or something else ?
20:36:54 maybe we should just push this api change until the Jaundice release.
20:36:57 amcrn proposed "datastore_manager", i agree with it
20:37:11 #link https://review.openstack.org/#/c/47934/
20:37:11 kevinconway: love the name
20:37:15 to elaborate on what ashestakov_ is referring to
20:37:17 https://review.openstack.org/#/c/47934/9/bin/trove-guestagent
20:37:25 manager = dbaas.datastore_registry().get(CONF.datastore_engine)
20:37:36 datastore_manager
20:37:37 done
20:37:39 do it
20:37:47 juice, agreed
20:37:50 or engine_manager
20:37:58 flip a coin
20:37:59 haha i was just gonna say "i bet amcrn dislikes engine too"
20:38:05 and i see the first comment lol
20:38:08 hub_cap, lol
20:38:12 :D
20:38:20 just a config option.. manager is fine
20:38:36 yes
20:38:38 vipul: well, it changes the datamodel as well
20:38:42 ashestakov_, is it all or something else ?
20:38:51 aka https://review.openstack.org/#/c/47934/9/trove/db/sqlalchemy/migrate_repo/versions/016_add_datastore_type.py
20:38:52 datastore_manager do it
20:39:02 ok, ill rename it
20:39:05 next one
20:39:07 i think manager makes sense
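For reference, the rename just agreed maps to a one-line config change around the snippet quoted from the review. A sketch in oslo.config terms, assuming one might keep the old name as a deprecated alias during the transition (the alias is an assumption, not necessarily what the patch does):

    # The guest agent picks its datastore manager from config; only the
    # option name changes, from datastore_engine to datastore_manager.
    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('datastore_manager',
                   default=None,
                   deprecated_name='datastore_engine',  # old, confusing name
                   help='Manager the guest agent loads, '
                        'e.g. mysql, redis, cassandra.'),
    ])

    # usage, per the line quoted above from the review:
    # manager = dbaas.datastore_registry().get(CONF.datastore_manager)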
20:39:23 default image_id should be removed from datastore
20:39:34 and operators should specify an image for each version
20:40:30 im ok w/ that too
20:40:35 makes sense.. there should be at least 1 version entry
20:40:49 so, image_id is per version, not per datastore_type ?
20:41:40 any opposition for it?
20:41:43 you should have an image per version
20:41:49 correct, see my comment @ #29 @ https://review.openstack.org/#/c/47934/9/trove/datastore/models.py for the reasoning
20:42:34 ah, makes sense
20:42:36 ++
20:43:03 i'm ok w/ that
20:43:04 vipul: why would you need a datastore type api if you had an image per version?
20:43:42 because i'm not booting images.. i'm booting a datastore
20:43:52 vipul, ++
20:43:59 i don't see the distinction here
20:44:06 lets consider it decided...
20:44:30 i'm saying there should be at least one image.. but it's possible that multiple versions really have the same image
20:44:36 i thought the whole point of the datastore type api was to have one image that could be molded into the one we wanted
20:44:51 i thought that was ashestakov_'s original vision
20:44:57 and install the packages on creation?
20:45:09 yep if a deployer chooses to
20:45:15 but the package info needs to come from somewhere..
20:45:20 and that's the datastore version
20:45:39 ultimately the point of this API is so we can return a NotImplemented when the underlying datastore doesn't support users right?
20:46:00 kevinconway: lol
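Schematically, the model change being agreed here moves image_id off the datastore and onto each version, which is also where the package list lives (so a shared base image can be specialized at creation if the deployer chooses). A sketch of the two tables in SQLAlchemy declarative style; column names beyond image_id are illustrative assumptions -- the real schema is in the 016_add_datastore_type.py migration linked above:

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Datastore(Base):
        __tablename__ = 'datastores'
        id = Column(String(36), primary_key=True)
        name = Column(String(255))     # e.g. 'mysql'
        # note: no default image_id here any more

    class DatastoreVersion(Base):
        __tablename__ = 'datastore_versions'
        id = Column(String(36), primary_key=True)
        datastore_id = Column(String(36), ForeignKey('datastores.id'))
        name = Column(String(255))     # e.g. '5.5'
        image_id = Column(String(36))  # every version names its image
        packages = Column(String(511)) # installed on creation, optionally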
20:46:51 did we have a netsplit?
20:46:57 tap tap tap is this thing on?
20:47:03 yes
20:47:15 and one more thing
20:47:29 hub_cap: we're ignoring you
20:47:32 i think we should implement subcommands in trove-manage
20:47:39 imsplitb1t: 1o1
20:47:50 thats lol in "i cant remember my handle properly"
20:47:57 lol
20:48:00 ashestakov_: example?
20:48:24 hub_cap: see https://review.openstack.org/#/c/47934/9/bin/trove-manage, comment on #107
20:48:40 better?
20:48:44 hub_cap: trove-manage db sync, db wipe, db upgrade .... datastore update bla bla bla datastore version_update bla bla bla
20:49:13 imsplitbit: yes
20:49:27 hub_cap: right now the datastore/version commands are very long; we can split them into a few subcommands
20:49:32 well first thing... id guess that our -manage script is old, ugly and out of date... we need to maybe clean it up
20:49:42 hub_cap: like in nova-manage
20:49:43 and i think its fair to have subcommands there
20:50:00 wasn't nova-manage nuked from orbit in grizzly?
20:50:30 :o
20:50:33 anyway, that's neither here nor there; the open question was whether subcommands or named parameters were more appropriate
20:50:47 and whether that should be forked into a new bp to avoid blocking this review
20:51:01 ok so no matter what the #1 answer is, #2 is yes
20:51:10 agreed, alright
20:51:48 9 mins
20:51:49 im leaning toward subcommands honestly
20:51:56 yes lets move on
20:52:07 i have one last point to discuss
20:52:07 some of these things should be moved to the proper 'trove' cli
20:52:12 subcommands are good, i suppose
20:52:16 vipul: yes
20:52:18 nova has done this mostly
20:52:20 ashestakov_: make it quick
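The subcommand style being leaned toward is the nova-manage "category action" pattern: the long flat commands like "datastore version_update bla bla bla" become nested verbs. A stdlib argparse sketch, with command names taken from the examples above and the argument lists assumed for illustration:

    import argparse

    parser = argparse.ArgumentParser(prog='trove-manage')
    categories = parser.add_subparsers(dest='category')

    # trove-manage db {sync,wipe,upgrade}
    db = categories.add_parser('db').add_subparsers(dest='action')
    for action in ('sync', 'wipe', 'upgrade'):
        db.add_parser(action)

    # trove-manage datastore {update,version_update} ...
    ds = categories.add_parser('datastore').add_subparsers(dest='action')
    ds.add_parser('update').add_argument('name')
    version_update = ds.add_parser('version_update')
    for arg in ('datastore', 'version_name', 'image_id'):
        version_update.add_argument(arg)

    print(parser.parse_args())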
20:52:38 for my changes i need to change trove-integration
20:52:54 it should initialize the db like run_tests does
20:53:14 e.g. add datastores/versions to the db
20:53:19 sure
20:53:55 question is how to do this, just add it in addition to the existing setup?
20:54:04 id say lets add it to devstack
20:54:19 if its something that will more or less be the "default" setup
20:54:58 does devstack trove-manage the glance image? i don't think so, right?
20:55:11 no but it will
20:55:19 i guess for now lets add to -int
20:55:31 and when i do the tempest tests refactoring / image creation we can add it there
20:55:34 good call amcrn
20:55:55 ok moving on?
20:55:57 ok, ill add changes in trove-int
20:55:59 yes
20:56:04 si'
20:56:19 #topic other service support (cassandra, mongo, redis)
20:56:27 since we are all waiting for ashestakov_'s review
20:56:30 i have opinions on this but id like to hear what denis_makogon has to say
20:56:41 we have cassandra and mongo single instance reviews
20:56:54 also we have an update to trove-integration
20:57:26 we could get merged into trove and test out the code with integration
20:57:51 i need to hear what the cores think about it
20:57:54 so for tests, id like to see these tested by the new tempest stuff
20:57:59 but that doesnt exist yet
20:58:03 which is a problem
20:58:03 lol
20:58:07 yes
20:58:09 lets just wait until it does
20:58:13 then worry about mongo
20:58:19 i dont want us to write more old tests
20:58:34 if we can focus on the tempest codebase for new data types
20:58:51 itll make our code cleaner anyway (no more old int-tests)
20:58:52 we reused old test groups for simple instance
20:59:07 the problem is that itll take 1 month i think for all this stuff to get lined up
20:59:21 so if we _need_ cassandra support before then we can try to merge it in
20:59:22 hub_cap, even more
20:59:30 hub_cap: Here's another thing- Tempest is for integrating the entire stack. Should we wait for everything to be finished before writing the first test for some of these new datastore types?
21:00:16 Then again we'll only need the one image- maybe it won't be that bad if datastore types can get in soon.
21:00:22 hi grapex :) meeting over, move to #openstack-trove
21:00:26 #endmeeting