21:04:49 #startmeeting Trove
21:04:50 Meeting started Tue Jun 18 21:04:49 2013 UTC. The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:04:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:04:53 The meeting name has been set to 'trove'
21:05:25 #topic Agenda Items
21:05:43 #link http://eavesdrop.openstack.org/meetings/trove___reddwarf/2013/trove___reddwarf.2013-06-11-21.03.html
21:06:02 esmute: the first one's us.
21:06:09 ok
21:06:23 im guessing is the rd jenkins log?
21:06:32 yes
21:06:35 esmute/SlickNik to figure out the archiving of the reddwarf logs for rdjenkins jobs.
21:06:41 Spoke to clarkb.
21:07:03 Since we have been incubated, we might not need to use our own rd-jenkins for rd int tests
21:07:12 we can leverage openstack-jenkins to do this.
21:07:35 it seems that i have to create a YAML file that describes the jenkins job
21:07:50 and add it to the openstack-infra project
21:07:53 https://github.com/openstack-infra/config/tree/master/modules/openstack_project/files/jenkins_job_builder/config
21:08:00 So basically the best way to move this ahead is to get the old devstack-vm-gate patch that we had for running our int-tests revived, I think.
21:08:20 #winning
21:08:29 once we do this, everything will take care of itself. They already have a way to create an instance with devstack...
21:08:41 we can just install trove in there and run tests
21:08:45 #tigerblood
21:09:06 Okay, so let's convert this action item to do that then...
21:09:07 nice. we can finally put rd-jenkins down like an ailing calf.
21:09:11 the logs will be put in a well-known directory and jenkins will pick them up and move them to the openstack log file server.
21:09:40 hey grapex
21:09:45 can we add an agenda item to talk about the reddwarf->trove module move? (editing wiki sux on phone)
21:09:52 SlickNik: Weird... limechat just crashed.
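For context, the "YAML file that describes the jenkins job" mentioned above is a Jenkins Job Builder definition checked into openstack-infra/config. A minimal sketch of what one looks like; the job name, node label, and build script here are hypothetical, not the actual Trove job:

```yaml
# Hypothetical Jenkins Job Builder entry for a Trove integration-test job.
# The real job name, node label, and test script would differ.
- job:
    name: gate-trove-integration-tests
    node: devstack-precise
    builders:
      - shell: |
          #!/bin/bash -xe
          ./run_integration_tests.sh
    publishers:
      - console-log
```

Jobs defined this way are applied to the shared openstack-jenkins by the infra team, which is what lets projects drop their own one-off Jenkins instances.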
:)
21:10:00 i like this plan more, instead of figuring out logging
21:10:09 hub_cap: i'll add it
21:10:19 wont we still have to figure out logging?
21:10:22 #action esmute and SlickNik to look into what happened to devstack-vm-gate integration.
21:10:43 openstack-infra's job
21:10:45 Does the existing CI infrastructure capture logs of daemons on failed test runs?
21:11:02 we'll have to figure out a way to get the guest log, sigh
21:11:06 hub_cap: According to clarkb, as long as we put the logs in a well known dir, jenkins will push them to the log file server
21:11:13 grapex: I'm not sure, that's something we'll have to talk to clarkb / infra team about.
21:11:36 a folder in the log file server with the project name, build number and other information about the run will be created
21:12:09 and the logs will be placed there... just like the jenkins run that executes the tox tests is doing now
21:12:14 sweet im on my laptop now
21:12:25 they put a plug in a breaker outside and dont need it anymore
21:12:28 now i can type fast
21:12:30 woot!!!!!
21:12:35 welcome back… :)
21:12:39 moving on.
21:12:44 hopefully i can get enough charge so if they need it again i can stay on
21:12:55 datsun180b to create a doodle for meeting times
21:13:09 Did everyone vote?
21:13:09 thanks for doing that datsun180b
21:13:15 even though he's not here.
21:13:21 i have not voted. i will now tho
21:13:22 grapex: I am not a citizen
21:13:25 can someone link me?
21:13:29 http://doodle.com/fvpxvyxhmc69w6s9#table
21:13:31 vote for what?
21:13:36 lol
21:13:40 esmute: The meeting time.
21:13:44 esmute: whether to deport you
21:13:44 #link http://doodle.com/fvpxvyxhmc69w6s9erhvvpt4/admin#table
21:13:47 oh thats what i meant grapex
21:14:20 I think datsun180b was going to close the vote soon.
21:14:42 So please vote on the new meeting time ASAP if you haven't already done so.
21:14:57 Anything else to add here?
21:15:06 DON'T FORGET TO SET THE TIMEZONE WHEN YOU DO
21:15:14 wow that really sticks out :)
21:15:16 we need to remove the tue 2pm pdt
21:15:23 early results... looks like the current time will work
21:15:33 just need to change the day
21:15:48 juice: OKAY
21:15:57 I think that's probably what will end up happening.
21:16:01 Same time, different day.
21:16:06 which day?
21:16:13 But who knows — once the votes are in...
21:16:14 TBD
21:16:35 let's make the deadline EOW?
21:16:47 I think that's reasonable.
21:16:49 ok just voted
21:17:12 Please vote before end of the week.
21:17:18 annashen, esp ^
21:17:23 I'll ask datsun180b to close the vote then
21:17:34 http://doodle.com/fvpxvyxhmc69w6s9erhvvpt4/admin#table
21:17:42 ^^annashen, esp
21:17:50 okay, let's move on
21:18:02 robertmyers add bug for backup deletion
21:18:24 SlickNik: Rob can't be here today, but he wanted me to tell everyone his work continues. :)
21:18:29 SlickNik: ok I'm on it.
21:18:54 I think he added the info already.
21:19:11 Thanks grapex.
21:19:20 well said grapex
21:19:28 (golf clap)
21:19:49 Thank you, thank you all very much.
21:19:54 * grapex blushes with pride. :)
21:20:05 okay, moving on.
21:20:12 Not sure who had this one (hub_cap?)
21:20:16 look into Heat Agent for packaging / repository organization
21:20:21 oya
21:20:34 ive got the template finished for ubuntu
21:20:47 oh this is diff - it's a carryover from separating the guest agent
21:20:48 was working on changing dib / elements up for it
21:20:55 AH
21:20:57 into its own repo
21:21:14 oh the guest, like trove-guest?
21:21:14 i think we still need to do that..
21:21:19 yes
21:21:30 for packaging purposes its a good idea, grapex will hate it
21:21:36 you'd be surprised :)
21:21:39 hub_cap: I broached it last time. :)
21:21:46 Though I originally didn't want to
21:21:53 WHAT?!?!?!?!!?
21:21:54 Yeah, he mentioned it the last time. :)
21:22:01 u want to *cough cough* separate things??????????????????????????????????????????
21:22:18 * hub_cap 's mind is blown
21:22:29 lol
21:22:32 im all for it
21:22:35 let's do this!
21:22:39 Honestly I think fewer repos would be cool... I heard CERN is working on technology that would make packaging possible even in such terrifying circumstances.
21:22:50 lol package smasher grapex
21:23:09 Have you found a god package yet, grapex?
21:23:10 :)
21:23:10 But, I think for the guest having a unique structure would help out to separate it from the Trove code, for like cfgs and stuff.
21:23:28 +1 it helps for packaging too
21:24:15 so the action was referring to Heat already doing something like this for their agent?
21:24:20 I'm in favor of separating it out as well.
21:24:22 and looking into that..
21:24:23 Hey, I don't want to make Santa Claus's job any harder, believe me. :) I'm ok with the guest being in its own repo.
21:25:10 Anyone want to action that again for this week?
21:25:40 wow.. no volunteers
21:25:44 you can give it to me
21:25:49 I'll do it
21:25:54 volunteers!
21:26:06 I'll be babysitting validation
21:26:10 but nothing else on the plate
21:26:17 what is the name of the repo? trove-agent?
21:26:29 #action juice / vipul to look into how Heat achieves packaging / repo organization for its agent
21:26:40 what about shared code - are we going to have a common repo
21:26:44 esmute: TBD
21:26:44 thanks guys. <3
21:26:46 <3
21:26:51 or will trove-agent depend on trove code
21:27:04 trove-common?
21:27:13 juice: since we're using oslo, that should be our common code (hopefully)
21:27:21 juice: Will someone shoot us if we had five Trove repos?
21:27:32 I will
21:27:33 3
21:27:34 trove-agent should be talking in rabbit... hopefully there wont be need for a common
21:27:36 A trove of Trove repos!
21:27:41 How delightful...
21:27:45 I mean common between guest agent and api proper
21:27:57 well it makes sense to have a common
21:28:19 i wonder how much code is actually used from reddwarf.common
21:28:24 i'd bet only a couple of classes
21:28:35 service, and cfg maybe?
21:28:56 wsgi also
21:28:59 even if it's lightweight it makes sense, or use openstack common and contrib our common utils to that?
21:29:09 guest wont need wsgi i don't think
21:29:14 api mostly
21:29:21 I'll do some analysis and report back
21:29:26 kk
21:29:28 I am sure that it's more than a handful
21:29:39 Yeah, probably have to do some research and figure out the common footprint.
21:29:48 but nevertheless we shouldn't be copying and pasting code
21:29:51 I think there might be some common instance models as well.
21:29:58 But I don't know for sure off the top of my head.
21:30:14 juice: +1
21:30:22 Okay, next action item.
21:30:34 Vipul and SlickNik (and others) to provide feedback on Replication API
21:30:43 SlickNik: If there is, that would violate the separation of concerns principle
21:30:56 https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
21:31:01 thanks imsplitbit
21:31:05 I moved the api stuff to its own page
21:31:08 #link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
21:31:24 imsplitbit: one quick comment i had was when creating a cluster, why no flavor?
21:31:31 do all nodes have to be same flavor?
21:31:50 there is some debate on that
21:32:03 there's a clear argument for allowing any flavor
21:32:26 but it can also be detrimental to performance
21:32:36 if you have an 8gb master and 2 512 slaves
21:33:02 that could be bad
21:33:13 right...
21:33:22 I would say make it optional?
21:33:37 yea, should always worry when there's $$ involved
21:33:44 so that optional would be find
21:33:45 fine
21:34:42 also is the purpose of the 'attributes' element to be a generic area like metadata?
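The "analysis" juice volunteered for above could start with something as simple as scanning source files for imports of reddwarf.common. A rough stdlib-only sketch; this is illustrative, not the actual script anyone used:

```python
import ast


def common_imports(source):
    """Return the set of reddwarf.common modules a source file imports."""
    used = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            # e.g. "from reddwarf.common import cfg"
            if node.module and node.module.startswith("reddwarf.common"):
                for alias in node.names:
                    used.add(node.module + "." + alias.name)
        elif isinstance(node, ast.Import):
            # e.g. "import reddwarf.common.service"
            for alias in node.names:
                if alias.name.startswith("reddwarf.common"):
                    used.add(alias.name)
    return used


sample = (
    "from reddwarf.common import cfg\n"
    "from reddwarf.common import wsgi\n"
    "import reddwarf.common.service\n"
)
print(sorted(common_imports(sample)))
# ['reddwarf.common.cfg', 'reddwarf.common.service', 'reddwarf.common.wsgi']
```

Run over the guest agent's source tree, this would give a concrete list of the common footprint being debated (service, cfg, wsgi, and so on).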
21:34:52 yes
21:35:41 should Create Replication Set: (Previous db instance) be a PUT?
21:36:27 eh guess maybe not
21:36:37 to esp's point, i don't see a restful way to say.. Create an Instance via /instances and convert that to a cluster (modify the instance)
21:37:12 also, can we do away with 'actions'? PUT /clusters/{cluster_id}/actions
21:37:13 vipul: neither did I. I was hoping for some more expertise on that particular path
21:37:54 actions was a demorris contribution, he is a big fan of using actions for things like promote
21:37:55 since you're already doing a PUT on /clusters, why have actions
21:37:55 I think POST is fine too. just wondering
21:38:17 the question is how to RESTify the conversion of an instance to a cluster?
21:38:27 that is one question
21:38:53 clustertypes is ok?
21:38:56 would it not be a POST to cluster with the instance ids?
21:39:07 kevinconway: thats what I have it as now
21:39:14 IIRC
21:39:19 it has a side effect of modifying another resource
21:39:27 that's the only thing in question
21:39:31 yeah
21:39:44 regarding actions, you have "role": "slave",
21:39:56 you could PUT and change the 'role' element
21:40:04 can you no longer reference the instance by itself?
21:40:39 that's a good question, can you do instance operations on an instance that is also joined to a cluster?
21:40:43 kevinconway: we had some good discussion on that. If you make changes to an instance without using the context of the cluster you have the potential to break things in a magnificent way
21:40:46 or should you only do cluster operations?
21:41:02 we contend that once a cluster always a cluster
21:41:11 operations *should* be done on the cluster
21:41:25 because thats where the cluster aware knowledge is
21:41:25 makes sense
21:41:40 i guess my question is does creating a cluster modify the resource metadata at all or simply alter the underlying instance and create a new cluster resource?
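To make the "POST to cluster with the instance ids" idea concrete, here is a sketch of what such a request body might look like. Every field name here is illustrative and not taken from the actual wiki spec; it just shows existing instances referenced by id, a role per node, and the optional per-node flavor that was debated above:

```python
import json

# Hypothetical body for POST /clusters: existing instances are referenced
# by id, "role" distinguishes master from slave, and "flavorRef" is
# optional per node (the point of contention in the flavor discussion).
create_cluster = {
    "cluster": {
        "name": "products",
        "clusterType": "mysql-master-slave",
        "nodes": [
            {"instanceId": "inst-1", "role": "master"},
            {"instanceId": "inst-2", "role": "slave",
             "flavorRef": "/flavors/2"},  # optional per-node flavor
        ],
    }
}

print(json.dumps(create_cluster, indent=2))
```

The RESTful wrinkle raised at 21:39:19 is visible here: the POST creates a new /clusters resource, but as a side effect it also changes the state of the /instances resources it references.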
21:41:49 imsplitbit: I think I agree, but what if the action only applies to one node in the cluster.
21:42:01 if you remove all slave/nodes from a cluster leaving one tho it should become just an instance
21:42:10 shouldn't it be like putting a user in a user group?
21:42:32 SlickNik: there are some edge cases where it makes sense to do an operation on just a node of the cluster
21:42:41 like if you allow different flavors
21:42:48 and you need to bump the memory of one of the slaves
21:43:06 so it makes sense to allow those things at the instance level
21:43:09 what if i just want to kill an instance in my cluster for a cool effect?
21:43:24 which people will do on day one
21:43:25 can of worms (sorry)
21:43:28 but if you want to add or remove nodes it should only be done in /clusters
21:43:45 not removing… lets say restarting
21:44:00 we can validate and prevent that (removing) on day one and figure out a better solution in v3
21:44:02 v2
21:44:09 kevinconway: /instances should be used for stopping and starting nodes IMO
21:44:14 but it gets confusing
21:44:31 basically any action that can damage the cluster must be done at the cluster level
21:44:41 IMO restarting a node shouldn't be destructive
21:44:48 so /instances would be the place for that
21:45:01 it's a slippery slope tho :)
21:45:25 i'll go back to my user/usergroup similarity
21:45:42 yeah, I can see it start to become confusing (which actions do I need to call on /instances vs /cluster?)
21:45:54 right
21:45:58 is there a critical difference between the idea of clusters and the idea of user groups in terms of a rest resource?
21:46:23 so adding or removing nodes to a cluster is done at /clusters but individual actions like resize should happen at /instances
21:46:37 kevinconway: yeah I think a cluster behave like a single thing
21:46:45 Yes, kevinconway: I think there is a difference.
21:46:46 ^ behaves
21:46:49 esp: +1
21:46:58 esp: so does a group of anything
21:47:15 so then why even show them as separate resources
21:47:19 i can give a group access to a thing without giving access to each individual
21:47:26 whereas a user group is kinda a collection of individual things
21:47:33 for users, having them in a user group doesn't mean that certain user operations now need to be done on the user group.
21:48:10 but for instances, if they are part of a cluster, all mysql operations that were previously on the instance now have to be done on the cluster.
21:48:38 …or maybe not… just thinking out loud here...
21:48:58 if thats the case then what about a 301 redirect to the head node when you try to sql on a slave node
21:49:10 and are we talking master/slave with only one master?
21:49:30 we're talking about replication/clustering in the general sense
21:49:51 because this api must facilitate doing mongodb replication or even redis or postgres
21:49:58 I think this topic needs more discussion. :)
21:50:04 agree
21:50:05 a master/master should allow me to interact with any node and have those changes replicated
21:50:05 or even galera
21:50:17 I would love/welcome much much more discussion
21:50:30 which may be out of the scope of this week's meeting.
21:50:36 regarding clustertypes, what was our proposal for service_types?
21:50:53 we had a spec somewhere where we introduced that
21:50:55 e;t
21:51:09 ?
21:51:12 vipul: link?
21:51:17 sorry
21:51:22 I don't recall seeing that
21:52:04 #link https://wiki.openstack.org/wiki/Reddwarf-versions-types
21:52:07 i think that's the one
21:52:22 Let's take further discussion on this to #openstack-trove...
21:52:23 SlickNik: if the discussion is outside the scope of this meeting I'd love to set up a time to get everyone together and discuss further
21:52:31 imsplitbit: agreed
21:53:03 yea we can find a slot in openstack-trove
21:53:58 movin' on then?
21:54:03 #imsplitbit, SlickNik, vipul and others to discuss the replication and clustering API
21:54:17 irc://15.185.114.44:5000/#imsplitbit, SlickNik, vipul and others to discuss the replication and clustering API
21:54:20 lol
21:54:23 lol
21:54:26 #action imsplitbit, SlickNik, vipul and others to discuss the replication and clustering API
21:54:27 trying to action it
21:54:28 woops
21:54:29 thanks!
21:54:30 :)
21:54:33 moving on
21:54:43 phew!
21:54:56 #topic Next meeting time
21:55:12 #link http://doodle.com/fvpxvyxhmc69w6s9erhvvpt4/admin#table
21:55:13 didn't we spend the first half-hour on this topic?
21:55:23 ^^ Vote soon. Poll closes end of week.
21:55:29 yes we already covered it.
21:55:46 #topic reddwarf -> trove move.
21:56:23 So we've changed our repos already.
21:56:53 what's the status btw? just code renames need to happen?
21:57:06 hub_cap: ^^
21:57:11 hub_cap was working on changing our codebase so that any references to reddwarf are now trove.
21:57:44 I hope he's not lost his electricity again. :(
21:57:58 im here sry
21:58:05 im also talking in #openstack-meeting
21:58:13 ah, okay.
21:58:14 CUZ THEY ARE DURING THE SAME TIME!!!!
21:58:18 skip and come back
21:58:23 okay.
21:58:31 move this to open discussion.
21:58:39 #topic API validation.
21:58:51 juice, any updates for us?
21:59:08 removed all the validation code
21:59:15 tox passes
21:59:24 running int tests as we speak
21:59:29 review should land today
22:00:00 I unlock jsonschema achievement -
22:00:10 okay, glad that you were able to figure out the jsonschema bits.
22:00:10 which gets you...
22:00:15 thanks for that!
22:00:32 +100 trove gratitude. :)
22:00:40 with the size of some of the schemas
22:00:51 I am not sure if the codebase actually grew or shrank
22:01:03 heh...
22:01:09 …and that brings us to...
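The validation work juice describes replaced hand-written checks with declarative jsonschema schemas. His actual change used the jsonschema library; the stdlib-only sketch below just illustrates the shape of that style (a flat schema drives a generic validator), with a hypothetical schema for an instance-create body:

```python
# Illustration only: Trove's review used the jsonschema library. This tiny
# stdlib checker mimics the declarative style for a flat, one-level schema.
SCHEMA = {
    "required": ["name"],
    "properties": {
        "name": {"type": str},
        "flavorRef": {"type": str},
    },
}


def validate(body, schema=SCHEMA):
    """Return a list of validation errors (empty list means valid)."""
    if not isinstance(body, dict):
        return ["body is not an object"]
    errors = []
    for key in schema["required"]:
        if key not in body:
            errors.append("missing required property: %s" % key)
    for key, rule in schema["properties"].items():
        if key in body and not isinstance(body[key], rule["type"]):
            errors.append("%s is not of type %s" % (key, rule["type"].__name__))
    return errors


print(validate({"name": "db1"}))    # valid: []
print(validate({"flavorRef": 42}))  # missing "name", wrong type
```

The appeal is juice's point exactly: the schema is data, so large APIs trade imperative validation code for (sometimes equally large) schema definitions.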
22:01:15 #topic open discussion
22:01:19 not going to do it on this pass but would like in the near future to modularize the schemas
22:01:32 nice work juice
22:01:52 Anything else to discuss?
22:02:09 (other than the status of the rename.)
22:02:56 ...
22:03:03 I guess not.
22:03:21 hub_cap take it away.
22:03:27 I would like to discuss my fresh feelings about the api stuff - as in when can we correct them to make them more restful
22:03:33 HEYO
22:03:43 but I don't know if it's urgent
22:03:52 just planting the seed right now
22:03:52 juice: nooo!!
22:04:02 we need to think of doing that for v2 api...
22:04:08 which may be w/clustering?
22:04:12 we should do this when we move away from wsgi
22:04:33 ok so should i talk status of rename real quick?
22:04:50 move away from wsgi? as in the wsgi interface or the wsgi module from openstack common?
22:05:05 at least the latter
22:05:19 Go for it hub_cap.
22:05:30 ok so troveclient is renamed, it failed jenkins for some reason ill look into it
22:05:37 we might have to merge it so i can make progress w/ reddwarf
22:05:44 but i anticipate ~24hrs itll be done
22:05:50 i officially have 2 working receptacles now
22:05:52 so ill be good to go
22:05:52 speaking of which --- can we release to pypi afterwards?
22:06:03 thats a good idea
22:06:13 i dont control that i think grapex does
22:06:21 hub_cap: Not so
22:06:25 IIRC it was based on a tag
22:06:30 I do have access to the repo, but so does mordred
22:06:43 ok ill ask mordred to push it
22:06:45 vipul: That was it.
22:06:46 grapex: did we build it around the ci-infra tagging?
22:06:57 if the unit tests work is everyone ok w/ me pushing the code for the client rename?
22:07:02 SlickNik: I'm not sure
22:07:19 yea that's what it was... push a tag up to gerrit, and it will push to pypi
22:07:23 grapex: okay, I can look into that.
22:07:52 hub_cap: yea, if things worky let's push it
22:07:58 #action SlickNik look into publishing to pypi based on tags.
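The tag-driven release flow vipul recalls ("push a tag up to gerrit, and it will push to pypi") amounts to something like the command sketch below. The version number is a placeholder and this assumes a gerrit remote is already configured; the actual upload is done by the CI infrastructure reacting to the tag, not by the developer:

```shell
# Sketch of the tag-based release flow discussed above.
git checkout master && git pull
git tag -s 0.1.4 -m "python-troveclient 0.1.4"   # signed release tag
git push gerrit 0.1.4                             # CI sees the tag, builds,
                                                  # and uploads to PyPI
```

This is why the meeting's only open question was credentials: as long as the openstackci account is an owner of python-troveclient on PyPI, no one needs to upload by hand.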
22:08:12 On that note
22:08:19 Now that we've got more ci-infra support
22:08:23 can we look at generating docs?
22:08:40 The client actually had some, though they worked as tests which turned out to not be a great idea
22:08:42 hub_cap: I'm fine with that
22:09:14 We could change the docs to not run as PyTests and generate them though - has anyone heard about how pushing docs to PyPI works with the new CI infra stuff?
22:09:39 hub_cap: Go nuts... although maybe we should check it in after we have a pull request ready to change the tests.
22:10:27 grapex: I'm not sure about the docs. Someone will need to look into it.
22:11:17 And that was the status.
22:11:22 Anything else?
22:11:44 I think we might be all done with the meeting, otherwise...
22:12:08 going once.
22:12:12 what?
22:12:36 mordred, did you have something you wanted to bring up?
22:12:46 * mordred just saw my name get pinged
22:13:17 oh, it came up in the context of publishing python-troveclient to pypi...
22:13:27 and you having the creds to do so.
22:13:35 cool. so that should work just by pushing a tag to gerrit
22:13:45 yes, I was going to check on that.
22:13:58 And update the ci-infra scripts if that's not in place already.
22:14:09 I believe we should be up to date
22:14:15 okay cool!
22:14:19 main thing to check is python-troveclient itself on pypi
22:14:42 which corvus setup, so it likely has openstackci added to it properly
22:14:47 so should work!
22:15:09 awesome.
22:15:19 grapex: change the tests?
22:15:58 hub_cap: Yeah, we should just rip the DocTest stuff out and leave the docs
22:16:13 They were pretty useful to a few ops at Rackspace, though there's close to nothing in them.
22:16:25 ah
22:16:38 Sweet. I think we're done.
22:16:42 Thanks everyone!
22:16:48 #endmeeting