21:03:56 #startmeeting Trove / Reddwarf
21:03:57 Meeting started Tue Jun 11 21:03:56 2013 UTC. The chair is vipul. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:04:01 The meeting name has been set to 'trove___reddwarf'
21:04:01 thanks
21:04:15 o/
21:04:23 #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-06-04-21.04.html
21:04:28 o/
21:04:29 o/
21:04:31 hokay, so let's look at that agenda
21:04:34 o/
21:04:44 first up, action items?
21:04:44 \o/
21:04:56 oh crap i gotta set the topic
21:04:59 #topic Action Items
21:05:05 esmute work with SlickNik to figure out the archiving of the reddwarf logs for rdjenkins jobs.
21:05:08 that's 1.
21:05:15 yeah.
21:05:26 any progress?
21:05:42 esmute and I got started on this.
21:06:13 i need to talk to clarkb to ask how openstack jenkins talk to the log server.
21:06:28 i have configure a log server .
21:07:01 We still need to figure out a good way to copy logs from the _guestagent_ to the server as well.
21:07:03 esmute: happy to talk over in #openstack-infra whenever
21:07:06 but i havent worked out on the communication (how jenkins gets the logs to the server)
21:07:31 thanks clarkb. Ill just probably walk to you and talk
21:07:53 thats convenient
21:07:55 :)
21:08:00 So let's action this one again since there's still more work to be done here.
21:08:20 #action esmute/SlickNik to figure out the archiving of the reddwarf logs for rdjenkins jobs.
21:08:35 Looks like that was all the actions from last week.
21:08:41 thanks for leading the charge on this one esmute!
21:08:50 there is one from hub_cap
21:08:54 http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-06-04-21.04.html
21:09:03 about meeting times
21:09:11 hub_cap is away today and tomorrow
21:09:17 oh, i see that
21:09:17 Anyone know if hub_cap did his doodle duty?
21:09:17 its at a conf
21:09:27 i dont think so
21:09:32 oh yeah, he was mentioning Cassandra conf.
21:09:40 unless i have been heads down adn missed it
21:09:43 didn't see anything in eavesdrop about an actual link
21:09:43 which is possible
21:10:03 #action hub_cap to create a doodle for meeting times
21:10:06 Let's re-action it and move on.
21:10:09 oh i was about to steal that
21:10:11 cool, thanks vipul.
21:10:25 you want the action datsun180b?
21:10:38 since i'm here, i'll take it
21:10:44 i can even do it now and #link it
21:11:16 #action datsun180b to take meeting times instead of ^
21:11:33 instead of top hat?
21:11:34 lol
21:11:58 don't call the meeting until i have a #link for you, i'll add an action for us all to check in
21:12:04 it will look better in the meeting notes
21:12:09 trust me :)
21:12:16 Yes, it will. :)
21:12:25 oh i see what you mean
21:13:06 datsun180b: you want me to change the topic?
21:13:23 please do, i don't think i can anyway
21:13:24 I guess, that's all for action items.
21:13:30 since you fired it up
21:13:40 #topic Next Meeting time
21:13:50 Agenda: https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting#Agenda_for_the_next_meeting
21:14:05 do we wanna discuss anything more about this?
21:14:10 i think it's carry-ver
21:14:11 over
21:14:23 yup.
21:14:42 vipul: I think we'd like the move the time forward a bit but it looks like it conflicts with the other OS meetings
21:14:42 I think we need the doodle so we can vote on what times are most convenient for each of us.
21:14:47 And datsun180b is on that.
21:15:21 datsun180b: when you create that doodle, look at the other meetings that hub_cap has to be on
21:15:35 yes grapex: lots of other OS meetings today (TC / infra)
21:15:40 Here I thought it was an opt-in poll
21:16:12 So it's up to each of us to vote and let the tool intersect all of the votes
21:16:25 I'll have something for us all soon
21:16:35 datsun180b: https://wiki.openstack.org/wiki/Meetings for possible conflicts
21:16:58 datsun180b: that's okay take your time. Link doesn't have to be available during this meeting.
21:17:09 #topic API Validation Update
21:17:17 Sounds good, I'll have an answer by next week, then
21:17:18 datsun180b: You can send it out at #openstack-trove when it's ready.
21:17:32 #topic API Validation update
21:17:35 umm
21:18:00 juice was working on this one…?
21:19:22 juice
21:19:40 ok we'll move on?
21:19:44 sorry forgot it was that time
21:19:49 still in progress
21:20:00 k
21:20:00 well just getting to the bulk of it today
21:20:14 stuffing it in Resources validation as discussed with hub_cap last week
21:20:23 where is hub_cap?
21:20:31 he be out
21:20:41 he's out for a couple of days (Cassandra conf, I think)
21:20:52 ah yes - well put vipul
21:21:01 you'd make him proud
21:21:09 heh
21:21:12 :D
21:21:14 with use of street vernacular
21:21:31 #topic Encrypted Backups
21:21:45 sorry datsun180b i was supposed to let you run this
21:21:55 it's okay, i was shaky on the syntax
21:22:12 i've got it for next time
21:22:24 k, next one is all yours
21:22:35 so demorris had some questions about this patch
21:22:42 oh you're all going to love this next part where i use doodle to spam you
21:22:45 grapex have you had a chance to convene with him?
21:23:14 vipul: No, not enough to remember his specific points
21:23:42 well we got a couple of +2's already on it.. but morris was concerned about using a shared key across tenants
21:23:59 I think the thnking is that this is a early (phase1) implementation
21:24:12 and we'd want to do it the right way, probably with assymetric encryption later
21:24:27 And that we definitely need to iterate and improve on this.
21:24:31 so wanted to nudge it along if Morris is cool wit it
21:24:36 yeah he doesn't like phases, phase1 = production done
21:24:43 vipul: Could we use the type field to iterate?
21:25:04 type field?
21:25:09 why can't this just be done with xtrabackup?
21:25:12 grapex: do you mean in the blueprint? not sure I understand
21:25:16 imsplitbit: Lol. I think part of it is things end up getting deployed and then it becomes an operations issue if migrations are needed
21:25:21 robertmyers: not supported yet in xtrabackup
21:25:22 it supports encryption?
21:25:26 it does in 2.1
21:25:26 yeah I get it
21:25:29 which is beta
21:25:33 I raised questions too about the common key
21:25:36 robertmyers: it does but it's beta and it's busted with streaming.
21:25:42 but theres alot behind doing that
21:26:30 SlickNik: I guess what I'm saying is, if we use the type field if a shared key is used across tenants and we want to fix it later we should be able to iterate while offering backwards compatability, right?
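[A minimal sketch of the type-field idea grapex raises above: each backup type maps to its own restore/decryption path, so a later per-tenant-key iteration could be added without breaking restores of existing shared-key backups. The type names and strategy classes below are hypothetical illustrations, not the actual Trove code or an agreed design.]

    class PlainXtraBackupRestore(object):
        """Placeholder for the existing unencrypted restore path."""

    class SharedKeyEncryptedRestore(object):
        """Placeholder for the current shared-key decryption path."""

    class TenantKeyEncryptedRestore(object):
        """Placeholder for a possible future per-tenant-key path."""

    # Hypothetical registry keyed by the type stored with each backup record.
    RESTORE_STRATEGIES = {
        'xtrabackup': PlainXtraBackupRestore,
        'xtrabackup_shared_key_v1': SharedKeyEncryptedRestore,   # current phase-1 approach
        'xtrabackup_tenant_key_v1': TenantKeyEncryptedRestore,   # possible later iteration
    }

    def get_restore_strategy(backup_record):
        """Pick the restore path from the backup's stored type field."""
        backup_type = backup_record.get('backup_type', 'xtrabackup')
        try:
            return RESTORE_STRATEGIES[backup_type]
        except KeyError:
            raise ValueError("Unknown backup type: %s" % backup_type)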
21:26:32 the downside here is letting this is as is means if anyone is using this in production and we change it to user specified we need to be able to provide a way to get things back out using the old way
21:26:57 backwards compat is gonna be tricky
21:27:57 yea that could potentially be mitigated by adding key-type (or whatever) when per-tenant key is implemented
21:28:01 quick question: are we married to Wednesday meetings, or is that also up for grabs?
21:28:12 imsplitbit: In a perfect world, we'd have a CI test for each iteration if we broke backwards compatability. This is assuming we care enough about an iteration of an implementation to add CI tests for it alone and BC
21:28:29 grapex: agreed
21:28:38 grapex: yeah, I see what you mean.
21:29:11 datsun180b: you mean tuesday meetings
21:29:16 yes, yes i do
21:29:22 grapex: right now the only thing indicating this is the manifest for the backup.
21:29:25 datsun180b: i think it's up for grabs
21:29:30 datsun180b lives in the future
21:29:39 So vipul SlickNik: when you say encrypted backups are iteration 1, can I ask if you're planning on deploying them w/ the shared key? In my mind the moment they're deployed somewhere is when it would be fairly nice if we could test that iteration in CI
21:29:43 just means more checkmarks to add to this form!
21:29:43 makes sense to extend the type field to mark it as encrypted.
21:30:26 grapex: yes we plan to deploy them...
21:30:39 grapex: i beleive the current backup/restore tests test this today
21:30:48 vipul: Ok- really wish Morris was here. ;)
21:31:08 grapex: So it would be a matter of making sure that the next iteration there was a test to check backward compatibility
21:31:23 grapex: do you mean have tests in CI to test with and without encryption? vipul: right now CI tests only 1 backup and restore….
21:31:31 vipul: Yeah, then even if we later decide to change it we can at least still get the old backups out
21:31:51 SlickNik: Yeah, I said "perfect world"... I don't know if it would be worth that.
21:32:10 But if the type field exists you could at least run tests like that internally if you wanted.
21:32:40 Btw, I have a related thing I want to bring up about deleting backups where the type field may come in handy as well
21:33:07 grapex, SlickNik are we talking about extending the backup_type to indicate a new type called 'shared-key-encrypted-xtrabackup' or something?
21:34:16 vipul: I think that's the suggestion. Then the decryption method for the restore can be chosen based on the type as well.
21:34:58 we should at least set the type to xtrabackup_v1
21:35:07 so there are a couple of other things you could use.. swift metadata, file extension
21:35:16 cause we know there will be more
21:37:10 robertmyers: is the v1 necessary? assuming next rev of xtrabackup can restore older version backup?
21:37:11 vipul / grapex: we currently set the file manifest for the encrypted backups to be different (i.e. xbstream.gz.enc instead of xbstream.gz)
21:38:02 SlickNik: That's probably fine then - you wouldn't be able to show that when listing backups but it's a fairly tiny difference to a user.
21:38:06 vipul: I guess, I'm mainly referring to it as the first iteration of backups
21:38:19 but we could use metadata
21:38:47 we also need to use metadata when deleting the file
21:39:15 we should use the manifest prefix to construct the segment files
21:39:21 robertmyers: yea seems like features need to be versioned as much as the api :)
21:39:22 and then delete them
21:40:49 ok.. grapex I leave it to you to get Morris' opinion on this
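[A rough sketch of the manifest-prefix deletion approach robertmyers describes above, assuming python-swiftclient: find the segments by listing the segment container with the prefix recorded in the X-Object-Manifest header instead of reconstructing segment names by hand. The function and variable names are illustrative, not the pending Trove patch.]

    def delete_backup_and_segments(conn, container, manifest_obj):
        """Delete a Swift large-object backup by following its manifest prefix.

        `conn` is assumed to be a python-swiftclient Connection.
        """
        headers = conn.head_object(container, manifest_obj)
        manifest = headers.get('x-object-manifest')   # e.g. "segments_container/backup_prefix_"
        if manifest:
            seg_container, seg_prefix = manifest.split('/', 1)
            # List every segment object under the prefix and delete each one.
            _, segments = conn.get_container(seg_container, prefix=seg_prefix)
            for segment in segments:
                conn.delete_object(seg_container, segment['name'])
        # Finally remove the manifest object itself.
        conn.delete_object(container, manifest_obj)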
21:40:55 robertmyers: isn't that what we're doing?
21:41:01 for the delete case.
21:41:13 no the code is manually doing it
21:41:25 vipul: Will do.
21:41:32 which is breaking us becuase we are using the same containers
21:41:39 aren't
21:42:58 SlickNik: https://github.com/stackforge/reddwarf/blob/master/reddwarf/taskmanager/models.py#L536
21:43:07 ah robertmyers: I just looked at the code and see what you are saying. Sorry, kagan wrote that delete piece and I wasn't aware.
21:43:47 I'll submit a patch to fix it
21:43:47 is that bugged? robertmyers
21:43:59 k
21:44:08 vipul: it's not bugged but it isn't what we expected.
21:44:13 #action robertmyers add bug for backup deletion
21:44:28 then fix
21:44:42 yea the manifest should've been used
21:44:44 thanks robertmyers!
21:45:23 #topic Open Discussion
21:45:27 #link http://doodle.com/fvpxvyxhmc69w6s9
21:45:35 woah that was fast
21:45:41 #action EVERYONE fill that in.
21:45:46 datsun180b: Great work!
21:46:02 All I did was click buttons
21:46:02 So, I'd like to start off the open discussion with a bit of hypocrisy
21:46:15 :)
21:46:21 always good.
21:46:25 Months back vipul you suggested moving the guest into it's own repo
21:46:34 And I was totally against it
21:46:49 yea remember talking about that
21:46:56 yes!
21:46:57 However, it's becoming hard to see what's a Reddwarf server change vs a guest change
21:47:23 +1
21:47:26 I also feel like there's a lot of code for config values and stuff that's only guest related which is a burden to the normal Trove code
21:47:26 +++
21:47:27 So
21:47:32 maybe we could put it in it's own repo
21:47:45 It will take a bit to untangle.
21:47:50 or, we could put it into it's own directory in the reddwarf one
21:47:53 So the only thing is.. there is shared code
21:47:55 (inc x)
21:47:56 But I think it's totally worth it.
21:47:59 all of the reddwarf/common stuff
21:48:05 so it at least has the structure of an independent project
21:48:08 if we can make it just openstack/common then makes that easier
21:48:37 the other thing is.. what happens when patches to separate projects dpened on one anoter
21:48:46 we've seen this across reddwarf-cli and reddwarf
21:48:48 I personally would prefer the later so when work gets done it's easier to merge it in at once
21:48:58 * datsun180b pleads the fifth
21:49:34 I also makes sense if we start supporting other clients
21:49:35 Keep it in the repo, but just in an independent location, like ./guest/setup.py, /guest/trove-guest/*.*
21:49:52 (I guess that would be /guest/troveguest/*.*)
21:50:20 (or whatever we want to name it / put it)
21:50:32 yea that could work
21:50:59 hopefully we can get to a point where guest could be tested independently of the server-side components
21:51:27 also packaged independently :)
21:51:42 +++ to packaged independently.
21:51:59 vipul: Yes. As for packaging, you may want to talk to the CI group in case packaging it if it lives in another repo is impossible
21:52:57 Ok will do i think it might be easier if it lived in a separate repo
21:53:37 We can't really get all the goodness of endpoints/stevedore/plugins unless we can package it separately.
21:54:18 i wonder how nova packages the different components (if they do)
21:54:58 Not sure; will look into that.
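[A small illustration of the endpoints/stevedore benefit mentioned above: if the guest agent were packaged on its own, its datastore managers could be registered as entry points and loaded by name at runtime. The namespace and plugin name below are assumptions for the sketch, not an agreed layout.]

    from stevedore import driver

    def load_guest_manager(datastore='mysql'):
        """Load a guest manager plugin registered under a hypothetical namespace.

        A separately packaged guest would declare something like
        'mysql = troveguest.managers.mysql:Manager' under this namespace in its
        own setup.py/setup.cfg entry points.
        """
        mgr = driver.DriverManager(
            namespace='trove.guestagent.managers',   # assumed entry-point namespace
            name=datastore,
            invoke_on_load=True,
        )
        return mgr.driver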
21:55:02 We shouldn't just do it their because it's the way they do it, though
21:55:42 agreed datsun180b: it's good to see different ways of doing it and enumerate our options, though :)
21:55:57 I would imagine for heat integration a separate package would be very helpful too
21:56:01 vipul: Very good question
21:56:03 right
21:56:13 Canonically is there just one big package for all the nova daemons?
21:56:23 it might be.. not sure
21:56:33 I think it's not the debian packages so much as the Python ones.
21:56:54 I think the CI team doesn't want to deal with setup.py's being anywhere but in the root of a repo
21:58:28 Ok we should take a look at other projects and bring this topic up again.. i think it is still something we should look into doing
21:58:51 oh theres some craziness with packages
21:58:52 vipul: The closest comparison might be the old guest agent in Heat
21:58:57 Which I think is going away
21:59:20 def. we need to do some research here and talk about it again next meeting.
21:59:25 I forget the name of it but they have one. It's the closest analog to what we're doing so maybe they already figured it out.
21:59:36 sounds worthy of an action
21:59:38 client packages are fairly straight forward but daemon packages are broken out pretty heavily.
22:00:08 imsplitbit: in pypi?
22:00:39 vipul: no I was speaking of debian/ubuntu packaging sorry
22:00:47 ah ok
22:01:39 #action look into Heat Agent for packaging / repository organization
22:02:00 anyone have anything else they'd like to bring up?
22:02:16 I think leaving the doodle poll open for a week sounds right
22:02:27 I'll close it before next week's meeting
22:02:29 vipul: replication
22:02:36 datsun180b: sounds good
22:02:46 the docs are up for the latest ideas we've had
22:02:53 input would be greatly appreciated
22:03:09 #link https://wiki.openstack.org/wiki/Reddwarf-MySQL-Replication-and-Clustering-Concepts#Ideas_for_API
22:03:15 imsplitbit: thanks for the link.
22:03:30 imsplitbit: Yep, have looked quickly over it. Sorry haven't had enough time to digest it yet
22:03:39 I will try to get some feedback this week
22:03:40 it's a little more than "noodling" at this point :)
22:04:20 Do you have decisions on which technologies the implementation will be using?
22:04:30 or are we trying to get agreement on API first
22:04:32 no this is just api
22:04:32 same here imsplitbit. Haven't had a chance to look at it in any detail whatsoever.
22:05:00 if we can get the api ideas solid I can put them in the blueprint and we can start writing some code
22:05:29 #action Vipul and SlickNik (and others) to provide feedback on Replication API
22:06:05 expect something from us before the next meeting
22:06:08 Was just skimming. This is good stuff. Thanks imsplitbit!
22:07:15 another topic.. not sure anyone else is noticing this, but our API gives back TMI in failure responses
22:07:36 i'm looking into it now, but if someone knows what the issue is.. please ping me :)
22:07:42 is there such a thing as TMI in failures?
22:07:51 :)
22:07:55 imsplitbit: There is when it's on accident. :p
22:07:56 I getcha
22:08:01 500 server error
22:08:02 lol
22:08:05 Yes :) like returning the IP of the mysql server
22:08:07 oh so not by design
22:08:09 gotcha
22:08:34 http://globalnerdy.com/wordpress/wp-content/uploads/2007/12/bug_vs_feature.gif
22:08:59 vipul: But Vipul, think of how helpful this is to users!
22:09:06 omg thats awesome grapex
22:09:09 Why they may even be able to tell Operators which flags were misconfigured.
22:09:15 totally!
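[A minimal sketch of the kind of fix vipul is describing for the TMI failure responses: log the full exception (including internal details such as the MySQL server's IP) server-side, and return only a generic message in the 500 body. The names here are illustrative, not the actual Trove fault-handling code.]

    import logging

    LOG = logging.getLogger(__name__)

    def call_handler_safely(handler, req, request_id):
        """Run an API handler; on unexpected errors, keep details out of the response."""
        try:
            return handler(req)
        except Exception:
            # Full traceback and internal details go to the server log only.
            LOG.exception("Unhandled API error for request %s", request_id)
            return {'computeFault': {
                'code': 500,
                'message': 'An internal error occurred. Quote request id %s '
                           'when contacting support.' % request_id,
            }}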
22:09:16 lmao
22:09:19 lol@grapex
22:09:41 we're over 9 minutes
22:09:46 ok i'm done
22:09:50 anyone have anything else?
22:09:58 No, I'm good as well
22:10:14 Great meeting guys
22:10:30 * imsplitbit waves
22:10:33 yup thanks for the discussions
22:10:37 til next time
22:10:41 good stuff, like where you're head is at
22:10:51 bai
22:10:55 #endmeeting