18:01:25 #startmeeting Trove
18:01:26 Meeting started Wed Nov 4 18:01:25 2015 UTC and is due to finish in 60 minutes. The chair is peterstac. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:29 The meeting name has been set to 'trove'
18:01:49 Waiting for folks to trickle in ...
18:03:01 (cp16net has an appointment and SlickNik and amrith are unavailable, so I'll be chairing)
18:04:23 o/
18:04:55 This could be a small meeting ... I think people forgot about the time change ;)
18:05:06 ok peterstac let's vote on some stuff
18:05:34 sure, how about who picks up the bar tab at mid-cycle?
18:05:45 i move that the next sponsor for mid-cycle provides beer all day during the meetings
18:06:37 Well, I guess we'll move on - people can catch up as they join
18:06:48 #topic Trove pulse update
18:07:10 #link https://etherpad.openstack.org/p/trove-pulse-update
18:07:33 cp16net updated the stats, thx!
18:07:56 Not surprising, stats fell for the week due to the summit
18:08:26 Hopefully they'll pick up again this week
18:08:57 it would be good to get old reviews dropped from the queue
18:09:06 but i guess the submitters would have to abandon them
18:10:01 I'm still in favor of having a bot do that ... nobody can get mad at a bot :P
18:10:55 i assume that having a bot do that is still out of favour with the community?
18:11:33 That was my take, but maybe we can ask some other projects how they handle this
18:11:49 Ok, moving on
18:11:59 #topic Gate Jobs failing
18:12:47 We've noticed that the gate jobs are failing (apparently in stable branch as well)
18:13:03 They're getting a segmentation violation ...
18:13:21 amrith and cp16net (I believe) were looking into it ...
18:13:55 I don't know if they've made any progress
18:14:35 I've run some tests myself, trying to pin down whether it's a pip versioning issue
18:14:42 No smoking gun yet
18:15:10 smoking guns aren't allowed in canada
18:15:24 we have a new Prime Minister who isn't a gun freak like the last one
18:17:19 <_imandhan_> I've been here o///
18:17:29 Ok, I guess we'll get an update once amrith and cp16net are back online
18:17:53 #topic Open Discussion
18:17:55 well we are kind of stuck until we can get the gate fixed
18:18:05 so that should be a priority for anyone that has some cycles to figure it out
18:18:13 <_imandhan_> are all the patches in merge conflict due to the manager refactor part 1 that was merged?
18:18:19 Yeah, I'll keep looking into it - but the fact that it affects stable branch is disconcerting
18:18:19 did anyone see that i posted a blueprint for moving to oslo.db ?
18:19:35 mvandijk_, have you submitted a spec? Most people don't follow bp's explicitly
18:19:59 Not yet. Working on scoping the work, then I'll put one up.
18:20:37 Do you want to talk about it? ;)
18:20:54 _imandhan_ yes the merge conflict is likely due to the manager refactor
18:21:55 peterstac, just wait for the spec then we can discuss it
18:22:06 _imandhan_, I've seen gerrit claim a merge conflict but then not get one when doing the actual merge, so it could be very easy
18:22:14 mvandijk_, sounds good
18:22:25 howdy yall
18:22:45 hey cp16net, that was quick
18:23:07 maybe you can give us an update on the gate issues (if you have one)?
18:23:09 it was faster than i expected
18:23:35 my only update on the gate is that it's broken, giving a seg fault on unit tests
18:23:46 and i've seen other random failures here and there as well
18:24:01 only thing i've seen consistent was the seg fault
18:24:08 have you looked at stable branch? (i.e. is it the same issue?)
18:24:22 not looked at stable yet
18:24:38 i'm focused on master first
18:24:55 because we have a larger queue of patches there that need to pass
18:25:20 i dunno if it's the same issue or not
18:25:23 I was just trying to guess what could cause the issue there, since don't they peg the pip versions pretty tight?
18:26:09 yeah so i went through and found the diff between a successful run and the last bad run i saw from pip freeze and got a small diff
18:26:14 let me see if i can find it....
18:26:15 I also took a look at some other projects to see if they were having issues, and it doesn't seem so
18:26:37 #link http://paste.openstack.org/show/477928/
18:26:42 there's the small diff i found
18:27:29 something makes me think WebTest, but then i'm not sure because it's unit tests that are failing
18:27:51 so with "goodpip" py27 works?
18:27:57 yes
18:28:15 so can't we just change one requirement at a time and find which one breaks it
18:28:18 it should have a few older versions
18:28:18 I ran a test with the 'good' pip, but knocked out some I didn't think were culpable
18:28:22 and it still failed
18:28:34 one of them was redis - maybe that's causing the issue?
18:28:44 yeah i'm starting to wonder if it's something else
18:29:09 if i saw some strange oslo changes i would think that would be the issue
18:29:14 but that's not the case that i see
18:30:53 I'll start a test just pegging redis to the good version - see if that passes
18:31:10 peterstac: sounds good
18:31:22 we need to get this resolved soon
18:31:27 Otherwise we may need to do what dougshelley66 suggested - try each one separately
18:32:08 yeah if anyone else has a good theory on this it would be appreciated
18:32:17 or if you know of other projects that ran into the same issue
18:32:38 i'll continue working on this and looking around
18:33:09 ok, any other items to discuss?
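[Editor's note: the debugging approach discussed above - diffing `pip freeze` output between a passing and a failing CI run, then pinning the differing packages back one at a time - can be sketched roughly as below. The package names and versions here are illustrative placeholders, not the actual diff from the linked paste.]

```python
def freeze_diff(good, bad):
    """Compare two `pip freeze` outputs and return the packages whose
    pinned versions differ (or that appear in only one of the runs)."""
    def parse(text):
        pins = {}
        for line in text.strip().splitlines():
            if "==" in line:
                name, version = line.split("==", 1)
                pins[name.lower()] = version.strip()
        return pins

    good_pins, bad_pins = parse(good), parse(bad)
    return {
        name: (good_pins.get(name), bad_pins.get(name))
        for name in sorted(set(good_pins) | set(bad_pins))
        if good_pins.get(name) != bad_pins.get(name)
    }


# Hypothetical freeze output from a passing run and the last failing run
good_run = "WebTest==2.0.18\nredis==2.10.3\nsix==1.9.0\n"
bad_run = "WebTest==2.0.20\nredis==2.10.5\nsix==1.9.0\n"

for pkg, (old, new) in freeze_diff(good_run, bad_run).items():
    print(f"{pkg}: {old} -> {new}")
```

Each package this reports can then be pinned back to its known-good version individually and the unit tests rerun, bisecting the small diff down to the one requirement that triggers the seg fault.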
18:34:01 one other thing i'd like to mention is that the releases are changing a bit, allowing projects to be a little more free about when they make their release
18:34:13 like we talked about at the summit last week
18:34:46 cp16net i must have missed that - can you provide an overview?
18:34:49 cp16net, you also mentioned possibly doing some kind of summit overview
18:34:56 should we start with that?
18:35:09 <_imandhan_> that would be nice :)
18:35:31 so there was an email from doug tagged with [release]
18:35:37 there were a few actually
18:36:08 but the main thing i took away from the emails was that there would be a move away from managing releases from launchpad
18:36:39 ah ok - i guess i need to catch up on my ML reading
18:36:43 since i'm fairly new to keeping up with the releases i'm not sure what this exactly means yet, but i'll have more info moving forward
18:37:13 ok, great
18:37:15 i'm working on getting a new trove client version 1.4.0
18:37:34 this will include the cluster create with AZ and network
18:37:46 and a few other minor changes
18:38:18 this will help users with the latest liberty release of trove
18:38:29 cp16net, let me know when you do that and I'll generate new CLI docs
18:38:43 k
18:39:26 #link https://etherpad.openstack.org/p/mitaka-trove-summit
18:39:32 that's all i had on the release stuff
18:39:37 so the overview of the summit talks
18:39:41 There's a link to the main summit etherpad
18:40:14 Actually, this one is probably better:
18:40:17 #link https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Trove
18:40:35 so we talked about adding support for other backends for backups
18:40:57 we specifically focused on looking into supporting ceph
18:41:26 this didn't seem to be very controversial
18:41:48 more discussion was about how and if it would allow us to tie in for backups
18:41:59 or snapshotting the volumes
18:42:00 <_imandhan_> is this something we will be implementing during mitaka or down the lane?
18:42:45 i think it's something that should be worked on during mitaka
18:42:53 although i'm not sure who will be working on it
18:42:57 _imandhan_, well, nobody has committed to doing the work yet ...
18:43:12 <_imandhan_> hmm okay
18:43:16 yeah no one was assigned and no bp that i've seen yet
18:43:40 then managing upgrades was another topic
18:44:03 yes - that is something that has been kicked around for a while
18:44:35 we talked about adding a way to add a key for access
18:44:50 but this turns out to not be a good idea for security reasons
18:45:00 until we realized the big security hole it created ;D
18:45:05 yup
18:45:25 i think we should still look at other options for this
18:45:41 there wasn't a clear-cut way to do this within trove
18:46:23 because many deployers today run something outside of trove to manage the updates, e.g. puppet/chef/others
18:46:57 from the user/op session
18:47:26 we got some feedback about needing a cluster status
18:47:34 rather than just a task that means nothing
18:47:51 something to determine healthy/unhealthy cluster
18:48:34 we need a way of force deleting an instance from any state
18:48:53 adding support for trove in openstack cli
18:49:19 people like that everything is together there and has multiple output options
18:49:51 we need to support mgmt cli calls again
18:50:12 something about test requirements there that we are missing
18:50:18 not sure what that was about for sure
18:50:35 for the toggle status session
18:50:52 we decided that the manager refactor helped mitigate this
18:51:01 so we all went over this and reviewed it
18:51:11 looks like it should be merged now
18:51:22 yep, the first part is in
18:51:40 (unfortunately with the gate broken, we haven't seen any benefit yet)
18:51:41 for agnostic linux distros
18:52:14 we are continuing the work forward around this
18:52:44 guest image building
18:53:14 this session was interesting with a variety of views
18:53:51 on whether the image should be expected to connect to the internet for packages and setup or not
18:54:36 i think it doesn't matter and a deployer should choose their own adventure regarding connections needed or not
18:54:45 o/
18:55:06 so this leads me to thinking that it's not a good idea to remove the install-packages-as-needed support from the guest
18:56:20 towards the end we talked about getting locks in nova or for other projects for resources managed by a system
18:56:58 there may be a way we could use keystone to manage the roles of a project that is owned by a user
18:57:43 there was a long discussion about this and i think towards the end we made some good arguments for moving this forward
18:57:54 but not sure what the outcome will be yet of the discussions
18:58:10 so i think that wraps up my overview
18:58:18 and it's about time
18:58:27 sorry for taking up all the time at the end
18:58:30 <_imandhan_> thanks cp16net :)
18:58:43 but i hope it was beneficial for others that were not there to hear this
18:59:03 yep, thanks cp16net!
18:59:04 if you have questions feel free to ask in the channel
18:59:13 that's all i got
18:59:36 sounds good - anything else I guess we can discuss back in #openstack-trove
18:59:45 #endmeeting