19:02:00 <jeblair> #startmeeting infra
19:02:01 <openstack> Meeting started Tue Jan  6 19:02:00 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:02 <nibalizer> o/
19:02:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:02 <jhesketh> Morning
19:02:05 <asselin> hi
19:02:05 <openstack> The meeting name has been set to 'infra'
19:02:16 <krtaylor> o/
19:02:20 <jeblair> #link agenda: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:20 <jeblair> #link last real meeting: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.html
19:02:20 <jeblair> #link last informal meeting: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-23-19.03.html
19:02:22 <clarkb> o/
19:02:27 <jeblair> #topic  Actions from last meeting
19:02:41 <jeblair> jeblair rmlist -a third-party-requests
19:02:43 <jeblair> i did that
19:03:01 <jeblair> seems to be gone.  :)
19:03:06 <timrc> o/
19:03:10 <anteaya> awesome
19:03:11 <anteaya> thank you
19:03:17 <jeblair> #topic  Schedule next project renames
19:03:30 <jeblair> so it looks like we have 8 projects ready to go to the attic
19:04:02 <AJaeger> annegentle_'s patch for the service-apis needs at least another iteration but that should be easy to do
19:04:30 <jeblair> i will not be available this weekend (SFO->AKL)
19:04:56 <fungi> we've generally treated attic moves as low-priority to be batched with more important renames as they arise
19:04:56 <jeblair> anyone around this weekend want to do it?  otherwise, i'm available again in +2 weeks
19:05:08 <fungi> are these more urgent than normal?
19:05:17 <jeblair> fungi: that's true.  i don't think they are.
19:05:34 <jeblair> want to defer for a few more weeks and see if anything else comes up?
19:05:38 <mordred> jeblair, fungi: also may want to move shade into openstack-infra at some point based on dib-nodepool things - but also not urgent
19:05:40 <fungi> i vote we just let them simmer until another rename request comes up with some urgency, yeah
19:05:51 <mordred> also, that is pending us deciding to do that
19:05:55 <mordred> oh - wait - that's not a rename
19:05:57 * mordred shuts up
19:06:22 * AJaeger agrees with fungi
19:06:29 * mordred agrees with fungi
19:06:39 <jeblair> mordred: well, we should still talk about it... maybe we can put that on a future meeting agenda
19:06:42 <anteaya> I have no objection to letting them simmer
19:07:04 <jeblair> #agreed defer attic moves for a few more weeks pending further rename accumulation
19:07:16 <jeblair> and if nothing shows up after a while, we'll do them anyway
19:07:34 <clarkb> I am out this weekend and next, so those weekends aren't great for renames if I am doing them
19:07:34 <jeblair> #topic  Priority Efforts - Swift logs
19:08:07 <jhesketh> so the next step is to get the uploading script running in a venv so that it can install dependencies
19:08:46 <jhesketh> https://review.openstack.org/#/c/142327/
19:08:48 <anteaya> jhesketh: is that patch working and just waiting on being merged?
19:08:56 <jeblair> #link https://review.openstack.org/#/c/142327/
19:08:59 <AJaeger> jhesketh: is this needed for each and every job? Can't we update our base images so that they contain all dependencies?
19:09:15 <anteaya> ah sean has a concern
19:09:18 <jhesketh> well sdague had some feedback but I'd like to get another opinion, as the method I used was the same as how we set up zuul
19:09:52 <jhesketh> (long term we probably want to move the upload script to its own project as it has grown in features and it has even been suggested to use a templating engine)
19:10:24 <fungi> the concern is that installing dependencies of a python tool in the global system context on all our workers increases chances for conflict with things jobs themselves need
19:10:29 <mordred> yes
19:10:43 <mordred> that's my concern as well - I'm supportive of running it in a venv
19:10:54 <fungi> zuul-cloner is run from a virtualenv for the puppet integration job for similar reasons
19:11:24 <jhesketh> right I think we agree on that... sdague's input was to have the job set up the venv itself rather than as part of the setup process
19:11:29 <mordred> I do also agree with sean's concern though - and that should be easy enough to fix
19:11:59 <jeblair> i'm not sure i agree with it.  there's nothing wrong with using full paths when invoking programs
19:12:06 <jeblair> but at any rate, there's the change to review
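The approach under discussion amounts to roughly the following sketch. It is not the contents of change 142327; the virtualenv location, script path, and flags here are hypothetical, and it only illustrates installing the uploader's dependencies into an isolated environment and then invoking it by full path, as jeblair describes:

    # Hypothetical illustration only: the paths, package list, and uploader
    # flags are assumptions, not what the actual change installs.
    import subprocess

    VENV = "/usr/local/swift-logs-venv"                             # hypothetical venv location
    UPLOADER = "/usr/local/jenkins/slave_scripts/swift_upload.py"   # hypothetical script path

    def build_venv():
        # Keep the uploader's dependencies out of the global site-packages
        # so they cannot conflict with anything the jobs themselves install.
        subprocess.check_call(["virtualenv", VENV])
        subprocess.check_call([VENV + "/bin/pip", "install", "python-swiftclient"])

    def upload(log_dir, container):
        # Jobs invoke the tool by its full path inside the venv; nothing in
        # the venv needs to be on $PATH or importable by the system python.
        subprocess.check_call([VENV + "/bin/python", UPLOADER,
                               log_dir, "--container", container])

    if __name__ == "__main__":
        build_venv()
        upload("/var/log/jenkins/job-output", "logs")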
19:12:23 <jeblair> jhesketh: anything else blocking?
19:12:43 <jhesketh> there are some related changes but they are not blocking
19:13:03 <jhesketh> https://review.openstack.org/#/c/143340/ after the venv is merged
19:13:09 <jeblair> #link https://review.openstack.org/#/c/143340/
19:13:32 <jhesketh> https://review.openstack.org/#/c/141286/ and https://review.openstack.org/#/c/141277/ add in improvements to the index generation that were requested
19:13:43 <jeblair> #link https://review.openstack.org/#/c/141286/
19:13:46 <jeblair> #link https://review.openstack.org/#/c/141277/
19:14:54 <jeblair> jhesketh: anything else?
19:15:06 <jhesketh> nope :-)
19:15:11 <jeblair> thanks!
19:15:11 <jeblair> #topic Priority Efforts - Puppet module split
19:15:36 <jeblair> asselin: so last time you were proposing some changes to stage a bunch of split work at once
19:15:45 <asselin> So after last meeting, I proposed a spec update to optimize the split work: https://review.openstack.org/#/c/143689/
19:15:48 <asselin> #link https://review.openstack.org/#/c/143689/
19:17:30 <asselin> If we agree, then I'd like to do a mini sprint to get these done in batches.
19:17:42 <anteaya> well we have a problem
19:17:47 <anteaya> that needs to be addressed
19:17:52 <anteaya> with repo replication
19:18:01 <nibalizer> im not familiar with that problem
19:18:11 <anteaya> which zaro jesusarus and I have been trying to get to the bottom of
19:18:15 <jeblair> asselin: wait, so are you abandoning your idea of combining the prep work into one change?
19:18:28 <anteaya> happened yesterday with the lodgeit split
19:18:33 <asselin> jeblair, no
19:18:50 <asselin> jeblair, just one 2nd change b/c it introduces too many conflicts.
19:18:50 <anteaya> after the merge to project-config the repo didn't end up on 3 of 5 git servers
19:18:59 <jeblair> anteaya: let's talk about that later
19:19:05 <anteaya> okay
19:19:31 <jeblair> asselin: do you have a change prepared that does all of the prep work?
19:19:39 <asselin> jeblair, yes
19:19:49 <asselin> #link https://review.openstack.org/#/c/140523/
19:20:31 <asselin> reviewers preferred getting agreement via spec, and I agreed it would clarify the plan
19:20:38 <jeblair> hrm
19:20:52 <jeblair> well, i don't see how your large change was different than a simple optimization of what's in the spec
19:21:07 <jeblair> whereas you have actually added quite a lot of stuff to the spec that's going to need to go through some significant review
19:21:15 <jeblair> so i'm not sure that this is the most expeditious process
19:21:25 <asselin> well, I can go either way, honestly :)
19:21:34 <jeblair> if anyone actually objects to the approach in https://review.openstack.org/#/c/140523/10  let's just hear it now
19:22:02 * nibalizer no objection
19:22:12 <anteaya> as long as anyone can do a puppet module split out and find current instructions for doing so, that works for me
19:22:15 <jeblair> jesusaurus: you had an objection in the review
19:22:21 <jesusaurus> im not sure what the benefit is of that approach
19:22:46 <jesusaurus> and i think the current process keeps changes logically together
19:23:21 <jeblair> jesusaurus: fewer and smaller changes to review while going through the process?
19:23:43 <asselin> the benefit is we can do the splits with less effort, in, e.g., a mini sprint.
19:23:50 <krtaylor> yeah, less chance for typos IIRC
19:24:00 <jesusaurus> is it actually fewer changes? i think the current changes are fairly small and dont really need to be trimmed down
19:24:02 <asselin> b/c there are a few little changes sprinkled in a few files. seems easier to do all at once.
19:24:22 <jesusaurus> but i dont have a very strong opinion on the matter
19:24:34 <fungi> my only fundamental reservation was with earlier patchsets which added lots of commented-out sections to gerrit/projects.yaml but this version does not so i'm fine with it
19:24:48 <jeblair> #vote use approach in https://review.openstack.org/#/c/140523 ? yes, no
19:24:58 <jeblair> clarkb: help! :)
19:25:05 <nibalizer> its #startvote i think
19:25:09 <jeblair> #startvote use approach in https://review.openstack.org/#/c/140523 ? yes, no
19:25:10 <openstack> Begin voting on: use approach in https://review.openstack.org/#/c/140523 ? Valid vote options are yes, no.
19:25:12 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
19:25:12 <asselin> fungi, yes that patch was abandoned b/c it would also create lots of merge conflicts, which is counterproductive
19:25:13 <clarkb> so I have tried to avoid this because I don't feel super strongly either way
19:25:22 <AJaeger> #vote yes
19:25:25 <anteaya> #vote abstain
19:25:26 <openstack> anteaya: abstain is not a valid option. Valid options are yes, no.
19:25:27 <clarkb> my biggest gripe, which wasn't super important, is that reviewing an 800-line diff is annoying
19:25:27 <nibalizer> #vote yes
19:25:30 <asselin> #vote yes
19:25:36 <sweston> #vote yes
19:25:40 <fungi> it's a bit of a bikeshed, but i'm in favor by way of not being against ;)
19:25:42 <krtaylor> #vote yes
19:25:44 <fungi> #vote yes
19:25:47 <pleia2> #vote yes
19:25:50 <jesusaurus> #vote no
19:26:12 <mordred> #vote yes
19:26:19 <fungi> ultimately, my feeling is that whoever's willing to do the work can choose how to go about it, as long as there are no serious problems with the plan
19:26:22 <clarkb> how concerned are we about breaking multiple things in a way that prevents us from fixing it with puppet because it's all wedged?
19:26:27 <anteaya> fungi: +1
19:26:30 <jeblair> #endvote
19:26:30 <openstack> Voted on "use approach in https://review.openstack.org/#/c/140523 ?" Results are
19:26:31 <openstack> yes (8): mordred, krtaylor, sweston, nibalizer, fungi, AJaeger, pleia2, asselin
19:26:32 <openstack> no (1): jesusaurus
19:26:47 <AJaeger> asselin: looks like time to rebase the patch and then let's merge...
19:26:49 <jeblair> clarkb: i don't think this changes that
19:26:49 <jhesketh> (I agree with fungi too but don't know enough about the proposed changes to vote)
19:26:53 <anteaya> and follows up on problems and fixes them
19:27:26 <jeblair> okay, so part 2 of this is, should we have a mini-sprint to try to get through a bunch of modules at once?
19:27:44 <asselin> these changes are noops until the real change is merged (which pulls the project into openstack-infra)
19:28:18 <jeblair> i'm not in a good position to help with that for the next 2.5 weeks
19:28:21 <fungi> our infra-manual sprint was a huge success, and this seems like a good candidate for similar priority task knock-out
19:28:31 <jeblair> fungi: i agree, i think it's desirable
19:28:33 <mordred> ++
19:28:38 <nibalizer> jeblair: i love the idea of picking a day and getting it done
19:28:41 <nibalizer> or 85%
19:28:46 <mordred> I'm also not in much of a position to be substantively helpful until feb
19:28:54 <clarkb> +1
19:29:04 <clarkb> but january is hard ... ETOOMUCHGOINGON
19:29:04 <jesusaurus> id be happy to help whenever it happens
19:29:05 <anteaya> I'm useless until Feb too
19:29:06 <jeblair> maybe thursday.  :)
19:29:14 * fungi has travel coming up in a couple weeks and is getting back up to speed as well
19:29:19 <pleia2> sounds good to me, happy to help with testing/reviewing and logistics again (announcements, etherpad setup, summary)
19:29:22 * nibalizer fine with waiting, especially if we pick a day in advance
19:29:49 <fungi> you know, that thursday which always happens in february
19:30:01 <anteaya> do we want to discuss the repo replication bug now or wait until open discussion?
19:30:27 <fungi> anteaya: it's not directly related to the puppet module split
19:30:35 <jeblair> let's pick a date now
19:30:40 <fungi> anteaya: just a (probably gerrit) bug impacting new project creation
19:30:43 <anteaya> oh I have been operating on the belief that it is
19:30:45 <jeblair> friday jan 30?
19:30:57 <pleia2> nibalizer and I won't be around then
19:31:05 <pleia2> (fosdem travel)
19:31:05 <anteaya> nor will I
19:31:13 <jeblair> pleia2: when do you leave?
19:31:21 <asselin> I leave for vacation on the 31st, so 30 is good for me
19:31:25 <pleia2> I fly out on the 29th
19:31:28 <clarkb> every day that week but monday is good for me
19:31:35 <fungi> i'm around that week as well
19:31:40 <pleia2> so, thursday
19:31:51 <jeblair> how about wed 28?
19:31:59 <pleia2> wfm, nibalizer when do you head to fosdem?
19:32:01 <fungi> wfm
19:32:04 <clarkb> wfm
19:32:07 <nibalizer> uh i have not planned fosdem
19:32:16 <pleia2> nibalizer: ok, don't leave until thursday :D
19:32:19 <jeblair> nibalizer: you leave after wednesday feb 28.  ;)
19:32:19 <anteaya> <- cinder mid-cycle
19:32:33 <mordred> I cannot do the 28th - but don't let that stop you
19:32:48 <nibalizer> feb28 doesn't conflict with fosdem at all
19:32:52 <nibalizer> fosdem is feb 1 and 2
19:32:56 <jeblair> jhesketh: around wed 28th?
19:32:56 <nibalizer> i can do feb28 no prob
19:33:00 <pleia2> oh, jeblair said jan
19:33:11 <jhesketh> jeblair: yes, works for me
19:33:12 <fungi> february 30
19:33:16 <nibalizer> fungi: nice
19:33:27 <pleia2> late february is fine for me
19:33:42 <jeblair> mordred: seems like we have a few cores, so should be okay....
19:33:46 <mordred> cool
19:33:55 <jeblair> since there was confusion...
19:33:58 <jhesketh> err, both should be fine although feb is possibly a little hazy
19:34:01 <fungi> but yeah, that seems like a reasonably soon but not inconveniently soon week
19:34:08 <jeblair> last call: wednesday, january 28th okay for everyone but mordred?
19:34:09 <jesusaurus> wait, jan 28 or feb 28?
19:34:18 <anteaya> mordred and anteaya
19:34:31 <anteaya> but you don't need me either
19:34:35 <nibalizer> jan 28 works for me
19:34:41 * jesusaurus can do jan 28
19:34:43 <fungi> anteaya: that's when the cinder mid-cycle is scheduled you say?
19:34:44 <pleia2> jan 28 is good
19:34:45 <asselin> anteaya, you did a great job reviewing the patch. thanks!
19:34:48 <anteaya> fungi: yes
19:34:52 <jeblair> anteaya: oh i thought you were okay with that date
19:35:13 <anteaya> jeblair: no mid-cycling after lca until first week of Feb
19:35:23 <anteaya> but it works for everyone else so go ahead
19:35:39 <jeblair> anteaya: yeah, but you're core on one of the projects
19:35:49 <anteaya> so is andreas
19:36:04 <anteaya> I can try to be around but I just can't commit
19:36:06 <jeblair> yeah, i'm just trying to maximize people who can actually approve these changes :)
19:36:11 <anteaya> can't do two sprints at once
19:36:31 <anteaya> I don't want to block, works for everyone else
19:36:37 <anteaya> hard to pick a better time
19:36:54 <jeblair> #agreed module split sprint wednesday, jan 28
19:37:03 * nibalizer excited
19:37:06 <jeblair> we can continue to split them off incrementally till then of course
19:37:09 <asselin> awesome. thank you! :)
19:37:32 <jeblair> #action asselin update module split staging change and merge asap
19:37:53 <jeblair> asselin: when that's ready, do please pester people to approve it so we don't end up with lots of conflict rebases, etc
19:38:17 <jeblair> #topic Priority Efforts - Nodepool DIB
19:38:20 <asselin> will do
19:38:36 <jeblair> mordred, clarkb: what's the status here?
19:38:36 <mordred> so - I haven't done much since mid dec
19:38:38 <clarkb> I haven't been able to do much around this since the switch to trusty
19:38:42 <mordred> but I got time again today
19:38:44 <mordred> which has been great
19:38:56 <jeblair> hey everyone, mordred has free time!  ;)
19:38:57 <mordred> I've got half of my crappy shell scripts properly translated over into python
19:39:07 <clarkb> but there are a bunch of nodepool changes outstanding that I and others have written that fix bugs that allow us to do this better
19:39:16 <clarkb> let me pick out some important ones
19:39:33 <jeblair> mordred: what is the character of the python changes you are writing?
19:40:02 <mordred> well, the most complicated part of this is the "use glance v1 on one cloud and swift+glance v2 on the other cloud"
19:40:02 <clarkb> #link https://review.openstack.org/#/c/140106/
19:40:07 <clarkb> looks like I can actually approve that one
19:40:09 <mordred> with completely different workflows
19:40:20 <clarkb> #link https://review.openstack.org/#/c/130878/
19:40:30 <jeblair> mordred: so the existing nodepool glance stuff was only tested/written for one cloud?
19:40:32 <clarkb> #link https://review.openstack.org/#/c/139704/
19:40:34 <mordred> jeblair: yes
19:40:56 <mordred> jeblair: so I'm writing an interface that will work for both clouds
19:41:19 <mordred> and will do all of the things we need
19:41:51 <jeblair> i'm really sad that it's so different.  :(
19:41:59 <mordred> I'm currently doing that inside of shade for ease of testing, and because I'm pretty sure other folks might need this - but if we decide that we don't want to make nodepool depend on shade then I can copy and paste the relevant stuff into a local class for nodepool
19:42:07 <mordred> jeblair: it's COMPLETELY different
19:42:22 <fungi> if only there were some common service platform they could both use. maybe if it were free software they'd have no excuse
19:42:25 <mordred> but neither cloud is doing anything "wrong"
19:42:31 <clarkb> we tried explaining to glance devs why the client should make it not different for us
19:42:36 <clarkb> not really sure that went anywhere
19:42:52 <mordred> well, hopefully in a few more hours I'll have a client that makes it not different for us
19:43:05 <fungi> yeah, the glance image type situation is pretty unfortunate
19:43:13 <clarkb> fungi: its not even that though
19:43:16 <mordred> and then I expect it to take another couple of days to get a nodepool patch for folks to look at
19:43:29 <clarkb> fungi: on rackspace you have to upload to swift separately from glance, then tell glance there is an image over in swift
19:43:40 <fungi> oh, oww
19:43:47 <mordred> fungi: yeah - it's completely different
19:43:48 <jeblair> ok cool, so we have changes clarkb linked to, and some forthcoming changes from mordred.  hopefully in a couple of weeks we can maybe try this for real?
19:43:50 <fungi> you can't upload directly to glance there?
19:43:54 <mordred> jeblair: ++
19:44:01 <mordred> fungi: no - by choice/design
19:44:09 <mordred> but on HP it's the opposite, you _cannot_ upload to swift and import
19:44:16 <mordred> you must upload directly to glance
19:44:19 <jeblair> clarkb: can we continue to increase dib in hpcloud after the current nodepool changes merge?
19:44:19 <fungi> by design of course ;)
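The difference mordred and clarkb are describing looks roughly like this. This is a sketch only, assuming already-authenticated python-glanceclient and python-swiftclient client objects; the task-import input keys are an assumption about the import interface rather than anything stated in the meeting:

    # Illustrative only: "glance" and "swift" are assumed to be authenticated
    # python-glanceclient / python-swiftclient client objects.

    def upload_direct(glance, path, name):
        # HP-cloud-style workflow: stream the image bytes straight into
        # glance (v1 API); no object store involved.
        with open(path, "rb") as image:
            return glance.images.create(name=name,
                                        disk_format="qcow2",
                                        container_format="bare",
                                        data=image)

    def upload_via_swift(swift, glance, path, name, container="images"):
        # Rackspace-style workflow: put the file into swift first, then ask
        # glance (v2 API) to import it from that object.  The task input
        # keys below are an assumption about the import interface.
        with open(path, "rb") as image:
            swift.put_object(container, name, contents=image)
        return glance.tasks.create(
            type="import",
            input={"import_from": "%s/%s" % (container, name),
                   "image_properties": {"name": name}})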
19:44:26 <clarkb> jeblair: yes
19:44:35 <jeblair> because actually, getting dib everywhere there and spreading across all 9 routers might help other things
19:44:45 <clarkb> jeblair: the important one for that is the second linked change. it allows rax to be snapshot and hp to be dib
19:45:03 <jeblair> so maybe we should plan to continue to push dib in hpcloud to help work out issues there
19:45:17 <jeblair> hopefully that will make the rax-dib work less painful when it's ready
19:45:19 <fungi> i'm in favor
19:45:25 <clarkb> +1
19:45:27 <mordred> I'd love to get outstanding nodepool patches that are related to openstack apis merged or at least agreed to in large part
19:45:32 <mordred> before I start in with the next set
19:45:40 <mordred> because I'm pretty sure they need to be additive
19:45:49 <jeblair> mordred: the existing ones clarkb linked to?
19:45:51 <clarkb> mordred: yes and before we do that I think we should fix our bugs :)
19:46:04 <mordred> clarkb: yup
19:46:08 <mordred> jeblair: yes
19:46:29 <mordred> I'll work on reviewing those today after I eat a sandwich
19:46:29 <jeblair> anything else dib related?
19:46:51 <jeblair> #topic Priority Efforts - Jobs on trusty
19:47:18 <jeblair> fungi: is ubuntu still shipping a non-functioning python?
19:47:41 <fungi> jeblair: yes, i think they're all asleep at the wheel. pinged several bugs and no response on any of them
19:47:52 <jeblair> https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1367907
19:47:53 <uvirtbot> Launchpad bug 1367907 in python3.4 "Segfault in gc with cyclic trash" [High,Fix released]
19:48:03 <jeblair> that's been marked as 'invalid' in oslo.messaging...
19:48:07 <fungi> not sure how to increase visibility, though i guess i could start rattling mailing lists, irc channels, something
19:48:39 <clarkb> jeblair: ya its a python bug not an oslo.messaging bug
19:49:31 <fungi> the bug "affects" oslo.messaging insofar as it can't be tested on ubuntu's python 3.4 in trusty, but i'm not going to argue the point with the oslo bug team
19:49:31 <jeblair> so when do we drop py3k-precise testing?
19:49:52 <jeblair> i think we've waited quite long enough
19:50:34 <fungi> i guess it depends on what we want to do when we drop it. go ahead and start testing everything which isn't breaking on 3.4?
19:51:01 <fungi> and just stop testing the stuff which is no longer testable on 3.4 (because of needing a fixed 3.4 on trusty)?
19:51:10 <jeblair> yeah, i think that's one option
19:51:21 <fungi> i can do that fairly trivially
19:51:33 <jeblair> another option would be to switch our testing platform to an os that fixes bugs
19:51:42 <fungi> at that point it's just oslo.messaging and oslo.rootwrap as far as i've been able to tell
19:51:44 * mordred doesn't unlike that idea
19:52:33 <clarkb> then we just have to decide on what that distro/OS is :)
19:52:35 <mordred> jeblair: just for completeness - we could also start maintaining our own python package
19:52:44 <mordred> I'm not voting for that one, just mentioning
19:52:54 <clarkb> the big problem with that is the problem that travis exposes
19:53:03 <clarkb> you don't actually end up testing a python platform that is useable to anyone
19:53:05 <mordred> yah
19:53:11 <mordred> because $distro-crazy
19:53:15 <clarkb> so nothing works when you install your code over there on ubuntu/centos
19:53:25 <mordred> remember when distros distro'd software and didn't muck with it so much?
19:53:30 * mordred shuts mouth
19:53:36 <jeblair> so keeping precise going isn't a big deal
19:53:45 <jeblair> the big deal to me is that we've actually decided to test on py34
19:53:47 <jeblair> but we can't
19:53:54 <jeblair> which is a hindrance to the move to py34
19:53:58 <clarkb> ya
19:54:00 <mordred> ya
19:54:05 <dhellmann> fungi: I think I set that bug to invalid. That might not be the right state, but my impression was there wasn't really anything the Oslo team could do about it.
19:54:09 <clarkb> I do think a move to say debian is not unreasonable
19:54:31 <mordred> and I agree with clarkb that if we did something like slackware - we'd be "testing" python 3.4 but it still might not work on the platforms people are using
19:54:36 <fungi> dhellmann: nothing the oslo team can do about it, but it is still breaking oslo.messaging so... up to you
19:54:45 <dhellmann> fungi: yeah :-/
19:55:04 <fungi> dhellmann: well, nothing you can do about it short of asking the ubuntu python package maintainers to please fix it
19:55:08 <mordred> clarkb: well, even with a move to debian - since the distro pythons are all patched and different - what _Are_ we actually testing?
19:55:22 <clarkb> mordred: the ability to run on debian/ubuntu/centos
19:55:32 <anteaya> our host clouds
19:55:43 <dhellmann> fungi: if only I had time -- maybe sileht can talk to zul, though
19:56:00 <jeblair> okay, so i think within a few weeks we should come up with a plan
19:56:02 <fungi> dhellmann: zul's aware, but says barry warsaw needs to take care of it
19:56:04 <mordred> jeblair: ++
19:56:11 <jeblair> probably not going to resolve now
19:56:22 <fungi> jeblair: i'll also see what py3k centos7 offers us
19:56:24 <dhellmann> fungi: ok, I'll try poking barry by email
19:56:32 <jeblair> but let's be thinking about whether we want to do partial 34 testing, or switch platforms, or ...
19:56:43 <clarkb> mordred: basically if we stick to a non-boutique distro then we continue to have an answer of "reasonably tested on X" that you can be expected to use without eye rolls
19:56:44 <fungi> jeblair: sounds good
19:56:48 <jeblair> #topic Priority Efforts - Zanata
19:57:04 <jeblair> i need to add this to the agenda
19:57:13 <mordred> clarkb: I'm not sure I agree - but I do understand your pov
19:57:34 <jeblair> but i mostly wanted to remind people to watch the puppet-zanata repo, and help review pleia2's patches as this starts up
19:57:43 <mordred> clarkb: I'm not sure I care that we have a 'reasonable' answer if that reasonable answer ends up with someone downloading something we say works and they discover that it does not
19:57:45 <mordred> jeblair: woot
19:58:09 <jeblair> pleia2: anything you want to mention?
19:58:15 <anteaya> pleia2: I've been seeing your patches fly by in scrollback; are you using a consistent topic?
19:58:38 <pleia2> nothing major, I only have one review up so far to get wildfly itself installed
19:58:55 <pleia2> struggling a bit with zanata itself, but nothing I can't get past
19:58:59 <jeblair> it has a +2! :)
19:59:14 <pleia2> yes, thanks jeblair!
19:59:27 <pleia2> oops, almost forgot about this one too: https://review.openstack.org/#/c/143512/
19:59:35 <jeblair> #link https://review.openstack.org/#/c/143512/
19:59:43 <pleia2> that's in system-config to pull in the wildfly module
20:00:10 <pleia2> I did a cursory review of the module we're grabbing, but other eyes on it would be nice in case it does something crazy that I missed
20:00:47 <jeblair> public service announcement: the TC meeting is canceled this week. the next meeting in this channel is the cross-project meeting at 21:00
20:01:02 <jeblair> i'm going to run 5 minutes over if people don't mind...
20:01:09 <jeblair> #topic  Upgrading Gerrit (zaro)
20:01:09 <mordred> jeblair: go for it
20:01:24 <zaro> nothing much to report atm
20:01:34 <jeblair> so we talked a bit about this, and zaro pointed out the hung sshd issues
20:01:35 <zaro> will try to upgrade review-dev.o.o soon
20:01:59 <jeblair> zaro: my understanding is that with this change: https://gerrit-review.googlesource.com/#/c/62070/
20:02:23 <jeblair> zaro: 2.9.3 should no longer have those issues (they downgraded the sshd module to a version without the problems)
20:02:29 <jeblair> zaro: does that sound right?
20:02:49 <jeblair> (also 2.9.4 has been released since then, and only contains a jgit upgrade)
20:03:01 <zaro> yes, that is correct.
20:03:53 <zaro> i believe that change brings it back to the same state as 2.8
20:04:25 <jeblair> so it seems like 2.9.3 or 2.9.4 are reasonable upgrade targets
20:04:37 <zaro> yes, i believe so.
20:05:14 <zaro> i will stop puppet agent on review-dev soon.
20:05:52 <zaro> i also noticed that there was a fix to the replication retry in 2.9.
20:06:07 <zaro> #link https://gerrit-review.googlesource.com/#/c/41320/
20:06:10 <jeblair> okay cool.  mostly just wanted to make sure we concluded the sshd discussion and agreed on a version
20:06:14 <fungi> anteaya: ^ that's possibly the solution to the issue you were asking about earlier
20:06:35 <anteaya> fungi: oh I do hope so
20:08:05 <fungi> though the description doesn't sound entirely the same
20:08:32 <jeblair> let's call that it for this week
20:08:34 <jeblair> thanks everyone!
20:08:36 <jeblair> #endmeeting