19:01:29 <jeblair> #startmeeting infra
19:01:30 <openstack> Meeting started Tue Aug  6 19:01:29 2013 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:33 <openstack> The meeting name has been set to 'infra'
19:01:35 <jeblair> #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-30-19.02.html
19:01:40 <jeblair> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:02:00 <zaro> o/
19:02:29 <clarkb> jeblair has super exciting news I think
19:02:41 * fungi thinks so too
19:03:01 <jeblair> that's a slightly stale agenda, but i think we can work with it
19:03:18 <jeblair> #topic asterisk
19:03:19 <mordred> o/
19:03:45 <jlk> o/
19:04:10 <jeblair> it's probably not worth calling in again, as i'm not sure anyone has had a chance to identify causes/solutions to the high cpu from transcoding we saw...
19:04:25 <clarkb> jeblair: I haven't seen any puppet changes to address that
19:04:34 <clarkb> there was one small change to deal with NAT better but I doubt that is related
19:04:49 <jeblair> russellb: i haven't had a chance to look into it, have you?
19:04:56 <jeblair> pabelanger isn't around :(
19:05:08 <russellb> jeblair: i haven't touched it
19:05:12 <russellb> clarkb: not related
19:05:22 <russellb> that was related to me helping someone get their local PBX behind NAT working
19:05:55 <russellb> i don't think we know that it's transcoding specifically, and not just the cost of running the conference bridge
19:06:25 <russellb> i don't think we had monitoring set up last week?  so would be worth doing it again once we have a graph to look at i guess
19:07:08 <fungi> #link http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=39
19:07:42 <russellb> awesome
19:08:00 <fungi> did we possibly under-size the vm? we can always grow it to the next flavor up if needed, i expect
19:08:19 <fungi> but we'll obviously want to observe its performance under load
19:08:26 <russellb> i don't see CPU on there
19:08:28 <clarkb> should we call in again to get some numbers in cacti?
19:08:32 <clarkb> russellb: second page
19:08:49 <russellb> oops :)
19:08:52 <jeblair> (or set "graphs per page" to a high number at the top)
19:09:14 <fungi> where "high number" is in excess of the ~25 graphs currently generated
19:09:33 <jeblair> that's 5 min intervals
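
(Side note on the monitoring discussed above: cacti only samples every five minutes, so a quick spot check of the PBX load during a test call can be taken straight from the SNMP agent. A minimal sketch, assuming net-snmp's snmpget, the standard UCD-SNMP load-average OIDs, and a hypothetical hostname and community string.)

    # Hypothetical spot check of the PBX load average between cacti's
    # 5-minute polls; assumes net-snmp's snmpget and the UCD-SNMP agent.
    import subprocess

    HOST = "pbx.openstack.org"   # assumed hostname, illustration only
    COMMUNITY = "public"         # assumed read-only SNMP community
    # UCD-SNMP-MIB::laLoad.1/.2/.3 = 1, 5 and 15 minute load averages
    OIDS = {
        "1min": ".1.3.6.1.4.1.2021.10.1.3.1",
        "5min": ".1.3.6.1.4.1.2021.10.1.3.2",
        "15min": ".1.3.6.1.4.1.2021.10.1.3.3",
    }

    def load_averages():
        results = {}
        for label, oid in OIDS.items():
            out = subprocess.check_output(
                ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, oid])
            results[label] = float(out.decode().strip())
        return results

    if __name__ == "__main__":
        print(load_averages())
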
19:10:35 <jeblair> let's aim for calling in again next week
19:10:39 <clarkb> ok
19:10:48 * russellb is in now in case anyone wants to call in and idle for a bit
19:11:01 <jeblair> oh, or we could do it now.  :)
19:11:03 <russellb> should see if we can get a graph of number of calls on here too
19:11:12 <russellb> i'm in bridge 6000
19:11:18 <fungi> what's the did again?
19:11:26 <russellb> https://wiki.openstack.org/wiki/Infrastructure/Conferencing
19:11:35 <fungi> #link https://wiki.openstack.org/wiki/Infrastructure/Conferencing
19:11:40 <fungi> thanks!
19:13:19 <russellb> i couldn't figure out how to mute my other client, lol.
19:13:20 <russellb> sorry.
19:13:36 * mordred not calling in because he's in brazil, btw
19:14:21 <jeblair> mordred: you could try SIP :)
19:14:27 * jeblair is listening in stereo
19:15:51 <fungi> so we presumably want to load the pbx for a good 10 minutes to get a decent snmp sample period
19:16:02 <jeblair> fungi: yeah
19:17:02 <jeblair> so while that's going on...
19:17:04 <jeblair> #topic Multiple Jenkins masters (jeblair)
19:17:11 <fungi> yes!
19:17:19 <jeblair> we have them!
19:17:21 <clarkb> \o/
19:17:28 <fungi> several!
19:17:34 <jeblair> all of the test load should now be on jenkins01 and jenkins02
19:17:43 <jeblair> while jenkins is running the jobs on the special slaves
19:18:03 <jeblair> this is going to be a big help in that we can scale out jenkins with our test load
19:18:12 <fungi> i've seen no problem reports which seem attributable to the change
19:19:07 <jeblair> (from an overall system pov, we've moved the bottleneck/spof to zuul, but there's a logical bottleneck there in terms of having a single logical gate)
19:21:31 <jeblair> next i'm going to be working on devstack-gate reliability, and then speed improvements
19:22:01 <fungi> i haven't looked back at the resource graphs for zuul recently to see whether that's creeping upward
19:22:53 <jeblair> it seems to be keeping up with gerrit changes, etc
19:23:02 <jeblair> i think the new scheduler there did a lot to help that
19:23:28 <jeblair> so, completely unscientifically, now that we're spreading the load out a bit, it seems as if jenkins itself is able to keep up with the devstack-gate turnover better.
19:23:29 * mordred bows down to jeblair and the jenkinses
19:23:30 <fungi> cpu and load average look fine on zuul
19:23:44 <jeblair> at least, when i've looked, the inprogress and complete jobs are finishing quickly
19:24:35 <jeblair> #topic Requirements and mirrors (mordred)
19:24:42 <clarkb> and we can upgrade jenkins eith zero downtime :)
19:24:50 <jeblair> mordred, fungi: updates on that?
19:24:57 <jeblair> clarkb: yes!  i hope to do that soon!
19:25:16 <mordred> well... we've gotten somewhere
19:25:27 <fungi> oops, i hung up on the pbx
19:25:31 <mordred> devstack is now updating all projects' requirements to match openstack/requirements
19:25:37 <mordred> before installing them
19:25:48 <mordred> and requirements is now gated on devstack
19:25:53 <mordred> so that's super exciting
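
(A rough illustration of the sync mordred describes, in the spirit of the update script in openstack/requirements but not the actual code. It assumes plain "name>=x.y" style lines and ignores comments, markers, and the other cases the real script handles.)

    # Force a project's requirements.txt to use the globally agreed version
    # specifiers before installation; simplified sketch only.
    import re

    def parse(path):
        reqs = {}
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if not line:
                    continue
                name = re.split(r"[<>=!]", line, maxsplit=1)[0].strip().lower()
                reqs[name] = line
        return reqs

    def sync(global_path, project_path):
        global_reqs = parse(global_path)
        project_reqs = parse(project_path)
        # keep the project's dependency list, but take the global specifier
        # for any requirement that appears in openstack/requirements
        synced = [global_reqs.get(name, line)
                  for name, line in project_reqs.items()]
        with open(project_path, "w") as f:
            f.write("\n".join(synced) + "\n")
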
19:26:06 <fungi> anyway, i've tried and scrapped about three different designs for separate-branch mirrors and settled on one i think should work
19:26:17 <mordred> woot
19:26:35 <fungi> i've got a wip patch for the job additions up right now while i hammer out the jeepyb run-mirror.py addition
19:27:12 <fungi> i still don't have good ideas for bootstrapping new branches of openstack/requirements to the mirror without manual intervention
19:27:44 <clarkb> fungi: is listing the branches in the requirements repo not sufficient?
19:27:56 <fungi> and what to do with milestone-proposed periods
19:28:16 <clarkb> milestone proposed belongs to the parent branch right?
19:28:35 <fungi> clarkb: well, if we branch requirements for a new release, we want the mirror to already be there so tests will run, right?
19:29:03 <clarkb> fungi: yes, we can branch requirements first though
19:29:11 <fungi> so do we rename the mirrors in place, or duplicate them, or play with symlinks during transitions or...
19:29:21 <mordred> I'm not sure we should ever have a milestone-proposed requirements, should we?
19:29:34 <clarkb> I think we can duplicate. It is easy, pip cache prevents it from being super slow
19:29:39 <clarkb> and python packages are small
19:29:47 <mordred> can we do failover-to-master? or is that crazy
19:29:54 <fungi> mordred: not so much a milestone-proposed requirements, but what do we gate nova milestone-proposed against? master? havana?
19:30:11 <mordred> fungi: gotcha
19:30:22 <fungi> thinking mostly in terms of which mirror to use in integration tests around release time
19:30:26 <mordred> yah
19:30:45 <jeblair> mordred: particularly since devstack forces the requirements now, the m-p branch of code will either use what's in requirements/m-p or requirements/master
19:30:47 <fungi> so anyway, i'll get the bare functionality up first and then we can iterate over release-time corner cases for the automation
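
(A sketch of the per-branch mirror idea clarkb raises: discover the branches of openstack/requirements and build one mirror tree per branch. The repo URL, requirements file name and paths are placeholders, pip's old --download mode is assumed, and the real jeepyb run-mirror is considerably more involved.)

    import os
    import subprocess

    REQS_REPO = "https://review.openstack.org/p/openstack/requirements"  # assumed URL
    MIRROR_ROOT = "/srv/static/mirror"                                   # assumed path
    REQS_FILE = "global-requirements.txt"                                # assumed name

    def branches(repo):
        out = subprocess.check_output(["git", "ls-remote", "--heads", repo])
        for line in out.decode().splitlines():
            _sha, ref = line.split()
            yield ref[len("refs/heads/"):]

    def build_branch_mirror(branch):
        workdir = "/tmp/reqs-%s" % branch.replace("/", "-")
        subprocess.check_call(["git", "clone", "-b", branch, REQS_REPO, workdir])
        dest = os.path.join(MIRROR_ROOT, branch.replace("/", "-"))
        if not os.path.isdir(dest):
            os.makedirs(dest)
        subprocess.check_call(
            ["pip", "install", "--download", dest,
             "-r", os.path.join(workdir, REQS_FILE)])

    if __name__ == "__main__":
        for b in branches(REQS_REPO):
            build_branch_mirror(b)
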
19:31:04 <jeblair> mordred: if we don't branch requirements, then that means master requirements must be frozen during the m-p period
19:31:24 <mordred> nod. branching requirments seems sensible
19:31:26 <jeblair> which i think is counter to what we want m-p for (to keep master open)
19:31:36 <jeblair> so i think we probably have to branch requirements...
19:31:39 <mordred> and if we branch it first, then that could trigger the m-p mirror
19:31:47 <mordred> yeah. I'm on board with that now
19:31:52 <jeblair> fungi: the act of creating a branch is a ref-updated event that could trigger the run
19:31:57 <fungi> agreed
19:32:17 <jeblair> so it should be transparent to the projects as long as requirements is branched first
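
(For the trigger jeblair mentions: gerrit emits a "ref-updated" event on stream-events when a branch is created, with oldRev set to all zeros. A sketch of keying off that; the ssh account and the command it runs are illustrative, not jeepyb's actual interface.)

    import json
    import subprocess

    GERRIT_SSH = ["ssh", "-p", "29418", "mirror-bot@review.openstack.org",
                  "gerrit", "stream-events"]
    ZEROS = "0" * 40  # oldRev value for a newly created ref

    def watch():
        proc = subprocess.Popen(GERRIT_SSH, stdout=subprocess.PIPE)
        for line in proc.stdout:
            event = json.loads(line.decode())
            if event.get("type") != "ref-updated":
                continue
            update = event["refUpdate"]
            if update["project"] != "openstack/requirements":
                continue
            if update["oldRev"] != ZEROS:
                continue  # only branch creation, not ordinary pushes
            branch = update["refName"].replace("refs/heads/", "")
            # hypothetical hook; the command and flag are made up for illustration
            subprocess.check_call(["run-mirror", "--branch", branch])

    if __name__ == "__main__":
        watch()
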
19:32:29 <fungi> another point in question here is, once we have requirements-sync enforced on projects, do we need to keep carrying forward the old package versions?
19:32:33 <jeblair> fungi: we'll want to make sure that the release documentation for the m-p branching process is updated for this.
19:32:40 <fungi> absolutely
19:33:20 <jeblair> fungi: it doesn't seem like we do need to carry those.
19:33:32 <jeblair> fungi: maybe clean them up after 48/72 hours or something?
19:33:51 <jeblair> fungi: (to give us a chance to pin them if something breaks)
19:34:02 <fungi> i wouldn't think so either, just wanted to make sure i didn't go to extremes to populate new branch mirrors with the contents of the old ones indefinitely
19:35:02 <jeblair> we may also need requirements branches for feature branches
19:35:25 <fungi> cleaning up old packages will be an interesting endeavor as well... especially if different python versions cause different versions of a dependency to get mirrored
19:35:56 <clarkb> jeblair: I think it is fair to make feature branches dev against master
19:36:08 <clarkb> s/master/master requirements/
19:36:11 <fungi> so we can't necessarily guarantee that if two versions appear in the mirror, the lower-numbered one should be cleared out
19:36:16 <jeblair> fungi: yeah, i don't think there's any rush to do that.  if you were mostly asking about carrying existing things to new branches, then i think nbd.
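
(A sketch of the cleanup policy discussed here: honor the 48/72 hour grace period and only delete files the latest mirror run no longer wants, instead of guessing from version numbers, which, as fungi notes, would break when different python versions pull different versions of a dependency.)

    import os
    import time

    GRACE_HOURS = 72  # the 48/72 hour pin-if-it-breaks window discussed above

    def prune(mirror_root, still_wanted):
        """still_wanted: set of paths the latest mirror run downloaded."""
        cutoff = time.time() - GRACE_HOURS * 3600
        for dirpath, _dirnames, filenames in os.walk(mirror_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if path in still_wanted:
                    continue
                if os.path.getmtime(path) > cutoff:
                    continue  # recently mirrored; keep it so we can still pin
                os.remove(path)
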
19:36:24 <clarkb> otherwise you won't be able to sanely test keystone branch foo against everything else master in tempest
19:36:54 <jeblair> clarkb: the feature branch requirements would see that it is done.
19:36:57 <fungi> jeblair: that was mostly it. create the new mirror from scratch when the branch happens, vs prepopulating with a copy of the old mirror first
19:37:34 <clarkb> jeblair: the problem with it is if nova master conflicts with keystone foo
19:37:37 <jeblair> clarkb: if reqs has feature branches, then devstack will now force the feature reqs to be the deps for all the projects.  that will either work or not in the same way as master.
19:37:39 <clarkb> then all of your testing fails
19:38:06 <clarkb> jeblair: I think the net effect is you end up with something a lot like the master requirements
19:38:11 <jeblair> clarkb: yes, such a requirements change would not be allowed to merge to the feature branch.
19:38:22 <jeblair> clarkb: so that's the system working as intended.
19:38:41 <clarkb> yup, but the diff between master and foo requirements will be tiny
19:38:44 <jeblair> clarkb: if two openstack projects need updating to work with an updated requirement in the feature branch, then two openstack projects will need that feature branch.
19:38:57 <clarkb> gotcha
19:38:57 <jeblair> clarkb: tiny but important.
19:39:19 <jeblair> (important enough for you to go through all this hassle to get a new version of something.  :)
19:39:49 <mordred> ++
19:39:50 <jeblair> anything else?
19:40:11 <jeblair> (i really like the way this is heading)
19:40:19 <jeblair> #topic  Gerrit 2.6 upgrade (zaro)
19:40:42 <jeblair> we forgot to link zaro's change last meeting (or maybe it wasn't pushed yet)
19:40:50 <jeblair> zaro: do you have the link to your gerrit patch handy?
19:41:01 <zaro> yes, give me a min.
19:41:31 <fungi> zaro: btw i played around with your wip feature poc and it seems to do what we want
19:41:39 <mordred> I agree
19:41:40 <fungi> excellent stuff
19:41:41 <mordred> I like it
19:41:53 <jeblair> of: [cgit, py3k, git-review, and storyboard], which do we need to talk about at this meeting?
19:41:53 <zaro> #link https://gerrit-review.googlesource.com/48254
19:41:57 <zaro> #link https://gerrit-review.googlesource.com/48255
19:42:08 <zaro> it's been uploaded for almost a week, no love yet.
19:42:09 <clarkb> jeblair: cgit and py3k have recent changes
19:42:25 <fungi> i have py3k and g-r updates, but they're not critical to be covered
19:42:47 <zaro> so just waiting now..
19:43:04 * ttx lurks
19:43:08 <clarkb> zaro: should we try and get people to review that change?
19:43:21 <zaro> i could make a request.
19:43:22 <clarkb> zaro: I am not sure what their review backlogs look like, but asking in IRC might help
19:43:36 <zaro> probably mfick
19:43:52 <jeblair> zaro: david did have a nit on that second change
19:43:58 <zaro> he just came back from vacation.
19:44:01 <jeblair> zaro: very cool
19:44:27 <zaro> jeblair: it was a nit, but he didn't give a score.
19:44:53 <zaro> jeblair: i was waiting for maybe someone else to score before fixing the nit
19:44:54 <jeblair> zaro: anyway, you might want to update that, and then, yeah, start pestering people.  :)
19:45:09 <zaro> jeblair: ok. will give that a try.
19:45:22 <jeblair> #topic cgit server status
19:45:41 <pleia2> so, we're pretty close, 2 of the 3 reviews outstanding should be pretty good to go
19:46:23 <jeblair> someone keeps scope-creeping at least one of them, sorry.  :)
19:46:35 <pleia2> yeah, that's the 3rd :)
19:46:43 <fungi> pleia2: i see the replication and https changes open... what's the third?
19:46:49 <pleia2> fungi: ssl
19:46:51 <pleia2> oh
19:46:57 <clarkb> I intend on updating my reviews now that there are new patchsets (I believe)
19:47:03 <pleia2> fungi: git daemon https://review.openstack.org/#/c/36593/
19:47:10 <jeblair> pleia2: did we agree https only ?
19:47:12 <fungi> ahh, yes
19:47:14 <pleia2> jeblair: yeah
19:47:26 <pleia2> patched ssl one accordingly
19:47:34 <jeblair> that seems reasonable to me, fwiw.  https (no http) and git:// otherwise.
19:47:38 <clarkb> ++
19:47:53 <fungi> wfm
19:48:06 <pleia2> once these are done there is just cleanup and theming if we want
19:48:08 <mordred> ++
19:48:23 <jeblair> ttx: you may be interested in the "Requirements and mirrors" topic earlier in this meeting
19:48:29 <mordred> I'd like theming - but I'm fine with that coming later
19:48:55 <clarkb> yeah, I think right now having performant fetches on centos is more important than theming :)
19:48:56 <pleia2> also, I'm flying to philly on thursday for fosscon, so my availability will be travel-spotty as I prep and attend that
19:49:03 <ttx> jeblair: reading backlog
19:49:07 <jeblair> ttx: short version: we will probably need to create a m-p branch of the openstack/requirements repo before doing any other project m-p branches.
19:49:24 <clarkb> pleia2: have fun
19:49:29 <pleia2> clarkb: thanks :)
19:49:34 <jeblair> #topic Py3k testing support
19:49:52 <fungi> we have a couple outstanding moving parts which need to get reviewed
19:49:57 <fungi> #link https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:py3k,n,z
19:50:18 <fungi> clark has a patch out there to start non-voting py33 tests on all the clients after those above go in
19:50:26 <clarkb> #link https://review.openstack.org/#/c/40323/
19:50:31 <ttx> jeblair: I'll need to do a m-p for swift 1.9.1 tomorrow or thursday morning
19:51:08 <jeblair> ttx: if it's just for swift, it shouldn't be an issue
19:51:30 <jeblair> ttx: it only matters if the requirements for master and m-p diverge, which i am certain will not happen for swift.  :)
19:52:02 <jeblair> ttx: so you should be able to ignore it for this, and hopefully we'll have all the machinery in place by h3
19:52:12 <ttx> jeblair: ack
19:52:18 <jeblair> fungi: i will promote those to the top of my queue
19:52:34 <fungi> jeblair: awesome. thanks
19:52:58 <fungi> other than that, the projects which are testing on py33 seem to be doing so successfully
19:53:06 <fungi> not much to add on that topic
19:53:12 <jeblair> very excited about running py33 tests for the clients, since zul is actually submitting changes there!
19:53:16 <jeblair> #topic Releasing git-review 1.23 (fungi)
19:53:32 <fungi> this was on the agenda just as a quick heads up
19:53:56 <mordred> fungi: cool. did we get anywhere with installing the hook differently?
19:54:05 <mordred> fungi: so that it applies to merge commits too?
19:54:07 <fungi> we have a contributed patch to convert git-review to pbr
19:54:24 <fungi> #link https://review.openstack.org/#/c/35486/
19:54:26 <fungi> and i want to tag a release on the last commit before that
19:54:48 <mordred> ++
19:54:54 <fungi> just so if we have installation problems in the following release for some users, the fallback is as up to date as possible
19:55:07 <fungi> i've been using the current tip of master for weeks successfully
19:55:20 <fungi> have one cosmetic patch i want to cram in and then tag it, probably later this week
19:55:35 <jeblair> fungi: sounds good
19:55:40 <mordred> yup
19:55:42 <fungi> the pbr change is exciting though, because we have integration tests which depend on that
19:55:48 <mordred> very exciting
19:55:54 <jeblair> ah, that's what's holding that up
19:56:01 <fungi> #link https://review.openstack.org/#/c/35104/
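
(For context, the usual shape of a pbr conversion like the one linked above: setup.py shrinks to a small shim and the metadata (name, author, entry points) moves into setup.cfg. This is the generic pbr pattern, not the patch itself.)

    # pbr-style setup.py: all package metadata lives in setup.cfg and is
    # read by pbr at build time.
    import setuptools

    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True)
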
19:56:11 <jeblair> #topic Storyboard (anteaya)
19:56:19 <fungi> also i want to turn on the tests as no-ops first so we can gate them on themselves
19:56:19 <jeblair> anteaya: 4 minutes.  :(
19:56:26 <anteaya> hello
19:56:31 <ttx> hi!
19:56:39 <anteaya> well most of my questions from last week were answered
19:56:47 <anteaya> I hadn't set up the db properly
19:57:06 <anteaya> I have a patch waiting to merge that adds those instructions to the readme
19:57:07 * ttx started working on the project group feature today
19:57:34 <anteaya> basically what I do know is that ttx wants to stay with django 1.4, correct ttx?
19:57:36 <jeblair> ttx: https://review.openstack.org/#/q/status:open+project:openstack-infra/storyboard,n,z
19:57:47 <jeblair> ttx: i think both of those changes could use some input from you if you have a min.
19:57:57 <anteaya> and other than that I am still trying to get the models straight
19:58:10 <ttx> jeblair: oh. I wasn't notified on those for some reason
19:58:17 <anteaya> that is about it from me, ttx, anything else
19:58:26 <jeblair> ttx: ah, you may need to add it to your watched projects list in gerrit
19:58:35 <jeblair> https://review.openstack.org/#/settings/projects
19:58:37 <ttx> doing that right now
19:58:44 <fungi> ttx: i don't think i've started watching it in gerrit either. good reminder
19:58:52 <jeblair> (used to happen as part of the lp group sync)
19:59:23 <mordred> ttx: I personally don't see any reason to stay with 1.4
19:59:28 <mordred> but defer to you
19:59:37 <clarkb> upgrade to all the new things
19:59:45 <ttx> no good reason, except that supporting 1.4 is not really causing an issue
20:00:02 <fungi> oh, and reminder to everyone i'm working from seattle the week of the 18th, then mostly unreachable the week after that
20:00:10 <anteaya> ttx if I were able to put together a patch to upgrade to 1.5, would you look at it?
20:00:11 <ttx> I'm fine with 1.5+ if we end up using something that is only 1.5 :)
20:00:12 <clarkb> django is tricky because supporting multiple versions isn't super straightforward aiui
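
(One concrete example of the multi-version friction clarkb mentions: get_user_model() only exists from Django 1.5 on, so code targeting both 1.4 and 1.5 tends to carry shims along these lines; illustrative, not from storyboard.)

    # Django 1.4/1.5 compatibility shim: fall back to the fixed auth.User
    # model when get_user_model() (added in 1.5) is unavailable.
    try:
        from django.contrib.auth import get_user_model   # Django >= 1.5
    except ImportError:                                   # Django 1.4
        from django.contrib.auth.models import User

        def get_user_model():
            return User
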
20:00:28 <jeblair> thanks all!
20:00:32 <jeblair> #endmeeting