19:02:44 #startmeeting infra
19:02:45 Meeting started Tue Aug 13 19:02:44 2013 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:46 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:48 The meeting name has been set to 'infra'
19:02:56 #topic Backups (jeblair)
19:03:27 clarkb and mordred were working on getting us an account in hpcloud we could use for backups
19:03:40 clarkb, mordred: do you know the current status of that?
19:04:13 the account is there, mordred has apparently requested that it be comped, but I haven't heard if that has actually been done yet
19:04:22 mordred: have you gotten a response on that request yet?
19:04:26 clarkb: any way you can find out?
19:04:40 it's starting to seem like mordred won't be here.
19:05:01 I don't think I have direct access to the things that will tell me, but I can ask around today to see if anyone else can dig it up
19:05:14 #action clarkb look into new hpcloud account status
19:05:46 so the other thing is that mordred was supposed to write a database backup script
19:06:15 has anyone seen anything come out of that?
19:06:22 I haven't
19:06:34 i haven't noticed a review for that yet, no
19:06:51 I know pcrews attached an example script to the bug, but I don't think mordred built on that
19:07:16 yeah, that's not at all what we need
19:07:26 we basically just need "mysqldump > file"
19:07:58 oh in that case we can just copy pasta the equivalent in the etherpad module out into its own thing and use it where necessary
19:08:26 copy pasta. hmmmm.
19:08:28 I wrote the one in etherpad and can split it out if that is what we think we want to do
19:08:34 clarkb: that would be great.
19:08:49 ok I will put that at the top of this afternoon's list to avoid it lagging any further
19:09:04 clarkb: with that, we should be able to just "include bup" and "include mysql_backup" or something in the manifest of any host we want to back up.
19:09:10 yup
19:09:36 #action clarkb split etherpad mysql backup out into distinct puppet module
19:09:40 clarkb: i also have one i use on my personal servers i can send you, for an alternative perspective
19:09:49 oh, i forgot to:
19:09:50 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:09:55 #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-06-19.01.html
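A minimal sketch of the "mysqldump > file" approach discussed above; the database name, backup path, and credentials file are illustrative assumptions, not the actual script later split out of the etherpad module:

    #!/bin/bash
    # Dump one database to a timestamped file that a file-based backup
    # tool like bup can then pick up. DB and BACKUP_DIR are placeholders.
    DB=etherpad
    BACKUP_DIR=/var/backups/mysql
    mkdir -p "$BACKUP_DIR"
    mysqldump --defaults-file=/etc/mysql/debian.cnf --single-transaction "$DB" \
        | gzip > "$BACKUP_DIR/$DB-$(date +%Y%m%d%H%M%S).sql.gz"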
19:10:20 #topic Project renames (jeblair)
19:10:29 this is another mordred topic really...
19:10:44 istr that mordred said he could do the project renames this weekend.
19:10:44 tripleo project renames?
19:10:49 they've been languishing a while...
19:11:02 i cannot, likely, as i'll be flying cross-country a good chunk of the weekend
19:11:10 so i think our options are: punt another week, schedule them and do them ourselves, or schedule them on mordred's behalf.
19:11:23 note that even weekends will start getting busy starting next week
19:11:31 approaching the freezes
19:11:59 I can assist if we decide to go with this weekend. But I will be driving to and from Portland Saturday morning and Sunday afternoon, so something in the middle is easiest for me
19:12:12 yeah. i don't have the bandwidth to manage these myself at the moment.
19:12:45 yay summer
19:12:49 i believe the list is: tripleo, pypi-mirror (delete), and puppet-quantum
19:12:52 also mordred will be AFK beginning next week
19:13:03 burning the man, yes
19:13:15 I'm already burning here fwiw
19:13:42 countries on the equatorial line do that to you
19:13:49 jeblair: and pyghmi
19:13:59 or is tripleo == pyghmi ?
19:14:21 so it doesn't sound like any of us can be primary on this one, so i think we should not schedule anything ourselves, give mordred the list, and hopefully he can schedule and do the work soon.
19:14:27 clarkb: no, tripleo is an org move
19:14:42 clarkb: of os-* and such
19:14:55 clarkb: pyghmi is python-ipmi -> pyghmi (both in stackforge)
19:14:59 jeblair: gotcha and I agree with giving mordred the list
19:15:29 yes, sadly
19:15:31 #action mordred schedule project renames: tripleo, python-ipmi, puppet-quantum, pypi-mirror (delete)
19:15:47 #topic Tarballs move (jeblair)
19:16:02 we need to move tarballs to static.o.o
19:16:09 because they are still on old-wiki
19:16:45 i think that's a dns change, plus rsync. probably best done with jenkins.o.o offline (to prevent new uploads during the move)
19:16:48 jeblair: there was a recent bug about making their refresh atomic, in case you want to handle both at the same time
19:17:06 bug 1211717
19:17:08 Launchpad bug 1211717 in openstack-ci "master tarballs publication is non atomic" [Medium,Triaged] https://launchpad.net/bugs/1211717
19:17:24 i suspect that will take some development work
19:17:43 fungi: yes, if it's just dns + rsync it's probably best fixed independently
19:18:00 i think the solution to that is the artifact upload script we've talked about
19:18:13 (also relates to logs)
19:18:18 yeah, i suggested it in the bug
19:18:41 basically instead of using the scp module in jenkins, we have a script upload artifacts to a web service
19:19:03 which can then put them (somewhere: filesystem, swift, doesn't matter), and then we have another web service serve them
19:19:20 i think the log htmlifier is the first part of the second service
19:19:36 agreed
19:20:17 i wonder if we could modify the scp plugin to fix bug 1211717
19:20:19 Launchpad bug 1211717 in openstack-ci "master tarballs publication is non atomic" [Medium,Triaged] https://launchpad.net/bugs/1211717
19:20:44 perhaps it could upload to a tempfile and then ssh mv the result
19:21:13 jeblair: I think that would be one alternative
19:21:31 (thinking short-term here, because obviously the other thing is very long-term)
19:21:47 jeblair: shouldn't affect the move to static.o.o though
19:21:59 ttx: correct
19:22:02 yeah, i described that as an alternative (though didn't specifically mention the development would need to take place in the jenkins-scp plugin, it was implied)
19:23:07 i guess i said "script" in the bug, but was trying not to get too far into jenkins detail weeds there if it was a non-optimal path
19:23:33 i'd like to stay focused on getting jenkins/devstack-gate scaled out in anticipation of the increased load around the freeze
19:24:03 definitely
19:24:30 fungi, clarkb: so unless you want to volunteer to lead the move very soon, i'd probably defer it until after the freeze
19:25:09 there are a lot of freezes
19:25:10 i don't think it's urgent, and would rather not risk impact to release activities
19:26:05 fungi: ++
19:26:32 so let's leave that on the agenda to schedule when we have bandwidth and have a better handle on how to fit it into release activities.
19:27:05 the actual downtime shouldn't be too long though; we just need some prep time and a good window to fix problems.
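The tempfile-plus-rename idea floated for bug 1211717, sketched in shell; the host and paths are hypothetical, and the real fix would need to live in the jenkins scp plugin rather than a standalone script:

    # Upload under a hidden temporary name first...
    scp nova-master.tar.gz static.openstack.org:/srv/tarballs/.nova-master.tar.gz.tmp
    # ...then rename into place; rename() within one filesystem is atomic,
    # so downloaders never see a partially written tarball.
    ssh static.openstack.org \
        mv /srv/tarballs/.nova-master.tar.gz.tmp /srv/tarballs/nova-master.tar.gz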
19:27:30 #topic Asterisk server (jeblair, pabelanger, russellb)
19:28:44 pabelanger, russellb: do you want to do more 'internal' testing of asterisk, or should we get some other folks using it (reed + user groups, foundation staff, ...?) to widen testing a bit?
19:29:10 note cacti is currently off for security reasons which may make looking at numbers slightly difficult
19:29:50 it's still collecting stats, just not viewable at the moment
19:30:15 but we're not losing historical trending data afaik
19:30:30 if we really needed a graph, we could create one too
19:30:36 * fungi didn't comment out the snmp cron jobs
19:30:46 jeblair, ya, a load test would be good to do
19:30:57 that way we can see how well the server will hold up
19:31:37 pabelanger: ok, so we should schedule a time where we can all try to call in
19:31:46 jeblair, yes, I think that will work
19:32:30 how about this friday?
19:32:51 Friday works for me, anytime but around lunch pacific
19:32:54 wfm
19:33:05 i'm cool with any time friday
19:33:06 works here
19:33:15 zaro: wfm
19:33:31 jeblair: post the precise time on the infra list and I'll call in if I'm around
19:34:38 #action jeblair send announcement to infra list for call at 10am pacific friday
19:35:03 that's uh, 1700 utc i think
19:35:14 yep
19:36:11 #topic cgit server status (pleia2)
19:36:22 pleia2: you're up!
19:36:48 mordred's patch to create the repos on git.o.o was merged earlier, but the script won't trigger automatically until projects.yaml is updated again
19:36:56 do we want to help it along?
19:37:03 pleia2: i ran the updated create-cgitrepos script on git.o.o just a few minutes ago, but it didn't chown the repos to the cgit user
19:37:16 so they're still root:root owned
19:37:25 ah right, that script runs as root, not the cgit user
19:37:40 I'll write a chown patch
19:37:54 i'll clear those out and we can try again when that merges
19:37:59 thanks
19:38:10 once that's done, replication should start working and we should be good
19:38:11 we can probably merge https://review.openstack.org/#/c/41643/ soon, which is a projects.yaml update
19:38:42 (if we want to use that to test the git work)
19:39:02 that would be preferable to me manually triggering the script, definitely
19:39:08 exercises more of the automation
19:39:24 pleia2: so close! :)
19:39:32 yes, this week, for real this time!
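For reference, the ownership fix being discussed amounts to something like the following; the repository path is an assumption about the git.o.o layout, and only the cgit user/group comes from the discussion itself:

    # create-cgitrepos runs as root, so freshly created bare repos end up
    # root:root owned; hand them to the cgit user so replication can write.
    for repo in /var/lib/git/*.git; do
        chown -R cgit:cgit "$repo"
    done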
19:39:44 #topic OpenPGP workflow (fungi)
19:40:23 this was more of just a general heads up that with the formation of the new release program, i'm pushing to start a strong openpgp web of trust for the project
19:40:39 ++
19:40:43 we're a bit behind the curve there given our size, but the speed at which the project grew makes that understandable
19:41:09 i'm putting together some recommendations in a wiki article this week for general workflow around key validation and signing
19:41:28 great
19:41:32 sounds good
19:41:39 and then we'll probably aim to start doing organized key signing parties a la debian/ubuntu as of the j summit
19:41:41 jeblair: we're having ubuntu hour + debian dinner wednesday, sign keys? ;)
19:41:53 fungi: do you intend on trying to have the jenkins jobs check the keys as part of the release process too?
19:42:00 clarkb: that is a goal
19:42:21 clarkb: more than that, i've been playing with ways to validate our tarballs on a trusted slave prior to signing them with a release key too
19:42:55 fungi: would the release team still sign the final releases personally, or with a key that is not owned by a bot?
19:42:58 a release automation key would be signed at a minimum by the release team members and so on
19:43:33 jeblair: we could certainly have a mechanism to take individual detached signatures in the process, sure
19:44:03 so release team members could still sign them directly, though it would add an additional delay into things like client uploads to pypi
19:44:17 fungi: ok. having things automatically signed by jenkins is good -- but in my mind all it means is "this thing was signed by jenkins"
19:44:23 yep
19:44:31 fungi: which i don't trust _nearly_ as much as "this thing was signed by thierry"
19:44:37 ++
19:44:39 it's something we'll want to discuss the pros and cons of when we get closer to making it work
19:44:46 *nod*
19:45:32 anyway, i'll give you all a heads up when i've got some initial documentation up, and i look forward to signing your keys in hk perhaps
19:45:53 fungi: cool :)
19:46:02 and thanks!
19:46:11 my pleasure
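A sketch of the detached-signature and web-of-trust workflow under discussion, using standard gpg commands; the key id and tarball name are illustrative placeholders:

    # A release key (or a release team member) signs the artifact:
    gpg --armor --detach-sign nova-2013.2.tar.gz   # writes nova-2013.2.tar.gz.asc
    # Anyone can then verify the artifact against the signature:
    gpg --verify nova-2013.2.tar.gz.asc nova-2013.2.tar.gz
    # Web of trust: after comparing a fingerprint in person, sign the key:
    gpg --fingerprint 0xDEADBEEF
    gpg --sign-key 0xDEADBEEF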
19:46:17 #topic Open discussion
19:46:41 the log htmlifier should be working on logs.o.o now
19:47:04 and logstash is only indexing non-debug screen log lines now and is much happier as a result
19:47:07 cool!
19:47:29 when sdague gets back I intend to start hitting that system with actual work
19:47:51 the devstack-gate node pooling code is going to become its own project called 'nodepool'
19:48:01 fungi: I saw signs that you might be working on capping stable/folsom reqs?
19:48:11 fungi: or does it have nothing to do with it?
19:48:28 ttx: yes, i'm seeing what needs to happen to backport the requirements enforcement to grizzly and maybe also folsom
19:49:10 we will drop folsom support/testing after the icehouse summit right?
19:49:12 fungi: err.. I was talking about introducing caps to stable/folsom to prevent it from breaking while nobody looks after it
19:49:20 or when havana releases? is it worth putting effort into it?
19:49:31 clarkb: hrm. this is blank for me: http://logs.openstack.org/59/38259/9/check/gate-tempest-devstack-vm-testr-full/23210d1/logs/screen-n-net.txt.gz
19:49:53 jeblair: :( looks like something broke
19:50:23 500 internal server error, I will poke at it when I get a chance.
19:50:32 ttx: oh, just for folsom? yeah i started figuring out how to identify the transitive dependencies of each project so we can actually cap them, and i think we decided that it was fine not to care about supporting dependencies which have security fix releases except on a case-by-case basis (in other words, cap to today's version number exactly)?
19:50:34 if necessary we can revert the apache change that enabled it on logs.o.o
19:50:53 o/
19:50:55 fungi: that would work for me. We can still bump manually
19:50:55 hey all
19:51:26 fungi: keep me posted if you make progress on that. Scripting it would be nice since we'll need to do it every 6 months
19:52:01 ttx: it will be different for grizzly i think since we'll want to integrate it into the openstack/requirements enforcement tooling
19:52:05 (we need to do folsom yesterday and grizzly some time after the icehouse summit)
19:52:09 mordred: just in time. :)
19:52:28 woot :)
19:52:35 ttx: whereas folsom i think we can just do by hand as a proof of concept for figuring out what actually works
19:52:43 * mordred apparently had to go to a bank in a cellar in the middle of brasilia
19:52:46 fungi: +1
19:52:59 mordred: to find wifi?
19:53:24 mordred: is that where you keep your offshore funds and casks of amontillado
19:53:34 ttx: yes
19:53:35 fungi: yes
19:53:52 any way I can be useful to anyone?
19:54:10 mordred: we made a list of project renames, but none of us can take the lead on it this week, so it's up to you to schedule/do them.
19:54:22 mordred: clarkb and i may be able to pitch in in a supporting role.
19:54:22 jeblair: great. I will do that
19:54:24 mordred: all clear to play with merge-milestone-proposed-back-to-master on swift 1.9.1
19:54:24 I'm holding off on removing the branch
19:54:32 ttx: awesome. thank you
19:54:42 i had two patches for gerrit WIP votes. One got accepted, https://gerrit-review.googlesource.com/48255 :) The other one didn't, https://gerrit-review.googlesource.com/48254 :(
19:54:54 ttx: I think I want to do it manually a couple of times before automating it - just to make sure I grok all the things - is that ok with you?
19:55:14 mordred: sure. We'll have havana-3 to play with too
19:55:17 awesome
19:55:52 zaro: looks like it's still collecting suggestions at least, so they haven't vetoed it
19:56:26 fungi: yeah, i think it's pretty much dead
19:57:04 zaro: you don't think you can implement their suggestions?
19:57:28 jeblair: ohh, i just got the feedback, so that's what i plan to do next.
19:57:58 fungi: i just meant that patch was dead. need to redo.
19:58:18 oh, ok. patch - yes, idea - no. :)
19:58:49 zaro: ahh, okay. yeah, minor stumbling block on the road to acceptance there, from the looks of it. none of the comments seemed to say "nah, we don't want the feature"
19:59:55 thanks everyone!
19:59:57 #endmeeting
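As a postscript, the "cap to today's version number exactly" idea for stable/folsom could be approximated with pip alone; the file names assume the folsom-era tools/pip-requires layout, and the exact procedure here is a sketch rather than what was actually done:

    # Install the branch's uncapped requirements into a throwaway virtualenv,
    # then freeze the full transitive dependency set to exact versions.
    virtualenv /tmp/folsom-caps
    /tmp/folsom-caps/bin/pip install -r tools/pip-requires
    /tmp/folsom-caps/bin/pip freeze > pip-requires.capped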