19:02:31 #startmeeting infra
19:02:32 Meeting started Tue Dec 17 19:02:31 2013 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:36 The meeting name has been set to 'infra'
19:02:36 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:02:41 #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-12-10-19.02.html
19:03:03 jeblair: before you go reassigning the tarballs action items to yourself, don't. almost done...
19:03:43 fungi: oh cool, thanks. :)
19:03:54 ninja fungi
19:04:07 fungi: how's the quota increase request?
19:04:27 #topic actions from last meeting
19:04:34 o/
19:04:35 done. our openstackci account can go up to 25tb in rackspace now, and up to 100 cinder volumes
19:04:41 fungi: awesome
19:04:41 fungi: yaaay!
19:04:53 fungi: woohoo
19:04:54 i've added a 200gb volume for tarballs on static.o.o
19:04:59 rsync'd the contents in
19:05:01 what did you say in the request?
19:05:03 checked out the vhost
19:05:34 lowered the ttl on the dns record to 5 minutes
19:05:34 one minor cosmetic issue outstanding... can't get the new filesystem usage to show up in cacti
19:05:54 restarted snmpd on static, re-ran the graph creation scripts on cacti manually, no good
19:05:59 fungi: cool, so i think next maybe just put jenkins.o.o in shutdown mode so it doesn't generate new tarballs, then do an rsync/dns switch
19:06:33 agreed. catch-up rsyncs are taking on the order of 10 seconds it looks like, so this should go quickly
19:06:41 ++
19:07:07 #action fungi move tarballs.o.o to static.o.o
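
For context, the cutover plan agreed above is the usual stop-writers / final-sync / DNS-flip pattern. A minimal sketch of the catch-up rsync step in Python, with hypothetical paths and hostnames standing in for the real production values:

    import subprocess
    import time

    # Hypothetical source and destination; the real paths were not stated
    # in the meeting.
    SRC = "tarballs.openstack.org:/srv/tarballs/"
    DST = "/srv/static/tarballs/"

    start = time.time()
    # -a preserves ownership/permissions/times; --delete keeps the copy exact
    subprocess.check_call(["rsync", "-a", "--delete", SRC, DST])
    print("catch-up rsync took %.1fs" % (time.time() - start))

With writes stopped on the old server, a final run on the order of 10 seconds keeps the window for the DNS switch comfortably short.
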
19:07:40 slightly closer to tearing down the old wiki server
19:07:46 almost there!
19:07:50 #topic Tripleo testing (lifeless, pleia2)
19:08:12 pleia2: anything to coordinate on here?
19:08:31 I don't think so
19:08:43 cool
19:08:51 I now have derekh's setup to test, but that's more on my side than infra just yet
19:09:54 pleia2: do you know if anyone has volunteered/been assigned to do the ipv6 nodepool/jenkins work?
19:10:12 jeblair: afaik, no one yet
19:10:16 k
19:10:22 #topic Savanna testing (SergeyLukjanov)
19:10:25 hey
19:10:32 SergeyLukjanov: anything new here?
19:10:43 everything is ok, waiting for review of the tempest patches
19:10:50 nothing new atm
19:11:09 SergeyLukjanov: the jobs are running though, correct?
19:11:15 SergeyLukjanov: they just don't actually test much yet
19:11:16 yup!
19:11:32 only the api for node group templates
19:11:38 waiting for review
19:11:45 that's the best way to go -- things will be self-testing as they go into tempest
19:11:49 and then we'll add tests for the rest of the api endpoints
19:12:14 hope to receive some reviews this week
19:12:23 the tempest people are very busy from what I see
19:13:11 SergeyLukjanov: cool, thanks
19:13:11 btw we're starting to use zuul+nodepool to run savanna-ci, and I hope to come back with some patches to support neutron in nodepool
19:13:39 SergeyLukjanov: yeah, that'd be great
19:13:58 the changes are pretty small atm
19:14:15 and I'd like to start a discussion about dib jobs
19:14:27 but I'm not prepared atm, so let's do it offline
19:14:36 I'll try to prepare some initial questions
19:14:36 (though i hope you don't have to run savanna-ci much longer as we move things into openstack)
19:14:59 jeblair, we'll need it to run slow tests
19:15:10 like sequential scaling of clusters
19:15:32 well, as much as we can :)
19:15:41 yep :)
19:15:54 we'd like to have at least all tests in tempest
19:16:07 and run them if needed in savanna-ci, but from tempest
19:16:20 * fungi imagines a 100-node hadoop cluster being spun up for each change
19:16:35 :)
19:16:37 fungi: we'll need you to write more nice quota requests
19:16:42 we've tested 200-node clusters
19:16:45 #topic Trove testing (mordred, hub_cap)
19:17:01 heyo jeblair
19:17:02 hub_cap: heya!
19:17:21 so SlickNik has updates (he's working on the dib elements)
19:17:25 hey people.
19:17:34 #link https://blueprints.launchpad.net/trove/+spec/trove-tempest
19:17:54 i think next he's going to work on the image caching, right SlickNik? ;)
19:18:06 I didn't have much of a chance to work on this last week, but I'm going to be working on it 100% this week.
19:18:34 Yup, image caching and devstack-vm-gate changes to run the tests.
19:19:25 A couple of other folks from the trove team signed on to get started moving trove integration tests to tempest.
19:19:34 and we have some people from mirantis working on server-side tests, and we have some client tests in a review (iirc) already
19:20:23 SlickNik, hub_cap: yes, tarballs.o.o is where we will stick the images we build
19:20:38 flying-bond (Debashish) and dlakunchikov (Dmitri)
19:21:36 hooray for progress
19:22:03 hub_cap: sounds good; any questions or blockers atm?
19:22:22 none from myself
19:22:26 <3
19:22:29 jeblair: none at the moment. I'll likely be bugging people for reviews this week, so stay tuned!
19:22:43 * hub_cap turns a prop radio knob
19:22:54 cool, looking forward to it!
19:23:04 #topic Jenkins 1.540 upgrade (zaro, clarkb)
19:23:16 so that happened, briefly, then unhappened.
19:23:43 the reason for the unhappening was lost or truncated logs, was it not?
19:23:44 so i'm trying to set up the latest jenkins with the scp plugin to see what happened there.
19:23:45 ya it was sad
19:24:00 zaro: cool. clarkb and i have both worked on that plugin
19:24:10 anteaya: correct, the new version of jenkins didn't play nice with the scp plugin console copying
19:24:16 ah
19:24:18 :(
19:24:59 zaro: i think if you write a job that emits 10 or 20k lines to the console, that will probably be enough to replicate
19:25:36 yeah, i have it set up in my dev env now, but having difficulties even getting the plugin to connect to a server.
19:25:54 still working on it.
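
To make the reproduction suggestion above concrete: any build step that floods the console should do. A minimal sketch, assuming the 20k figure from the discussion and arbitrary line content:

    import sys

    # 20000 matches the "10 or 20k lines" estimate above; the text on each
    # line is just padding.
    for i in range(20000):
        sys.stdout.write("console line %06d: padding for the scp plugin test\n" % i)
    sys.stdout.flush()

Run as a Jenkins build step, this should produce a console log long enough to show whether the scp plugin truncates it on copy.
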
19:26:10 #topic Maven clouddoc plugin move (zaro, mordred)
19:26:33 #link https://etherpad.openstack.org/p/java-release-process
19:26:50 haven't heard from sharwell since last wed., 12/11.
19:27:09 should we just go ahead with this? #link https://review.openstack.org/#/c/58349/
19:27:31 it looks like i need to go into the sonatype jira and open a case requesting a dedicated groupId, based on subsequent info from dcramer
19:28:00 fungi: yes, that does need to happen
19:28:06 zaro: so i think that brings us back to the etherpad i originally prepopulated with all the info they want in the jira ticket fields
19:28:23 so is the situation that you were coordinating with someone, and now someone else is involved in the process, with no access to the prior person?
19:28:28 need to figure out all the little details about our org.openstack.cloud.api
19:29:05 fungi: i think you'll need to coordinate with sharwell on those fields.
19:29:34 according to dcramer, sharwell can provide access.
19:29:38 okay, i guess they need to match what's on com.rackspace.cloud.api?
19:30:16 ohh, wait, that's right, this is a new groupId.
19:30:17 i'll find out
19:30:27 then i think you can just make it your own.
19:30:34 yeah, we have to ask sonatype to create it in maven central
19:30:35 i mean create it like new.
19:31:03 yes, you can probably create it without sharwell or dcramer input then.
19:31:05 okay. do we request org.openstack.cloud.api, or just org.openstack and then get the ability to create sub-ids, i wonder
19:31:17 i'll check with them
19:31:23 i think the former.
19:32:04 ohh, definitely the former. cannot create subs.
19:32:22 #topic Private gerrit for security reviews (zaro, fungi)
19:32:44 (the zaro-fungi part of the meeting continues)
19:32:46 just got good feedback from fungi on the change.
19:33:01 yes, nothing new ATM, just WIP
19:33:15 sorry it's taken me so long to find time to go over it, but i think it's close to what we need
19:33:27 good to hear!
19:33:38 probably worth bringing to the group is whether we want to start it on the latest gerrit rather than giving ourselves yet one more gerrit to upgrade from 2.4
19:33:53 I would be all for starting it on new gerrit
19:33:53 i think fungi mentioned that we should wait until the 2.8 upgrade.
19:34:00 or after the 2.8 upgrade
19:34:22 well, or just build it on 2.8 (there's not a lot special it really needs for the workflow we outlined)
19:34:23 yeah, ++
19:34:31 yeah, i think for the moment we can say we'll target the rollout of security after we deploy 2.8
19:34:45 but i'm fine with prioritizing the upgrade project, given limited resources
19:34:51 i don't think we should try to run it on 2.8 while we're running regular gerrit on 2.4
19:34:52 yep, totally agree
19:35:06 ++
19:35:36 though since we don't know for certain everything that will be involved in the 2.8 upgrade and its timeline yet, we should feel free to revisit that...
19:36:03 if it looks like it'll be 3 months till we upgrade and security is ready to go, it'd probably be better to go ahead and deploy security on 2.4 and upgrade it too.
19:36:19 okay
19:36:37 cool.
19:36:44 #topic Upgrade gerrit (zaro)
19:36:46 speaking of
19:37:06 Blueprint https://blueprints.launchpad.net/openstack-ci/+spec/gerrit-2.8-upgrade
19:37:29 Etherpad #link https://etherpad.openstack.org/p/gerrit-2.8-upgrade
19:37:40 jeblair: had a question in there about an alternative to the WIP plugin.
19:38:13 also I'm blocked waiting for approval on #link https://review.openstack.org/#/c/61542/
19:38:27 _david_ wrote up some text about the upgrade, so i copied it into the etherpad
19:38:30 #link https://etherpad.openstack.org/p/gerrit-2.8-upgrade
19:38:33 #link https://blueprints.launchpad.net/openstack-ci/+spec/gerrit-2.8-upgrade
19:38:40 and then annotated it with some of my thoughts
19:38:56 fungi, clarkb: ^ that's probably worth a read over and your initial feedback too
19:39:03 adding to my list
19:39:05 it has some deployment choices
19:39:11 jeblair: ok, bookmarking
19:39:24 zaro: on 61542 i think we were waiting for mordred to chime in, but he's been absent for a few days
19:39:41 yeah, if he doesn't vote this afternoon, let's aprv
19:40:14 wfm
19:40:20 i'd like to continue the tradition of unanimous approvals of ssh access if we can. :)
19:40:51 agreed
19:41:08 #topic Zuul release (2.0?) / packaging (pabelanger)
19:41:16 this might be stale...
19:41:22 and pabelanger isn't here...
19:41:27 #topic Open discussion
19:41:36 if I could get feedback here, that would be useful: http://lists.openstack.org/pipermail/openstack-infra/2013-December/000515.html
19:41:38 can we circle back to clouddocs?
19:41:49 working through publications, but we need branch names that make sense
19:42:15 pleia2: eek, i missed that mail, sorry.
19:42:20 not sure i got an answer whether we should just go ahead with https://review.openstack.org/#/c/58349/
19:42:35 zaro: i think we should sit on it for now.
19:42:40 and I also confirmed that we have all the history from https://github.com/openstack-ci/publications so it can be deleted
19:42:51 jeblair: np
19:43:37 pleia2: i think the concern originally expressed was that until we move those into branches in the new location (and out of old git commits in the history) they're not exposed anywhere easily consumable
19:44:01 fungi: fair enough, so we'll have that problem solved soon
19:44:07 yeah, so let's keep ci/pub around until we finish the other branches
19:44:13 and then delete
19:44:14 i think it's safe to hold off deleting from github until then
19:45:16 so it turns out that crm114 adds enough time to log processing that the workers got backlogged
19:45:38 i'm working on a logstash worker puppet module refactor that will let us colocate multiple logstash workers on a single host
19:45:57 to better utilize cpu there -- especially once we move the workers to rax performance nodes
19:45:59 oh, one other thing which sprang to mind for the tarballs move. the target path changes slightly on the new server, so i'll need to tweak the publisher location on jenkins.o.o for it after it quiesces
19:46:23 and we'll add some more nodes as well
19:46:40 it would be swell if we could graph the gearman queue...
19:46:54 clarkb: maybe we could have the log client splat that to statsd/graphite?
19:47:05 jeblair: right, I was thinking of adding that feature to geard directly
19:47:14 jeblair: unless you think that is better off living external
19:47:44 fungi: ok, is that a change to the publishers in jobs, or is it a change to the scp site in the global config?
19:47:53 fungi: will all the jjb jobs refer to static instead of tarballs.o.o now?
19:47:55 jeblair: the latter
19:47:57 clarkb: hrm; adding it to geard has a certain elegance
19:48:21 zaro: they won't. the jobs stay the same because the publisher target stays the same
19:48:33 jeblair: yeah, may be generally useful to other geard users
19:48:49 zaro: jeblair: it's the "Root Repository Path" which i'll need to update
19:49:12 clarkb: yep. we probably _don't_ want it for zuul though.
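
The queue-graphing idea above can be sketched without touching geard: the gearman admin protocol answers a "status" command with per-function counts, and a statsd gauge is a single UDP datagram. A minimal standalone poller, assuming default ports and a made-up metric name:

    import socket

    def gearman_queued_jobs(host="localhost", port=4730):
        # "status" returns one line per function -- name, total, running,
        # available workers (tab-separated) -- terminated by a lone ".".
        sock = socket.create_connection((host, port))
        sock.sendall(b"status\n")
        data = b""
        while not data.endswith(b".\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
        sock.close()
        queued = 0
        for line in data.decode().splitlines():
            if line == ".":
                break
            name, total, running, workers = line.split("\t")
            queued += int(total) - int(running)  # "total" includes running jobs
        return queued

    def send_gauge(metric, value, host="localhost", port=8125):
        # statsd gauges are plain UDP datagrams of the form "name:value|g"
        payload = ("%s:%d|g" % (metric, value)).encode()
        socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, (host, port))

    if __name__ == "__main__":
        # "gearman.queued_jobs" is a hypothetical metric name for this sketch
        send_gauge("gearman.queued_jobs", gearman_queued_jobs())

Building the same reporting into geard itself, as discussed, would avoid the extra polling daemon; the external form is just the simplest way to get a graph going.
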
19:49:58 oh, and i've proposed two changes to zuul that should allow us to start using templates in layout.yaml, which will make it much smaller
19:49:59 https://review.openstack.org/#/q/status:open+project:openstack-infra/zuul,n,z
19:50:27 saw the titles, haven't had time to review yet, but very excited by the promise they make
19:50:45 ohh, that would be nice!
19:51:31 oh, and stable/havana backports of the tox.ini sync are proposed now... https://review.openstack.org/#/q/branch:stable/havana+topic:tox-sync,n,z
19:52:05 mostly working; sdague and mtreinish helped me with missing/broken prereqs in devstack and tempest
19:52:14 fungi: cool
19:52:54 anyone have anything else?
19:52:56 fungi: is grizzly affected?
19:53:24 clarkb: grizzly affected the grenade upgrades for the havana patches, so there was some involvement there
19:53:39 for tempest anyway
19:54:06 thanks
19:54:11 though the havana and grizzly stable branches of most of the servers are back to being testable again as of this week
19:54:18 finally
19:55:24 I don't have anything else
19:55:42 yep, all done
19:55:59 thanks all!
19:56:01 nothing else for me
19:56:02 #endmeeting