19:01:05 <jeblair> #startmeeting infra
19:01:05 <openstack> Meeting started Tue Sep 10 19:01:05 2013 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:08 <openstack> The meeting name has been set to 'infra'
19:01:29 <jeblair> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:01:55 <jeblair> #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-03-19.03.html
19:02:18 <anteaya> o/
19:02:25 <jeblair> #topic Salt (UtahDave)
19:02:51 <fungi> #link https://review.openstack.org/#/c/45898/1
19:03:04 <fungi> that's up for review
19:03:19 <jeblair> neat.  i wasn't around much yesterday; what do we need to discuss here?
19:03:26 <UtahDave> jeblair: Should I describe the purpose, or is the commit clear enough?
19:03:32 <fungi> we discussed the runaway processes/memory leak and there was apparently an issue with the message queue and unaccepted minion certs
19:04:14 <clarkb> one big thing that came up was if and how we should determine which nodes need puppet kicked when a config change merges
19:04:29 <clarkb> I think we settled on not worrying about it initially and just kicking everything
19:04:35 <jeblair> clarkb: sounds good to me
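For context, "kicking everything" with salt would amount to one command run from the salt master after a config change merges. A minimal sketch, assuming salt's puppet execution module is available on the minions (the exact target and module call here are assumptions, not settled config):

```bash
# trigger a one-off puppet run on every minion the master knows about
sudo salt '*' puppet.run

# alternative, shelling out directly instead of using the puppet module
sudo salt '*' cmd.run 'puppet agent --test'
```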
19:04:45 <fungi> UtahDave: do you happen to know which salt release fixed the unaccepted minion certs leak? (or did you already tell me and i missed it?) just want to make sure that won't be an issue for us long-term
19:04:45 <jeblair> fungi: are those things resolved or do we need to make some decisions?
19:05:23 <fungi> i think it's mostly just getting confirmation on that situation before we turn it back on everywhere
19:05:28 <UtahDave> fungi: that issue should be fixed in the 0.16 branch.  If I recall correctly, your server was running 0.15.3
19:05:36 <mordred> o/
19:05:37 <fungi> UtahDave: thanks!
19:05:58 <fungi> so i'll check to make sure we don't end up with older salt anywhere
19:06:12 <fungi> and if the issue we were seeing crops back up, then it's something else
19:06:35 <UtahDave> fungi: sure. If it crops back up we can dedicate some engineering resources to help track down the problem
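Checking for stragglers on the affected 0.15.x series is a quick query against the salt master; a rough sketch, assuming the master already has its minions' keys accepted:

```bash
# report the salt version running on every minion
sudo salt '*' test.version

# version and dependency report for the master itself
salt --versions-report
```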
19:06:51 <mordred> so, if I understand the patch correctly, it means we would run a job on the jenkins master which would execute that salt command, yeah?
19:07:04 <fungi> UtahDave: keen. i'll need to look back through how we were installing it to make sure we have channels to new enough versions on our systems
19:07:31 <fungi> mordred: it wouldn't have to be the on the jenkins master--that was just a first stab i think
19:07:42 <UtahDave> mordred: correct.    The jenkins user should run   sudo salt-call event.fire_master 'some data' 'jenkins'
19:07:50 <mordred> fungi: ok
19:07:52 <fungi> mordred: it could be a specific node-label or even a dedicated slave if we really wanted
19:07:57 <mordred> kk
19:08:15 <mordred> UtahDave: what are "some data" and "jenkins"?

19:08:35 <UtahDave> 'jenkins' is the tag that the Reactor is scanning for.
19:08:49 <mordred> UtahDave: like, if the thing I want to achieve on each node is "puppet agent --test" ... would I do sudo salt-call event.fire_master 'agent --test' 'jenkins' ?
19:09:19 <UtahDave> The first item is the "data" field in which you can put any arbitrary pertinent data.
19:09:40 <UtahDave> The current setup is not using the data field
19:09:57 <jeblair> i don't think we want to pass parameters
19:10:06 <UtahDave> mordred: I would avoid allowing the jenkins server to pass in commands to be run
19:10:08 <jeblair> i think we want jenkins to say "run puppet" and have salt know how to do that
19:10:31 <mordred> UtahDave: great. and I agree
19:10:38 <UtahDave> So right now, when the reactor sees the 'jenkins' tag it just executes the /srv/reactor/tests.sls
19:10:38 <fungi> yeah, from a security perspective we just want to make sure that the slave where this job runs can tell the salt master to do one thing and one thing only (for now, and expand to a vetted list later if desired)
19:10:44 <mordred> so we'd want to do salt-call event.fire_master '' 'jenkins'
19:10:58 <UtahDave> yes, exactly.
19:11:01 <mordred> coool
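Putting the two halves of that flow together: the jenkins job fires an event tagged 'jenkins' at the master, and the master's reactor maps that tag to the state file mentioned above. A minimal sketch; the salt-call line is the command discussed here, while the reactor stanza and restart are assumptions about how the master side would be wired up (in practice puppet would manage that file, not a hand edit):

```bash
# slave/jenkins side: what the job would execute (empty data field, 'jenkins' tag)
sudo salt-call event.fire_master '' 'jenkins'

# master side: map the 'jenkins' tag to the reactor state file
cat <<'EOF' | sudo tee -a /etc/salt/master
reactor:
  - 'jenkins':
    - /srv/reactor/tests.sls
EOF
sudo service salt-master restart
```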
19:11:17 <jeblair> this seems like a safe thing to do on the jenkins master.  is that where we should run it?
19:11:28 <mordred> jeblair: seems like a safe place to me
19:11:37 <mordred> and also less work than other things
19:11:57 <UtahDave> I think it's pretty safe on the jenkins server based on the sudo privileges the jenkins server has
19:11:57 <fungi> agreed. it's flexible enough we could put it wherever we want slave-wise, but should be fine on a jenkins server as well
19:12:03 <anteaya> will it run on all jenkins masters, or just one?
19:12:26 <fungi> the way it's written now, all i think
19:12:32 <anteaya> k
19:12:45 <fungi> so whichever one zuul picks at random
19:13:12 <fungi> though the job itself is not written yet
19:13:41 <UtahDave> fungi: correct. There would need to be a jenkins job written that executed the above mentioned salt-call command when appropriate
19:14:00 <jeblair> this is a review comment, but i'd imagine we don't want that sudo command defined everywhere, so we'll probably want to put a sudoers.d fragment just on whatever jenkins master/slave will run this
19:14:13 <fungi> jeblair: yeah, i was thinking the same
19:14:24 <fungi> right now this sets it on every server where we install sudo
19:14:35 <fungi> but easily addressed
19:14:52 <jeblair> (which is actually making me lean slightly toward having a slave for this; i'd like to trust the \d\d masters less in the future)
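A sketch of what that narrower sudoers.d fragment could look like on whichever master or slave ends up running the job. The wrapper-script approach and the file names are assumptions (not what the review under discussion does), used here so the sudoers rule doesn't have to match salt-call arguments:

```bash
# root-owned wrapper so sudoers only has to whitelist one fixed command
cat <<'EOF' | sudo tee /usr/local/bin/salt-notify-jenkins
#!/bin/sh
exec salt-call event.fire_master '' 'jenkins'
EOF
sudo chmod 0755 /usr/local/bin/salt-notify-jenkins

# let the jenkins user run exactly that wrapper as root, nothing else
echo 'jenkins ALL = (root) NOPASSWD: /usr/local/bin/salt-notify-jenkins' \
  | sudo tee /etc/sudoers.d/jenkins-salt-event
sudo chmod 0440 /etc/sudoers.d/jenkins-salt-event
sudo visudo -c   # sanity-check the sudoers syntax
```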
19:15:26 <fungi> anyway, mostly just wanted to sync up on comfort level for turning salt back on and making sure it's the right version to theoretically avoid the previous issue we were seeing
19:15:38 <fungi> sounds like we're cool with that?
19:15:51 <jeblair> fungi: sounds like it; and we can go over the finer points in reviews
19:15:58 <fungi> perfect
19:16:12 <clarkb> ++
19:16:14 <mordred> ++
19:16:21 <jeblair> #topic Marconi migration from stackforge -> openstack (flaper87)
19:16:27 <jeblair> flaper87: hi there!
19:17:00 <jeblair> marconi was accepted for incubation
19:17:11 <jeblair> and i think they would like an org move
19:17:26 <clarkb> yes, that is my understanding
19:17:52 <clarkb> #link https://review.openstack.org/#/c/44963/
19:18:20 <clarkb> that is a WIP change that can be merged after the manual steps of moving a project are completed
19:18:24 <jeblair> also, it would be cool to know what kind of testing they're planning on
19:18:57 <jeblair> will they be doing devstack-gate tests, etc...
19:19:14 * mordred would love to know that
19:19:19 <jeblair> but since flaper87 doesn't seem to be around anymore (though he was here at the beginning of the meeting)....
19:19:30 <jeblair> i guess we'll shelve this for now
19:19:32 <clarkb> jeblair: maybe come back to this when we are done with the other agenda items?
19:19:46 <jeblair> #topic Trove testing (jeblair)
19:19:53 <jeblair> also, mordred, hub_cap ^
19:20:04 <jeblair> real quick:
19:20:05 <hub_cap> heloo helooo
19:20:15 <jeblair> i've put a couple of project testing related topics on the agenda
19:20:28 <jeblair> trove, tripleo, and xen....
19:20:37 * mordred supports this
19:20:37 <jeblair> because there are efforts to get upstream ci testing going for all of those
19:20:49 <mordred> so - hub_cap - how's upstream ci testing going for trove?
19:20:53 <jeblair> and i want to make sure that we're being supportive of those, and they don't slip through the cracks
19:21:01 <mordred> ++
19:21:10 <hub_cap> going as in, how is it going w/ us running it?
19:21:35 <hub_cap> the only problem we have is the plugin hp uses to spin up builds... which would be much nicer if done by y'all
19:22:00 <hub_cap> or do you mean, how's the integration w/ the gate going, mordred? (cuz that's not happened yet)
19:22:18 <mordred> how's the integration with the gate going?
19:22:39 <hub_cap> for me, i see our devstack integration as gating for me to get the integration w/ the gate
19:22:39 <hub_cap> https://review.openstack.org/#/c/38169/
19:22:51 <hub_cap> its been going back and forth and SlickNik is doing it on free time
19:22:52 <mordred> awesome. I'll go star that review
19:22:54 <hub_cap> so i might take it over
19:22:59 <hub_cap> and push it forward
19:23:18 <mordred> I think that getting trove into devstack is a valid first step for sure
19:23:23 <hub_cap> yes yes
19:23:34 * flaper87 is here
19:23:35 <hub_cap> then i can focus on the special steps for our tests in the gate
19:23:46 <mordred> anything you need from us this week (I'm guessing no, since you're waiting on devstack itself right now)
19:24:06 <hub_cap> mordred: correct. ill come to you when i need to start integrating.
19:24:14 <hub_cap> let's say late this wk, early next
19:24:21 <hub_cap> depending on the reviews for devstack
19:24:43 <hub_cap> feel free to pull me in next wk to check my status jeblair
19:25:08 <mordred> hub_cap: I believe we're going to pull you in weekly until such time as you're integrated
19:25:35 <jeblair> hub_cap: cool, thanks
19:25:48 <hub_cap> mordred: jeblair good by me
19:25:52 <hub_cap> <3
19:25:58 <hub_cap> itll keep me workin on it ;)
19:26:25 <jeblair> yay!  ok, back to marconi
19:26:31 <jeblair> #topic Marconi migration from stackforge -> openstack (flaper87)
19:26:58 <jeblair> flaper87: hi, so one of the things we want to discuss (in addition to the org move) is testing for marconi
19:27:06 <flaper87> sorry, got disconnected
19:27:06 <flaper87> did I miss my chance ?
19:27:06 <flaper87> :(
19:27:39 <jeblair> flaper87: will you be doing integration tests with devstack, or something similar?
19:27:48 <jeblair> flaper87: (what are marconi's integration points with the rest of openstack?)
19:29:03 <jeblair> this isn't going very well, is it?
19:29:36 <clarkb> :/
19:29:44 * fungi mails flaper87 more internets
19:29:54 <jeblair> #topic Tripleo testing (jeblair)
19:30:06 <jeblair> ok, so tripleo is a program now
19:30:30 <mordred> yup. so probably stuff should get tested and stuff
19:30:35 <clarkb> ++
19:30:43 <jeblair> and while it isn't part of the integrated release, it would still be great if whatever testing that is done could be done with this neato infra we have
19:30:56 <pleia2> so with the baremetal stuff I have a sketched out plan to use portions of tripleo with lxc
19:31:09 <pleia2> but still slogging through some issues running openstack in lxc
19:31:20 <jeblair> pleia2: how does this relate to "toci"?
19:31:34 <jeblair> (i don't really know what any of these things are as i've never seen them)
19:31:38 <pleia2> jeblair: I'll be using portions of toci
19:32:04 <mordred> toci is basically a scripted version of https://git.openstack.org/cgit/openstack/tripleo-incubator/tree/devtest.md
19:32:10 <pleia2> but toci is designed to run on actual bare metal, whereas we're all virtual (so lxc and qemu)
19:32:12 <mordred> which is the walkthrough on what it takes to install tripleo
19:32:20 <jeblair> so a thing that got me thinking about this is this bug: https://bugs.launchpad.net/openstack-ci/+bug/1217815
19:32:21 <uvirtbot> Launchpad bug 1217815 in openstack-ci "Tripleo ci service account in gerrit" [Undecided,New]
19:32:32 <pleia2> so I'm writing patches for tripleo scripts to support lxc, and eventually will have to patch toci to do the same
19:32:51 <mordred> lifeless, SpamapS: around? we're talking about you in here
19:33:16 <jeblair> which got my attention because most openstack programs don't have their primary testing infrastructure hosted outside of openstack
19:33:24 <pleia2> but full tripleo is more complicated than what I'm doing (since my goal is testing the baremetal nova driver, not tripleo)
19:33:39 <pleia2> I just happen to be using tripleo to do it
19:33:50 <ttx> o/
19:34:03 * flaper87 is here
19:34:05 <lifeless> mordred: hi, yes in tuskar meeting just now
19:34:07 <flaper87> sorry, I got disconnected
19:34:08 <lifeless> mordred: then OSRB
19:34:10 <lifeless> mordred: then physio
19:34:11 <flaper87> did I miss my chance?
19:34:16 <lifeless> mordred: then maybe work ::P
19:34:19 <flaper87> :)
19:34:26 <mordred> lifeless: well, we're talking about infra testing of tripleo
19:34:30 <lifeless> cool
19:34:38 <anteaya> flaper87: I think jeblair will try to give you another shot
19:34:40 <lifeless> it needs to be openstack-infra'd as soon as possible
19:34:48 <lifeless> was talking with derekh about it last night
19:34:52 <mordred> lifeless: we'd like that - but we don't really know what that means
19:35:03 <jeblair> so we want to find out who to talk to about that
19:35:32 <pleia2> I will be at the tripleo sprint next week, so I can have some chats then
19:35:37 <lifeless> ok, so me
19:35:41 <clarkb> I will be there as well
19:35:46 <clarkb> (ish)
19:35:53 <lifeless> derekh is more familiar with the toci plumbing, but he's on leave for 2 weeks.
19:36:00 <pleia2> clarkb: cool, maybe we schedule some time to talk specifically about testing with them?
19:36:08 <clarkb> pleia2: that sounds like a good idea
19:36:14 <mordred> lifeless: aiui, that runs on some metal that is laying around somewhere, right?
19:36:15 <pleia2> lifeless: can we add this to sprint schedule?
19:36:22 <pleia2> mordred: yeah, I think it's at redhat
19:36:24 <lifeless> pleia2: it's an etherpad... :P
19:36:31 <lifeless> mordred: yes, which is a big scaling problem.
19:36:31 <pleia2> lifeless: oh right :)
19:36:37 * pleia2 digs up the etherpad
19:36:49 <lifeless> mordred: I want to remove all the redundancy between it and the gerrit /zuul/jenkins infra
19:37:05 <lifeless> mordred: turn it into a focused test runner script
19:37:57 <jeblair> i think engineering this is far too large of a topic for this meeting
19:38:01 <mordred> totally
19:38:08 <pleia2> clarkb: penciled in for thursday https://etherpad.openstack.org/tripleo-havana-sprint
19:38:11 <jeblair> so the useful things to know are who's leading the effort
19:38:12 <mordred> I think the outstanding question is what to do about the toci service account request
19:38:20 <clarkb> jeblair: agreed. I think if pleia2, mordred and I sit in a session at their sprint we should be able to get somewhere next week
19:38:21 <mordred> and who to talk to in general
19:38:27 <jeblair> and where/how should we track the design?
19:38:48 <jeblair> clarkb, pleia2: thank you
19:39:08 <mordred> I'd say the goal for next week should be an etherpad or something with a design on it
19:39:15 <pleia2> ++
19:39:17 <mordred> that we all feel comfortable we can communicate to jeblair
19:39:29 <mordred> without saying "oh, I guess you needed to have been there"
19:39:33 <clarkb> ++
19:39:40 <lifeless> waiting for the sprint would be a mistake :). derekh's not there, let's get rolling on discussions.
19:39:51 <lifeless> suggest, either a dedicated etherpad, or ml discussion, or both
19:40:00 <mordred> I'd say etherpad
19:40:05 <mordred> ml discussion wrong scope level
19:40:24 <mordred> and also some IRC outside of this meeting
19:41:01 <clarkb> jeblair: so I think lifeless is the person to talk to now, derekh becomes the person when back. And an etherpad will be the place to track the design
19:41:09 * fungi is bowing out to drive to red hat hq. if we discuss the marconi org move scheduling, i'm free to help basically any saturday/sunday for the foreseeable future
19:41:21 <pleia2> ok, here we go: https://etherpad.openstack.org/tripleo-initial-testing
19:41:27 <jeblair> fungi: have fun, thanks
19:41:31 <clarkb> #link https://etherpad.openstack.org/tripleo-initial-testing
19:41:40 <pleia2> thanks clarkb
19:41:43 <clarkb> jeblair: does that cover what we need to do in this meeting?
19:41:56 <jeblair> clarkb: yep.  thanks
19:42:10 <jeblair> flaper87: around?
19:42:13 <flaper87> yup
19:42:18 <jeblair> #topic Marconi migration from stackforge -> openstack (flaper87)
19:42:33 <jeblair> 19:28 < jeblair> flaper87: will you be doing integration tests with devstack, or something similar?
19:42:33 <jeblair> 19:29 < jeblair> flaper87: (what are marconi's integration points with the rest of openstack?)
19:42:52 <flaper87> jeblair: I already have a patch ready for devstack
19:43:03 <flaper87> so we'll be doing it w/ devstack
19:43:09 <jeblair> flaper87: awesome!  do you have a link to that?
19:43:26 <flaper87> jeblair: yup, https://github.com/FlaPer87/devstack/tree/marconi
19:43:33 <flaper87> I haven't submitted it for review
19:43:40 <flaper87> because I was waiting for marconi to be migrated
19:43:46 <flaper87> and for another patch in requirements to land
19:43:59 <flaper87> which already landed
19:44:21 <flaper87> we're already integrated with the rest of the infrastructure
19:44:38 <jeblair> flaper87: ok.  are there any unusual requirements for running it in devstack?
19:45:10 <flaper87> don't think so, the most unusual would be mongodb but ceilo already uses it
19:45:24 <clarkb> more mongo? :/
19:45:36 <flaper87> plus, we can run tests on sqlite
19:45:37 <jeblair> ok.  well, they're about to (they actually use mysql atm), but that should be in place by the time your stuff lands
19:46:06 <flaper87> so, mongodb is not a "requirement" for tests
19:46:24 <jeblair> flaper87: all right, that all sounds pretty easy then.
19:46:25 <clarkb> jeblair: that depends on zul getting newer mongodb into cloud archive right?
19:46:30 <flaper87> but it would be nice to be able to run tests against mongodb, anyway
19:47:10 <jeblair> clarkb: jd__ pushed up some changes that led me to believe that's happened.
19:47:15 <clarkb> jeblair: cool
19:47:50 <jeblair> flaper87: so when would be a good time to perform the repo rename?  we usually try to do it during a quiet period
19:48:06 <jeblair> at this point, we could probably do a friday afternoon US-time, or weekend
19:48:25 <flaper87> jeblair: either work for us
19:48:40 <jeblair> i don't think i could help this friday or this weekend, but am available next weekend (though fungi said this weekend was fine)
19:48:54 <flaper87> ok, this weekend it is
19:48:56 <flaper87> :D
19:48:58 <mordred> I cannot help this weekend either
19:49:34 <jeblair> clarkb: thoughts?
19:49:43 <clarkb> I think I can do this weekend. Why don't we plan for Saturday at like 1700UTC and check with fungi when he is back?
19:49:59 <flaper87> sounds good to me
19:50:37 <jeblair> #action fungi clarkb move marconi saturday sept 14 1700 utc
19:50:44 <flaper87> w000000000000000000000t
19:50:47 <flaper87> thanks guys
19:50:49 <jeblair> clarkb: thanks
19:50:56 <flaper87> clarkb: fungi thanks :)
19:51:04 <jeblair> flaper87: thanks for being on top of things!
19:51:13 <flaper87> my pleasure
19:51:24 <jeblair> #topic Xen testing (jeblair)
19:51:45 <jeblair> I also put this on the agenda, but not with enough notice to make sure that BobBall could be here
19:52:04 <jeblair> because i want to make sure we don't lose track of his amazing effort to test xen
19:52:10 * mordred is interested in his amazing efforts
19:52:21 <jeblair> so we'll try to catch up with him later
19:52:45 <jeblair> #topic puppet-dashboard (pleia2, anteaya)
19:52:54 <jeblair> what's the latest?
19:53:02 <anteaya> I have a sodabrew-dashboard up using ruby 1.9.3
19:53:11 <anteaya> though the package is called ruby1.9.1
19:53:13 <pleia2> on a test vm for now
19:53:23 <anteaya> and a puppet client server
19:53:32 <anteaya> I used these instructions: http://paste.openstack.org/show/46510/
19:53:42 <anteaya> now I am trying to get them talking to each other
19:54:04 <pleia2> once we have everything running, we'll dive into what we need to change in the puppet-dashboard module to support sodabrew instead
19:54:49 <anteaya> separate servers since puppet client uses ruby1.8
19:55:05 <anteaya> I think
19:55:11 <pleia2> yeah
19:55:24 <jeblair> what do you mean 'separate servers'?
19:55:27 <anteaya> yup: $ ruby -v
19:55:28 <anteaya> ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
19:55:29 <pleia2> need to make sure puppet with ruby1.8 can talk to dashboard with 1.9
19:55:38 <pleia2> jeblair: testing infrastructure
19:55:51 <anteaya> they each have their own vm
19:55:54 <pleia2> have a puppet dashboard server and a client that looks like some of our regular clients
19:56:02 <jeblair> ah, gotcha
19:56:14 <clarkb> a little mini infra
19:56:19 <pleia2> very little :)
19:56:45 <jeblair> sounds promising
19:56:50 <anteaya> yay
19:56:55 <jeblair> #topic Open discussion
19:57:08 <jeblair> #action jeblair send email update about asterisk testing
19:57:25 <clarkb> jeblair: please review https://review.openstack.org/#/c/45928/1
19:57:32 <pleia2> I'm completely unreachable on saturday (no marconi for me!) and as mentioned flying to seattle sunday for the tripleo sprint
19:58:08 <mordred> I'll be in New Orleans over the weekend and early next week. I will then be in Seattle late next week, I will then be back in NYC
19:58:29 <mordred> it's possible that next week's meeting might be difficult...
19:58:39 <clarkb> jeblair: are you in New Orleans as well?
19:58:39 <ttx> mordred: I was thinking the same
19:58:40 <jeblair> i'm flying to nola on friday, so won't be around then
19:58:47 <mordred> if pleia2 is going to be in Seattle and jeblair and I will both be in nola
19:58:55 <jeblair> let's cancel it?
19:59:02 <clarkb> I can run a short one to do testing updates
19:59:08 <clarkb> to keep hub_cap et al honest :)
19:59:12 <pleia2> and just so we don't get bored, the following weekend anteaya, RyanLane and I are running this on Sunday the 22nd: http://codechix-openstack1-rss.eventbrite.com/
19:59:37 <zaro> Not sure how I should proceed with the gerrit WIP patch.  tried RFC on the patch.  been there for 2 weeks without any comments.
19:59:39 * pleia2 is going to need a nap after all this
19:59:40 <jeblair> clarkb: ok all yours if you want it.  :)
19:59:49 <jeblair> time's up
19:59:55 <clarkb> jeblair: ok, I will try wrangling the three involved parties
19:59:57 <jeblair> thanks all!
20:00:00 <jeblair> #endmeeting