19:01:38 <jeblair> #startmeeting infra
19:01:39 <openstack> Meeting started Tue Feb 11 19:01:38 2014 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:42 <openstack> The meeting name has been set to 'infra'
19:02:00 <jeblair> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:02:00 <SergeyLukjanov> jeblair, btw, I'm always here at this time :)
19:02:06 <jeblair> #link http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-04-19.02.html
19:02:23 <jeblair> SergeyLukjanov: i know, i just didn't want to be pushy.  :)
19:02:42 <SergeyLukjanov> jeblair :)
19:02:46 <jeblair> #topic  Actions from last meeting
19:02:55 <jeblair> wow there are a lot
19:04:17 <jeblair> #action mordred continue looking into bug 1242569
19:04:18 <uvirtbot> Launchpad bug 1242569 in openstack-ci "manage-projects error on new project creation" [Critical,In progress] https://launchpad.net/bugs/1242569
19:04:36 <jeblair> clarkb: zmq plugin upgrade?
19:04:55 <clarkb> the patch for zmq plugin has been tested and merged, I need to tag a release and apply it to servers
19:05:04 <clarkb> but I think that work is less important now than having reliable logstash
19:05:05 <jeblair> clarkb: what's that in service of?
19:05:13 <jeblair> clarkb: ++logstash
19:05:15 <clarkb> jeblair: adding master data to logstash records
19:05:18 <jeblair> ah
19:05:24 <clarkb> which isn't helpful if logstash is fallen over
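[editor's note: for readers unfamiliar with the pipeline, here is a minimal sketch of consuming Jenkins build events over ZeroMQ and stamping each record with the originating master before it reaches logstash. The endpoint, master name, and field name are illustrative assumptions, not infra's actual configuration.]

    import json
    import zmq

    MASTER = 'jenkins01.openstack.org'  # hypothetical master name

    context = zmq.Context()
    subscriber = context.socket(zmq.SUB)
    subscriber.connect('tcp://jenkins01.openstack.org:8888')  # assumed endpoint
    subscriber.setsockopt(zmq.SUBSCRIBE, b'')  # all event topics

    while True:
        # the jenkins zmq event publisher sends "<topic> <json payload>"
        topic, _, payload = subscriber.recv_string().partition(' ')
        event = json.loads(payload)
        event['master'] = MASTER  # the extra datum discussed above
        print(json.dumps(event))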
19:05:56 <jeblair> fungi: i haven't looked at graphite recently; did you get a chance to look at it and see if we should move it to ssd?
19:06:00 <fungi> short answer is i don't think there's any need since the cinder volume addition. see the iowait change on the monthly cpu graph
19:06:04 <fungi> #link http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=440&rra_id=all
19:06:13 <fungi> and the load average dropped off similarly
19:06:16 <fungi> #link http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=439&rra_id=all
19:06:23 <jeblair> fungi: cool, yeah, that looks much healthier
19:06:45 <fungi> cleaning up the whisper files also isn't going to buy us anything currently
19:06:48 <jeblair> anyone know anything about virtualenv 1.10.1 pin?
19:06:51 <fungi> breakdown looks like 1.9% untouched for 8 months, 21% untouched for 2 months, 28% untouched for a month, 42% untouched for a week, 52% untouched for a day
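[editor's note: the breakdown above can be reproduced with something like the following sketch, which buckets whisper files by modification time. The storage path is an assumption; adjust for your graphite install.]

    import os
    import time

    WHISPER_DIR = '/var/lib/graphite/storage/whisper'  # assumed location
    BUCKETS = [('a day', 1), ('a week', 7), ('a month', 30),
               ('2 months', 60), ('8 months', 240)]

    now = time.time()
    ages = []
    for root, _dirs, files in os.walk(WHISPER_DIR):
        for name in files:
            if name.endswith('.wsp'):
                ages.append(now - os.stat(os.path.join(root, name)).st_mtime)

    if not ages:
        raise SystemExit('no whisper files found under %s' % WHISPER_DIR)

    for label, days in BUCKETS:
        untouched = sum(1 for age in ages if age > days * 86400)
        print('%5.1f%% untouched for %s' % (100.0 * untouched / len(ages), label))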
19:07:15 <clarkb> jeblair: yes, we need tox to merge my PR before we can unpin it
19:07:33 <clarkb> jeblair: the virtualenv, tox, and pip pins are all intermingled
19:07:36 <fungi> that's assuming we want to unpin tox and virtualenv/pip at the same time, which is probably wise
19:07:49 <jeblair> clarkb: roger.  i won't echo that action item then
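[editor's note: a quick sanity check along these lines can confirm the pinned trio stays in lockstep on a slave. Only the virtualenv 1.10.1 pin appears in this log; the tox and pip versions below are assumptions for illustration.]

    import pkg_resources

    PINS = {'virtualenv': '1.10.1',  # pin discussed above
            'tox': '1.6.1',          # assumed for illustration
            'pip': '1.4.1'}          # assumed for illustration

    for name, wanted in sorted(PINS.items()):
        have = pkg_resources.get_distribution(name).version
        status = 'ok' if have == wanted else 'MISMATCH'
        print('%-10s want %-8s have %-8s %s' % (name, wanted, have, status))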
19:08:00 <jeblair> anything else from that list we should check in on?
19:08:30 <jeblair> #action jeblair get ssl cert for storyboard
19:08:52 <fungi> that seems to have covered it
19:08:54 <jeblair> #topic Trove testing (mordred, hub_cap, SlickNik)
19:09:02 <jeblair> i think this is in the same state as last week
19:09:07 <jeblair> #link https://review.openstack.org/#/c/69501/
19:09:12 <jeblair> pending tempest review
19:09:38 <jeblair> so i'll move on unless there's anything further on this
19:10:10 <jeblair> #topic  Tripleo testing (lifeless, pleia2, fungi)
19:10:22 <jeblair> i think it is happening again.
19:10:30 <fungi> the tripleo-ci cloud is back online thanks to lifeless's ministrations, in nodepool again and zuul is handing it jobs once more
19:10:44 <pleia2> I'm still working on getting fedora building as a slave in nodepool
19:10:49 <pleia2> but we've had some good patches go in
19:10:51 <fungi> also, with a MUCH larger quota now
19:11:04 <jeblair> pleia2: what's fedora in service of?
19:11:04 <fungi> in theory, so we can also spin up bare/devstack slaves if we want
19:11:16 <pleia2> jeblair: a nodepool slave
19:11:35 <jeblair> pleia2: right, but to what end?
19:11:48 <pleia2> jeblair: ah, we want to test on both ubuntu and fedora for the entire testing stack
19:12:01 <fungi> as opposed to ubuntu and rhel/centos i guess
19:12:06 <jeblair> pleia2: this is a tripleo desire?
19:12:13 <pleia2> jeblair: yes
19:13:02 <fungi> has anybody tried to nodepool-prep a bare centos 6.4 image?
19:13:02 <pleia2> jeblair: but since starting this, I've also bumped into other people interested in running nodepool with fedora, so the work will be valuable beyond just this
19:13:28 <jeblair> pleia2: so for the gate, we took what the project said it wants to support, combined that with the lifecycles of the respective distros, factored to the least common denominator and ended up with precise and centos
19:13:36 <fungi> from an infra team perspective, having nodepool able to give us resources to run python 2.6 jobs would be really swell
19:14:16 <lifeless> jeblair: so, from a 'does python work' perspective thats fine
19:14:22 <jeblair> fungi: yes, we definitely need a bare centos6, but it shouldn't be too hard since it's an existing puppet thing
19:14:30 <lifeless> jeblair: but from a deploy perspective, centos is not == fedora
19:14:34 <clarkb> fungi: no, but I did try to do a d-g fedora image back when dprince and ArxCruz wrote changes to support fedora
19:15:05 <jeblair> lifeless: definitely; but one of the reasons why we don't test on fedora or saucy is due to the support lifecycle
19:15:29 <jeblair> lifeless: this is probably not a concern for tripleo since its support lifecycle is much smaller than openstack's
19:15:32 <jeblair> as a whole
19:15:50 <lifeless> jeblair: also because we're targeting CD, where we expect folk to be tracking latest everything, more or less.
19:16:02 <jeblair> i think i'm bringing it up to tease out any issues that may be related to this...
19:16:08 <fungi> i think to some extent our current choices are reflected in the fact that ubuntu uses lts for uca and rh does rdo on rhel rather than fedora
19:16:10 <lifeless> jeblair: but, there are folk wanting stable release branches - like slagle - so they may want stable OS test jobs.
19:16:35 <jeblair> so i think as long as these nodes are only being used on master, there's probably no issue
19:16:48 <jeblair> fungi: that too
19:17:34 <jeblair> pleia2, lifeless: are you just planning on running tripleo-related jobs on these?
19:20:14 <pleia2> I think we do want to expand it once we have at least a 2nd hardware pool (from redhat) and everything is running fast and stable
19:20:20 <pleia2> but lifeless can answer that better
19:20:28 <jeblair> the main consideration is that in general when we test a release on an os, we like to keep testing it on the same os; since the fast releases of both ubuntu and fedora have shorter lives than openstack stable releases
19:21:03 <jeblair> so if we were to start running nova unit tests on saucy, we would not be able to continue doing that for the whole life of icehouse
19:21:31 <jeblair> that's why the primary testing platforms are lts/centos
19:22:15 <jeblair> we could additionally run tests on fedora or latest-ubuntu, if that got us something, but in general it doesn't
19:22:24 <jeblair> specifically for tripleo, i could see how it would though
19:22:34 <fungi> similarly, there's little point in supporting security fixes for a project which is only being tested on distros which are no longer themselves under security support
19:23:18 <jeblair> so running tripleo master on the CD platforms it wants to deploy on makes sense to me.  more so than say nova unit tests or even the devstack-gate.
19:23:22 <fungi> and the chances that a deployer will upgrade the operating system underneath openstack without upgrading openstack are somewhat low
19:23:27 <jeblair> fungi: ++
19:23:48 <fungi> which leaves us with stable releases on distro releases which outlive or at least keep pace with them
19:23:59 <clarkb> ++
19:24:00 <lifeless> ok so
19:24:36 <lifeless> the RH cloud is a) more capacity, but more importantly it's the multi-vendor multi-site redundancy we were told we needed for gating
19:25:00 <lifeless> so the goal with the RH cloud is to get to having gate jobs for deploys.
19:25:10 <fungi> got it. so fedora is being used for the undercloud instances then?
19:25:24 <lifeless> fungi: there's several facets
19:25:41 <lifeless> fungi: the RH region will be running on top of Fedora, so that we're running two regions with two different distros
19:25:52 <lifeless> thats a coverage thing to find things tests don't, as much as anything ;)
19:25:58 <lifeless> fungi: separately
19:26:15 <fungi> neat
19:26:31 <lifeless> fungi: we want to know that we work on fedora, so we want to gate it - because all the devs work on fedora there, and breaking folk is not nice
19:26:49 <lifeless> fungi: but also, we'll have a production region running fedora. We want that to not break.
19:26:58 <lifeless> so circular logic :)
19:27:01 <fungi> sure
19:27:07 <jeblair> lifeless: what do you want to gate on fedora?  unit tests?  devstack-gate?
19:27:17 <lifeless> jeblair: tripleo-gate
19:27:49 <lifeless> jeblair: we want to gate tripleo-gate on fedora and ubuntu and possibly more in future; nothing to do with unittests or d-g
19:27:54 <jeblair> makes sense.
19:27:57 <lifeless> jeblair: but tripleo-gate will include tempest
19:28:19 <lifeless> what else
19:28:27 <lifeless> oh, we're happy for excess capacity to be d-g nodes
19:28:35 <lifeless> and they can be whatever you want to upload to glance
19:28:41 <lifeless> centos etc
19:28:44 <jeblair> lifeless: sure.  so that all makes sense and should work fine as long as you are focusing on CD and master
19:28:50 <lifeless> yup
19:28:50 <fungi> yay actual usable glance!
19:29:15 <jeblair> lifeless: if tripleo and tripleo-gate start to want to service stable branches, then you'll run into the same disconnect that left us with lts/centos
19:29:19 <lifeless> I think when slagle's stable branch stuff happens, in the first iteration it won't be tested on updates.
19:29:40 <lifeless> simply because we don't have capacity at the moment
19:29:42 <jeblair> lifeless: this is perhaps not a problem we need to solve now though, but to be aware of for later.
19:30:03 <jeblair> also, yay actual usable glance! :)
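[editor's note: uploading "whatever you want" to glance might look like the sketch below, using the python-glanceclient v1 API. The endpoint, token, image name, and file are all illustrative assumptions.]

    from glanceclient import Client

    glance = Client('1', 'http://glance.tripleo-cloud.example:9292',  # assumed endpoint
                    token='AUTH_TOKEN')  # assumed pre-acquired keystone token

    with open('centos6.qcow2', 'rb') as image_data:  # hypothetical image file
        image = glance.images.create(name='bare-centos6',
                                     disk_format='qcow2',
                                     container_format='bare',
                                     data=image_data)
    print(image.id)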
19:30:10 <slagle> yea, i'm not honestly sure if we're going to want the stable branches stuff for fedora
19:30:12 <lifeless> if vendors want to step up with enough capacity for that, in enough regions to satisfy the infra redundancy stuff, then we can do tests for those branches on suitable oses
19:30:15 <slagle> i suspect centos would be fine
19:30:34 <lifeless> slagle: yah
19:30:57 <jeblair> ok.  i think this has been useful.  thanks.
19:31:05 <jeblair> anything else on this topic?
19:31:11 <fungi> however if you don't test master on centos too, then come release time you're stuck with software which probably only runs on fedora
19:31:30 <pleia2> I think that's it
19:31:42 <jeblair> fungi: yep.  so that could look like master on centos+fedora, then stable on centos.
19:32:02 <lifeless> jeblair: similar for ubuntu family, and debian family, and so on.
19:32:08 <fungi> agreed
19:32:16 <lifeless> but
19:32:18 <lifeless> baby steps
19:32:23 * fungi was singling fedora/centos out as merely an example
19:32:43 <jeblair> #topic  Requested StackForge project rename (fungi, clarkb, zhiwei)
19:32:46 <lifeless> Specifically - I want to get the current check jobs covering more of the feature set, and I want to get them gating.
19:32:49 <lifeless> then add width.
19:33:10 <jeblair> lifeless: ++depth first.
19:33:21 <clarkb> hrm I thought zhiwei was going to attend the meeting? are you around?
19:33:47 <fungi> clarkb: he did pop up on sunday i think (which was probably his monday) saying he was back and ready when we are
19:34:14 <jeblair> too bad he missed the oslo renames
19:34:31 <clarkb> I can do this weekend, weekend after that is harder, iirc holiday and stuff
19:34:34 <jeblair> i'd prefer to batch this with openstack downtime.
19:34:40 <clarkb> jeblair: wfm
19:35:04 <SergeyLukjanov> which projects should be renamed?
19:35:16 <SergeyLukjanov> looks like I've missed it
19:35:16 <clarkb> SergeyLukjanov: the chef cookbook project for ceilometer
19:35:27 <SergeyLukjanov> clarkb, oh
19:35:39 <clarkb> however, maybe we can do it with sava*cough*?
19:35:45 <fungi> the stackforge/chef-metering-cookbook s/metering/telemetry/ i think
19:35:48 <fungi> something like that
19:36:15 <fungi> SergeyLukjanov: did you have a timeline for when savanna projects might be ready to rename?
19:36:29 <SergeyLukjanov> heh, I hope that we'll choose the name in a few weeks
19:36:42 <SergeyLukjanov> but the soft deadline is i3
19:37:01 <SergeyLukjanov> and we already have tons of options
19:37:16 <SergeyLukjanov> so, couldn't guarantee any timeline for savanna renaming
19:37:28 <fungi> we do have a bug with the most recent gerrit commentlink config changes breaking change-id links, but that'll only need a very quick gerrit restart so batching up with actual renames would definitely be better
19:37:38 <jeblair> dguitarbite: around?
19:37:50 <dguitarbite> jeblair: yes
19:38:01 <jeblair> i'm going to jump around on the agenda
19:38:05 <jeblair> #topic Request for Moodle App Integration to infra for Training-Manuals (dguitarbite, sarob)
19:38:12 <jeblair> dguitarbite: what's up? :)
19:38:20 <dguitarbite> hey jeblair :)
19:38:30 <fungi> annegentle had questions on this topic too, so she may want to listen in
19:39:08 <dguitarbite> sarob and our team designing OpenStack-Training did some testing with Moodle App for quizzes and other content delivery which is not covered at present
19:39:26 <dguitarbite> as of now its hosted on aptira servers http://os-trainingquiz.aptira.com/
19:40:40 <dguitarbite> I would like to know if its possible to move it on Infra
19:40:41 <dguitarbite> and if yes, what are the requirements/steps/procedures to follow.
19:41:33 <jeblair> dguitarbite: almost certainly yes
19:41:33 <fungi> dguitarbite: do you/they have at least an outline of the steps necessary to install and configure that site?
19:41:43 <dguitarbite> yes
19:42:07 <fungi> we'd probably be able to spot pain points more easily with access to some documentation about what's involved
19:42:18 <dguitarbite> we have gdoc for the same, I will share the link
19:42:42 <clarkb> there isn't existing config management for it, is there? if so that may make a translation to infra puppet simple
19:42:58 <jeblair> dguitarbite: so the cool thing is that anyone can basically do almost everything needed to spin up and maintain a server under openstack-infra
19:43:07 <fungi> the short answer is that someone will need to encode the steps necessary for setting up the server into a puppet manifest and associated files so that it's repeatable
19:43:26 <fungi> and contribute that as a change for review to the openstack-infra/config project
19:43:56 <jeblair> dguitarbite: here's the detailed steps: http://ci.openstack.org/sysadmin.html#adding-a-new-server  but reading that whole page is probably a good idea
19:45:01 <jeblair> dguitarbite: but yeah, what fungi outlined is probably the best way to proceed
19:45:17 <jeblair> dguitarbite: let's spend a bit of time reviewing that doc and talking about it to make sure we're on the same page
19:45:41 <jeblair> dguitarbite: then write a puppet manifest for it
19:45:48 <jeblair> dguitarbite: who installed the current system?
19:46:41 <dguitarbite_> sorry net issue
19:47:10 <jeblair> dguitarbite_: np, you can see the bottom of http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-11-19.01.log.txt in case you missed anything
19:47:11 <fungi> eek, you probably missed most of that
19:47:22 <dguitarbite_> thanks
19:48:02 <clarkb> bots to the rescue
19:48:49 <dguitarbite_> well I installed the current system
19:49:02 <dguitarbite_> also I would love to push some puppet code to infra
19:49:21 <dguitarbite_> I installed *Moodle*
19:49:28 <jeblair> dguitarbite_: perfect; then we'll help you do that!
19:49:59 <dguitarbite_> dstanek: thanks
19:50:43 <jeblair> dguitarbite_: all right, so send us the link to the doc, and we'll look it over to see where the pitfalls might be, then we'll help you get started writing the puppet to manage the server
19:51:26 <jeblair> we only have a few minutes left, so i want to jump to this topic since i think people woke up early for it
19:51:33 <jeblair> #topic  Consider a meeting time that allows team members from AU to participate (anteaya)
19:51:50 <fungi> (...and eu)
19:52:00 <SergeyLukjanov> heh
19:52:03 <jeblair> anteaya proposed we move the meeting to Tuesdays at 2200 UTC
19:52:24 <jeblair> i'd like to make some general comments on the subject...
19:52:31 <dguitarbite_> annegentle: any questions about moodle?
19:52:33 <SergeyLukjanov> 2am for me, can't say that i like this time for meeting ;)
19:52:53 <jeblair> i do think this meeting is important for cross-project collaboration
19:52:57 <mattoliverau> 2200 UTC is 9am Wednesday morning in my Aus (Sydney, Canberra, Melbourne) time zone
19:53:35 <jeblair> it is often good to get a bunch of people working on different issues together here
19:53:38 <fungi> 2200 utc will also probably end up conflicting with jeblair and i on foundation staff meetings once dst starts again, unless the weeks alternate from ours or we convince the staff to shift that
19:53:59 <jeblair> but attending this meeting is not necessary in order to contribute to infra
19:54:19 <jeblair> most of us are in channel and responsive during most of our working days
19:54:41 <jeblair> and so generally if something is blocking you, that can be handled outside of the meeting
19:54:48 <fungi> an alternate proposal i've seen is to have two meetings a week at different times, and people who want to attend both (if it's convenient for them) can cross-pollinate topics
19:54:58 <clarkb> fungi: right, so I was also going to suggest we could do two times, one earlier for EU and one later for AU
19:55:49 <jeblair> i'm open to moving the meetings to accomodate people when there is a need, but i'm not sure we're at that point yet
19:56:37 <fungi> i'm personally cool attending meetings at odd hours for me if there's a group consensus that it will be beneficial
19:56:50 <jeblair> let me put it this way; if this meeting time is inhibiting someone from contributing to openstack-infra, please let me know
19:57:20 <SergeyLukjanov> btw I'm still able to attend 2200 UTC meeting, especially if it'll be not each week ;)
19:58:16 <SergeyLukjanov> 2 mins left
19:58:26 <mattoliverau> Seeing as time doesn't stop, someone will always have to face a bad time. SergeyLukjanov: 2am is a horrible time for you to have to get up.
19:58:49 <fungi> and i don't think anyone should necessarily feel compelled to attend the meeting, regardless of what time it's held at. if that gets in the way of contributing we need to figure out ways to improve the ways in which we enable productive contributors
19:59:01 <jeblair> fungi: indeed
19:59:17 <SergeyLukjanov> agreed
19:59:22 <fungi> and that might mean shifting meeting times, but it also could mean lots of other solutions
19:59:30 <SergeyLukjanov> I hope that I can help mattoliverau due to my UTC+0400 tz
19:59:37 <jeblair> okay, let's keep this meeting where it is now; when we find that moving the meeting time (including alternating) would help us achieve a specific goal, let's consider it then.
19:59:57 <jeblair> SergeyLukjanov: that would be great
20:00:11 <fungi> and speaking of time, we're at it
20:00:15 <jeblair> thanks everyone; zaro, we'll try to catch up on gerrit in channel
20:00:18 <jeblair> #endmeeting