19:01:38 #startmeeting infra
19:01:39 Meeting started Tue Feb 11 19:01:38 2014 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:42 The meeting name has been set to 'infra'
19:02:00 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:02:00 jeblair, btw, I'm always here at this time :)
19:02:06 #link http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-04-19.02.html
19:02:23 SergeyLukjanov: i know, i just didn't want to be pushy. :)
19:02:42 jeblair :)
19:02:46 #topic Actions from last meeting
19:02:55 wow there are a lot
19:04:17 #action mordred continue looking into bug 1242569
19:04:18 Launchpad bug 1242569 in openstack-ci "manage-projects error on new project creation" [Critical,In progress] https://launchpad.net/bugs/1242569
19:04:36 clarkb: zmq plugin upgrade?
19:04:55 the patch for the zmq plugin has been tested and merged, I need to tag a release and apply it to servers
19:05:04 but I think that work is less important now than having reliable logstash
19:05:05 clarkb: what's that in service of?
19:05:13 clarkb: ++logstash
19:05:15 jeblair: adding master data to logstash records
19:05:18 ah
19:05:24 which isn't helpful if logstash has fallen over
19:05:56 fungi: i haven't looked at graphite recently; did you get a chance to look at it and see if we should move it to ssd?
19:06:00 short answer is i don't think there's any need since the cinder volume addition. see the iowait change on the monthly cpu graph
19:06:04 #link http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=440&rra_id=all
19:06:13 and the load average dropped off similarly
19:06:16 #link http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=439&rra_id=all
19:06:23 fungi: cool, yeah, that looks much healthier
19:06:45 the whisper files also aren't going to buy us anything to clean up currently
19:06:48 anyone know anything about the virtualenv 1.10.1 pin?
19:06:51 breakdown looks like 1.9% untouched for 8 months, 21% untouched for 2 months, 28% untouched for a month, 42% untouched for a week, 52% untouched for a day
19:07:15 jeblair: yes, we need tox to merge my PR before we can unpin it
19:07:33 jeblair: the virtualenv, tox, and pip pins are all intermingled
19:07:36 that's assuming we want to unpin tox and virtualenv/pip at the same time, which is probably wise
19:07:49 clarkb: roger. i won't echo that action item them
19:07:50 then
19:08:00 anything else from that list we should check in on?
19:08:30 #action jeblair get ssl cert for storyboard
19:08:52 that seems to have covered it
19:08:54 #topic Trove testing (mordred, hub_cap, SlickNik)
19:09:02 i think this is in the same state as last week
19:09:07 #link https://review.openstack.org/#/c/69501/
19:09:12 pending tempest review
19:09:38 so i'll move on unless there's anything further on this
19:10:10 #topic Tripleo testing (lifeless, pleia2, fungi)
19:10:22 i think it is happening again.
19:10:30 the tripleo-ci cloud is back online thanks to lifeless's ministrations, in nodepool again and zuul is handing it jobs once more
19:10:44 I'm still working on getting fedora building as a slave in nodepool
19:10:49 but we've had some good patches go in
19:10:51 also, with a MUCH larger quota now
19:11:04 pleia2: what's fedora in service of?
19:11:04 in theory, so we can also spin up bare/devstack slaves if we want
19:11:16 jeblair: a nodepool slave
19:11:35 pleia2: right, but to what end?
19:11:48 jeblair: ah, we want to test on both ubuntu and fedora for the entire testing stack
19:12:01 as opposed to ubuntu and rhel/centos i guess
19:12:06 pleia2: this is a tripleo desire?
19:12:13 jeblair: yes
19:13:02 has anybody tried to nodepool-prep a bare centos 6.4 image?
19:13:02 jeblair: but since starting this, I've also bumped into other people interested in running nodepool with fedora, so the work will be valuable beyond just this
19:13:28 pleia2: so for the gate, we took what the project said it wants to support, combined that with the lifecycles of the respective distros, factored to the least common denominator and ended up with precise and centos
19:13:36 from an infra team perspective, having nodepool able to give us resources to run python 2.56 jobs would be really swell
19:13:50 er, 2.6
19:14:16 jeblair: so, from a 'does python work' perspective thats fine
19:14:22 fungi: yes, we definitely need a bare centos6, but it shouldn't be too hard since it's an existing puppet thing
19:14:30 jeblair: but from a deploy perspective, centos is not == fedora
19:14:34 fungi: no, but I did try to do a d-g centos image back when dprince and ArxCruz wrote changes to support centos
19:14:39 er maybe it was fedora, ya fedora
19:15:05 lifeless: definitely; but one of the reasons why we don't test on fedora or saucy is due to the support lifecycle
19:15:29 lifeless: this is probably not a concern for tripleo since its support lifecycle is much shorter than openstack's
19:15:32 as a whole
19:15:50 jeblair: also because we're targeting CD, where we expect folk to be tracking latest everything, more or less.
19:16:02 i think i'm bringing it up to tease out any issues that may be related to this...
19:16:08 i think to some extent our current choices are reflected in the fact that ubuntu uses lts for uca and rh does rdo on rhel rather than fedora
19:16:10 jeblair: but, there are folk wanting stable release branches - like slagle - so they may want stable OS test jobs.
19:16:35 so i think as long as these nodes are only being used on master, there's probably no issue
19:16:48 fungi: that too
19:17:34 pleia2, lifeless: are you just planning on running tripleo-related jobs on these?
19:20:14 I think we do want to expand it once we have at least a 2nd hardware pool (from redhat) and everything is running fast and stable
19:20:20 but lifeless can answer that better
19:20:28 the main consideration is that in general when we test a release on an os, we like to keep testing it on the same os, since the fast releases of both ubuntu and fedora have shorter lives than openstack stable releases
19:21:03 so if we were to start running nova unit tests on saucy, we would not be able to continue doing that for the whole life of icehouse
19:21:31 that's why the primary testing platforms are lts/centos
19:22:15 we could additionally run tests on fedora or latest-ubuntu, if that got us something, but in general it doesn't
19:22:24 specifically for tripleo, i could see how it would though
19:22:34 similarly, there's little point in supporting security fixes for a project which is only being tested on distros which are no longer themselves under security support
19:23:18 so running tripleo master on the CD platforms it wants to deploy on makes sense to me. more so than say nova unit tests or even the devstack-gate.
19:23:22 and the chances that a deployer will upgrade the operating system underneath openstack without upgrading openstack are somewhat low
19:23:27 fungi: ++
19:23:48 which leaves us with stable releases on distro releases which outlive or at least keep pace with them
19:23:59 ++
19:24:00 ok so
19:24:36 the RH cloud is a) more capacity but more importantly it's the multi-vendor multi-site redundancy we were told we needed for gating
19:25:00 so the goal with the RH cloud is to get to having gate jobs for deploys.
19:25:10 got it. so fedora is being used for the undercloud instances then?
19:25:24 fungi: there's several facets
19:25:41 fungi: the RH region will be running on top of Fedora, so that we're running two regions with two different distros
19:25:52 thats a coverage thing to find things tests don't, as much as anything ;)
19:25:58 fungi: separately
19:26:15 neat
19:26:31 fungi: we want to know that we work on fedora, so we want to gate it - because all the devs work on fedora there, and breaking folk is not nice
19:26:49 fungi: but also, we'll have a production region running fedora. We want that to not break.
19:26:58 so circular logic :)
19:27:01 sure
19:27:07 lifeless: what do you want to gate on fedora? unit tests? devstack-gate?
19:27:17 jeblair: tripleo-gate
19:27:49 jeblair: we want to gate tripleo-gate on fedora and ubuntu and possibly more in future; nothing to do with unittests or d-g
19:27:54 makes sense.
19:27:57 jeblair: but tripleo-gate will include tempest
19:28:19 what else
19:28:27 oh, we're happy for excess capacity to be d-g nodes
19:28:35 and they can be whatever you want to upload to glance
19:28:41 centos etc
19:28:44 lifeless: sure. so that all makes sense and should work fine as long as you are focusing on CD and master
19:28:50 yup
19:28:50 yay actual usable glance!
19:29:15 lifeless: if tripleo and tripleo-gate start to want to service stable branches, then you'll run into the same disconnect that left us with lts/centos
19:29:19 I think when slagle's stable branch stuff happens, in the first iteration it won't be tested on updates.
19:29:40 simply because we don't have capacity at the moment
19:29:42 lifeless: this is perhaps not a problem we need to solve now though, but to be aware of for later.
19:30:03 also, yay actual usable glance! :)
19:30:10 yea, i'm not honestly sure if we're going to want the stable branches stuff for fedora
19:30:12 if vendors want to step up with enough capacity for that, in enough regions to satisfy the infra redundancy stuff, then we can do tests for those branches on suitable oses
19:30:15 i suspect centos would be fine
19:30:34 slagle: yah
19:30:57 ok. i think this has been useful. thanks.
19:31:05 anything else on this topic?
19:31:11 however if you don't test master on centos too, then come release time you're stuck with software which probably only runs on fedora
19:31:30 I think that's it
19:31:42 fungi: yep. so that could look like master on centos+fedora, then stable on centos.
19:32:02 jeblair: similar for ubuntu family, and debian family, and so on.
19:32:08 agreed
19:32:16 but
19:32:18 baby steps
19:32:23 * fungi was singling fedora/centos out as merely an example
19:32:43 #topic Requested StackForge project rename (fungi, clarkb, zhiwei)
19:32:46 Specifically - I want to get the current check jobs covering more of the feature set, and I want to get them gating.
19:32:49 then add width.
19:33:10 lifeless: ++depth first.
19:33:21 hrm I thought zhiwei was going to attend the meeting? are you around?
19:33:47 clarkb: he did pop up on sunday i think (which was probably his monday) saying he was back and ready when we are
19:34:14 too bad he missed the oslo renames
19:34:31 I can do this weekend, weekend after that is harder, iirc holiday and stuff
19:34:34 i'd prefer to batch this with openstack downtime.
19:34:40 jeblair: wfm
19:35:04 which projects should be renamed?
19:35:16 looks like I've missed it
19:35:16 SergeyLukjanov: the chef cookbook project for ceilometer
19:35:27 clarkb, oh
19:35:39 however, maybe we can do it with sava*cough*?
19:35:45 the stackforge/chef-metering-cookbook s/metering/telemetry/ i think
19:35:48 something like that
19:36:15 SergeyLukjanov: did you have a timeline for when savanna projects might be ready to rename?
19:36:29 heh, I hope that we'll choose the name in a few weeks
19:36:42 but the soft deadline is i3
19:37:01 and we already have tons of options
19:37:16 so, couldn't guarantee any timeline for savanna renaming
19:37:28 we do have a bug with the most recent gerrit commentlink config changes breaking change-id links, but that'll only need a very quick gerrit restart so batching up with actual renames would definitely be better
19:37:38 dguitarbite: around?
19:37:50 jeblair: yes
19:38:01 i'm going to jump around on the agenda
19:38:05 #topic Request for Moodle App Integration to infra for Training-Manuals (dguitarbite, sarob)
19:38:12 dguitarbite: what's up? :)
19:38:20 hey jeblair :)
19:38:30 annegentle had questions on this topic too, so she may want to listen in
19:39:08 sarob and our team designing OpenStack-Training did some testing with Moodle App for quizzes and other content delivery which is not covered at present
19:39:26 as of now its hosted on aptira servers http://os-trainingquiz.aptira.com/
19:40:40 I would like to know if its possible to move it to Infra
19:40:41 and if yes, what are the requirements/steps/procedures to follow.
19:41:33 dguitarbite: almost certainly yes
19:41:33 dguitarbite: do you/they have at least an outline of the steps necessary to install and configure that site?
19:41:43 yes
19:42:07 we'd probably be able to spot pain points more easily with access to some documentation about what's involved
19:42:18 we have a gdoc for the same, I will share the link
19:42:42 there isn't existing config management for it is there? if so that may make a translation to infra puppet simple
19:42:58 dguitarbite: so the cool thing is that anyone can basically do almost everything needed to spin up and maintain a server under openstack-infra
19:43:07 the short answer is that someone will need to encode the steps necessary for setting up the server into a puppet manifest and associated files so that it's repeatable
19:43:26 and contribute that as a change for review to the openstack-infra/config project
19:43:56 dguitarbite: here's the detailed steps: http://ci.openstack.org/sysadmin.html#adding-a-new-server but reading that whole page is probably a good idea
19:45:01 dguitarbite: but yeah, what fungi outlined is probably the best way to proceed
19:45:17 dguitarbite: let's spend a bit of time reviewing that doc and talking about it to make sure we're on the same page
19:45:41 dguitarbite: then write a puppet manifest for it
19:45:48 dguitarbite: who installed the current system?
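[Editor's aside: a minimal sketch of the kind of puppet manifest the team is describing above. The class name, package list, and document root are illustrative assumptions only, not the actual openstack-infra/config layout; a real change would follow http://ci.openstack.org/sysadmin.html#adding-a-new-server and go through Gerrit review.]

    # Hypothetical sketch: install and serve a Moodle instance.
    # Names below (training_moodle, /var/www/moodle) are assumptions.
    class training_moodle (
      $vhost_name = 'training.example.org',
    ) {
      # Moodle is a PHP application, typically behind Apache with MySQL.
      package { ['apache2', 'php5', 'php5-mysql', 'mysql-server']:
        ensure => present,
      }

      # Directory holding the Moodle code; deployment of the code itself
      # (tarball, git checkout, etc.) would be encoded here as well.
      file { '/var/www/moodle':
        ensure  => directory,
        owner   => 'www-data',
        require => Package['apache2'],
      }

      # Keep the web server running; a fuller manifest would define a
      # proper vhost for $vhost_name instead of relying on the default site.
      service { 'apache2':
        ensure  => running,
        enable  => true,
        require => Package['apache2'],
      }
    }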
19:46:41 sorry net issue
19:47:10 dguitarbite_: np, you can see the bottom of http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-11-19.01.log.txt in case you missed anything
19:47:11 eek, you probably missed most of that
19:47:22 thanks
19:48:02 bots to the rescue
19:48:49 well I installed the current system
19:49:02 also I would love to push some puppet code to infra
19:49:21 I installed *Moodle*
19:49:28 dguitarbite_: perfect; then we'll help you do that!
19:49:59 dstanek: thanks
19:50:43 dguitarbite_: all right, so send us the link to the doc, and we'll look it over to see where the pitfalls might be, then we'll help you get started writing the puppet to manage the server
19:51:26 we only have a few minutes left, so i want to jump to this topic since i think people woke up early for it
19:51:33 #topic Consider a meeting time that allows team members from AU to participate (anteaya)
19:51:50 (...and eu)
19:52:00 heh
19:52:03 anteaya proposed we move the meeting to Tuesdays at 2200 UTC
19:52:24 i'd like to make some general comments on the subject...
19:52:31 annegentle: any questions about moodle?
19:52:33 2am for me, can't say that i like this time for meeting ;)
19:52:53 i do think this meeting is important for cross-project collaboration
19:52:57 2200 UTC is 9am Wednesday morning in my Aus (Sydney, Canberra, Melbourne) time zone
19:53:35 it is often good to get a bunch of people working on different issues together here
19:53:38 2200 utc will also probably end up conflicting with jeblair and i on foundation staff meetings once dst starts again, unless the weeks alternate from ours or we convince the staff to shift that
19:53:59 but attending this meeting is not necessary in order to contribute to infra
19:54:19 most of us are in channel and responsive during most of our working days
19:54:41 and so generally if something is blocking you, that can be handled outside of the meeting
19:54:48 an alternate proposal i've seen is to have two meetings a week at different times, and people who want to attend both (if it's convenient for them) can cross-pollinate topics
19:54:58 fungi: right, so I was also going to suggest we could do two times, one earlier for EU and one later for AU
19:55:49 i'm open to moving the meetings to accommodate people when there is a need, but i'm not sure we're at that point yet
19:56:37 i'm personally cool attending meetings at odd hours for me if there's a group consensus that it will be beneficial
19:56:50 let me put it this way; if this meeting time is inhibiting someone from contributing to openstack-infra, please let me know
19:57:20 btw I'm still able to attend 2200 UTC meeting, especially if it'll be not each week ;)
19:58:16 2 mins left
19:58:26 Seeing as time doesn't stop, someone will always have to face a bad time. SergeyLukjanov 2am is a horrible time for you to have to get up.
19:58:49 and i don't think anyone should necessarily feel compelled to attend the meeting, regardless of what time it's held at. if that gets in the way of contributing we need to figure out ways to improve the ways in which we enable productive contributors
19:59:01 fungi: indeed
19:59:17 agreed
19:59:22 and that might mean shifting meeting times, but it also could mean lots of other solutions
19:59:30 I hope that I can help mattoliverau due to my UTC+0400 tz
19:59:37 okay, let's keep this meeting where it is now; when we find that moving the meeting time (including alternating) would help us achieve a specific goal, let's consider it then.
19:59:57 SergeyLukjanov: that would be great
20:00:11 and speaking of time, we're at it
20:00:15 thanks everyone; zaro, we'll try to catch up on gerrit in channel
20:00:18 #endmeeting