19:01:17 #startmeeting infra
19:01:19 Meeting started Tue Apr 8 19:01:17 2014 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:23 The meeting name has been set to 'infra'
19:01:39 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:25 to start out, clarkb and i are trying to get through the heartbleed impact and reset a lot of account creds, so keeping this short will be in our best interest
19:02:32 #topic Actions from last meeting
19:02:44 #link http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-01-19.02.html
19:02:53 jeblair delete puppet-dashboard.o.o server and dns
19:02:59 i believe that has happened
19:03:07 jeblair send revised repo rename list to tc
19:03:09 DNS definitely happened
19:03:11 anybody know if that happened?
19:03:20 fungi: yes, I believe he started a thread about it
19:03:24 * clarkb digs it up
19:03:25 good enough
19:03:41 o/
19:03:59 hrm, maybe not. The thread I see is from before the last meeting, to the infra list
19:04:04 o/
19:04:32 better safe than sorry
19:04:36 #action jeblair send revised repo rename list to tc
19:04:45 nibalizer propose change to lower puppetboard 'unresponsive' timeout to 30 mins
19:04:52 done, merged
19:04:54 that got proposed, merged
19:04:55 great
19:05:09 thanks nibalizer!
19:05:11 mordred make an abbreviated projects.yaml with only projects using storyboard as their primary tracker
19:05:28 still a thing?
19:05:55 o/
19:06:30 mordred: if you're still around ^ (or krotscheck if you happen to know)?
19:07:07 It hasn’t happened yet, to my best ability.
19:07:16 Sorry
19:07:19 krotscheck: thanks! good enough
19:07:20 to my best knowledge
19:07:20 #action mordred make an abbreviated projects.yaml with only projects using storyboard as their primary tracker
19:07:30 nibalizer write lp->storyboard migration script
19:07:52 not done
19:07:57 need any help with that? or should i be reviewing something already?
19:08:03 i haven't even gotten my dev storyboard up
19:08:09 okay, no rush
19:08:11 my first angular js app
19:08:17 #action nibalizer write lp->storyboard migration script
19:08:23 we just carry forward
19:08:27 ya, sorry, if this is very pressing it might not be for me, but i am making slow steady progress and having a fun time
19:08:31 that covers the action items from the last meeting
19:08:56 #topic Dealing with puppet changes (jeblair)
19:09:28 i think this is the thing where we want to make sure that any large changes don't get approved without someone (doesn't have to be the approver or even a core) watching puppetboard to make sure it worked
19:10:02 * mordred will do it - sorry - got busy
19:10:06 in case it's not, i'll leave it on the agenda for next week and jeblair can discuss whatever he wanted to discuss
19:10:32 anybody know whether that was indeed the case, or if we need to talk about anything with regard to it?
19:10:44 fungi, I think that I'm following this rule ;)
19:11:07 fungi: i don't know
19:11:08 SergeyLukjanov: i hope that i am, but i am also a very forgetful creature
19:11:23 okay, moving along...
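A note on the outstanding projects.yaml action item above: the abbreviated file would presumably contain only the entries whose primary tracker is storyboard. A minimal sketch, assuming a use-storyboard flag and illustrative project names (both are assumptions, not the actual file contents):

    # sketch: abbreviated projects.yaml, storyboard-tracked projects only
    # (flag name and project list assumed for illustration)
    - project: openstack-infra/storyboard
      use-storyboard: true
    - project: openstack-infra/storyboard-webclient
      use-storyboard: true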
19:11:32 #topic Using storyboard (jeblair)
19:11:47 I have to step out, sorry
19:11:54 i think we decided to start using storyboard for some smaller infra-related projects, but i don't think we've done so yet
19:11:54 * nibalizer will read scrollback
19:12:02 fungi, yup, that's a problem sometimes for me too ;)
19:12:23 did anybody have any updates specific to this they wanted to impart?
19:12:30 fungi, we need to complete "make an abbreviated projects.yaml with only projects using storyboard as their primary tracker" first
19:12:34 IIRC
19:12:47 SergeyLukjanov: okay, good to know. prerequisite
19:12:57 then that's probably all we have for updates on that topic
19:13:03 it sounds like this action item == this topic
19:13:14 agreed. i'll leave it on the agenda for next week
19:13:20 #topic Project renames
19:13:39 There’s a patch to update task statuses as well, but I’m on that and a new patch should be up for review as soon as I get to an internet that opens gerrit ports.
19:13:51 thanks krotscheck!
19:13:53 stackforge/barbican -> openstack/barbican needs to happen at some point... anybody know the timeline?
19:14:08 is that one critical or just waiting for a convenient window?
19:14:20 I haven't heard
19:14:25 guessing convenient window
19:14:28 DinaBelova: any new word on the climate rename? was a new name chosen yet?
19:14:34 fungi, the same was done several weeks after incubation was approved for sahara
19:14:47 fungi, yep, it was chosen
19:14:50 I
19:14:53 fungi, it's still not checked by foundation folks :(
19:14:54 and the winner is...
19:14:56 oh
19:15:07 have contacted the foundation..
19:15:07 okay, well, we'll pretend it's not chosen for now
19:15:19 until you get final approval
19:15:27 DinaBelova, the candidate is Blazar?
19:15:31 the best candidate was blazar, we'll hope it'll be the winner
19:15:33 yep
19:15:43 sounds good--i can't wait
19:15:47 ;)
19:15:47 #topic Fedora gate support (ianw)
19:15:57 hi
19:16:13 fungi, it'll be great to combine the barbican, climate and attic changes
19:16:14 ianw: did you have some specific bits you wanted to talk about? i see clarkb and mordred added a couple sub-topics for it
19:16:21 SergeyLukjanov: agreed
19:16:29 i added those on their behalf after a discussion on friday
19:16:29 fungi: basically, is it ok for us to ignore hpcloud 1.0 for fedora testing?
19:16:47 fungi: we cannot add our own images to hpcloud 1.0 and we need to build fedora images
19:16:55 because no one has up-to-date fedora images for us
19:17:17 so mostly looking for some consensus on what cross-cloud compat is required for us to take on new testing
19:17:18 re attic, I don't see any responses to jeblair's follow-up on the tc ml - http://lists.openstack.org/pipermail/openstack-tc/2014-April/000608.html
19:17:59 i'm also a little fuzzy on what testing we would shift to fedora nodes, to avoid exploding the test matrix and quota burn unnecessarily
19:18:28 fungi: ya, mordred suggested something like the postgres test, but I think it could be any of the more "fringe" tests
19:19:12 good question re test matrix...
19:19:24 o/
19:19:43 okay, so the thought is that we could potentially start testing on fedora without exploding our node count and without effectively losing any existing test coverage
19:19:55 ttx, evening
19:20:29 however, there are also concerns around being able to switch from fedora20 to fedora21 (for example) with minimal effort in the time between when one reaches end of support and the other is available for use
19:21:32 fungi: right, so my comment about that was we would need to have dib working; then the people wanting to test on fedora would be responsible for updating dib
19:21:44 fungi: if they don't, we switch $test to ubuntu lts
19:22:06 and also there was the suggestion that we would consider dropping the requirement that we keep testing stable branches of openstack on the versions of fedora they were tested on at release time (which possibly means not supporting testing stable releases on fedora at some point in the cycle)
19:22:15 gah, lag
19:22:22 correct
19:22:37 I think the fedora folks are ok with that because that version of fedora would not be supported either
19:22:40 so testing on it isn't a big win
19:22:50 ++
19:22:55 mainly because backporting bug fixes and reqs changes to stable so as to work on newer fedora is probably out of scope
19:22:55 which makes this a concern on our side. eg are we worried about losing coverage of our stable branches
19:23:04 fungi: exactly
19:23:39 and the work required to switch, say, postgres+qpid from fedora to trusty when the time comes
19:23:51 fungi: we wouldn't switch
19:23:55 we would just drop the test
19:24:04 which is why we would make it a more fringe test
19:24:13 that feels like stable regressions waiting to happen
19:24:13 at least that is what I would argue for
19:24:25 fungi: I agree, but they already happen in ways we can't keep up with
19:24:33 that is true
19:24:36 fungi: so this doesn't really make the current situation better or worse
19:24:52 what if fedora tests ran on a separate cloud? could that work?
19:25:10 and have them as a separate test, so they can be dropped per the concerns above
19:25:24 ianw: no, anything in the gate needs multiple clouds
19:25:41 ok
19:25:52 mainly so that a cloud provider outage doesn't block our ability to test and merge changes
19:26:44 what about only running a separate fedora job for certain projects ... mainly changes to devstack. there's only a few of them per day
19:27:17 ianw: that should be doable. Similar to how tripleo testing happens
19:27:54 another alternative might be periodic bitrot jobs, though those have a tendency to bitrot, as irony would have it
19:28:08 starting with that job, keeping it separate seems low risk
19:28:52 ianw: so maybe we start there and avoid most of these layer 8 problems. Prove that it works, then make it gating
19:29:09 it seems like it might be worth investigating in that direction with more of a poc
19:29:10 ianw: should help the decision making around gating too, especially if we can point at a reliable test
19:29:18 ++
19:29:26 yeah, exactly
19:29:30 maybe the existing fedora testing pleia2 and others have been doing has us pretty close to what we would need for that already?
19:30:16 this is what i've been doing with the redhatci
19:30:40 see -> https://review.openstack.org/#/c/86107/ for a comment example
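To make the "separate, non-gating fedora job on devstack changes only" proposal above concrete, a zuul layout entry could look something like the following sketch; the job name is an assumption for illustration, not an existing configuration:

    # sketch: non-voting fedora job, check pipeline only, devstack only
    jobs:
      - name: check-tempest-dsvm-fedora   # assumed job name
        voting: false
    projects:
      - name: openstack-dev/devstack
        check:
          - check-tempest-dsvm-fedora

Starting it non-voting in check matches the "prove that it works, then make it gating" plan discussed here.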
19:31:08 excellent. i agree that if there's enough community support and donated resources to keep it running, and it's all free software, there's no good reason to turn down the additional qa we get from it
19:31:09 fedora has been quite stable with devstack changes, the job shouldn't be too much trouble
19:31:44 we just need to make sure our stability needs are met, and that we don't put ourselves in a bad position if that situation changes
19:31:46 but long term i'd like the gate testing the stable fedora, and redhatci testing the "next" version, so that the transition is easy
19:32:59 ianw: i think that's reasonable based on the current information we have
19:33:15 any other opinions/input? next steps?
19:33:34 nope, I think that is a good start
19:33:45 will give us a lot more data to work with
19:34:22 okay, so basically we'll see what transpires and reevaluate based on additional data
19:34:37 and in the meantime, acknowledge that it looks useful and promising
19:35:19 okay, enough of that for today
19:35:28 #topic public service announcements
19:35:54 we spent a good chunk of yesterday dealing with security updates
19:35:58 #link http://heartbleed.com/
19:36:36 anybody who reads bill's pundit's random computer blog probably knows about that, so no need to elaborate now
19:37:10 but plan on it severely impacting infra's ability to get normal work done for a bit
19:37:24 still working on regenerating keys, credentials, passwords and so on in the wake of the security fixes getting applied yesterday, just to have an extra security assurance in case anyone did actually manage to leech sensitive data out of any servers
19:37:53 and yes, we're notably absent from getting other things done, so sorry about that
19:37:53 yup, some of these changes may have user-facing impact as well.
19:38:12 however we will announce those changes more formally if/when they happen
19:38:16 correct. there will likely be further disruption with service restarts for new account auth data and such
19:39:09 #topic Open discussion
19:39:36 just a reminder, i'm gone all next week and the week after, and won't be around in irc or reading e-mail
19:39:44 have fun!
19:40:02 jeblair is busy this week at pycon (mordred: are you there/going too?)
19:40:02 fungi, have fun!!
19:40:06 I will be out monday morning until my eyeball can read text again (I get a second round of eye dilation)
19:40:10 Hi. I'm with Huawei and we're exploring providing a nodepool to OpenStack, but I need a point of contact to get the prereqs figured out.
19:40:28 rockyg: jeblair (who is flying to pycon right now) is the person to contact
19:40:43 rockyg: you can ping him on irc and/or send him email
19:40:55 Great! will do. Any idea on how many machines to start with?
19:41:31 I would like to propose for next week to start a small discussion point with infra in the meeting about the proposed Vinz review system.
19:41:32 rockyg: I'm not sure if there is a lower bound, but the flavor size we look at is ~4-core, 8GB nodes
19:41:43 rockyg: i think we've been targeting a minimum quota sufficient to run 100 instances with 4 or 8 cores (depending on core speed) and 8gb ram each
19:41:49 phschwartz: can you update the agenda with that? I will get you a link
19:42:13 clarkb: will do
19:42:23 I'll email jeblair. That way he can respond on his timeline. The size is great info. Thanks, again.
19:42:24 phschwartz: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:43:47 okay, i think that's it for this week
19:43:59 you all get 15 minutes back ;)
19:44:08 thanks everybody!
19:44:18 #endmeeting
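For anyone following up on the donated-cloud discussion in open discussion above: fungi's quota guidance maps roughly onto a nodepool provider entry like this sketch (provider name, image label, and flavor values are illustrative assumptions, not a real configuration):

    # sketch: nodepool provider sized per the quota discussed above
    providers:
      - name: huawei-cloud        # hypothetical provider name
        max-servers: 100          # ~100 concurrent instances
        images:
          - name: devstack-trusty # illustrative image label
            min-ram: 8192         # 8GB flavors (4-8 vcpus)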