19:01:28 #startmeeting infra
19:01:29 Meeting started Tue Mar 17 19:01:28 2015 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:31 O/
19:01:33 The meeting name has been set to 'infra'
19:01:38 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:01:39 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-03-10-19.01.html
19:01:44 #topic Actions from last meeting
19:01:45 jeblair nibalizer work through openstackinfra-httpd publishing
19:01:45 jeblair fix openstackinfra account on puppetforge
19:01:51 so i did the second thing
19:01:53 o/
19:01:54 o/
19:01:58 o/
19:02:01 o/
19:02:03 so we should actually have perms to publish to the forge
19:02:06 yay for stuff getting done!
19:02:13 woo!
19:02:14 the first one is still in progress
19:02:31 notably, we're reworking the tag push job to do tag versioning
19:02:45 i saw one of those changes fly by earlier
19:02:53 at any rate, this is unblocked
19:03:02 #topic Schedule next project renames
19:03:15 i'd like to continue to defer this until the governance changes land for bindep and os-client-config
19:03:22 ++
19:03:31 o/
19:03:34 makes sense
19:03:47 okie-dokie. also we want to update the table default encoding in trove for the db when it happens
19:04:11 is that our oldest outstanding still-active bug?
19:04:13 since that needs gerrit to be offline while we restart the trove instance
19:04:14 fungi: same db as the subunit2sql ?
19:04:38 SpamapS: no, the utf-8 default
19:04:39 .зфке
19:04:43 sorry
19:04:45 fungi: we may have downtime before then
19:04:54 but if not, yes :)
19:04:57 jeblair: fair point, we have the distro upgrade coming
19:05:10 #topic Priority Efforts (Swift logs)
19:05:13 #link https://review.openstack.org/#/q/status:open+topic:enable_swift,n,z
19:05:26 empty
19:05:32 we have some images with the index overwrite bug fixed
19:05:44 but about 3 images in hpcloud failed to build this morning
19:05:51 or, well, at least 3 of one type
19:05:54 possibly more overall
19:06:10 so we can probably verify that bug is fixed on the ones that did build
19:06:22 but we do still need more images built before we can consider it completely solved
19:06:35 aside from that... are we now ready to switch over python jobs?
19:06:56 (heh, i'm assuming we still haven't done that -- correct me if i'm wrong)
19:06:57 o/
19:07:13 jeblair: I am ready to switch over python jobs
19:07:23 I thought we did
19:07:50 anteaya: you are correct, we did
19:07:59 https://review.openstack.org/#/c/156521/
19:08:30 so next question, aside from the index problem, are there any other problems we've seen since that merged?
19:09:15 I have not seen any
19:09:15 nothing so far afaik
19:09:29 I haven't heard anyone scream
19:09:41 then i guess we're ready for jhesketh to propose the next step. i'm not sure if we're ready for devstack jobs yet, or do we need to do more non-devstack jobs first
19:09:43 we have 99 problems, but so far this has not been one
19:09:52 so let's ask him when he's online :)
19:09:59 #topic Priority Efforts (Nodepool DIB)
19:10:02 #link https://review.openstack.org/#/q/status:open+topic:dib-nodepool,n,z
19:10:25 anything blocking this?
19:10:39 yeah - Ng is trying to get rackspace to enable glance on his account
19:10:44 so that he can test the configdrive changes there
19:11:07 oh that's great!
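For context on the dib-nodepool exchange above: the glance access Ng is chasing at 19:10:39 is what lets a locally built diskimage-builder image be uploaded and booted, roughly as in the sketch below. The element list and image names are illustrative, not the exact nodepool configuration, and the commands assume the diskimage-builder and glance CLIs of that era.

    # Build an image with diskimage-builder, then upload it via glance so the
    # cloud can boot it -- the manual version of what nodepool's dib support
    # automates, and why glance access on the rackspace account matters.
    sudo pip install diskimage-builder

    # "ubuntu vm" is a minimal element list; nodepool's real images add the
    # elements that handle the configdrive/network setup being tested here.
    disk-image-create -o devstack-trusty ubuntu vm

    glance image-create --name devstack-trusty-dib \
        --disk-format qcow2 --container-format bare \
        --file devstack-trusty.qcow2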
19:11:21 otherwise, I need to stop futzing with puppet apply and dive back into the nodepool-shade patch
19:11:40 i've been uploading more bindep changes to get the features we'll need implemented for distro package management, and doing some more experimental testing on nova for the job run-time database config abstraction
19:12:11 fungi: should we go ahead and add infra-core to bindep?
19:12:20 (anticipating its move to infra)
19:12:35 i'm fine with that
19:12:44 lifeless already said he was cool with it
19:12:51 he did
19:13:03 done
19:13:10 #link https://review.openstack.org/158098
19:13:13 #info infra-core added to bindep
19:13:13 I've been poking at a simplified way to spin up a nova-api with fake stuff on the backend for testing nodepool easily.
19:13:21 is the highest priority one there, since no jobs pass without it
19:13:24 (and keystone and glance too)
19:13:39 hence all my other changes depending on it
19:13:53 fungi: I was just about to review that one
19:13:53 SpamapS: i'm really keen on nodepool testing!
19:14:01 thanks anteaya
19:14:02 fungi: will do so after meetings
19:14:07 sure
19:14:12 jeblair: yeah, I think the best bang-for-our-buck will be simpler functional testing.
19:14:55 not that it's super hard to spin up a devstack, but if we can have a single command that spins up what would be devstack in 1s... that seems like a useful thing for other purposes too. :)
19:15:06 ianw added config file validation to nodepool
19:15:20 i think we should add a job to system-config that runs that on our nodepool config file
19:15:36 jeblair: changes out for that
19:15:44 o/
19:15:47 and once that's in place, i'll be much happier approving nodepool changes :)
19:15:55 also, i'd appreciate some eyes on the f21 d-i-b build -> https://review.openstack.org/#/c/163982/
19:16:03 ianw: cool, point me at them!
19:16:05 clarkb has verified that one also
19:16:18 #link https://review.openstack.org/164901
19:16:28 #link https://review.openstack.org/164904
19:16:56 thanks. i'll check those out today
19:17:04 #topic Priority Efforts (Migration to Zanata)
19:17:09 ^ reviews, i can put them in the dib-nodepool topic if you want
19:17:13 #link https://review.openstack.org/#/q/status:open+topic:zanata,n,z
19:17:48 pleia2: you're still iterating on that?
19:17:58 yep
19:18:00 < StevenK> pleia2: Update for meeting. I am still plugging away, trying to convince maven-release-plugin to build so maven-sortpom-plugin can build, so zanata-parent can.
19:18:11 so that's the client packaging stuff
19:18:26 packaging the java client, obviously (maven)
19:18:37 why are we packaging the java client?
19:19:01 because we need it installed server side to run some of the automated scripts
19:19:16 pleia2: what scripts?
19:19:18 it's the first work item on the spec
19:19:33 Could it just be run from {however upstream packages it} ?
19:19:48 so it is...
19:19:59 all the stuff transifex does now, like submitting changes to gerrit when they're over 75% complete and such
19:20:17 AJaeger is more familiar with the scripts
19:20:20 pleia2: but we run that in zuul jobs
19:20:40 ok, well we need the client somewhere in order to process these things
19:21:09 and to use it in our infrastructure, it should be packaged
19:21:12 so installed server side in this case means actually on the proposal job worker
19:21:17 if we're going to package things to install onto our test nodes, we'll need to figure out repository management.
19:21:24 or maybe some specialized equivalent to the proposal worker
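Looping back to fungi's bindep note at 19:11:40: the idea is that each repository carries a list of the distro packages its jobs need, and bindep reports which ones are missing so the job (or the node setup) can install them. A rough sketch follows; the file syntax and command-line options shown are how bindep later settled, so they may not match the in-flight changes mentioned above exactly.

    # Hypothetical other-requirements.txt (one package per line; platform
    # selectors and profiles in brackets, per bindep's later syntax):
    #
    #   build-essential [platform:dpkg]
    #   gcc [platform:rpm]
    #   libffi-dev [platform:dpkg test]
    #   libffi-devel [platform:rpm test]

    # List the missing packages for the "test" profile (bindep exits non-zero
    # when something is missing), then install them with the distro tooling.
    missing=$(bindep -b test || true)
    if [ -n "$missing" ]; then
        sudo apt-get install -y $missing   # or yum install on rpm platforms
    fi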
19:21:50 I would point out that thus far we have never made packaging something in distro packages a pre-req to installation in our infrastructure
19:22:15 so we could run this on the zanata server, or we could emulate what we're doing now and run it on the proposal slave
19:22:24 pleia2: what os will the zanata server be running?
19:22:25 yeah, that's definitely a good question. how does release management of the client tie into the packaging plan? are we going to be rebuilding packages for it?
19:22:34 jeblair: Ubuntu 14.04
19:23:29 fungi: that's a question for StevenK during AU daytime, he may have a plan, and if not, we should talk about one if we are packaging
19:23:58 okay, so i think we need to have more of a discussion here, since i have no idea what we will do with a locally built package (we've never had one before)
19:24:18 right, if someone is working on getting a package of the client into ubuntu universe, then i guess it's not directly a burden on the infra team
19:24:33 yeah, if that's what that means, then we might be set
19:24:52 (assuming the timeframe works, and we aren't going to require new features rapidly)
19:25:00 I don't think the intent was to push it upstream
19:25:06 maybe a ppa or something
19:25:11 so let's have fungi, mordred, pleia2, jeblair, and StevenK chat later
19:25:12 but if we're building packages for it, then we're distributing them onto the server ourselves, and lose a lot of the actual benefit to it being packages at all (vs doing a 'make && make install' or whatever)
19:25:16 sounds good
19:25:34 as for the puppet module, cinerama has continued work on that and we've gone through some iterations
19:25:46 that
19:25:47 yeah, just want to make sure the benefit outweighs the ongoing effort there
19:25:51 that's this one: https://review.openstack.org/#/c/147947/
19:25:53 yup. next big challenge is the openid stuff
19:26:04 don't forget the apache proxy one
19:26:21 http://zanata.org/help/cli/cli-install/ <-- looks pretty straightforward and probably similar to the way gerrit is installed yes?
19:26:26 on line 135 of this etherpad we have plans for subsequent patches defined, so this first one doesn't continue to overwhelm us https://etherpad.openstack.org/p/zanata-install
19:26:50 https://review.openstack.org/#/c/164011/
19:26:58 SpamapS: I think the "it should be packaged" thing came out of summit, or there was miscommunication, either way it ended up as the first work item on the spec :\
19:27:24 but we'll talk about it later
19:27:34 pleia2: well technically it is packaged.. on maven. ;)
19:28:08 yeah, always has been
19:28:32 that's it from me
19:28:42 yeah. i think we're not all on the same page about it. we probably glossed over something or forgot to write something down. but we'll talk about it when StevenK is up and fix it before we get too far down the road.
19:29:03 i think we'll work it out and manage. :)
19:29:09 right. i mostly want to make sure we're not creating unnecessary makework for people
19:29:14 I just downloaded the dist.tar.gz to my trusty box, and it works w/ no additional steps
19:29:23 pleia2: thanks!
19:29:31 Maven and java packaging are _incredibly_ hard to do up to Debian policy standards.
19:29:33 apparently it's a bit of a beast to package, so not doing it would be good I think
19:29:44 SpamapS: yeah
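On the packaging question above, the alternative SpamapS describes at 19:29:14 (unpacking the upstream dist tarball from the cli-install page linked at 19:26:21) would look roughly like this on the proposal worker or the zanata server. The URL, version number, and tarball layout are placeholders and assumptions, not verified details.

    # Unpack the upstream zanata-cli dist tarball instead of building a deb.
    # VERSION and the URL are placeholders -- take the real ones from the
    # cli-install page linked above.
    VERSION=3.6.0
    wget "https://example.org/zanata-cli-${VERSION}-dist.tar.gz"

    sudo mkdir -p /opt/zanata-cli
    sudo tar -xzf "zanata-cli-${VERSION}-dist.tar.gz" \
        -C /opt/zanata-cli --strip-components=1

    # The client is a jar plus a wrapper script, so a JRE is the only runtime
    # dependency (assuming the tarball ships a bin/zanata-cli wrapper).
    sudo apt-get install -y default-jre-headless
    sudo ln -sf /opt/zanata-cli/bin/zanata-cli /usr/local/bin/zanata-cli
    command -v zanata-cli   # sanity check: should now be on $PATH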
19:29:55 #topic Priority Efforts (Downstream Puppet)
19:29:57 i agree someone likely misunderstood the context in which the word "package" was thrown around
19:29:58 #link https://review.openstack.org/#/q/status:open+topic:downstream-puppet,n,z
19:30:10 hi
19:30:13 hi
19:30:22 this is in flight https://review.openstack.org/#/c/162830/
19:30:30 and this quietly merged: https://review.openstack.org/#/c/162819/
19:30:46 so if there are no objections i'll put up more patches to do what 162819 did to more node defs
19:31:06 nibalizer: no objections here, thanks!
19:31:19 jeblair: you want a patch per node or a big doom patch or do you have a preference?
19:31:19 so nibalizer, as we were talking, it should be good to isolate puppet install into a module, what do you think?
19:31:45 nibalizer: patch per node to avoid conflicts and make it more reviewable would be best i think
19:32:05 this is going to cause reviewer eye strain, so we should be nice :)
19:32:08 jeblair: okay, i agree, just didn't want to flood the review queue unnecessarily
19:32:09 +1 for reviewable
19:32:15 puppet-openstackci should be approved after today's tc meeting, then should be able to start submitting patches to that
19:32:45 asselin: ++
19:32:47 nibalizer: it is flooded
19:32:48 yolanda: i want to land 162830 first but then yeah spinning management of puppet files/master/service out into a module is a great idea
19:33:02 i'm ok to take it as it's part of my efforts downstream as well
19:33:25 yolanda: i'm not sure i'm following your suggestion
19:33:51 jeblair, so basically, isolate the part of the template about "break this into openstack_project::puppet" into an independent module
19:34:02 as it's adding logic, that shouldn't be on system-config
19:34:20 if cores could please review 162830 that would be great, because the longer it lives unmerged the more rebasing everyone will have to do when it lands
19:34:22 oh i see
19:34:32 it's the bit that manages how we install+manage puppet itself
19:34:33 I am +1 on yolanda's plan
19:35:03 yeah, that sounds good to me
19:35:03 jeblair, nibalizer, so if you are ok, i can propose a project for it and take ownership, i was planning to do it anyway downstream, so better if that's an upstream effort
19:35:08 I am a fan of that being more standalone
19:35:32 so, um, would the module name be "puppet-puppet" ? :)
19:35:44 yolanda: sure, also note that there are a few modules floating around github/forge that do exactly this -> manage puppet master/client configs so we could evaluate some of those if we wanted
19:36:04 nibalizer, sure, i can take a look at them
19:36:04 i don't have any links handy
19:36:13 jeblair: probably
19:36:19 if they are ok for us we could reuse
19:36:24 #action yolanda investigate existing or creating a new puppet-puppet module
19:36:26 puppet-install_puppet?
19:36:33 jeblair: i actually want to name my next puppet-puppet module 'diphosphorous'
19:36:43 nibalizer: creative
19:36:55 anyways that's all I got
19:37:21 nibalizer: pp. kk.
19:37:29 thanks!
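For readers following the downstream-puppet thread above: the "puppet apply" workflow mordred mentioned at 19:11:21, which the node-def split in 162819/162830 is meant to make easier for downstream consumers, is essentially a masterless run against system-config. The paths and the helper script name below are assumptions about the repo layout at the time, not a verified recipe.

    # Masterless sketch: apply system-config's manifests directly with
    # "puppet apply" instead of a puppet master. --noop reports what would
    # change without touching the node.
    git clone https://git.openstack.org/openstack-infra/system-config
    cd system-config

    # Pull in the openstack-infra puppet modules (the repo carried an
    # install_modules.sh helper around this time; check before relying on it).
    sudo ./install_modules.sh

    sudo puppet apply --noop \
        --modulepath='modules:/etc/puppet/modules' \
        manifests/site.pp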
19:37:37 #topic Priority Efforts (Askbot migration)
19:37:40 #link https://review.openstack.org/#/q/status:open+topic:askbot-site,n,z
19:38:05 fungi: i think you spun up a server, yeah?
19:38:13 the test server is running with a data import from the 15th
19:38:23 i've vetted and updated the migration instructions
19:39:03 preliminary testing with the new server suggests it's working fine, but mrmartin (who said he's unavailable for today's meeting) wants to put it through its paces for a couple weeks before we schedule any maintenance to swap them
19:39:26 okay, so we're waiting on an okay from him before we proceed
19:39:37 yep
19:39:48 and looks like the instructions just got approved
19:39:54 #link https://review.openstack.org/160693
19:39:57 for the curious
19:40:03 #info replacement server is available for testing; waiting on okay from mrmartin to proceed
19:40:26 #action mrmartin test new ask server and advise on when to proceed with migration
19:40:46 fungi: thanks!
19:40:48 #topic Priority Efforts (Upgrading Gerrit)
19:40:52 #link https://review.openstack.org/#/q/status:open+topic:gerrit-upgrade,n,z
19:41:10 so i think we scheduled the server/OS move for this weekend?
19:41:21 March 21
19:41:21 is that correct?
19:41:27 that is this weekend, yes
19:41:30 saturday in fact
19:41:33 did we say a time?
19:41:58 i just found a bug that needs a fix. #link https://review.openstack.org/#/c/165145/
19:42:07 unless there was a follow-up announcement i missed while vacationing, i think we have not set a time yet
19:42:30 so let's do that now
19:42:51 so should probably give the -dev ml a heads up about the time as usual, but also anyone we already notified about the ip address change
19:42:54 morning would be better for me
19:43:08 here is the post: http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
19:43:16 how about 1500 utc?
19:43:23 thanks anteaya
19:43:27 i'm cool with pdt morning (which will be edt ~lunchtime or early afternoon for me)
19:43:30 np
19:43:38 1500utc wfm
19:43:47 anytime that works for west coast folks
19:44:02 yep, wfm
19:44:03 who's going to be around for this?
19:44:07 o/
19:44:12 o/
19:44:15 o/
19:44:18 o/
19:45:00 so we have 3 roots
19:45:21 clarkb sounded like he would be around too
19:45:28 o/
19:45:30 last time we talked about it
19:45:38 * fungi for sure
19:45:39 cool, i think we have more than enough then
19:45:41 (sorry, I'm free)
19:45:51 and everyone is okay with 1500
19:45:52 so...
19:45:57 yeah, we have way more people than we need on hand for it, so we're set
19:46:08 #agreed maintenance starts at 1500 utc march 21
19:46:28 i'll send an announcement
19:46:34 great
19:46:42 on the topic of dates
19:46:43 again, i think that change i linked earlier should be in before the trusty upgrade.
19:46:46 #action jeblair send follow up announcement with time
19:47:03 the agenda says the gerrit upgrade is April 10 but last meeting we agreed to May 9
19:47:07 AGREED: Gerrit 2.9 upgrade Saturday May 9, 2015 (jeblair, 19:49:54)
19:47:37 zaro: thanks -- please keep an eye on that and make sure it merges before saturday :)
19:47:44 zaro: any other changes that need to happen before then?
19:47:51 no
19:48:04 anteaya: yes, i'm bad about updating the agenda. may 9 is really it; the date should just be removed from the agenda
19:48:17 jeblair: ah okay, just wanted to make sure we were clear
19:48:36 anteaya: thanks
19:48:41 thank you
19:49:13 zaro: anything else? i figure next week we can make sure we have all the changes we need for the gerrit upgrade lined up.
19:49:29 nope, ready to go.
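The table default-encoding change mentioned back at 19:03:47 is the sort of thing that could ride along with this March 21 window, since it wants Gerrit offline while the Trove instance is touched. A minimal sketch, assuming a MySQL-backed Trove database and a hypothetical schema name "reviewdb"; the real names, connection details, and init scripts may differ.

    # Stop gerrit so nothing writes to the database during the change
    # (the init script name is an assumption).
    sudo service gerrit stop

    # Make utf8 the default character set for new tables in the schema;
    # "reviewdb" and $TROVE_DB_HOST are placeholders.
    mysql -h "$TROVE_DB_HOST" -u root -p -e \
        "ALTER DATABASE reviewdb DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;"

    # (Restarting the trove instance itself happens through the cloud API.)
    sudo service gerrit start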
19:49:42 #topic IRC policy
19:49:50 sorry, i forgot to add this to the agenda
19:49:58 but really quickly, the irc policy governance change merged
19:50:15 so now we should make sure all channels are logged
19:50:18 so we need to add a lot more channels to eavesdrop
19:50:22 .pp
19:50:29 yeah, so two things about that:
19:50:39 do we have a canonical list of channels?
19:50:48 1) we should mass-add a bunch in one go
19:51:03 2) i should write up a spec for how to refactor our irc stuff to make this less insane
19:51:10 i will try to do that by next week
19:51:14 yep, avoid disrupting meetings by having lots of little additions
19:51:22 does the resolution say how we identify what constitutes an official project channel?
19:51:24 I think we also need to check foundership on all the channels too
19:51:30 jeblair: ++ move the eavesdrop stuff to project-config ?
19:51:35 it's been a back-burner item for me for a while, but i think this increases the priority
19:51:36 Insane Relay Chat
19:51:51 nibalizer: yeah, that's part of the problem that needs solving
19:51:51 SpamapS: i think you misspelled "inane"
19:52:23 so i wrote the policy to say "openstack related channels"
19:52:41 i interpret that to mean any channel we officially do anything in
19:52:45 I think as long as a channel is mostly about an openstack project, it qualifies
19:52:56 #rdo channels?
19:52:57 like #openstack-$PROJECT
19:53:10 RDO is not an openstack project team, it's a distro team
19:53:16 so i further think that means "anything in accessbot should be logged, and we should do nothing in any channel without it being in accessbot"
19:53:21 it came up
19:53:30 when I say "openstack project" I mean openstack project team in the governance sense
19:53:37 ttx: #rdo would like to participate in our bots
19:53:43 if it's not an openstack-related channel, then we shouldn't be in it
19:53:45 ttx: there is consternation as to whether or not that should be allowed
19:53:48 mordred: freeriders!
19:53:49 me too but someone wants bots in #rdo-puppet
19:53:55 yeah, they want gerritbot announcing changes for stuff in #rdo
19:54:06 i don't know whether "#rdo-puppet" is an openstack-related channel
19:54:10 It is.
19:54:11 do we have bots in stackforge channels ?
19:54:18 we do
19:54:27 ttx: yes; stackforge projects are related to openstack
19:54:27 It's for discussion of the puppet modules around the RDO distribution.
19:54:32 I would consider those external to the openstack policy
19:54:37 well, rdo is a distro of openstack - they're every bit as much a part of our community as the folks working on vmware or hyperv drivers
19:54:45 ttx: the TC created stackforge
19:55:07 It's a community project, with participation from multiple organizations - it's not *just* a redhat project. It's part of the OpenStack community, imo
19:55:13 sure, my point is that with the big tent we have a definition of what is an openstack thing and what is not
19:55:30 ttx: yes, but we're not fully there yet
19:55:45 ttx: it would be premature to cut off stackforge from that considering that we have not asked stackforge projects to move yet
19:55:57 seems simpler to enforce IRC policy to "openstack projects" since we have a definition for that
19:56:05 ttx: I don't see what that gets us though
19:56:08 not saying we should deny bots to friends
19:56:17 oh - wait
19:56:21 I may understand what you're saying
19:56:25 This kind of sounds like something where the TC should be asked for more guidance?
19:56:31 any chance we can come back to this discussion? I'd like folks to at least look at suggested scalable election tools
19:56:32 you're saying only _Force_ applying the policy to openstack projects
19:56:41 not that you think we should exclude people who are not openstack projects
19:56:42 If #rdo-* doesn't want stuff logged, I think we should accept that (not that they are asking that)
19:56:45 mordred: +1
19:56:51 ttx: ++
19:56:53 mordred: exactly
19:57:16 anteaya: ooh, scalable election tools?
19:57:20 I'm trying
19:57:21 * mordred wants to know about those
19:57:28 there's an etherpad!
19:57:32 i don't think our bots belong in those channels then
19:57:36 we haven't changed topics
19:57:41 i think our irc policy should be consistent
19:57:46 jeblair: ++
19:57:52 jeblair: you could say "bots only go to channels that follow IRC policy", I guess.
19:58:09 and i intentionally wrote the resolution to say openstack-related channels to include stackforge projects
19:58:17 because i think they generally want to be part of this community
19:58:39 jeblair: those projects did not submit to TC oversight, so you can't force them to anything. I guess you can force them in exchange for bot service
19:58:44 About election tools, the idea to get candidacy as a change, and confirmation as approval from election officials sounds very cool
19:58:54 tristanC: we haven't changed topics
19:58:55 ttx: correct, we can make it a condition of bot service
19:59:01 jeblair: ++
19:59:02 jeblair: that works for me
19:59:02 ttx: We don't mind having #rdo and/or #rdo-puppet logged. Could be useful.
19:59:03 and I think that's fair
19:59:07 but we're about to change meetings....
19:59:12 So, whichever way that goes is fine.
19:59:15 yeah, if we treat all channels equally regardless of official status, then we can have one canonical list of channels we configure our bots to join
19:59:17 rbowen: I think #rdo* is the wrong example :)
19:59:24 to the very meeting where this topic belongs. ;)
19:59:39 SpamapS: bah - it'll be a year before we get to this topic in the next meeting
19:59:44 tristanC: I'm sorry we didn't get to that topic. i did not expect this to be controversial.
19:59:50 mordred: status quo ftw
20:00:13 thanks all
20:00:14 #endmeeting