19:02:00 #startmeeting infra
19:02:01 Meeting started Tue Sep 15 19:02:00 2015 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:05 The meeting name has been set to 'infra'
19:02:12 jeblair: you need a sacrificial keyboard
19:02:29 clarkb: oh, of course, i have a model-m; i can just throw it in the dishwasher
19:02:34 o/
19:02:40 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:43 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.html
19:03:01 #topic Actions from last meeting
19:03:05 clarkb see if github api supports transfers yet
19:03:23 I did and no
19:03:37 you can do renames with the edit repo api call, but transfers don't appear to be a thing via the api
19:03:40 #info github api does not support transfers :(
19:03:48 there is even an open bug against github's bug tracker for it
19:04:07 clarkb: so, post oct 17 we might be in a better place
19:04:11 i suppose we have one month for people to pester that bug asking for the feature
19:04:25 o/
19:04:44 well, the bug is ancient
19:04:49 so wouldn't get my hopes up
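For reference, the rename-only capability mentioned above maps to GitHub's v3 "edit repository" call; a minimal sketch follows, where the owner, repository name, and token are placeholders rather than anything from the meeting:

```python
# Minimal sketch: rename a repository via GitHub's v3 "edit repository"
# endpoint (PATCH /repos/:owner/:repo), which accepts a new "name".
# There is no equivalent transfer call, per the discussion above.
# OWNER, REPO, NEW_NAME, and TOKEN are hypothetical placeholder values.
import requests

TOKEN = "xxxxxxxx"                       # hypothetical personal access token
OWNER, REPO = "example-org", "old-name"  # hypothetical repository
NEW_NAME = "new-name"

resp = requests.patch(
    "https://api.github.com/repos/{}/{}".format(OWNER, REPO),
    json={"name": NEW_NAME},
    headers={"Authorization": "token " + TOKEN},
)
resp.raise_for_status()
print(resp.json()["full_name"])          # reflects the renamed repository
```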
19:04:57 mordred look into better version of https://review.openstack.org/105057 or improve it
19:05:05 mordred isn't here....
19:05:09 and it doesn't look like that got improved
19:05:29 i missed some of the action on friday; were we able to use ansible at all?
19:05:50 we were not, I think jasondotstar was looking at it and said we shouldn't
19:05:52 just the playbook we already have to clean up slave workspaces
19:06:07 we decided the ~6 hours we had was better spent prepping rather than ansible scrambling
19:06:16 I'm sorry for being late...
19:06:21 o/
19:06:45 o/
19:06:48 fell back on my script-creating-script ouroboros
19:06:49 I am working on it
19:06:49 # https://review.openstack.org/222726
19:06:54 #link https://review.openstack.org/222726
19:07:00 should we say that ship has sailed and just do our usual for oct 17?
19:07:01 * fungi fails at irc today
19:07:13 wasn't ready by last friday
19:07:41 jeblair: ya, and we were talking about testing it against review-dev too
19:07:47 so that may happen prior to the 17th
19:08:17 clarkb: +1
19:08:25 k, so keep working on it because it's still useful even after the stackforge move, but not counting on it at this point for that
19:08:32 (and maybe we'll be surprised)
19:08:39 ack.
19:09:22 jasondotstar: cool, thanks!
19:09:40 #topic Specs approval: Artifact Signing Toolchain (fungi)
19:09:47 #link Artifact Signing Toolchain https://review.openstack.org/213295
19:09:51 #info Artifact Signing Toolchain spec was approved
19:10:03 \o/
19:10:05 and it merged
19:10:06 noted, thanks all
19:10:31 (also, it's going to be someone else's turn to try to remember to do that soon ;)
19:10:53 #topic Schedule Project Renames
19:11:11 anyone want to talk about oct 17?
19:11:34 i don't have anything, but figure we should start talking about next steps soon
19:11:44 fwiw, the wiki list is a small fraction of what we need to address
19:11:56 at least so far
19:12:18 we have 331 stackforge repos at the moment
19:12:36 hopefully projects will step it up
19:13:09 maybe we should start sending some targeted emails, to recent contributors or project originators
19:13:22 is gertty moving to git.inaugust.com?
19:13:24 ;)
19:14:11 but yeah, we could automate some sort of mass contact attempt
19:14:12 probably not :)
19:14:39 i'll take that one
19:14:52 #action jeblair automate some sort of mass contact attempt for stackforge move
19:14:58 cool!
19:15:36 we have a next step; anything else on this?
19:16:09 let's say next week we'll look for someone to volunteer for writing some scripts; so heads up. :)
19:16:26 #topic Priority Efforts (Swift Logs)
19:16:45 #link https://review.openstack.org/#/c/214207/
19:16:55 "Dynamic folder indexes are ready for review"
19:17:23 \o/
19:17:44 which is yay, because i think that was our major blocker for turning it on on more jobs
19:17:52 agreed
19:18:02 and also annoying for some jobs already using that
19:18:11 thanks jhesketh!
19:18:16 yeah, some reviews would be good
19:18:23 (it's been up a couple of weeks)
19:18:37 i see that. bumping up my list
19:18:40 works okay locally.. is a little ugly, but at least functional
19:19:04 i love functional :)
19:19:24 #topic Priority Efforts (Migration to Zanata)
19:19:28 jhesketh: mind if i update the review topic on it to match what's in the logs-in-swift spec?
19:19:38 fungi: not at all, thanks
19:19:43 so translators are using Zanata now, just wrapping up some things WRT jenkins periodic jobs
19:19:45 sorry I forgot to use the topic!
19:20:05 * fungi would like to be able to pretend that's the reason he hadn't reviewed it yet, but alas
19:20:30 pleia2: cool
19:20:30 nothing else meeting worthy :)
19:20:32 pleia2 has a change up to give zanata a bit more memory
19:20:41 yeah https://review.openstack.org/#/c/223721
19:20:44 translate-dev ran out of heap space this morning and got stuck spinning cpu
19:20:47 i restarted wildfly
19:20:55 jeblair: this patch will fix that
19:21:00 neat
19:21:09 also we should probably add them to cacti?
19:21:19 https://review.openstack.org/#/c/223687/
19:21:21 yep ^^
19:21:51 neat... if i say a third thing, will there already be a change in review for it? ;)
19:22:00 quick, make it something important!
19:22:04 haha
19:22:13 I do have a docs change WIP, that's important, I hope to finish it this afternoon
19:22:36 docs are cool
19:22:40 and important
19:22:43 we never had a docs translate page, will soon
19:22:49 ;)
19:23:22 pleia2: thanks!
19:23:26 #topic Ansible roles under -infra (pabelanger)
19:23:37 ohai
19:23:53 #link http://lists.openstack.org/pipermail/openstack-dev/2015-September/073857.html
19:24:12 so, this one is just starting the discussion around what it would look like to move ansible roles upstream into the OpenStack git workflow.
19:24:39 So, for example, I have ansible-role-nodepool that I would like to start using downstream. I know of some other people who would also like to consume it.
19:24:47 so, I'm looking for a new home for the module
19:25:01 from our Gozer team we'd like to give those ansible roles a chance as well
19:25:14 trying to understand if -infra would be a good location for it, or some other place (git.o.o/openstack)?
19:25:26 this is a role to deploy nodepool? eg, replace puppet-nodepool?
19:25:34 I don't want to convince -infra to use ansible to provision nodepool
19:25:45 in the past we'd approved inclusion of distro packaging repos for infra's software as part of infra even though we weren't expecting to consume said packages immediately (if ever)
19:25:59 jeblair: no replacement, just another method to deploy nodepool
19:26:01 fungi: i think we're still hoping to someday :)
19:26:06 heh, true
19:26:24 pabelanger: but exclusive with puppet-nodepool, right? they occupy the same space?
19:26:30 correct
19:26:38 same space
19:26:43 config management fight! YAY!
19:26:50 basically :)
19:26:55 so productive
19:27:17 seems like alt pieces of the puzzle should go in their own repos
19:27:39 Personally, I am not sure -infra is the place for it, since -infra would not consume it.
19:27:48 however, people in infra will use it
19:27:55 downstream
19:28:24 o/
19:28:32 if the scope is just for infra components, i think that infra is the place, under a separate project with an independent group
19:28:48 pabelanger: i realize it's not what you're asking, but let me just mention this for those who haven't heard it before -- we can definitely discuss replacing our use of puppet in infra with something else, though it needs to be a plan we get consensus on, and it needs to (eventually) be a wholesale replacement. we don't want two config management systems in use at once.
19:29:47 in our infra puppet modules we have support implemented for platforms we don't (and don't expect to) run. this seems like a similar situation
19:30:06 jeblair: Right. And I don't want to do 2 different systems. Honestly, I see some of this moving independently of -infra and don't want to force the issue to replace puppet upstream
19:30:12 O/
19:30:30 just looking for the best place to get the openstack git workflow, without setting up something external
19:31:30 so the question is a) openstack-infra, b) openstack-ansible, c) some new project-team, d) unofficial repos in the openstack namespace
19:31:47 fungi: yeah, i'm sympathetic, though i don't want to do something to damage the progress we've made on actually collaborating on puppet; i'm a little worried about having two infra-official-looking ways of deploying stuff.
19:32:03 that's certainly a legitimate concern
19:32:09 jeblair: I am also worried that this will just add to our workload after we have said "well we probably won't use this"
19:32:19 or at least that assertion seems to be out there right now
19:32:25 I also wonder who would review it? would current infra cores want to?
19:32:29 specs, testing, etc
19:32:39 i am unlikely to review changes for things we're not running
19:32:44 greghaynes: no, if infra-core is not using it, I don't see them reviewing
19:32:51 i'm lucky to review changes for things we are :)
19:32:56 well, which we're not running and not working toward running
19:33:03 another reason I don't think -infra is the place
19:33:14 yea, so then i'm not sure what the value is of it being in infra land
19:33:43 that leaves the openstack namespace and possibly the ansible team
19:35:36 how is that going to live with the general openstack ansible projects?
19:36:17 they both use ansible I guess
19:37:22 also curious whether there's a downstream that's seriously interested in swapping out all the infra puppet they're using with ansible, in which case that tips the scale a bit for "might use it upstream if we replace everything" since that might already get us to the halfway mark
19:37:51 well it would be better to justify that a little bit more beyond "because we are allergic to puppet" if so
19:38:06 agreed
19:38:09 since we actually have to switch in production
19:38:31 well, in our case, we started some orchestration with infra-ansible, so using ansible for the components looks like a natural step. We don't have a plan to replace puppet, but at least we are curious to test that approach
19:38:31 my reading is that there is lukewarm reception to hosting them in infra. we're pretty accommodating in hosting some amount of related infrastructure that we aren't directly consuming, but this seems to be a bit too far away from what we're doing with too little potential for collaboration (unless we decide to switch) for some of us to incorporate it into infra
19:38:42 yolanda: that's the thing...
19:38:50 i'm certainly allergic to puppet, doesn't stop me from using it. i also might discover i'm allergic to ansible if i used it as much as i use puppet now
19:39:12 yolanda: as a group, we've said "ansible makes sense for orchestration, but puppet is sufficient, if not better, at configuration management" so that's the direction we're going
19:39:39 fungi: yeah, few of us are puppet cheerleaders, but we get over it because it lets us work together
19:39:52 Ya, I don't think this is at the stage to justify moving from puppet to ansible. But I do think there are some downstream folks who are interested in ansible.
19:40:47 sounds like another "when you have a hammer every problem looks like a nail" situation. once you have the tool implemented for something it's good at, there's a temptation to use it to solve other problems too just because you already have it
19:40:51 pabelanger: we don't have to cover it here, but i'd like to know if, especially after we get rid of the puppetmaster with the ansible-launch-node work, why diverging is better than collaborating
19:41:19 s/if//
19:41:42 jeblair: right, I can go into some details about that, maybe an email, at least explaining why I'm working on ansible stuff ATM
19:42:19 the downstream justification would be interesting, if nothing else
19:42:42 Ya, I can get some notes together and continue on the ML
19:42:50 pabelanger: that would be cool if you could send it to the -infra list; we'll get some more eyes on it
19:42:57 ack
19:43:39 pabelanger: anyway, i'm not getting very strong support for inclusion into infra. but we don't want to lose opportunities to collaborate. so maybe we call this a tentative "doesn't sound like the right place" for right now, but continue exploring on the ml?
19:44:30 sure, that works for me.
19:44:37 thanks
19:44:39 #topic Nodepool consuming requirements (ianw)
19:44:51 hello
19:45:09 this came up yesterday, i think there might be old discussions around this before my time
19:45:21 "is nodepool an openstack project"
19:45:37 what is the context for the question?
19:45:48 I have never asked myself that question that I recall
19:45:50 we could rescope the question to "do all openstack projects participate in global requirements sync?"
19:46:10 anteaya: consuming global requirements
19:46:19 ianw: thank you, now I understand
19:46:21 my concern here is I don't want to have to justify Infra's use of every lib we use
19:46:36 Oh, that discussion ;( I gave up frustrated with openstack-manuals at one point ;(
19:46:44 in general, no, we do not hold infra projects to the same standard. our goals and needs are deliberately separate from openstack.
19:46:44 because we have made some very specific decisions as far as design and deps that don't line up with openstack very well
19:46:45 as of right now, the next dib release will break nodepool, which kind of sucks
19:47:08 ianw: well, i mean, we're a git commit away from fixing it :)
19:47:09 though pymysql is now used by everyone else at least (that was the big one)
19:47:38 jeblair: yeah, but only because we noticed it while pabelanger was playing with it. it would suck if puppet autodeployed it at a bad time though
19:47:44 i'm less worried about having to justify choice of dependencies, more worried about maintaining some logical separation between the projects which you install to run an instance of "openstack" (for some definitions thereof) and what we develop and maintain to support our community infrastructure
19:47:54 i don't think nodepool, or other infra projects, should participate in global reqs sync. we are not intending to be part of the combined deliverable we vaguely call openstack.
19:48:19 ianw: yeah, though, tbh, it's probably not going to kill us if pbr flaps on nodepool.o.o
19:48:28 AIUI the problem that came up here is that dib does req sync and had an incompatible dep with nodepool
19:48:37 which should be rare, I hope
19:49:09 the other thing is people are packaging nodepool, and version skew is an issue
19:49:28 version skew how?
19:49:30 greghaynes: yeah, and any lib can cause that. i think the overlap is small enough that we don't need the full power of the requirements repo to address it when it does
19:49:35 well, plenty of nodepool's dependency chain might release versions with incompatible transitive dependencies. dib is little different in that regard
19:49:52 * jeblair just lets fungi talk :)
19:50:06 heh, after you!
19:50:13 agreed
19:51:15 and we can definitely broaden requirements ranges as needed for packagers
19:51:18 i think one place we've seen some overlap between vaguely openstack services and openstack infra projects is solum, which was considering using zuul as its scheduler
19:51:37 so it feels like there's not much love for syncing with global requirements
19:52:21 i think the situation we engineered requirements synchronization to solve is not a problem we run into often enough in infra projects
19:52:28 maybe we should just ensure we're testing dib from git? that would give us a heads up on issues there, at least?
19:53:04 not a bad idea, maybe a non-voting job?
19:53:13 Does the nodepool dsvm test not do that?
19:53:18 greghaynes: it does not
19:53:36 greghaynes: it pip installs nodepool's source which installs dib from pypi
19:53:45 ah
19:53:49 can you elaborate on the degree of impact? is it going to crash nodepoold when we update dib? or keep it from starting when we restart? or keep us from being able to upgrade/reinstall it?
19:53:51 (i would also like to avoid pulling nodepool into a shared queue with dib, at least at the moment)
19:54:59 fungi: it will be a pbr conflict; i'm not 100% sure but i guess it would be broken on restart?
19:55:05 if the majority of the impact is that our nodepool unit or integration tests begin failing when dib releases, that's more or less working as intended
19:55:05 at least
19:55:14 I think it would just fail to upgrade nodepool
19:55:21 but restarts would run the old version
19:56:04 just want to make sure that we don't implement a solution to this which is even more inconvenient than the problem
19:56:46 so in summary: global requirements = no ... testing dib from git = propose something and we can see
19:56:52 ?
19:57:30 that's my take
19:57:33 ianw: ++
19:57:43 ok, thanks
19:57:46 sgtm
19:57:49 sounds good, I think you just need a second dsvm job that does dib from source
19:57:57 just an extra devstack flag iirc
19:58:13 Ya, that is how I caught the issue, git install and simple dib create
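For reference, the class of breakage discussed above (a new diskimage-builder release whose requirements conflict with nodepool's installed dependencies) can be spotted in a plain pip-installed environment with a short pkg_resources walk. This is only an illustrative sketch, not the dsvm job the meeting proposes:

```python
# Illustrative sketch only: walk the installed distributions and report any
# requirement the current environment can no longer satisfy, e.g. a
# diskimage-builder release that conflicts with nodepool's dependencies.
# Uses only setuptools' pkg_resources; no infra-specific tooling assumed.
import pkg_resources

conflicts = []
for dist in pkg_resources.working_set:
    for req in dist.requires():
        try:
            # Re-resolve each declared requirement against what is installed.
            pkg_resources.require(str(req))
        except (pkg_resources.VersionConflict,
                pkg_resources.DistributionNotFound) as exc:
            conflicts.append("{} needs {}: {}".format(
                dist.project_name, req, exc))

for line in conflicts:
    print(line)
```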
19:58:43 ianw: thanks
19:58:44 #topic Open discussion
19:58:48 2 mins!
19:59:00 so i need reviews for https://review.openstack.org/#/c/206582/
19:59:09 anteaya needs that to start using apache log rotation
19:59:21 it has a +2, but needs more eyes on it
19:59:29 fewer apache logs on gerrit, yay!
20:00:04 oh, who was looking into vcsrepo updating to upstream and getting off our fork? check in with me
20:00:13 yes, getting that in soon would be good
20:00:13 mrmartin?
20:00:19 thanks everyone!
20:00:21 #endmeeting