19:04:40 #startmeeting infra
19:04:41 Meeting started Tue Aug 2 19:04:40 2016 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:04:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:04:45 The meeting name has been set to 'infra'
19:04:48 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:04:53 #topic Announcements
19:05:00 #info Reminder: late-cycle joint Infra/QA get together to be held September 19-21 (CW38) at SAP offices in Walldorf, DE
19:05:02 #link https://wiki.openstack.org/wiki/Sprints/QAInfraNewtonSprint
19:05:30 #info Reminder: get me any additions for the announcements section of next week's meeting
19:05:42 #topic Actions from last meeting
19:05:48 #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-07-26-19.03.html
19:05:54 pleia2 Submit a spec to host an instance of limesurvey
19:06:12 o/
19:06:20 #link https://review.openstack.org/349831 spec "Survey Server"
19:06:27 pleia2: thank you
19:06:36 maybe next week we'll see it proposed for council vote
19:06:48 she mentioned earlier she won't be around for the meeting today
19:06:57 but her spec is here in her place!
19:07:03 fungi start a poll for infra mascot
19:07:10 done, have a meeting topic later to discuss the results
19:07:12 i haven't officially closed the poll yet, so that i can maintain some air of mystery around the results until we get to that point in the meeting (even i have no idea what people picked!)
19:07:14 stay tuned...
19:07:28 *drumroll*
19:07:32 #topic Specs approval: PROPOSED: Pholio Service Installation (craige, fungi)
19:07:43 #link https://review.openstack.org/340641 spec "Pholio Service Installation"
19:07:46 craige probably isn't around for the meeting but this is already in progress
19:07:52 so i'm going ahead and seeing if we can get council agreement on it in its latest iteration
19:07:56 allowing us to track the changes a little better
19:08:15 any objection to putting it to a vote by this time thursday?
19:08:22 I do not object
19:08:37 we can still tweak the spec if need be after it is published
19:08:44 no, earlier is better with specs. :)
19:09:00 #info Spec "Pholio Service Installation" is open for council voting un til 19:00 UTC Thursday, August 4
19:09:14 #undo
19:09:15 Removing item from minutes:
19:09:19 #info Spec "Pholio Service Installation" is open for council voting until 19:00 UTC Thursday, August 4
19:09:29 stray space snuck into that first one
19:09:38 ah
19:10:27 basically this is a distillation of the earlier phabricator spec but just for the pholio design wireframe/mockup subsystem so the ui/ux team can move off of the (proprietary) invision service they're using
19:10:46 which is a great direction
19:10:48 they've already tried out a demo craige put together a while back and are very eager to be able to use it once we can get it running
19:10:56 oh wonderful
19:11:35 piet has already started archiving pdfs of their old work from invision so they'll have it to refer to for historical context
19:11:47 #topic Priority Efforts
19:11:55 looks like no new blockers on the agenda this week
19:12:17 #topic Infra mascot/logo (fungi)
19:12:30 thought about saving this for last, but i guess we'll just get it out of the way now
19:12:57 The poll has been ended. It was announced to end 2016-08-01 23:59:59 UTC.
19:13:03 #link http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c2c11d642eafb0e0 and the results are just now coming in...
19:13:10 *drumroll*
19:13:10 is this going to be like brexit? ("i only voted for ... because i didn't think it would win")
19:13:16 ;_;
19:13:28 top choices in descending order:
19:13:34 ant
19:13:38 honeycomb
19:13:41 too soon :(
19:13:43 bee
19:13:48 turtle
19:13:51 SotK: :(
19:14:04 beaver/woodchuck (i think we just planned for this to be beaver after subsequent discussion)
19:14:10 SotK, Zara: sorry :(
19:14:16 well as I said before, ant and bee are ruled by queens
19:14:29 * bkero petitions the leadership for a re-vote followed by searching Google for 'what is the infra?'
19:14:32 which I didn't think the group would like the symbology of
19:14:40 i'll give the full ranked list to heidijoy for review so she can cross off ones that are in conflict with what other teams have already picked
19:14:54 well keystone picked turtle
19:14:58 yep
19:15:15 i like the ant. they work together and build things. the queen is just a figurehead after all
19:15:30 anyway, we'll see what heidijoy comes back with
19:15:47 can I now nominate mordred as our queen?
19:16:04 Ant and Bee feeding strategies are also a good map to cores in different projects.
19:16:31 fair point. my mom does a lot of beekeeping and i hear all sorts of great ideas i should begin applying to our team ;)
19:16:42 Shrews: dress him up for a pic
19:16:48 zaro: ++
19:16:58 lol
19:16:58 * fungi would pay for queen mordred photos
19:17:10 crown and sceptre are a must though
19:17:22 fungi: I want to see your version of the nectar dance
19:17:39 that'll take some planning
19:17:51 there are some dramatic videos out there of swarms of bees being removed from walls and things
19:18:07 have ended up chain-watching when in the grip of insomnia before
19:18:19 okay, unless anyone else has any important mascot discussion items, we can revisit this next week with something solid
19:18:38 we'll have an ascii version of the mascot, right?
19:18:50 i thought that was the only version we'd have? ;)
19:18:55 ++
19:19:01 #topic AFS mirror for puppet module Git repos (pabelanger)
19:19:05 #link https://review.openstack.org/345669 AFS mirror for puppet module Git repos
19:19:09 hello
19:19:18 the floor is yours
19:19:28 and maybe a wall or two
19:19:32 This has come up as part of the effort to remove private infrastructure from tripleo-ci
19:19:44 as a result, they currently mirror some git repos (github.com) to that server
19:20:08 there were some objections about mirroring git repos however
19:20:11 i think a great example for this is our infra puppet module tests
19:20:15 since we don't really like doing that
19:20:39 i'd be more on board with this if we had a story about how it was needed by more than one project
19:21:01 we have jobs that reclone a ton of puppet modules from github for testing (should/can we get them from tarballs or something instead?)
19:21:21 what about puppet-openstack-puppet?
19:21:36 Right, I think we could make this use tarballs for puppet modules, and I think tripleo is on board with that.
19:21:42 which means we don't need a git afs mirror
19:22:02 (what do they need -- while i go look up which order their project's name is in)
19:22:29 PuppetOpenStack!
that's the one :)
19:22:38 hi
19:22:41 basically, this is a list of their current mirror: http://8.43.87.241/repos/ and a lot of the things there are puppet modules
19:23:18 Non-openstack puppet modules at that
19:23:19 not sure I have the full context, but we have tarballs for every puppet module: tarballs.openstack.org/puppet-*
19:23:30 except for non-PuppetOpenStack ones :-)
19:23:38 yeahhh
19:23:38 right, this applies to external puppet modules
19:24:00 EmilienM: in ci, where do your external dependencies come from?
19:24:50 github
19:25:21 so would a local mirror as either a git repo or (daily generated?) tarballs be helpful?
19:25:31 which is why i was using our puppet module tests as an exampkle
19:25:33 example
19:26:07 we do see false negative results from time to time because we failed to clone from github successfully in one of our module test jobs
19:26:41 I actually don't know the requirements for tripleo-ci; I was hoping just to offer up another mirror that was under the control of openstack-infra
19:26:49 I can go back and find out the answers
19:26:53 fungi: ya. i guess we count too, though i'm personally not sure our problems rise to the point of needing this.
19:27:24 agreed, we're an example of the behavior, not necessarily a reason to change anything
19:28:01 A lot of the external module cloning comes from using a tool called r10k to clone repos from a file
19:28:19 if it's just tripleo that needs this, then, for me, this underscores the unusual situation that tripleo-ci is in
19:28:21 puppet-openstack sees the same false negatives issue and would benefit from such a mirror
19:28:27 Here: http://git.openstack.org/cgit/openstack/puppet-openstack-integration/tree/Puppetfile
19:28:55 i can't speak for chef but i would expect they would see the same thing from time to time
19:29:09 (which we should probably address one way or the other -- either make it part of the ci system available to everyone, or run it as a third-party ci)
19:29:36 it would be important to make sure that things like r10k can deal with the mirror if it's present and if not use the stated upstream
19:29:45 i also wonder if a proxy would be more appropriate for this?
19:29:52 and/or figure out what pre-cloning would be good for
19:30:11 our _mirrors_ are _mirrors_ whereas this is starting to look more like 'cache some local data'.
19:30:34 right
19:30:42 a proxy would work too
19:30:58 ssl ?
19:32:11 mordred: i believe true http proxies support ssl
19:32:20 yeah, looking at the proposed change it's not entirely clear to me how this would be leveraged
19:32:43 I am happy to table this for now and move to the next topic. I don't have a need for an answer today, but something I wanted to highlight and get the ball rolling on
19:33:14 how we actually serve up the cache in a way jobs could conveniently consume would be a big (missing) part of this puzzle for me
19:33:54 Aren't things like this something we traditionally preload on nodepool images?
19:33:58 pabelanger: i'm happy to discuss further any of the points i raised if you are interested
19:34:37 jeblair: sure, I'd like that
19:35:01 bkero: we have time. there's only one other item on the agenda for today anyway and it's a quick one
19:35:06 er, pabelanger ^
19:35:11 sorry bkero
19:36:37 i'd be curious how tripleo-ci jobs are currently using their cache of these repos
19:37:02 do they have their own custom tooling that knows to look for cached copies before downloading?
19:37:14 or are they using some existing puppet ecosystem tool that has that as a feature?
19:37:36 right, custom tooling.
19:37:40 like, a primary clone url and a backup one?
19:38:10 from what I see, they simply clone from the mirror 100% of the time
19:38:14 would you also want the smart http git backend cgi set up on the mirror servers, with the model you're suggesting?
19:38:31 or were you thinking they'd just be served as flat filesystem copies?
19:39:20 the way the patch is written is just to use apache to serve up the bare git repos. It's a pretty crude first step
19:40:33 I think for now, the questions highlight that this is not ready to move forward at the moment and maybe requires a spec to be put in place
19:40:47 and I am happy to do that and talk more about it
19:40:52 sounds good--looking forward to reading that
19:40:54 thanks pabelanger!
19:41:02 #topic Attesting to our new artifact signing key (fungi)
19:41:12 #link http://docs.openstack.org/infra/system-config/signing.html#attestation Attesting to our new artifact signing key
19:41:57 in short, we have a documented process for infra-root members to confirm this signing key by using ssh access to check the local disk on the puppetmaster server
19:42:27 it would be excellent if as many of us as possible could follow that process soon and push key signatures up to the keyserver network for it
19:43:07 i've just done a successfuly test as part of the bindep 2.0.1 release today to create detached signatures for its sdist and wheel and serve them from tarballs.o.o alongside
19:43:14 er, successful test
19:43:19 excellent
19:43:23 yaay!
19:43:30 has anyone other than you attested so far fungi?
19:43:45 anteaya: not to my knowledge
19:43:47 and soon expect that all our new release artifacts on tarballs.o.o
19:43:47 anteaya: i think we were waiting for fungi to tell us he was ready
19:43:50 thanks
19:43:51 fungi: cool, just use our most connected key?
19:43:55 jeblair: ah, very good
19:44:04 ianw: yep
19:44:17 er, that all our new release artifacts on tarballs.o.o will start having these
19:44:41 also the same key is being used by dhellmann in tests now for automated signing of git tags
19:44:53 i think we have a couple of outstanding patches that need review
19:45:14 #link https://review.openstack.org/#/q/topic:artifact-signing+status:open
19:45:16 fungi: signing01 creates the git tags?
19:45:31 jeblair: right, from metadata approved by the release team in the releases repo
19:45:38 cool, gtk
19:45:51 jeblair: or at least that's their thought behind it. they want to get out of the business of pushing tags into gerrit themselves
19:46:13 yep
19:46:44 anyway, the attestation instructions are hopefully pretty straightforward
19:47:04 i included an example using gnu privacy guard v2
19:47:29 but really it's just a matter of retrieving, signing and pushing up the keysig via whatever tool you're normally comfortable with
19:47:33 fungi: the instructions are straightforward - however, "Some" confused me for about 5 seconds, fwiw
19:47:59 the important bit is making sure the fingerprint of the key on puppetmaster.o.o matches the fingerprint of the key you retrieved
19:48:36 mordred: the "Some Cycle" is a new HBO production based on a long running fantasy series
19:49:30 mordred: yeah, the "Some" was a placeholder because i was trying to generalize this process to apply to each time we rotate our per-cycle keys
19:49:31 how original
19:50:00 okay, any questions on this? if not i'll move on to open discussion time
19:50:09 fungi: yah - I understand that now. also, I have signed the key
19:50:17 thanks!
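(A minimal sketch of the attestation flow described above: retrieve the key, compare its fingerprint against the copy on the puppetmaster server, sign it, and push the signature back to the keyserver network. It assumes GnuPG v2 is installed and uses a placeholder fingerprint rather than the real artifact signing key; the authoritative steps are in the linked signing.html#attestation documentation.)

#!/usr/bin/env python3
import subprocess

# Placeholder fingerprint only -- take the real value from the documented
# attestation process and verify it against the key on puppetmaster.o.o.
KEY_FINGERPRINT = "0000000000000000000000000000000000000000"


def gpg(*args):
    # Run a gpg2 command, failing loudly on a non-zero exit code.
    subprocess.run(("gpg2",) + args, check=True)


# 1. Retrieve the key from the keyserver network.
gpg("--recv-keys", KEY_FINGERPRINT)

# 2. Show its fingerprint locally, to compare by hand against the
#    fingerprint of the key stored on puppetmaster's local disk.
gpg("--fingerprint", KEY_FINGERPRINT)

# 3. Once the fingerprints match, certify (sign) the key...
gpg("--sign-key", KEY_FINGERPRINT)

# 4. ...and push the new signature back up to the keyservers.
gpg("--send-keys", KEY_FINGERPRINT)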
19:50:22 thank YOU
19:51:10 #topic Open discussion
19:51:49 mordred: depending on how close the ansible bits are to being used I'd rather work on adding gentoo support to them than do puppet and then ansible both
19:51:57 also, hi
19:52:52 hi prometheanfire - I think rcarrillocruz has started looking at that area
19:53:52 cool, I'll try to remember to bug them
19:53:56 i think we're confused about this again
19:54:05 what ansible bits?
19:54:19 jeblair: indeed
19:54:21 :D
19:54:22 the ones to replace puppet in nodepool images
19:54:27 so mordred's stuff is about creating base images
19:54:32 there are no ansible bits to replace puppet in nodepool images
19:54:32 last week we talked about approving these changes for storyboard gerrit integration: https://review.openstack.org/330922 https://review.openstack.org/330925 and https://review.openstack.org/344519
19:54:35 i thought the role was about images in general
19:54:50 prometheanfire: everything that you and i and others talked about the last time is still the case
19:54:58 so yeah, we can use that to create base images and follow the plan as you explained to me about using plain dib elements and nothing puppet
19:55:00 it's still outstanding, was wondering whether we can get eyes on it?
19:55:36 zaro: i think i +2'd them all already, so probably if one other helpful core reviewer has time it'll be a quick review
19:55:38 jeblair: just to be clear then, is the ansible work being done only for the infra side, not the nodepool side?
19:56:27 zaro: hrm, i haven't +2'd those. i'll review this afternoon
19:56:29 fungi: a few of them already have two +2s but no approval
19:56:36 zaro: I think mordred felt that some of that series would require a gerrit restart or reindex
19:56:37 and approve ;)
19:56:58 zaro: oh, right, that was an outstanding question
19:57:10 zaro: do any require a reindex?
19:57:28 the one that adds story tracking lookups implied possibly an addition to the search indexes, so does that require an offline reindexing?
19:57:35 prometheanfire: i honestly don't know much about it. i think it's an experimental effort to see about using ansible to create base images for infra. i'm not certain that we have really discussed it as a group or decided it's a direction we want to go.
19:58:04 jeblair: ah, k, guess it'd be further off then anyway and I should do puppety stuff
19:58:25 prometheanfire: that is, to create base images for our long running servers
19:58:25 fungi: none really needs a reindex. 344519 will only work after a reindex
19:58:36 i've as of yet heard nothing to indicate that it would be sane to replace the puppet bits in our image builds with ansible vs just some shell scripts
19:59:04 zaro: okay, so if we want the feature implemented, we need to plan for a reindex outage (4+ hours or so)
19:59:10 jeblair: ah, different then
19:59:32 * Zara is really excited about gerrit integration! :D
19:59:33 fungi: sure, that's accurate
20:00:02 zaro: thanks. we'll make sure to incorporate that into planning for the restart
20:00:22 \o/
20:00:34 oh, we're out of time--thanks everyone!
20:00:38 #endmeeting
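(Relatedly, for the AFS mirror topic earlier in the meeting, a minimal sketch of the "use the mirror if it's present, otherwise fall back to the stated upstream" cloning behavior that was raised there. It assumes git is installed; the mirror base URL and module list below are hypothetical examples, not real job configuration.)

#!/usr/bin/env python3
import subprocess

# Hypothetical region-local mirror base URL; a real job would discover
# this from the node's mirror configuration.
MIRROR_BASE = "http://mirror.example.org/git"
UPSTREAM_BASE = "https://github.com"

# Hypothetical list of external (non-PuppetOpenStack) modules, mapping
# a mirror repo name to its upstream GitHub path.
MODULES = {
    "puppetlabs-stdlib": "puppetlabs/puppetlabs-stdlib",
    "puppetlabs-apache": "puppetlabs/puppetlabs-apache",
}


def clone(url, dest):
    # Attempt a git clone and report whether it succeeded.
    return subprocess.run(["git", "clone", url, dest]).returncode == 0


for name, upstream in MODULES.items():
    dest = "modules/" + name.split("-", 1)[1]
    # Prefer the local mirror, but fall back to the stated upstream so
    # the same job still works where no mirror exists.
    if not clone(MIRROR_BASE + "/" + name, dest):
        clone(UPSTREAM_BASE + "/" + upstream, dest)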