19:02:39 #startmeeting infra
19:02:40 Meeting started Tue Aug 11 19:02:39 2015 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:43 The meeting name has been set to 'infra'
19:02:46 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:46 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-08-04-19.03.html
19:02:53 #topic Actions from last meeting
19:02:59 so, i forgot this topic last meeting
19:03:02 and we had a bunch
19:03:09 o/
19:03:31 pleia2 provide swift log upload traceback
19:03:31 jhesketh update os-loganalyze to use pass-through rules
19:03:31 jhesketh/clarkb have os-loganalyze generate indexes for directories which lack them
19:03:44 those 3 all seem related; anyone know if they happened?
19:03:56 i think the pass-through merged a few hours ago?
19:04:00 * fungi finds it
19:04:21 pass-through is in
19:04:30 needs checking; can use coverage for that
19:04:50 o/
19:04:52 cool
19:04:55 jhesketh was talking to notmyname about doing listings for index gen; not sure if the patch is up yet
19:05:14 #link https://review.openstack.org/208767
19:05:42 i don't see anything likely for indexes
19:06:04 ianw update yum dib element to support disabling cache cleanup
19:06:21 anyone know the status on that one?
19:06:31 There are some dib patches up
19:06:47 Clint was working on something related too, though f22 specific i think
19:07:11 https://review.openstack.org/#/c/211434/1
19:07:14 no, my bad. that was pabelanger
19:07:28 * fungi has no idea why he mixed them up
19:07:35 * Clint shrugs.
19:07:45 cool
19:07:47 mordred investigate problem uploading images to rax
19:07:50 ya. I have some dib stuff up for fedora 22, around dnf and such. But the changes are not in diskimage-builder; they're in system-config right now
19:08:02 jeblair: that seems to just be working now
19:08:07 jeblair: not sure if mordred did anything to it
19:08:11 I can review ianw's work too
19:08:18 clarkb: okiedokie :)
19:08:20 yeah, we're getting fairly consistent uploads to rax now, i think. double-checking
19:09:14 well, maybe. we have uploads from today and a week ago
19:09:50 so mayhaps not
19:10:16 :/
19:10:53 The other issue in that vein (nodepool dib working) is rootfs resizing currently isn't working for those nodes
19:11:26 I haven't had time to get that fully fixed in dib :(
19:11:58 nibalizer add beaker jobs to modules
19:12:06 in review
19:12:17 #link https://review.openstack.org/#/c/208799/
19:12:23 cool
19:12:23 nibalizer make openstackci beaker voting if it's working (we think it is)
19:12:31 anteaya asked that we start with one before exploding, so that's why that looks a little weird
19:12:40 also in review
19:12:43 #link https://review.openstack.org/#/c/208631/
19:13:09 and
19:13:10 nibalizer create first in-tree hiera patchset
19:13:16 #link https://review.openstack.org/#/c/206779/
19:13:20 also in review :)
19:13:29 w00t
19:13:37 jeblair write message to openstack-dev with overall context for stackforge retirement/move
19:13:37 nibalizer: I do like to see a new job building first, thanks
19:14:05 i wrote this: https://etherpad.openstack.org/p/3GYKL57APR
19:14:19 will send it later today
19:14:30 jeblair start discussion thread about logistics of repo moves
19:14:41 i'll start that after sending the first message
19:15:04 #topic Specs approval
19:15:15 we don't have any on the agenda today
19:15:28 i'll just note that i merged this:
19:15:34 #info greghaynes primary assignee on nodepool workers spec
19:15:35 #link nodepool workers spec https://review.openstack.org/208442
19:15:49 since greghaynes volunteered to take on an unassigned spec
19:15:52 yay!
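One of the action items above was having os-loganalyze generate indexes for directories uploaded to swift without one. A minimal sketch of the general idea follows; the function name and markup are illustrative assumptions, not the actual os-loganalyze implementation:

```python
import html
import os


def generate_index(dirpath):
    """Build a simple HTML listing for a directory, in the spirit of the
    autoindex pages discussed for log directories that lack one.
    Hypothetical helper, not the real os-loganalyze code."""
    rows = []
    for name in sorted(os.listdir(dirpath)):
        full = os.path.join(dirpath, name)
        # Directories get a trailing slash so relative links keep working.
        display = name + "/" if os.path.isdir(full) else name
        rows.append('<li><a href="%s">%s</a></li>'
                    % (html.escape(display, quote=True), html.escape(display)))
    return ("<html><body><h1>Index of %s</h1><ul>%s</ul></body></html>"
            % (html.escape(dirpath), "".join(rows)))
```
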
:)
19:15:56 thanks greghaynes
19:16:04 :) note the gerrit topic
19:16:06 reviews welcome
19:16:36 i also pushed up this change to specify zuulv3 would happen in branches on nodepool and zuul:
19:16:39 #link specify branch-based development for zuulv3 https://review.openstack.org/211687
19:16:51 which we discussed briefly last week
19:17:17 and finally, i've proposed we make maniphest a priority effort:
19:17:18 #link add maniphest to priority efforts https://review.openstack.org/211690
19:17:53 we should probably get formal votes on that
19:18:46 #link voting on maniphest priority effort open until 2015-08-13 1900 UTC
19:18:47 er
19:18:48 completely agree
19:18:55 #undo
19:18:55 Removing item from minutes:
19:19:00 #info voting on maniphest priority effort open until 2015-08-13 1900 UTC
19:19:15 #link add maniphest to priority efforts https://review.openstack.org/211690
19:19:17 just for good measure
19:20:01 #topic Restore from backup test (jeblair)
19:20:01 I'm out, weather is bad
19:20:30 anyone want to take this on?
19:21:17 take on the issue, or take on talking about it here in the mtg?
19:21:24 I hear greghaynes typing so maybe he does
19:21:26 I wish :( too many things ATM
19:21:30 take on the issue
19:21:31 clarkb: gotcha
19:21:47 I'm new. Looking for a challenge.... sounds fair :-)
19:21:54 what's this entail?
19:21:57 Any info on how it would work?
19:21:57 I would, except babies, and I am one of the two people that I think has attempted it in the past
19:22:03 we need someone to write up a plan (could be anyone), and a root member to execute it
19:22:05 and jeblair wanted new eyes on it iirc
19:22:50 pabelanger: we talked about it a little at the last meeting
19:23:23 pabelanger: but i think mostly what needs to happen first is to decide what it is we want to verify and how to go about it
19:23:26 okay. I mean, I can look around and see what is needed, if nobody else has the time
19:23:38 #link backup documentation http://docs.openstack.org/infra/system-config/sysadmin.html#backups
19:24:02 Yep, I think the consensus last meeting was to do a one-time backup restore test to gather information on what all we would need
19:24:08 pabelanger: jasondotstar: i'm happy to work with either or both of you on it
19:24:27 okay. Add my name to the list. I don't mind looking into it
19:24:39 same here.
19:24:50 we seem to be down a few root admins for today's meeting, which is probably not helping the volunteering
19:25:11 #action pabelanger,jasondotstar look into restore-from-backup testing
19:25:21 pabelanger: if you want to take point that's cool. this will help me get my feet wet
19:25:33 sure
19:25:44 fungi: yeah, less than half of us are here today?
19:25:51 seems that way
19:25:55 pabelanger: cool
19:25:58 ah, august
19:26:28 i wouldn't have picked this time of year to buy and move into a house, but at least i'm coming out the other end of that timesink now
19:26:31 anyway, this is something that will benefit from new eyes, so cool.
19:26:51 #topic Fedora 22 snapshots and / or DIBs feedback (pabelanger)
19:26:58 ohai
19:27:24 so, thanks to the help of people here, we can actually provision a jenkins node using fedora 22 (which is puppet4).
19:27:51 our first use of puppet4? :)
19:28:08 So, my questions are more for root admins about how to get fedora22 actually running. Questions are whether we can use snapshots or continue work on dibs
19:28:32 I have a few reviews up for cache_devstack and could use some eyes
19:28:46 the downside with snapshots: hpcloud doesn't have fedora22 images
19:28:51 not sure what is required to add them
19:29:02 i'd like to push on dibs if possible
19:29:02 as for DIBs, well I think people know the state of it
19:29:33 it would probably entail in-place upgrade of an f21 image to f22, if that's something fedora can do (i may be showing my debian derivative bias here)
19:29:38 I'm dubious that we should be running Puppet4 on one random machine
19:29:44 the snapshots would, i mean
19:29:59 nibalizer: is there a way to run puppet3?
19:30:12 jeblair: I haven't verified, but yes there should be
19:30:13 so, my question about dibs is: what would be a reasonable timeframe to get dibs all working?
19:30:42 but yes, i think adding new snapshot images goes against the "when you find yourself in a hole, the first thing to do is stop digging" adage
19:30:53 puppetlabs doesn't package puppet for fedora 22 yet, which implies they're not testing on fedora 22 yet, so i would be wary of trying to run it
19:31:10 moreover fedora is packaging it completely differently than puppetlabs will be
19:31:14 the puppet4 support comes from fedora/epel repos I guess
19:31:22 pabelanger: frankly, i don't think we can really set that. so much of the nodepool dib work seems to be blocked on mordred who is not around
19:31:37 ya so the two organizations that would package puppet4 are going about it different ways
19:31:39 right, the main reason for this is some downstream teams that would require fedora22. Even puppet bits
19:32:06 plus, some efforts for projects pulling in newer libs
19:32:06 which means it's not really stable yet
19:32:33 I agree puppet 4 is new; however, if puppet 3 is required, we could do what the puppet openstack team does and uninstall puppet at job launch, and set up a gem
19:32:34 what do you mean by this: 19:32 < pabelanger> plus, some efforts for projects pulling in newer libs
19:33:17 jeblair: my understanding: a few teams at RedHat are wanting bleeding edge packages for experimental support
19:33:38 not 100%, just heard rumblings
19:33:49 k
19:34:38 regarding puppet3/4 -- we don't run much puppet on these nodes, and we want to run even less in the future; i'd prefer to use the same puppet from the puppetlabs repo, but if that's not possible, i'm not too worried about using puppet4 here
19:35:27 if we can finish the dib work, we can start on that effort in earnest
19:35:52 yeah, it's local puppet apply, no remote/puppetmaster involvement at all
19:36:15 ya really for the provisioning step, puppet3 vs puppet4 we won't feel much of a difference
19:36:34 I would recommend we write the 10 lines of shell to get the puppetlabs repo in place and just use puppet3
19:36:47 especially since I'm the one who will be getting pinged because puppet4 did something stupid
19:36:51 nibalizer: er, i thought it wasn't an option?
19:36:55 * jeblair is confused
19:36:59 nibalizer: puppetlabs doesn't package for fedora 22 yet
19:37:07 http://yum.puppetlabs.com/
19:37:13 either 3 or 4
19:37:18 crinkle: jeblair: so gem install puppet --version=3 or something
19:37:25 or add the f21 packages :)
19:37:48 that might be my debian showing
19:38:23 nibalizer: pabelanger seems to be saying that this is working now; is it worth doing more work for that?
19:38:51 Ya, fedora 22 is working with -infra puppet modules
19:38:53 probably not
19:39:29 so yea we can just go with puppet4
19:39:31 so maybe we try this out, and if we keep shooting ourselves in the foot, then invest in puppet3ifying it?
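The trade-off just discussed (no puppetlabs packages for fedora 22, so either take puppet 4 from the distro repos or pin puppet 3 via gem) can be sketched as a selection function. This is a hedged illustration only: the command strings are assumptions for clarity, not what install_puppet.sh actually runs, and the function deliberately returns commands rather than executing anything:

```python
def puppet_install_steps(distro, release, prefer_puppet3=True):
    """Pick the commands that would install puppet on a platform.

    Illustrative assumption-laden sketch: puppetlabs publishes no
    fedora 22 packages, so f22 either pins puppet 3 from rubygems (as
    suggested in the meeting) or takes the distro's puppet 4.
    """
    if distro == "fedora" and release >= 22:
        if prefer_puppet3:
            # Pin a puppet 3 series gem, per nibalizer's suggestion.
            return ["gem install puppet --version '~> 3.8' --no-document"]
        # Otherwise take whatever fedora ships (puppet 4).
        return ["dnf install -y puppet"]
    if distro in ("ubuntu", "debian"):
        # Elsewhere the puppetlabs repo provides puppet 3 packages.
        return ["dpkg -i puppetlabs-release.deb",
                "apt-get update",
                "apt-get install -y puppet"]
    return ["yum install -y puppetlabs-release", "yum install -y puppet"]
```
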
19:39:42 so much so, I want to get a puppet apply test node going to make sure we are gating on changes for puppet 4
19:40:03 ya that's a decent idea
19:40:06 even if non-voting
19:40:26 trusty/precise though, since we don't run any services on f22
19:41:09 but f22 nodes, once available, won't hurt
19:41:10 nibalizer: we do run the slave template through the apply test, so that at least can run on f22
19:41:47 pabelanger: got your questions more or less answered?
19:42:16 jeblair: so, Fedora 22 dibs? Drop snapshot?
19:42:51 i think so; it's work either way, and at least this way we're all focused on the same probs
19:43:26 worst case, while f22 images are experimental we can get by with only having them in hpcloud
19:43:42 okay
19:43:48 and continue working on getting red hat based systems working with glean
19:44:09 whatever the blocker is there at the moment
19:44:11 Right, I haven't even tried fedora22 with uploading yet
19:44:16 so, not sure what will happen
19:44:37 either way, will continue hacking away on it
19:45:04 thanks!
19:45:08 #topic puppet-pip vs puppet-python (rcarrillocruz)
19:45:26 right, so pip is installed from install_puppet.sh, then unmanaged by puppet modules
19:45:50 we found this problem downstream, where puppet and pip are not installed in that way for some cases
19:46:03 we were relying on puppet-pip, which doesn't do what it promises
19:46:17 in what case does install_puppet.sh not get run?
19:46:30 infra-ansible for example
19:46:49 it's not a part of the automation itself, so right now it's just a manual step that is not related to the automation
19:46:52 nibalizer: downstream consumers of our puppet modules not provisioning systems the way we do
19:47:08 what's infra-ansible?
19:47:21 clarkb: it's a project we have downstream, to automate a whole infra
19:47:31 ok, couldn't you have it run install_puppet.sh?
19:47:36 why is this downstream?
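The gap being raised here (the modules assume puppet and pip already exist, and nothing checks) could be caught early with a preflight check. A minimal sketch, with a hypothetical helper name that is not part of the infra repos:

```python
import shutil


def missing_prereqs(required=("puppet", "pip"), which=shutil.which):
    """Return the prerequisite commands not found on $PATH.

    The `which` lookup is injectable so the check can be exercised
    without touching the real system.
    """
    return [tool for tool in required if which(tool) is None]
```

A provisioning wrapper could call this before applying any infra puppet modules and fail with a clear "before you begin" message listing the missing tools.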
19:47:36 basically "getting pip installed on your server" is an exercise left to the reader for puppet modules where we use the pip package provider
19:47:49 fungi: in my mind i completely agree with you
19:47:58 yolanda: isn't that one of our highest priority efforts we're supposed to be working on together upstream?
19:48:02 if you want to use infra modules you need a couple prereqs such as puppet and pip
19:48:05 jeblair, we are
19:48:16 we could do better about listing what's needed 'before you begin'
19:48:18 it's on github at the moment, in development process
19:48:33 yolanda: no, that's not how we work
19:48:48 jeblair: why do you say that?
19:49:01 * fungi notes that "on github" is not "working upstream"
19:49:08 we don't work by doing things off in the corner on github
19:49:26 I am in favor of using an upstream puppet module for this
19:49:50 pabelanger: i think the premise is flawed
19:49:58 jeblair: but we cannot work all the time by proposing upstream specs and waiting for approval
19:50:02 because we simply don't have the time
19:50:16 so in some cases we need to cook some things downstream, then try to reuse
19:50:35 yolanda: i'm sorry you feel that way. i could not disagree more about that, and the way you are dealing with it.
19:50:39 let's move on to another topic.
19:50:43 #topic Nodepool REST API spec (rcarrillocruz)
19:51:06 that has been pending review for a long time
19:51:16 so Ricky needed some reviews to move it forward
19:52:18 it looks like it could use some more elaboration and agreement
19:52:27 link?
19:52:29 particularly the keystone thing seems vague
19:52:39 #link https://review.openstack.org/141016
19:52:42 thanks!
19:53:27 so, someone who actually knows something about keystone, and its suitability here, should probably weigh in on that
19:53:57 I can. I wrote something using both pecan and keystone last year, and can see how it would work
19:54:15 pabelanger: cool, thanks
19:54:30 i do worry a little about tying nodepool to keystone, but i'll save my comments for the spec
19:55:10 fungi: ya, it would be optional and easy to disable.
19:55:19 or easy to enable ;)
19:55:32 that would seem to imply that in order to use this, you need to have a cloud account on one or more or all providers
19:55:40 and i don't really understand how that fits with the nodepool usage model
19:55:56 or why we would assume that would be the case
19:56:15 or, it looks like mordred suggested we could run a keystone specifically for this
19:56:23 which sounds heavyweight, but what do i know
19:56:31 that sounds ... like a lot of work
19:56:41 i'd prefer to keep it simple really
19:56:59 i must have mis-skimmed it because i thought it was more like treating nodepoold as a rest server in an existing cloud environment (similar to nova, glance, et cetera)
19:57:47 That's what I assumed
19:57:55 where the only benefit was not having to use some not-from-openstack authentication mechanism
19:57:58 there is also very little indication of why it's needed. nodepool is designed to have no user interface
19:58:23 jeblair, it's one of the most important features needed for downstream consumption
19:58:33 on a daily basis, users are requesting to hold nodes all the time to debug issues
19:58:36 yolanda: the spec should probably mention that
19:58:46 a proof of concept using http basic auth would probably get us most of the way, and then you could fairly easily add other auth mechanisms supported by apache
19:59:00 yolanda: right, so there's a use case that should be described, and then we can elucidate requirements from that
19:59:11 jeblair, can you note it on the review? not mine
19:59:20 * fungi will add notes too
19:59:22 for instance, it sounds like a simple http access credential might suffice
19:59:34 yolanda: of course... it was proposed as a meeting topic though. ;)
20:00:08 IIRC, pecan already supports hooks into keystoneclient, which makes the integration into keystone that much easier
20:00:24 yes, and great to be talking about it. So the most important need is autohold of nodes, but having some way to interact with nodepool features sounds like a good idea to me
20:00:31 time is up; thanks everyone
20:00:34 #endmeeting
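The http-basic-auth proof of concept suggested above could start as small as validating the Authorization header before dispatching a request. A hedged sketch of that check follows; the function name, credential store, and error handling are assumptions for illustration, not taken from the spec (a real deployment would store hashed passwords and could defer authentication to apache entirely, as noted in the meeting):

```python
import base64
import hmac


def check_basic_auth(header, users):
    """Validate an HTTP Basic Authorization header against a dict of
    username -> password. Returns the authenticated username, or None
    if the header is absent, malformed, or the credentials don't match.
    """
    if not header or not header.startswith("Basic "):
        return None
    try:
        decoded = base64.b64decode(header[len("Basic "):]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return None
    username, _, password = decoded.partition(":")
    expected = users.get(username)
    # compare_digest avoids leaking the mismatch position via timing.
    if expected is not None and hmac.compare_digest(password, expected):
        return username
    return None
```
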