19:00:39 #startmeeting infra
19:00:40 Meeting started Tue Aug 25 19:00:39 2015 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:43 o/
19:00:44 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:00:44 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-08-18-19.01.html
19:00:46 The meeting name has been set to 'infra'
19:00:50 #topic Specs approval: Host Stackalytics Service
19:00:51 o/
19:00:53 o.
19:00:53 o/
19:00:59 #link Host Stackalytics Service spec https://review.openstack.org/#/c/187715/
19:01:27 pabelanger put this on the agenda i believe
19:01:35 he did
19:01:42 o/
19:01:51 i'm not entirely clear on where we are with mirantis and stackalytics.org
19:01:53 \o
19:02:01 jeblair: they are all behind it last I heard
19:02:13 the patchset prior to the last has jaypipes' +1
19:02:21 then it got removed for grammar fixes
19:02:22 cool, and the spec certainly says they are behind the idea of running it in infra generally at least
19:02:30 time to merge
19:02:34 anteaya: i think we can mentally carry that over
19:02:40 I'm for that
19:02:53 so yeah, it does seem like this has been discussed and revised appropriately and is ready for voting
19:02:57 any objections?
19:03:17 i'm not convinced default_data.json should move into openstack-infra/project-config but we can fine-tune when it gets to that point
19:03:47 o/
19:03:53 fungi: oh good point. that may want to be its own repo (we're collecting repos that are for data files)
19:04:16 but agree, it won't kill us if it is, and we can amend the spec with that improvement
19:04:57 on the whole the plan looks sound
19:05:11 #info Host Stackalytics Service spec voting open until 2015-08-27 19:00 UTC
19:05:21 pabelanger: w00t! thanks!
19:05:26 o/
19:05:40 #topic Schedule Project Renames
19:05:40 jeblair: np
19:06:00 o/
19:06:18 last week sept 11 or 12 was tentatively suggested for the pre-summit renames of stuff already in the queue
19:06:36 those dates still work for me so +1
19:06:37 still works for me
19:06:38 er, you get the idea ^
19:06:57 slight preference for friday so I can weekend on the weekend
19:07:00 i'm still good with that, so if there are more people behind the idea now that everyone's back from the ops mid-cycle, let's nail it down firmly
19:07:10 that is the ironic mid-cycle which I won't be attending
19:07:17 those dates are fine with me
19:07:31 looks clear from a release mgmt pov
19:07:43 those dates are good for me, also slight pref for the 11th
19:07:56 ya it should be after the milestone and before the freeze craziness
19:08:06 I can do those days
19:08:08 oops I'm wrong, the ironic one was in august
19:08:08 no preference
19:08:14 no mid-cycle on those dates
19:08:19 let's do the 11th then, objections?
19:09:06 and time -- 2300?
19:09:11 fine
19:09:11 wfm
19:09:12 wfm
19:09:33 wfm
19:09:35 are we freezing the list to what is currently there?
19:09:43 #agreed next project renames on fri, sept 11 at 2300 utc
19:09:51 or can folks stuff more in, and if they can, can we have a cut-off date?
19:09:59 anteaya: it's still a couple weeks out, I think we can give some time
19:10:09 can we have a cut-off date?
19:10:18 let's just limit it to big tent projects though
19:10:28 I can agree to that
19:10:37 since this is still manual, it will take some work
19:10:45 things in governance/projects.yaml then?
19:10:49 i don't want to deal with the stackforge flood without automation
19:10:55 anteaya: yeah
19:11:03 yep. moving non-official projects from stackforge to openstack namespace should simply happen all at once. there's no urgency for those
19:11:04 works for me as a definition
19:11:33 #agreed moving official projects only; stackforge move will happen later (with automation)
19:11:53 (with science!)
19:11:58 we also need to nail down the stackforge date...
19:12:04 what did we propose?
19:12:10 was going to ask you
19:12:15 * jeblair digs
19:12:25 I don't think we proposed one for stackforge
19:12:37 oct 17 or nov 7
19:12:38 it was a date in october and one in nov
19:12:39 we proposed two rough timeframes
19:12:41 is what i put in the email
19:12:42 yeah
19:12:44 oh right, from the email
19:12:46 no one expressed a pref
19:12:47 I was thinking last meeting
19:12:50 nov 7 I will be at pycon.ca
19:12:59 preference for oct 17
19:13:04 i'm better with sooner rather than later, since there were no real objections expressed
19:13:08 though I may be christmasing
19:13:17 ya earlier is probably better
19:13:24 get it done in the quiet before the summit
19:13:33 then we don't have to worry about it when recovering from summitting
19:13:45 I'm traveling pretty much from oct 13th onward (for what feels like forever, since summit)
19:13:46 sounds good to me
19:14:09 pleia2: wave to mordred as you fly past each other
19:14:12 that's <2 months though, so we need to create that wiki sign-up page pretty much immediately so that projects can't claim they didn't have enough time to find out and add themselves
19:14:19 * mordred plans waving at pleia2
19:14:19 jeblair: heh, right
19:14:22 fungi: agreed
19:14:30 fungi: very much so
19:14:49 so how about i set up the wiki page and send an announcement about oct 17th?
19:14:59 sounds good
19:14:59 sounds good to me
19:15:09 you typed that faster than i could, so all yours! ;)
19:15:12 #agreed move stackforge projects to openstack oct 17th
19:15:22 how much down time will we need for that one do you think?
19:15:23 #action jeblair send announcement about project renames sept 11
19:15:31 the github changes will take forever
19:15:32 #action jeblair send announcement about wiki page and stackforge move
19:15:56 anteaya: we could probably let the github changes lag behind on that one
19:16:04 oh okay
19:16:04 they will become eventually consistent
19:16:17 might want to include that bit in the announcement
19:16:31 to get in front of the tide hitting the channel
19:16:46 well, if we want the renames/transfers in github to dtrt, then we likely need to make sure we disable replication to there until we finish that part
19:16:47 anteaya: yes, though perhaps in a later announcement when we know more about the actual mechanics?
19:17:00 jeblair: yep, as long as I have something to point to
19:17:03 fungi: i don't think it'll hurt
19:17:06 don't care when
19:17:23 fungi: some replications will just get rejected if the repos have the wrong names
19:17:27 fungi: it will just error
19:17:43 fungi: but then catch up later, and we can trigger a full run on completion
19:17:58 oh, for some reason i thought manage-projects was going to create the "new" repos there
19:18:10 fungi: oh hrm, it will do that
19:18:21 so it just depends on whether or not we need the github redirects
19:18:22 so just trying to remember to account for that
19:18:30 but not a plan we need to hash out in this meeting
19:18:34 perhaps a meeting agenda item for automated rename planning?
19:18:35 yeah, i bet we can work out a way :)
19:18:37 fungi: ++
19:19:08 cool, moving on then
19:19:12 #topic Priority Efforts (Migration to Zanata)
19:19:39 so, we're in a great spot, the translators are planning on using Zanata for Liberty translations :)
19:19:47 woohoo!
19:19:47 pleia2: w00t!
19:19:50 yay
19:19:50 i've seen much recent traffic on the i18n ml about this
19:20:03 I have one last review up to fix our installation process: https://review.openstack.org/#/c/192901/
19:20:13 then we should be ready to launch our production server at translate.openstack.org
19:20:31 yay
19:20:35 if we can do that by the end of the week, next week can be spent making sure all our scripts are still working as expected, and doing a hand-off to the translations folks after labor day
19:20:41 StevenK also has a handful of changes related to automagically configuring zanata for projects that get translated
19:20:51 I have reviewed the stack, would be good if some other cores could get through those
19:20:55 * jeblair does happy dance
19:20:58 pleia2: not being familiar with the db schema complexity in zanata, i was somewhat dubious of the "copy all the tables from the dev server" approach
19:21:04 we also need some details from Carlos about importing the user directory out of the -dev instance so they don't need to set all that up again
19:21:09 is that going to be as simple as it was made to sound?
19:21:18 fungi: me too, I kept resisting, they all insist, so I'm leaning on Carlos for a solid plan
19:21:46 adding coordinators and users is actually a pain for the users, so I sympathize
19:22:27 unfortunately I have some unexpected travel next week, so I'm going to put all this into an email to openstack-infra so we can make sure we have a plan
19:22:42 and I am happy to cover for pleia2 again
19:22:47 as I mostly have a picture of what is going on now
19:22:48 and I can check in next week if needed
19:22:51 thanks clarkb!
19:23:00 greghaynes: ah, the problem is that dnsmasq is binding to eth2.25 and not eth2 for some reason
19:23:03 oh doh
19:23:07 hah.. misconfigured, that's why
19:23:22 SpamapS: also mis-channeled :)
19:23:31 so we've got a little work to do, and only a little bit of time, but I think it's all doable
19:23:48 pleia2: cool, zanata reviews to the top of the list then
19:23:48 pleia2: nice work carrying this all through
19:23:52 dohhh so sorry
19:23:54 jeblair: thanks
19:23:56 anteaya: pleia2 ++
19:24:50 #topic Priority Efforts (Downstream Puppet)
19:25:49 httpd module proved to have some races, difficult to fix. We propose to move to puppetlabs-apache instead (pabelanger, yolanda)
19:25:57 so that has come up these days
19:26:04 when moving to httpd, some races showed up
19:26:23 there have been some efforts to fix it, either in the httpd module or in the manifests using it
19:26:41 a couple things here: the races are simple to fix and moving to puppetlabs-apache is not simple. I still have a strong preference for sticking with the httpd module at least until it is possible to change modules, then we can worry about switching
19:26:42 and this also raised the topic of why we are using a pretty old fork, and whether to put effort into fixing that
19:26:46 we didn't change anything in the module though, so I think these issues existed in the old apache module we were using
19:26:48 but it really isn't possible to switch yet
19:27:06 nibalizer: they did, I think we just tickled a puppet "bug" by changing the name, which appears to have changed execution order
19:27:28 clarkb, i agree that we cannot move in the short term, but we should have a plan to migrate
19:27:34 or this was happening before and we didn't notice because it's eventually consistent
19:27:39 crinkle: or that
19:27:39 hrm... one second
19:28:02 so my thinking is we've upped our testing game and that has exposed the issue, not the fork
19:28:20 clarkb: can you quickly summarize the recent past and current issues here? i'm slightly confused and am losing track of which modules are which and what we're doing :)
19:28:25 yes, i agree it has always failed
19:28:28 jeblair: yup
19:28:28 it works on the second pass
19:28:32 (i'm hoping this will be beneficial for some other folks here too)
19:28:33 nibalizer: i didn't find it through testing at all. i found it because i needed to deploy a new server
19:28:54 fungi, i have hit that several times downstream, but just fixed it by re-executing puppet
19:29:03 jeblair: yes thank you
19:29:05 so the issue is that puppetlabs-apache completely removed the ability to ship in a complete working vhost template file. Instead you have to use a bunch of puppet dsl primitives to construct the vhost. This makes certain things like mod_rewrite extremely difficult to use, and we use a lot of mod_rewrite
19:29:26 o/
19:29:27 I noticed this issue while writing acceptance tests for puppet-gerrit
19:29:29 our solution to this was to keep using the old version of the module that lets us use it the way we want, under a new name so we don't conflict with third party modules that do use upstream apache
19:29:48 my preference would be to fix the race issue, so as not to block people. In parallel, start the migration to puppet-apache again, while keeping everybody happy.
19:29:50 clarkb, pabelanger had several changes that proved this could be easily solved, either by writing a wrapper
19:29:53 now that we have renamed the module we find that mod installation doesn't happen before starting the apache service by default, which leads to puppet apply races
19:29:55 or by sending the patch upstream
19:30:02 the basic races we've exposed: if you try to install configuration files into directories created by the apache package, you have to make sure the package is installed before you do that. if you want to activate vhosts which depend on certain default-disabled apache modules, you have to make sure those modules get enabled before the vhost configuration is applied
19:30:04 you can easily fix this with require/before in your code that uses the apache module
19:30:18 fungi: right
19:30:23 clarkb, it means fixing all of our manifests
19:30:29 we used to call our fork puppet-apache, now we call it puppet-httpd, and both of those have that problem (inherited from the old thing we forked from), yeah?
19:30:30 + adding some documentation about this usage
19:30:39 while we have alternatives that work from scratch
19:30:57 jeblair: the old puppet-apache wasn't really a fork, just an older checkout, and yes, according to crinkle and nibalizer the bug likely existed in both places
19:30:58 jeblair, yes
19:31:10 we are noticing it more now with testing
19:31:24 awesome, i think i am caught up, thanks!
19:31:46 clarkb: thanks for explaining
19:31:48 however i wasn't seeing this issue prior to the module rename, so i suspect that puppet happened to be getting lucky and arbitrarily picking the working order before, and is now picking a non-working order
19:31:57 fungi: oooh
19:32:05 fungi, i saw that downstream when spinning up new modules
19:32:07 * anteaya is also actually able to follow along
19:32:07 we can't fix this in the module
19:32:09 ?
19:32:28 like, have httpd::module have the correct requirements?
19:32:30 jeblair: there are a few ways forward
19:32:38 cool, let's enumerate
19:32:40 but no clearly best option
19:32:48 jeblair, httpd_mod is a custom puppet type, so it will need some extra work
19:33:01 1) fix it in the module itself by hacking the type
19:33:14 2) put require => Service['httpd'] around our puppet code where necessary
19:33:21 s/require/before
19:33:24 i think the various proposals are: 1. fix it in each module which uses openstackinfra/httpd; 2. fix it in openstackinfra/httpd; 3. get puppetlabs/apache working (perhaps by submitting the support we need for our workflow or perhaps by making a shim/wrapper module)
19:33:25 yes sorry
19:33:41 3) Create a httpd::mod defined type that wraps httpd_mod and adds the before
19:33:51 there was a proposal already written by glauco here https://review.openstack.org/216436
19:34:06 4) Pivot to the later puppetlabs-apache module
19:35:14 I have some opinions on which ones I like the most, I think others do as well
19:35:34 i guess nibalizer's 1, 2 and 3 are sub-options of my #2
19:36:24 Can we take working code from puppet-apache and backport it into puppet-httpd?
19:36:25 fungi: no, my #2 is your #1, and my 2 and 3 are sub-options of your #2
19:36:25 regarding nib.4 and fungi.3, wasn't there some resistance upstream to the 'dump a vhost' model?
19:36:31 option 1 means hiding a problem in the module: you need to fix all the manifests, and make new manifests use the same workaround. I'd expect, when i use a module, that i can rely on its features without having to know the internals of it
19:36:55 this leaves a simple migration path to puppet-apache, vs us creating our own custom path
19:37:07 jeblair: I am pretty sure they would accept that patch if it's not already in there
19:37:14 so does that mean that to use puppetlabs apache, we either try to convince them, or start using the module the way they intend (which may be a great amount of work for us)?
19:37:19 crinkle: oh ok
19:37:34 anyway, since the numbers are now confusing... the objection to fixing it in each calling module is obviously code duplication. the objections to fixing it in openstackinfra/httpd are mostly that it's throwing good money after bad/reinventing the wheel. the objections to switching to puppetlabs/apache are that we suspect it still doesn't do the things we need
19:37:34 pabelanger mentioned using custom_fragment, which might be what we need
19:38:06 fungi: also that switching to puppetlabs/apache will take a while and we want a fix soon
19:38:09 I think this is being brought up as a critical flaw, which I disagree with
19:38:18 i feel like fixing it in openstackinfra/httpd is throwing only a small amount of good money after bad, and that outweighs the other two.
19:38:20 crinkle: I looked into that back when I was told "just use custom fragment" and I wasn't happy with it
19:38:21 the nature of puppet is needing to establish your before/after
19:38:22 jeblair, fungi, even if we use puppetlabs-apache the way they like, it shouldn't be complex: creating a wrapper and using some features it offers. pabelanger had a decent sample
19:38:30 I like treating files as files in puppet
19:38:39 nibalizer: agreed
19:38:49 what about custom_fragment => template('my template')
19:38:59 crinkle: you can't, because then you lose all the header stuff
19:39:02 clarkb, nibalizer: is that an argument for doing before in our leaf puppet modules?
19:39:03 clarkb: okay
19:39:05 crinkle: it's very opinionated, or was when I looked at it
19:39:12 jeblair: yes
19:39:15 jeblair: yes
19:39:25 right, we want it to be able to replace the entire vhost config with a template and not make decisions for us
19:39:39 most puppet resources don't automagically position themselves in the graph, I'm not sure why we're so surprised this doesn't
19:39:40 here, that's pabelanger's sample https://review.openstack.org/#/c/216747/
19:39:52 Honestly, I still don't fully understand what puppet-httpd provides that puppet-apache doesn't. I _think_ custom_fragment is what people are looking for, but it would be good to list some place what people actually want. I think some people want a blank vhost.conf file that apache will load outside of puppet control?
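For readers following along, the "fix it in each calling module" option discussed above amounts to adding explicit ordering metaparameters in the leaf manifests. A minimal sketch, with hypothetical class, file, and template names (only the Service['httpd'] title is taken from the discussion; the other resource titles are assumptions):

```puppet
# Hypothetical leaf manifest using the openstackinfra/httpd module.
# The explicit before/require metaparameters are the fix under discussion:
# puppet does not order these resources automatically.
class openstack_project::example_site {
  include ::httpd

  # Enable mod_rewrite before the service starts, so no vhost that
  # depends on it is ever applied against an apache missing the module.
  httpd_mod { 'rewrite':
    ensure => present,
    before => Service['httpd'],
  }

  # Files dropped into directories created by the apache package must
  # wait for the package to have created the directory tree.
  file { '/etc/apache2/conf.d/example.conf':
    ensure  => present,
    content => template('openstack_project/example.conf.erb'),
    require => Class['::httpd'],
    notify  => Service['httpd'],
  }
}
```

The objection raised in the meeting is that this ordering boilerplate then has to be repeated in every manifest that uses the module.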
19:39:55 uses custom_fragment + template
19:39:56 i think that being explicit and adding befores where they are needed is the puppet way of doing things
19:40:01 the red in https://review.openstack.org/#/c/216747/2/modules/openstack_project/templates/grafana.vhost.erb is the problem
19:40:03 yolanda_: ^
19:40:18 pabelanger: i think clarkb just explained why custom_fragment is insufficient
19:40:28 clarkb, nibalizer: in general i agree... i guess it boils down to specifics; if we use mod_rewrite in our vhost, we need to specify that, but i think that if we say we use mod_rewrite in puppet, then the httpd puppet module should know to install httpd first.
19:40:47 jeblair ++
19:40:50 jeblair: ++
19:40:54 that's fair
19:41:06 jeblair: well, the problem that arises is that it currently tries to apply the vhost configuration before it has enabled mod_rewrite
19:41:32 so not specifically a race around the httpd service
19:41:40 fungi: that sounds like a module bug to me; i think all mod enablement should happen before writing vhost config
19:41:59 fungi, jeblair, that is fixed in puppetlabs-apache by adding a notify of apache::service, which restarts on each vhost or mod change
19:42:37 yolanda_: yeah, it just sounds like a simple fix to our module to clear these errors, so why not do that first?
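For context, the puppetlabs-apache custom_fragment approach debated above looks roughly like the following sketch (the vhost name, docroot, and template path are illustrative, not from pabelanger's actual change). The point of contention is that the module, not the operator, still generates the surrounding <VirtualHost> wrapper and header directives, so a complete vhost file cannot be shipped as a plain template:

```puppet
# Sketch of the custom_fragment approach (compare
# https://review.openstack.org/#/c/216747/). Names are illustrative.
class openstack_project::grafana_vhost {
  apache::vhost { 'grafana.openstack.org':
    port            => 80,
    docroot         => '/var/www/grafana',
    # Anything not expressible as a vhost parameter gets injected as raw
    # text here; the header/boilerplate around it is still the module's.
    custom_fragment => template('openstack_project/grafana.vhost.erb'),
  }
}
```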
19:42:38 but we have other unrelated ordering issues too, such as: we were previously relying on the apache2 package to create /etc/apache2 and its subdirectories, but didn't require that package or the module before trying to stick files into them
19:42:42 jeblair: that doesn't really jive with most of puppet
19:42:57 consider that if you would like a user and an ssh key, you do have to explicitly set the ordering there
19:42:58 jeblair, that's what glauco tried in his patch, but it's a fix in ruby
19:42:59 nibalizer: that's unfortunate, because that's how you admin apache
19:43:06 our httpd_mod is a custom puppet type, not a defined type
19:43:09 so people were not trusting it
19:43:25 yolanda_: right, so my suggestion was to use a defined type
19:43:31 jeblair: puppet allows you to do this of course, but the idea is to put the ordering in control of the operator, and not auto-figure-it-out
19:43:32 so some of my concern is that we have multiple bugs exposed here, and people are focusing on one or another, and each may need to be solved in different ways or even different places
19:43:47 clarkb: yes, and that's when the conversation came up about putting effort there, or moving to apache
19:43:48 I explained why the other implementation is likely buggy, but no one seems to understand the puppet internals enough to confirm (which is a good reason not to do it on its own)
19:43:51 everywhere in infra, we tend to want puppet to do less, and we'll tell it what to do, so I'm somewhat surprised we want puppet to be clever here
19:43:52 fungi: !
19:43:56 I'm also pretty sure glauco_'s patch doesn't work, based on the result of the beaker test
19:44:00 we are also hitting some problems with our fork, so we should consider whether the effort is needed
19:44:30 nibalizer: there's never a reason to enable a module _after_ restarting a configuration with a new vhost
19:44:38 crinkle: it worked fine with ubuntu trusty. Not sure why it failed for centos.
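nibalizer's option 3 above (a httpd::mod defined type wrapping the custom httpd_mod type, with the before baked in) might look something like this sketch; the $ensure parameter and notify are assumptions, only the Service['httpd'] ordering is taken from the discussion:

```puppet
# Sketch of a httpd::mod defined type for openstackinfra/httpd.
# It wraps the custom httpd_mod type so the ordering fix lives in one
# place instead of in every calling manifest.
define httpd::mod (
  $ensure = present,
) {
  httpd_mod { $name:
    ensure => $ensure,
    # Enable/disable the apache module before the service is (re)started,
    # so vhosts depending on it are never applied first.
    before => Service['httpd'],
    notify => Service['httpd'],
  }
}
```

Callers would then write `httpd::mod { 'rewrite': }` instead of using `httpd_mod` directly.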
19:44:58 nibalizer: that's something that the module should handle for you, because it is a very well understood ordering of operations that has only one correct answer
19:45:22 jeblair: sure
19:45:24 nibalizer: i agree, any time it is unclear, users should be allowed to express options
19:45:31 i think we may need to go to the ml on this
19:45:39 there are currently two patches proposed to httpd to enable that functionality
19:45:53 nibalizer: can you send a summary email to the list with the options, and we can try to further refine?
19:45:57 well, more to the point, configuration (vhosts or otherwise) and enabling modules should be done and any service restarts deferred until that has all completed
19:46:00 sure
19:46:28 #action nibalizer send summary email of puppet apache issues to infra list
19:46:40 (i picked nibalizer since he enumerated the options so well earlier :)
19:46:53 nibalizer: nice enumeration
19:47:16 thanks all, hopefully this was a useful start
19:47:17 #topic Summit space (jeblair)
19:47:29 goodness, that time again?
19:47:32 ttx would like to know like right now what we need in terms of summit space
19:47:33 already?
19:47:47 all of it
19:47:50 a table on a deck
19:47:56 with umbrellas
19:47:58 i know dhellmann has at least one session he wants to propose
19:48:01 and baby geese
19:48:28 and devananda suggested maybe we could share a meetup space and do some cross-pollination with ironic/bifrost since we're infra-cloudy
19:48:35 I think our working group times were valuable (particularly for me working with newcomers), so space for that again would be nice
19:48:41 anyone else have a feeling for what we should request?
19:48:51 pleia2: ack
19:48:57 can you summarize the format/options this time around? is it fishbowls and boardrooms again, or something new this time?
19:48:58 asselin_: do you need a space for openstackci this time?
19:49:19 I'll likely work with translator sessions again, so no needs on the infra side
19:49:19 * Fishbowl slots (Wed-Thu)
19:49:20 * Workroom slots (Tue-Thu)
19:49:23 * Contributors meetup (Fri)
19:49:31 a puppety workroom slot would be great
19:49:48 also
19:49:48 - We have about twice as many workrooms as we have fishbowls
19:49:48 - We have fewer rooms compared to Vancouver
19:49:49 - Rooms (especially workrooms) are generally smaller
19:49:49 - We have a lot more teams asking for space, thanks to the Big Tent
19:49:53 anteaya, I was hoping to talk about common-ci. Not sure if my official presentation got accepted or not. If not, then yes it would be nice to do that still.
19:50:14 so that's nice: it sounds like we like workrooms and we're workroom heavy
19:50:15 can we pencil in a time for asselin_ and friends?
19:50:34 anteaya: let's just throw out ideas here, and i'll collect them and try to summarize space needs generally
19:50:41 okay, so workrooms will probably need their topics better scoped this time. we had a lot of random people glom onto our workgroup sessions in vancouver, which tended to make them less productive
19:50:42 very good
19:50:48 i think we need to estimate first, then maybe get detailed later
19:51:06 space to hack on zuulv3 details may be useful
19:51:06 other than supporting asselin_'s efforts there isn't anything I personally am working on that needs a space
19:51:06 likely remote this time, so a fat pipe to the asterisk server works for me :)
19:51:25 fungi: yah, and related to what pleia2 said, we may want to try to focus something on newcomers
19:51:32 I liked the "get something done in 40 min" concept
19:51:44 although unsuccessful
19:51:48 ooh, right. infra 101 tarpit/honeypot ;)
19:51:58 ttx: we got it done the next week and it is now awesome :)
19:52:00 ha ha ha
19:52:39 infra 101 would be great to get reps from each project to make sure they understand how to set up tests
19:52:43 and read logs
19:52:54 at least someone from each project
19:53:00 yeah, irc meeting management in code review was a success, so a great example we can try to match for scoping the next round of ideas
19:53:07 anteaya: in that case, it should probably be a x-project session
19:53:22 I can get behind that
19:53:29 okay, let me know if you have other ideas
19:53:31 #topic Request to use cloud server for developer.rackspace.com rather than Cloud Sites (annegentle)
19:53:34 as long as people leave with something useful
19:53:37 annegentle: around?
19:54:27 i assume the idea here is moving the developer.openstack.org site to a vhost on static.openstack.org (or to a separate server completely)
19:54:42 but having the reasons explained would be great
19:55:16 I know we wanted to swift host these things, and we probably can with what we know about swift now
19:55:29 clarkb: actually we did not want that
19:55:31 since we can generate indexes for the entire paths of docs (but we don't for logs, and that is the remaining work item there)
19:55:33 jeblair: oh?
19:55:34 also, there are challenges around the scp publisher plugin... its support for layout is not identical to what you can do with the ftp publisher
19:55:52 clarkb: http://specs.openstack.org/openstack-infra/infra-specs/specs/doc-publishing.html is pending completion of swift logs
19:56:03 clarkb: but we actually can't use swift for serving things.
19:56:06 fungi: yeah
19:56:21 jeblair: right, apache would proxy like with the logs
19:56:27 so we _can_ move them, but it's not clear to me that it would be better than ftp at this point
19:56:33 jeblair: I am saying that we can do that now; the work required for docs should be complete
19:56:41 clarkb: ok, but that's still not the plan we wrote :)
19:56:45 clarkb: feel free to amend :)
19:56:53 I must be misremembering then, will reread
19:57:06 yeah, the current spec is filesystems (maybe afs)
19:57:08 clarkb: it's complicated; fortunately, at least the spec captures all the reqs
19:57:22 and rsync if memory serves
19:57:31 fungi: could pivot to afs, but afs isn't in the current plan either; basically rsync.
19:57:39 did we change it? I swear it was swift based when it started
19:58:04 clarkb: it hasn't changed in a while :(
19:58:10 swift is a middle-man
19:58:15 yep
19:58:38 so while the spec is "docs publishing via swift", it's not serving them from the copies in swift
19:58:49 annegentle: maybe post to the infra list? we'd love to help but don't know the problem. :)
19:58:54 #topic Open discussion
19:59:04 Haha, 1 min
19:59:09 going to point my stuff into -infra
19:59:10 use wisely
19:59:13 elections are coming up and I can't commit to helping tristanC run them this time around. you don't need to have special powers (fungi pulls data for us), just need to pay attention and be responsive about election things. happy to answer questions about it, but it's coming up soon
19:59:44 election schedule is on the release schedule: https://wiki.openstack.org/wiki/Liberty_Release_Schedule
19:59:46 o/ just saying infra-cloud is ho-humming along, still fighting hardware issues but getting closer at least. :)
20:00:07 SpamapS: woot!
20:00:09 thanks all!
20:00:11 #endmeeting