19:01:36 #startmeeting infra 19:01:37 Meeting started Tue Dec 1 19:01:36 2015 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot. 19:01:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 19:01:41 The meeting name has been set to 'infra' 19:01:42 o/ 19:01:46 ahoy 19:01:47 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting 19:01:58 #topic Announcements 19:02:14 i'm not aware of any important announcements. anyone have anything pressing here? 19:02:35 nope 19:02:40 log_processor has been split out into its own project 19:02:40 #topic Actions from last meeting 19:02:47 #link http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-24-19.01.html 19:02:51 there were none 19:03:04 o/ 19:03:07 #topic Specs approval 19:03:28 also none for this week, though dhellmann has a release automation one which will likely be on the proposed list next week 19:03:48 #topic Priority Efforts: Nodepool provider status (jeblair) 19:03:57 howdy 19:04:05 o/ 19:04:06 this is our specless/ongoing priority effort 19:04:09 o/ 19:04:13 so we have ovh providing 160 nodes now... 19:04:22 they're still hoping to add more as they are able 19:04:23 o/ 19:04:26 that's an impressive addition 19:04:30 thanks ovh! 19:04:48 from what i can see, runtimes are generally between rax and hpcloud, but occasionally worse than hpcloud 19:04:54 #info OVH nodepool worker count is now up to 160 19:05:03 probably due to our oversubscription 19:05:28 so my questions for the group are: are we happy with that level of performance so far? and do we want to consider them in production? 19:05:40 signs point to yes 19:06:19 other clouds have variance as well so the occasional job being slower isn't abnormal 19:06:21 i haven't heard any complaints about jobs which have run there, at least 19:06:35 +1 yes 19:06:36 the only complaints I hvae seen have been related to disk size iirc 19:06:47 this is 160 nodes between a couple regions in ovh, yeah? 19:06:48 jeblair: can you point us to a graphite/grafana graph showing this data? 19:06:48 clarkb: i think we got a larger disk along the way 19:06:49 as there is no large ephemeral drive for the jobs to rely on 19:06:52 o/ 19:06:53 we're up to 80 now 19:07:09 jeblair: I believe it was the ansible folks assuming that a large ephemeral drive was mountable 19:07:12 yeah, i think we determined that was plenty for grenade at least 19:07:32 nibalizer: not yet, i'm working on asking our providers for permission to make such a thing public 19:07:53 nibalizer: so far that's looking promising 19:07:57 so i hope we'll have that soon 19:08:05 (having said that, the data *are* in graphite) 19:08:13 jeblair: impressive 19:08:25 jeblair: fair 19:08:29 thanks 19:08:54 so sounds like we're agreed ovh is successfully in production in nodepool for us? 19:09:08 cool, i'll let them know... 19:09:12 quick update on the others: 19:09:21 #agreed OVH nodepool implementation is successfully in production 19:09:27 yay! 19:09:32 :) 19:09:56 we just got enough floating ips from bluebox to use the full cpu capacity; we might see runtimes drop there, but we're also expecting to get replacement hardware which is faster so expect them to increase afterwords 19:10:14 again yay! 19:10:27 what's the total number of ip addresses/anticipated capacity for that deployment? 
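For reference, per-provider capacity like this is set in nodepool's configuration. A minimal sketch of what the OVH entries in nodepool.yaml might look like, assuming the 160 nodes are split evenly across two regions; the region names, the even split, and everything omitted (credentials, networks, images) are illustrative rather than taken from the meeting:

    providers:
      - name: ovh-region-1     # hypothetical provider/region name
        max-servers: 80        # assumed even split of the 160 nodes noted above
      - name: ovh-region-2     # hypothetical provider/region name
        max-servers: 80
        # auth details, networks and image definitions omitted from this sketch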
19:10:30 that's 39 nodes; i'm also hoping that will increase, but that's at the "desire" stage rather than "implementation" 19:10:35 got it 19:10:51 fungi: 316 vcpus/ 8 per VM 19:11:07 and i think we're about ready to dip our toes into internap, waiting to be able to confirm my changes to nodepool work there 19:11:15 EOT 19:11:17 awesome 19:11:24 yay 19:11:34 thanks bluebox, ovh, and internap! 19:11:38 #info Nodepool use of Bluebox is progressing 19:12:02 #info Nodepool use of Internap resources is on the way soon 19:12:34 #topic Priority Efforts: Gerrit 2.11 Upgrade (zaro, et al) 19:12:51 * notmorgan is here too. 19:13:08 mostly want to make sure we hammer at this and don't let the reschedule slip any further than we can help 19:13:10 #link https://etherpad.openstack.org/p/test-gerrit-2.11 19:13:15 notmorgan: you said you had an apache workaround for the openid thing? 19:13:23 clarkb: for the double-slash thing 19:13:25 fungi: yes 19:13:34 the redirections are not really fixable in apache 19:13:42 notmorgan: right 19:13:48 notmorgan: can you link it here? 19:13:50 they seem to be missing important data from the query-string 19:13:56 two patches up for the double slash redirect issue are at the bottom of the etherpad zaro linked to 19:13:59 #link https://review.openstack.org/#/c/249714/ 19:14:02 redirect seems pretty minor though. it's only when user is not logged in. 19:14:06 that removes the double slashes 19:14:57 yeah, while it's an annoying regression i'm of the opinion we could go forward and just let the dev community know it's a rough patch while upstream works through a fix 19:14:57 the only option to restore redirect (afaict) is to use proxypass instead of mod_rewrite+mod_proxy 19:15:13 notmorgan: or fix gerrit 19:15:13 and that would hurt a lot of offload we do 19:15:25 clarkb: that was assuming not getting a fix from upstream :) 19:15:55 or rather as an interim workaround while upstream gerrit continues the debate on how they want it fixed 19:15:55 revert #link https://review.openstack.org/248411 would fix both 19:16:35 I think reverting the breaking change in our gerrit makes sense as long as we are confident that the issue (what was it by the way) the breaking change fixes won't affect us 19:17:10 zaro: ++ 19:17:27 i agree it's always good to try to help fix these things, though at the moment the delay seemed (from the upstream bug) more one of deciding what was an acceptable solution rather than the actual writing it 19:17:34 clarkb: agree. a more complete fix as long as it doesn't cause major issues is better. 19:17:42 if we think it's going to get fixed upstream, sure; but otherwise, we'll just have this conversation again in a year when we've forgotten everything. :) 19:17:43 fungi: from what zaro tells me the redirect double // issue is the last work item, if you feel it isn't a blocker perhaps we can move to discussing a time for the upgrade and contine to discuss the fix in the interm 19:17:51 breaking change was supposed to fix dropped tokens. it like it keeps the tokens in other situations but drops it for our situation. 19:17:59 *situation/configuration 19:18:14 I haven't noticed any bad side affects from the revert. 19:19:18 running with backport fixes from newer upstream releases is one thing, running with our guess as to how upstream will fix an outstanding bug is a lot further down the road to running a fork again 19:19:30 so just want to make sure we consider that situation carefully 19:19:41 I don't think we expect upstream to revert do we? 
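For context on the double-slash workaround mentioned above (https://review.openstack.org/#/c/249714/): a generic mod_rewrite recipe for collapsing duplicated slashes looks roughly like the sketch below. It illustrates the general technique only and is not claimed to match the actual change under review:

    # Redirect any request whose original URL contains "//" in the path to
    # the same URL with the doubled slash collapsed; the external redirect
    # repeats until no duplicate slashes remain.
    RewriteEngine On
    RewriteCond %{THE_REQUEST} \s[^?\s]*//
    RewriteRule ^(.*)//(.*)$ $1/$2 [R=302,L,NE]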
19:19:56 clarkb: very unlikely 19:20:13 clarkb: i wouldn't expect it, i would expect a further-down-the-road fix that takes a stab at addressing this new case 19:20:14 they have -2 the revert patch have they not? 19:20:45 -1 from one of the cores (hugo) i think. 19:21:16 * anteaya tries to find 19:21:34 attempted revert at bottom of the etherpad 19:21:44 basically, i really don't want to ask notmorgan to spend another holiday doing this again next year, so if we have a workaround that isn't the revert, i have a slight preference for that; unless we're really sure it's going to get fixed upstream for realz by the next time we upgrade. 19:22:33 jeblair: i appreciate the sentiment :) 19:22:45 i agree with jeblair, notmorgan workaround would be preferable atm. 19:22:46 i too am inclined to take notmorgan's partial workaround and just make it clear in the upgrade announcement that redirects on login aren't quite how they're supposed to be 19:22:59 fungi: well it's worse than that 19:23:02 fungi: they don't work at all 19:23:14 you will end up back at / 19:23:21 regardless of where you started 19:23:25 clarkb: correct. 19:23:28 that's :( 19:23:37 which to me as a user of the web interface that logs in a bunch because gerrit likes to log me out will make me very unhappy 19:23:45 i have an approach that could fix it, but it requires layering in another proxy point. 19:23:50 which is why my preference is the revert 19:23:52 #link of zaro's revert patch that is -1'd by hugo (the author of the breaking patch) https://gerrit-review.googlesource.com/#/c/72720/ 19:23:52 because it's the only way to address a QSA 19:24:05 in mod_rewrite post proxy 19:24:08 it's not pretty 19:24:19 (or using something that can act on L7 like HAProxy) 19:24:31 but i could add the correct token back in that way 19:24:40 i would *prefer* to not go down that path 19:25:43 anteaya: hugo was not the author 19:26:29 zaro: wasn't this the patch that created the problem? https://gerrit-review.googlesource.com/#/c/57800 19:26:39 anteaya: that looks like the right number 19:27:05 yes. owner is hugo, author is simon 19:27:07 sorry don't mean to create noise 19:27:12 zaro: ah sorry 19:27:36 so sounds like we should maybe push upstream a bit more on this and maybe offer our own fix or strategy for one that isn't a revert? 19:27:44 i didn't consider that gerrit has a tendency to log out some users often. it tends to leave me logged in pretty consistently but my case may not be representative 19:27:57 fungi: I get logged out probably 20 times a day 19:28:00 ouch 19:28:09 +1 clarkb 19:28:14 fungi: i am regularly logged out 19:28:21 well no redirect will be a pain for me as I often have several patches open before I am logged in 19:28:33 but once logged in I'm logged in for the day 19:28:35 if you want me to spin a patch that will re-add the token back in, i can do that on top of my current one 19:28:40 just to show how it would work 19:28:47 clarkb: ouch how? 19:28:52 but i really don't think you'll like it 19:29:30 there is a 3rd option 19:30:02 nibalizer: when you switch between tabs it decides the new tab needs to log in and the old tab is logged out 19:30:11 we can layer in L7 routing and handle the offload there instead of direct in mod_rewrite (aka haproxy or another proxy tier like i have been describing) then proxypass to gerrit directly 19:30:44 it seems proxypass handles the querystring correctly fwiw.
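To make the trade-off being discussed concrete: the difference is roughly between proxying via a rewrite rule (where query-string handling is governed by flags like QSA and is hard to adjust after the proxy step) and a plain ProxyPass, which forwards the path and query string as-is. A sketch only; the backend address and port are assumed, not taken from the real vhost:

    # Rewrite-based proxying (roughly the style under discussion):
    RewriteEngine On
    RewriteRule ^/(.*)$ http://127.0.0.1:8081/$1 [P,QSA,NE]

    # ProxyPass alternative, which passes the request line through untouched:
    # ProxyPass / http://127.0.0.1:8081/ nocanon
    # ProxyPassReverse / http://127.0.0.1:8081/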
19:30:48 clarkb: ahh, yeah i work around that by backing up to the original page again and doing a refresh before retrying to click something on it 19:31:08 that seems to then notice it's logged in anew 19:31:24 rather than logging me out 19:31:28 fungi: interesting. 19:31:45 yes if I have one tab open and log in with that tab then all other tabs have me logged in 19:32:00 if I don't it logs me out all the time 19:32:00 * SotK is also rarely logged out automatically 19:32:15 it tends to only happen if I log in on a different machine 19:32:16 can i ask what's everybody's concern regarding the revert? 19:32:17 clarkb: nibalizer: Yup, learned that the hard way long ago. It stinks 19:32:54 zaro: it's technical debt that makes it harder for us to upgrade. we can accept that, but someone needs to pay it down before the next time we upgrade. 19:32:56 zaro: i think the biggest concern is "what is the real upstream fix going to be down the line? and will this revert make it significantly harder to follow future releases" 19:33:11 zaro: jeblair said it better ;) 19:33:23 zaro: concern being that we don't know (and have reason to expect) that's not how upstream will fix it if ever, so we're carrying a divergence for an indeterminate/indefinite timeframe 19:33:35 fwiw the gerrit folks said they would like to help us but they can't reproduce the issue 19:33:39 do you mean to the gerrit 2.12? or 2.11.x? 19:33:47 in the comments on https://gerrit-review.googlesource.com/#/c/57800 19:33:53 anteaya: I thought it was reproducible using our mod rewrite rules? 19:33:54 if i had to guess it should be fixed in 2.12 19:34:01 clarkb: they can't reproduce 19:34:08 anteaya: reproduction was clarified in the bug 19:34:11 #link https://code.google.com/p/gerrit/issues/detail?id=3365 19:34:17 i was unable to reproduce complete broken-ness locally 19:34:29 according to the comments on https://gerrit-review.googlesource.com/#/c/57800 they can't reproduce, unless that is old information 19:34:45 but i could repro the // issue, it just worked. 19:34:54 fwiw, i'm going to work on it and it seems like hugo is willing to help 19:34:58 at least david commented that he was able to reproduce 19:35:10 (in the bug) 19:36:06 bug date is more recent than comments on 57800 19:36:12 dimtruck is able to repro. i'm working on setting up the same ENV 19:37:02 from what i hear, hugo is on paternity leave for the next few weeks 19:37:27 so anyway, i guess we need to arrive at a decision on how we're addressing the login redirect, or continue to defer the upgrade maintenance 19:38:45 i'm +1 for notmorgan workaround, but sounds like ppl are unhappy with that so +1 for revert as well. 19:39:03 we have carrying a fork with the broken change reverted as option 1, leaving login redirects broken as option 2, or switching to something with mod_proxy as option 3 19:39:03 i'm okay with the revert if zaro's going to continue to work on the real fix 19:39:51 I'm fine with whatever is decided 19:39:58 we didn't really discuss option 3 much, but i feel like adding in haproxy for this is maybe a bit heavyweight 19:40:03 according to hugo's reply on the proposed revert there are other dependent commits which also need reverting? 19:40:49 i think there was only 1 and i don't think it affects us. 19:41:11 ping me if any further apache work is needed, happy to dig in on that front. 19:41:13 is option 3 a no go either?
19:41:21 i was more asking whether we need to rever that too, or if it'll still build sanely 19:41:25 can we afford to lose the mod_rewrite offload? 19:41:30 er, to revert 19:41:51 if we can, then... the proxypass is the lowest impact change. 19:42:21 fungi: still builds no problem, it's on review-dev.o.o now 19:42:32 i guess the /p/ rewrite is the main thing we get there. /fakestore is really only relevant to review-dev and we could probably live without a robots.txt 19:42:52 the /p/ rewrite is heavily used 19:43:29 by zuul? 19:43:31 hm. 19:43:37 and people 19:43:56 * mordred waves - apologizes for blow calendar timezones somehow 19:43:58 we've tried really hard to get people to stop using that local mirror, if memory serves 19:44:12 so maybe i can maintain the /p/ rewrite and still mod_proxy 19:44:20 now that i think about it 19:44:23 fungi: I suppose breaking it may get them to stop for good :) 19:44:27 also now it's unable to serve shallow clones because of git smart http on trusty 19:44:33 notmorgan: oh that would be neat 19:44:44 jeblair: i'll circle up on that here shortly. 19:45:00 it might be less pretty but it might be workable. i think a couple spare blocks will suffice 19:45:20 but let me 2x check, if i can't lets fall back to the revert or punt while zaro works on this. 19:45:34 i'll have an answer on if i can do this by tomorrow-ish fwiw 19:46:15 fungi: i'm not sure what's using it 19:46:25 so it sounds like maybe the consensus is that we'd like to do option 3 (mod_proxy) if it can still support our needs, but that a fallback to option 2 (revert the breaking change and work upstream on a real fix) is still preferable to option 1 (leave it broken and work with upstream on a real fix) 19:46:45 fungi: I can live with that 19:47:11 +1 19:47:14 in that case i'd be inclined to try to pick a maintenance window and assume we'll do 3 if possible otherwise 2 19:47:23 +1 19:47:31 I'm for that too 19:48:00 wfm 19:48:20 to stay away from a christmas rollback I think we should do something prior to the 19th 19:48:21 assuming no other objections, when do people like for the window. another wednesday like the last one (so that we can have fairly immediate confirmation under load rather than waiting two days to find load breaks us)? 19:48:23 ok i'll commit to having an answer/update tomorrow then 19:48:35 so we have windows ASAP for the way forward 19:48:42 I'm available any day between now and the 19th 19:48:46 any day is fine with me 19:49:32 < dec 19 seems to work with the release schedule 19:49:43 also, one thing i want to keep in mind is that we still have the openstack-attic/akanada typo to clean up. i have a feeling that needs to be fixed before our cruft repo cleanup step in the maintenance 19:49:43 we don't really have anything big there until mid-jan 19:49:44 i'm available, except new years weekend. 19:49:46 so december 16th? 19:49:55 I'm around all month 19:50:14 the 16th is two weeks from tomorrow 19:50:22 dec16 wfm 19:50:23 I'm fine with that 19:50:34 zaro: is your proxypass review still up? 19:50:40 16th works for me. what was the start time for the previously scheduled maintenance? we could just shoot for that again 19:50:45 zaro: so i don't need to go re-creating from scratch 19:50:46 mordred: how's dec16 for you? 19:50:51 looking 19:51:04 nibalizer: you want to have another go at the maintenance announcements for this? 
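On the idea above of keeping the /p/ rewrite while moving the rest to mod_proxy: ProxyPass exclusions make that combination possible in principle. A sketch, with the Gerrit backend and the mirror path assumed for illustration:

    # Exclude /p/ from the generic proxy so local rules still serve the git
    # mirror from it, then proxy everything else to Gerrit.
    ProxyPass /p/ !
    ProxyPass / http://127.0.0.1:8081/ nocanon
    ProxyPassReverse / http://127.0.0.1:8081/

    # /p/ continues to be served locally, e.g. (illustrative path only):
    Alias /p/ /var/lib/git/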
19:51:10 mordred: (hopefully your rollback is self-sufficient at this point, but in case we need some of your brain, would be nice to have) 19:51:16 fungi: i do 19:51:16 jeblair: I may be driving/roadtrip - but I may also be available - or could make myself so 19:51:24 what time are we gonna do it 19:51:28 jeblair: oh. wait. you may want my BRAIN? 19:51:32 #action nibalizer send announcement about rescheduled gerrit upgrade maintenance 19:51:33 mordred: i can volunteer to call you and tell you to pull over 19:51:33 notmorgan: #link https://review.openstack.org/#/c/243879/ 19:51:38 zaro: thnx 19:51:44 1700 utc: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078113.html 19:52:05 thanks anteaya 19:52:08 welcome 19:52:13 nibalizer: so starting at 1700 utc on wednesday december 16th 19:52:18 woot 19:52:23 jeblair: sandy can drive - I can tether 19:52:29 nibalizer: same time as last time 19:52:45 #agreed Gerrit 2.11 upgrade rescheduled to 17:00 UTC Wednesday December 16th 19:52:50 woot 19:52:58 \o/ 19:53:17 i think krotscheck/greghaynes have one topic on the agenda as well, if we can switch to that for the remaining few minutes 19:53:30 tc meeting is also canceled today 19:53:34 I put it on the agenda without checking with greghaynes. 19:53:36 I do? 19:53:38 ha 19:53:40 #topic Mirror efforts (krotscheck, greghaynes) 19:53:40 Oh 19:53:41 heh 19:53:49 Surprise! 19:53:58 * ruagair has maniphest updates too. 19:54:08 greghaynes: okay what's this all about? 19:54:12 ;) 19:54:14 We've got a bunch of mirror things in queue. First is greghaynes's wheel-mirror work. 19:54:18 what's all this then?!? 19:54:19 (the TC meeting isn't happening, so if we go over, we won't break anything) 19:54:23 Hehe 19:54:43 (i quickly replaced the tc meeting with another meeting so will be half-here) 19:54:54 i think i'm +2 on all the wheel mirror patches for whatever that's worth 19:55:00 Is there any coordination that needs to be done to land that other than landing the patches, jobs, and spinning up the instances? 19:55:34 The instances exist, hiera data needs to get added for ssh host keys though 19:55:37 i can help spin up the mirror build instances for that, though more than happy for someone else to volunteer 19:55:43 #link https://review.openstack.org/#/q/status:open+branch:master+topic:feature/wheel-mirror,n,z 19:55:49 oh, we have the instances? even better 19:55:58 I thought you made them ;) 19:56:09 Yay meetings! 19:56:32 er, heh 19:56:51 greghaynes: The one patch I seem to be msising is one that actually starts using our wheel mirrors, is that somewhere? 19:56:51 what are they called? 19:56:59 greghaynes: if i made them, then they should respond to ping via whatever the dns names are 19:57:15 mordred: /.*wheel-mirror-.*\.openstack\.org/ 19:57:23 no. they do not exist 19:57:32 I can help make them 19:57:41 fungi: ya, I might have misinterpreted something we chatted about when we were figuring out host keya 19:57:43 krotscheck: i believe using the wheel mirrors is automagical, or seemed to be last time we tried to do this 19:58:02 * krotscheck likes magic. 19:58:04 some magicsauce in pip that knows to check specific paths for possible wheels? 
19:58:15 dstufft would know 19:58:18 wheelpeople presumably know more than i do about this, yeah 19:58:19 krotscheck: there isn't one AFAIK 19:58:30 * nibalizer has to bounce right at noon, sorry 19:58:32 extra-index-url 19:58:39 Is what you want 19:58:53 In the interest of coordination, can we schedule a date/time to babysit these patches through and make sure the world doesn't explode? I'm available before noon PDT most days. 19:59:00 oh, so we do need an extra-index-url with the platform name encoded? 19:59:10 fungi: yep 19:59:27 Unless someone has a major issue with them, that is. 19:59:35 * krotscheck spends the rest of the day sitting on actual babies. 19:59:46 mmm. sitting 19:59:55 seems like they're pretty much all in shape last i looked, though i and other reviewers could certainly have missed something 19:59:59 * mordred can help with the root portions of this - is also available in mornings 20:00:19 mordred, greghaynes: How does tomorrow morning around 10AM PDT sound? 20:00:32 do we have a volunteer to build the instances who isn't me? otherwise i'll try to do that after the meeting now that i don't have a tc meeting to lurk 20:00:51 * krotscheck is assuming mordred is that volunteer. 20:00:54 and also sounds like we need to update hiera for ssh keys 20:00:56 krotscheck: works for me 20:01:22 fungi: yes. I will do that 20:01:43 krotscheck: tomorrow morning I will be on an aeroplane 20:01:47 krotscheck: can we do friday morning? 20:01:48 thanks mordred. in that case i'll work on the stable maintenance electorate list stuff instead since that's also urgent 20:01:59 yes I want to vote 20:02:00 Friday also works 20:02:06 as it turns out we do have >1 candidate for ptl 20:02:09 greghaynes, mordred: Ditto. 10AM PDT Friday 20:02:28 fungi: also, I need electorate lists for N and O name elections while you're at it 20:02:40 krotscheck: i think you mean PST 20:02:59 * mordred stabs daylight savings time in the face 20:03:02 mordred: get up with me after this and remind me what i gave you last time (or whether you got it from the foundation site admins instead) 20:03:19 Clint: You are correct. 20:03:23 10AM PST Friday 20:03:45 1800 UTC Friday 20:03:59 I'll leave the remaining mirror things on the agenda for next week. 20:04:11 I figure that we'll do a similar coordination thing there. 20:04:28 sounds good 20:04:58 krotscheck, fungi: do we have a doc of what server(s) we need? 20:05:12 * krotscheck defers to greghaynes 20:05:40 mordred: theu 20:05:42 grrr 20:05:54 they're in one of the changes but yes get greghaynes to point you to the list 20:05:59 kk 20:06:03 i don't recall the exact one 20:06:09 we're over time, but if people want to stick around ruagair had some maniphest things to mention since we have the room what with there being no tc meeting this week 20:06:12 mordred: Not a doc, should be findable in the change but I need to mentally re-page in where 20:06:37 * anteaya is willing to stick around to listen to ruagair 20:06:43 EOT for me. 20:06:50 Go go maniphest things 20:06:52 #topic Priority Efforts: maniphest migration (ruagair) 20:06:56 \o/ 20:07:01 Lots of updates. 20:07:28 Phab + OpenID (login.ubuntu.com) works nicely using mod_auth_openid 20:07:37 nice 20:07:59 Only snag is #link https://github.com/bmuller/mod_auth_openid mod_auth_openid is abandonware. 20:08:30 So we'd need to consider whether we want to adopt it. 20:09:02 I'm currently working on the last piece of the migration process: 20:09:30 Scraping OpenIDs from launchpad to insert into Phab.
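To illustrate the extra-index-url approach dstufft points to above: consumers add a platform-specific wheel index alongside the normal package index, either on the command line or in pip.conf. The mirror hostname and path layout below are assumptions for illustration, not the real mirror layout:

    [global]
    index-url = https://pypi.example.org/simple/
    # additional index consulted alongside the main one; the platform name
    # is encoded in the path so only compatible wheels are offered
    extra-index-url = https://wheel-mirror.example.org/ubuntu-14.04-x86_64/

or equivalently on the command line: pip install --extra-index-url https://wheel-mirror.example.org/ubuntu-14.04-x86_64/ <package>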
20:10:06 it's too bad clarkb had to drop out, but sync up with him since i know he's looked at it too 20:10:06 Once I've completed that, I'm happy to open up the instance more broadly than I already have. 20:10:27 on mod_auth_openid being abandonware that is 20:10:28 I think clarkb promised to adopt it for fun, fungi :-) 20:10:37 heh 20:11:18 hi, wanted to show a change for glean, for infra-cloud efforts: https://review.openstack.org/#/c/252037/ 20:11:20 I'm re-writing this #link https://review.openstack.org/#/c/240795/ into python to have a more complete process. 20:11:36 sorry ruagair, i interrupted :( 20:11:48 Some of it will get spun off into ansible and puppet, of course. 20:11:49 yeah, no updates of substance for 18 months, and no real activity for over 2 years 20:11:54 No probs yolanda :-D 20:11:58 wasn't that one of the questions from last week about phabricator use? 20:12:08 There was anteaya. 20:12:17 as in who is currently using it and what is their understanding for doing so? 20:12:33 Currently I have two users, yolanda and GheRivero who are using a stable instance I have up. 20:12:51 ruagair, not much activity for that in the last few weeks 20:12:57 i believe GheRivero was poking a bit more 20:13:09 :-) 20:13:33 sdague is rather keen to use Phab as soon as I think OpenID is integrated in a prod fashion. 20:13:45 Which is not far off. 20:14:38 aside from the lack of activity or upstream maintenance on mod_auth_openid were there any other concerns with it? does it have any missing features/bugs that you found? 20:14:50 Not that I have found. 20:15:07 It worked rather trivially. 20:15:20 if not, then we could just use it and revisit adopting it if it still lacks an upstream once we find something we need to improve with it 20:15:37 That's what my thoughts were. 20:15:58 It's time I put up an etherpad on this I think. 20:16:04 List off the status and issues. 20:16:22 sounds good. anything else on this topic before i give yolanda the floor for infra-cloud needs? 20:16:34 EOT. 20:16:36 o/ 20:16:48 fungi, i have fat fingers, the interruption was not intended... so now, infra-cloud: https://review.openstack.org/#/c/252037/ 20:16:52 i just wanted to make sure we're on the same page about usage -- 20:17:12 that we definitely want people to be able to poke at test instances and stuff 20:17:19 yolanda: jeblair still had a point for the current topic 20:17:28 i'll switch the topic once we're ready 20:17:29 but that we don't want to do ad-hoc hosting projects in phab 20:17:32 ok 20:17:39 I agree jeblair. 20:17:43 cool 20:17:50 right, there was a question last week on that 20:18:13 Yes, I was fighting kangaroos at that time :-/ 20:18:14 something in an earlier status update about people already using maniphest in anger because of not wanting to continue on lp any longer 20:18:42 right, i don't think that was happening but we were jumping to conclusions based on lack of data :) 20:18:45 crocodile wrestling is no longer the national pastime? it's roofights now? 20:18:59 * fungi updates his notes 20:19:17 #topic Priority Efforts: infra-cloud (yolanda) 20:19:24 No, that's not a reality fungi. We have yolanda and GheRivero poking and that's it. sdague *wants* to use it seriously but is not at this point.
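For reference, the mod_auth_openid setup ruagair describes is conceptually along the lines of the sketch below. The directive names, the protected location, and the IdP pinning are recalled/assumed rather than confirmed here, so treat all of them as assumptions to check against the module's documentation:

    <Location /auth/>
        AuthType OpenID
        Require valid-user
        # Assumed directive: send everyone to a single identity provider
        AuthOpenIDSingleIdP https://login.ubuntu.com/
        # Assumed directive: only accept identities from that provider
        AuthOpenIDTrusted ^https://login\.ubuntu\.com/.*$
    </Location>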
20:19:35 #link https://review.openstack.org/252037 20:19:47 ok so we got some races on glean 20:20:02 related to the events where glean was executed 20:20:19 rcarrillocruz created that fix, we have tested that intensively on our environment, and we got rid of the races 20:20:49 i wanted to show that, and ask for reviews 20:21:16 greghaynes, can you add more details to that review? 20:21:27 fungi: fwiw, my declared desire for maniphest is to get the kanban board for visualizing and tracking cross project efforts, which launchpad is not really suitable for. And our current approach is ghetto kanban in etherpad. 20:21:28 okay, so that addresses a bug which is impeding infra-cloud deployment in one of our regions 20:21:44 thanks for the info and the fix rcarrillocruz, yolanda 20:21:48 in other news, we consistently deploy 90 out of 93 machines in East 20:22:07 yolanda: I don't remember all the details - I just put a review asking for them ;) 20:22:27 sdague: fwiw have you looked at the lp->kanban thing ? 20:22:33 sdague: which renders bugs as cards ? 20:22:58 greghaynes, so the issue is that glean was executed when a network interface was detected 20:23:20 in our deployments, it showed a constant race, causing the vlan information not to be created if that interface was not detected 20:23:42 yolanda: ah, right, it wasn't a race - we just don't do the dependency detection 20:23:46 greghaynes: no, it's not about the vlan interfaces. The fix i pushed and got merged already created vlan interfaces attached to physical interfaces 20:23:53 oh? 20:23:57 then I am confused 20:23:58 switching that event to the start-networking event, and configuring all interfaces there at the same time, proved to solve our issue 20:24:03 anyhow, we can chat out of meeting? 20:24:07 sure 20:24:10 i can stay a bit online 20:24:38 sdague: we've also merged a first pass of kanban board stuff in SB now, with patches to make it more usable in review, that you could check out if you like 20:25:01 okay, so seems like we've made good use of our extra meeting time. anything else we need to cover this week before i #endmeeting? 20:25:07 (it's not very discoverable yet though, since it's not entirely merged) 20:25:56 thanks fungi 20:26:02 thanks everybody! 20:26:08 #endmeeting