19:03:51 #startmeeting infra
19:03:51 Meeting started Tue Oct 14 19:03:51 2014 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:55 The meeting name has been set to 'infra'
19:03:58 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:04:00 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-10-07-19.00.html
19:04:20 Oh! Ok, yeah, i do have something there, my bad.
19:04:27 #topic Actions from last meeting
19:04:38 AJaeger_ draft documentation publishing job for devstack
19:04:53 https://review.openstack.org/126716 - waiting for a patch to merge into devstack first
19:04:55 #link https://review.openstack.org/#/c/126716/
19:05:07 oh, excellent. i haven't reviewed that yet ;)
19:05:07 patch by mordred is approved, should be in later today...
19:05:08 AJaeger_: they were approved this morning
19:05:16 anteaya: not yet merged
19:05:24 o/
19:05:29 o/
19:05:32 #link https://review.openstack.org/#/c/126714/
19:05:38 7 and 8 in the gate
19:05:38 #link https://review.openstack.org/#/c/126720/
19:05:53 o/
19:05:54 so cool
19:05:59 * AJaeger_ will remove the WIP once the devstack patches are merged
19:06:13 #topic Priority Efforts
19:06:14 feel free to remove already
19:06:22 Swift logs
19:06:27 Sorry I have nothing to report (been away for a week)
19:06:30 #link https://etherpad.openstack.org/p/swift_logs_next_steps
19:06:39 jhesketh: me too so i wouldn't have noticed ;)
19:06:48 I have one little thing
19:06:52 i gather we are running more jobs with swift uploads now
19:06:59 we definitely are
19:07:06 The infra jobs are
19:07:08 and we should be experimenting with fetching them and checking on performance, right?
19:07:18 I got the updated upload script onto all the slaves so now we can clean up where we were fetching the old console log without timestamps
19:07:24 jhesketh: ^ is there a change for that yet?
19:07:47 jeblair: yup. I did some initial `time wget $URL` tests and it's not so great :/
19:07:51 Yep but I haven't done anything towards comparing performance
19:08:12 it's at least an order of magnitude slower. Goes from 200ms ish time frame to 2000ms timeframe in the best case
19:08:18 clarkb: not yet, I'll do that today
19:08:30 I have seen requests take over 20 seconds too
19:08:38 Yeah first loads seem terrible
19:08:48 (I didn't do anything scientifically, just wanted to get an initial feel for it)
19:09:10 Not sure if it's reauthing or something else
19:09:11 are we still avoiding stream parsing because it's hard?
19:09:37 istr a conversation about how we needed to download the whole file before os-log-analyze started parsing it
19:09:53 jeblair: what do you mean by stream parsing?
19:10:11 i mean having os-loganalyse act in a streaming manner
19:10:26 Oh right, we are doing that (ie fetching and serving in chunks)
19:10:35 ok
19:10:55 I meant to ask about chunk size though but we can do that after the meeting
19:11:19 64k I /think/
19:11:21 jhesketh: it shouldn't have to auth at all, right?
19:11:32 os-loganalyze is anonymous read access, yeah?
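[editor's note: the ad-hoc `time wget $URL` comparison above can be scripted so the cold-fetch vs. warm-fetch numbers (the ~200ms vs ~2000ms observation) are collected consistently; this is a hypothetical helper, not part of os-loganalyze or the upload script]

```python
import time


def time_fetches(fetch, url, count=5):
    """Time repeated fetches of one URL; returns durations in seconds.

    `fetch` is any callable taking a URL (e.g. a wrapper around
    urllib or requests.get). The first sample approximates a cold
    fetch; the remaining samples approximate warm fetches.
    """
    samples = []
    for _ in range(count):
        start = time.monotonic()
        fetch(url)
        samples.append(time.monotonic() - start)
    return samples


def summarize(samples):
    """Split the cold (first) timing from the warm (rest) average,
    mirroring the 'first loads seem terrible' comparison above."""
    return {
        "cold": samples[0],
        "warm_avg": sum(samples[1:]) / max(len(samples) - 1, 1),
    }
```

Running this against the same log served from the old static files and from swift would show whether the slowdown is per-request (re-auth) or only on first load.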
19:11:53 jeblair: it is not anonymous, it must auth due to the way rax does public anonymous access (CDN required)
19:12:01 jeblair: for downloading we auth with a ro account
19:12:02 jeblair: so we could set up CDN and do it properly anonymously
19:12:37 * mordred cries
19:12:46 Yeah we aren't using any cdn features so I'm not sure what difference they'll make
19:13:17 okay, so it's probably worth looking into whether we are re-authing, the chunk size, and whether using anonymous cdn is faster
19:13:20 well, "your" cdn is the only way to get anonymous read access without a separate authenticated proxy
19:13:34 (not that i blame you for all rackspace design decisions)
19:13:57 Heh
19:14:08 but it's definitely useful feedback we should be providing to rax
19:14:09 jhesketh: that's a lovely cdn you have!
19:14:40 "we don't need a cdn, we just want to publish some files"
19:14:42 can I ask a stupid question?
19:14:44 in this case, i don't think we need to worry about any dns stuff, since it's backend, we can just use the really-long-url supplied
19:14:57 jeblair: good point
19:15:15 oh, yeah if there's a way we can simply hyperlink the hash urls, then done
19:15:20 is it possible to re-do os-log-analyze as javascript, embed it in the log file before we upload to swift, then serve directly to user from cdn?
19:15:38 mordred: yes, but that breaks the use case for non-browser users
19:15:44 ah. k
19:15:46 nevermind then
19:16:04 unless we have separate urls for mangled and non-mangled copies of files
19:16:06 mordred: the main objection from sdague to pre-processing is he wants to be able to have old log files receive new highlighting treatment
19:16:18 (and doing the embed at upload is == pre-processing)
19:16:38 we could embed a link to the os-loganalyze javascript file
19:16:43 ya
19:16:46 and a function invocation
19:16:49 I think there were also concerns about JavaScript/browser performance
19:16:53 also os-loganalyze is slowing down though. i think it's worth revisiting that assumption at some point
19:17:07 ++
19:17:28 * mordred just brainstorming options in case all of our swift fetch/process/serve stories are all 2s
19:17:29 jhesketh: i think he's given up on firefox working regardless, so as long as it works well in chrom* we're probably no worse off
19:17:45 :-( Firefox
19:17:53 huh, I use it with firefox and it's fine
19:17:54 firefox working -> firefox being remotely performant loading large files of any sort
19:18:01 clarkb: with really long nova log files?
19:18:09 jeblair: last I looked at them yes
19:18:15 it just requires a couple seconds of patience
19:18:15 (i regret saying 'working', that was very misleading)
19:18:31 clarkb: oh yeah, i don't think people have patience anymore
19:18:41 how quaint
19:18:42 firefox teaches patience
19:18:51 * mordred has slow internet
19:18:56 EVERYTHING is slow all the time
19:19:18 are you on a string between two cans?
19:19:24 meh, i use a variant of ff and have had no issues looking at really long nova logs in os-la... then again i have patience by the bucketload
19:19:26 anteaya: it feels like it
19:19:32 fungi: ++
19:19:35 but I also open the files directly with vim
19:19:36 fungi: you do
19:19:38 anyway, we have a few more things to look at before we revisit basic assumptions
19:19:44 :o http://foo works great :)
19:19:55 heh
19:19:55 Config repo split
19:20:04 anteaya: has reviews up!
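[editor's note: the "fetching and serving in chunks" approach mentioned above can be sketched as a generator that reads from a swift-backed file object in 64k pieces; the chunk size comes from the "64k I /think/" remark, and the function itself is illustrative, not os-loganalyze's actual code]

```python
import io

CHUNK_SIZE = 64 * 1024  # the 64k chunk size discussed above


def iter_chunks(fileobj, chunk_size=CHUNK_SIZE):
    """Yield fixed-size chunks from a file-like object so the
    response can stream instead of buffering the whole log in
    memory before serving it to the client."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk
```

A WSGI app can return such an iterator directly as the response body, which is what lets the parser act "in a streaming manner" rather than downloading the whole file first.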
19:20:14 #link https://review.openstack.org/#/q/topic:config-rename,n,z
19:20:16 i vote we rename at 20:00 on friday
19:20:16 I do
19:20:21 last meeting "we" said we'd do this last friday but instead we decided to release openstack or something
19:20:24 and some even pass jenkins atm
19:20:36 20:00fri wfm
19:20:37 we gave it a miss so we wouldn't risk upsetting ttx's rc activities last weekend
19:20:47 2000 friday sounds good here as well
19:20:55 I am willing to bump stuff for ttx
19:20:55 but juno is out on thursday, so friday should be all clear
19:21:04 and pleia2 already has an announcement drafted
19:21:14 so review, I will keep them wip or -2 so they don't get merged
19:21:29 and then I will probably need to rebase them all just before we get going
19:21:51 anteaya: i've already -2'd the project-config review so the others can be un-wip'd
19:21:59 okay
19:22:21 -2 the parent on the config repo too
19:22:27 the first project-config review in line i mean. the others can't merge anyway if it doesn't. (though the top openstack-infra/config change needs similar)
19:22:28 awesome. we should probably retroactively put anteaya as the primary assignee of the spec when we move it to 'implemented'. :)
19:22:31 fungi: I'll dig that one up after the meeting
19:22:33 yeah, will do
19:22:39 fungi: thanks
19:22:53 jeblair: aw, thanks :D
19:23:00 if anyone wants to make other edits (jeblair?), my draft is here: https://etherpad.openstack.org/p/system-config-announce
19:23:09 #link https://etherpad.openstack.org/p/system-config-announce
19:23:37 #link https://review.openstack.org/#/q/status:open+topic:config-rename,n,z
19:23:41 etherpad roulette (what colour am I now?)
19:23:45 pleia2: looks good
19:23:50 oh, anteaya already linked the topic
19:23:51 pleia2: ++
19:24:01 fungi: thanks though
19:24:14 Nodepool DIB
19:24:19 ohai
19:24:22 clarkb: want to summarize?
19:24:50 * mordred hands clarkb wet cats to throw
19:25:18 * jeblair imagines mordred's "pool" of cats
19:25:23 wet cats are what makes the summary
19:25:29 sure. Basically held up by a couple dib things we need to get sorted out in the near future. First need to upgrade nodepool.o.o to trusty in order to build centos7 images. But need to wait for a DIB bugfix that will let us set TMPDIR so that we can use rax performance nodes instead of standard nodes
19:25:30 jeblair: catpool?
19:25:50 I also have a dib change up that will spit out multiple copies of a single image in different formats. This is needed to make images for hpcloud and rax
19:26:03 long term, there is a spec jeblair wrote to make image builds farmed out to gearman workers
19:26:14 well, unless we want nodepool to convert immediately prior to upload
19:26:16 #link https://review.openstack.org/#/c/127673/
19:26:33 i don't consider that a priority spec at this point
19:26:34 so that in the future instead of needing to upgrade the main nodepool service we can just spin up gearman worker nodes with the proper software and have them build the images for us
19:26:42 ++
19:26:59 but i do think it will help alleviate some classes of problems we have been seeing (and are likely to see repeats of)
19:27:05 which also works around platform-specific shortcomings
19:27:06 * mordred also wants to write a spec for glance about glance-side image format conversion
19:27:07 I intend of building a new nodepool node after release and as soon as the dib bug around TMPDIR is fixed
19:27:19 s/intend of/intend to/
19:27:30 sounds like a plan
19:27:33 mordred: ++ i agree that's a feature glance should have
19:27:37 mostly I think replacing nodepool has potential to disrupt release by starving nodes if two nodepools fight for quota so I want to wait for that to be done
19:28:12 i think if the new nodepool is spun up in parallel during a quiet-ish period, should be non-impacting (as long as we're running under quota for a sustained period)
19:28:17 there will likely be a short time period where nodepool is off and alien nodes will need to be deleted
19:28:21 fungi: yup
19:28:46 Docs publishing is waiting on more feedback from swift logs
19:28:50 Jobs on trusty
19:29:08 #link https://etherpad.openstack.org/p/py34-transition
19:29:35 still in progress, some movement on blocking bugs. i'm currently (as we speak) running another pass to see what's still broken, or if new broken has emerged
19:29:43 fungi: we should test the proposed python3.4 package(s)
19:29:54 i'll update that etherpad once the current pass finishes
19:30:04 upstream ubuntu has a PPA with the SRU backported python3.4 fix
19:30:07 yes, that's on my to do list once i see what's still showing broken
19:30:12 awesome
19:30:15 #info upstream ubuntu has a PPA with the SRU backported python3.4 fix
19:30:35 right, now it just needs to actually end up in trusty-backports
19:30:59 hopefully if we follow up to the bug with positive confirmation that we don't see the issue with the proposed backport package, that should help
19:31:04 ++
19:31:15 anyway, that's the recap
19:31:18 #topic Kilo summit topic brainstorming pad (fungi)
19:31:26 #link https://etherpad.openstack.org/p/kilo-infrastructure-summit-topics
19:31:51 i've mostly just been prodding people to put random ideas on that etherpad while you were out. but it's probably about time now to start deciding what we're going to talk about in paris
19:31:53 work is ongoing in that etherpad as we speak! :)
19:32:43 jeblair: we cleared up the thing I added. Anything else you wanted from me on that?
19:32:53 jeblair: your sentence under mine makes sense to me
19:32:59 yep. we can either take time in today's meeting to try and decide stuff, or just work on that out of band over the coming week
19:33:26 clarkb: no, i think that's something we should talk about in some form
19:33:45 i just added functional testing in there for completeness, but honestly at this point, i think it needs to be a tc cross-project session
19:34:18 pleia2: do you think that we are at a point with the translation tools for a session to be effective?
19:34:39 for the third-party CI session, I'll remove it, I added it here before I knew anteaya was working on getting it added another way
19:34:43 jeblair: yes, I've been chatting with Daisy over the past couple of days and she would really like to have one
19:35:05 jeblair: if we can't, we'll find a table somewhere and pull in some key people, we need to meet somehow
19:35:07 pleia2: i ask because we have talked about replacing it at the last two summits -- i don't want to have another session where the outcome is "evaluate options" :)
19:35:28 pleia2: agreed, we need to sit together.
19:35:43 pleia2: okay, if you think it's ready for real outcome, i'm game. :)
19:35:53 jeblair: yeah, it's going to be either pootle or zanata, now that I understand how we use transifex (thanks AJaeger_!) I'm in a much better spot
19:36:13 also, the i18n team has had demos of both
19:36:20 I put infra-manual on the etherpad just because activity there is low, doesn't have to be a summit item as long as we can move past the blockages
19:36:45 yeah, I'm in the midst of guiding them through some additional pootle workflows, but at this point I think they'd be happy with either one, we need to look at it from the infra side
19:36:58 perhaps we can agree that if we have that session, we either collectively decide, or if we fail, mordred or i flip a coin and that's binding. :)
19:37:00 I'll do whatever prep is required to help the team make that decision, so please let me know
19:37:00 indeed, infra-manual needs some review love.
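[editor's note: the dib change discussed earlier that will "spit out multiple copies of a single image in different formats" boils down to converting one raw build into per-cloud formats; this sketch only assembles hypothetical `qemu-img convert` command lines rather than running them, and the format choices (qcow2, vhd) are assumptions based on the hpcloud/rax discussion, not the actual change]

```python
def convert_commands(raw_image, formats=("qcow2", "vhd")):
    """Build qemu-img convert argv lists, one per requested output
    format, from a single raw image build (illustrative sketch of
    the multi-format idea, not dib's real implementation)."""
    base = raw_image.rsplit(".", 1)[0]
    return [
        ["qemu-img", "convert", "-O", fmt, raw_image, "%s.%s" % (base, fmt)]
        for fmt in formats
    ]
```

Converting once per format from a single build is what avoids re-running the whole image build per cloud; the alternative floated above is having nodepool convert immediately prior to upload.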
19:37:06 the two primary blockages being, lack of style guide and not a lot of core reviews
19:37:18 or you can flip a mordred
19:37:19 jeblair: I want to flip the coin
19:37:30 * anteaya wants to see a mordred flip
19:37:44 maybe I should create an etherpad now to gather my own thoughts pros/cons
19:37:45 if I do a standing backflip, can I just decide all of the things by fiat?
19:37:58 jhesketh SergeyLukjanov thanks for the reviews in infra-manual
19:37:59 can action me to do that :)
19:37:59 okay, so let's add more stuff to that etherpad and we'll revisit next week
19:38:06 mordred: but one flip per decision is required
19:38:14 clarkb: eww. too much exercise
19:38:26 pleia2: i think you can action yourself?
19:38:42 #action pleia2 to create etherpad with infra-facing pros/cons on zanata vs pootle
19:38:43 jhesketh: how about follow up session on jenkins deprecation?
19:38:51 I don't think non-chairs can
19:38:52 ooooh
19:39:10 * mordred supports jenkins deprecation
19:39:25 #action pleia2 to create etherpad with infra-facing pros/cons on zanata vs. pootle
19:39:33 pleia2: we'll find out
19:39:34 zaro: you are adding that to the etherpad, right?
19:39:41 remember we have four 40-minute discussion sessions, but we also have a room shared with rm/qa all friday to get stuff done
19:40:12 we get ttx
19:40:16 fungi: is that a room where we can have separate discussions - or only together with rm/qa?
19:40:24 so may be best to frame options between "can come to a decision in 40 minutes" vs "should collaborate in teams" at this phase
19:40:31 AJaeger_: how sharp are your elbows?
19:40:36 anteaya: it's on.
19:40:43 fungi: good point
19:40:44 fungi: ++ also whether or not we want to get stuff on a schedule in hopes that others may attend
19:40:46 AJaeger_: i'm assuming it's a pit they throw us all in, with assorted weapons, and we fight our way out
19:40:52 AJaeger_: not sure how the room layout is, last time it was all one room, and different tables
19:41:05 zaro: thanks
19:41:11 AJaeger_: i think it's together, so it's good to have something semi-formal, like "we're going to talk about this for 20 mins" and have a list of topics
19:41:13 on Friday my elbows won't be that sharp anymore ;)
19:41:26 ttx: can you describe the layout for that briefly (if you're around)?
19:41:33 #topic Kilo cycle Infra liaisons... should we have them? (fungi)
19:41:34 AJaeger_: :D
19:41:40 fungi: I am
19:42:01 #topic Kilo summit topic brainstorming pad (fungi)
19:42:08 fungi: layout for meetup portion ?
19:42:11 yep
19:42:13 ttx: yeah
19:42:48 fungi: I don't have the final plans yet, but we should have a set of tables, chairs, and whiteboards in a room
19:42:59 everyone in one room?
19:43:04 which we can probably hack into whatever layout works for us
19:43:09 anteaya: yes
19:43:15 everyone as in infra/QA/rm
19:43:21 ah
19:43:30 we have our own room
19:43:39 so we'll want to discuss topics that involse (or at least are not boring to) all of infra/qa/rm
19:43:39 better than I was imagining
19:43:44 involve
19:43:55 like the infra/qa meetup, but with ttx there :)
19:44:00 ttx what do you want to talk about on the friday?
19:44:09 * fungi wants to fight with the lirpa
19:44:26 jeblair: you read my mind
19:44:27 or should we have an etherpad to collect ideas?
19:44:38 fungi: what's a lirpa?
19:44:44 anteaya: hmm, maybe we can abuse the infra etherpad
19:44:45 fungi: that's one of the weirder things i've seen you say. and that's saying something.
19:44:52 ttx abuse away
19:44:59 mordred: star trek geekdom. weapon from the "amok time" episode
19:45:01 anteaya: I expect most discussions to be infra-related
19:45:07 oh!
19:45:19 I thought you were describing a group of people against whom you wanted to wage combat
19:45:22 (referring to getting thrown in a pit with assorted weapons)
19:45:23 ttx: I started a new heading
19:45:51 fungi: to fight a rancor?
19:45:57 oh wait wrong universe
19:46:01 rancor++
19:46:05 we have some folks with agenda items wanting to discuss them
19:46:13 clarkb: wfm
19:46:31 let's skip liaisons and move on
19:46:44 i'm also going to skip the devstack topic as it was covered
19:46:46 yep, that was lower-priority
19:46:48 #topic Keystone testing and Defcore integration (hogepodge)
19:47:04 also morganfainberg ^
19:47:04 we should test keystone
19:47:13 ooh, great idea
19:47:15 hogepodge: what's up? :)
19:47:23 mordred: approved!
19:47:38 * clarkb does a tl;dr as he remembers it. hogepodge can correct if it is wrong
19:47:44 * morganfainberg is here
19:47:45 mordred, ++
19:48:16 defcore is relying on tempest for integration testing. However it sounds like the bulk of the testing for keystone would be functional and would be hit transitively by defcore
19:48:32 honestly, i think that's defcore's problem
19:48:34 so wondering how to make keystone tests in tempest/defcore tooling that don't make people cranky.
19:48:56 i think it's a design flaw in the defcore process that i suspect is well-known at this point
19:49:10 it does seem more like a qa/tempest design discussion
19:49:22 * mordred not sure this is an infra topic - although broadly he DOES think that there should be some direct keystone tests in tempest
19:49:32 so i would say that designing a test strategy around defcore is the tail wagging the dog :)
19:49:56 mordred: maybe? or maybe it should just be keystone functional testing
19:50:03 jeblair: I think it is more let's design keystone tests around testing keystone then figure out how to make that work for everyone
19:50:06 morganfainberg: ^
19:50:07 sure
19:50:07 jeblair, and that was my initial thought
19:50:17 agreed. i think tempest not having direct tests of keystone quite possibly reflects end user experience. defcore/refstack would likely do well to stick to testing things end users can test
19:50:30 service catalog
19:50:35 mordred: ya
19:50:44 right, that's the main bit i interact with in keystone as a user
19:50:52 as a person currently being screwed by bad service catalog choices, I'd like that to be a thing
19:50:52 and that actually is an integration point
19:50:57 yah
19:51:05 (since a service catalog catalogues other services :)
19:51:09 so token issue / validate / catalog
19:51:09 ++
19:51:17 but anyway, probably not something infra is in an immediate position to do things about
19:51:21 seems like those are the key integration points a user would see
19:51:28 fungi: ya
19:51:30 BUT ... not really an infra topic, except that we're huge openstack users and want it all to work
19:51:37 even if the first two are a side-effect of the rest of everything
19:51:43 I tried pointing morganfainberg and hogepodge at mtreinish and sdad
19:51:53 sdad, ha
19:51:53 sdad should keep the new nick
19:52:25 mordred, ++
19:52:27 I think it is great
19:52:39 morganfainberg: i'm pretty sure we need to talk about functional testing approaches in kilo at the summit, probably in a cross project session. so i guess keep an eye out for that
19:52:46 jeblair, sold.
19:52:57 morganfainberg: anything else you need from us?
19:53:08 jeblair, nope
19:53:12 woot
19:53:16 not sure if hogepodge needs anything else.
19:53:16 #topic Drop gate-{name}-python26 from python-jobs template and specify it explicitly (krotscheck, fungi)
19:53:20 Brief summary: Storyboard’s tests broke on py26. I asked for help on channel, mordred and fungi went: Why are we doing py26 anyway? The discussion expanded to: Why are we doing py26 on anything? and “Can we finally get rid of py26?”. fungi proposed solution -> pull gate-{name}-python26 build from python-jobs template, add it to all projects manually so they can deprecate it as they see fit, but do it after thursday. I volunteered to put it on the agenda. Discuss.
19:53:42 yay dropping py26 for master!
19:53:48 kill py26 for storyboard ++
19:54:01 py26 is already dead on storyboard.
19:54:03 krotscheck: I like your summary and direction
19:54:12 anteaya: I also have a newsletter!
19:54:21 right, we can keep the py26 jobs on projects with stable/juno branches as explicit additions next week, i think
19:54:30 kill py26 for everything ++
19:54:31 krotscheck: I'm sure it is wonderful
19:54:34 fungi, krotscheck: that all sounds reasonable to me
19:54:36 fungi: well we need it for those without too
19:54:43 fungi: aren't we keeping it on all the clients and so on too?
19:54:43 clarkb: and those which are deps
19:54:45 yes
19:55:10 does dropping 2.6 mean it needs to work on py3?
19:55:14 zaro: no
19:55:15 so some (most?) oslo libs, clients, et cetera
19:55:23 Personally, I feel like this is pretty straightforward. Would an up/down vote be appropriate?
19:55:28 zaro: though it should make it a little easier
19:55:39 zaro: but the tough py3k problems are independent of py26
19:55:40 krotscheck: I see only upvotes, I see no downvotes
19:55:41 krotscheck: no, i think we have rough consensus even without discussing in the meeting
19:55:47 Woot
19:56:18 #topic DriverLog: new approach for maintainers (irina_pov)
19:56:26 that has since been removed
19:56:29 irina left
19:56:31 wah waaaah
19:56:34 this came out of cinder
19:56:34 i did add a quick topic
19:56:42 at mrmartin's request
19:56:56 anteaya: is it something we should discuss?
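[editor's note: fungi's proposal above — pull gate-{name}-python26 out of the shared python-jobs template and re-add it explicitly on projects that still need it (stable/juno projects, clients, oslo libs) — would look roughly like the following in zuul layout terms; this is an illustrative fragment with an invented project name, not the actual project-config change]

```yaml
# Illustrative sketch only -- not the real openstack-infra/config diff.
# 1) python26 is no longer part of the shared python-jobs template:
project-templates:
  - name: python-jobs
    check:
      - 'gate-{name}-pep8'
      - 'gate-{name}-python27'   # gate-{name}-python26 removed here

# 2) projects that must keep py26 (stable branches, deps) add the
#    job back explicitly, so each can drop it on its own schedule:
projects:
  - name: openstack/example-project
    template:
      - name: python-jobs
    check:
      - gate-example-project-python26
```

The point of the split is that removing py26 then becomes a one-line per-project change instead of a template change that affects everything at once.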
19:56:57 oh, we have ianw's on still too
19:56:58 they don't want to use the wikipages for third-party operators' email addresses
19:57:06 and want to use something else
19:57:10 so this was the proposal
19:57:12 what is the something else?
19:57:14 and she isn't here
19:57:20 something called driver log
19:57:25 ah - so, discussing without her is likely hard?
19:57:26 I was hoping to hear what that meant
19:57:27 oh, have driverlog track those... got it
19:57:40 okay, well, hopefully she'll show up again sometime. :)
19:57:41 I said we would listen to a proposal
19:57:42 http://stackalytics.com/report/driverlog
19:57:44 I think?
19:57:51 but the wikipages are in the ci.openstack.org requirements
19:57:57 which we enforce
19:58:04 mordred: no, there is a cinder patch
19:58:13 mordred: don't think it is related to stackalytics
19:58:15 okie.
19:58:17 I sure hope not
19:58:20 * mordred hides
19:58:23 done
19:58:26 of course we can change them if there is a reasonable proposal
19:58:37 them -> the requirements
19:58:40 #topic Centos7 bring-up (ianw 10/14/14)
19:59:04 ttx: tc is on hiatus, right?
19:59:17 this is blocked a bit on clarkb's trusty-nodepool iirc?
19:59:28 ya
19:59:33 yeah, i have a proposal out to install a later tar
19:59:34 unless we want centos7 the old way on hpcloud
19:59:40 https://review.openstack.org/#/c/127859/
19:59:44 jeblair: yes
19:59:52 clarkb: so that might be an intermediate step
20:00:06 last week we thought the images were close, but it does seem to be blocked now
20:00:23 how urgently does openstack need centos7 images?
20:01:00 jeblair: I think the idea is they would replace f20 images? so before f20 EOLs
20:01:04 which isn't that soon
20:01:26 jeblair: not sure if *need* is the word, but i'd love to have testing in place ASAP
20:01:38 so we're looking at ~2 weeks to complete the next step (which will either work or expose the next problem)
20:01:53 well I have tested a centos7 build locally with dib on trusty and it seemed happy
20:01:53 people are using it for devstack, but it's not tested properly
20:02:22 ianw: would waiting that long (with the associated risk we hit another block) try your patience excessively? :)
20:03:07 jeblair: we can wait to build images, but in the mean time i'd like to just use the hpcloud images, that should be straight-forward and allow a (non-voting) check job for devstack
20:03:32 that said, i haven't tried them as yet, maybe there are issues. but i can work on that basically immediately
20:03:50 i'm very hesitant to have images on only one provider
20:04:03 jeblair: we already have it working on rax
20:04:07 oh ok :)
20:04:08 with their image
20:05:01 if both providers have the image, I don't see anything wrong with that as a short-term
20:05:03 personally
20:05:05 ianw: so hopefully not too much work to add it to hpcloud?
20:05:05 that's assuming there's no love for the idea of building a bespoke tar on precise nodes
20:05:16 ianw: I really don't like the idea of the random tar install by dib
20:05:25 ianw: I would uninstall dib if it did that to me
20:05:32 jeblair: i'll work on it and see, send some changes out later
20:05:44 we are over time
20:05:49 clarkb: it doesn't overwrite though, and it does only look for precise, and only when building centos7
20:05:56 once a year we get to finish a meeting
20:06:04 jesusaurus: tc is not meeting this week
20:06:04 jesusaurus: ya that was what the question to ttx about tc being out today was about
20:06:10 ahh
20:06:16 jesusaurus: thanks though
20:06:23 yeah, i think we should use the hpcloud image now, then proceed with dib as we discussed earlier
20:06:26 ianw: still, dib is there to build images, not sideload software into the encapsulating OS
20:06:27 and avoid the tar solution for now
20:07:03 sound good to everyone?
20:07:04 jeblair / clarkb : ok, cool, i'll work on that and see how we go
20:07:09 jeblair: wfm
20:07:15 thanks!
20:07:41 the security question or are we done?
20:07:49 #topic Apache mod_security for openstackid.org... opinions? (mrmartin, fungi)
20:08:00 i'm going to stretch it to 15 past
20:08:03 yay
20:08:40 what is this one about?
20:08:41 if mrmartin is going to tune mod security I guess I don't mind. The "app" surface is relatively small right?
20:08:41 ok, so the basic statement here is that we need to put some security app in front of openstackid.
20:08:55 ahh, yeah, mrmartin just wanted to make sure we got some consensus on that change. i think it can happen in review
20:09:02 mrmartin: i think many of us are wondering why
20:09:22 jeblair: because it is handling sensitive user account data.
20:09:25 but ya I agree. If the auth app isn't secure...
20:09:34 if it's because we think it's not very well written and subject to security issues, i'd rather just not use it
20:09:54 if it's because we think, hey, we should do everything we can, that's hard to argue against
20:10:06 generally i've seen mod_security do a great job of rejecting and logging random malicious queries your app would just reject anyway, and not much else
20:10:41 but that doesn't mean it wouldn't necessarily help in some circumstances
20:10:42 so this is the point, mod_security is an open-source tool that still needs maintenance
20:10:47 fungi: that's only after you've tuned it to stop rejecting valid queries
20:10:54 jeblair: RIGHT ;)
20:11:05 and of course it also requires app testing to set up the proper rules
20:11:19 mrmartin: have you tried using it in a local environment to set up the rules?
20:11:20 we had another option, using cloudflare as a front-end security solution
20:11:47 we've definitely seen cloudflare break access to the www.openstack.org site on multiple occasions
20:11:56 fungi: and no one knew why
20:12:06 fungi: but did it happen because the rules were not set up properly?
20:12:10 so i'm a little disinclined to agree to that, but i also don't see this as a dichotomy
20:12:54 mrmartin: it was because it wasn't tunable in the ways we needed for at least one of those situations, if my memory is correct
20:13:18 i'd like to point out that none of the tools that this thing is being used to regulate access to are using mod_security or cloudflare or anything similar, really.
20:13:19 as in they asked the vendor, and ended up just bypassing cloudflare after they couldn't make it work
20:13:30 I don't think it's useful to argue the merits of cloudflare technically - it's a closed source service
20:14:03 so let's strike it and talk about modsec
20:14:20 yeah, it's worth noting that openstackid has a limited-scope database account for access to the current foundation member database, whereas the code running the www site is written by the same people and has unrestricted access to the same
20:14:32 i don't think any of the current root users have time or inclination to debug modsec issues
20:14:41 jeblair: agreed
20:14:44 so i personally won't be volunteering for its maintenance
20:14:58 however, if mrmartin wants to provide a fully-formed modsec config for it that works
20:15:09 i find that i have little room to argue against that
20:15:29 rules may need to be updated over time as the app changes but that isn't something that requires a rooter
20:15:40 jeblair, fungi: it can also be a point, that you don't suggest to use any front-end security appliances.
20:15:45 i won't stand in the way of that, but i will also be among the first to disable it and troubleshoot later if it ends up blocking legitimate authentication traffic
20:15:45 (other than, to be honest, the fact that it is thought to be necessary increases my suspicion of the app in general and would make me inclined to accept a replacement that is considered both more simple and more secure)
20:15:57 but if somebody maintains the modsec rules could it get a green light?
20:16:07 fungi: i agree with you
20:16:46 is anyone here going to -2 a working modsec rule set?
20:16:49 no
20:16:52 no
20:17:03 no
20:17:05 mrmartin: so it sounds like most of us are indifferent to mod_security on the whole, but also not strongly in favor of it or any similar measure
20:17:20 and before the meeting ends, remember to vote vote vote!
20:17:21 mrmartin: do you have what you need?
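[editor's note: the "fully-formed modsec config ... that works" and the tuning fungi and jeblair describe (stopping it from rejecting valid queries) usually mean a stock rule set plus app-specific exceptions; this is an illustrative Apache fragment with invented paths, rule id, and location, not a proposed openstackid.org configuration]

```apache
# Illustrative only -- not a real openstackid.org config.
<IfModule security2_module>
    SecRuleEngine On
    SecRequestBodyAccess On

    # Start from a stock base rule set...
    Include /etc/modsecurity/base_rules/*.conf

    # ...then carve out per-app exceptions as tuning surfaces false
    # positives. E.g. if a generic injection rule flags the long
    # signed URLs in OpenID requests, disable just that rule for
    # just that path rather than globally:
    <LocationMatch "^/accounts/openid2">
        SecRuleRemoveById 981173
    </LocationMatch>
</IfModule>
```

This exception-list style is why the rules need ongoing maintenance as the app changes, which is the maintenance burden the root users are declining above.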
20:17:30 ok, and what do you think about not making a decision about this topic now?
20:17:41 because I saw so many different opinions now.
20:17:45 with the caveat that if we're that worried the application is riddled with security holes, maybe we should hold off using it rather than plastering over it with an ids
20:18:08 fungi: ok, and if do a pentest on the app?
20:18:34 if / if we
20:18:44 that sounds like a great idea
20:18:46 testing is always a great idea. but if we think it shouldn't be deployed before a battery of security tests are run then let's say that
20:19:23 yeah. we also do have some openid experts in the community. inviting them to review/audit/test would be a good idea.
20:19:29 fungi: ok, it handles auth data. do you know that this code is perfect?
20:19:37 mrmartin: you had also asked about a code audit i think... and i recommended you reach out to the openstack-security mailing list if that's still something you'd like to see done
20:19:49 mrmartin: i generally expect that no code is perfect
20:20:06 okay, we're 5 mins over being 15 mins over, so it's time to end. thanks everyone!
20:20:12 thank you
20:20:13 #endmeeting