16:01:11 #startmeeting Solum Team Meeting
16:01:12 Meeting started Tue Oct 21 16:01:11 2014 UTC and is due to finish in 60 minutes. The chair is dimtruck. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:15 The meeting name has been set to 'solum_team_meeting'
16:01:24 #link https://wiki.openstack.org/wiki/Meetings/Solum Our Agenda
16:01:33 #topic Roll Call
16:01:38 Devdatta
16:01:42 Gil Pilz
16:01:44 murali allada
16:01:46 Melissa Kam
16:01:53 Dimitry Ushakov
16:01:54 james li
16:02:35 #topic Announcements
16:02:47 (none on the wiki)
16:02:55 anyone have specific announcements they'd like to make?
16:04:09 if you think of announcements in other parts of the meeting, we can get back to them in the open discussion part
16:04:10 #topic Review Action Items
16:04:28 (none on the wiki)
16:04:45 anyone have any action items they'd like to discuss?
16:04:51 yes
16:05:07 ok gpilz - the floor is yours
16:05:17 i had an unofficial action item to talk to Pierre about https://review.openstack.org/#/c/115685/
16:05:26 ok
16:05:35 how did that go?
16:05:37 i contacted him and he said I could take the helm on this
16:05:54 ok
16:06:04 currently
16:06:19 it looks like Jenkins is complaining.. probably it needs a rebase
16:06:21 i'm going to need a bit of help with review.openstack.org making that happen
16:06:32 gpilz: sure
16:06:42 needs a rebase and, as we discussed, needs to process Accept headers properly
16:07:00 oh! it's the proper HTTP 1.1 way
16:07:04 that you wanted right?
16:07:15 I thought there was a separate bug that I had created for it
16:07:27 do you have a reference to that bug?
16:07:32 i didn't see it
16:08:14 let me check.. in any case it will be implemented only after we get this current patch landed
16:08:33 we can decide whether to integrate that in the current patch or do it as a separate one
16:08:34 okay
16:09:03 is there anything else on this topic?
16:09:10 nope
16:09:14 if not dimtruck you can move to the next topic
16:09:37 #action revisit this topic once the current patch has landed
16:09:46 ok, cool. any other action items?
16:10:01 #topic Blueprint/Task Review
16:10:13 (devkulkarni) Quick status check of conversion of bash scripts to Python (do this only if ravips is around)
16:10:34 currently we have a lot of core functionality in bash scripts
16:10:34 i don't see ravips on
16:10:58 this topic is to convert those to Python so that we can harden that stuff with tests
16:11:13 this is a work item we should get started on. we should email ravi and get an update.
16:11:16 ravips has started on it I believe, so just wanted to see what the status is
16:11:25 cool
16:11:39 muralia: yes, good point. I will take that as an action item
16:11:47 we can take an action item to follow up with him either on #solum or at the next meeting
16:12:00 dimtruck, you can assign that to me
16:12:07 #action devkulkarni to follow up with ravips regarding bash script to python conversion
16:12:18 moving on to: (devkulkarni) Quick status check of WSGI issue that dimtruck was investigating
16:12:24 okay.
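For context on the Accept-header point above: proper HTTP/1.1 content negotiation means choosing the response representation from the client's Accept header instead of always returning one format. A minimal Pecan sketch of the idea follows; the controller name, media types, and payload are illustrative assumptions, not code from review 115685.

    import json

    import pecan
    import yaml


    class PlansController(object):
        # Hypothetical resource controller, for illustration only.
        @pecan.expose()
        def get_all(self):
            data = {'plans': []}  # placeholder payload
            accept = pecan.request.headers.get('Accept', 'application/json')
            if 'application/x-yaml' in accept:
                # Client asked for YAML; serve it and label it as such.
                pecan.response.content_type = 'application/x-yaml'
                return yaml.safe_dump(data)
            # Otherwise fall back to JSON.
            pecan.response.content_type = 'application/json'
            return json.dumps(data)

A production version would parse q-values properly (Pecan's request object is a WebOb request, whose accept attribute can do this) rather than substring-matching as above.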
16:12:24 i have a few updates on this
16:12:39 dimtruck: let me give the problem description first
16:13:14 so the issue is that, with wsgiref.simple_server spinning up the http server for solum, requests can hang on methods not defined in Pecan
16:13:16 so the issue that we are running into is that when a call arrives for a resource which expects a body, but the body is not provided, our server will hang
16:13:26 right
16:13:52 so there are a multitude of solutions to this, mainly replacing the simple_server dependency
16:13:55 this is bad as it becomes a vector for a denial-of-service attack on our server
16:14:00 correct
16:14:28 we can alleviate this by removing the simple_server dependency and utilizing mod_wsgi instead
16:14:45 this was already done in keystone
16:14:45 dimtruck: I read your comment about mod_wsgi
16:14:59 but I don't understand where we would specify it?
16:15:11 so this would be an apache configuration
16:15:11 is it part of devstack setup?
16:15:15 correct
16:15:22 to fix this issue, do we just need to replace the app server?
16:15:28 are we currently configuring anything related to apache on devstack?
16:15:31 instead of just directly spinning up an http server, we have the apache server doing this
16:15:39 keystone runs apache
16:15:54 i have to verify if this is in devstack, however. i haven't done that
16:16:00 so you are saying we should look at keystone's devstack setup
16:16:04 right
16:16:08 okay
16:16:29 do you want to do that investigation and come back with more insights on how to use it?
16:16:34 i'll update the bug and put more step-by-step details in the report for next week
16:16:35 yes sir
16:16:39 are there any ripple effects into the API of doing this?
16:16:43 nope!
16:16:47 good question gpilz
16:17:04 simple_server is only utilized to spin up the http server that hosts the Pecan wsgi app
16:17:04 I would imagine dimtruck's investigation should include that question
16:17:10 another issue related to this is https://bugs.launchpad.net/solum/+bug/1359516
16:17:12 Launchpad bug 1359516 in solum "Needs to handle http header 'X-Forwarded-Proto'" [Undecided,Confirmed]
16:17:21 solum ignores the X-Forwarded-Proto header
16:17:28 #link https://bugs.launchpad.net/solum/+bug/1359516
16:17:42 and never returns https endpoints
16:18:01 james_li: please elaborate
16:18:04 looks like we can definitely kill 2 birds with 1 stone with this effort
16:18:06 thanks james_li
16:18:24 oh, so mod_wsgi will also resolve the bug that james_li pointed out?
16:18:25 yeah that one just jumped into my mind, sorry
16:19:16 hmm, looking into more details here, this might be something different
16:19:20 looks like that is something different
16:19:26 right
16:19:28 i'll research more into that and see
16:19:28 ok
16:19:37 ok cool
16:20:02 so dimtruck, could you please make sure that you take gpilz's question into account as part of your investigation?
16:20:27 #action dimtruck to follow up on bug 1359516 and investigate for any specific issues in replacing simple_server with mod_wsgi
16:20:27 Launchpad bug 1359516 in solum "Needs to handle http header 'X-Forwarded-Proto'" [Undecided,Confirmed] https://launchpad.net/bugs/1359516
16:20:44 thanks dimtruck
16:20:53 (devkulkarni) Solum installation guide
16:21:02 ok, so about this..
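To make the mod_wsgi idea concrete: instead of the service calling wsgiref.simple_server itself, Apache loads a Python module that exposes a module-level callable named application. A minimal sketch, assuming a Pecan config file; the file paths and process settings are assumptions for illustration, not Solum's or keystone's actual deployment files.

    # solum_api.wsgi -- hypothetical entry point for Apache mod_wsgi.
    import pecan

    # mod_wsgi looks for a module-level name "application".
    # The config path below is made up for this example.
    application = pecan.load_app('/etc/solum/api_config.py')

    # Apache would be pointed at this file with the standard mod_wsgi
    # directives, along the lines of:
    #   WSGIDaemonProcess solum processes=2 threads=10
    #   WSGIScriptAlias /solum /var/www/solum/solum_api.wsgi

This is also the answer to gpilz's ripple-effect question in outline form: the Pecan application object is unchanged, and only the process that serves it moves from simple_server to Apache.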
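On bug 1359516: when TLS terminates at a load balancer or proxy, the backend sees plain http, so any URLs it builds come out as http unless it honors X-Forwarded-Proto. A rough sketch of the idea, with a hypothetical helper name; the eventual fix in Solum may look different.

    import pecan


    def effective_host_url():
        # Hypothetical helper: prefer the scheme the client actually used,
        # as reported by the proxy, when building endpoint URLs.
        proto = pecan.request.headers.get('X-Forwarded-Proto')
        url = pecan.request.host_url  # e.g. 'http://api.example.com'
        if proto == 'https' and url.startswith('http:'):
            url = 'https:' + url[len('http:'):]
        return url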
16:21:35 on friday last week, someone reached out to us on the solum channel asking whether we have an installation guide for installing solum on multi-node OpenStack
16:21:56 I have created the linked bug to track that request
16:22:38 I wanted to brainstorm with everyone
16:23:00 what would it require to install solum on such a setup?
16:23:12 we have the vagrant setup
16:23:24 #link https://bugs.launchpad.net/solum/+bug/1382660
16:23:26 Launchpad bug 1382660 in solum "Create installation instructions" [Undecided,New]
16:23:48 how much work would it take for us to go from that devstack setup to a multi-node setup? has anyone tried this before with any other OpenStack services?
16:24:23 I remember recently seeing an email from the docs team where they had a guide on OpenStack installation.
16:24:56 my initial thought is we can start with a multi-node setup for any other OS service and add the required solum services to the mix piece-by-piece
16:25:02 #link http://docs.openstack.org/trunk/config-reference/content/configuring-multiple-compute-nodes.html
16:25:11 this is what i've found in my research
16:25:15 it's for nova
16:25:17 thanks dimtruck for the link
16:25:41 one of the main blockers for us for the multi-node setup is going to be the shelve dependency
16:25:41 i'm not sure, however, what it would take to have the entire devstack scale out like that
16:26:27 we cannot use barbican yet, so probably we should think about adding a solum table instead of using shelve
16:26:33 what do you all think?
16:26:56 agree with shelving shelve :)
16:27:06 +1
16:27:28 muralia, gpilz: thoughts?
16:27:57 that's true, shelve will be a blocker.
16:28:18 for multinode we should just use barbican
16:28:30 and not replicate key storage
16:28:30 that might be our biggest bottleneck. Here's another doc on how to set up devstack on multiple nodes. Not sure how up to date it is though: http://devstack.org/guides/multinode-lab.html
16:28:32 no opinion (not qualified)
16:28:36 muralia: +1 on that
16:29:10 cool, so who wants to volunteer to do this research?
16:29:44 essentially setting up a multi-node devstack and putting together a spec for replacing shelve (do we need a spec)?
16:30:04 devkulkarni: when we switch to barbican later, how about the migration work?
16:30:32 james_li: good point. we should be using alembic to do that
16:30:33 we don't need a spec for replacing shelve. solum already works with barbican. there's nothing new to add.
16:30:59 how about tables within solum itself
16:31:09 agree we don't need a spec
16:32:14 hmm, I'm not sure we want to add tables in Solum to store keys. Why duplicate that work?
16:32:18 apart from shelve, are there any other issues that stand out?
16:32:34 yay for no spec. So I can volunteer to spin up a multi-node devstack with solum and come back to the team with my findings. We can find out then if there are any other issues.
16:32:46 thanks dimtruck
16:32:47 devkulkarni1: not to me at this time
16:33:17 #action dimtruck to report back results of multi-node devstack with solum setup
16:33:35 so then at this time, do we keep shelve until barbican is "ready"?
16:33:46 and just replicate it on every node?
16:34:04 we can try adding barbican to the mix
16:34:11 cool!
16:34:12 dimtruck:
16:34:32 so I would say, in your investigation, if it is not too much of a hassle, try adding barbican to the mix
16:34:35 barbican_disabled: False in solum.conf Done!
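For readers wondering why shelve blocks a multi-node setup: shelve persists to a dbm file on the local disk of whichever node happens to serve the request, so nothing is shared between API nodes. The key and value below are invented for illustration.

    import shelve

    # Opens (or creates) a dbm file on *this* host's local disk.
    store = shelve.open('solum_keys.db')
    store['deploy-key-app1'] = 'fake-key-material'
    store.close()
    # A request handled by a second API node opens its own, empty file and
    # never sees this entry -- hence the push toward Barbican, a DB table,
    # or a shared (e.g. NFS) backend for the shelve file.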
16:34:55 if it does turn out it's not possible, then I don't see any other option than adding tables to Solum
16:35:07 or have an NFS
16:35:16 based solution for handling shelve's backend
16:35:42 cool!
16:36:00 but that would be too much compared to adding tables to Solum
16:36:02 okay
16:36:21 We can then "shelve" it until we find out whether barbican works for us in its present form
16:36:30 (i'm all punny today)
16:36:34 ok, moving on
16:36:49 #topic Cutting Juno Release
16:37:00 (nothing on the wiki)
16:37:08 adrian_otto will probably be doing it before the summit
16:37:35 it will be good to take a look at what we achieved as a team in Juno
16:37:48 hmmm, he won't be around next week either.. and the week after is the summit
16:37:50 I remember we had specced three "milestones"
16:37:57 Juno-1, Juno-2, Juno-3
16:38:25 I don't remember off the top of my head — but some things that come to mind are:
16:38:34 custom language pack support, CI support
16:38:51 integration with private github repos
16:38:58 integration of CAMP
16:39:06 retry check
16:39:15 that's right — retry check
16:39:25 initial stages of the logging work
16:39:41 that list looks nice
16:39:45 +1
16:40:02 +1
16:40:11 hopefully we will have a complete list when adrian_otto cuts the final Juno release
16:40:25 cool! should i add it as an action item for next week?
16:40:39 dimtruck: yes, that would be appropriate
16:40:54 #action adrian_otto to cut the final Juno release
16:40:55 the summit is fast approaching and we should have a release cut soon
16:41:20 dimtruck: you can move on to the next topic
16:41:24 #topic October Elections
16:41:42 so adrian_otto is supposed to communicate about this I believe
16:41:53 let's just carry that forward to the next meeting
16:41:56 so last week we decided to hold off 'til this week to see if there's a challenger
16:42:13 so he wanted the campaign to run until 10/27 with the election on 10/28
16:42:20 our next meeting is on 10/28
16:42:31 I see
16:42:42 is there an email out?
16:42:43 not sure if we want to postpone it one more day... leaving it to you guys to decide
16:43:00 not sure if there's an email but last week's meeting minutes covered the gist of it :)
16:43:02 yeah, maybe we should do that
16:43:06 oh okay
16:43:17 https://gist.github.com/anonymous/c4fe855cbbc45b30f38b
16:43:35 a quick gist with the relevant conversation
16:44:32 we should wait for more people. let's give it one more week.
16:45:10 that's fair
16:45:18 let's move on then and carry this item over to next week
16:45:55 #topic Open Discussion
16:46:02 https://review.openstack.org/#/c/128768/
16:46:03 okay, I have one topic
16:46:07 needs reviewers
16:46:16 gpilz: will look at it
16:46:30 and actually my topic was relevant to this as well
16:47:06 so dimtruck and mkam realized that camp being on by default causes a lot of functional tests to fail
16:47:17 really?
16:47:17 dimtruck: could you elaborate on the issue for us please?
16:47:20 yeah
16:47:56 sure, so if we install solum without camp, all the camp-specific functional tests return a 404 response code
16:48:20 that's the gist of it :)
16:48:25 right
16:48:31 oh - that makes sense
16:48:41 right
16:49:02 so gpilz I wanted to revisit the discussion that we had had some time back
16:49:03 but that isn't "the default being on causes a lot of tests to fail"
16:49:22 that's "if you change things from the default configuration, some tests fail"
16:49:47 I can change my tests to be no-ops if CAMP isn't enabled
16:49:47 gpilz: yeah, you are right.. I got the wording wrong.. main point was
16:50:15 not everyone would run camp
16:50:22 okay, that would make sense
16:50:58 gpilz: you can do that as a separate patch
16:51:04 right
16:51:09 cool
16:52:18 awesome!
16:52:19 are there any other topics/things that anyone has in mind?
16:52:38 we should probably think of doing that for some other work
16:52:44 such as barbican-specific tests
16:52:51 dimtruck: makes sense
16:52:53 if we have any... not sure that we do at this time
16:52:59 alright!
16:53:07 dimtruck: could you or mkam take an action item on this please?
16:53:07 if there are no other topics to discuss, i'll end the meeting
16:53:14 oh sure!
16:53:45 #action gpilz to add no-ops if CAMP is not enabled
16:53:56 #action mkam to add no-ops if BARBICAN is disabled
16:54:04 thanks everyone for attending!
16:54:07 thanks dimtruck
16:54:10 #endmeeting
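The no-op approach gpilz and mkam took action items for typically amounts to skipping the feature-specific functional tests when the deployment under test has that feature disabled. A minimal sketch; the environment variable, base class, and test are invented for illustration, not the actual Solum test code.

    import os
    import unittest


    class TestCampApi(unittest.TestCase):
        def setUp(self):
            super(TestCampApi, self).setUp()
            # Hypothetical switch; the real tests would read their own
            # test configuration instead of an environment variable.
            if os.environ.get('CAMP_ENABLED', 'true').lower() != 'true':
                self.skipTest('CAMP is not enabled in this deployment')

        def test_platform_returns_200(self):
            # Would GET the CAMP platform resource and assert 200 here,
            # instead of failing with the 404 described above.
            self.assertTrue(True)

The same pattern would apply to the barbican-specific tests mentioned at the end of the discussion.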