16:01:11 <dimtruck> #startmeeting Solum Team Meeting
16:01:12 <openstack> Meeting started Tue Oct 21 16:01:11 2014 UTC and is due to finish in 60 minutes.  The chair is dimtruck. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:15 <openstack> The meeting name has been set to 'solum_team_meeting'
16:01:24 <dimtruck> #link https://wiki.openstack.org/wiki/Meetings/Solum Our Agenda
16:01:33 <dimtruck> #topic Roll Call
16:01:38 <devkulkarni> Devdatta
16:01:42 <gpilz> Gil Pilz
16:01:44 <muralia> murali allada
16:01:46 <mkam> Melissa Kam
16:01:53 <dimtruck> Dimitry Ushakov
16:01:54 <james_li> james li
16:02:35 <dimtruck> #topic Announcements
16:02:47 <dimtruck> (none on the wiki)
16:02:55 <dimtruck> anyone have specific announcements they'd like to make?
16:04:09 <dimtruck> if you think of announcements in other parts of the meeting, we can get back to them in the open discussion part
16:04:10 <dimtruck> #topic Review Action Items
16:04:28 <dimtruck> (none on the wiki)
16:04:45 <dimtruck> anyone have any action items they'd like to discuss?
16:04:51 <gpilz> yes
16:05:07 <dimtruck> ok gpilz - the floor is yours
16:05:17 <gpilz> i had an unofficial action item to talk to Pierre about https://review.openstack.org/#/c/115685/
16:05:26 <devkulkarni> ok
16:05:35 <devkulkarni> how did that go?
16:05:37 <gpilz> i contacted him and he said I could take the helm on this
16:05:54 <devkulkarni> ok
16:06:04 <devkulkarni> currently
16:06:19 <devkulkarni> it looks like Jenkins is complaining.. probably it needs a rebase
16:06:21 <gpilz> i'm going to need a bit of help with review.openstack.org making that happen
16:06:32 <devkulkarni> gpilz: sure
16:06:42 <gpilz> needs a rebase and, as we discussed, needs to process Accept headers properly
16:07:00 <devkulkarni> oh! it's the proper HTTP 1.1 way
16:07:04 <devkulkarni> that you wanted right?
16:07:15 <devkulkarni> I thought there was a separate bug that I had created for it
16:07:27 <gpilz> do you have a reference to that bug?
16:07:32 <gpilz> i didn't see it
16:08:14 <devkulkarni> let me check.. in any case it will be implemented only after we get this current patch landed
16:08:33 <devkulkarni> we can take a call whether to integrate that in the current patch or do it as a separate one
16:08:34 <gpilz> okay
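
For reference, the "proper HTTP 1.1" Accept handling gpilz is asking for could look like the minimal sketch below. It uses WebOb (the request library Pecan builds on); the supported media types and function name are illustrative assumptions, not Solum's actual code.

    # Minimal sketch of HTTP/1.1 Accept-header negotiation with WebOb.
    # Media types and names are assumptions for illustration only.
    from webob import Request
    from webob.exc import HTTPNotAcceptable

    SUPPORTED_TYPES = ['application/json', 'application/xml']

    def negotiated_type(environ):
        req = Request(environ)
        # best_match() walks the client's Accept header in preference order.
        best = req.accept.best_match(SUPPORTED_TYPES)
        if best is None:
            # Per HTTP/1.1, answer 406 when the Accept header cannot be honored.
            raise HTTPNotAcceptable()
        return best
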
16:09:03 <devkulkarni> is there anything else on this topic?
16:09:10 <gpilz> nope
16:09:14 <devkulkarni> if not, dimtruck, you can move on to the next topic
16:09:37 <dimtruck> #action revisit this topic once the current patch has landed
16:09:46 <dimtruck> ok, cool.  any other action items?
16:10:01 <dimtruck> #topic Blueprint/Task Review
16:10:13 <dimtruck> (devkulkarni) Quick status check of conversion of bash scripts to Python (do this only if ravips is around)
16:10:34 <devkulkarni> currently we have a lot of core functionality in bash scripts
16:10:34 <dimtruck> i don't see ravips on
16:10:58 <devkulkarni> this topic is to convert those to Python so that we can harden that stuff with tests
16:11:13 <muralia> this is a work item we should get started on. we should email ravi and get an update.
16:11:16 <devkulkarni> ravips has started on it I believe, so just wanted to see what the status is
16:11:25 <muralia> cool
16:11:39 <devkulkarni> muralia: yes, good point. I will take that as an action item
16:11:47 <dimtruck> we can take an action item to follow up with him either on #solum or the next meeting
16:12:00 <devkulkarni> dimtruck, you can assign that to me
16:12:07 <dimtruck> #action devkulkarni to follow up with ravips regarding bash script to python conversion
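
For a sense of what the bash-to-Python conversion buys, here is a hedged sketch: a hypothetical build step moved from shell into a function that unit tests can exercise by mocking subprocess. The command and names are illustrative, not ravips' actual work.

    # Hypothetical flavor of the conversion: a step like
    # `docker build -t "$TAG" "$DIR"` becomes a small function that unit
    # tests can cover by mocking subprocess -- far harder to do for a
    # shell script.
    import subprocess

    def build_image(tag, context_dir):
        """Return True when `docker build` exits successfully."""
        return subprocess.call(['docker', 'build', '-t', tag, context_dir]) == 0
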
16:12:18 <dimtruck> moving on to:  (devkulkarni) Quick status check of WSGI issue that dimtruck was investigating
16:12:24 <devkulkarni> okay.
16:12:24 <dimtruck> i have a few updates on this
16:12:39 <devkulkarni> dimtruck: let me give the problem description first
16:13:14 <dimtruck> so the issue is that when we use wsgiref.simple_server to spin up the http server for solum, requests hang on methods not defined in Pecan
16:13:16 <devkulkarni> so the issue we are running into is that when a call arrives for a resource which expects a body but no body is provided, our server will hang
16:13:26 <dimtruck> right
16:13:52 <dimtruck> so there are a multitude of solutions to this, mainly replacing the simple_server dependency
16:13:55 <devkulkarni> this is bad as it becomes a vector for denial-of-service attack on our server
16:14:00 <dimtruck> correct
16:14:28 <dimtruck> we can alleviate this by removing simple_server dependency and utilizing mod_wsgi instead
16:14:45 <dimtruck> this was already done in keystone
16:14:45 <devkulkarni> dimtruck: I read your comment about mod_wsgi
16:14:59 <devkulkarni> but I don't understand where we would specify it?
16:15:11 <dimtruck> so this would be an apache configuration
16:15:11 <devkulkarni> is it part of devstack setup?
16:15:15 <dimtruck> correct
16:15:22 <muralia> to fix this issue, do we just need to replace the app server?
16:15:28 <devkulkarni> are we currently configuring anything related to apache on devstack?
16:15:31 <dimtruck> instead of just directly spinning up an http server, we have the apache server doing this
16:15:39 <dimtruck> keystone runs apache
16:15:54 <dimtruck> i have to verify if this is in devstack; however.  i haven't done that
16:16:00 <devkulkarni> so you are saying we should look at keystone's devstack setup
16:16:04 <dimtruck> right
16:16:08 <devkulkarni> okay
16:16:29 <devkulkarni> do you want to do that investigation and come back with more insights on how to use it?
16:16:34 <dimtruck> i'll update the bug and put more step-by-step details in the report for next week
16:16:35 <dimtruck> yes sir
16:16:39 <gpilz> are there any ripple effects into the API of doing this?
16:16:43 <dimtruck> nope!
16:16:47 <devkulkarni> good question gpilz
16:17:04 <dimtruck> simple_server is only utilized to spin up the http server that hosts the Pecan wsgi app
16:17:04 <devkulkarni> I would imagine dimtruck's investigation should include that question
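
For a rough picture of the keystone-style deployment being discussed, this is a hedged sketch of the WSGI entry point Apache/mod_wsgi would load in place of wsgiref.simple_server. The file paths, Apache directives, and config location are assumptions, and pecan.deploy stands in for whatever wiring Solum actually uses.

    # Hypothetical /var/www/solum/app.wsgi loaded by Apache/mod_wsgi.
    # A matching vhost might carry directives like (paths are assumptions):
    #
    #   WSGIDaemonProcess solum user=solum processes=2 threads=10
    #   WSGIScriptAlias / /var/www/solum/app.wsgi
    #
    from pecan.deploy import deploy

    # mod_wsgi looks for a module-level callable named 'application'.
    application = deploy('/etc/solum/config.py')
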
16:17:10 <james_li> another issue related to this is https://bugs.launchpad.net/solum/+bug/1359516
16:17:12 <uvirtbot> Launchpad bug 1359516 in solum "Needs to handle http header 'X-Forwarded-Proto'" [Undecided,Confirmed]
16:17:21 <james_li> solum ignores X-Forwarded-Proto header
16:17:28 <devkulkarni> #link https://bugs.launchpad.net/solum/+bug/1359516
16:17:42 <james_li> and never returns https endpoints
16:18:01 <devkulkarni> james_li: please elaborate
16:18:04 <dimtruck> looks like we can definitely kill 2 birds with 1 stone with this effort
16:18:06 <dimtruck> thanks james_li
16:18:24 <devkulkarni> oh, so mod_wsgi will also resolve the bug that james_li pointed out?
16:18:25 <james_li> yeah that one just jumped into my mind, sorry
16:19:16 <dimtruck> hmm, looking into more details here, this might be something different
16:19:20 <devkulkarni> looks like that is something different
16:19:26 <devkulkarni> right
16:19:28 <dimtruck> i'll research more into that and see
16:19:28 <james_li> ok
16:19:37 <devkulkarni> ok cool
16:20:02 <devkulkarni> so dimtruck, could you please make sure that you take into account gpilz's question as part of your investigation?
16:20:27 <dimtruck> #action dimtruck to follow up on bugs 1359516 and investigate for any specific issues in replacing simple_server with mod_wsgi
16:20:27 <uvirtbot> Launchpad bug 1359516 in solum "Needs to handle http header 'X-Forwarded-Proto'" [Undecided,Confirmed] https://launchpad.net/bugs/1359516
16:20:44 <devkulkarni> thanks dimtruck
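
For reference, honoring X-Forwarded-Proto (bug 1359516) amounts to trusting the proxy-supplied scheme when building endpoint URLs, since a TLS-terminating proxy leaves the app seeing plain http. A minimal sketch with illustrative names:

    # Sketch only: prefer the proxy-supplied scheme over the local one
    # when constructing endpoint URLs returned to clients.
    from webob import Request

    def endpoint_base(environ):
        req = Request(environ)
        scheme = req.headers.get('X-Forwarded-Proto', req.scheme)
        return '%s://%s' % (scheme, req.host)
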
16:20:53 <dimtruck> (devkulkarni) Solum installation guide
16:21:02 <devkulkarni> ok, so about this..
16:21:35 <devkulkarni> on friday last week, someone reached out to us on the solum channel asking whether we have an installation guide for installing solum on multi-node OpenStack
16:21:56 <devkulkarni> I have created the linked bug to track that request
16:22:38 <devkulkarni> I wanted to brainstorm with everyone
16:23:00 <devkulkarni> what would it take to install solum on such a setup?
16:23:12 <devkulkarni> we have the vagrant setup
16:23:24 <dimtruck> #link https://bugs.launchpad.net/solum/+bug/1382660
16:23:26 <uvirtbot> Launchpad bug 1382660 in solum "Create installation instructions" [Undecided,New]
16:23:48 <devkulkarni> how much work would it take for us to go from that devstack setup to a multi-node setup? has anyone tried this before with any other OpenStack services?
16:24:23 <devkulkarni> I remember recently seeing an email from the docs team where they had a guide on OpenStack installation.
16:24:56 <devkulkarni> my initial thought is we can start with a multi-node setup for any other OpenStack service and add the required solum services to the mix piece by piece
16:25:02 <dimtruck> #link http://docs.openstack.org/trunk/config-reference/content/configuring-multiple-compute-nodes.html
16:25:11 <dimtruck> this is what i've found in my research
16:25:15 <dimtruck> it's for nova
16:25:17 <devkulkarni> thanks dimtruck for the link
16:25:41 <devkulkarni> one of the main blockers for us for the multi-node setup is going to be the shelve dependency
16:25:41 <dimtruck> i'm not sure, however, what it would take to have the entire devstack scale out like that
16:26:27 <devkulkarni> we cannot use barbican yet, so probably we should think about adding a solum table instead of using shelve
16:26:33 <devkulkarni> what do you all think?
16:26:56 <dimtruck> agree with shelving shelve :)
16:27:06 <james_li> +1
16:27:28 <devkulkarni> muralia, gpilz: thoughts?
16:27:57 <muralia> that's true, shelve will be a blocker.
16:28:18 <muralia> for multinode we should just use barbican
16:28:30 <muralia> and not replicate key storage
16:28:30 <dimtruck> that might be our biggest bottleneck.  Here's another doc on how to set up devstack on multiple nodes.  Not sure how up to date it is though: http://devstack.org/guides/multinode-lab.html
16:28:32 <gpilz> no opinion (not qualified)
16:28:36 <dimtruck> muralia: +1 on that
16:29:10 <dimtruck> cool, so who wants to volunteer on doing this research?
16:29:44 <dimtruck> essentially setting up a multi-node devstack and putting together a spec for replacing shelve (do we need a spec)?
16:30:04 <james_li> devkulkarni: when we switch to barbican later, how about the migration work?
16:30:32 <devkulkarni1> james_li: good point. we should be using alembic to do that
16:30:33 <muralia> we don't need a spec for replacing shelve. solum already works with barbican. there's nothing new to add.
16:30:59 <devkulkarni1> how about tables within solum itself?
16:31:09 <devkulkarni1> agree we don't need a spec
16:32:14 <muralia> hmm, I'm not sure we want to add tables in Solum to store keys. Why duplicate that work?
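
For concreteness, the "solum table" alternative to shelve under debate might look like the hypothetical SQLAlchemy model below. The table and column names are invented for illustration; the point is that keys would live in the shared database visible to every node, rather than in a per-host shelve file.

    # Hypothetical model for storing keys in a solum table instead of shelve.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class DeployKey(Base):
        __tablename__ = 'deploy_keys'
        id = sa.Column(sa.Integer, primary_key=True)
        project_id = sa.Column(sa.String(36), nullable=False)
        key_data = sa.Column(sa.Text, nullable=False)  # ideally encrypted at rest
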
16:32:18 <devkulkarni1> apart from shelve are there any other issues that stand out?
16:32:34 <dimtruck> yay for no spec.  So I can volunteer on spinning up a multi-node devstack with solum and come back to the team with my findings.  We can find out then if there are any other issues.
16:32:46 <devkulkarni1> thanks dimtruck
16:32:47 <dimtruck> devkulkarni1: not to me at this time
16:33:17 <dimtruck> #action dimtruck to report back results of multi-node devstack with solum setup
16:33:35 <dimtruck> so then at this time, do we keep shelve until barbican is "ready"?
16:33:46 <dimtruck> and just replicate it on every node?
16:34:04 <devkulkarni1> we can try adding barbican to the mix
16:34:11 <dimtruck> cool!
16:34:12 <devkulkarni1> dimtruck:
16:34:32 <devkulkarni1> so I would say in your investigation, if it is not too much of a hassle, try adding barbican to the mix
16:34:35 <dimtruck> barbican_disabled: False in solum.conf  Done!
16:34:55 <devkulkarni1> if it does turn out it's not possible, then I don't see any other option than adding tables to Solum
16:35:07 <devkulkarni1> or have an NFS-based solution for handling shelve's backend
16:35:42 <dimtruck> cool!
16:36:00 <devkulkarni1> but that will be too much work compared to adding tables to Solum
16:36:02 <devkulkarni1> okay
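
The alembic work devkulkarni mentions would then be an ordinary revision; a rough sketch for creating the interim table, matching the hypothetical model above. A later switch to barbican would copy the keys out and drop the table in its own revision.

    # Sketch of an alembic revision for the interim key table.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.create_table(
            'deploy_keys',
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('project_id', sa.String(36), nullable=False),
            sa.Column('key_data', sa.Text, nullable=False),
        )

    def downgrade():
        op.drop_table('deploy_keys')
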
16:36:21 <dimtruck> We can then "shelve" it until we find out whether barbican works for us in its present form
16:36:30 <dimtruck> (i'm all punny today)
16:36:34 <dimtruck> ok, moving on
16:36:49 <dimtruck> #topic Cutting Juno Release
16:37:00 <dimtruck> (nothing on the wiki)
16:37:08 <devkulkarni1> adrian_otto will probably be doing it before the summit
16:37:35 <devkulkarni1> it will be good to take a look at what we achieved as a team in Juno
16:37:48 <dimtruck> hmmm, he won't be around next week either..and the week after is the summit
16:37:50 <devkulkarni1> I remember we had spec'd three "milestones"
16:37:57 <devkulkarni1> Juno-1, Juno-2, Juno-3
16:38:25 <devkulkarni1> I don't remember off the top of my head — but some things that come to mind are:
16:38:34 <devkulkarni1> custom language pack support, CI support
16:38:51 <devkulkarni1> integration with private github repos
16:38:58 <devkulkarni1> integration of CAMP
16:39:06 <dimtruck> retry check
16:39:15 <devkulkarni1> that's right — retry check
16:39:25 <devkulkarni1> initial stages of the logging work
16:39:41 <devkulkarni1> that list looks nice
16:39:45 <dimtruck> +1
16:40:02 <muralia> +1
16:40:11 <devkulkarni1> hopefully we will have a complete list when adrian_otto cuts the final Juno release
16:40:25 <dimtruck> cool! should i add it as an action item for next week?
16:40:39 <devkulkarni1> dimtruck: yes, that will be appropriate
16:40:54 <dimtruck> #action adrian_otto to cut the final Juno release
16:40:55 <devkulkarni1> the summit is fast approaching and we should have a release cut soon
16:41:20 <devkulkarni1> dimtruck: you can move on to the next topic
16:41:24 <dimtruck> #topic October Elections
16:41:42 <devkulkarni1> so adrian_otto is supposed to communicate about this I believe
16:41:53 <devkulkarni1> let's just carry that forward to the next meeting
16:41:56 <dimtruck> so last week we decided to hold off 'til this week to see if there's a challenger
16:42:13 <dimtruck> so he wanted the campaign to be until 10/27 with the election on 10/28
16:42:20 <dimtruck> our next meeting is on 10/28
16:42:31 <devkulkarni1> I see
16:42:42 <devkulkarni1> is there an email out?
16:42:43 <dimtruck> not sure if we wanted to postpone it one more day...leaving it to you guys to decide
16:43:00 <dimtruck> not sure if there's an email but last week's meeting minutes covered the gist of it :)
16:43:02 <devkulkarni1> yeah, maybe we should do that
16:43:06 <devkulkarni1> oh okay
16:43:17 <dimtruck> https://gist.github.com/anonymous/c4fe855cbbc45b30f38b
16:43:35 <dimtruck> a quick gist with the relevant conversation
16:44:32 <muralia> we should wait for more people. let's give it one more week.
16:45:10 <dimtruck> that's fair
16:45:18 <dimtruck> let's move on then and carry this item over to next week
16:45:55 <dimtruck> #topic Open Discussion
16:46:02 <gpilz> https://review.openstack.org/#/c/128768/
16:46:03 <devkulkarni1> okay, I have one topic
16:46:07 <gpilz> needs reviewers
16:46:16 <devkulkarni1> gpilz: will look at it
16:46:30 <devkulkarni1> and actually my topic was relevant to this as well
16:47:06 <devkulkarni1> so dimtruck and mkam realized that camp default being on causes a lot of functional tests to fail
16:47:17 <gpilz> really?
16:47:17 <devkulkarni1> dimtruck: could you elaborate on the issue for us please?
16:47:20 <devkulkarni1> yeah
16:47:56 <dimtruck> sure, so if we install solum without camp, all the camp-specific functional tests return a 404 response code
16:48:20 <dimtruck> that's the gist of it :)
16:48:25 <devkulkarni1> right
16:48:31 <gpilz> oh - that makes sense
16:48:41 <devkulkarni1> right
16:49:02 <devkulkarni1> so gpilz I wanted to revisit the discussion that we had had sometime back
16:49:03 <gpilz> but that isn't "the default being on causes a lot of tests to fail"
16:49:22 <gpilz> that's "if you change things from the default configuration, some tests fail"
16:49:47 <gpilz> I can change my tests to be no-ops if CAMP isn't enabled
16:49:47 <devkulkarni1> gpilz: yeah, you are right.. I got the wording wrong.. main point was
16:50:15 <devkulkarni1> not everyone would run camp
16:50:22 <devkulkarni1> okay, that would make sense
16:50:58 <devkulkarni1> gpilz: you can do that as a separate patch
16:51:04 <gpilz> right
16:51:09 <devkulkarni1> cool
16:52:18 <dimtruck> awesome!
16:52:19 <devkulkarni1> are there any other topics/things that anyone has in mind?
16:52:38 <dimtruck> we should probably think of doing that for some other work
16:52:44 <dimtruck> such as barbican-specific tests
16:52:51 <devkulkarni1> dimtruck: makes sense
16:52:53 <dimtruck> if we have any...not sure that we do at this time
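
The no-op behavior being assigned here could be as simple as skipping in setUp when the feature is disabled; a minimal sketch with an assumed feature-flag helper rather than Solum's real config mechanism (a barbican variant would be identical in shape):

    import unittest

    def camp_enabled():
        # Assumed helper: however the suite learns the deployment's config.
        return False

    class TestCampApi(unittest.TestCase):
        def setUp(self):
            super(TestCampApi, self).setUp()
            if not camp_enabled():
                # Skips report as skips, not as 404-induced failures.
                self.skipTest('CAMP endpoints not enabled in this deployment')

        def test_platform(self):
            pass  # would GET /camp/v1_1/platform and assert on the response
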
16:52:59 <dimtruck> alright!
16:53:07 <devkulkarni1> dimtruck: could you or mkam take an action item on this please?
16:53:07 <dimtruck> if there are no other topics to discuss, i'll end the meeting
16:53:14 <dimtruck> oh sure!
16:53:45 <dimtruck> #action gpilz to add no-ops if CAMP is not enabled
16:53:56 <dimtruck> #action mkam to add no-ops if BARBICAN is disabled
16:54:04 <dimtruck> thanks everyone for attending!
16:54:07 <devkulkarni1> thanks dimtruck
16:54:10 <dimtruck> #endmeeting