22:00:17 #startmeeting Solum Team Meeting
22:00:17 Meeting started Tue Sep 16 22:00:17 2014 UTC and is due to finish in 60 minutes. The chair is devkulkarni. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:20 The meeting name has been set to 'solum_team_meeting'
22:00:36 #link https://wiki.openstack.org/wiki/Meetings/Solum#Agenda_for_2014-09-16_2200_UTC
22:00:43 our agenda ^^
22:00:53 #topic Roll Call
22:00:54 James Li
22:00:57 Devdatta Kulkarni
22:00:59 o/
22:01:03 Ravi Sankar Penta
22:01:23 Dimitry Ushakov
22:01:55 let's give a few minutes for folks to chime in
22:02:38 Anyone else who would like to be recorded in attendance, please chime in at any time.
22:02:57 #topic Announcements
22:03:08 Does anyone have any announcements for the team?
22:04:23 okay. let's move on to the next topic.
22:04:40 #topic Review Action Items
22:05:03 I remember there was one action item for adrian_otto to send out a poll for discussing CAMP
22:05:18 I haven't seen that email. So that will be carried to the next meeting
22:05:51 any other action items that any of you remember?
22:06:40 okay.. if you remember anything please feel free to bring it up in the open discussion session
22:06:58 #topic BP/Task Review
22:07:14 So I have two things listed for us to discuss..
22:07:24 First is the wsgi-related bug.
22:08:02 dimtruck recently identified that the Solum API will hang if we make requests to the root resource with any HTTP method except GET
22:08:28 please take a moment to read through the bug description and the comments
22:08:45 devananda: link?
22:08:58 james_li: https://bugs.launchpad.net/solum/+bug/1367470
22:08:59 Launchpad bug 1367470 in solum "Solum api hangs on non GET root requests" [Undecided,New]
22:09:07 correct - the issue is due to the wsgiref module that we depend on for simple_server
22:09:21 we utilize this module in api and, i believe, shell
22:09:32 builder*
22:09:57 dimtruck, PaulCzar: how do you want us to proceed on this bug?
22:10:38 dimtruck: thanks for summarizing
22:10:43 so what are our options?
22:10:48 we discussed using the mod_wsgi module in apache
22:11:09 apparently, there's a precedent in keystone
22:11:19 so we can piggyback on their implementation
22:11:39 another option is to use a different wsgi server implementation - but i'm not sure what is available in that space
22:11:40 how would it work in our devstack and tempest setup?
22:11:45 dimtruck: yeah that's what I would like to do. I'm not sure how they handle it in the gate etc
22:11:54 but we can investigate and figure it out
22:11:59 right
22:12:22 we'd have to spin up apache with that mod in a pre_test_hook in tempest
22:12:24 okay.. so apart from the Keystone way, is there any other way?
22:12:34 dimtruck is way smarter than those keystone guys, so he will have no trouble making it work
22:12:39 ;)
22:13:05 dimtruck: do you want to take this up for investigation?
22:13:09 yes sir
22:13:16 cool
22:13:21 PaulCzar: ha!
22:13:23 dimtruck: happy to help with it if you want
22:13:48 https://pypi.python.org/pypi/WSME supports Multi-protocol : REST+Json, REST+XML, SOAP, ExtDirect and more to come
22:13:49 ravips: what are your thoughts?
22:14:14 ravips: did you get a chance to look at the bug?
22:14:16 do we have feature parity with apache mod_wsgi ?
22:14:54 not sure I understand the question.. feature parity of solum with mod_wsgi or feature parity of wsme with mod_wsgi?
22:15:04 understood at the high level..just worried that switching
22:15:06 we don't really use wsgiref outside of simple_server but that's definitely an investigation we'd need to make
22:15:26 to another tool might have some other issues
22:15:27 ravips: yes, that is a valid concern
22:15:56 does anybody know what other projects do? has anybody else run into this issue?
22:16:34 we already have a dependency on wsme
22:16:37 i meant feature parity of wsme with mod_wsgi
22:16:57 so this is definitely a possibility
22:17:13 dimtruck, what is definitely a possibility?
22:17:14 ravips: not sure at this time
22:17:22 just replacing wsgiref with wsme
22:17:31 ah okay.
22:17:37 since the dependency is already there in solum
22:17:39 so basically that is the other option
22:17:41 if we already have a dependency on wsme then that sounds like an easy(ish) win
22:17:43 do we know of any workaround with wsme for this issue?
22:17:48 we switch the server
22:18:02 ravips: good question.
22:18:06 we'll need to test it out
22:18:17 PaulCzar and I can take that action item, devkulkarni
22:18:30 dimtruck: awesome !! was just about to ask that..
22:18:35 (since he volunteered to help :) )
22:19:06 #action dimtruck (with help from PaulCzar) will investigate using wsme
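For readers following the serving discussion above, here is a minimal sketch (not Solum's actual code) contrasting the wsgiref simple_server pattern the bug points at with a keystone-style mod_wsgi entry module; the load_app() factory and the port are hypothetical stand-ins for whatever the API really exposes.

```python
# Illustrative sketch only -- not Solum's entry points. load_app() and the
# port number are made-up stand-ins for however the API builds its WSGI app.

def load_app():
    """Hypothetical factory returning the service's WSGI callable."""
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']
    return app

def serve_with_wsgiref(host='0.0.0.0', port=8080):
    # The pattern under discussion: wsgiref's reference server, which is
    # intended for development use and is where the reported hang shows up.
    from wsgiref.simple_server import make_server
    make_server(host, port, load_app()).serve_forever()

# keystone-style mod_wsgi alternative: Apache (via WSGIScriptAlias) imports a
# module like this and drives the module-level `application` object itself,
# so wsgiref is no longer in the request path.
application = load_app()
```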
22:19:22 okay.. moving on
22:19:42 The next thing I had was PaulCzar's CoreOS patch.. but
22:20:04 since we have ravips and PaulCzar here, should we discuss the private github repo patch first (or instead)?
22:20:16 yeah let's cover that first as it's more pressing
22:20:22 okay..
22:20:30 sure
22:20:44 so PaulCzar, could you please update us on the latest for this patch?
22:21:32 I did all the rebasing yesterday, which was quite a task in itself, and then with some help from dimtruck this morning managed to figure my way through a few bugs in the barbican calls
22:21:51 got all the tests ( unit and functional ) passing locally ( devstack/trusty )
22:21:59 thanks for pushing that patch forward PaulCzar
22:22:03 but we have a voting gate on f20
22:22:10 thanks Paul and Dimitry
22:22:12 and barbican doesn't test at all against f20
22:22:40 hmm.. so does that mean we are stuck until barbican starts testing with f20.
22:22:42 ?
22:22:43 so I propose that we switch our voting functional tests to the trusty test
22:22:50 and move f20 to non-voting
22:22:52 I was able to run barbican on F20 before and gate-solum-devstack-dsvm-f20 passed just before I left for vacation...not sure what got changed after that
22:23:09 oh okay..
22:23:15 and have a followup for somebody to work with barbican to get the f20 tests working
22:23:24 ravips: do you want to investigate that a bit now?
22:23:48 yeah, I need some time..at least a day to look at our changes and any new barbican changes
22:23:54 PaulCzar: seems like a good middle ground approach..
22:24:03 ravips: okay, makes sense.
22:24:13 then we can decide whether to make the f20 gate voting or non-voting ..that's my opinion
22:24:22 ravips: do you think you might be looking at this immediately?
22:24:31 yes
22:24:41 ravips: I agree.
22:24:53 if somebody can remind me of the location of the infra config stuff that sets which gates are voting I can prepare a review to switch it out
22:24:58 ravips: cool. awesome!!
22:25:07 that way we have both options in flight
22:25:11 PaulCzar: it's in zuul
22:25:19 i'm looking for the yaml file
22:25:24 PaulCzar: I suggest we wait for ravips's investigation
22:25:52 PaulCzar: https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml
22:26:01 thanks clarkb
22:26:02 devkulkarni: I'm okay with that. I just don't want to sit on this for too long or we risk a painful rebase again
22:26:12 PaulCzar: fair enough
22:26:21 ravips: what are your thoughts?
22:26:46 I agree with Paul, rebasing is painful and I have experienced that before
22:27:14 let's wait for a day and either we can make the f20 gate non-voting or we will fix the issue
22:27:14 ravips: yes.. I meant whether to make f20 non-voting will be based on the outcome of your investigation
22:27:17 given that ubuntu is the primary OS choice for openstack I think it makes sense for that to be our main gate
22:27:19 okay
22:27:33 let's wait for a day and then we can make a decision
22:27:46 okay. We'll make a call Thursday morning?
22:27:50 ravips, is it okay for me to tag an action for you to do the investigation?
22:27:56 sounds good to me
22:27:59 PaulCzar: sounds good
22:28:33 #action ravips will investigate f20 gate for failing barbican tests and come back with a suggestion for whether to make f20 non-voting
22:28:58 cool
22:29:18 so the next one on my agenda was the CoreOS patch by PaulCzar
22:29:30 #link https://review.openstack.org/#/c/102646/
22:29:45 background...
22:30:06 the initial implementation of the VM based stuff was to use DIB and/or DIB + app injection at build time
22:30:22 this is an expensive and insecure operation ( root! ) and not a lot of fun
22:31:25 this implementation throws that out and uses docker to build and deploy the application ( much like we do with docker based ) and then deploys a VM running coreos which starts that docker image up on boot.
22:32:09 Initial implementation utilizes the docker registry, but we should be able to switch across to pull from glance like the docker driver does
22:32:23 how does heat know the image to use has CoreOS on it?
22:32:43 does the operator configure heat for this?
22:33:10 PaulCzar: why run docker inside the VM..to run the app as a non-root user?
22:33:15 devkulkarni: right now it expects there to be an image with the name of 'coreos' and expects the operator to have already uploaded that to glance ( we do this in devstack )
22:33:31 ravips: so that docker is our only artifact format
22:33:35 PaulCzar: oh okay
22:34:00 devkulkarni: longer term we should allow the operator to specify an image uuid to use in the solum.conf
22:34:01 uniformity argument
22:34:15 PaulCzar: makes sense
22:34:48 so looks like all the gates are happy with it
22:34:53 all you need is reviews
22:35:10 ravips: it also allows the possibility for us to, down the track, utilize a container scheduler like fleet/mesos/kubernetes to be able to pack more than one container into a VM
22:35:12 ravips: mind taking a look at it?
22:35:23 whenever you get a chance
22:35:25 which I think will be important for multitenant openstacks
22:35:27 this approach doesn't need root privileges to run the app?
22:35:52 sure, I will review it
22:36:18 PaulCzar: +1 for making Solum compatible with container scheduling systems that are coming up
22:36:26 ravips: correct, the docker daemon runs with root privileges on the VM ... but the app can be started with any user inside the container. When docker lands user namespaces, then root in the container will not be root outside the container
22:37:00 PaulCzar: so what user will it be in the case of Solum?
22:37:14 the Solum admin user working on behalf of the tenant?
22:37:42 devkulkarni: a non-openstack user ... just some random user defined in the container
22:37:47 tbd
22:37:58 okay.
22:38:05 that makes sense
22:38:22 user security is still up in the air in the docker ecosystem ... so we should just keep it lightweight and evolve with it
22:38:30 ravips: do give feedback on that patch if you have any (thanks for looking at it)
22:38:39 will do
22:38:52 PaulCzar: makes sense
22:38:59 okay..
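To illustrate the "let the operator specify an image uuid in solum.conf" idea raised above, here is a rough sketch using oslo.config; the option name, group, and fallback behaviour are hypothetical and are not an existing Solum setting.

```python
# Hypothetical sketch -- the 'coreos_image_id' option, the 'deployer' group
# and pick_image() do not exist in Solum; they only illustrate letting the
# operator choose the CoreOS image by uuid instead of relying on the
# image-named-'coreos' convention the patch currently uses.
from oslo_config import cfg

coreos_opts = [
    cfg.StrOpt('coreos_image_id',
               default=None,
               help='Glance image uuid of the CoreOS image to boot; when '
                    'unset, fall back to the image named "coreos".'),
]

CONF = cfg.CONF
CONF.register_opts(coreos_opts, group='deployer')


def pick_image():
    # Prefer an explicitly configured uuid, otherwise keep today's
    # by-name lookup convention.
    return CONF.deployer.coreos_image_id or 'coreos'


if __name__ == '__main__':
    # Normally done once at service startup, when solum.conf is parsed.
    CONF([], project='solum')
    print(pick_image())
```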
22:39:14 so apart from that I don't have anything for the BP/Task Review
22:39:32 does anyone have anything to discuss? if not, we can move to the open discussion
22:39:48 any outstanding specs ?
22:39:57 PaulCzar: good point
22:40:03 let's take a look
22:40:12 #link https://review.openstack.org/#/q/status:open+solum-specs,n,z
22:40:32 why is the 'friendliness' spec not merged yet?
22:41:13 PaulCzar: the deployment agent spec.. is the CoreOS patch the code for that spec?
22:41:33 devkulkarni: there are a few comments from pierre that I just saw that I need to address
22:41:46 PaulCzar: you want to take care of the suggestions by Pierre on the friendliness spec sometime soon?
22:41:53 cool
22:42:03 devkulkarni: on the deployment agent spec ... it's just a brain dump of some thoughts on how we could interact with schedulers on the host vms
22:42:25 it would build on top of the core patch
22:42:27 coreos
22:42:39 so its scope is broader than the coreos patch
22:42:48 okay..
22:43:04 correct
22:43:06 anything else?
22:43:27 if not let's go to open discussion and continue
22:43:33 #topic Open Discussion
22:43:47 So I have two topics..
22:43:58 first is converting our bash scripts to Python
22:44:15 ravips, PaulCzar: what are your thoughts on this? where should we start?
22:45:15 let's summarize the benefits of converting bash scripts to python..what do we gain by making this change
22:45:56 we get better error logging, testing of the code
22:46:05 stacktraces, yes
22:46:28 ravips: low hanging fruit: we get to use the same tooling ( oslo-logs etc ) rather than trying to recreate it in shell
22:46:28 maybe not everything needs to be converted to Python
22:46:36 devkulkarni: +1
22:46:55 items in the common functions ( such as logging ) would be a good start
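Since the discussion above points at the common logging functions as a starting point, here is a rough sketch of what one shared helper might look like in Python with oslo.log; the helper name and the fields it logs are made up for illustration and are not existing Solum code.

```python
# Sketch only -- log_step() and its arguments are hypothetical; the point is
# that moving the shared bash helpers to Python gives us oslo.log levels,
# formatting and tracebacks instead of hand-rolled echo statements.
import sys

from oslo_config import cfg
from oslo_log import log as logging

CONF = cfg.CONF
logging.register_options(CONF)

LOG = logging.getLogger(__name__)


def log_step(assembly_id, stage, message):
    # In the bash scripts this is roughly an echo with a hand-built prefix;
    # here level, timestamp and output format come from the shared oslo config.
    LOG.info('[assembly %s] %s: %s', assembly_id, stage, message)


if __name__ == '__main__':
    CONF(sys.argv[1:], project='solum')
    logging.setup(CONF, 'solum')
    log_step('1234', 'unittest', 'starting tests')
```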
22:47:36 ok, if we decide to move to python..we should do it sooner, our bash scripts are expanding and it will be a huge task later on
22:48:10 ravips: apart from stacktraces, don't you think we will get other benefits?
22:48:35 slight overhead
22:49:09 ravips: +1 on doing it sooner
22:49:12 ravips: ?
22:49:19 +2 on doing it sooner
22:49:20 that was a -ve point
22:50:02 +2 to sooner
22:50:34 we will need a well thought out approach for going about this
22:51:04 We can utilize it as a way to formalize the LP handler ( random name I'm giving to contrib/lp-cedarish, contrib/lp-dockerfile ) and a contract between Solum and the Handler
22:51:11 we can start small.. maybe with the common functions that PaulCzar mentioned
22:51:29 by moving all the common stuff to python in solum and leaving the low-level stuff in the LP handler
22:51:31 PaulCzar: yes!
22:51:37 +1
22:51:39 as bash or python or powershell
22:51:59 we need a somewhat formalized contract between Solum and the language packs
22:52:01 POWERSHELL WAS A JOKE <--- to anyone reading the transcript later :P
22:52:45 the contract needs to be such that it allows us to use heroku style buildpacks and cloudfoundry style buildpacks within solum
22:53:26 7 minute warning.
22:53:49 correct. so we define the inputs and outputs to each of the following ( example ) test, build, [release?], [deploy?]
22:54:09 so I guess we generally agree on converting bash scripts to Python.
22:54:32 about the contract — I don't think we will have it in a single iteratino
22:54:36 iteration
22:54:42 but we can start on it..
22:55:01 devkulkarni: I agree, keep it loose and tighten it up as we go
22:55:42 we can stop if there is nothing else (we can take the remaining open discussion item to the next meeting)
22:56:15 PaulCzar: +1
22:56:49 okay then.. thanks all for joining us. see you next week.
22:56:52 #endmeeting