21:01:49 #startmeeting crossproject
21:01:49 Meeting started Tue Dec 16 21:01:49 2014 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:53 The meeting name has been set to 'crossproject'
21:01:53 o/
21:01:58 Our agenda for today:
21:02:12 #link http://wiki.openstack.org/Meetings/CrossProjectMeeting
21:02:36 Do we have joehuang around?
21:03:12 Let's invert the two agenda items then to give him a chance to join
21:03:35 #topic Providing an alternative to shipping default config file (ttx)
21:03:48 There was a recent thread on the operators ML complaining about the removal of default config files from git:
21:03:53 #link http://lists.openstack.org/pipermail/openstack-operators/2014-December/005658.html
21:04:06 That thread was derailed to talk about packaging, but I think the original concern is valid: default config files have value to operators and we removed them
21:04:18 Now, we removed them for a reason: it was pretty painful to keep them in sync and often resulted in various failures
21:04:33 can we build them in the docs, instead of the source tree? that would make them available but not gum up the git repos with more automated changes or files that are out of date
21:04:37 So my question is, what can we offer (ideally a standard solution) to give operators those files back, while not restoring the original problem
21:04:50 dhellmann: fung i suggested something of that vein yes
21:04:56 fungi*
21:04:57 and by docs, I mean the developer docs so they are updated on every commit
21:05:06 generate and post the sample as part of dev docs
21:05:09 although it certainly wouldn't hurt to add them to the other docs as well
21:05:12 aren't more docs moving into the source tree?
21:05:15 +1
21:05:16 ahh, yes zigo said he wasn't going to be around for the meeting, but wanted to pass along a recommendation of having the sdist step generate sample configs to include in the tarballs
21:05:24 that would work for me for what I need the sample files for
21:05:34 fungi, that was my view, if sdist can do that, i'd like it there
21:05:36 mfisch: which solution would work for you?
21:05:39 dhellmann: I would suggest to add it to OS-manuals
21:05:48 so the problem with the "just have sdist run tools/config/generate_sample.sh -b . -p nova -o etc/nova" is that the result heavily depends on the env it's being run on
21:05:56 ildikov: the problem there is that isn't rebuilt on every merge into a project
21:05:57 dhellmann: I think that would be clearer as we have already a whole config reference there
21:05:57 dhellmann: anywhere in a tree I can get to
21:06:03 I like the docs post because we run it
21:06:03 and while i agree, i think having the code doc build step also generate them and include them somewhere would be a useful addition
21:06:19 i'm fine with either solution
21:06:20 some projects have a tox genconfig env for building the config sample
21:06:28 which sometimes works
21:06:34 i'd like to evict the sample config from keystone if we had a better alternative
21:06:40 such as docs
21:06:50 dhellmann: yes, I know, it just messes up a bit the purpose of each docco we have
21:06:57 fungi: having the sdist build it might be challenging, since the tool uses entry points to find the options so the code has to be "installed" for it to work
21:07:00 but i know until we have that alternative keystone will continue to do the manual updates prior to releases.
21:07:03 one sucky thing is after you've run it, you have a file sitting around that may or may not be accurate anymore
21:07:04 also it needs the dependencies installed
21:07:05 dhellmann: ++
21:07:09 ttx: we control the environment for the sdist built in our post jobs which generates our tarballs
21:07:12 and having to run tox every time you want to look at a config reference is annoying
21:07:16 +1 russellb
21:07:16 having the sample config in tree is kind of handy since it's easier to see what the output will look like in a review.
21:07:24 fungi: right, but encouraging others to run it might be counterproductive
21:07:25 would having 'tox -egenconfig' be a standard help at all?
21:07:31 dhellmann: we already do when we run sdist
21:07:33 and keeping your mac up to date on 10 different requirements files from 10 projects daily is not cool
21:07:34 dhellmann: maybe we can refer the developer docs from manuals
21:07:46 fungi: oh, because we're running it under "tox -e venv"?
21:07:50 I look at these about once every few weeks
21:07:52 dhellmann: yep
21:07:57 bknudson, if it was generated like docs - and visible that would be fine as well, -edocs instead of -egenconfig vs in-tree
21:07:57 jogo: that would add a bit of predictability for sure
21:08:06 ttx: i took the recommendation not as suggesting consumers rerun sdist
21:08:13 fungi: ok, in that case as long as we're doing it in our build and not when someone checks out the source and runs "python setup.py sdist" I think it's ok
21:08:27 ttx: but rather that the sdist _we_ build could include those files
21:08:47 Ideally we would adopt a common solution, so that ops don't have to find out the way each project decided to make that default config file available
21:08:49 via whatever mechanism
21:08:55 ttx +1 for common solution
21:09:01 +1 for common too
21:09:06 fungi: I wonder if that would require any manifest trickery, but that's an implementation detail
21:09:10 that doesn't require gating on a static file in the tree.
21:09:17 and yes, that's what i took as the reason for discussing it in the cross-project meeting. standardizing on a mechanism and location
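
(A minimal sketch of the environment dependence described above. The generate_sample.sh flags are the ones quoted in the discussion; the virtualenv steps around them are illustrative, not any project's actual tooling:)

    # The generator discovers options through entry points, so the project and
    # all of its option-providing libraries must be importable before it runs.
    virtualenv .venv && . .venv/bin/activate
    pip install -r requirements.txt -e .
    ./tools/config/generate_sample.sh -b . -p nova -o etc/nova
    # The result reflects whatever library versions happen to be installed in
    # this environment, which is why two machines can produce different samples.
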
21:09:17 ttx: ++
21:09:41 jogo: for instance in Ceilometer we have it
21:09:47 fungi: I know zigo runs sdist to rebuild tarballs from git, that's why I mentioned it
21:10:18 fungi: fwiw, some projects have not adopted the new config generator, and that needs project-specific args, so we probably want a tox.ini or shell script interface
21:10:20 ttx: actually he said he doesn't
21:10:42 ttx: he tars up the contents from git plus files added to create the debian source packages
21:10:45 fungi: ah. pretty sure he used to though
21:10:53 fungi: in fact, probably a shell script called by tox, so you can "tox -e venv -- tools/genconfig.sh" and other devs can "tox -e genconfig"
21:11:40 dhellmann: agreed, but i think where the opposition on the ops list is coming from is "i don't want to have to run something to generate sample configs, just tell me where to get them from"
21:11:43 So I think tox -e genconfig + inclusion in dev docs sounds like the way to go
21:11:58 fungi: right, I'm just proposing the common API for our infrastructure to use to build them when packaging
21:12:03 fungi: also, it's not always simple to set up the env
21:12:13 dhellmann: sure, that works for me as a solution
21:12:28 update the package job to call "tox -e venv -- tools/genconfig.sh" before "tox -e venv -- python setup.py sdist"
21:12:28 sdague: agreed, that's probably the largest reason why they don't want to have to run something to generate them
21:12:58 dhellmann: yep. and add a similar step to the doc build job for the individual projects too
21:13:03 right
21:13:15 I'd say the next step is an openstack-specs spec
21:13:15 and then hyperlink those files in the template or something
21:13:25 or literalinclude
21:13:26 so that we can get ops and PTLs +1s on it
21:13:36 Ops need to be able to get previous versions, too.
21:14:03 not just current and release
21:14:03 Rockyg: previous versions with what granularity?
21:14:05 are we sure it's a good idea to make building an sdist more complicated?
21:14:05 Rockyg: we'd need a definition of "previous versions"
21:14:30 jeblair: I suggested having tox -e genconfig + inclusion in dev docs
21:14:34 jeblair: this would be an optional step our build job would do, and that wouldn't be done by someone building an sdist by hand elsewhere
21:14:38 So, if a team is running say a month behind the head of tree...
21:14:43 things with stable/ ?
21:14:43 jeblair: how much more complicated is sdist made by it?
21:14:44 i mean, everyone knows how to build an sdist, right? except we're proposing that _openstack_ have a different way of building them, so if you want to build it and get the same content, you have to do something extra
21:15:06 jeblair: well, that's a fair point
21:15:10 Rockyg: previous versions are always buildable from git. and the doc changes are already documented between openstack releases.
21:15:39 Rockyg: that's a good argument in favor of storing them in tarballs
21:15:48 dhellmann: what's the reason not to generate them in the sdist step?
21:15:49 right. and the ops are saying they don't want to build the sample configs from git
21:16:19 jeblair: in order for the config generator to work the code for the project and all of its dependencies need to be installed so the entry points work
21:16:23 Yes. Tarballs will probably satisfy most of the devops and the ones it doesn't are likely very capable of rebuilding from git
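
(A sketch of the standard interface proposed above: a project-provided tools/genconfig.sh wrapper that both "tox -e genconfig" and the packaging job call. The wrapper name and tox invocations come from the discussion; the wrapper body itself is an assumption, using nova as a stand-in project:)

    #!/bin/bash
    # tools/genconfig.sh -- hypothetical wrapper hiding project-specific
    # generator arguments behind one common entry point.
    set -e
    ./tools/config/generate_sample.sh -b . -p nova -o etc/nova

The tarball-publishing job would then run "tox -e venv -- tools/genconfig.sh" followed by "tox -e venv -- python setup.py sdist", as proposed above, and the doc build job could run the same wrapper before publishing the sample with the developer docs.
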
21:16:23 jeblair: it's because random people running python setup.py sdist will end up with a partial config file
21:16:31 jeblair: if I check out a git tree and run "python setup.py sdist" it shouldn't install anything
21:16:37 if someone wanted to build a service that generated all the iterated changes for sample configs for each project and stored them in a git repository, that would be one solution to the "history" problem i guess
21:16:39 dhellmann: hmph, is there a way around that ?
21:16:40 dhellmann: got it
21:16:46 sorry catching up but previous versions are awesome
21:16:49 sdague: nothing reliable
21:16:53 that way I can see when an item was added or a default changed
21:17:03 +1 on prev. versions
21:17:12 on some granularity
21:17:16 fungi: that was what one ops guy proposed
21:17:17 we just don't want to be including autogenerated sample configs into project git repos if we can help it. and if we do it would need to be something along the lines of the reqs/tx proposal changes
21:17:19 so to have previous versions, the easiest is to store it in tarball
21:17:23 so... is milestone level granular enough?
21:17:26 jeblair: what if we publish the files with a version number matching the sdist, but not *in* the sdist?
21:17:32 i like the idea of a sample configs git repo
21:17:53 +1
21:17:54 that would work for us
21:18:06 mdorman: I think that's a reasonable idea, but it's orthogonal to publishing the default config
21:18:26 mfisch: the os-manuals config reference has sections for each project which show changes
21:18:30 sdague: i would think milestone level would be good, assuming there aren't config changes w/in a milestone (which i would hope not)
21:18:35 and something anyone can generate and publish as an advisory dataset with or without our assistance
21:18:45 mdorman: well there are config changes all the time
21:19:07 mdorman: all config changes happen between milestones, that's when the development happens :-)
21:19:09 can you explain "assuming there aren't config changes w/in a milestone"
21:19:10 maybe i misunderstand what milestone means
21:19:14 ildikov: that's been discussed in the thread, it's wrong many times, I've filed bugs
21:19:19 mdorman: it's a tag
21:19:19 mdorman: we've no way of holding back config changes from master until the milestone is cut
21:19:25 mdorman: config changes happen when libraries get updated which provide new config options into the servers, for example
21:19:28 so we have 3 milestones (roughly every 7 weeks) then a release
21:19:45 oh, ok. i thought milestone == icehouse, juno, kilo, etc.
21:19:49 so if at the milestones we had samples out, would that be granular enough
21:19:51 mfisch: it can happen that it's not perfectly up to date, but then we should improve the process for how it is updated
21:19:53 ah, no, those are releases
21:20:01 dhellmann: interesting; at least there's a clear delivery artifact and process, though perhaps less convenient to consume? actually i don't know about that last part. maybe it's more convenient.
21:20:06 maybe a notification on each merge that modifies the config? That's really what ops needs. What changed and which build it changed in.
21:20:18 hello, joehuang is just now able to log on to irc. the network is not stable enough to connect to freenode.
21:20:22 ildikov: not perfectly up to date does not explain the bugs. sorry
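
(If the samples do end up being shipped in the release tarballs as suggested above, an operator could fetch one for any past release or milestone without a git checkout, roughly like this. The tarballs.openstack.org URL pattern is real, but the exact file name and the presence/path of a sample config inside the tarball are assumptions tied to the proposal, not something that exists today:)

    # Hypothetical: pull only the sample config out of a published tarball
    curl -O https://tarballs.openstack.org/nova/nova-2014.2.tar.gz
    tar -xzf nova-2014.2.tar.gz nova-2014.2/etc/nova/nova.conf.sample
    # Repeating this for older tags would give the "previous versions" history
    # operators asked for, at release/milestone granularity.
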
21:20:25 mdorman: this is part of where the development workflow pain is coming from... new oslo lib has new config options which suddenly cause the sample configs on every project on multiple branches to be out of date
21:20:27 jeblair: yeah, as a separate file you can curl or whatever. And we can link to the directory full of them from the docs, and not have to make the docs build more complex either
21:20:32 it seems like all the things being asked for (by users of the config files) are solved by "keep a config file in the source tree". the pain point is devs keeping it up to date?
21:20:51 Rockyg: the changes which alter sample configuration aren't in the same projects which need the sample configs
21:21:00 joehuang: we inverted the two agenda topics. Currently discussing default config files
21:21:02 fungi: understood, thanks
21:21:03 mfisch: ok, I will check the bugs
21:21:04 the pain came from the config file including options from other libs
21:21:08 (the pain for devs)
21:21:12 fungi yeah, it's a bitch;-)
21:21:18 joehuang: should be back on cascading in 10-15min
21:21:18 notmyname: some of the configuration options are defined in libraries, not under the control of the app, and so the file can become out of date without the app devs realizing it
21:21:19 notmyname: perhaps -- we could have robots keep it up to date, though there's also the idea that autogenerated content shouldn't be in vcs.
21:21:20 otherwise it's straightforward to auto generate it
21:21:22 notmyname: right the pain is because libraries can define options
21:21:24 klindgren_ pinnnng
21:21:44 so the valid config for a project depends on the library versions
21:21:48 RESTful config-files-as-a-service
21:21:54 would it be a problem to generate it into another repo to not pollute yours?
21:21:59 sorry it takes half an hour for me to connect to the channel
21:22:08 a cronjob that generates them into github solves most of my needs
21:22:12 but not everyone
21:22:14 +1 yeah it seems like a separate sample config repo is a good solution for both sides?
21:22:16 but we could certainly have the proposal bot keep it up to date for each project-branch combo
21:22:22 so that would be the post merge publish
21:22:34 jeblair, we could just have proposal bot put the configs into the main trees as well
21:22:48 jeblair: yeh, though if we did that we need to make it so it's basically a noop test job
21:22:54 because these are going to change *a lot*
21:23:00 morganfainberg: yeah, that's what i was thinking; the 'other git repo' convo started mid stream in my response :)
21:23:05 jeblair: i wonder what we'd trigger that on... or just a periodic job like we do for translation updates?
21:23:06 morganfainberg: adding them to the tree after the merge means they are out of date if you check out the version with the merge
21:23:10 and while i know it's not a snappy turn around (instantaneous) it does meet our current needs.
21:23:34 except that the config in any given repo will be wrong after an option is changed or added
21:23:38 it's also only accurate if you have the same library versions
21:23:43 and that
21:23:59 the sample config is not based on the application; it does not belong inside the application
21:24:00 isn't that true for making it a release artifact as well?
21:24:00 same library versions as what?
21:24:02 i think it would be accurate enough, maybe even add a "last updated " line?
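
(A sketch of the "cronjob that regenerates them into a separate repo" idea floated above, run per project/branch combination much like the translation-update periodic jobs. The sample-configs repository, its layout, and the file paths are purely hypothetical:)

    # Hypothetical periodic job for one project/branch pair
    PROJECT=nova
    BRANCH=master
    git clone -b "$BRANCH" https://git.openstack.org/openstack/"$PROJECT"
    git clone https://git.openstack.org/openstack/sample-configs   # hypothetical repo
    (cd "$PROJECT" && tox -e genconfig)                            # regenerate the sample
    cp "$PROJECT/etc/$PROJECT/$PROJECT.conf.sample" \
       "sample-configs/$PROJECT/$BRANCH.conf.sample"
    (cd sample-configs && git add -A && git commit -m "Update $PROJECT $BRANCH sample config")
    # Pushing the commit would use whatever credentials/mechanism the job owner prefers.
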
21:24:11 dhellmann: yeah, i think it's never really up to date necessarily anyway, and the only way we enforced it mostly before was to bring development on a project to a halt until it got corrected
21:24:13 bknudson: as the build
21:24:19 bknudson: if oslo.messaging adds an option, your config file is out of date
21:24:21 so you know what the range that change spans?
21:24:23 given a time reference I can go back to the project repo and see the change
21:24:40 fungi: yeah, it was easier when all of the options were inside the app because of the incubated code
21:24:40 "why a guy changed this default from A to B and what's the commit log say as to why"
21:24:42 sdague, fungi: i was proceeding under the assumption that infrequent repo updates of the config file would be okay, based on the idea that publishing them with tarballs was okay.
21:24:43 bknudson: the version of all the libraries that have options is also needed in addition to the version of the project source
21:24:45 seems like that would always be the case since a range of library versions are supported
21:25:20 so mfisch, as long as you know when they were updated and you have timeframes on the updates - that meets your needs?
21:25:30 jeblair: that seems fine to me too. i'm not convinced that up-to-the-minute sample configs were part of the request
21:25:35 bknudson: true, which is why the most accurate way to get a sample file is to make it yourself using the versions of all of the libs you're running on your system -- but that's much less convenient
21:25:36 ideally I'd have every single change but I could deal with a time reference
21:25:37 X change spans commit aef123 to fff342
21:25:56 hm. if ops are running code from git they *must* generate their config on their own, because the combination of config options is different depending on what you have installed. so even if there is a repository with the config files for the different projects I think it's very unlikely that the libs used to generate the configs have the same versions as in the ops environment
21:26:33 mfisch: also, again, there's no discrete mapping from a sample configuration back to a commit in a particular git repo
21:26:34 toabctl: right. I thought the separate repository was for hand-crafted examples of specific use cases ("here's nova with qpid" and "here's nova with rabbit")
21:26:35 so the config has b
21:26:37 toabctl, remember these are strictly sample configs - an example reference. you don't need them to configure a service. most ops/deployers run from a stable release afaik
21:26:38 er
21:26:46 maybe the libraries need their own config files
21:26:54 asalkeld: +1
21:26:55 fungi: we could have the config generator put version strings at the top in comments
21:27:11 having the library versions in the sample config would be good.
21:27:15 +1
21:27:17 do we have a clear idea of what the constraints of the issue are? What specific cases are we trying to address? As there may not be a one size fits all answer
21:27:18 bknudson +1
21:27:31 dhellmann: does that get you info on where each config option came from and the history of the code which determined it?
21:27:35 bknudson: +1
21:27:41 jogo: +1
21:27:50 jogo: yes, I feel like we won't find the solution here, the problem space is more complex than it seems
21:27:53 fungi: not entirely, no, but we know where each option comes from so I think we can include that in the output if we don't already
21:28:05 by "know" I mean we know which entry point
21:28:17 use case: bug in library. want to update. Does it change the config?
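
(One way the "version strings at the top in comments" suggestion above could look, sketched as a post-processing step on the generated file rather than a change to the generator itself; the file paths and package filter are assumptions:)

    # Hypothetical: record which library versions produced this sample
    {
      echo "# Sample config generated against:"
      pip freeze | grep -Ei '^(oslo|nova)' | sed 's/^/#   /'
      cat etc/nova/nova.conf.sample
    } > etc/nova/nova.conf.sample.tmp
    mv etc/nova/nova.conf.sample.tmp etc/nova/nova.conf.sample
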
21:28:36 ttx: so, someone should work up a spec?
21:28:42 dhellmann: i was referring to mfisch's request to be able to figure out why a config option changed by looking at the sample config itselg
21:28:42 Anyone volunteering to summarize the problem and the perceived solutions ?
21:28:43 itself
21:28:46 dhellmann: yes
21:28:53 Rockyg: that is a great example of a use case, thanks
21:29:09 fungi: yeah, that would require a much much smarter config generator that pulled in git commit messages or something
21:29:10 dhellmann: or at least an ML thread if there is not enough meat for a spec yet
21:29:32 ttx: I'd rather go ahead and start with a skeleton spec and have some use cases proposed there
21:29:45 dhellmann ++
21:29:47 but as I'm not going to write it, I'll leave that decision up to the author :-)
21:29:53 so i think as a takeaway we can at least summarize some of these options as doable and take the temperature on the ml thread?
21:29:53 dhellmann: is that a rhetorical "I" or are you volunteering ? :)
21:30:03 fungi: ++
21:30:07 May I request a CC to the operators list on this discussion?
21:30:25 mfisch: that was the ml thread i was talking about
21:30:35 ok
21:30:40 I thought you meant -dv
21:30:41 -dev
21:30:54 mfisch: the one mentioned in the meeting agenda
21:30:58 fungi: you up for that ?
21:31:13 mfisch: it will most likely move to dev, but when it does, it will get announced on the ops list as moving
21:31:20 perfect
21:31:35 ttx: sure. i'll take a crack at ml as next step, then see if someone else wants to step up for a spec if people can agree on one particular solution as better than the others for the requested purpose
21:31:49 #agreed summarize some of these options as doable and take the temperature on the ops ml thread
21:32:02 #action fungi to take a crack at ml as next step, then see if someone else wants to step up for a spec if people can agree on one particular solution as better than the others for the requested purpose
21:32:10 fungi: many thx
21:32:12 fungi: ask for use cases that can be included in spec.
21:32:26 Rockyg: great idea--will make sure i do
21:32:27 Fungi: and thanks for taking this on
21:32:34 yw
21:32:35 ok, back to the first agenda item
21:32:42 thanks for the discussion, this has been a thorn for some time
21:32:46 thanks
21:32:48 #topic Next steps for cascading (joehuang)
21:32:51 So... a bit of history first
21:32:56 joehuang posted about his "cascading" approach to scaling OpenStack back in October
21:33:04 There was a thread back then, mostly asking about the difference with the "Cells" approach
21:33:12 Then it was discussed as part of the "scaling approaches" cross-project session in Paris
21:33:21 Now joehuang is wondering about the next steps, which prompted a new thread
21:33:30 That thread mostly questioned the need and priority for another scaling approach
21:33:40 If I had to summarize I'd say that:
21:33:48 https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack ?
21:33:51 "scaling openstack" == functional code or the organization?
21:33:54 (1) the cascading approach requires important changes and is heavily cross-project: therefore it requires strong buy-in from everyone in order to be successful
21:34:03 notmyname: code
21:34:05 (2) nobody (except its promoter) was really excited by the idea of spending any time on this
21:34:31 ...so I don't really see this effort as succeeding, unless the tide turns
21:34:41 which effort? cells?
21:34:47 bknudson: cascading
21:34:48 bknudson: cascading ...
21:34:59 * ttx shall post a few links
21:35:01 ttx: i agree with that summary, that's my take as well
21:35:13 ttx: same
21:35:13 but it's not scaling out for cascading
21:35:17 ttx: it's on the telco-NFV todo list, but it needs lots of architecting and design.
21:35:32 It's a big issue in telco.
21:35:33 ttx: Ack
21:35:35 it's about multi-clouds integration
21:35:53 http://lists.openstack.org/pipermail/openstack-dev/2014-September/047470.html
21:36:04 I think that's the key bit: the real goal is integrating multiple-vendor openstack, not really scaling
21:36:08 original post ^
21:36:13 and that is the heart of why there is little interest, IMHO
21:36:13 dansmith: yeah, that part scares me
21:36:19 or even if it was all the same vendor, it's still a huge scope
21:36:30 dansmith: there be dragons to support multi vendor
21:36:35 and i don't think that's a smart next step for us to try to tackle
21:36:41 I just want joehuang to have a clear answer
21:36:49 because we may not have been clear enough in the past
21:37:06 on interest/priority/effort
21:37:15 but the implementation is not hard, just add a new driver/agent for OpenStack itself. PoC only about 15K source code
21:37:17 russellb: agreed. the pre cells solution of zones was very similar to this and was abandoned due to many obstacles that couldn't be overcome at the time
21:37:17 ttx: what format do you want that in, and from who?
21:37:19 russellb: agreed, I think it's something that comes after a cells model is well established
21:37:24 So, too much work to start building. The work should go into getting a design and architecture that addresses the dragons
21:37:48 each one service may have a driver about 2~4k source code
21:37:48 joehuang: i strongly disagree that this is not hard or complicated :-)
21:37:53 Rockyg: well, more importantly, there are existing scaling efforts that we should get nailed down
21:37:57 first
21:37:59 dansmith: ideally that meeting would put everyone on the same page
21:38:04 joehuang, 15k is a large amount of additional code.
21:38:07 and the driver/agent can be decoupled from the tree
21:38:18 and once that's a solved problem, and cross project, then something like this could be considered
21:38:19 imo
21:38:28 morganfainberg: ++
21:38:30 if you want to maintain some out of tree drivers, by all means, have at it, it's open source :)
21:38:33 Can those scaling issues be addressed so that they inform the multi cloud effort?
21:38:34 joehuang: showing a proof of concept is not hard, but that doesn't mean a full implementation is easy
21:39:09 jogo: exactly, see: cells v1 and the attempts to make it feature complete in Nova
21:39:15 i fundamentally don't see the "multiple cloud effort" as something we should make a priority right now (if ever)
21:39:17 sdague: great example
21:39:20 sdague: like get the architecture of the scaling firmed up so Joe's effort has a reasonable base to build on
21:39:24 i assume that's 15k lines of code, not 15k bytes of code
21:39:30 it's already open source in stackforge/tricircle
21:39:49 russellb: I agree, it's just not a thing we need to concentrate on right now, hard or not (even though we know it's hard)
21:39:50 russellb: agreed, at least in the short term
21:39:56 fungi, that was my assumption
21:40:03 for cinder, the driver is about only 2k source code
21:40:18 joehuang: have you gotten this running and passed tempest-full in a multi cloud deployment?
21:40:25 joehuang: you mention the possibility to run cascading as an incubated project. How would that work ? Could this be implemented completely outside the existing projects ?
21:40:45 joehuang: you might get further if you talked about impact of the changes and less about size of them
21:41:07 if I thought every project totally rocked its scalability and such within a single region, then i think we could talk about this as a possible next step
21:41:09 but we're not there
21:41:14 I think everyone agrees that even if this was a good idea, now is not the time to implement it. The question is, what can joehuang do as a next step
21:41:15 russellb: ++
21:41:16 link https://github.com/stackforge/tricircle
21:41:17 to ttx, yes. the difference is that the test environment needs at least 3 openstacks
21:41:19 so i think it's a premature, very large distraction
21:41:26 o/ sorry late
21:41:27 ttx: I suspect that the project would need to track the integrated projects and ensure that a scaling solution doesn't break them. Negotiate for a shared solution
21:41:39 russellb: ++
21:41:59 joehuang: having complex test needs doesn't prevent it from being developed out of tree
21:42:08 https://github.com/stackforge/tricircle/blob/master/novaproxy/nova/compute/manager_proxy.py seems to have a lot of copied code from nova?
21:42:10 the test use case suite for current openstack can be reused for cascading
21:42:12 russellb: ++
21:42:12 so I think that the fastest path to getting to this, is not spending any time on it, but instead for people interested in multi cloud stuff to actively help with the existing scalability
21:42:18 and help make that solid
21:42:33 then it opens up the discussion in the future
21:42:40 it's inherited from nova-manager
21:42:47 russellb: distraction for folks already working on openstack scaling projects, but not for Joe. But Joe would need to coordinate with those others.
21:42:58 and some unused code has not been removed completely
21:43:01 Rockyg: it requires significant buy-in and coordination across all projects
21:43:08 *significant* design and code review time
21:43:16 Rockyg: yeh, there is tons of overhead for any feature of this scope
21:43:17 it hugely impacts what will get done in openstack overall
21:43:22 russellb: ++
21:43:40 russellb: yup. But at least don't design current openstack to make cascading impossible.
21:43:48 I'm with sdague and russellb on this one. I don't think we can afford the distraction, even if driven by a new group
21:44:00 Rockyg: don't think we are/have
21:44:01 ttx: agreed
21:44:02 and to be honest, i'm still unclear on some details of it - it has had some mixed messaging. without a bit more clarity the distraction gets worse.
21:44:18 ttx, russellb, sdague: +1
21:44:33 until very recently it was positioned much more as a scalability solution rather than an interoperability solution
21:44:52 fungi: indeed, and that's the summit session it was slated for: scale-out
21:45:09 hence a lot of the confusion i think
21:45:14 How about joe uses his repository to build a POC. When ready to show, then have a session to discuss where it is limited/broken and where it will likely work.
21:45:34 Rockyg: that's the distraction
21:45:41 redhat openstack can be integrated with original OpenStack with cascading
21:45:53 Rockyg: if this isn't a thing we need to work on right now, then that review of the gaps is taking away from other work
21:45:53 Rockyg: sure, we won't (can't, actually) prevent him from developing the solution
21:45:53 to be honest, i'm not interested in that as a problem to solve
21:45:59 at all
21:46:10 and to be clear, that's with my upstream hat on
21:46:23 i just don't think we should be trying to build complex technical solutions for something that's a business issue like that
21:46:30 so i don't think it should be in the discussion
21:46:41 sorry, too many messages, not been able to answer one by one
21:46:51 like I said, I think the fastest path to ever working on a thing like this is *not working on it now*, and instead focussing efforts on the scaling priorities already in the projects
21:46:52 dansmith, russellb: just because you aren't interested or don't have time, doesn't mean joe can't work on it. the nature of Open Source. You don't have to participate.
21:46:59 maybe something has not been explained
21:47:00 including reviewing those changes
21:47:14 Rockyg: there is a difference between "you can't work on it" and "OpenStack doesn't want to integrate it any time soon"
21:47:15 I don't think we can prevent people from working on this, but I haven't heard much support for this in the short term during this meeting yet.
21:47:16 Rockyg: no, of course not, but you said "have a session to discuss the gaps"
21:47:27 russellb: ++. on the other hand I do like things like Globally Distributed OpenStack Swift Cluster
21:47:36 Rockyg: yeah, I think it's fine for joehuang to work on it, but you're asking for help and participation and not hearing the "we don't have time this cycle" response
21:47:38 Rockyg: sure, you just can't expect people to dedicate cycles to that
21:47:42 Rockyg: he can do whatever he wants, but expecting us to circle around in X months to revisit isn't a thing there seems to be any support for here
21:47:51 Rockyg, and no one is disagreeing. the disagreement is that we may or may not be able to afford the time as the OpenStack project/PTLs/TC to review it/dedicate cycles to it
21:47:53 dansmith: ++
21:48:04 This sounds like a great third party product built on top of OpenStack and sold to telcos who need it
21:48:06 * morganfainberg left Core teams out of that
21:48:13 I haven't heard a single PTL (or even a single existing dev) supporting that idea yet
21:48:17 Not the gaps, the POC. It could be a BoF at the summit for that matter. Anyone who wants to, comes, otherwise, it's noise.
21:48:34 Given that I just don't think it will be successfully integrated
21:48:35 Rockyg: sure, but expectations need to be set correctly
21:48:40 Rockyg: you can apply for a session but don't be surprised if you don't get the space
21:48:44 joehuang: it sounds like you think we are missing something. What are we missing?
21:49:06 so it's better developed separately as a POC and trying to prove itself useful. At this moment there is a mindshare problem
21:49:22 "a mindshare problem" is a good way to capture it
21:49:23 because, working off on a fork out of tree on stuff people aren't interested in, just means expect to rewrite it from scratch if there is future interest.
21:49:32 many people have questions, maybe because I did not answer his question
21:49:42 good outcomes which could arise from this as a separate effort among the interested parties would be for them to file bugs they identify in projects which need fixing and are preventing their effort from working as well as it should. things which are legitimately bugs in existing openstack features
21:49:49 sdague: yes. Expectations: joe gets his own team if he wants it to happen. the team is responsible for a POC and addressing whatever questions come up.
21:49:56 fungi, huge +1 on that
21:49:58 it's not a technical problem, and joe thinks it's technical. Not yet, at least.
21:49:58 fungi: ++
21:50:11 Rockyg: https://github.com/stackforge/tricircle is the POC
21:50:15 it's there already
21:50:20 jogo: right :)
21:50:25 Fungi: ++ the responsibility is on Joe
21:50:28 so many messages, I can't answer each one,
21:50:32 and his team.
21:50:44 I think you shouldn't expect that the POC will merge as is either.
21:51:07 joehuang: so I think one of the big questions is, is this a technical solution to a pure business problem?
21:51:23 so is this a fair summary? feel free to continue your POC work, but right now, there seems to be no support from any existing devs, so it may not ever be something accepted
21:51:33 OK, let's slow down the discussion, so that the message is clearer
21:51:34 I wouldn't expect it to merge. Especially as a POC. It's once those other scaling things happen so that the stage is set, and the POC has been exercised to fix real world problems.
21:51:34 if people want to participate in OpenStack, they should participate in OpenStack, and look at the priorities that projects have already set, and help. If they want to do their own thing, that's cool, just don't expect it to become part of OpenStack.
21:51:36 things of course may change in some future release cycle
21:51:44 (the multi vendor / making several unique deployments look like one)
21:52:00 sdague: +
21:52:27 jogo: and is it even a problem (they can be separate regions, of course)
21:52:30 I like russell's summary
21:52:33 networking automation across openstack is very important
21:52:47 networking automation across a globally distributed tree of openstack clouds?
21:52:47 umm
21:52:48 joehuang: why, and can you clarify that
21:53:02 is a good example of why i think this should be considered out of scope for right now
21:53:09 networking automation? I'm sure that won't be contentious at all ;)
21:53:09 that's a huge effort
21:53:09 refer to the VDF use case
21:53:15 how about we make neutron work really well for one region first
21:53:20 Yup. I think the key info for joe here, is OpenStack isn't ready for the project, and if he wants it ready sooner than later, he needs to help make it ready by working on the scaling being coded now.
21:53:23 russellb: ++
21:53:36 Rockyg: ++
21:53:43 Rockyg: yes, that's what I'm hearing, too
21:53:54 ok, let's spell it out
21:53:55 Rockyg, dead on
21:54:08 #agreed OpenStack isn't ready for the project, and if he wants it ready sooner than later, joehuang needs to help make it ready by working on the scaling being coded now.
21:54:11 * jogo googles VDF and finds Virginia Defense Force
21:54:21 jogo: lol
21:54:35 #info feel free to continue your POC work, but right now, there seems to be no support from any existing devs, so it may not ever be something accepted
21:54:48 I think that's a fair summary ?
21:54:51 +1
21:54:51 ++
21:54:55 +1
21:55:09 +1
21:55:10 +1
21:55:14 ++
21:55:23 +1
21:55:29 +1
21:55:29 no support from the meeting?
21:55:30 +1
21:55:40 +1
21:55:45 joehuang: in Paris? No, I did not detect any support in that meeting
21:55:48 +1
21:56:19 joehuang: I have yet to find an existing core dev in any of the affected projects ready to back the idea
21:56:26 so we continue our work, and ignite a discussion later when new progress comes
21:56:38 joehuang: what is VDF? link
21:56:47 joehuang: that sounds like the best plan for right now
21:56:49 Vodafone
21:57:02 joehuang: also, if you encounter scaling issues in getting your POC to work (and you will) file them as bugs, and help fixing them
21:57:06 joehuang, and please help contribute to solving the issues with the scaling in the current projects.
21:57:12 joehuang: ++ and help on openstack work that leads to what you want on cascading
21:57:15 as ttx just said
21:57:29 #topic Open discussion & announcements
21:57:50 We had 1:1 syncs today, kilo-1 tags are getting out as we speak. Logs at:
21:57:54 #link http://eavesdrop.openstack.org/meetings/ptl_sync/2014/ptl_sync.2014-12-16-09.00.html
21:58:08 It looks like the issue with oslo.db and sqlalchemy triggered by the setuptools release this weekend is fixed. Please let me know if you are seeing failures still in your projects.
21:58:22 Also note that we'll discuss openstack-specs as a recurring item in future meetings:
21:58:24 there is one more patch to land in juno to pin oslo.db <1.1
21:58:27 #link https://review.openstack.org/#/q/status:open+project:openstack/openstack-specs+branch:master,n,z
21:58:40 PTLs, ops, everyone shall review those ^
21:59:07 k
21:59:19 just a minor note, keystoneclient is going to be bumped to 1.0.0 this next release - it's been stable for a looong time, we're just marking it as a stable version # (no incompat changes going in, etc) - if anyone has any concerns, let me know [e.g. infra side, etc].
21:59:26 currently one on log guidelines and one on OSProfiler
21:59:55 Anything else, anyone ?
22:00:26 ttx: meeting scheduling for next 2 weeks?
22:00:31 hope you all have happy holidays, if you're taking any time off :)
22:00:44 eglynn: probably skip next 2
22:00:54 ttx: cool, sounds reasonable
22:01:00 anyone wanting to have a cross-project on Dec 23 or Dec 30 ?
22:01:15 (there is a trick hidden in that question)
22:01:47 my projects would be very cross if they had to meet then ;-)
22:02:01 Alright, have a great holiday season everyone! Next meeting January 6
22:02:05 #endmeeting