21:01:49 <ttx> #startmeeting crossproject
21:01:49 <openstack> Meeting started Tue Dec 16 21:01:49 2014 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:50 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:53 <openstack> The meeting name has been set to 'crossproject'
21:01:53 <SergeyLukjanov> o/
21:01:58 <ttx> Our agenda for today:
21:02:12 <ttx> #link http://wiki.openstack.org/Meetings/CrossProjectMeeting
21:02:36 <ttx> Do we have joehuang around?
21:03:12 <ttx> Let's invert the two agenda items then to give him a chance to join
21:03:35 <ttx> #topic Providing an alternative to shipping default config file (ttx)
21:03:48 <ttx> There was a recent thread on the operators ML complaining about the removal of default config files from git:
21:03:53 <ttx> #link http://lists.openstack.org/pipermail/openstack-operators/2014-December/005658.html
21:04:06 <ttx> That thread was derailed to talk about packaging, but I think the original concern is valid: default config files have value to operators and we removed them
21:04:18 <ttx> Now, we removed them for a reason: it was pretty painful to keep them in sync and often resulted in various failures
21:04:33 <dhellmann> can we build them in the docs, instead of the source tree? that would make them available but not gum up the git repos with more automated changes or files that are out of date
21:04:37 <ttx> So my question is, what can we offer (ideally a standard solution) to give operators those files back, while not restoring the original problem
21:04:50 <ttx> dhellmann: fungi suggested something along those lines, yes
21:04:57 <dhellmann> and by docs, I mean the developer docs so they are updated on every commit
21:05:06 <ttx> generate and post the sample as part of dev docs
21:05:09 <dhellmann> although it certainly wouldn't hurt to add them to the other docs as well
21:05:12 <notmyname> aren't more docs moving into the source tree?
21:05:15 <asalkeld> +1
21:05:16 <fungi> ahh, yes zigo said he wasn't going to be around for the meeting, but wanted to pass along a recommendation of having the sdist step generate sample configs to include in the tarballs
21:05:24 <mfisch> that would work for me for what I need the sample files for
21:05:34 <morganfainberg> fungi, that was my view, if sdist can do that, i'd like it there
21:05:36 <dhellmann> mfisch: which solution would work for you?
21:05:39 <ildikov> dhellmann: I would suggest adding it to OS-manuals
21:05:48 <ttx> so the problem with the "just have sdist run tools/config/generate_sample.sh -b . -p nova -o etc/nova" is that the result heavily depends on the env it's being run on
21:05:56 <dhellmann> ildikov: the problem there is that isn't rebuilt on every merge into a project
21:05:57 <ildikov> dhellmann: I think that would be clearer as we have already a whole config reference there
21:05:57 <mfisch> dhellmann: anywhere in a tree I can get to
21:06:03 <ttx> I like the docs post because we run it
21:06:03 <fungi> and while i agree, i think having the code doc build step also generate them and include them somewhere would be a useful addition
21:06:19 <morganfainberg> i'm fine with either solution
21:06:20 <toabctl> some projects have a tox genconfig env for building the config sample
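(Illustration — a minimal tox.ini sketch of the genconfig environment toabctl mentions, reusing the generate_sample.sh invocation ttx quotes above; the target name and paths vary per project and are assumptions here:)

    [testenv:genconfig]
    # regenerate etc/nova/nova.conf.sample from the checked-out tree;
    # project name and output path are per-project assumptions
    commands = bash tools/config/generate_sample.sh -b . -p nova -o etc/nova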
21:06:28 <mfisch> which sometimes works
21:06:34 <morganfainberg> i'd like to evict the sample config from keystone if we had a better alternative
21:06:40 <morganfainberg> such as docs
21:06:50 <ildikov> dhellmann: yes, I know, it just muddles the purpose of each doc we have a bit
21:06:57 <dhellmann> fungi: having the sdist build it might be challenging, since the tool uses entry points to find the options so the code has to be "installed" for it to work
21:07:00 <morganfainberg> but i know until we have that alternative keystone will continue to do the manual updates prior to releases.
21:07:03 <russellb> one sucky thing is after you've run it, you have a file sitting around that may or may not be accurate anymore
21:07:04 <dhellmann> also it needs the dependencies installed
21:07:05 <ttx> dhellmann: ++
21:07:09 <fungi> ttx: we control the environment for the sdist build in our post jobs which generate our tarballs
21:07:12 <russellb> and having to run tox every time you want to look at a config reference is annoying
21:07:16 <mfisch> +1 russellb
21:07:16 <bknudson> having the sample config in tree is kind of handy since it's easier to see what the output will look like in a review.
21:07:24 <ttx> fungi: right, but encouraging others to run it might be counterproductive
21:07:25 <jogo> would having 'tox -egenconfig' be a standard help at all?
21:07:31 <fungi> dhellmann: we already do when we run sdist
21:07:33 <mfisch> and keeping your mac up to date on 10 different requirements files from 10 projects daily is not cool
21:07:34 <ildikov> dhellmann: maybe we can refer the developer docs from manuals
21:07:46 <dhellmann> fungi: oh, because we're running it under "tox -e venv"?
21:07:50 <mfisch> I look at these about once every few weeks
21:07:52 <fungi> dhellmann: yep
21:07:57 <morganfainberg> bknudson, if it was generated like docs - and visible - that would be fine as well, -edocs instead of -egenconfig vs in-tree
21:07:57 <ttx> jogo: that would add a bit of predictability for sure
21:08:06 <fungi> ttx: i took the recommendation not as suggesting consumers rerun sdist
21:08:13 <dhellmann> fungi: ok, in that case as long as we're doing it in our build and not when someone checks out the source and runs "python setup.py sdist" I think it's ok
21:08:27 <fungi> ttx: but rather that the sdist _we_ build could include those files
21:08:47 <ttx> Ideally we would adopt a common solution, so that ops don't have to find out the way each project decided to make that default config file available
21:08:49 <fungi> via whatever mechanism
21:08:55 <morganfainberg> ttx +1 for common solution
21:09:01 <mfisch> +1 for common too
21:09:06 <dhellmann> fungi: I wonder if that would require any manifest trickery, but that's an implementation detail
21:09:10 <morganfainberg> that doesn't require gating on a static file in the tree.
21:09:17 <fungi> and yes, that's what i took as the reason for discussing it in the cross-project meeting. standardizing on a mechanism and location
21:09:17 <dhellmann> ttx: ++
21:09:41 <ildikov> jogo: for instance in Ceilometer we have it
21:09:47 <ttx> fungi: I know zigo runs sdist to rebuild tarballs from git, that's why I mentioned it
21:10:18 <dhellmann> fungi: fwiw, some projects have not adopted the new config generator, and that needs project-specific args, so we probably want a tox.ini or shell script interface
21:10:20 <fungi> ttx: actually he said he doesn't
21:10:42 <fungi> ttx: he tars up the contents from git plus files added to create the debian source packages
21:10:45 <ttx> fungi: ah. pretty sure he used to though
21:10:53 <dhellmann> fungi: in fact, probably a shell script called by tox, so you can "tox -e venv -- tools/genconfig.sh" and other devs can "tox -e genconfig"
21:11:40 <fungi> dhellmann: agreed, but i think where the opposition on the ops list is coming from is "i don't want to have to run something to generate sample configs, just tell me where to get them from"
21:11:43 <ttx> So I think tox -e genconfig + inclusion in dev docs sounds like the way to go
21:11:58 <dhellmann> fungi: right, I'm just proposing the common API for our infrastructure to use to build them when packaging
21:12:03 <sdague> fungi: also, it's not always simple to set up the env
21:12:13 <fungi> dhellmann: sure, that works for me as a solution
21:12:28 <dhellmann> update the package job to call "tox -e venv -- tools/genconfig.sh" before "tox -e venv -- python setup.py sdist"
21:12:28 <fungi> sdague: agreed, that's probably the largest reason why they don't want to have to run something to generate them
21:12:58 <fungi> dhellmann: yep. and add a similar step to the doc build job for the individual projects too
21:13:03 <dhellmann> right
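(A sketch of the job sequence dhellmann and fungi outline here; hypothetical, since the tools/genconfig.sh interface is exactly what is being proposed, not an existing file:)

    # proposed packaging job: build the sample config, then the tarball
    tox -e venv -- tools/genconfig.sh
    tox -e venv -- python setup.py sdist
    # plus a similar genconfig step in each project's doc build job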
21:13:15 <ttx> I'd say the next step is a openstack-specs spec
21:13:15 <fungi> and then hyperlink those files in the template or something
21:13:25 <dhellmann> or literalinclude
21:13:26 <ttx> so that we can get ops and PTLs +1s on it
21:13:36 <Rockyg> Ops need to be able to get previous versions, too.
21:14:03 <Rockyg> not just current and release
21:14:03 <dhellmann> Rockyg: previous versions with what granularity?
21:14:05 <jeblair> are we sure it's a good idea to make building an sdist more complicated?
21:14:05 <fungi> Rockyg: we'd need a definition of "previous versions"
21:14:30 <ttx> jeblair: I suggested having tox -e genconfig + inclusion in dev docs
21:14:34 <dhellmann> jeblair: this would be an optional step our build job would do, and that wouldn't be done by someone building an sdist by hand elsewhere
21:14:38 <Rockyg> So, if a team is running say a month behind the head of tree...
21:14:43 <jlk> things with stable/<version> ?
21:14:43 <sdague> jeblair: how much more complicated is sdist made by it?
21:14:44 <jeblair> i mean, everyone knows how to build an sdist, right?  except we're proposing that _openstack_ have a different way of building them, so if you want to build it and get the same content, you have to do something extra
21:15:06 <dhellmann> jeblair: well, that's a fair point
21:15:10 <toabctl> Rockyg: previous versions are always buildable from git. and the doc changes are already documented between openstack releases.
21:15:39 <ttx> Rockyg: that's a good argument in favor of storing them in tarballs
21:15:48 <jeblair> dhellmann: what's the reason not to generate them in the sdist step?
21:15:49 <Rockyg> right.  and the ops are saying they don't want to build the sample configs from git
21:16:19 <dhellmann> jeblair: in order for the config generator to work the code for the project and all of its dependencies need to be installed so the entry points work
21:16:23 <Rockyg> Yes.  Tarballs will probably satisfy most of the devops and the ones it doesn't are likely very capable of rebuilding from git
21:16:23 <ttx> jeblair: it's because random people running python setup.py sdist will end up with a partial config file
21:16:31 <dhellmann> jeblair: if I check out a git tree and run "python setup.py sdist" it shouldn't install anything
21:16:37 <fungi> if someone wanted to build a service that generated all the iterated changes for sample configs for each project and stored them in a git repository, that would be one solution to the "history" problem i guess
21:16:39 <sdague> dhellmann: hmph, is there a way around that ?
21:16:40 <jeblair> dhellmann: got it
21:16:46 <mfisch> sorry catching up but previous versions are awesome
21:16:49 <dhellmann> sdague: nothing reliable
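(To illustrate dhellmann's point: the new config generator discovers options through setuptools entry points, roughly like the setup.cfg excerpt below, which only resolve once the project and its dependencies are installed; the module and function names are assumptions:)

    [entry_points]
    oslo.config.opts =
        nova = nova.opts:list_opts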
21:16:53 <mfisch> that way I can see when an item was added or a default changed
21:17:03 <mdorman> +1 on prev. versions
21:17:12 <mdorman> on some granularity
21:17:16 <Rockyg> fungi: that was what one ops guy proposed
21:17:17 <fungi> we just don't want to be including autogenerated sample configs into project git repos if we can help it. and if we do it would need to be something along the lines of the reqs/tx proposal changes
21:17:19 <ttx> so to have previous versions, the easiest is to store it in tarball
21:17:23 <sdague> so... is milestone level granular enough?
21:17:26 <dhellmann> jeblair: what if we publish the files with a version number matching the sdist, but not *in* the sdist?
21:17:32 <mdorman> i like the idea of a sample configs git repo
21:17:53 <ryansb> +1
21:17:54 <mfisch> that would work for us
21:18:06 <dhellmann> mdorman: I think that's a reasonable idea, but it's orthogonal to publishing the default config
21:18:26 <ildikov> mfisch: the os-manuals config reference has sections for each project which show changes
21:18:30 <mdorman> sdague: i would think milestone level would be good, assuming there aren’t config changes w/in a milestone (which i would hope not)
21:18:35 <fungi> and something anyone can generate and publish as an advisory dataset with or without our assistance
21:18:45 <sdague> mdorman: well there are config changes all the time
21:19:07 <dhellmann> mdorman: all config changes happen between milestones, that's when the development happens :-)
21:19:09 <sdague> can you explain "assuming there aren't config changes w/in a milestone"
21:19:10 <mdorman> maybe i misunderstand what milestone means
21:19:14 <mfisch> ildikov: thats been discussed in the thread, its wrong many times, Ive filed bugs
21:19:19 <ttx> mdorman: it's a tag
21:19:19 <eglynn> mdorman: we've no way of holding back config changes from master until the milestone is cut
21:19:25 <fungi> mdorman: config changes happen when libraries get updated which provide new config options into the servers, for example
21:19:28 <sdague> so we have 3 milestones (roughly ever 7 weeks) then a release
21:19:45 <mdorman> oh, ok. i thought milestone == icehouse, juno, kilo, etc.
21:19:49 <sdague> so if at the milestones we had samples out, would that be granular enough
21:19:51 <ildikov> mfisch: it can happen that it's not perfectly up to date, but then we should improve that process how it is updated
21:19:53 <dhellmann> ah, no, those are releases
21:20:01 <jeblair> dhellmann: interesting; at least there's a clear delivery artifact and process, though perhaps less convenient to consume?  actually i don't know about that last part.  maybe it's more convenient.
21:20:06 <Rockyg> maybe a notification on each merge that modifies the config?  That's really what ops needs.  What changed and which build it changed in.
21:20:18 <joehuang> hello, joehuang has just been able to log on to IRC. the network connection to freenode is not stable.
21:20:22 <mfisch> ildikov: not perfectly up to date does not explain the bugs. sorry
21:20:25 <fungi> mdorman: this is part of where the development workflow pain is coming from... new oslo lib has new config options which suddenly cause the sample configs on every project on multiple branches to be out of date
21:20:27 <dhellmann> jeblair: yeah, as a separate file you can curl or whatever. And we can link to the directory full of them from the docs, and not have to make the docs build more complex either
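(In that scheme an operator could fetch a versioned sample without building anything, e.g. something like the following; the URL layout and file name are purely hypothetical:)

    curl -O http://tarballs.openstack.org/nova/nova-2014.2.1.conf.sample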
21:20:32 <notmyname> it seems like all the things being asked for (by users of the config files) are solved by "keep a config file in the source tree". the pain point is devs keeping it up to date?
21:20:51 <fungi> Rockyg: the changes which alter sample configuration aren't in the same projects which need the sample configs
21:21:00 <ttx> joehuang: we inverted the two agenda topics. Currently discussing default config files
21:21:02 <mdorman> fungi:  understood, thanks
21:21:03 <ildikov> mfisch: ok, I will check the bugs
21:21:04 <russellb> the pain came from the config file including options from other libs
21:21:08 <russellb> (the pain for devs)
21:21:12 <Rockyg> fungi yeah, it's a bitch;-)
21:21:18 <ttx> joehuang: should be back on cascading in 10-15min
21:21:18 <dhellmann> notmyname: some of the configuration options are defined in libraries, not under the control of the app, and so the file can become out of date without the app devs realizing it
21:21:19 <jeblair> notmyname: perhaps -- we could have robots keep it up to date, though there's also the idea that autogenerated content shouldn't be in vcs.
21:21:20 <russellb> otherwise it's straight forward to auto generate it
21:21:22 <sdague> notmyname: right the pain is because libraries can define options
21:21:24 <harlowja> klindgren_ pinnnng
21:21:44 <sdague> so the valid config for a project depends on the library versions
21:21:48 <jogo> RESTful config-files-as-a-service
21:21:54 <mfisch> would it be a problem to generate it into another repo to not pollute yours?
21:21:59 <joehuang> sorry, it took half an hour for me to connect to the channel
21:22:08 <mfisch> a cronjob that generates them into github solves most of my needs
21:22:12 <mfisch> but not everyone
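(A rough sketch of the cron-driven publishing mfisch describes — every path, repo, and file name here is a hypothetical stand-in:)

    #!/bin/sh
    # publish-sample-configs.sh -- run nightly from cron (hypothetical paths)
    cd /srv/src/nova
    git pull
    tox -e genconfig
    cp etc/nova/nova.conf.sample /srv/sample-configs/nova/
    cd /srv/sample-configs
    git commit -am "nova sample $(date +%F)"
    git push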
21:22:14 <mdorman> +1 yeah it seems like a separate sample config repo is a good solution for both sides?
21:22:16 <jeblair> but we could certainly have the proposal bot keep it up to date for each project-branch combo
21:22:22 <sdague> so that would be the post merge publish
21:22:34 <morganfainberg> jeblair, we could just have proposal bot put the configs into the main trees as well
21:22:48 <sdague> jeblair: yeh, though if we did that we need to make it so it's basically a noop test job
21:22:54 <sdague> because these are going to change *a lot*
21:23:00 <jeblair> morganfainberg: yeah, that's what i was thinking; the 'other git repo' convo started mid stream in my response :)
21:23:05 <fungi> jeblair: i wonder what we'd trigger that on... or just a periodic job like we do for translation updates?
21:23:06 <dhellmann> morganfainberg: adding them to the tree after the merge means they are out of date if you check out the version with the merge
21:23:10 <morganfainberg> and while i know it's not a snappy turn around (instantaneous) it does meet our current needs.
21:23:34 <dhellmann> except that the config in any given repo will be wrong after an option is changed or added
21:23:38 <sdague> it's also only accurate if you have the same library versions
21:23:43 <dhellmann> and that
21:23:59 <dhellmann> the sample config is not based on the application; it does not belong inside the application
21:24:00 <jeblair> isn't that true for making it a release artifact as well?
21:24:00 <bknudson> same library versions as what?
21:24:02 <morganfainberg> i think it would be accurate enough, maybe even add a "last updated <XXXX>" line?
21:24:11 <fungi> dhellmann: yeah, i think it's never really up to date necessarily anyway, and the only way we enforced it mostly before was to bring development on a project to a halt until it got corrected
21:24:13 <sdague> bknudson: as the build
21:24:19 <dhellmann> bknudson: if oslo.messaging adds an option, your config file is out of date
21:24:21 <morganfainberg> so you know the range that change spans?
21:24:23 <mfisch> given a time reference I can go back to the project repo and see the change
21:24:40 <dhellmann> fungi: yeah, it was easier when all of the options were inside the app because of the incubated code
21:24:40 <mfisch> "why did this guy change this default from A to B, and what does the commit log say as to why"
21:24:42 <jeblair> sdague, fungi: i was proceeding under the assumption that infrequent repo updates of the config file would be okay, based on the idea that publishing them with tarballs was okay.
21:24:43 <sdague> bknudson: the version of all the libraries that have options is also needed in addition to the version of the project source
21:24:45 <bknudson> seems like that would always be the case since a range of library versions are supported
21:25:20 <morganfainberg> so mfisch, as long as you know when they were updated and you have timeframes on the updates - that meets your needs?
21:25:30 <fungi> jeblair: that seems fine to me too. i'm not convinced that up-to-the-minute sample configs were part of the request
21:25:35 <dhellmann> bknudson: true, which is why the most accurate way to get a sample file is to make it yourself using the versions of all of the libs you're running on your system -- but that's much less convenient
21:25:36 <mfisch> ideally I'd have every single change but I could deal with a time reference
21:25:37 <morganfainberg> X change spans commit aef123 to fff342
21:25:56 <toabctl> hm. if ops are running code from git they *must* generate their config on their own, because the combination of config options differs depending on what you have installed. so even if there is a repository with the config files for the different projects, I think it's very unlikely that the libs used to generate the configs have the same versions as in the ops environment
21:26:33 <fungi> mfisch: also, again, there's no discrete mapping from a sample configuration back to a commit in a particular git repo
21:26:34 <dhellmann> toabctl: right. I thought the separate repository was for hand-crafted examples of specific use cases ("here's nova with qpid" and "here's nova with rabbit")
21:26:37 <morganfainberg> toabctl, remember these are strictly sample configs - an example reference. you don't need them to configure a service. most ops/deployers run from a stable release afaik
21:26:46 <asalkeld> maybe the libraries need their own config files
21:26:54 <russellb> asalkeld: +1
21:26:55 <dhellmann> fungi: we could have the config generator put version strings at the top in comments
21:27:11 <bknudson> having the library versions in the sample config would be good.
21:27:15 <mdorman> +1
21:27:17 <jogo> do we have a clear idea of what the constraints of the issue are? what specific cases are we trying to address? there may not be a one-size-fits-all answer
21:27:18 <morganfainberg> bknudson +1
21:27:31 <fungi> dhellmann: does that get you info on where each config option came from and the history of the code which determined it?
21:27:35 <toabctl> bknudson: +1
21:27:41 <ildikov> jogo: +1
21:27:50 <ttx> jogo: yes, I feel like we won't find the solution here, the problem space is more complex than it seems
21:27:53 <dhellmann> fungi: not entirely, no, but we know where each option comes from so I think we can include that in the output if we don't already
21:28:05 <dhellmann> by "know" I mean we know which entry point
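(So the generated file could open with a header along these lines; the exact format and the versions shown are invented for illustration:)

    # Generated by oslo-config-generator
    # nova 2015.1.dev123
    # oslo.messaging 1.5.1
    # oslo.db 1.2.0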
21:28:17 <Rockyg> use case:  bug in library.  want to update.  Does it change the config?
21:28:36 <dhellmann> ttx: so, someone should work up a spec?
21:28:42 <fungi> dhellmann: i was referring to mfisch's request to be able to figure out why a config option changed by looking at the sample config itself
21:28:42 <ttx> Anyone volunteering to summarize the problem and the perceived solutions ?
21:28:46 <ttx> dhellmann: yes
21:28:53 <jogo> Rockyg: that is a great example of a use case, thanks
21:29:09 <dhellmann> fungi: yeah, that would require a much much smarter config generator that pulled in git commit messages or something
21:29:10 <ttx> dhellmann: or at least an ML thread if there is not enough meat for a spec yet
21:29:32 <dhellmann> ttx: I'd rather go ahead and start with a skeleton spec and have some use cases proposed there
21:29:45 <Rockyg> dhellmann ++
21:29:47 <dhellmann> but as I'm not going to write it, I'll leave that decision up to the author :-)
21:29:53 <fungi> so i think as a takeaway we can at least summarize some of these options as doable and take the temperature on the ml thread?
21:29:53 <ttx> dhellmann: is that a rhetorical "I" or are you volunteering ? :)
21:30:03 <dhellmann> fungi: ++
21:30:07 <mfisch> May I request a CC to the operators list on this discussion?
21:30:25 <fungi> mfisch: that was the ml thread i was talking about
21:30:35 <mfisch> ok
21:30:40 <mfisch> I thought you meant -dev
21:30:54 <fungi> mfisch: the one mentioned in the meeting agenda
21:30:58 <ttx> fungi: you up for that ?
21:31:13 <Rockyg> mfisch:  it will most likely move to dev, but when it does, it will get announced on the ops list as moving
21:31:20 <mfisch> perfect
21:31:35 <fungi> ttx: sure. i'll take a crack at ml as next step, then see if someone else wants to step up for a spec if people can agree on one particular solution as better than the others for the requested purpose
21:31:49 <ttx> #agreed summarize some of these options as doable and take the temperature on the ops ml thread
21:32:02 <ttx> #action fungi to take a crack at ml as next step, then see if someone else wants to step up for a spec if people can agree on one particular solution as better than the others for the requested purpose
21:32:10 <ttx> fungi: many thx
21:32:12 <Rockyg> fungi: ask for use cases that can be included in spec.
21:32:26 <fungi> Rockyg: great idea--will make sure i do
21:32:27 <Rockyg> Fungi: and thanks for taking this on
21:32:34 <fungi> yw
21:32:35 <ttx> ok, back to the first agenda item
21:32:42 <mfisch> thanks for the discussion, this has been a thorn for some time
21:32:46 <joehuang> thanks
21:32:48 <ttx> #topic Next steps for cascading (joehuang)
21:32:51 <ttx> So... a bit of history first
21:32:56 <ttx> joehuang posted about his "cascading" approach to scaling OpenStack back in October
21:33:04 <ttx> There was a thread back then, mostly asking about the difference with the "Cells" approach
21:33:12 <ttx> Then it was discussed as part of the "scaling approaches" cross-project session in Paris
21:33:21 <ttx> Now joehuang is wondering about the next steps, which prompted a new thread
21:33:30 <ttx> That thread mostly questioned the need and priority for another scaling approach
21:33:40 <ttx> If I had to summarize I'd say that:
21:33:48 <bknudson> https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack ?
21:33:51 <notmyname> "scaling openstack" == functional code or the organization?
21:33:54 <ttx> (1) the cascading approach requires important changes and is heavily cross-project: therefore it requires strong buy-in from everyone in order to be successful
21:34:03 <russellb> notmyname: code
21:34:05 <ttx> (2) nobody (except its promoter) was really excited by the idea of spending any time on this
21:34:31 <ttx> ...so I don't really see this effort as succeeding, unless the tide turns
21:34:41 <bknudson> which effort? cells?
21:34:47 <ttx> bknudson: cascading
21:34:48 <russellb> bknudson: cascading ...
21:34:59 * ttx shall post a few links
21:35:01 <russellb> ttx: i agree with that summary, that's my take as well
21:35:13 <dansmith> ttx: same
21:35:13 <joehuang> but cascading is not about scaling out
21:35:17 <Rockyg> ttx: it's on telco-NFV todo list, but it needs lots of architecting and design.
21:35:32 <Rockyg> It's a big issue in telco.
21:35:33 <mestery> ttx: Ack
21:35:35 <joehuang> it's about multi-cloud integration
21:35:53 <ttx> http://lists.openstack.org/pipermail/openstack-dev/2014-September/047470.html
21:36:04 <dansmith> I think that's the key bit: the real goal is integrating multiple-vendor openstack, not really scaling
21:36:08 <ttx> original post ^
21:36:13 <dansmith> and that is the heart of why there is little interest, IMHO
21:36:13 <jogo> dansmith: yeah, that part scares me
21:36:19 <russellb> or even if it was all the same vendor, it's still a huge scope
21:36:30 <jogo> dansmith: there be dragons to support multi vendor
21:36:35 <russellb> and i don't think that's a smart next step for us to try to tackle
21:36:41 <ttx> I just want joehuang to have a clear answer
21:36:49 <ttx> because we may not have been clear enough in the past
21:37:06 <ttx> on interest/priority/effort
21:37:15 <joehuang> but the implementation is not hard, just add a new driver/agent for OpenStack itself. the PoC is only about 15K source code
21:37:17 <alaski> russellb: agreed.  the pre cells solution of zones was very similar to this and was abandoned due to many obstacles that couldn't be overcome at the time
21:37:17 <dansmith> ttx: what format do you want that in, and from who?
21:37:19 <sdague> russellb: agreed, I think it's something that comes after a cells model is well established
21:37:24 <Rockyg> So, too much work to start building.  The work should go into getting a design and architecture that addresses the dragons
21:37:48 <joehuang> each service may have a driver of about 2~4k source code
21:37:48 <russellb> joehuang: i strongly disagree that this is not hard or complicated :-)
21:37:53 <sdague> Rockyg: well, more importantly, there are existing scaling efforts that we should get nailed down
21:37:57 <sdague> first
21:37:59 <ttx> dansmith: ideally that meeting would put everyone on same page
21:38:04 <morganfainberg> joehuang, 15k is a large amount of additional code.
21:38:07 <joehuang> and the driver/agent can be decoupled from the tree
21:38:18 <sdague> and once that's a solved problem, and cross project, then something like this could be considered
21:38:19 <morganfainberg> imo
21:38:28 <mestery> morganfainberg: ++
21:38:30 <russellb> if you want to maintain some out of tree drivers, by all means, have at it, it's open source :)
21:38:33 <Rockyg> Can those scaling issues be addressed so that they inform the multi cloud effort?
21:38:34 <jogo> joehuang: showing a proof of concept is not hard doesn't mean a full implementation is easy
21:39:09 <sdague> jogo: exactly, see: cells v1 and the attempts to make it feature complete in Nova
21:39:15 <russellb> i fundamentally don't see the "multiple cloud effort" as something we should make a priority right now (if ever)
21:39:17 <jogo> sdague: great example
21:39:20 <Rockyg> sdague: like get the architecture of the scaling firmed up so Joe's effort has a reasonable base to build on
21:39:24 <fungi> i assume that's 15k lines of code, not 15k bytes of code
21:39:30 <joehuang> it's already open source in stackforge/tricircle
21:39:49 <dansmith> russellb: I agree, it's just not a thing we need to concentrate on right now, hard or not (even though we know it's hard)
21:39:50 <sdague> russellb: agreed, at least in the short term
21:39:56 <morganfainberg> fungi, that was my assumption
21:40:03 <joehuang> for cinder, the driver is only about 2k source code
21:40:18 <jogo> joehuang: have you gotten this running and passed tempest-full in a multi cloud deployment?
21:40:25 <ttx> joehuang: you mention the possibility to run cascading as an incubated project. How would that work ? Could this be implemented completely outside the existing projects ?
21:40:45 <anteaya> joehuang: you might get further if you talked about the impact of the changes and less about their size
21:41:07 <russellb> if I thought every project totally rocked its scalability and such within a single region, then i think we could talk about this as a possible next step
21:41:09 <russellb> but we're not there
21:41:14 <ttx> I think everyone agrees that even if this was a good idea, now is not the time to implement it. The question is, what can joehuang do as a next step
21:41:15 <sdague> russellb: ++
21:41:16 <jogo> link https://github.com/stackforge/tricircle
21:41:17 <joehuang> to ttx: yes. the difference is that the test environment needs at least 3 OpenStacks
21:41:19 <russellb> so i think it's a premature, very large distraction
21:41:26 <thingee> o/ sorry late
21:41:27 <Rockyg> ttx: I suspect that the project would need to track the integrated projects and ensure that a scaling solution doesn't break them.  Negotiate for a shared solution
21:41:39 <mestery> russellb: ++
21:41:59 <ttx> joehuang: having complex test needs doesn't prevent it from being developed out of tree
21:42:08 <mikal> https://github.com/stackforge/tricircle/blob/master/novaproxy/nova/compute/manager_proxy.py seems to have a lot of copied code from nova?
21:42:10 <joehuang> the test case suite for current openstack can be reused for cascading
21:42:12 <dhellmann> russellb: ++
21:42:12 <sdague> so I think that the fastest path to getting to this, is not spending any time on it, but instead for people interested in multi cloud stuff to actively help with the existing scalability
21:42:18 <sdague> and help make that solid
21:42:33 <sdague> then it opens up the discussion in the future
21:42:40 <joehuang> it's inherited from nova-manager
21:42:47 <Rockyg> russellb: distraction for folks already working on openstack scaling projects, but not for Joe.  But Joe would need to coordinate with those others.
21:42:58 <joehuang> and some unused code has not been removed completely
21:43:01 <russellb> Rockyg: it requires significant buy-in and coordination across all projects
21:43:08 <russellb> *signficant* design and code review time
21:43:16 <sdague> Rockyg: yeh, there is tons of overhead for any feature of this scope
21:43:17 <russellb> it hugely impacts what will get done in openstack overall
21:43:22 <sdague> russellb: ++
21:43:40 <Rockyg> russellb: yup.  But at least don't design current openstack to make cascading impossible.
21:43:48 <ttx> I'm with sdague and russellb on this one. I don't think we can afford the distraction, even if driven by a new group
21:44:00 <russellb> Rockyg: don't think we are/have
21:44:01 <dansmith> ttx: agreed
21:44:02 <morganfainberg> and to be honest, i'm still unclear on some details of it - it has had some mixed messaging. without a bit more clarity the distraction gets worse.
21:44:18 <morganfainberg> ttx, russellb, sdague: +1
21:44:33 <fungi> until very recently it was positioned much more as a scalability solution rather than an interoperability solution
21:44:52 <dansmith> fungi: indeed, and that's the summit session it was slated for: scale-out
21:45:09 <fungi> hence a lot of the confusion i think
21:45:14 <Rockyg> How about joe uses his repository to build a POC.  When ready to show, then have a session to discuss where it is limited/broken and where it will likely work.
21:45:34 <dansmith> Rockyg: that's the distraction
21:45:41 <joehuang> redhat openstack can be integrated with original openstack with cascading
21:45:53 <dansmith> Rockyg: if this isn't a thing we need to work on right now, then that review of the gaps is taking away from other work
21:45:53 <ttx> Rockyg: sure, we won't (can't, actually) prevent him from developing the solution
21:45:53 <russellb> to be honest, i'm not interested in that as a problem to solve
21:45:59 <russellb> at all
21:46:10 <russellb> and to be clear, that's with my upstream hat on
21:46:23 <russellb> i just don't think we should be trying to build complex technical solutions for something that's a business issue like that
21:46:30 <russellb> so i don't think it should be in the discussion
21:46:41 <joehuang> sorry, too many messages, I haven't been able to answer them one by one
21:46:51 <sdague> like I said, I think the fastest path to ever working on a thing like this is *not working on it now*, and instead focussing efforts on the scaling priorities already in the projects
21:46:52 <Rockyg> dansmith, russellb:  just because you aren't interested or don't have time doesn't mean joe can't work on it.  the nature of Open Source.  You don't have to participate.
21:46:59 <joehuang> maybe something has not been explained
21:47:00 <sdague> including reviewing those changes
21:47:14 <russellb> Rockyg: there is a difference between "you can't work on it" and "OpenStack doesn't want to integrate it any time soon"
21:47:15 <mestery> I don't think we can prevent people from working on this, but I haven't heard much support for this in the short term during this meeting yet.
21:47:16 <dansmith> Rockyg: no, of course not, but you said "have a session to discuss the gaps"
21:47:27 <jogo> russellb: ++. on the other hand I do like things like Globally Distributed OpenStack Swift Cluster
21:47:36 <dhellmann> Rockyg: yeah, I think it's fine for joehuang to work on it, but you're asking for help and participation and not hearing the "we don't have time this cycle" response
21:47:38 <ttx> Rockyg: sure, you just can't expect people to dedicate cycles to that
21:47:42 <dansmith> Rockyg: he can do whatever he wants, but expecting us to circle around in X months to revisit isn't a thing there seems to be any support for here
21:47:51 <morganfainberg> Rockyg, and no one is disagreeing. the disagreement is whether we can afford the time as the OpenStack project/PTLs/TC to review it/dedicate cycles to it
21:47:53 <mestery> dansmith: ++
21:48:04 <edleafe> This sounds like a great third party product built on top of OpenStack and sold to telcos who need it
21:48:06 * morganfainberg left Core teams out of that
21:48:13 <ttx> I haven't heard a single PTl (or even a single existing dev) supporting that idea yet
21:48:17 <Rockyg> Not the gaps, the POC.  It could be a BoF at the summit for that matter.  Anyone who wants to, comes, otherwise, it's noise.
21:48:34 <ttx> Given that, I just don't think it will be successfully integrated
21:48:35 <sdague> Rockyg: sure, but expectations need to be set correctly
21:48:40 <anteaya> Rockyg: you can apply for a session but don't be surprised if you don't get the space
21:48:44 <jogo> joehuang: it sounds like you think we are missing something. What are we missing?
21:49:06 <ttx> so it's better developed separately as a POC, trying to prove itself useful. At this moment there is a mindshare problem
21:49:22 <russellb> "a mindshare problem" is a good way to capture it
21:49:23 <sdague> because working off on an out-of-tree fork on stuff people aren't interested in just means expecting to rewrite it from scratch if there is future interest.
21:49:32 <joehuang> many people have questions, maybe because I did not answer their questions
21:49:42 <fungi> good outcomes which could arise from this as a separate effort among the interested parties would be for them to file bugs they identify in projects which need fixing and are preventing their effort from working as well as it should. things which are legitimately bugs in existing openstack features
21:49:49 <Rockyg> sdague:  yes.  Expectations: joe gets his own team if he wants it to happen.  the team is responsible for a POC and addressing whatever questions come up.
21:49:56 <morganfainberg> fungi, huge +1 on that
21:49:58 <ttx> it's not a technical problem, but joe thinks it's technical. Not yet, at least.
21:49:58 <sdague> fungi: ++
21:50:11 <jogo> Rockyg: https://github.com/stackforge/tricircle  is the POC
21:50:15 <jogo> its there already
21:50:20 <dansmith> jogo: right :)
21:50:25 <Rockyg> Fungi:  ++  the responsibility is on Joe
21:50:28 <joehuang> so many messages, I can't answer each one
21:50:32 <Rockyg> and his team.
21:50:44 <mestery> I think you shouldn't expect that the POC will merge as is either.
21:51:07 <jogo> joehuang: so I think one of the big questions is, is this a technical solution to a pure business problem?
21:51:23 <russellb> so is this a fair summary?  feel free to continue your POC work, but right now, there seems to be no support from any existing devs, so it may not ever  be something accepted
21:51:33 <ttx> OK, let's slow down the discussion, so that the message is clearer
21:51:34 <Rockyg> I wouldn't expect it to merge.  Especially as a POC.  That comes once those other scaling things happen so that the stage is set, and the POC has been exercised against real world problems.
21:51:34 <sdague> if people want to participate in OpenStack, they should participate in OpenStack, look at the priorities that projects have already set, and help. If they want to do their own thing, that's cool, just don't expect it to become part of OpenStack.
21:51:36 <russellb> things of course may change in some future release cycle
21:51:44 <jogo> (the multi vendor / making several unique  deployments look like one)
21:52:00 <mestery> sdague: +
21:52:27 <russellb> jogo: and is it even a problem (they can be separate regions, of course)
21:52:30 <ttx> I like russell summary
21:52:33 <joehuang> networking automation across OpenStack clouds is very important
21:52:47 <russellb> networking automation across a globally distributed tree of openstack clouds?
21:52:47 <russellb> umm
21:52:48 <jogo> joehuang: why and can you clarify that
21:53:02 <russellb> is a good example of why i think this should be considered out of scope for right now
21:53:09 <mestery> networking automation? I'm sure that won't be contentious at all ;)
21:53:09 <russellb> that's a huge effort
21:53:09 <joehuang> refer to the VDF use case
21:53:15 <russellb> how about we make neutron work really well for one region first
21:53:20 <Rockyg> Yup.  I think the key info for joe here is: OpenStack isn't ready for the project, and if he wants it ready sooner rather than later, he needs to help make it ready by working on the scaling being coded now.
21:53:23 <mestery> russellb: ++
21:53:36 <ttx> Rockyg: ++
21:53:43 <dhellmann> Rockyg: yes, that's what I'm hearing, too
21:53:54 <ttx> ok, let's spell it out
21:53:55 <morganfainberg> Rockyg, dead on
21:54:08 <ttx> #agreed OpenStack isn't ready for the project, and if he wants it ready sooner rather than later, joehuang needs to help make it ready by working on the scaling being coded now.
21:54:11 * jogo googles VDF and finds Virginia Defense Force
21:54:21 <mestery> jogo: lol
21:54:35 <ttx> #info feel free to continue your POC work, but right now, there seems to be no support from any existing devs, so it may not ever  be something accepted
21:54:48 <ttx> I think that's a fair summary ?
21:54:51 <dansmith> +1
21:54:51 <russellb> ++
21:54:55 <morganfainberg> +1
21:55:09 <asalkeld> +1
21:55:10 <dhellmann> +1
21:55:14 <jogo> ++
21:55:23 <edleafe> +1
21:55:29 <markmcclain> +1
21:55:29 <joehuang> no support from the meeting?
21:55:30 <mestery> +1
21:55:40 <alaski> +1
21:55:45 <russellb> joehuang: in Paris?  No, I did not detect any support in that meeting
21:55:48 <sdague> +1
21:56:19 <ttx> joehuang: I have yet to find an existing core dev in any of the affected projects ready to back the idea
21:56:26 <joehuang> so we continue our work, and ignite a discussion later when new progress comes
21:56:38 <jogo> joehuang: what is VDF? link
21:56:47 <dhellmann> joehuang: that sounds like the best plan for right now
21:56:49 <joehuang> Vodafone
21:57:02 <ttx> joehuang: also, if you encounter scaling issues in getting your POC to work (and you will) file them as bugs, and help fix them
21:57:06 <morganfainberg> joehuang, and please help contribute to solving the scaling issues in the current projects.
21:57:12 <Rockyg> joehuang: ++ and help on openstack work that leads to what you want on cascading
21:57:15 <morganfainberg> as ttx just said
21:57:29 <ttx> #topic Open discussion & announcements
21:57:50 <ttx> We had 1:1 syncs today, kilo-1 tags are getting out as we speak. Logs at:
21:57:54 <ttx> #link http://eavesdrop.openstack.org/meetings/ptl_sync/2014/ptl_sync.2014-12-16-09.00.html
21:58:08 <dhellmann> It looks like the issue with oslo.db and sqlalchemy triggered by the setuptools release this weekend is fixed. Please let me know if you are seeing failures still in your projects.
21:58:22 <ttx> Also note that we'll discuss openstack-specs as a recurring item in future meetings:
21:58:24 <dhellmann> there is one more patch to land in juno to pin oslo.db <1.1
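(In requirements.txt terms that juno pin would look roughly like this; the lower bound is an assumption:)

    oslo.db>=1.0.0,<1.1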
21:58:27 <ttx> #link https://review.openstack.org/#/q/status:open+project:openstack/openstack-specs+branch:master,n,z
21:58:40 <ttx> PTLs, ops, everyone shall review those ^
21:59:07 <asalkeld> k
21:59:19 <morganfainberg> just a minor note, keystoneclient is going to be bumped to 1.0.0 this next release - it's been stable for a looong time, we're just marking it as stable version # (no incompat changes going in, etc) - if anyone has any concerns, let me know [e.g. infra side, etc].
21:59:26 <ttx> currently one on log guidelines and one on OSProfiler
21:59:55 <ttx> Anything else, anyone ?
22:00:26 <eglynn> ttx: meeting scheduling for next 2 weeks?
22:00:31 <russellb> hope you all have happy holidays, if you're taking any time off :)
22:00:44 <ttx> eglynn: probably skip next 2
22:00:54 <eglynn> ttx: cool, sounds reasonable
22:01:00 <ttx> anyone wanting to have a cross-project on Dec 23 or Dec 30 ?
22:01:15 <ttx> (there is a trick hidden in that question)
22:01:47 <Rockyg> my projects would be very cross if they had to meet then;-)
22:02:01 <ttx> Alright, have a great holiday season everyone! Next meeting January 6
22:02:05 <ttx> #endmeeting