02:31:26 <ekcs> #startmeeting congressteammeeting
02:31:27 <openstack> Meeting started Fri Nov 24 02:31:26 2017 UTC and is due to finish in 60 minutes.  The chair is ekcs. Information about MeetBot at http://wiki.debian.org/MeetBot.
02:31:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
02:31:30 <openstack> The meeting name has been set to 'congressteammeeting'
02:32:08 <ekcs> hello all. collaborative list of meeting topics is here as usual: https://etherpad.openstack.org/p/congress-meeting-topics
02:34:49 <ramineni_> ekcs: hi
02:35:01 <ekcs> hi ramineni_ !
02:38:05 <ekcs> ok let’s get started then.
02:38:14 <ekcs> topics here again: https://etherpad.openstack.org/p/congress-meeting-topics
02:38:26 <ekcs> #topic longer dev cycle discussion
02:38:49 <ekcs> just a quick pointer to a discussion going on for those who might be interested.
02:39:19 <ekcs> basically there is a proposal going around to lengthen the release cycle from the current < 6 months to 9-12 months.
02:40:11 <ramineni_> ekcs: ah ok, thanks for the pointer, didn't notice that on the ML
02:40:15 <ekcs> came out of some thought to relieve upgrade pressure on ops but also to accommodate the now slower development of openstack as a whole as it becomes more mature.
02:40:29 <ekcs> links to a couple messages on the topics etherpad.
02:41:07 <ekcs> it's being discussed on the SIG mailing list
02:41:27 <ekcs> so yea those interested can go follow/join that discussion.
02:42:36 <ekcs> anything to talk about here?
02:43:13 <ramineni_> no
02:43:22 <ekcs> great ok. next topic then.
02:43:28 <ekcs> #topic queens-2
02:43:45 <ekcs> speaking of release cycles, the queens-2 milestone is coming up haha.
02:43:49 <ekcs> week of 12/4
02:44:21 <ekcs> just a reminder for us to think about which things we want to get merged by the milestone.
02:46:00 <ekcs> ok moving on then.
02:46:04 <ekcs> #topic patches
02:46:31 <ekcs> anything we’d like to discuss on patches?
02:46:33 <ekcs> server: https://review.openstack.org/#/q/project:openstack/congress+status:open
02:46:34 <ekcs> dashboard: https://review.openstack.org/#/q/project:openstack/congress-dashboard+status:open
02:46:35 <ekcs> client: https://review.openstack.org/#/q/project:openstack/python-congressclient+status:open
02:46:43 <ramineni_> ekcs: yes,
02:47:04 <ramineni_> ekcs: one request: can we merge the ongoing tempest patches, so that I can start working on splitting the repo?
02:47:37 <ramineni_> ekcs: otherwise it will be duplicate effort to raise them again in the new repo
02:48:42 <ekcs> great. which patches are we talking about?
02:48:46 <ekcs> https://review.openstack.org/#/c/437470/ ?
02:49:03 <ekcs> not this one right? this is different: https://review.openstack.org/#/c/520879/
02:49:44 <ekcs> and this one? https://review.openstack.org/#/c/518647/
02:50:14 <ramineni_> ekcs: yes
02:50:41 <ramineni_> and the gate patches look almost ready
02:51:02 <ramineni_> ekcs: all the jobs seem to be green now
02:52:53 <ekcs> great. I’ll go ahead and merge them. I typically wait a bit to let people comment, but these are pretty straightforward infra-type patches.
02:53:21 <ramineni_> ekcs: great, thanks .. and one more request for help :)
02:53:28 <ramineni_> ekcs: https://review.openstack.org/#/c/518862/
02:53:47 <ramineni_> in this patch I removed some outdated code, but strangely the HAHT tests are failing..
02:54:21 <ekcs> ah i see. i can go ahead and take a look.
02:54:26 <ramineni_> ekcs: could you take a look at why they are failing? it will be faster for you to identify the issue
02:54:39 <ramineni_> ekcs: thanks
02:55:33 <ramineni_> ekcs: do you want to discuss this patch? seems we are not in agreement
02:55:34 <ramineni_> https://review.openstack.org/#/c/520917/
02:56:36 <ekcs> yea let’s talk about it. on the first point of whether it is stable, I’m not sure. though I don’t see any indication in docs that it should be treated as unstable. how do you look at it?
02:57:14 <ramineni_> I'm not sure either, I thought it's generally used for testing frameworks
02:57:15 <ekcs> the documentation IS fairly limited, merely mentioning the in-memory transport in the table without saying much about it.
02:57:34 <ramineni_> ekcs: I have also searched, but couldn't find anything
02:57:58 <ramineni_> ekcs: and the main problem is, I don't agree with adding it to the congress code
02:58:16 <ekcs> ok.
02:58:31 <ramineni_> third-party lib options change: when we first started there was no transport_url option, then they introduced it, and now they have made it mandatory
02:59:10 <ramineni_> I feel documentation is a better place for this type of change, rather than changing code every time something congress depends on changes
02:59:50 <ramineni_> and the second point: that is not a congress config option, so why maintain it in our code?
03:00:26 <ramineni_> just doesnt feel right
03:00:36 <ekcs> makes sense. on the first point, I see the issue, but it seems doing it in the docs is even worse for the deployer.
03:00:46 <ramineni_> ekcs: no, why?
03:01:40 <ekcs> say a deployer is running an older version right now and upgrades to queens. if it's only in the docs, then their congress stops working until they find the right thing in the docs and change their config.
03:01:52 <ekcs> if it's added as a default in code, then it continues to work like they expect.
03:01:56 <ekcs> same thing going forward.
03:02:09 <ekcs> every time there is a change, if we change the default in code things continue to work on every release.
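(A minimal sketch of the "default in code" approach ekcs describes above, assuming Congress pins [DEFAULT]/transport_url at startup when the deployer has left it unset; the function name and hook point are illustrative, not necessarily Congress's actual code.)

    from oslo_config import cfg
    import oslo_messaging

    def pin_transport_url_default(conf=cfg.CONF, default_url='kombu+memory://'):
        """Keep the previous in-memory default working for deployers who
        never set [DEFAULT]/transport_url in congress.conf themselves."""
        # Parsing a TransportURL makes oslo.messaging register its transport
        # options (including transport_url) on this ConfigOpts instance, so
        # the set_default() call below cannot raise NoSuchOptError.
        oslo_messaging.TransportURL.parse(conf, default_url)
        # transport_url is owned by oslo.messaging, not Congress; overriding
        # its default here is exactly the trade-off debated in this meeting.
        conf.set_default('transport_url', default_url)

Deployers who explicitly set transport_url (e.g. to a rabbit:// URL) are unaffected, since set_default() only changes the value used when the option is absent from the config file.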
03:02:10 <ramineni_> ekcs: we can't do anything if other libs change; deployers need to adapt if keystone config changes, or any other project's
03:02:50 <ramineni_> ekcs: we can't keep adding to the maintenance burden of the code, and also, in production mostly the rabbit driver is used, so it shouldn't break
03:03:36 <ramineni_> didn't oslo.messaging take care of the upgrade path then?
03:03:53 <ramineni_> ekcs: looks like their issue rather than ours
03:04:04 <ekcs> maybe I need to understand something more precisely.
03:04:37 <ekcs> if in congress we set transport_url to X, is it setting the config for congress only or for the whole stack? I thought it’s for congress only.
03:04:52 <ramineni_> ekcs: its for congress only
03:04:52 <ekcs> but it totally changes how I look at it if it’s for the whole stack.
03:04:56 <ekcs> ok.
03:05:23 <ramineni_> but generally they use the same one for all components
03:05:44 <ramineni_> as oslo.messaging is common dependency
03:06:09 <ekcs> ok. so for the deployers who run congress on rabbit, either way it doesn’t affect them.
03:06:20 <ramineni_> yes
03:07:06 <ekcs> for the deployers who run congress in-mem, I feel like the best of all worlds is then to use the default, but also document the option for them to change if they upgrade the library without upgrading to a new openstack release.
03:07:37 <ekcs> if they upgrade the whole release, then we can keep up with the new config should it be changed.
03:08:05 <ekcs> if they independently upgrade the library, which can always lead to breakage because new libraries have not been tested with this OS release,
03:08:18 <ekcs> then at least they have docs to look at to see what might be wrong.
03:08:48 <ramineni_> >>  if they independently upgrade the library, which can always lead to breakage because new libraries have not been tested with this OS release,
03:08:58 <ramineni_> I agree, it's not recommended
03:09:46 <ramineni_> so what are you recommending?
03:10:48 <ekcs> so I feel like for the deployers who upgrade whole releases, putting the default in our code works best. do you agree?
03:11:01 <ramineni_> ekcs: no
03:11:53 <ramineni_> they should add the config themselves if they are using kombu+memory
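(The documentation-only route ramineni_ is advocating would amount to deployer-facing guidance roughly like the congress.conf snippet below; the exact wording and the rabbit URL are illustrative, kombu+memory:// being the in-memory transport discussed above.)

    [DEFAULT]
    # In-memory transport for a single-process Congress (the kombu+memory
    # case discussed above). Typical production deployments point this at
    # RabbitMQ instead, e.g. rabbit://user:password@controller:5672/
    transport_url = kombu+memory://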
03:12:25 <ekcs> ok great, help me understand. I'm thinking every openstack release, we make sure the default config value works (we should probably add the single-node test), so they never have to worry about it.
03:12:40 <ekcs> where does that model break for the whole-release upgraders?
03:13:08 <ramineni_> ekcs: default value of the oslo.messaging config option **
03:13:11 <ramineni_> it's not ours
03:13:55 <ramineni_> ekcs: I think previously they had this as the default value, and now they have removed it?
03:14:24 <ramineni_> in the messaging lib
03:15:12 <ekcs> ok. so what practical problem does the whole-release upgrade deployer run into because we specified our default for the oslo.messaging option?
03:16:04 <ramineni_> ekcs: my point is, that's not the right way to address the problem .. so next time some other lib changes its defaults, do you intend to add it to the congress code every time?
03:16:08 <ekcs> I understand the discomfort with specifying a default value for another library's option, but first I'm trying to understand the practical impact.
03:16:32 <ekcs> ok I see your point.
03:16:49 <ekcs> I think it comes down to two different ways of seeing the relationship between congress and in-memory messaging.
03:17:36 <ekcs> I’m seeing in-memory messaging as part of congress. like the engine of a car.
03:17:53 <ekcs> actually bad example haha ignore that.
03:18:32 <ekcs> so let’s think about this.
03:20:36 <ramineni_> ekcs: ok, if you feel changing the congress code is the right solution for you, please go ahead. I'm not saying it doesn't address the problem, it's just a bad way to address it, so I'm against it
03:22:04 <ramineni_> ekcs: we can move on now
03:22:55 <ramineni_> ekcs: I'm also not sure of oslo.messaging's intent in removing the default config option
03:24:06 <ramineni_> if I've identified the problem correctly :) you can also check the devstack installation once, as right now I'm the only one who has experienced the issue
03:25:59 <ekcs> hmmm. I do feel it's the right solution, but how I feel could easily be wrong, so I'm looking to thoroughly understand each other's take on the issue.
03:27:01 <ekcs> basically if we end up agreeing, then I feel like there is more confidence we have chosen a good option. if we don’t agree, then i’m much less confident in either choice.
03:27:02 <ramineni_> ekcs: ok, you can try to reproduce the issue once, and if we both think the transport_url default value is the problem,
03:27:42 <ramineni_> ekcs: then we can check with the oslo.messaging team why they removed kombu+memory as the default option; it's better to know the reason before going ahead
03:28:07 <ekcs> ok good idea.
03:28:41 <ramineni_> ekcs: great .. then we can think about solutions again
03:28:51 <ekcs> ok we’re almost out of time. thanks for having this discussion. at least I understand the issue and your thinking much better.
03:29:06 <ekcs> any last things to bring up?
03:29:23 <ramineni_> ekcs: nothing more from my side :)
03:29:38 <ekcs> ok then. have a great weekend!
03:29:45 <ekcs> talk to you next time.
03:29:51 <ramineni_> ekcs: u too.. bye :)
15:00:14 <openstack> lennyb__: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
15:00:27 <openstack> lennyb__: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
15:00:57 <openstack> lennyb__: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
15:08:31 <frickler> #endmeeting