15:00:17 <ramishra> #startmeeting heat
15:00:18 <openstack> Meeting started Wed Nov 23 15:00:17 2016 UTC and is due to finish in 60 minutes.  The chair is ramishra. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:21 <openstack> The meeting name has been set to 'heat'
15:00:30 <ramishra> #topic roll call
15:01:12 <ricolin> 0/
15:01:45 <therve> Yop
15:02:22 <cwolferh> o/
15:02:38 <ramishra> hmm.. US guys probably are in holiday mode already:)
15:02:43 <syjulian> o/
15:03:10 <cwolferh> holiday is so close :-)
15:03:48 <ramishra> ok, let's start, it should be a quick one
15:03:53 <ramishra> #topic adding items to agenda
15:04:08 <ramishra> https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282016-11-23_1500_UTC.29
15:04:15 <ramishra> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282016-11-23_1500_UTC.29
15:05:01 <ramishra> Any more items? we only have one bp to discuss I think.
15:05:36 <ramishra> #topic resource type versions
15:05:49 <ramishra> #link https://blueprints.launchpad.net/heat/+spec/heat-resource-type-versions
15:06:04 <ramishra> syjulian: you've added this, right?
15:06:11 <syjulian> yes
15:06:18 <zaneb> o/
15:06:31 <ramishra> hi zaneb
15:06:45 <ramishra> we discussed that during the summit.
15:06:51 <ramishra> it looks ok to me.
15:06:54 * zaneb blames daylight savings
15:07:01 <ricolin> lol
15:07:22 <syjulian> yup. we're kinda wondering how registering would work
15:07:22 <ramishra> zaneb: you were not present during that discussion, wdyt?
15:08:14 <ramishra> #link https://review.openstack.org/398601
15:08:47 <zaneb> reading now... seems reasonable
15:09:10 <ricolin> I think it might be a good start to support micro version
15:09:21 <therve> syjulian, I feel that the problem description is really lacking
15:09:34 <therve> It basically says "we don't support version, let's do it"
15:09:55 <therve> I want to know why we need resource versions
15:10:37 <cwolferh> what's the advantage over the resource itself just having an arbitrary my_version property and the resource just knows what to do internally to itself?
15:12:09 <ricolin> cwolferh: I think the current approach is more like allowing V1 and V2 to share the same resource type
15:12:17 <ramishra> therve: we do a lot of hacks err.. translations to be backward compatible.
15:12:19 <therve> cwolferh, That's one solution not mentioned too, yeah. But you need to keep the property schema compatible if you do that
15:12:38 <ramishra> therve: it should probably mention that.
15:12:44 <therve> ramishra, Well yeah and that won't solve that?
15:12:44 <cwolferh> therve, ah, true
15:13:19 <therve> ramishra, Unless we somehow start to enforce versions, you can't break compatibility
15:13:29 <ramishra> therve: yeah
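cwolferh's alternative, and therve's caveat about it, can be sketched roughly like this (a purely hypothetical Python toy, not Heat's actual resource API; the class, the `my_version` property, and both property names are invented for illustration). The catch therve points out is visible in the schema: every property any version ever accepted has to stay in the one shared schema forever.

```python
# Hypothetical sketch: one resource class with an ordinary
# version property, branching internally per version.
# Not Heat code; all names here are invented.

class FakeResource:
    # the single shared property schema must remain valid for
    # every version at once -- old properties can never be removed
    PROPERTIES = {
        'my_version': {'type': 'string', 'default': '1.0'},
        'old_style_name': {'type': 'string', 'required': False},
        'new_style_name': {'type': 'string', 'required': False},
    }

    def __init__(self, props):
        self.props = props

    def handle_create(self):
        # the resource "just knows what to do internally"
        if float(self.props.get('my_version', '1.0')) >= 2.0:
            return self.props.get('new_style_name')
        return self.props.get('old_style_name')

assert FakeResource({'my_version': '2.0', 'new_style_name': 'x'}).handle_create() == 'x'
```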
15:14:30 <ramishra> Maybe we should review and put these comments on the spec; however, it would be good to add this.
15:14:40 <therve> Yeah I'll do that
15:15:52 <syjulian> yes that will help
15:16:07 <ramishra> syjulian: I think we would review and provide the comments.
15:16:08 <therve> syjulian, We're also here to discuss it though :)
15:16:13 <zaneb> just added some thoughts on the spec
15:16:17 <therve> Anyway
15:16:18 <syjulian> gotcha
15:16:46 <syjulian> we decided on versions so when a resource has a new version we won't have to register it in a new name
15:17:13 <ramishra> it's a way to get the attention of the reviewers, by adding it to the meeting agenda:)
15:17:36 <syjulian> it does help :) . thank you guys for looking at it
15:18:48 <ramishra> should we move on now?
15:18:58 <ramishra> #topic gate issues
15:19:29 <therve> Urg it's bad
15:19:32 <ramishra> I just thought I'd give an update on the issues we had in the last few days.
15:19:57 <ramishra> A number of things broke after the jobs changed to xenial
15:19:58 <ricolin> heatclient 1.6 has currently been added to the blacklist
15:20:32 <ramishra> ricolin: yeah, it had a backward incompatible change, my mistake
15:20:48 <ramishra> https://review.openstack.org/#/c/401096/
15:21:08 <ramishra> we have a new release now and it should merge anytime.
15:21:21 <ramishra> I mean the global requirement change.
15:21:34 <ramishra> Things seem to be under control now.
15:22:09 <ramishra> therve: most of the jobs have been passing for the last few hours;)
15:23:01 <therve> ramishra, They are 50% slower for some reason though
15:23:16 <ramishra> if someone can help land this https://review.openstack.org/#/c/400689/ stable/newton would be fixed.
15:24:31 <zaneb> +2'd
15:24:52 <ramishra> therve: you can self approve;)
15:25:19 <therve> ramishra, I don't have stable permissions for some reason
15:25:40 <ramishra> skraynev: ^^
15:25:44 <zaneb> really?
15:25:51 * zaneb will look in to that
15:26:23 <zaneb> ramishra: do you not have stable +A permissions either?
15:26:24 <ricolin> I think Zane can just +workflow and let it land
15:26:41 <ramishra> therve: it seems to be issues with some clouds; some of the jobs are finishing in 30-35 mins.
15:27:13 <ramishra> therve: like this one finished quick https://review.openstack.org/#/c/400961/
15:27:28 <therve> ramishra, Okay we'll see :)
15:28:00 <zaneb> I thought https://review.openstack.org/#/admin/groups/152,members was the only group needed.
15:28:06 * zaneb goes looking for more
15:28:23 <zaneb> oh ho https://review.openstack.org/#/admin/groups/536,members
15:28:43 <zaneb> I can't edit though :/
15:29:41 <ramishra> ok, let's move on.
15:29:50 <ramishra> #topic placeholder
15:29:59 <ramishra> #link https://blueprints.launchpad.net/heat/+spec/implement-placeholder
15:30:19 <ramishra> ricolin: stage is yours;)
15:30:23 <ricolin> Need some review on that BP
15:31:10 <zaneb> yeah, I need to find some time to have a look
15:31:36 <ramishra> ricolin: ok just reviews:)
15:31:43 <ricolin> yep
15:32:01 <zaneb> ricolin: there isn't a spec for this, right?
15:32:02 <ricolin> Maybe we can get it done before Pika
15:32:24 <ramishra> btw, there are a number of bps for review, we need to see what we can do in this cycle.
15:32:25 <ricolin> we have a spec
15:32:39 <ramishra> we don't have much time though, what with the holidays.
15:32:46 <ricolin> https://review.openstack.org/#/c/392499/
15:33:00 <zaneb> ah!
15:33:13 <zaneb> ok, linked it from the bp
15:33:20 <ricolin> ramishra: It's a holiday week for us!
15:33:26 <zaneb> I can review that first, it'll be quicker :)
15:33:28 <ricolin> I mean U.S.
15:34:09 <ricolin> zaneb: yah! don't want you guys to go through those 15 patches first
15:34:41 <ramishra> Request to review the specs and then see if/what we can do this cycle.
15:35:02 <zaneb> ok, will do
15:35:05 <ricolin> ramishra: +1
15:36:32 <zaneb> ramishra: btw I have another topic once we have finished the agenda
15:36:47 <ramishra> we have translation, property, template migrate group and more;)
15:36:54 <ramishra> property group
15:37:07 <ramishra> zaneb: sure
15:37:33 <ramishra> ricolin: the next topic is also spec review request I think?
15:37:43 <ramishra> #topic ironic resource
15:38:16 <ricolin> ramishra: kind of, but I would like to know what you guys think about it
15:38:48 <ramishra> ricolin: I thought we had agreed to add the ironic resources, no?
15:38:54 <ramishra> zaneb, therve ?
15:38:55 <zaneb> remind me why we're doing this again?
15:39:33 <zaneb> most people will never see it, since it's admin-only, so no big deal...
15:39:59 <ricolin> zaneb: ironic resources should be separated into manage resources and deploy resources
15:39:59 <zaneb> just not sure that it's actually a useful thing
15:40:22 <ramishra> zaneb: I think there was an earlier discussion about it, not able to locate it.
15:40:25 <ricolin> and manage resources will be a good item to add into heat
15:40:54 <therve> ricolin, Are you going to use this?
15:40:55 <zaneb> ok, so it's not actually about going around nova for allocating servers, just about managing the stuff that Ironic knows about
15:40:58 <zaneb> ?
15:41:28 <ricolin> zaneb: Yep
15:41:30 <ramishra> zaneb: yeah, that was also my impression without reading the spec
15:41:59 <zaneb> ok, sounds harmless enough
15:42:00 <ricolin> zaneb: I add all ironic resource in spec
15:42:27 <ramishra> ok, sounds good.
15:42:39 <ricolin> zaneb: would like to let nova do its job and use ironic resources in heat to cover the rest
15:42:56 <zaneb> I'm not actually convinced that any of our admin-only resources are really useful but they're mostly harmless as long as they're hidden
15:43:17 <zaneb> (from non-admins)
15:43:44 <ramishra> yeah, I've seen keystone resources used a lot though.
15:43:54 <ramishra> I see number of bugs reported.
15:43:56 <ricolin> therve: It will be a great thing if we can use heat to control baremetal
15:44:11 <therve> ricolin, That's not my question though :)
15:44:45 <ricolin> ricolin: I'm thinking more about whether the Ironic+magnum case can be done just in heat
15:45:26 <zaneb> ramishra: keystone resources would be _really_ useful if only you didn't have to be an admin to use them :/
15:46:47 <therve> (got to go, bbl)
15:47:03 <ramishra> ok, should we move on, we've a few more mins.
15:47:11 <ramishra> #topic open discussion
15:47:33 <ramishra> therve: np, thanks:)
15:47:46 <ramishra> zaneb: you wanted to discuss something?
15:47:48 <ricolin> zaneb: don't you have some topic?
15:48:03 <zaneb> ramishra: ah, yes. backpressure.
15:48:40 <zaneb> specifically, how, in convergence, do we limit the amount of work that we try to do at one time
15:48:55 <zaneb> so that we don't e.g. run out of DB connections
15:49:48 <ramishra> zaneb: I don't have a clue, you know better:)
15:49:54 <zaneb> right now we do it by limiting the thread pool size...
15:50:25 <zaneb> but as we've found this week that's apparently not an ideal solution
15:50:57 <zaneb> small limit -> everything is slow
15:51:06 <zaneb> large limit -> run out of db connections
15:51:13 <cwolferh> it's not an ideal solution in the db connection case. but are there others?
15:51:17 <ramishra> zaneb: Isn't it deployment specific
15:51:19 <ramishra> ?
15:51:42 <ramishra> I mean someone can increase max_connections if they want?
15:51:57 <ramishra> if the concurrency is high.
15:52:06 <zaneb> I guess
15:52:13 <ramishra> Though I may be missing something.
15:52:36 <zaneb> we either need to publish the correct tuning parameters for different sized deployments or make this Just Work imho
15:53:19 <ramishra> zaneb: I saw this patch from therve https://review.openstack.org/#/c/400155/
15:53:39 <zaneb> the ideal thing would be to enumerate our bottlenecks (like db connections) and apply backpressure, using the message queue as a buffer
15:54:32 <zaneb> iow the exact opposite of therve's slow-start patch ;)
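zaneb's idea — enumerate the bottlenecks, apply backpressure at each one, and let the message queue buffer the excess rather than capping the whole thread pool — can be sketched roughly like this (a toy Python illustration, not Heat code; `db_slots`, `handle_resource`, and the `queue.Queue` standing in for the RPC bus are all invented for illustration):

```python
# Sketch: many workers may run, but only MAX_DB_SLOTS of them
# may hold the scarce resource (a DB connection) at once; the
# rest block and work simply queues up, instead of the engine
# opening ever more connections.
import queue
import threading

MAX_DB_SLOTS = 8                      # the bottleneck we protect
db_slots = threading.Semaphore(MAX_DB_SLOTS)
work = queue.Queue()                  # stands in for the message queue
done = []

def handle_resource(item):
    with db_slots:                    # backpressure point: blocks when
        done.append(item)             # all DB slots are in use

def worker():
    while True:
        item = work.get()
        if item is None:              # sentinel: shut this worker down
            return
        handle_resource(item)

# a thread pool far larger than the DB limit is now safe
threads = [threading.Thread(target=worker) for _ in range(32)]
for t in threads:
    t.start()
for i in range(100):
    work.put(i)
for _ in threads:
    work.put(None)
for t in threads:
    t.join()

assert len(done) == 100
```

The contrast with a small `executor_thread_pool_size` is that non-DB work (RPC handling, waiting on other services) is no longer throttled; only the enumerated bottleneck is.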
15:54:48 <zaneb> I don't have a solution here
15:54:57 <ramishra> zaneb: yeah:)
15:55:14 <zaneb> just wanted to socialise the idea a little bit
15:55:28 <ramishra> zaneb: Would it be good to start a ML thread for it?
15:55:46 <zaneb> it probably would
15:56:13 <zaneb> therve: can you give a quick update on exactly what we found with the gate and how we fixed it?
15:56:38 <ramishra> zaneb: I'm not sure if therve is still around.
15:56:52 <zaneb> oh, missed that he dropped out
15:56:58 <ramishra> zaneb: we did not know the exact root cause.
15:57:05 <zaneb> ramishra: can *you* give a quick update? ;)
15:57:09 <ramishra> it started when we moved to xenial
15:57:19 <ramishra> but it seems it's fine now.
15:58:00 <ramishra> We changed the executor_thread_pool_size to 8 from default 64
15:58:05 <ramishra> in master
15:58:29 <ramishra> that's actually used as the max_overflow of connections in oslo.db
15:58:44 <ramishra> max_overflow is 40 by default
15:59:02 <zaneb> what does overflow mean in this context?
15:59:22 <ramishra> we have a pool_size of 5 (default)
16:00:01 <ramishra> so if more connections are needed than the pool holds, it can overflow up to the overflow limit
16:00:19 <ramishra> so the max connections from the pool that can be used is 5+40=45
16:00:29 * zaneb scratches head
16:00:45 <zaneb> it sounds like we're saying the pool_size is meaningless?
16:01:19 <ramishra> zaneb: http://docs.openstack.org/developer/oslo.db/api/options.html
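The pool arithmetic ramishra walks through above can be written out as a tiny sketch (toy Python, not Heat or oslo.db code; both function names are invented — only the numbers 5, 40, and 45 come from the discussion):

```python
# Sketch of the SQLAlchemy/oslo.db pool arithmetic from the meeting.

def max_db_connections(pool_size, max_overflow):
    # The pool keeps `pool_size` persistent connections and may open
    # up to `max_overflow` extra transient ones under load, so this
    # sum is the cap for one engine process.
    return pool_size + max_overflow

# oslo.db defaults discussed above: pool_size=5, max_overflow=40
assert max_db_connections(5, 40) == 45

def cluster_demand(num_engine_workers, pool_size, max_overflow):
    # Worst case the whole deployment can ask of the database;
    # this is what has to stay below the server's max_connections.
    return num_engine_workers * max_db_connections(pool_size, max_overflow)

print(cluster_demand(4, 5, 40))  # -> 180
```

This is also why `pool_size` alone looks "meaningless" as a limit: it only bounds the connections kept open when idle, not the peak under load.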
16:01:24 <inc0> << taps into meeting doors >>
16:01:39 <ricolin> time's up
16:01:40 <ramishra> I think we've to end.
16:01:50 <zaneb> ok
16:01:58 <ramishra> thanks all
16:02:02 <inc0> thank you guys!:) sorry, but I sense our meeting will be packed too
16:02:03 <ramishra> #endmeeting heat