15:00:09 <ricolin> #startmeeting heat
15:00:10 <openstack> Meeting started Wed Jul 19 15:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is ricolin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:13 <openstack> The meeting name has been set to 'heat'
15:00:21 <ricolin> #topic roll call
15:00:54 <kazsh> Hi team, sorry I missed the last video call..
15:01:03 <zaneb> o/
15:01:04 <ramishra> Hey!
15:01:04 <kiennt> o/
15:01:10 <ricolin> o/
15:01:33 <elynn> I missed the video call too; I saw the email too late.
15:01:49 <LanceHaig> o/
15:01:53 <ricolin> kazsh, elynn: no worries, it's a meetup and people are free to come and go:)
15:02:32 <ricolin> #topic adding items to agenda
15:02:33 <ricolin> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282017-07-19_1500_UTC.29
15:04:30 <ricolin> #topic weekly report
15:04:57 <ricolin> This week is for non-client library releases
15:05:23 <ricolin> And next week will be feature freeze for heat
15:05:41 <ricolin> so I think we should release heat-agents this week
15:05:59 <ricolin> which adds py35 support for heat-agents
15:06:17 <ramishra> there is nothing in the review queue for heat-agents
15:06:25 <ricolin> yes
15:06:39 <ricolin> so clean
15:07:02 <ricolin> I will release it after this meeting
15:07:44 <ramishra> Maybe a test patch to see if the py35 job is ok
15:07:51 <ricolin> And the gate report: we suffered two gate breakages at once, and rabi quickly fixed them:)
15:08:32 <ramishra> heat-agents py35 job is added but we did not have anything in the queue to test it;)
15:08:45 <ricolin> ramishra, I will use the existing one although it says no review:)
15:10:47 <ricolin> ramishra,  https://review.openstack.org/485234 done:)
15:11:11 <ramishra> ok, IMO we should test the job before the release, though I don't expect any issues.
15:11:58 <ricolin> ramishra, agree:) The code works, but it would be better to test it in the gate to check it out
15:12:21 <ricolin> okay, another update
15:12:30 <ricolin> We might be able to do our meeting in our own irc channel
15:12:34 <ricolin> #link https://review.openstack.org/#/c/485117/
15:12:43 <ricolin> once this lands
15:13:00 <ricolin> That's all I got for update:)
15:13:15 <ricolin> #topic video meetup feedback
15:13:55 <ricolin> Any feedback on the meetup format?:)
15:15:11 <ricolin> How about the sounds, screen share, or any other quality issue?
15:15:32 <ramishra> It was quite late for me, else all good:)
15:15:42 <ricolin> lol
15:15:50 <zaneb> sound quality was average to poor I thought
15:16:12 <zaneb> screen sharing doesn't work on modern OSes
15:16:20 <zaneb> remote control didn't work
15:16:40 <zaneb> but it was still cheaper than flying to the PTG, so there's that I guess
15:16:41 <ricolin> zaneb, I think that's because of my Fedora
15:17:03 <zaneb> ricolin: or mine
15:17:16 <zaneb> I did not disable wayland
15:17:39 <ricolin> So yesterday was OpenStack Taiwan Day and I raised this idea with the Foundation
15:17:45 <ramishra> yeah, we can surely use it for PTG if more people want to participate remotely
15:18:10 <ricolin> It appears they might be able to provide some hardware if that's how we're going to run it
15:18:57 <ricolin> Also, any other software suggestions here?
15:18:59 <ramishra> Unless there is an objection to using Zoom; I used it for the first time and it was ok.
15:19:09 <ricolin> we're currently trying Zoom, btw
15:19:25 <kazsh> WebEX ?
15:19:42 <ricolin> kazsh, that might be an option too
15:19:49 <kazsh> the quality itself is good, but you need to pay for it..
15:20:00 <ricolin> kazsh, then no:)
15:20:09 <kazsh> or someone who has the contract needs to be the host
15:20:47 <ricolin> ramishra, the Foundation uses Zoom as well, so I think we should be good
15:21:08 <ricolin> kazsh, I don't have one:)
15:22:04 <ricolin> I will write to Foundation and hope they can give us some good network in PTG:)
15:22:26 <ricolin> So all of our members can join!
15:22:46 <kazsh> Is nobody joining the PTG onsite?
15:22:54 <ricolin> I will
15:23:00 <kazsh> ricolin :)
15:23:03 <ricolin> but not sure others
15:23:18 <kazsh> well I'm planning
15:23:24 <ricolin> ramishra, zaneb: have you settled your plans yet?
15:23:31 <ricolin> kazsh, good!
15:23:37 <kazsh> but still need to get my boss's approval haha
15:23:51 <kazsh> ricolin: keep you updated !
15:23:55 <ricolin> kazsh, everyone suffers from that actually:)
15:23:58 <ricolin> kazsh, thx:)
15:24:21 <ricolin> let's move to the next topic:)
15:24:25 <ricolin> #topic Rolling upgrade and discussion
15:25:09 <ricolin> anyone wanna take over?:)
15:25:19 <kiennt> Hi all. I want to discuss rolling upgrade.
15:25:24 <ricolin> :)
15:25:40 <kiennt> The related patch sets
15:25:42 <kiennt> #link https://review.openstack.org/#/c/407989
15:25:47 <kiennt> #link https://review.openstack.org/#/c/475853/
15:25:55 <kiennt> #link https://review.openstack.org/#/c/482048/
15:26:47 <kiennt> What's your opinion about our rolling upgrade? Do you think we can merge it in this cycle? :)
15:27:56 <ricolin> I have left a comment on the guideline one
15:28:31 <kiennt> ricolin: I just saw it a minute ago, will push a new patch set
15:28:34 <kiennt> thank you
15:28:35 <ricolin> that one looks good to me, but we still have to add a warning in the docs about what zaneb raised in his comment
15:28:37 <ramishra> zaneb, I saw your comment on https://review.openstack.org/#/c/475853, so if we wait for the last engine to finish all the processing it has to do (as it's the only one listening on the old vhost), wouldn't it be ok?
15:29:05 <zaneb> ramishra: it also has to drain the message queue
15:29:09 <ramishra> all the APIs and all the other engines would be migrated to the new vhost
15:29:35 <ramishra> Wouldn't it send messages to itself to process them?
15:29:57 <ramishra> Do we wait for the queues to drain by themselves?
15:30:53 <zaneb> in general I don't think our graceful shutdown waits for queues to drain (otherwise it would never shut down)
15:31:50 <zaneb> so if we tell the last engine to gracefully shutdown, it will finish what it is working on, which may post more messages to the queue (that only it is listening on), and then shut down leaving them unprocessed
15:31:52 <ramishra> No, I mean, what if we wait till all messages in that vhost's queues are processed by the last engine?
15:32:21 <zaneb> ramishra: yes, if we can find a way to do that it'll work
15:35:08 <ramishra> zaneb: Just wait for some time? Or find a way to check the queues for unprocessed messages? Probably add a note in the documentation.
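One possible way to "check the queues for unprocessed messages" would be to poll the RabbitMQ management API for the old vhost until every queue there is empty, and only then stop the last heat-engine. The sketch below is only an illustration of that idea, not anything from the patches under review; the management URL, credentials, and the vhost name "heat_old" are assumptions.

```python
# Minimal sketch: poll the RabbitMQ management HTTP API until all queues on
# the old vhost are empty, then it should be safe to stop the last
# heat-engine still listening there.
# Assumptions: management plugin on localhost:15672, guest/guest credentials,
# and an old vhost literally named "heat_old".
import time

import requests

MGMT_URL = "http://localhost:15672/api/queues/heat_old"  # assumed vhost name
AUTH = ("guest", "guest")                                 # assumed credentials


def old_vhost_drained():
    queues = requests.get(MGMT_URL, auth=AUTH).json()
    # "messages" counts ready + unacknowledged messages per queue.
    return all(q.get("messages", 0) == 0 for q in queues)


while not old_vhost_drained():
    time.sleep(5)
print("old vhost drained; safe to stop the last heat-engine")
```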
15:35:43 <ramishra> regarding the multinode grenade job, I'm not sure how much it would help with the rolling upgrade testing
15:35:51 <ramishra> I've put my comment there.
15:36:31 <ramishra> currently with the grenade job we don't run any tests after upgrade
15:37:30 <ramishra> I have a patch to run API tests before and after the upgrade https://review.openstack.org/#/c/460542/, maybe we should land that.
15:38:05 <kiennt> ramishra: I just pushed a new patch this morning as you suggested.
15:38:26 <ramishra> kiennt: OK, I'll check that
15:38:27 <kiennt> +1 for ramishra's patch.
15:39:15 <kiennt> API tests will be good. I notice we have a heat_upgradetests directory
15:39:36 <kiennt> but it does nothing. Just empty scripts.
15:39:48 <kiennt> Can we remove it?
15:40:36 <ramishra> kiennt: we used to run the tempest heat tests after the upgrade earlier, but those were removed from the tempest tree.
15:40:51 <ramishra> so we don't do any testing after upgrade now
15:42:10 <ramishra> Anyway, we should also check if adding a multinode job is enough to get the tag;)
15:43:08 <ricolin> So we might claim  https://review.openstack.org/#/c/460542 as our basic after upgrade test?
15:43:19 <ricolin> s/might/may/
15:44:25 <ricolin> If yes, then we might get the tag:)
15:44:36 <kiennt> ramishra: I think it's enough, because other projects that already got the tag have such a gate job
15:45:14 <kiennt> I asked Swift core team, one of them - cschwede told me:
15:45:20 <kiennt> Swift rolling upgrades are not tested yet in the gate, but Swift supports that since the beginning
15:45:56 <ramishra> ricolin: for rolling upgrade testing we need the nodes in the multinode job to run different releases of heat (upgrade one and leave the other on the older version) and then run the tests, which is not the case currently, I think;)
15:47:02 <ramishra> But if that's good enough to get the tag, we should go for it:)
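As an illustration of what a basic after-upgrade API check along these lines could look like (not what the 460542 patch actually does), something as small as listing stacks and resource types through python-heatclient already exercises heat-api and the heat-engine RPC path end to end. The endpoint and credentials below are placeholders.

```python
# Minimal sketch of a post-upgrade API smoke check using python-heatclient.
# The auth_url and credentials are placeholders, not real deployment values.
from keystoneauth1 import loading, session
from heatclient import client as heat_client


def get_heat(auth_url, username, password, project_name):
    # Build a Keystone session and a Heat v1 client from password credentials.
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url=auth_url,
        username=username,
        password=password,
        project_name=project_name,
        user_domain_id="default",
        project_domain_id="default",
    )
    return heat_client.Client("1", session=session.Session(auth=auth))


def smoke_check(heat):
    # Both calls go through heat-api to heat-engine over RPC, so they
    # verify the upgraded services answer end to end.
    stacks = list(heat.stacks.list())
    types = list(heat.resource_types.list())
    assert types, "heat-engine returned no resource types after upgrade"
    print("post-upgrade smoke check ok: %d stacks, %d resource types"
          % (len(stacks), len(types)))


if __name__ == "__main__":
    smoke_check(get_heat("http://devstack/identity", "admin",
                         "secret", "admin"))
```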
15:47:33 <ricolin> I will ask QA about it:)
15:48:22 <ricolin> kiennt, anything you wish to discuss now:)?
15:48:48 <kiennt> ricolin: Great! :) No, that's all.
15:49:08 <ricolin> kiennt, cool!
15:49:11 <ricolin> #topic Open discussion
15:49:34 <ricolin> Anything anyone would like to raise for discussion?:)
15:49:49 <kazsh> I suppose not that many people will visit Sydney.. but I submitted a couple of Heat-related CFPs
15:49:57 <kazsh> Just a heads-up..
15:50:04 <kazsh> Plz cross your fingers :)
15:50:34 <ricolin> kazsh, may the force of OpenStack be with you:)
15:51:02 <kazsh> thank you ricolin, yep, hope those will pass this time!
15:51:31 <ricolin> I also got one talk: "Advanced Orchestration for Containers Clusters"
15:51:43 <ramishra> zaneb: If you've not seen https://bugs.launchpad.net/heat/+bug/1705170
15:51:43 <openstack> Launchpad bug 1705170 in heat "test_stack_update_with_conditions failing with KeyError intermittently" [Undecided,New]
15:51:45 <zaneb> 23 patches need review before next week to land the stack-definition series
15:51:48 <ricolin> so please help vote for these two:)
15:52:07 <kazsh> Sounds really interesting >> "Advanced Orchestration for Containers Clusters"
15:52:36 <kazsh> "How to Make OpenStack Heat Better based on Our One Year Production Journey"
15:52:47 <kazsh> "Get Familar with OpenStack Heat Workshop"
15:53:03 <kazsh> above two are mine :)
15:53:23 <ricolin> kazsh, will look at those
15:53:41 <zaneb> ramishra: hadn't seen that, will post a fix
15:53:58 <kazsh> ricolin: thanks a lot !
15:53:59 <ramishra> zaneb: great:)
15:55:01 <ricolin> Do hope to land stack-definition
15:55:14 <ricolin> and get-reality
15:55:27 <ramishra> let's land as many patches as we can before it's broken again;)
15:56:08 <ricolin> ramishra, thanks to barbican and glance for contributing the fun:)
15:56:09 <ramishra> I mean the gate is broken again:)
15:57:01 <ricolin> ramishra, now?
15:58:13 <ramishra> No.. no.. I meant review and land patches before the gate is broken again... too late for me...
15:58:27 <ricolin> ramishra, don't scare me!
15:58:32 <zaneb> lol
15:58:49 <kazsh> haha
15:58:54 <kiennt> lol
15:59:14 <ricolin> Anyway, the big goal for these two weeks: land goals, land BPs, and hope for no gate fixes:)
15:59:54 <ricolin> anything else?
16:00:01 <ricolin> if not, shall we wrap up the meeting:)
16:00:57 <ricolin> Okay thanks all, and please help on review:)
16:00:58 <ricolin> #endmeeting