17:00:06 <oomichi> #startmeeting qa
17:00:09 <openstack> Meeting started Thu Sep  1 17:00:06 2016 UTC and is due to finish in 60 minutes.  The chair is oomichi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:14 <openstack> The meeting name has been set to 'qa'
17:00:18 <oomichi> Hello - who's here for today?
17:00:21 <dpaterson> o/
17:00:46 <oomichi> today's agenda: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_September_1st_2016_.281700_UTC.29
17:00:48 <oomichi> hi
17:01:46 <scottda> Hi
17:01:57 <oomichi> hi
17:02:20 <hogepodge> o/
17:02:32 <oomichi> ok, let's start meeting
17:02:52 <oomichi> #topic QA/Infra Code Sprint
17:02:59 <mtreinish> o/
17:03:04 <oomichi> #link https://wiki.openstack.org/wiki/Sprints/QAInfraNewtonSprint
17:03:29 <oomichi> We have a meetup in Germany the week after next
17:03:41 <oomichi> The above link is the info
17:04:09 <oomichi> Only 5 seats remain, so please register if you are interested in joining
17:04:36 <oomichi> QA topics of the meetup are collected on #link https://etherpad.openstack.org/p/qa-infra-newton-midcycle
17:05:08 <oomichi> If you have some ideas or working items, please write them down there :)
17:05:39 <oomichi> That is from me about this topic :)
17:06:15 <oomichi> do we have any questions about the meetup now (like travel or a nice dinner)
17:06:20 <oomichi> ?
17:06:56 <oomichi> ok, let's move on
17:06:58 <mtreinish> nothing from me
17:07:07 <oomichi> #topic OpenStack Summit
17:07:21 <oomichi> mtreinish: ok, thanks. I got it
17:07:55 <oomichi> The next OpenStack summit has fewer design session slots compared to the previous one
17:08:17 <oomichi> at the previous summit, we had 8 slots for QA sessions
17:08:45 <oomichi> but the next summit will have 7 due to shrinking the design session time
17:08:48 <hogepodge> more projects with less space and less time
17:09:02 <oomichi> hogepodge: yeah, that is right
17:09:37 <oomichi> the 7 slots are not confirmed yet either
17:10:09 <dpaterson> Is there anything we can do to help make the 7 a sure thing?
17:10:11 <oomichi> so we need to collect session ideas after the meetup and plan for fewer slots
17:10:25 <mtreinish> oomichi: if there are fewer available slots that's fine, we can make it work
17:10:34 <mtreinish> oomichi: but the other thing to remember is ocata is a very short cycle
17:10:41 <oomichi> dpaterson: we just need to wait for the other projects' slot requests
17:10:56 <mtreinish> so fewer slots might not be a bad thing
17:11:07 <mtreinish> we don't want to overcommit to work if there isn't much time
17:11:16 <oomichi> mtreinish: yeah, a nice point. we need to select workable items for the short cycle
17:11:28 <oomichi> mtreinish: yeah I agree
17:12:01 <oomichi> ok, thanks for feedback, are there more topics for the summit?
17:12:18 <oomichi> I guess we need to concentrate on the meetup before the summit ;)
17:13:00 <oomichi> ok, let's move on
17:13:09 <oomichi> #topic Specs Reviews
17:13:31 <oomichi> Current spec review is #link https://review.openstack.org/#/q/status:open+project:openstack/qa-specs,n,z
17:13:55 <oomichi> #link https://review.openstack.org/#/c/354877/ is a really small patch
17:14:18 <oomichi> #link https://review.openstack.org/#/c/349730/ is one I'd like to get more feedback on
17:14:38 <mtreinish> oomichi: do we need to update implemented specs like that
17:14:44 <mtreinish> it feels like a waste of time
17:14:50 <mtreinish> and also changing history
17:14:58 <mtreinish> when we implemented that spec it was tempest_lib
17:15:26 <oomichi> mtreinish: which spec do you mean?
17:15:32 <mtreinish> 354877
17:16:31 <oomichi> mtreinish: ah, a nice point
17:16:38 <oomichi> mtreinish: ok, I'll drop it
17:17:28 <oomichi> ok, I dropped it
17:17:47 <oomichi> do we have more topics about the spec?
17:18:25 <oomichi> ok, let's move on
17:18:29 <oomichi> #topic Tempest
17:18:41 <oomichi> #link https://review.openstack.org/#/q/project:openstack/tempest+status:open
17:19:06 <oomichi> #link https://review.openstack.org/#/c/317088/ is up on the agenda
17:19:38 <oomichi> castulo probably put it up, but he is not here
17:20:12 <oomichi> does someone want to pick it up here?
17:21:45 <oomichi> the other topic: I feel it would be nice to release a new Tempest after milestone-3 (the end of this week) to avoid impact to the gate
17:22:11 <oomichi> or does someone want to release it sooner?
17:22:11 <mtreinish> oomichi: no one really consumes tempest from releases in the gate
17:22:16 <mtreinish> it shouldn't have any impact there
17:22:55 <mtreinish> but waiting is fine
17:22:57 <oomichi> mtreinish: ah, I guess puppet jobs are using that
17:23:12 <mtreinish> on stable branches maybe
17:23:24 <mtreinish> but it should be capped if so
17:23:47 <mtreinish> the reason you would do that is to avoid potential issues with master
17:24:21 <oomichi> mtreinish: yeah, that would be a pain to developers
17:24:31 <oomichi> just before milestone-3
17:25:06 <oomichi> ok, it is fine to wait for just 3 days :)
17:25:45 <oomichi> the next one is bug triage
17:26:05 <oomichi> now the launchpad of tempest has 200+ bug reports
17:26:34 <oomichi> that is hard to triage all of them by a single person
17:27:06 <mtreinish> oomichi: 200+ untriaged bugs?
17:27:10 <oomichi> to see the progress easily, the graph is #link https://github.com/oomichi/bug-counter#current-graph
17:27:38 <mtreinish> oomichi: that looks like there are only 10-20 bugs that are untriaged
17:27:44 <mtreinish> most are in progress
17:28:06 <oomichi> mtreinish: yeah, but actually many "in progress" bugs are not really in progress
17:28:25 <oomichi> over 2 years, many patches have been abandoned with no progress.
17:28:25 <mtreinish> doesn't it mean there is a review up for it?
17:28:42 <mtreinish> I thought jeepyb changes the status back if the patch is abandoned
17:28:43 <oomichi> abandoned patches don't affect the status of LP
17:29:23 <oomichi> oh, I didn't know about jeepyb. was that implemented recently?
17:30:04 <mtreinish> oomichi: no, it's a very old project. That's the tool that does the gerrit lp integration
17:30:12 <mtreinish> well very old in openstack terms :)
17:30:35 <oomichi> mtreinish: heh, it would be nice to check the current status of that :)
17:30:51 <oomichi> #action oomichi checks jeepyb status
17:31:35 <mtreinish> oomichi: https://github.com/openstack-infra/jeepyb/blob/master/jeepyb/cmd/update_bug.py
17:32:06 <oomichi> mtreinish: thanks. oh, no activity for over 8 months
17:32:58 <oomichi> ok, do we have more topics related to Tempest?
17:33:56 <oomichi> #topic DevStack + Grenade
17:34:14 <oomichi> #link https://review.openstack.org/#/q/project:openstack-dev/devstack+status:open
17:34:25 <oomichi> #link https://review.openstack.org/#/q/project:openstack-dev/grenade+status:open
17:35:10 <oomichi> do we have more topics related to them this week?
17:36:11 <oomichi> ok, let's move on
17:36:19 <oomichi> #topic openstack-health
17:36:38 <oomichi> #link https://review.openstack.org/#/q/project:openstack/openstack-health+status:open
17:37:25 <oomichi> do we have more topics related to o-h?
17:37:41 <mtreinish> well we landed a bunch of improvements to the per test page recently
17:38:02 <mtreinish> and timothyb89 is working on rewriting our graph generation to improve performance and decrease the memory footprint
17:38:10 <mtreinish> #link https://review.openstack.org/363934
17:38:36 <oomichi> mtreinish: oh, cool
17:38:38 <timothyb89> good progress on that so far, hopefully will be ready next week or so
17:39:34 <oomichi> timothyb89: does it improve the performance of drawing?
17:40:05 <timothyb89> oomichi: yes, especially while scrolling
17:40:24 <oomichi> timothyb89: oh, really cool. thanks for doing that:)
17:40:44 <timothyb89> it should make redraws essentially free, so scrolling lag should be pretty much eliminated :)
17:40:58 <oomichi> timothyb89: ++
17:41:23 <oomichi> I'd like to pick another one #link https://review.openstack.org/#/c/287679/ up here
17:42:02 <oomichi> the dual graphs are a little difficult to understand, but I prefer them
17:42:22 <oomichi> #link http://logs.openstack.org/79/287679/2/check/gate-openstack-health-npm-run-test/a61f766/reports/build/#/
17:42:35 <oomichi> are there any ideas for making it easier to understand?
17:43:08 <oomichi> we can zoom in by selecting a range on the bottom graph
17:43:35 <oomichi> but at first glance, it just looks duplicated
17:43:47 <mtreinish> right, which is the problem that was there in earlier revs
17:43:57 <mtreinish> there is no indication what is going on and it's very confusing
17:44:12 <oomichi> mtreinish: yeah, I also was confused
17:45:05 <oomichi> ok, let's wait for more feedback later
17:45:25 <oomichi> #topic Critical reviews
17:45:41 <oomichi> do we have some patches we need to review in this week?
17:46:19 <mtreinish> oomichi: it's not critical and not ready to merge yet but:
17:46:20 <mtreinish> #link https://review.openstack.org/#/c/364414/
17:46:30 <mtreinish> I pushed that out this morning to fix http proxy settings
17:47:24 <oomichi> ok, I will check it. but it is still red in the gate
17:47:47 <mtreinish> I did just say it's not ready to merge :)
17:47:57 <mtreinish> also is not a qa project but:
17:48:00 <mtreinish> #link https://review.openstack.org/#/c/357987/
17:48:02 <oomichi> mtreinish: ah, I missed that ;)
17:48:19 <mtreinish> I pushed that out to address the backwards incompat change that landed a couple weeks ago
17:48:31 <mtreinish> so we can actually do config opt deprecations and not break everyone
17:49:10 <oomichi> yeah, I agree to avoid that
17:50:09 <oomichi> mtreinish: does the patch show a deprecation warning?
17:50:33 <mtreinish> oomichi: it's so you can use a deprecated option name in code (it will emit a warning if you do that)
17:50:34 <oomichi> I am checking line 2864 on https://review.openstack.org/#/c/357987/3/oslo_config/cfg.py
17:50:57 <mtreinish> today if you change a config opt name (even with deprecation) in code only the new name works
17:51:18 <mtreinish> a change doing that was approved a couple of weeks ago (not the first time) and it broke a ton of plugins
17:51:38 <mtreinish> we reverted it (which broke the plugins which updated) because it was too widespread an accidental break
17:52:08 <oomichi> which option affected the projects?
17:52:27 <oomichi> mtreinish: ah, I see. I remember
17:52:44 <oomichi> I guess puppet jobs also were affected
17:53:03 <mtreinish> oomichi: it's all options, but in this case it was:
17:53:09 <mtreinish> #link https://review.openstack.org/#/c/357907/
17:53:17 <mtreinish> #link https://review.openstack.org/#/c/349749/
17:53:28 <mtreinish> right, the puppet jobs caught it
17:53:30 <oomichi> mtreinish: yeah, yeah
17:54:23 <oomichi> the puppet jobs seem useful for catching this kind of problem now
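[editor's note] The behavior mtreinish describes (code can keep using a deprecated option name and gets a warning instead of breaking, so plugins survive a rename) can be sketched in plain Python. This is an illustrative toy under stated assumptions, not oslo.config's actual API or implementation; all names here are hypothetical:

```python
import warnings


class DeprecatedNameConf:
    """Toy config object that resolves deprecated option names with a warning.

    Hypothetical sketch of the pattern under discussion, not
    oslo.config's real implementation.
    """

    def __init__(self, values, deprecated):
        # values: {new_name: value}; deprecated: {old_name: new_name}
        self._values = values
        self._deprecated = deprecated

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for
        # option names rather than _values/_deprecated themselves.
        if name in self._deprecated:
            new = self._deprecated[name]
            warnings.warn("option '%s' is deprecated; use '%s'" % (name, new),
                          DeprecationWarning)
            name = new
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)


conf = DeprecatedNameConf({'image_ref': 'cirros-0.3.4'},
                          {'image_id': 'image_ref'})
```

With this, `conf.image_id` returns the same value as `conf.image_ref` but emits a DeprecationWarning, which is the non-breaking rename behavior the patch aims for.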
17:55:05 <oomichi> do we have more patches that need review?
17:55:52 <oomichi> #topic Open Discussion
17:56:21 <hogepodge> I have one quick tempest item related to defcore
17:56:33 <oomichi> hogepodge: thanks, go ahead:)
17:56:44 <hogepodge> downstream vendor is running into a problem where images come up with non-zero volume attachments
17:56:49 <hogepodge> causing this test to fail
17:56:50 <hogepodge> test_attach_volume.AttachVolumeTestJSON.test_list_get_volume_attachments
17:57:15 <hogepodge> the test assumes zero at boot, so I will submit a patch to count the number at boot then do a delta after attachment
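[editor's note] The delta-based check hogepodge describes could look roughly like this; `client` and its method names are hypothetical stand-ins, not Tempest's actual compute client API:

```python
def check_attach_by_delta(client, server_id, volume_id):
    """Verify an attachment was added without assuming zero at boot.

    Sketch of the approach described above: some clouds boot servers
    with pre-existing volume attachments, so count the attachments
    before attaching and verify the count grew by exactly one.
    """
    before = client.list_volume_attachments(server_id)
    client.attach_volume(server_id, volume_id)
    after = client.list_volume_attachments(server_id)
    assert len(after) == len(before) + 1
    assert volume_id in {a['volumeId'] for a in after}
```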
17:58:01 <hogepodge> also, this test tempest.api.compute.images.test_images.ImagesTestJSON.test_delete_saving_image
17:58:29 <hogepodge> their cloud moves from saving to active really fast, causing the test to fail
17:58:43 <hogepodge> anyway, just some notes on things I'll be poking at
17:58:44 <oomichi> #link https://github.com/openstack/tempest/blob/master/tempest/api/compute/images/test_images.py#L44
17:59:00 <mtreinish> hogepodge: is there a bug filed for that
17:59:23 <mtreinish> hogepodge: I've always worried about that happening on tests that wait until a transient state like saving or building to try and trigger something
17:59:32 <hogepodge> no, but we're going to do that
17:59:42 <mtreinish> because that was always an inherent race condition in the test
18:00:20 <oomichi> hogepodge: ah, yeah, it is possible for the test to miss the SAVING status
18:00:21 <hogepodge> mtreinish: I was thinking of using more refined logic to allow skipping 'saving' state
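[editor's note] One way to realize the "more refined logic" hogepodge mentions is a waiter that treats the transient state as optional. A minimal sketch under that assumption (names are illustrative, not Tempest's actual waiter code):

```python
import time


def wait_for_status(fetch_status, final, transient, timeout=5, interval=0.05):
    """Wait for `final` status, tolerating a skipped `transient` state.

    Illustrative sketch: a fast cloud may jump from 'saving' straight
    to 'active', so instead of failing when the transient state is
    missed, record whether it was observed and let the caller decide.
    """
    seen_transient = False
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fetch_status()
        if status == transient:
            seen_transient = True
        elif status == final:
            return seen_transient
        time.sleep(interval)
    raise TimeoutError('status never reached %r' % final)
```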
18:00:52 <oomichi> ok, we are out of time
18:00:59 <oomichi> #endmeeting