20:01:18 <ianw> #startmeeting diskimage-builder
20:01:19 <openstack> Meeting started Thu Jun 29 20:01:18 2017 UTC and is due to finish in 60 minutes.  The chair is ianw. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:22 <openstack> The meeting name has been set to 'diskimage_builder'
20:01:38 <ianw> hello
20:01:48 <andreas-f> Hello Ianw.
20:01:57 <ianw> #link https://wiki.openstack.org/wiki/Meetings/diskimage-builder
20:02:16 <ianw> #topic Announcements
20:02:40 <ianw> I wanted to put in the record our testing situation
20:03:26 <ianw> last week we had the "perfect storm" ... debian stable released over the weekend, ubuntu broke its SHA hash for images, and we had infra instability due to DNS issues
20:04:00 <ianw> after a bunch of work we're in a much better position
20:04:11 <andreas-f> What changed?
20:04:47 <ianw> the voting func-tests are now all using infra mirrors to build, meaning we should not be rechecking due to odd mirror failures
20:05:13 <ianw> i have moved the image-based tests, that are downloading from various places, into a new, non-voting job
20:05:56 <ianw> i've started to look at mirroring the required images.  there is some infrastructure for doing this for devstack.  it's non-trivial to get dib in there, just because of the way it's all written.  it's not high priority for me
20:06:55 <andreas-f> So we are now in a stable environment?
20:07:13 <andreas-f> What is done to prevent wrong SHA1 sums?
20:07:31 <ianw> i've seen some suse changes out there to use mirrors, and from now on, we won't do voting jobs unless they're coming from a suitable local mirror
20:07:32 <andreas-f> Or even buggy dnf (as we also saw last week)?
20:08:22 <andreas-f> Is there some check running for / on these local mirrors?
20:08:43 <ianw> yes, thousands of jobs a day :)
20:08:54 <andreas-f> :-)
20:10:03 <ianw> anyway, i've also started a stretch mirror in infra (still need to finalise reprepro setup) so we can get debian going again
20:10:50 <ianw> #topic Testing on Trusty
20:11:07 <ianw> I'm not sure if we need to keep testing on trusty
20:11:20 <ianw> maybe after queens release we can stop
20:11:50 <ianw> we have python2 host testing on centos, and python3 on xenial.  infra has moved to xenial based builders
20:12:30 <andreas-f> I see also no reason to keep it.
20:13:36 <ianw> unless there are old branches using it somehow.  i'll investigate
20:13:59 <ianw> #action ianw to investigate if any old branch jobs use dib on trusty
20:14:46 <ianw> #topic Voting ppc jobs
20:15:27 <ianw> rfolco asked in channel about making the 3rd party ci for these voting
20:16:13 <ianw> i don't have any problem with that, it seems to be going well
20:16:18 <andreas-f> The CI jobs under IBM PowerKVM CI?
20:16:28 <ianw> tonyb tells me that -minimal jobs will work soon too
20:16:44 <ianw> yes PowerKVM
20:16:46 <rfolco> ianw, right. we're willing to vote, after we fix our instability :)
20:17:06 <andreas-f> Two of them (currently) always fail.
20:17:12 <ianw> rfolco: heh, well we know a lot about that :)
20:18:13 <andreas-f> Do we have enough knowledge and resources for supporting this?
20:18:24 <ianw> rfolco: ok so we just adjourn on that for now?
20:19:09 <mmedvede> ianw: well, trusty and xenial do not fail, so if voting is added it would help to keep them green
20:19:29 <rfolco> ianw, ok. Please note: the voting jobs are green: xenial, the non-voting centos and fedora being worked.
20:19:36 <mmedvede> centos/fedora on PowerKVM CI are marked non-voting anyway
20:20:13 <ianw> ahh, yes.  sorry, trained to look for "-nv" now :)
20:20:51 <rfolco> so as soon as we are added to voting group, these jobs will vote.
20:21:18 <rfolco> not sure how to do that, I believe only infra cores?
20:21:50 <ianw> andreas-f: in reality it's still best-effort, as the vote won't block the gate.  but i think it's a reasonable level of signal
20:22:10 <andreas-f> Fine from my side.
20:23:55 <rfolco> we commit to monitor and be careful with reporting/voting jobs
20:25:10 <ianw> #agreed powerkvm voting
20:25:39 <rfolco> thanks ianw andreas-f
20:25:41 <mmedvede> \o/
20:25:43 <ianw> now ... to the practicalities; i don't think we have a -ci group?
20:25:48 <ianw> why can't it just vote?
20:27:00 <clarkb> verified is by default restricted to the jenkins and zuul users. so ya just need to update the acl
20:27:04 <mmedvede> I think by default third-party CIs have no voting rights
20:28:03 <mmedvede> yes, there is no diskimage-builder-ci group setup yet
20:28:34 <ianw> ok, let me look at that offline
20:28:47 <ianw> #action ianw setup 3rd part ci group
20:29:22 <ianw> #undo
20:29:23 <openstack> Removing item from minutes: #action ianw setup 3rd part ci group
20:29:28 <ianw> #action ianw setup 3rd party ci group
20:29:48 <ianw> #topic Review review
20:30:25 <ianw> #link https://review.openstack.org/#/c/478344/
20:30:49 <ianw> that removes the centos/rhel elements, and has some description of the small tangle we've left ourselves with centos7
20:31:08 <ianw> i think i'd like to put that in a next release
20:31:08 <andreas-f> I just started reviewing this. Looks good to me.
20:31:43 <ianw> #link https://bugs.launchpad.net/diskimage-builder/+bug/1699437
20:31:45 <openstack> Launchpad bug 1699437 in diskimage-builder "muti mount points don't mount in right order" [Undecided,In progress] - Assigned to Ian Wienand (iwienand)
20:31:55 <ianw> andreas-f: ^ this could maybe do with a comment from you
20:32:13 <ianw> to me, it looks like this was always set up backwards and i'm not sure how it ever worked
20:32:40 <ianw> #link https://review.openstack.org/#/c/476340/
20:32:45 <andreas-f> I had a look - but didn't really understand what happens here.
20:32:50 <ianw> is a test-case
20:32:54 <andreas-f> there *are* tests.
20:33:55 <ianw> of the mount ordering?
20:34:25 <andreas-f> Yes - at least once upon a time I wrote one.
20:34:27 <andreas-f> Can you reproduce the bug?
20:35:41 <andreas-f> The pure order should not influence the behavior.
20:36:10 <ianw> i haven't gone back to old releases to try.
20:37:12 <ianw> andreas-f: ? the order comes from the sequential ordering in the "partitions:" list?
20:37:45 <andreas-f> Ahhhh. Missed this special case.
20:38:46 <andreas-f> So for me it looks like the ordering itself is fine.
20:39:12 <andreas-f> Do you know if there is somewhere a 'reverse()' while save / load?
20:39:34 <andreas-f> (IMHO this is not tested.)
20:40:51 <ianw> no
20:41:01 <ianw> we test the output ordering of http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/tests/config/multiple_partitions_tree.yaml
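The mount-ordering concern above can be sketched in a few lines (illustrative Python only, not dib's actual implementation - the function name and the depth-first rule are assumptions): mount points have to be mounted shallowest-first so that e.g. `/` is mounted before `/var`, and `/var` before `/var/log`.

```python
# Hypothetical sketch of mount-point ordering, not dib's real code.
# Parent mount points must be mounted before their children, so we
# sort by path depth (number of '/' separators), then lexically.
def sort_mount_points(mount_points):
    return sorted(mount_points,
                  key=lambda p: (p.rstrip('/').count('/'), p))


print(sort_mount_points(['/var/log', '/', '/var', '/boot']))
# → ['/', '/boot', '/var', '/var/log']
```

If the bug is that this ordering ends up reversed somewhere, unmounting (which must run children-first) and mounting would effectively have swapped orders.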
20:42:41 <andreas-f> Yes - I'm just thinking about possible reasons:
20:42:51 <andreas-f> we have test cases that check the order
20:42:59 <andreas-f> therefore this should be ok
20:43:22 <andreas-f> What I don't know is if we have a test case using BlockDeviceState?
20:44:07 <ianw> well, we are checking the value put into the state dictionary -> http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/tests/test_mount_order.py#n55
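On the save/load question: a quick illustration (not dib's state code; the shape of the state dict here is made up) of why serialization alone shouldn't flip the order - JSON round trips preserve list order, so any `reverse()` would have to be explicit in the code.

```python
import json

# Illustrative only: lists survive a JSON dump/load round trip in the
# same order, so a reversed mount order can't be an artifact of the
# state file serialization itself.
def round_trip(state):
    return json.loads(json.dumps(state))


state = {'mount_order': ['/', '/boot', '/var', '/var/log']}
print(round_trip(state) == state)
# → True
```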
20:46:47 <ianw> anyway, i'll try to determine a root cause as i get time, but i think it works if just flipped around
20:47:38 <ianw> 476340 could be templated out a bit more, more tests welcome
20:48:26 <ianw> #link https://review.openstack.org/#/c/476302/
20:48:36 <ianw> seems straightforward, use DIB_PYTHON in cleanup path
20:49:03 <ianw> i'll merge that
20:49:47 <ianw> i'll review the outstanding suse stuff today, that hasn't been around for too long
20:50:12 <ianw> #link https://review.openstack.org/#/c/476732/
20:50:23 <andreas-f> That would be good - I don't have that much experience with suse.
20:50:44 <ianw> that was from discussions with jamielennox from the work he's doing
20:51:33 <andreas-f> Is this (still) in use?
20:51:53 <andreas-f> IMHO when using the new block device layer the sizes are given / fixed?
20:52:12 <andreas-f> Or maybe I'm mixing things here....
20:52:40 <andreas-f> No: I think this whole size computation is needless.
20:53:09 <andreas-f> (Except for debugging)
20:54:48 <ianw> well it ends up passed to image_create(size) ...
20:55:14 <ianw> http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/level0/localloop.py#n29
20:55:26 <andreas-f> Which works only if you have one image.
20:56:05 <andreas-f> But if you have different partitions on different images this does not work.
20:56:50 <ianw> well i don't think we're going to solve this right now.  i will sync with jamielennox on this today as we're in the same tz and update
20:57:02 <andreas-f> Ack.
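The multi-image objection can be made concrete with a small sketch (hypothetical names, not dib's API): each image needs its own size, derived from the partitions placed on it, so a single global size passed to `image_create(size)` only works in the one-image case.

```python
from collections import defaultdict

# Illustrative sketch: sum partition sizes per image. With more than
# one image, "the" image size is ambiguous - there is one per image.
def image_sizes(partitions):
    """partitions: iterable of (image_name, size_mib) pairs."""
    sizes = defaultdict(int)
    for image, size_mib in partitions:
        sizes[image] += size_mib
    return dict(sizes)


print(image_sizes([('image0', 2048), ('image0', 1024), ('image1', 512)]))
# → {'image0': 3072, 'image1': 512}
```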
20:57:25 <ianw> wow, time just flies ... if anyone else wants to stick a link of something in to look at please do now
21:00:25 <ianw> alright, well i think that's time for now.  thanks and see you on gerrit :)
21:00:27 <ianw> #endmeeting