15:00:38 <asselin_> #startmeeting third-party
15:00:39 <openstack> Meeting started Mon Jul 20 15:00:38 2015 UTC and is due to finish in 60 minutes.  The chair is asselin_. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:42 <openstack> The meeting name has been set to 'third_party'
15:00:56 <asselin_> hi, who's here for third party meeting?
15:01:16 <pots> pots is here
15:01:19 <asselin_> anteaya asked me to chair today
15:01:25 <krtaylor> o/
15:01:39 <mmedvede_> hi
15:01:55 <rhe00_> hi
15:02:37 <asselin_> I don't see any agenda, so any topics anyone would like to bring up?
15:03:42 <mmedvede> asselin_: so there were some problems with infra git farm
15:04:06 <asselin_> mmedvede, how recently?
15:04:07 <mmedvede> and there were assumptions that third-party systems all try to build images at the same time
15:04:30 <asselin_> yeah....that's possible
15:04:44 <mmedvede> so it is more of an announcement, to use non-default time for nodepool image build :)
15:05:11 <mmedvede> or to switch to DIB
15:06:12 <asselin_> any way to know who/how many systems?
15:07:09 <mmedvede> asselin_: I only saw a part of the discussion in infra channel. Did not see any finger pointing
15:07:22 <asselin_> #topic third party ci image building may be causing excessive load on infra git farm
15:08:05 <mmedvede> asselin_: I think the evidence was that it happened around the time of default cron job for nodepool snapshot update
15:09:56 <asselin_> yeah, I have all my ci systems on the same time also, which doesn't help our local network either
15:10:34 <asselin_> Should we send an e-mail / update the 3rd party ci wiki page?
15:11:00 <mmedvede> we have two independent nodepool servers, so I did fan them out. But nodepool does not allow fanning out image builds within a single server
15:11:35 <mmedvede> asselin_: there probably should be a recommendation somewhere to say not to use default time
15:11:53 <asselin_> here perhaps? http://docs.openstack.org/infra/system-config/third_party.html
15:12:34 <rhe00_> am I reading the nodepool default right, that it is set to update at 14:14 every day?
15:12:37 <mmedvede> asselin_: you have a setup script that many third-party ci use, does it copy default nodepool.yaml, or leaves it to the user?
15:13:00 <asselin_> mmedvede, it copies the default
15:13:28 <mmedvede> rhe00_: yes, 14:14 is the time that should not be used :)
15:13:34 <rhe00_> ok
15:13:39 <asselin_> #action asselin update the nodepool.yaml template to use non-default and ask users to pick a random time
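For context, the knob being discussed lives in nodepool's YAML configuration; a non-default setting might look like the following (the `cron`/`image-update` keys match nodepool's configuration of that era, but the surrounding values here are illustrative):

```yaml
cron:
  cleanup: '*/1 * * * *'
  check: '*/15 * * * *'
  # The shipped default was '14 14 * * *'; pick any other daily time
  # so all third-party CIs don't hit the infra git farm at once.
  image-update: '41 6 * * *'
```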
15:13:47 <mmedvede> we are using 8:00 utc
15:14:27 <mmedvede> asselin_: +1 for adding good etiquette to the third party doc
15:15:19 <krtaylor> +1 might be a good thing to add to the docs
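The "pick a random time" suggestion can be sketched as a small shell snippet (a hypothetical helper, not part of any nodepool tooling): draw a random minute and hour, and step away from the shared 14:14 default if it comes up.

```shell
# Pick a random daily cron time for nodepool's image-update job,
# avoiding the 14:14 default that every CI shares out of the box.
# Uses od(1) on /dev/urandom so it works in plain POSIX sh.
minute=$(( $(od -An -N2 -tu2 /dev/urandom) % 60 ))
hour=$(( $(od -An -N2 -tu2 /dev/urandom) % 24 ))
if [ "$minute" -eq 14 ] && [ "$hour" -eq 14 ]; then
    minute=15   # nudge off the default slot
fi
echo "$minute $hour * * *"
```

The emitted string drops straight into the `image-update` field of nodepool.yaml.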
15:16:38 <asselin_> ok anything else or nexy topic?
15:16:46 <asselin_> next*
15:16:59 <mmedvede> nexy works too
15:17:09 <asselin_> :)
15:18:06 <ameade> o/
15:18:15 <asselin_> how are everyone's ci systems doing?
15:19:04 <asselin_> #topic how are everyone's ci systems doing?
15:19:23 <rhe00_> I currently have encrypted volumes test disabled... does it require a driver change to support that test?
15:19:35 <rhe00_> it seemed to fail for a while
15:20:10 <asselin_> rhe00_, yes, the tempest test was a false positive
15:20:32 <asselin_> I think there was a cinder e-mail or notice in a meeting about that
15:21:06 <ameade> seems like a bunch of cinder 3rd party CIs haven't voted in a number of days
15:21:08 <akerr> there are also open nova bugs related to the tests
15:22:21 <asselin_> #link https://review.openstack.org/#/c/199709/
15:22:23 <rhe00_> asselin: I remember the encrypted volumes test being a false positive but I thought the change https://review.openstack.org/#/c/193673/ was made above the volume drivers, so it should be transparent at the driver level?
15:23:02 <rhe00_> that's why I was surprised to see that some CI's passed after that change and some failed on encrypted volume tests.
15:24:07 <asselin_> rhe00_, well I know both our drivers do the right thing....we fixed them in Kilo
15:25:34 <akerr> https://bugs.launchpad.net/cinder/+bug/1463525
15:25:36 <openstack> Launchpad bug 1463525 in Cinder "There is no volume encryption metadata for rbd-backed volumes" [Undecided,Triaged]
15:25:37 <uvirtbot> Launchpad bug 1463525 in nova "There is no volume encryption metadata for rbd-backed volumes" [Undecided,Confirmed]
15:26:00 <akerr> needs fixed in nova as well, so might not be related to your driver
15:26:23 <asselin_> https://bugs.launchpad.net/cinder/+bug/1440227
15:26:24 <openstack> Launchpad bug 1440227 in Cinder "Not all volume drivers set the 'encrypted' key in connection_info" [Medium,Fix committed] - Assigned to Matt Riedemann (mriedem)
15:26:25 <uvirtbot> Launchpad bug 1440227 in cinder "Not all volume drivers set the 'encrypted' key in connection_info" [Medium,Fix committed]
15:26:53 <asselin_> ^^ this bug has links to how we fixed it in 3par
15:27:15 <asselin_> #link cinder encrypted volumes bug https://bugs.launchpad.net/cinder/+bug/1440227
15:27:45 <asselin_> #link 3par fix for encrypted volume: https://review.openstack.org/#/c/170346/
15:28:53 <asselin_> #link https://bugs.launchpad.net/nova/+bug/1439855    encrypted iSCSI volume fails to attach, name too long
15:28:54 <openstack> Launchpad bug 1439855 in OpenStack Compute (nova) "encrypted iSCSI volume fails to attach, name too long" [Medium,In progress] - Assigned to Nha Pham (phqnha)
15:28:54 <asselin_> #link https://bugs.launchpad.net/nova/+bug/1439869    encrypted iSCSI volume attach fails when iscsi_use_multipath is enabled
15:28:55 <asselin_> #link  https://bugs.launchpad.net/nova/+bug/1439861    encrypted iSCSI volume attach fails to attach when multipath-tools installed
15:28:56 <asselin_> #link https://bugs.launchpad.net/tempest/+bug/1432490 TestEncryptedCinderVolumes cryptsetup name is too long
15:28:57 <openstack> Launchpad bug 1439869 in OpenStack Compute (nova) "encrypted iSCSI volume attach fails when iscsi_use_multipath is enabled" [Medium,In progress] - Assigned to Tomoki Sekiyama (tsekiyama)
15:28:57 <uvirtbot> Launchpad bug 1439855 in nova "encrypted iSCSI volume fails to attach, name too long" [Medium,In progress]
15:28:57 <uvirtbot> Launchpad bug 1439869 in nova "encrypted iSCSI volume attach fails when iscsi_use_multipath is enabled" [Medium,In progress]
15:28:57 <uvirtbot> Launchpad bug 1439861 in nova "encrypted iSCSI volume attach fails to attach when multipath-tools installed" [Undecided,Confirmed]
15:28:58 <openstack> Launchpad bug 1439861 in OpenStack Compute (nova) "encrypted iSCSI volume attach fails to attach when multipath-tools installed" [Undecided,Confirmed]
15:28:59 <uvirtbot> Launchpad bug 1432490 in tempest "TestEncryptedCinderVolumes cryptsetup name is too long (dup-of: 1439855)" [Undecided,Invalid]
15:29:00 <openstack> Launchpad bug 1439855 in OpenStack Compute (nova) "duplicate for #1432490 encrypted iSCSI volume fails to attach, name too long" [Medium,In progress] - Assigned to Nha Pham (phqnha)
15:29:13 <asselin_> some more related bugs we've found ^^
15:30:00 <akerr> thats a lot of bugs
15:31:15 <asselin_> yes it is
15:31:39 <asselin_> that's what happens when people start testing, you find issues :)
15:32:11 <asselin_> anything else on this?
15:33:18 <asselin_> any other topics?
15:34:13 <asselin_> ok I have one:
15:34:19 <asselin_> #topic common-ci migration
15:34:35 <asselin_> anyone else migrating to the common-ci solution?
15:35:08 <asselin_> (it's not done, but can still start the process)
15:35:22 <asselin_> anyone not know what that is?
15:36:00 <asselin_> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
15:36:30 <pots> i think i understand the concept but not what it means in practice
15:36:55 <mmedvede> asselin_: we are migrating gradually
15:37:10 <asselin_> pots, in practice, it means we'll be all maintaining a single solution
15:37:39 <anteaya> <- is here now, thanks asselin_
15:38:12 <asselin_> pots, features added, bugs fixed, etc can be done using gerrit
15:38:36 <asselin_> so peer reviewed, tested (that's WIP)
15:38:46 <rhe00_> we are getting there. I wish we could all use the same image as well.
15:38:59 <asselin_> it also means the setup costs will go down
15:39:06 <mmedvede> rhe00_: the same image?
15:39:22 <rhe00_> the same image snapshot for nodepool
15:39:51 <mmedvede> rhe00_: I doubt it is possible. At least for our ci, we use ppc arch image
15:39:59 <asselin_> rhe00_, not sure that would work for us
15:40:45 <anteaya> let's work towards using what we have, make sure it is well documented and then see where we are after that
15:40:52 <asselin_> rhe00_, but we reuse as much of the image-build scripts as possible, and modify what we need (e.g. corporate proxy, pypi mirrors, etc.)
15:41:17 <rhe00_> asselin: yes
15:41:41 <asselin_> rhe00_, are you using DIB?
15:42:00 <rhe00_> asselin: no, I have not looked into that
15:42:13 <asselin_> rhe00_, I cannot recommend it enough
15:42:34 <asselin_> rhe00_, b/c it will cache a ton of data, so image builds are incremental
15:42:49 <anteaya> infra would love as many folks using dib as possible
15:43:06 <mmedvede> asselin_: I looked into dib, and the problem for us is that it uses chroot, which requires nodepool to be the same arch as the image it builds
15:43:08 <anteaya> mmedvede: have you made any headway here?
15:43:31 <mmedvede> anteaya: yes, still looking if it is possible to workaround chroot problem
15:43:33 <rhe00_> asselin: ok, I will see if I can find some time to experiment with dib
15:43:40 <asselin_> rhe00_, that saves a ton of network resources. Downloading fully built images wouldn't be much better.
15:44:01 <asselin_> rhe00_, also, in my experience, it's much easier to use, debug, etc.
15:45:00 <mmedvede> anteaya: basically, if nodepool worked on fedora, we could switch to dib sooner. It would take much more effort with the way things are now
15:45:50 <anteaya> ah yes, while we do have tests for openstack services to run on fedora we don't do the same for infra tools
15:45:58 <anteaya> :(
15:46:38 <mmedvede> anteaya: I can still try to bring up an ubuntu server with ppc arch, and do dib builds there, it would just take longer :)
15:46:39 <asselin_> mmedvede, why doesn't it work (briefly)?
15:47:07 <mmedvede> asselin_: our nodepool VM uses ubuntu x86, but we want to build ppc image in it
15:47:07 <anteaya> mmedvede: I understand, yes infra uses ubuntu os for all our tools
15:47:18 <nikeshm_> hi
15:47:47 <anteaya> what is ppc?
15:47:49 <mmedvede> asselin_: dib builder uses chroot, which breaks if you try chroot into ppc64 from x86
15:47:58 <asselin_> mmedvede, nodepool doesn't run on fedora?
15:48:13 <nikeshm_> my CIs are doing well now, but i am facing these failures some time http://paste.openstack.org/show/391923/
15:48:22 <mmedvede> anteaya: ppc=PowerPC
15:48:27 <anteaya> found it: https://en.wikipedia.org/wiki/Ppc64
15:48:30 <anteaya> mmedvede: thanks
15:48:54 <anteaya> nikeshm_: we are in the middle of a topic, can you wait your turn please?
15:49:06 <nikeshm_> sure
15:49:10 <mmedvede> asselin_: it is probably possible to get nodepool running. But the effort of rewriting puppet scripts and dealing with incompatibilities would be significant
15:49:10 <anteaya> thanks
15:49:42 <asselin_> mmedvede, ok I see
15:50:31 <asselin_> #topic open discussion
15:51:00 <banix> @kevinbenton are you here by any chance? question on ovs plugin
15:51:23 <anteaya> banix: do you realize you are in the third party meeting?
15:51:32 <asselin_> nikeshm_, line 251 shows you that devstack failed. Look at the log file mentioned in line 246
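The triage asselin_ describes, jumping to the failing line number rather than scrolling the whole console output, can be illustrated like this (the log content and path below are fabricated for the example; real devstack log locations vary per CI setup):

```shell
# Fabricated sample standing in for a real devstack run log.
log=$(mktemp)
printf '%s\n' \
    '2015-07-20 15:40:01 | stack.sh begins' \
    '2015-07-20 15:44:12 | ERROR: cinder-volume failed to start' \
    '2015-07-20 15:44:13 | stack.sh failed' > "$log"
# -n prints line numbers (the "line 251" style pointer above),
# -m1 stops at the first match.
grep -n -m1 'ERROR' "$log"
```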
15:52:03 <banix> anteaya: no, didn't realize; my apologies
15:52:28 <anteaya> banix: np, hope you find kevin
15:52:47 <nikeshm_> asselin_ : thanks, also sometimes i get 'not a configured node now'
15:53:09 <asselin_> nikeshm_, can you provide more context?
15:53:19 <nikeshm_> asselin_ : sure looking
15:55:03 <asselin_> anyone else has something to bring up for last 5 minutes?
15:56:46 <anteaya> thanks for chairing the meeting asselin_, I appreciate it
15:57:06 <asselin_> anteaya, you're welcome
15:59:01 <nikeshm_> asselin_: http://paste.openstack.org/show/391994/
15:59:35 <asselin_> nikeshm_, let's take this offline since the meeting is basically done now
15:59:44 <asselin_> nikeshm_, in cinder channel
15:59:48 <nikeshm_> sure
15:59:50 <asselin_> #end-meeting
16:00:06 <asselin_> #endmeeting