15:00:15 #startmeeting third-party
15:00:16 Meeting started Mon Apr 13 15:00:15 2015 UTC and is due to finish in 60 minutes. The chair is anteaya. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 The meeting name has been set to 'third_party'
15:00:23 hello
15:00:40 raise your hand if you are here for the third party meeting
15:00:51 hi
15:00:56 hey asselin
15:01:00 #chair asselin
15:01:01 Current chairs: anteaya asselin
15:01:24 asselin: I'm on a train, if my wifi gets knocked out can you continue and ensure we end on time?
15:01:53 ok sure
15:01:56 thanks
15:02:10 while we see if anyone else is joining us today, how are you doing?
15:02:31 I'm fine. It's Monday :)
15:02:35 yay Monday
15:03:04 what is the status of the tempest volume encryption situation?
15:03:11 o/
15:03:16 hello ameade
15:03:25 hey hey
15:03:32 hi
15:03:37 hi
15:03:47 hemna was working on that last week. it is quite broken
15:03:48 anteaya, there are quite a few bugs out for volume encryption
15:03:56 hmmmm
15:04:02 I think only non-multipath iSCSI works
15:04:09 is thingee aware of the situation?
15:04:10 we disabled all of them since they're broken.
15:04:14 :(
15:04:26 the tests passing doesn't mean anything anyway (false positive)
15:04:31 right
15:05:09 I think there are 3-4 bugs open now.
15:05:17 anyone working on them?
15:05:57 hemna was working on it last week. not sure if that is his highest priority
15:06:02 I saw Nha Pham (phqnha) pick one up...but hemnafk might be working on it too: https://bugs.launchpad.net/bugs/1439855
15:06:03 Launchpad bug 1439855 in OpenStack Compute (nova) "encrypted iSCSI volume fails to attach, name too long" [Medium,Triaged] - Assigned to Nha Pham (phqnha)
15:06:26 are any of you able to help address any of the bugs in any way?
15:06:44 I am working on one in Cinder: https://bugs.launchpad.net/cinder/+bug/1442302
15:06:45 Launchpad bug 1442302 in Cinder "volume manager should set encrypted property in connection_info" [Undecided,In progress] - Assigned to Richard Hedlind (richard-hedlind)
15:06:49 help track down the exact source, for instance?
15:06:54 rhe00: yay
15:07:20 asselin: do you know if hemna is around?
15:07:30 maybe he could give us an update on his work
15:07:44 well that isn't really required
15:07:53 rhe00, he should be in later. You can ping him in the cinder channel.
15:07:53 as this is a third party meeting, not a cinder meeting
15:08:00 ok
15:08:09 mostly I just wanted to ensure cinder core is aware
15:08:30 and that third party operators get involved in supporting a solution as much as possible
15:08:33 that is my goal
15:08:46 anteaya, yes, there are formal bugs open in cinder, nova, & tempest.
15:08:47 so thank you, rhe00, for picking up that bug
15:08:48 asselin: have the encrypted volume tests been disabled in tempest?
15:08:56 asselin: great
15:08:57 rhe00, no
15:09:18 that would be thingee's decision I think
15:09:33 you can share your thoughts on that, but I would leave that to thingee to decide
15:09:44 so let's move on, shall we?
15:09:52 sure
15:09:57 sure
15:10:04 does anyone have anything they would like to discuss about their system today?
15:10:15 thanks
15:10:27 I found a workaround for the cinder "test_minimum_scenario" failure --> use neutron networking instead of nova-networking
15:10:43 asselin: do expand
15:11:30 no idea why nova-networking fails. I was debugging it, and for some reason the nova instance was not able to get an IP address, hence the test failed
15:11:40 :(
15:11:58 do you feel it warrants a bug report?
15:12:03 I debugged it a bit, and it's the path going from nova to the instance that's broken.
15:12:13 o/
15:12:22 hey wznoinsk
15:12:44 tcpdump showed that an IP address was being assigned, but the packet couldn't make its way in....
15:12:52 hmmmm
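(For anyone debugging a similar nova-network DHCP failure, a tcpdump invocation along the following lines shows whether DHCP offers are actually reaching the instance. This is a minimal sketch, not taken from the meeting; br100 is nova-network's default bridge name and is an assumption here.)

    # Watch DHCP traffic (ports 67/68) on the hypervisor bridge.
    # br100 is nova-network's default bridge; substitute your own interface.
    sudo tcpdump -i br100 -n port 67 or port 68

If offers appear here but the guest never configures its address, the break is on the path between the bridge and the instance, which matches the behaviour described above.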
15:13:24 asselin: what action would you like to take on that?
15:13:42 seems this is hitting certain ci systems, and not others....no idea why that would be the case.
15:14:00 anteaya, none: I switched to neutron, which is working.
15:14:11 hmmmm
15:14:28 anteaya, anyway, it seems that's the future solution, so all the better that one works.
15:14:35 true
15:14:43 what did you do to fix it?
15:15:05 * asselin looks up the line
15:16:16 #link http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n82
15:16:23 export DEVSTACK_GATE_NEUTRON=1
15:16:53 where did you put that in your code?
15:17:19 you can put that in your jenkins job
15:17:37 awesome
15:17:47 thank you for sharing that, asselin
15:17:52 any questions here?
15:18:02 #link https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample#L57
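(A minimal sketch of how that export can be wired into a Jenkins Job Builder definition, modeled loosely on the sample linked above; the job name and the second export are illustrative placeholders, not taken from the meeting.)

    # Hypothetical JJB fragment; names are placeholders.
    - job:
        name: dsvm-tempest-my-cinder-driver
        builders:
          - shell: |
              #!/bin/bash -xe
              # Run the devstack-gate job against neutron instead of nova-network.
              export DEVSTACK_GATE_NEUTRON=1
              export DEVSTACK_GATE_TEMPEST=1
              # ...then clone and invoke devstack-gate as in the sample above.

Since devstack-vm-gate.sh keys off DEVSTACK_GATE_NEUTRON (see the devstack-gate link above), exporting it in the job's shell builder is enough; no change to the tests themselves is needed.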
15:18:48 nice
15:19:17 any more for this topic?
15:19:42 that's it, just wanted to share it with others in the same boat
15:19:53 thank you for doing so
15:20:12 not wanting to rush you, just no point in having dead air
15:20:24 anyone else with anything to discuss today?
15:20:31 I noticed on Friday that some CIs do not provide console logs in their results, which makes it really hard to know what tests they ran.
15:20:40 no kidding
15:20:49 rhe00, ++
15:20:53 i would just ping them individually
15:20:56 what project are those ci systems testing?
15:20:58 ok
15:21:06 Cinder
15:21:14 might test others as well, not sure
15:21:35 so I suggest you create a list of the systems you saw, and share the list with thingee
15:21:37 rhe00, do you have a link?
15:21:47 or what anteaya said
15:21:48 I was trying to chase down false positives, since I checked in an incomplete fix and pretty much all CIs should have failed.
15:21:50 they need to have console logs
15:22:00 rhe00: good test
15:22:07 rhe00: what did you discover?
15:22:11 asselin: one second, I'll find it
15:23:20 This is the link to my check-in: https://review.openstack.org/#/c/172531/
15:23:36 #link https://review.openstack.org/#/c/172531/
15:24:02 the VMware NSX CI was the one without console logs. Now it is not accessible at all.
15:24:35 rhe00: please bring this to thingee's attention
15:24:46 rhe00: thank you for sharing your findings
15:24:56 anteaya: I will
15:25:08 the ci operators are the best source of ensuring quality in the ci systems of a project
15:25:32 poor reliability means that all cis for the project are painted with the same brush
15:26:05 by helping operators get good quality in their ci systems, all ci systems for that project are seen as more reliable
15:26:09 so thank you
15:26:44 I still have this issue: these tests are still failing in my setup: test_cinder_volume_create_delete, test_cinder_volume_create_delete_retain
15:26:47 anteaya: I will follow up on this and hopefully help get the situation improved
15:26:50 anyone else?
15:26:52 rhe00, try the monitoring dashboard that patrickeast set up, it will show comparisons for all cinder systems
15:27:00 rhe00: thank you
15:27:12 asselin: nice job, carry on
15:27:18 related to heat
15:27:18 krtaylor: do you have the link for that? I used to have it, but not sure where I have it
15:27:22 hm, which seems to be down at the moment http://ec2-54-67-102-119.us-west-1.compute.amazonaws.com:5000
15:27:23 * anteaya reorganized the space
15:27:36 rhe00, ^^^
15:28:15 krtaylor: got it. anteaya: carry on
15:28:28 #link https://github.com/patrick-east/scoreboard
15:28:48 rhe00, ^^ if you want to run your own
15:29:05 asselin: I might try that. thanks
15:29:11 asselin, good point, we are running one internally too
15:29:20 very nice simple tool
15:30:04 nice
15:31:03 asselin: did you want to ask your question again?
15:31:16 I still have this issue: these tests are still failing in my setup: test_cinder_volume_create_delete, test_cinder_volume_create_delete_retain
15:32:05 (it would be great to have a scoreboard showing what tests each ci is running)
15:32:21 it would indeed
15:32:52 those are related to heat. tests pass when run manually, but not as part of the ci job....?
15:33:02 how interesting
15:33:46 well so far it doesn't sound like anyone present is able to confirm your findings
15:33:50 but I don't have stack traces available now.
15:33:58 yeah, guess it's just me.
15:33:59 asselin, is it related to the caching of all the heat images (regardless of whether they will be used or not)?
15:34:42 krtaylor, not sure...it seemed to be an issue with the heat client not being able to access the heat service....??
15:35:33 asselin, hm, that doesn't sound the same
15:35:42 anyway, we can move on....when I get stack traces I can ask again.
15:35:52 okay sounds good
15:36:06 does anyone have anything else they would like to discuss today?
15:36:20 asselin, the problem we were seeing was failure when it tried to download the fedora cloud image -> https://github.com/openstack-dev/devstack/commit/89983b6dfe15e8e83f390e9870cc3ddfbf2b8243
15:37:01 asselin, not sure if you are pinned to an old version maybe
15:37:20 krtaylor, no i'm on master now....
15:37:38 asselin, must be a different problem then
15:38:06 krtaylor, yes, thanks anyway
15:38:50 any other topics needing to be discussed today?
15:39:35 if the answer is no, feel free to say no, we can have 20 minutes of our lives back
15:39:50 not from me
15:39:56 thanks asselin
15:40:01 anyone else?
15:40:06 none for me, thanks anteaya
15:40:12 none here
15:40:15 okay
15:40:21 let's round it out then
15:40:27 asselin: would you do the honours?
15:40:38 sure:
15:40:42 #endmeeting