19:17:41 #startmeeting tripleo
19:17:42 Meeting started Tue Jan 27 19:17:41 2015 UTC and is due to finish in 60 minutes. The chair is greghaynes. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:17:43 * bnemec is eating lunch
19:17:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:17:46 The meeting name has been set to 'tripleo'
19:17:51 #topic agenda
19:17:53 * bugs
19:17:55 * reviews
19:17:57 * Projects needing releases
19:17:59 * CD Cloud status
19:18:01 * CI
19:18:03 * Specs
19:18:05 * open discussion
19:18:07 Remember that anyone can use the link and info commands, not just the moderator - if you have something worth noting in the meeting minutes feel free to tag it
19:18:13 #topic bugs
19:18:26 #link https://bugs.launchpad.net/tripleo/
19:18:28 #link https://bugs.launchpad.net/diskimage-builder/
19:18:30 #link https://bugs.launchpad.net/os-refresh-config
19:18:32 #link https://bugs.launchpad.net/os-apply-config
19:18:34 #link https://bugs.launchpad.net/os-collect-config
19:18:36 #link https://bugs.launchpad.net/os-cloud-config
19:18:38 #link https://bugs.launchpad.net/os-net-config
19:18:40 #link https://bugs.launchpad.net/tuskar
19:18:42 #link https://bugs.launchpad.net/python-tuskarclient
19:19:04 We still have https://bugs.launchpad.net/tripleo/+bug/1374626, which I believe is still blocking on SpamapS ENOTIME
19:19:58 also https://bugs.launchpad.net/tripleo/+bug/1401300, that's blocking on me getting distracted by CI fails
19:20:26 any other bugs people want to mention?
19:20:54 nope
19:21:01 https://bugs.launchpad.net/diskimage-builder/+bug/1407828 is weird
19:21:15 maybe one of the RHers could help out with that?
19:21:34 greghaynes: I responded to it already.
19:21:36 oh, looks like bnemec did :)
19:21:46 The link works for me, so I can't make any more progress on it.
19:22:16 ah, ok, since it's been over a week maybe we should just mark it as invalid
19:23:07 ok, that's all I got
19:23:10 moving on...
19:23:29 #topic reviews
19:23:36 #info There's a dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
19:23:38 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:23:40 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:23:42 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:24:29 I'm pretty sure our list of oldest reviews is different this week, so that seems like a good thing
19:25:05 I don't really have anything to add to that...
19:25:21 we seem to be doing fine with our review backlog imo, so *shrug*
19:26:30 oh, I think we said this week SpamapS would look at culling some core reviewers if their numbers stayed low?
19:26:53 Well, in the near future.
19:27:10 I'd be inclined to wait maybe for the next time we meet at this time.
19:27:10 didn't he already?
19:27:20 Just to make sure we're >30 days out from the holidays.
19:27:30 He removed a couple who said they were no longer interested.
19:27:47 I think he was holding off on the forced removals though.
19:27:57 yep, seems reasonable
19:28:36 ok, so it looks like our meeting wiki page isn't updated with our new agenda
19:28:38 #link http://lists.openstack.org/pipermail/openstack-dev/2015-January/054575.html
19:29:23 #action Update https://wiki.openstack.org/wiki/Meetings/TripleO to reflect our new agenda
19:29:44 Our next topic was supposed to be operators?
19:30:09 #topic live cloud status
19:30:40 Do we have anyone here who is running / representing a live cloud?
19:31:15 I think our two who have been working on the HP CI clouds tend to come to the other meeting
19:31:17 Or anyone at all besides the three of us? :-)
19:31:39 heh, yea
19:32:18 ooook. Welp, going to move on to....
19:32:28 #topic Projects needing releases
19:32:44 I released last week, the new yubikey is awesome
19:33:31 anyone want to take it this week?
19:34:27 *crickets*
19:34:43 ok, I'll do it if things look like they need releasing
19:34:51 #action greghaynes to release all the things
19:35:11 #topic CI
19:35:27 so, I hope people have something to say about this :)
19:35:59 It's broken.
19:36:03 that it is
19:36:05 It looks like a fix was proposed
19:36:12 a fix?
19:36:16 for neutron
19:36:18 ah
19:36:25 yep, just in the merge queue
19:36:58 I'd also like to point out http://logs.openstack.org/19/149819/3/check-tripleo/check-tripleo-ironic-overcloud-f20-nonha/3e39a48/logs/get_state_from_host.txt.gz
19:37:08 which shows why our f20 jobs have no seed logs
19:38:36 So there's still the spurious failure that is causing most f20 jobs to fail, and also the spurious nodes going into an error deleting state in heat
19:39:13 I think it would be awesome if we could rally some support around getting those fixed, they seem to be causing a lot of pain for anyone trying to use our CI :)
19:39:46 So I caught part of the conversation about the seed_logs thing.
19:40:09 Were we thinking that it's because there are no logs at all?
19:40:34 oh, no, there's a failure on f20, but an unrelated bug makes us get no logs for the seed in this certain failure condition
19:40:42 so debugging the failure is basically impossible
19:40:58 http://logs.openstack.org/19/149819/3/check-tripleo/check-tripleo-ironic-overcloud-f20-nonha/3e39a48/ is an example of such a job
19:41:59 And we don't have a working theory yet?
19:42:18 and the reason for no seed logs we have now determined to be that tar --exclude somefile causes tar to exit 1 if somefile does not exist, and we are --excluding a file that doesn't exist on these jobs
19:42:39 Ah.
19:42:51 Maybe we should just touch that file before doing the tar.
19:42:55 I'm working on a patch for that atm (pretty easy), but we'll still have the f20 failure to fix once that goes in
19:42:59 yep
19:43:14 Okay, cool
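[Editor's note: a minimal sketch of the touch-before-tar workaround suggested above, assuming the meeting's diagnosis that the log-collection tar exits non-zero when the excluded file is missing. The log path and archive name below are hypothetical, not the actual tripleo-ci ones.]

```bash
#!/bin/bash
# Hypothetical seed log path -- the real tripleo-ci path may differ.
SEED_LOG=/var/log/seed-console.log

# Per the diagnosis above, the log-collection tar exited 1 on these jobs
# because the file named by --exclude did not exist, so the whole step
# failed and no seed logs were archived. Creating the file first keeps
# the exit status clean.
touch "$SEED_LOG"

# Archive the logs, excluding the (now guaranteed to exist) seed log.
tar -czf logs.tar.gz --exclude="$SEED_LOG" /var/log
```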
19:43:58 Any other comments about CI? Maybe other spurious failures I didn't mention
19:45:14 ooook
19:45:21 #topic Specs
19:45:44 #link https://review.openstack.org/#/q/status:open+project:openstack/tripleo-specs,n,z
19:46:26 looks like there are two specs to review, one of which could maybe be merged, but more +1's would be awesome (the selinux one)
19:46:38 Agreed.
19:47:59 I doubt there are any more comments on that... since there are only two specs
19:48:11 #topic open discussion
19:48:38 #action Everyone read the mid-cycle agenda and add/comment on what we have!
19:48:50 #link https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup
19:49:12 I had a question related to OoO, if folks could entertain it...
19:49:22 sure!
19:49:45 I have a private OpenStack cloud on which I'm trying to spin up two VMs and a connecting network.
19:49:50 aaand it looks like etherpad.o.o is down, so maybe don't go editing the agenda right now ;)
19:50:03 I want to run DevStack on each of the two VMs and test VPNaaS between them.
19:50:36 With VirtualBox, I would just have an L2 net, and devstack would create a br-ex with an IP on the public net and eth1 in that bridge.
19:50:56 However, with OS, I'm having problems trying to do the interconnecting network in this manner.
19:51:01 Is it possible?
19:51:21 Are you using tripleo to do this?
19:51:55 No. Mostly because I don't know how, and I am constrained by the undercloud that I have to use.
19:52:10 But I was wondering if OoO has tackled the same issue.
19:52:35 Two VMs running OpenStack, talking over a network that was created on the undercloud.
19:53:19 We don't run our VMs on OpenStack (yet...)
19:53:26 They're created directly with virsh.
19:53:58 I can put IPs on eth1 and ping, so connectivity works, but if I add eth1 into br-ex to run DevStack, I lose connectivity.
19:54:03 bnemec: Ah.
19:54:05 or they are actual baremetal for a prod deployment
19:54:13 Well, that too. :-)
19:54:28 pcm_: Yeah, it's a little tricky running OpenStack nested in itself.
19:54:42 bnemec: Is it possible?
19:54:57 Neutron locks down even private networks pretty hard, so if the traffic it sees coming from a VM isn't exactly what it expects, it blocks it.
19:55:19 this sounds similar to the multinode devstack testing deal
19:55:28 pcm_: The only way I've gotten it to work is hacking up Neutron to allow more traffic.
19:55:40 Which may not be the only way, but it's what I've got.
19:55:46 where they just made a gre tunnel to make an l2 between the nodes
19:55:47 * bnemec is not a networking expert
19:55:53 bnemec: Yeah, that's what I seem to be seeing. If I move the IP to br-ex and swap MACs, I can ping the br-ex, but not anything else (like the public IP of the Neutron router).
19:56:34 greghaynes: Maybe I can explore that. I essentially want an L2 network between the VMs.
19:56:57 yep, that's what they did. It's less than ideal obviously (tubes inside of tubes) but technically it should work
19:57:17 also one gotcha they ran into was that nova sets ebtables rules
19:57:33 might make sure you're not running into that if you're seeing packets not get where they should be going
19:58:23 ok, I think meeting time is basically up
19:58:26 greghaynes: Thanks. I'll ping others here that (hopefully) know how to set up GRE tunnels and give it a try.
19:58:30 thanks all!
19:58:31 pcm_: np
19:58:38 Thanks for attending all
19:58:40 #endmeeting
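[Editor's note: for reference, a hedged sketch of the GRE-tunnel approach greghaynes mentions from the multinode devstack testing setup: an L2 segment between two VMs carried over GRE. All addresses and interface names here are examples, not anything stated in the meeting.]

```bash
# On VM A (10.0.0.1), peering with VM B (10.0.0.2); both IPs hypothetical.
# gretap (as opposed to plain gre) carries Ethernet frames, giving the
# L2 network between the VMs that pcm_ is after.
sudo ip link add gretap1 type gretap local 10.0.0.1 remote 10.0.0.2
sudo ip link set gretap1 up

# If br-ex is an Open vSwitch bridge (as devstack creates), plug the
# tunnel endpoint into it so br-ex on both VMs shares one L2 segment:
sudo ovs-vsctl add-port br-ex gretap1

# Repeat on VM B with local/remote swapped.

# Per the ebtables gotcha mentioned above: if packets still vanish,
# list the rules nova may have installed and look for drops:
sudo ebtables -L
```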