15:00:29 #startmeeting third-party
15:00:31 Meeting started Wed Mar 18 15:00:29 2015 UTC and is due to finish in 60 minutes. The chair is krtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:34 The meeting name has been set to 'third_party'
15:00:46 anyone here for the third party working group meeting?
15:00:48 hi
15:00:52 Hi
15:00:56 hi asselin_
15:00:59 hi lennyb
15:01:04 hello
15:01:08 hey mmedvede
15:01:12 hey
15:01:16 hi everyone
15:01:52 hi everybody
15:02:06 we have a full agenda today
15:02:10 #link https://wiki.openstack.org/wiki/Meetings/ThirdParty#3.2F18.2F15_1500_UTC
15:03:30 also a quick announcement: this Saturday 21st is the gerrit os upgrade
15:03:47 and May 9th is the gerrit upgrade
15:04:07 gerrit os upgrade --> new ip address
15:04:14 so there may be some service interruptions
15:04:31 yes, an email has been sent out with details
15:04:40 * krtaylor looks for link
15:05:19 #link http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
15:05:31 I haven't seen anything newer
15:05:49 any other quick announcements?
15:06:11 ok, on to agenda items
15:06:47 #topic Third-party CI documentation
15:07:03 I am calling this one done for now
15:07:17 all outstanding patches have merged except one
15:07:23 mmedvede, it's your links patch
15:07:50 oh
15:08:02 need to check on that
15:08:14 I feel that any other changes can happen after openstack-ci, downstream-puppet
15:08:50 any questions or comments on third party ci documentation?
15:09:33 next
15:09:57 #topic openstack-ci or downstream-puppet
15:10:12 asselin_, good progress here
15:10:32 yes, TC approved the governance change yesterday
15:10:41 one of the project additions has merged
15:10:45 yes, great news
15:11:14 next we should be able to merge the project-config change & then we'll have a repo we can submit new changes to.
15:12:07 I did put a few items in storyboard.
Still need to populate more. This way those who want to work on it can take ownership of the different sections.
15:12:45 asselin_, do you have a link handy?
15:12:53 trying to find it...
15:13:18 here are the patches:
15:13:21 #link https://review.openstack.org/#/q/topic:downstream-puppet,n,z
15:13:58 with this being a focus item for infra, this will make quick progress
15:14:01 #link https://storyboard.openstack.org/#!/story/2000101
15:14:19 but a good opportunity for working group folks to get involved again
15:14:25 * krtaylor looks
15:14:58 asselin_, what do you need help with first?
15:15:07 I need to add more tasks.
15:15:39 I will add one for each component mentioned in the spec.
15:15:45 asselin_, then a few patches to prime the pump I presume
15:16:32 yes, the log server one is probably the easiest. I have something up for that already, and will adjust it to fit the direction of the spec
15:18:41 asselin_ thanks for all your work on this, it is really important to this group
15:18:46 ok, any questions for asselin_ ?
15:19:20 next topic then
15:19:25 #topic Repo for third party tools
15:19:48 this has started with a set of links in the Third Party CI Working Group wiki page
15:20:18 we (PowerKVMCI) are still getting approval to make our tools public
15:20:49 not expecting any problems, just have to get the appropriate checks
15:21:20 I encourage everyone to put links to any tools that help them in their environment here
15:21:35 #link https://wiki.openstack.org/wiki/ThirdPartyCIWorkingGroup#Third_Party_CI_System_Tools_Index
15:22:29 this is a repository for info that has been cluttering up the meeting page for a while, I need to move some of that info around at some point
15:23:07 the plan is to gather up tools and see if we have enough mass to create a new project repo
15:23:21 seems like a good starting point
15:23:37 maybe alongside openstack-ci or in stackforge
15:24:28 we can list early proofs of concept for monitoring dashboards there also
15:25:05 that was more of an announcement to gather tools, but any questions on that?
15:26:05 onward
15:26:10 #topic What to do with monitoring dashboard
15:26:24 so this spec has slowed to a stop
15:27:08 patrickeast, I like your view, mmedvede has brought that up behind our firewall
15:27:25 patrickeast: it works :)
15:27:29 nice!
15:27:48 i have been swamped with other stuff so i haven't had any time to keep tinkering with it
15:28:00 all kinds of easy stuff to improve upon there
15:28:06 even the functionality it already has is very helpful
15:28:07 with a few changes, it could be a worthy radar replacement
15:28:33 yes, very nice to see our progress against jenkins quickly
15:28:49 * asselin_ looks for link
15:29:07 #link https://github.com/patrick-east/scoreboard
15:29:25 where's the link to the running version?
15:29:28 thanks mmedvede
15:29:42 asselin_: I think it is down
15:29:46 oh uh
15:29:49 lemme check
15:29:53 http://ec2-54-67-102-119.us-west-1.compute.amazonaws.com:5000/?project=openstack%2Fnova&user=&timeframe=24
15:30:02 yeah, that's why we started one
15:30:13 ah yea
15:30:15 it's down
15:30:30 I thought it was just a demo and not intended for everybody's use
15:30:31 i have one inside my firewall that i keep track of mainly
15:30:35 yea exactly
15:30:47 needs a beefier server for general use
15:30:55 ok I guess I'll set one up too :)
15:31:27 ok, so back to the spec, abandon and mod scoreboard?
15:32:04 maybe we could get infra to spin up a vm to run it on?
15:32:33 sounds like a new simplified spec is in order
15:32:41 krtaylor, yes, on both
15:32:52 yea i liked what we had going in that spec
15:32:58 we just added too much
15:33:11 agreed, it was useful work
15:33:13 but there is also a demand for something now i think
15:33:19 but it did boil the ocean
15:33:29 sort of a stop gap solution, and a more long term solution
15:33:35 there are some really good ideas there though
15:33:37 patrickeast, +1
15:33:50 krtaylor, yes, and they should be put into v2
15:34:31 ok, I'll take an action to work with sweston and abandon that spec, start a new simplified one
15:34:59 #action krtaylor to migrate old dashboard spec to new simplified v2
15:35:27 any other questions or comments on monitoring dashboard?
15:36:18 ok then, on to one of my favorite parts of these meetings
15:36:27 #topic Highlighting Third-Party CI Service
15:36:44 hi
15:36:55 this week we have wznoinsk to tell us about Intel Networking CI
15:37:10 first of all I need you to have a quick look at http://pastebin.com/972cE2mc
15:37:11 thanks wznoinsk, you have the floor
15:37:40 I'll be focusing on point 5.
- workarounds using docker/containers in a CI system
15:38:01 wow, nice writeup
15:38:05 pastebin is a bit of background and some thoughts
15:38:38 #link http://pastebin.com/972cE2mc
15:39:39 long story short: we needed a CI system that would spin up fast without needing a farm of hosts/VMs; docker was a perfect solution for getting clean instances of an operating system, and quite performant compared to VMs
15:40:33 because you share the kernel between the host and the containers running on it, there are a few things you may hit (regarding resource sharing etc - if more than one container needs access to hugepages for example)
15:42:02 we had to allocate hugepages on the host because that's the most reliable way of getting the amount of hugepages you've originally requested; allocating hugepages inside the containers wouldn't be guaranteed
15:42:28 wznoinsk, are jenkins and the other infra services running in a container also? different system?
15:43:28 jenkins is running on the same baremetal the containers are spawned on
15:44:19 wznoinsk, this is a really interesting setup
15:44:37 it's doable to have jenkins inside a container; so far we only provide a pip mirror (bandersnatch), apt/yum cache and docker registry (to keep CI images) in containers
15:45:20 yea this setup is really cool
15:45:36 * patrickeast wonders if containers would make for easier FC testing
15:45:53 * krtaylor notes the 5-7 min setup
15:45:58 haha yea
15:46:02 that's pretty sweet
15:46:13 while it's possible to link containers using linux bridge/ovs on the same host, I think for containers sitting on different baremetals you still need ssh etc.
15:48:07 wznoinsk, are you running only tempest? What about https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases and the Rally test suite?
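The host-side hugepage allocation mentioned above (allocating on the host so containers can rely on the requested amount being present) lends itself to a pre-flight check before spawning a CI container. The sketch below is illustrative, not from the Intel CI tooling: only the `/proc/meminfo` counters are standard, and the function names and threshold are assumptions.

```python
HUGEPAGE_PREFIX = "HugePages_"  # /proc/meminfo counters: Total, Free, Rsvd, Surp

def parse_hugepages(meminfo_text):
    """Extract the HugePages_* counters from /proc/meminfo-style text.

    These counters are plain page counts (unlike most meminfo fields,
    they carry no 'kB' suffix), so a simple split is enough.
    """
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith(HUGEPAGE_PREFIX):
            key, value = line.split(":")
            counters[key.strip()] = int(value.strip())
    return counters

def enough_hugepages(meminfo_text, required):
    """True if the host still has at least `required` free hugepages,
    i.e. it is safe to start another hugepage-hungry container."""
    return parse_hugepages(meminfo_text).get("HugePages_Free", 0) >= required
```

In use, the text would come from `open("/proc/meminfo").read()` on the baremetal host, and a job launcher could refuse to spawn a container when `enough_hugepages()` returns False.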
15:48:40 http://intel-openstack-ci-logs.ovh/networking-ci/refs/changes/95/158495/19/console.log.gz - 10 mins stacking, http://intel-openstack-ci-logs.ovh/networking-ci/refs/changes/60/163860/5/console.log.gz - 12 mins; 5-7 mins was seen when the machine was a bit less busy
15:49:46 lennyb_: our guys that own the PCI/SRIOV CI are doing PCI tests; networking is mainly ovs on intel's dpdk
15:50:01 wznoinsk, what are the main reasons for choosing a solution built on containers? speed?
15:50:06 we don't have any custom tests for that one, we will for the (still to come) numa ci
15:50:31 thanks
15:51:07 krtaylor: compared to VMs yes, and if you're talking about top notch hardware (nic?) features you may not have them inside VMs to test them
15:52:14 compared to baremetal on the other hand - it's cleaner, the container has a filesystem isolated from the host so devstack is no longer messing with your operating system/pip packages
15:52:52 wznoinsk, really nice summary, the pros/cons section will help others trying to make this same decision
15:53:24 I should make a note that it's not a finished list - especially the cons part ;-)
15:53:41 wznoinsk, what is a problem that stumped you for a while and how did you get around it?
15:54:03 I have a feeling upstream ovs may not necessarily play well with the network namespaces containers/docker are built around
15:54:55 the one above; still investigating why I'm having problems with vanilla ovs, after trying different setups and kernels I still didn't figure out the reason
15:56:25 wznoinsk, have you scripted any tools to help with monitoring or deploying in this environment? any you'd want to share?
15:57:58 there are some examples in the pastebin as well, but the bigger one I think was that docker/containers are still quite fresh to the world and there are some issues with these; you have to kill a container that has been hanging there for too long for a reason that's not yet resolved in docker/lxc, and you need to monitor resources on a lower level to react when one of the containers gets abusive about memory/other resources, blocking other containers
15:58:45 I have a few nagios checks I'm using for hugepages, I need to write a few to monitor docker/jenkins behaviour/stuck builds
15:59:04 we are close to time, any questions for wznoinsk ?
15:59:09 not much to share at the moment but some nagios checks may land in your tools index eventually
15:59:54 wznoinsk, thanks for sharing. very interesting setup.
15:59:56 excellent, that would be a welcome addition
16:00:10 happy to help if anyone's interested in containers, it's still new to me so I may struggle to answer all questions but will do my best, i'm on IRC /whois wznoinsk
16:00:12 wznoinsk: that was great
16:00:14 wznoinsk, yes, thanks for sharing about your system, very interesting test environment
16:00:30 thanks everybody, great meeting!
16:00:35 thanks
16:00:46 #endmeeting
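The "kill a container that has been hanging there for too long" workaround described near the end of the meeting can be reduced to a small watchdog: list running containers with their start times and kill any that have outlived a per-job time budget. This is a minimal sketch, not the Intel CI code; the two-hour budget, the function names, and the `(id, started_at)` input shape are all assumptions, while `docker kill` itself is a real CLI command.

```python
import subprocess
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=2)  # assumed per-job budget; tune to your longest run

def overdue(started_at, now, max_age=MAX_AGE):
    """True if a container started at `started_at` has outlived its budget."""
    return now - started_at > max_age

def kill_stale_containers(containers, now, max_age=MAX_AGE, dry_run=True):
    """Select hung CI containers for removal.

    `containers` is an iterable of (container_id, started_at) pairs, e.g.
    gathered by inspecting running containers. Returns the ids selected;
    only actually invokes `docker kill` when dry_run is False.
    """
    stale = [cid for cid, started in containers if overdue(started, now, max_age)]
    if not dry_run:
        for cid in stale:
            # check=False: the container may already have exited on its own
            subprocess.run(["docker", "kill", cid], check=False)
    return stale
```

Run periodically (e.g. from cron or a nagios check, as discussed in the meeting), this keeps one stuck devstack run from tying up the host's hugepages and other shared resources indefinitely.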