15:00:47 #startmeeting loci
15:00:48 Meeting started Fri Jan 11 15:00:47 2019 UTC and is due to finish in 60 minutes. The chair is hogepodge. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:51 The meeting name has been set to 'loci'
15:01:35 #link https://etherpad.openstack.org/p/loci-meeting agenda
15:01:48 o/
15:02:06 Hooray! We've increased our attendance by 100%!
15:02:30 haha
15:02:38 hope you had nice holidays :)
15:03:15 do you have a link handy for the last meeting's action points? It's too far back, I can't remember :p
15:03:26 You too. I got very sick, which was a downside. But on the upside it forced me to take a lot of time to rest, and I enjoyed the time doing nothing.
15:03:40 hogepodge: that's good :)
15:03:57 Same link. I don't think we worked on any of the items from that list.
15:04:15 I have been working on a possible AIO deployment to use for gate testing.
15:04:41 portdirect: you around?
15:04:53 Anyone else planning on joining?
15:06:01 ok, let's start off and we can chat about the topics on hand
15:06:02 I hope there will be more people indeed
15:06:11 sounds good
15:06:25 #topic stable release
15:07:00 Ok, so a big issue I've run into over the last month is building images from stable branches
15:07:00 o/
15:07:11 Hi hrw! 200% increase from last week :-)
15:07:40 ;)
15:07:48 hogepodge: look at you getting big numbers! :p
15:08:11 evrardjp: I like to look at the optimistic side of things
15:08:29 hogepodge: so my concern there is that there are many failing already; that's how I caught a newton issue.
15:08:46 When distributions update packages, they frequently go outside the bounds of the requirements repository, and the build will break.
15:08:57 so if we bring more branches into loci pipelines, we should make sure it's phased in slowly, maybe non-voting
15:09:33 hogepodge: sorry for the interruption
15:10:04 I believe that if we want to do branches, we need to run integration tests to guarantee that updates to requirements work for all major distros; then we can propose changes to openstack/requirements that guarantee a fix will work
15:10:41 Build tests aren't sufficient, because they don't tell you if a requirement update functionally breaks a service
15:11:29 fair
15:11:38 but isn't that the scope of a deployment project?
15:12:02 can we leverage the existing tooling, to prevent re-inventing the wheel?
15:12:04 I'm working on a simple loci-based installer that we might want to use, but it's pretty basic and might not be robust enough. https://github.com/hogepodge/locistack
15:12:21 (maybe continuing pbourke's work on kolla-ansible?)
15:12:39 That, or OpenStack Helm, could be solutions.
15:14:03 I'd say that openstack-helm needs k8s and all that jazz, which is a bigger series of moving parts.
15:14:21 Not that I am pro or against
15:14:52 Yeah, I'd like to keep things as simple as possible. The solution I'm writing requires docker-compose, which is much smaller
15:15:05 I am currently working on the image building of OSH, and yeah, I hope we'll do functional testing of loci images
15:15:44 sorry for my flaky bouncer there
15:16:18 In general though, if we have any solution that uses loci and does stable branch integrated builds for multiple distros, we can use it as a signal to requirements to do joint gating and bump as needed
15:16:32 I like the idea of functional testing of loci images, not really linked to the issue of "out of requirements bounds"
15:16:34 no problem :-)
15:17:18 hogepodge: I am curious if that happens a lot.
Because OSA has been using upper constraints in venvs for ages, and we never really had an issue with any distro
15:17:49 so I am really deeply curious about the root cause
15:17:58 evrardjp: it's happened twice to me, once with CentOS and once with Leap. Both were eventually corrected.
15:18:44 The root cause is that distros update packages to new versions, and those versions exceed the largest allowable version of a dependency.
15:19:09 if you want to be sane then venv
15:19:13 I am probably too young in this project to really judge. I like the idea of functional testing though, as expressed earlier
15:19:25 So it's typically a one-line fix, but the requirements maintainers won't bump a version to fix one distro without evidence it doesn't break other distros
15:20:27 So it's typically something like libvirt (which isn't pip-built but comes from the distro) being updated, which forces the pip version (which tightly tracks libvirt) to break
15:20:43 (when I say pip version I mean python-libvirt)
15:22:13 let me check if I understand correctly, comparing with OSA: in OSA, at some point we build all wheels for a distro, so the minimum pip package version takes into consideration the distro packages and other pip packages, which is why we seem to never break (compared to bare minimum). Here we build the wheels for upper constraints, but then we pip install, which could mean a lower version; is that right?
15:22:48 hogepodge: ok, so for that I think you might have a different solution
15:23:38 instead of lowering/increasing the minimum base version of a package, you could ensure the building of the wheel takes the appropriate C binding for said version
15:24:01 (in other words, building where libvirt is installed would produce a different result)
15:24:10 omg my english is bad
15:24:27 I will check some things in code
15:24:43 Two things are happening. On a distro, if I install libvirt, I may get version x.
Upper constraints may limit python-libvirt to version x-1, which will cause the build to break.
15:24:54 in the meantime, I think it's great you're working on functional testing :)
15:25:48 I need an environment I can stand up quickly and repeatedly. :-) The functional testing will be a nice side effect. I also like to do little manual installs to check on the progress and health of individual projects and how they're evolving.
15:26:21 It's time-consuming though, and I hope it's the last time I'll take on such a task.
15:27:05 yup... deploy projects are like that :)
15:28:08 I have all sorts of feedback for individual projects, which I'm not sure would be welcomed or not (for example, all API projects should have consistent ways to launch as a WSGI app; currently Neutron stands out as an odd duck in that field)
15:28:11 On the functional testing front, I 100% support a docker-compose approach
15:28:36 Up to 300%! This is truly a blockbuster meeting. Hi portdirect!
15:28:46 Sam and I spun wheels on that for a bit, but it makes sense; apart from anything else, it's fast.
15:28:50 hogepodge: libvirt has a stable API, so you do not have to keep libvirt and python-libvirt in sync
15:29:35 If you try to build python-libvirt x against libvirt x+1, the build will fall over.
15:30:06 http://obs.linaro.org/ERP:/18.06/Debian_9/ - libvirt 4.3, libvirt-python 4.0
15:30:13 kolla images work fine
15:30:28 kolla source or kolla packages?
15:30:37 source
15:31:11 it was libvirt that was breaking for me on my centos build
15:31:23 Debian libvirt maintainers do not keep both in sync
15:31:29 I use "breaking" lightly, because a bump in upper constraints would fix it
15:31:56 In kolla, for source, does it not still deploy the distro package for python-libvirt though?
15:32:21 I'm pretty sure it used to
15:32:25 I don't know how kolla handles python dependencies.
15:32:28 portdirect: you probably have too?
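The mismatch described above can be sketched in a few lines. This is not loci code: the constraint lines, variable names, and versions are illustrative only (the numbers echo the libvirt 4.3 / libvirt-python 4.0 example from the log).

```python
# Sketch of the version mismatch discussed above: the distro ships libvirt
# at one version, while upper-constraints pins python-libvirt (whose source
# build links against libvirt's C headers) to an older one, so the build
# breaks. The constraint text below is illustrative, not a real snapshot.
SAMPLE_CONSTRAINTS = """\
libvirt-python===4.0.0
oslo.config===5.2.0
"""

def pinned_version(package, constraints_text):
    """Return the ===-pinned version for a package, or None if absent."""
    for line in constraints_text.splitlines():
        name, sep, version = line.partition("===")
        if sep and name.strip().lower() == package:
            return version.strip()
    return None

installed_libvirt = "4.3.0"  # what the distro package manager delivered
pin = pinned_version("libvirt-python", SAMPLE_CONSTRAINTS)
if pin != installed_libvirt:
    print(f"constraint pin {pin} != installed libvirt {installed_libvirt}")
```

The one-line fix mentioned at 15:19:25 amounts to bumping that pin in openstack/requirements once there is evidence the bump does not break other distros.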
15:32:51 or maybe there is some kind of libvirt-libs or something
15:33:01 god, the package names are something I hate
15:33:01 portdirect: python-libvirt is from a binary package
15:33:21 hrw: which is why you won't hit this issue in kolla
15:33:35 distro packages will be self-consistent
15:33:51 anyway, for individual issues, can we talk about that later? :p I think we get the gist of the idea :)
15:34:04 Yes, we can move on I think.
15:34:18 portdirect: and those are binary packages I built. python-libvirt 4.0 against libvirt 4.3
15:34:28 move on
15:34:28 #topic testing
15:35:16 I wanted us to think about how we could rework our testing to be a bit more robust. It's similar to the last topic, so maybe it doesn't need much discussion, but if anyone has ideas I'd love to hear them.
15:35:34 Right now our testing is that we build a bunch of projects for a bunch of distros.
15:35:39 on the previous topic: good luck hogepodge.
15:35:41 There's no functional or integration testing.
15:36:03 evrardjp: thanks, it's actually almost to the stage of booting VMs; once I'm there I can iterate with tempest
15:36:13 great :)
15:36:24 hogepodge: so for the testing I thought of two things
15:37:45 have something like goss, or have real functional testing
15:38:16 Goss?
15:38:18 with your work from the previous topic, we could maybe have minimal functional testing?
15:38:32 like serverspec
15:38:35 portdirect: ^
15:38:42 https://github.com/aelsabbahy/goss
15:39:59 I'd be a proponent of real functional testing off the bat
15:40:12 ok
15:40:18 sounds better and less work
15:40:19 As checking that APIs are running only scratches the surface
15:40:22 Yup
15:40:53 afk
15:41:17 We could do a simple run of project-specific tests on each build
15:41:57 someone started work on this in the keystone tree, but I don't think it's gone anywhere
15:42:05 That as a starting point, and then evolving toward something like tempest smoke, would be awesome
15:42:12 That was gagehugo
15:42:32 I think I could twist his arm into giving it new life
15:42:41 cool :-)
15:43:22 use that as a starting point? I want to be careful about work I assign to myself so I don't over-promise and under-deliver.
15:43:45 Maybe, though it was osh-based
15:43:54 Being selfish, that's great for us
15:44:01 But may be a bit heavy
15:44:29 The flip side being, I think there would be some traction from osh people, who would be able to help
15:45:06 Whereas another approach may be a bit hard to get outside interest in for things other than gating
15:45:21 if a job exists somewhere that uses loci containers (regardless of deployment tooling), we can trigger it on loci patches
15:45:46 Yup
15:46:06 then if something simpler comes along we can switch to that; leveraging existing and cross-project work is always better, especially if it means more hands on deck (to use airship terminology)
15:46:27 The idea we punted around with Sam ages ago was some very lightweight testing in tree, and then cross-gating with osh/openstack projects
15:47:25 portdirect: I'll let you pursue that, and we can check back in. Sound good?
15:47:52 Which leaves the door open for non-osh deployment projects being 1st class citizens
15:48:01 Sure, I'll follow this up
15:48:08 great, thanks!
15:48:22 evrardjp: hrw: any other actions or input?
15:49:36 Moving along then.
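The goss/serverspec-style checks mentioned above are essentially liveness probes, and a rough Python equivalent is sketched below to show why they "only scratch the surface" compared with real functional testing. The endpoints and ports are assumed upstream defaults, not values defined by loci.

```python
# Minimal goss-style liveness check, sketched in Python: confirm each
# service API answers at all before running heavier functional tests such
# as tempest smoke. Endpoints/ports are illustrative upstream defaults.
import urllib.error
import urllib.request

ENDPOINTS = {
    "keystone": "http://127.0.0.1:5000/v3",
    "glance": "http://127.0.0.1:9292/",
}

def is_alive(url, timeout=5):
    """Return True if the URL answers with any non-5xx HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        # 4xx still proves the service is up and answering.
        return err.code < 500
    except OSError:
        # Connection refused, DNS failure, timeout: service not reachable.
        return False

def report(endpoints):
    """Map each service name to whether its endpoint responded."""
    return {name: is_alive(url) for name, url in endpoints.items()}
```

A passing probe only means the API process is up; it says nothing about whether, say, nova can actually boot a VM, which is the gap tempest smoke would cover.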
15:49:43 #topic cycle development
15:49:52 We can start with some open reviews
15:50:18 #link https://review.openstack.org/#/c/624396/ Remove python-qpid-proton 0.14.0 build
15:50:38 We can start with this one, which has an open request for meeting time.
15:50:52 evrardjp: portdirect: I think you both had this action
15:52:10 The context is coming up with a pattern to remove stale requirements from upper-constraints
15:52:22 (which, hey, might solve my other problem from earlier)
15:53:14 We could add yet another environment variable that iterates over a set of packages and removes them at build time, which seems like a maintainable solution.
15:55:24 Feedback on that? I'll leave a comment to that effect in the review, and maybe we can iterate one more patch to implement it.
15:57:34 Ok, hearing nothing more, I think we can close the meeting and revisit more cycle priority topics next week. With testing, I think we have a lot on our plate already.
15:57:37 Thanks everyone!
15:57:47 Last comments? I'll leave the floor open for three more minutes.
15:59:30 Thanks everybody!
15:59:37 #endmeeting
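The environment-variable idea floated at 15:53:14 could look roughly like the sketch below. The variable name LOCI_PURGE_CONSTRAINTS is hypothetical (no interface was agreed in the meeting), and the sample constraint lines are illustrative, echoing the python-qpid-proton example from review 624396.

```python
# Sketch of removing stale packages from upper-constraints at build time,
# driven by an environment variable. LOCI_PURGE_CONSTRAINTS is a
# hypothetical name, not an agreed interface; sample lines are illustrative.
import os
import re

def strip_constraints(constraints_text, packages_to_remove):
    """Drop any constraint line whose package name is in packages_to_remove."""
    removed = {p.lower() for p in packages_to_remove}
    kept = []
    for line in constraints_text.splitlines():
        # A constraint line starts with the package name, terminated by a
        # version specifier, environment marker, or extras bracket.
        name = re.split(r"[=<>!;\[ ]", line, maxsplit=1)[0].strip().lower()
        if name not in removed:
            kept.append(line)
    return "\n".join(kept)

if __name__ == "__main__":
    to_remove = os.environ.get("LOCI_PURGE_CONSTRAINTS", "").split()
    sample = "python-qpid-proton===0.14.0\namqp===2.2.2"
    print(strip_constraints(sample, to_remove))
```

Filtering the file before pip reads it keeps the change out of openstack/requirements itself, which matches the "pattern to remove stale requirements" framing in the review discussion.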