15:00:47 <hogepodge> #startmeeting loci
15:00:48 <openstack> Meeting started Fri Jan 11 15:00:47 2019 UTC and is due to finish in 60 minutes.  The chair is hogepodge. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:51 <openstack> The meeting name has been set to 'loci'
15:01:35 <hogepodge> #link https://etherpad.openstack.org/p/loci-meeting agenda
15:01:48 <evrardjp> o/
15:02:06 <hogepodge> Hooray! We've increased our attendance by 100%!
15:02:30 <evrardjp> haha
15:02:38 <evrardjp> hope you had nice holidays :)
15:03:15 <evrardjp> do you have a link handy for last meeting(s)' action points? It's too far I can't remember :p
15:03:26 <hogepodge> You too. I got very sick, which was a downside. But on the upside it forced me to take a lot of time to rest, and I enjoyed the time doing nothing.
15:03:40 <evrardjp> hogepodge: that's good :)
15:03:57 <hogepodge> Same link. I don't think we worked on any of the items from that list.
15:04:15 <hogepodge> I have been working on possible AIO deployment to use for gate testing.
15:04:41 <hogepodge> portdirect: you around?
15:04:53 <hogepodge> Anyone else planning on joining?
15:06:01 <hogepodge> ok, let's start off and we can chat about the topics on hand
15:06:02 <evrardjp> I hope there will be more people indeed
15:06:11 <evrardjp> sounds good
15:06:25 <hogepodge> #topic stable release
15:07:00 <hogepodge> Ok, so a big issue I've run into over the last month is building images from stable branches
15:07:00 <hrw> o/
15:07:11 <hogepodge> Hi hrw! 200% increase from last week :-)
15:07:40 <hrw> ;)
15:07:48 <evrardjp> hogepodge: look at you getting big numbers! :p
15:08:11 <hogepodge> evrardjp: I like to look at the optimistic side of things
15:08:29 <evrardjp> hogepodge: so my concern there is that there are many builds failing already; that's how I caught a Newton issue.
15:08:46 <hogepodge> When distributions update packages, they frequently go outside the bounds of requirements repository, and the build will break.
15:08:57 <evrardjp> so if we bring more branches into loci pipelines, we should make sure it's phased in slowly, maybe non-voting
15:09:33 <evrardjp> hogepodge: sorry for the interruption
15:10:04 <hogepodge> I believe that if we want to do branches, we need to run integration tests to guarantee that updates to requirements work for all major distros; then we can propose changes to openstack/requirements that guarantee a fix will work
15:10:41 <hogepodge> Build tests aren't sufficient, because they don't tell you if a requirement update functionally breaks a service
15:11:29 <evrardjp> fair
15:11:38 <evrardjp> but isn't that the scope of a deployment project?
15:12:02 <evrardjp> can we leverage the existing, to prevent re-inventing the wheel?
15:12:04 <hogepodge> I'm working on a simple loci-based installer that we might want to use, but it's pretty basic and might not be robust enough. https://github.com/hogepodge/locistack
15:12:21 <evrardjp> (maybe continuing pbourke work on kolla-ansible? )
15:12:39 <hogepodge> That, or OpenStack Helm could be solutions.
15:14:03 <evrardjp> I'd say that OpenStack-Helm needs k8s and all that jazz, which is a bigger series of moving parts.
15:14:21 <evrardjp> Not that I am pro or against
15:14:52 <hogepodge> Yeah, I'd like to keep things as simple as possible. The solution I'm writing requires docker-compose, which is much smaller
15:15:05 <evrardjp> I am currently working on the image building of OSH, and yeah, I hope we'll do functional testing of loci images
15:15:44 <evrardjp> sorry for my flaky bouncer there
15:16:18 <hogepodge> In general though, if we have any solution that uses loci and does stable branch integrated builds for multiple distros we can use it as a signal to requirements to do joint-gating and bump as needed
15:16:32 <evrardjp> I like the idea of functional testing of loci images, not really linked to the issue of "out of requirements bounds"
15:16:34 <hogepodge> no problem :-)
15:17:18 <evrardjp> hogepodge: I am curious if that happens a lot. Because OSA has been using upper constraints in venvs for ages, and we never really had an issue with any distro
15:17:49 <evrardjp> so I am really deeply curious about the root cause
15:17:58 <hogepodge> evrardjp: it's happened twice to me, once with Centos and once with Leap. Both were eventually corrected.
15:18:44 <hogepodge> The root cause is that distros update packages to new versions that exceed the largest allowable version of a dependency.
15:19:09 <hrw> if you want to be sane then venv
15:19:13 <evrardjp> I am probably too young in this project to really judge. I like the idea of functional testing though, as expressed earlier
15:19:25 <hogepodge> So it's typically a one line fix, but the requirements maintainers won't bump a version to fix one distro without evidence it doesn't break other distros
15:20:27 <hogepodge> So it's typically something like libvirt (which isn't pip-built but comes from the distro) being updated, which forces the pip version (which tightly tracks libvirt) to break
15:20:43 <hogepodge> (when I say pip-version I mean python-libvirt)
15:22:13 <evrardjp> let me check if I understand correctly and compare with OSA: in OSA, at some point we build all wheels for a distro, so the minimum pip package version is the minimum taking into consideration the packages/other pip packages; that's why we seem to never break (compared to bare minimum). Here we build the wheels for upper constraints, but then we are pip installing, which could mean a lower version. Is that right?
15:22:48 <evrardjp> hogepodge: ok so for that I think you might have a different solution
15:23:38 <evrardjp> instead of lowering/increasing the minimum base version of a package, you could ensure the building of the wheel takes the appropriate C binding for said version
15:24:01 <evrardjp> (in other words, building where libvirt is installed would produce a different result)
15:24:10 <evrardjp> omg my english is bad
15:24:27 <evrardjp> I will check some things in code
15:24:43 <hogepodge> Two things are happening. On a distro if I install libvirt, I may get version x. Upper constraints may limit python-libvirt to version x-1, which will cause the build to break.
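The mismatch hogepodge describes above can be sketched in a few lines of shell. The version numbers and the `/tmp` constraints file are hypothetical, purely to illustrate how a distro's libvirt can race ahead of the `libvirt-python` pin in upper-constraints:

```shell
# Sketch (hypothetical versions) of the pin mismatch described above:
# the distro ships libvirt 4.3 while upper-constraints still pins the binding lower.
cat > /tmp/upper-constraints.txt <<'EOF'
libvirt-python===4.0.0
EOF
distro_libvirt="4.3.0"   # version the distro's libvirt package provides
pinned=$(grep '^libvirt-python' /tmp/upper-constraints.txt | cut -d'=' -f4)
echo "distro libvirt ${distro_libvirt} vs pinned binding ${pinned}"
# "pip install -c upper-constraints.txt libvirt-python" would then try to
# build the 4.0.0 binding against 4.3 headers, which is where the build breaks.
```

Bumping the single pinned line in openstack/requirements is the "one line fix" mentioned earlier, but it needs evidence it doesn't regress other distros.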
15:24:54 <evrardjp> in the meantime, I think it's great you're working on a functional testing :)
15:25:48 <hogepodge> I need an environment I can stand up quickly and repeatedly. :-) The functional testing will be a nice side effect. I also like to do little manual installs to check on the progress and health of individual projects and how they're evolving.
15:26:21 <hogepodge> It's time consuming though and I hope it's the last time I'll take on such a task.
15:27:05 <evrardjp> yup... deploy projects are like that :)
15:28:08 <hogepodge> I have all sorts of feedback to individual projects, which I'm not sure if it would be welcomed or not (for example, all api projects should have consistent ways to launch as a wsgi app, currently Neutron stands out as an odd duck in that field)
15:28:11 <portdirect> On the functional testing front, I 100% support a docker-compose approach
15:28:36 <hogepodge> Up to 300%! This is truly a blockbuster meeting. Hi portdirect!
15:28:46 <portdirect> Sam and I spun wheels on that for a bit, but it makes sense; apart from anything else, it's fast.
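A docker-compose harness of the kind portdirect endorses could look roughly like the fragment below. The image tags, service names, and environment values are illustrative assumptions, not actual locistack configuration:

```yaml
# Hypothetical compose file for smoke-testing a loci-built keystone image.
version: "3"
services:
  mariadb:
    image: mariadb:10.2
    environment:
      MYSQL_ROOT_PASSWORD: secret   # throwaway credential for gate use only
  keystone:
    # assumed loci image tag; real tags depend on the registry and build job
    image: loci/keystone:master-ubuntu
    command: keystone-wsgi-public
    ports:
      - "5000:5000"
    depends_on:
      - mariadb
```

The appeal over a k8s-based harness is exactly what is said above: far fewer moving parts, so gate runs start fast.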
15:28:50 <hrw> hogepodge: libvirt has stable api so you do not have to keep libvirt and python-libvirt in sync
15:29:35 <hogepodge> If you try to build python-libvirt x against libvirt x+1 the build will fall over.
15:30:06 <hrw> http://obs.linaro.org/ERP:/18.06/Debian_9/ - libvirt 4.3 libvirt-python 4.0
15:30:13 <hrw> kolla images work fine
15:30:28 <hogepodge> kolla source or kolla packages?
15:30:37 <hrw> source
15:31:11 <hogepodge> it was libvirt that was breaking for me on my centos build
15:31:23 <hrw> Debian libvirt maintainers do not keep both in sync
15:31:29 <hogepodge> I use breaking lightly, because a bump in upper constraints would fix it
15:31:56 <portdirect> In kolla, for source does it not still deploy the distro package for python-libvirt through?
15:32:21 <portdirect> I'm pretty sure it used to
15:32:25 <hogepodge> I don't know how kolla handles python dependencies.
15:32:28 <evrardjp> portdirect: you probably have to?
15:32:51 <evrardjp> or maybe there is some kind of libvirt-libs or something
15:33:01 <evrardjp> god, the package names are something I hate
15:33:01 <hrw> portdirect: python-libvirt is from binary package
15:33:21 <portdirect> hrw: which is why you won't hit this issue in kolla
15:33:35 <hogepodge> distro packages will be self-consistent
15:33:51 <evrardjp> anyway, for individual issues we can talk about that later? :p I think we get the gist of the idea :)
15:34:04 <hogepodge> Yes, we can move on I think.
15:34:18 <hrw> portdirect: and those are binary packages I built. python-libvirt 4.0 against libvirt 4.3
15:34:28 <hrw> move on
15:34:28 <hogepodge> #topic testing
15:35:16 <hogepodge> I wanted us to think about how we could rework our testing to be a bit more robust. It's similar to the last topic so maybe doesn't need much discussion, but if anyone has ideas I'd love to hear them.
15:35:34 <hogepodge> Right now our testing is that we build a bunch of projects for a bunch of distros.
15:35:39 <evrardjp> on the previous topic: good luck hogepodge.
15:35:41 <hogepodge> There's no functional or integration testing.
15:36:03 <hogepodge> evrardjp: thanks, it's actually almost to the stage of booting VMs, once I'm there I can iterate with tempest
15:36:13 <evrardjp> great :)
15:36:24 <evrardjp> hogepodge: so for the testing I thought of two things
15:37:45 <evrardjp> have something like goss, or have real functional testing
15:38:16 <portdirect> Goss?
15:38:18 <evrardjp> with your work of the precedent topic, we could maybe have minimum functional testing?
15:38:32 <evrardjp> like serverspec
15:38:35 <evrardjp> portdirect: ^
15:38:42 <evrardjp> https://github.com/aelsabbahy/goss
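For context, a goss spec is a small YAML file of liveness assertions run against a built image. The services and port below are illustrative, not an actual loci test spec:

```yaml
# Example goss.yaml expressing the "are the APIs even up" checks
# evrardjp alludes to; values are assumptions for illustration.
port:
  tcp:5000:
    listening: true
process:
  apache2:
    running: true
file:
  /etc/keystone/keystone.conf:
    exists: true
```

As noted just below, checks like these only verify that processes and ports exist, which is why the consensus lands on real functional testing instead.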
15:39:59 <portdirect> I'd be a proponent of real functional testing off the bat
15:40:12 <evrardjp> ok
15:40:18 <evrardjp> sounds better and less work
15:40:19 <portdirect> As checking that APIs are running only scratches the surface
15:40:22 <portdirect> Yup
15:40:53 <evrardjp> afk
15:41:17 <hogepodge> We could do a simple run of project specific tests on each build
15:41:57 <hogepodge> someone started work on this in the keystone tree, but I don't think it's gone anywhere
15:42:05 <portdirect> That as a starting point, and then evolving toward something like tempest smoke tests, would be awesome
15:42:12 <portdirect> That was gagehugo
15:42:32 <portdirect> I think I could twist his arm into giving it new life
15:42:41 <hogepodge> cool :-)
15:43:22 <hogepodge> use that as a starting point? I want to be careful about work I assign for myself so I don't over-promise and under-deliver.
15:43:45 <portdirect> Maybe, though it was osh based
15:43:54 <portdirect> Being selfish, that's great for us
15:44:01 <portdirect> But may be a bit heavy
15:44:29 <portdirect> The flip side being, I think there would be some traction from osh people, who would be able to help
15:45:06 <portdirect> Whereas another approach may be a bit hard to get outside interest in for things other than gating
15:45:21 <hogepodge> if a job exists somewhere that uses loci containers (regardless of deployment tooling) we can trigger it on loci patches
15:45:46 <portdirect> Yup
15:46:06 <hogepodge> then if something simpler comes along we can switch to that; leveraging existing and cross-project work is always better, especially if it means more hands on deck (to use airship terminology)
15:46:27 <portdirect> The idea we punted around with Sam ages ago, was some very lightweight testing in tree, and then cross gating with osh/openstack-projects
15:47:25 <hogepodge> portdirect: I'll let you pursue that, and we can check back in. Sound good?
15:47:52 <portdirect> Which leaves the door open for non osh deployment projects being 1st class citizens
15:48:01 <portdirect> Sure, I'll follow this up
15:48:08 <hogepodge> great, thanks!
15:48:22 <hogepodge> evrardjp: hrw: any other actions or input?
15:49:36 <hogepodge> Moving along then.
15:49:43 <hogepodge> #topic cycle development
15:49:52 <hogepodge> We can start with some open reviews
15:50:18 <hogepodge> #link https://review.openstack.org/#/c/624396/ Remove python-qpid-proton 0.14.0 build
15:50:38 <hogepodge> We can start with this one, which has an open request for meeting time.
15:50:52 <hogepodge> evrardjp: portdirect: I think you both had this action
15:52:10 <hogepodge> The context is coming up with a pattern to remove stale requirements from upper-constraints
15:52:22 <hogepodge> (which hey, might solve my other problem from earlier)
15:53:14 <hogepodge> We could add yet another environment variable that iterates over a set of packages and removes them at build time, which seems like a maintainable solution.
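The build-time filter hogepodge proposes could be sketched as below. `PIP_PACKAGES_REMOVE` is a hypothetical variable name, and the constraints file contents are illustrative; loci has no such option today:

```shell
# Sketch of an env var that strips stale pins from upper-constraints at
# build time. Variable name and pins are assumptions, not real loci config.
cat > /tmp/upper-constraints.txt <<'EOF'
python-qpid-proton===0.14.0
oslo.messaging===5.35.0
EOF
PIP_PACKAGES_REMOVE="python-qpid-proton"
for pkg in ${PIP_PACKAGES_REMOVE}; do
    # drop the stale pin before wheels are built
    sed -i "/^${pkg}===/d" /tmp/upper-constraints.txt
done
cat /tmp/upper-constraints.txt
```

Iterating over a space-separated list keeps the mechanism maintainable: adding another stale package is a one-word change to the variable rather than a new patch to the build logic.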
15:55:24 <hogepodge> Feedback on that? I'll leave a comment to that effect in the review, and maybe we can iterate one more patch to implement it.
15:57:34 <hogepodge> Ok, hearing nothing more, I think we can close the meeting and revisit more cycle priority topics next week. With testing I think we have a lot on our plate already.
15:57:37 <hogepodge> Thanks everyone!
15:57:47 <hogepodge> Last comments? I'll leave the floor open for three more minutes.
15:59:30 <hogepodge> Thanks everybody!
15:59:37 <hogepodge> #endmeeting