15:04:10 <number80> #startmeeting RDO meeting - 2017-01-11
15:04:10 <zodbot> Meeting started Wed Jan 11 15:04:10 2017 UTC.  The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:04:11 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:04:11 <zodbot> The meeting name has been set to 'rdo_meeting_-_2017-01-11'
15:04:15 <number80> #topic roll call
15:04:18 <leifmadsen> o/
15:04:18 <rbowen> Thanks, number80
15:04:20 <rbowen> o/
15:04:23 <jschlueter> o/
15:04:24 <jpena> o/
15:04:27 <number80> agenda is here: https://etherpad.openstack.org/p/RDO-Meeting
15:04:29 <dmsimard> \o
15:04:30 <jruzicka> o/
15:04:37 <number80> #chair leifmadsen rbowen jschlueter jpena dmsimard jruzicka
15:04:38 <zodbot> Current chairs: dmsimard jpena jruzicka jschlueter leifmadsen number80 rbowen
15:04:41 <number80> #chair apevec
15:04:41 <zodbot> Current chairs: apevec dmsimard jpena jruzicka jschlueter leifmadsen number80 rbowen
15:05:04 <number80> please update the agenda if you have any items that are not there
15:05:19 <apevec> I forgot I have a conflict, can someone please take over chairing today?
15:05:23 <chandankumar> dmsimard: ah he updated the spec i need to recheck
15:05:26 <chandankumar> \o/
15:05:29 <number80> apevec: taking it
15:05:32 <number80> #chair chandankumar
15:05:33 <zodbot> Current chairs: apevec chandankumar dmsimard jpena jruzicka jschlueter leifmadsen number80 rbowen
15:05:33 <apevec> thanks
15:05:47 <number80> let's move on, then
15:06:00 <number80> #topic DLRN instance running out of space
15:06:06 <jpena> that's mine
15:06:06 <number80> jpena?
15:06:26 <dougbtv> o/
15:06:27 <trown> o/
15:06:49 <jpena> we are having space issues every few days with the DLRN instances. We keep adding patches, but today most of the allocated space is used by the centos-master and centos-newton workers, which are hard to stop to run a purge
15:07:07 <jpena> I'd like to have a more permanent fix, which requires two things:
15:07:31 <flepied1> o/
15:07:35 <jpena> a) Stop the workers for some time (maybe a weekend?) and run a long purge to keep just the latest 60 days of commits
15:08:04 <jpena> b) Add a cron job to purge old commits every day, so we avoid a long stop and keep storage use (mostly) consistent
15:08:23 <jpena> for b) I have a puppet-dlrn review going on, but I'd need agreement on a)
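(A minimal sketch of the daily purge in b), assuming dlrn-purge accepts an --older-than flag in days; the cron.d path and worker config path are illustrative, and the actual puppet-dlrn review may differ:)

    # /etc/cron.d/dlrn-purge -- illustrative; flags and paths are assumptions
    # Every night, drop commits older than 60 days from the centos-master worker.
    0 3 * * * centos-master dlrn-purge --config-file /home/centos-master/dlrn/projects.ini --older-than 60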
15:08:31 <rbowen> Is that really a permanent fix? Or does that just put off the crisis a little longer?
15:08:51 <dmsimard> rbowen: a cron would ensure that we keep no more than x days' worth of packages, which is sane
15:08:57 <jpena> rbowen: it's quite permanent. We are keeping 4 months of commits today
15:08:59 <number80> with 60 days of commits, we can provide a storage size estimate, so yes
15:09:10 <dmsimard> we can modify x as need be, make it more or less
15:09:12 <rbowen> ok, cool. Thanks for that clarification.
15:09:13 <number80> jpena: ack for a) and b)
15:09:23 <number80> let's vote then
15:09:28 <leifmadsen> basically sounds like you'd have to double the number of packages available to run into the issue again?
15:09:46 <dmsimard> jpena: I'm okay for a and b, I sort of forgot why it wasn't that way in the first place -- lack of confidence ?
15:09:49 <jpena> leifmadsen: yes. That, or double the number of active workers
15:09:49 * number80 suggests that people put their names in the etherpad color box
15:10:01 <number80> #chair flepied1 trown dougbtv
15:10:02 <zodbot> Current chairs: apevec chandankumar dmsimard dougbtv flepied1 jpena jruzicka jschlueter leifmadsen number80 rbowen trown
15:10:04 <jpena> dmsimard: yes, we were not too confident, but now the code is quite well tested
15:10:10 <dmsimard> okay
15:10:25 <leifmadsen> +1
15:10:26 <dmsimard> also, it's worth mentioning that there will be a much bigger overlap between the release of ocata, the start of pike, and the EOL of mitaka
15:10:31 <dmsimard> due to the ocata short cycle
15:10:53 <jschlueter> +1 for getting to b in the long run
15:10:54 <dmsimard> this will worsen the storage issue -- perhaps the RDO cloud will come to finally save us all by then but who knows
15:11:07 <number80> for stable releases, we can even keep fewer commits if we need to save some space
15:11:20 <leifmadsen> should we just account for that now?
15:11:27 <leifmadsen> if we're going to do a long down time anyways
15:11:30 <jpena> number80: not really needed. If we just keep x days of commits, stable branches tend to have fewer commits over time
15:11:34 <leifmadsen> stable == 30, unstable == 60?
15:11:41 <number80> ack
15:11:46 <leifmadsen> works for me
15:11:48 <dmsimard> what jpena said, there are fewer commits on stable releases
15:11:54 <jpena> e.g. mitaka is only using 5 GB today
15:11:59 <jpena> vs 80+
15:12:07 <rbowen> Awesome.
15:12:09 <dmsimard> so keeping 60 days of mitaka and 60 days of master is totally different
15:12:15 <leifmadsen> makes sense
15:12:25 <leifmadsen> well, +1 from me :)
15:12:28 <leifmadsen> sounds like the right approach
15:12:32 <radez> jpena: was the only thing that fedora-review reported about the python-pulp package the license?
15:12:35 <leifmadsen> I think even the docs mention 60 days somewhere
15:12:58 <jschlueter> jpena: what's the current backlog ... how many days out are we currently?
15:13:10 <jpena> jschlueter: around 120 days
15:13:36 <radez> apevec:  was the only thing that fedora-review reported about the python-pulp package the license?
15:13:59 <jschlueter> jpena: so we need 60 days of pruning, 2 days each, to get caught up with lower blockage ... is that right?
15:14:43 <jpena> jschlueter: or one long purge where we catch up, although that means stopping the workers during a weekend
15:15:19 <dmsimard> jpena: so the thing about stopping the workers for the weekend I worry about
15:15:22 <jschlueter> jpena: with 60 days to catch up it might make sense to do a weekend stoppage in my mind ... so +1
15:15:24 <number80> radez: I'll update you on bugzilla (Alan's on another meeting and we have one running right now)
15:15:27 <dmsimard> is catching up to the builds we missed
15:15:34 <dmsimard> although there may be fewer commits during the weekend
15:15:50 <jpena> dmsimard: we can run with --head-only right after the purge
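(A hedged sketch of that weekend sequence; the service names are hypothetical, and --head-only makes the worker build only each project's latest commit instead of replaying everything missed during the stop:)

    # Stop the worker, run the one-off long purge, catch up, restart.
    systemctl stop dlrn-centos-master             # hypothetical unit name
    dlrn-purge --config-file projects.ini --older-than 60
    dlrn --config-file projects.ini --head-only   # build only each project's HEAD
    systemctl start dlrn-centos-master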
15:16:01 <jschlueter> ... and we are coming up on critical points right now?
15:16:10 <dmsimard> true
15:16:46 <jpena> jschlueter: yes, we keep getting alarms of storage going over 85%
15:17:35 <jschlueter> c) run a purge to get us below alarms threshold, put in place cron job to purge X per day up to the point we get down to 60 days left?
15:17:58 <jschlueter> that might be what is already on the table, but I'm trying to understand
15:19:05 <number80> a purge also implies a slowdown, that's why jpena suggested a) a big purge on the weekend and b) smaller daily purges
15:19:15 <number80> (if I understood correctly)
15:20:18 <jpena> c) is doable, although we'd have to adjust the command-line for the purge commands every day
15:20:22 <myoung> o/
15:20:29 <jpena> we specify how many days to keep
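(For illustration, c) could be approximated by a wrapper that tightens retention by one day per run until the 60-day target is reached; the state file, paths, and flags below are all hypothetical:)

    #!/bin/bash
    # Hypothetical ramp-down from the current ~120-day backlog to 60 days.
    STATE=/var/lib/dlrn/purge-days                 # hypothetical state file
    DAYS=$(cat "$STATE" 2>/dev/null || echo 120)   # start from 120 days kept
    dlrn-purge --config-file projects.ini --older-than "$DAYS"
    [ "$DAYS" -gt 60 ] && echo $((DAYS - 1)) > "$STATE"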
15:21:02 * trown thinks we should go for the full purge now, and set up cron
15:21:11 <trown> now being on the weekend of course :)
15:21:31 <dmsimard> dlrn-purge --days $(random)
15:21:49 <dmsimard> yes, we definitely can't stop the dlrn workers on weekdays for any extended period of time
15:22:01 * number80 says that we've reached 1/3 of the meeting, so it's now time to decide
15:22:36 <number80> I'd say let's stick to initial proposal and then optimize if needed
15:22:59 * jschlueter votes for getting us under alarms, and keeping it, but not disrupting the current cycle too much
15:23:14 <jschlueter> +1 original proposal
15:23:39 * jschlueter goes back to being quiet
15:23:41 <number80> yes, the plan is not to put DLRN instances down for too long
15:23:47 <dmsimard> +1 original proposal
15:23:48 <number80> jschlueter: you're discouraged from doing that!
15:24:03 <jschlueter> :-)
15:24:52 <number80> ok
15:25:31 <number80> #agreed jpena's proposal of purging DLRN instances using cron tasks is approved
15:25:58 <jpena> ok, I'll send an email to rdo-list to communicate
15:26:22 <number80> ack, maybe copying openstack-dev since it may impact some upstream CI
15:26:30 <dmsimard> not really
15:26:40 <dmsimard> the repos won't go down
15:26:48 <number80> dmsimard: are they all using mirrors now?
15:26:51 <dmsimard> there just won't be any new repos for some duration
15:26:57 <dmsimard> trunk.rdoproject.org availability is not affected
15:26:58 <jpena> but no new packages, that might affect tripleo for example
15:27:21 <jpena> anyway, it's good to inform
15:27:45 <number80> well, since it's temporary (reasonably temporary), it's fine to proceed anyway (well, IMHO)
15:27:45 <trown> ya, if it happened during the week it would affect tripleo
15:28:20 <number80> let's move to the next topic and let jpena do as he sees fit :)
15:28:35 <trown> thanks jpena
15:28:41 <number80> #topic Add boost159 in CBS
15:29:10 <number80> Ok, review https://bugzilla.redhat.com/show_bug.cgi?id=1391444 has been stuck for a while, and it's blocking facter3 and mongodb3 update
15:29:10 <openstack> bugzilla.redhat.com bug 1391444 in Package Review "Review Request: boost159 - The free peer-reviewed portable C++ source libraries" [Medium,New] - Assigned to apevec
15:29:41 <number80> proposal: add boost159 to the -candidate tag in CBS as a pre-review build
15:29:55 <jpena> number80: did you check my comment (#9)?
15:30:00 <number80> It won't be in any shippable repo until the review is done
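(For readers unfamiliar with CBS: it is a Koji instance, so a pre-review build would look roughly like the commands below; the tag, target, and owner names are illustrative assumptions, not taken from the review:)

    # Illustrative cbs (Koji client) commands -- names are assumptions
    cbs add-pkg --owner=hguemar cloud7-openstack-common-candidate boost159
    cbs build cloud7-openstack-common-el7 boost159-1.59.0-1.el7.src.rpm
    # Builds tagged only into -candidate are not shipped; they reach mirrors
    # once tagged into -testing/-release after the review passes.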
15:30:53 <number80> jpena: ah yes, I need to fix it (Xmas vacations made me forget that, or the booze)
15:31:48 <rdogerrit> Dan Radez created rdoinfo: Adding Congress  https://review.rdoproject.org/r/4270
15:33:54 <number80> anyone ok with a preview CBS build (or against it)?
15:34:14 <leifmadsen> abstain
15:34:19 <number80> facter3 should be in RDO ocata, and we're still blocked
15:34:47 <jpena> I'm fine with it. In my tests it didn't conflict with the existing packages
15:34:51 <number80> (the bjam issue is not blocking us since we're not using it but will fix it as it may impact later packages)
15:37:04 <number80> this was also discussed with Alan earlier.
15:37:15 <number80> I guess we can move forward
15:37:37 <number80> #agreed add preview build of boost159 in CBS under -candidate tag
15:38:03 <number80> #topic Enable automated build on mitaka-rdo
15:38:20 <number80> quick info point, I'll enable automated build on mitaka-rdo
15:38:37 <number80> #info automated builds will be enabled for mitaka-rdo branch this week
15:38:50 <number80> so that more people can help maintain mitaka builds
15:39:02 <trown> cool
15:39:20 <number80> so far, no big issue with newton automation, except that I have to renew certificates every 6 months :)
15:39:56 <number80> (which I did last week)
15:40:18 <number80> next
15:40:30 <number80> #topic alternate arch status
15:41:10 <number80> we need to update nodejs and phantomjs as they don't build on aarch64, I had some connectivity issues that prevented me from working on this
15:41:40 <number80> xmengd also told me that he's working with CentOS core team to get ppc64le builders operational
15:41:43 <number80> EOF
15:41:58 <number80> #topic OVS 2.6 pending update
15:42:03 <number80> ok, big chunk here
15:42:43 <number80> dmsimard: any update from CI about OVS 2.6?
15:43:05 <dmsimard> packstack and p-o-i jobs that run 2.6 from scratch work fine
15:43:16 <dmsimard> the big question mark is tripleo and upgrades from newton to ocata
15:43:36 <number80> trown: ^
15:43:45 <dmsimard> there are currently efforts in tripleo upstream to test upgrades from newton to ocata in the gate
15:44:06 <dmsimard> https://review.openstack.org/#/c/404831/
15:44:54 <dmsimard> The concerns here are around time, priority and available resources to get this done well and in time considering everyone already has their hands full with the impending release of ocata
15:44:57 <trown> ya, I have not had time to look at it
15:45:08 <dmsimard> I've raised some flags to reach out for help
15:45:21 <number80> ok, so we need to keep an eye on that
15:45:33 <number80> should we add it to next week meeting to follow progress?
15:45:35 <dmsimard> number80: I think you also told me there was a mariadb update right ?
15:45:51 <number80> dmsimard: yes, but patch is not merged yet
15:46:01 <dmsimard> from what version to what version are we going, do you know?
15:46:05 <number80> 10.1.20
15:46:06 <dmsimard> is it significant ?
15:46:51 <number80> the last one we have is 10.1.18 and it should not be a big deal, but it'll go through pending first
15:46:58 <dmsimard> okay
15:47:05 <dmsimard> also, just making sure
15:47:09 <dmsimard> ovs 2.6.1 is ocata-only, right ?
15:47:16 <number80> I assume so
15:47:25 <dmsimard> ok
15:48:00 <number80> ok, I'll add OVS 2.6 follow-up for next week
15:48:07 <number80> #topic 16 days since last promotion of RDO Master Trunk
15:48:12 <number80> jschlueter?
15:48:58 <dmsimard> We are on the verge of promotion, infrastructure problems have prevented us from promoting
15:49:03 <dmsimard> as is my understanding
15:49:16 <dmsimard> we are currently re-running the same hash in the pipeline to get that done
15:49:19 <dmsimard> weshay: correct? ^
15:50:22 <apevec> yes, "almost there" !
15:50:33 <apevec> https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/1029/ has one last job running
15:50:43 <apevec> on hash from Friday
15:50:56 <number80> ok, that's excellent
15:51:19 <rdogerrit> Lars Kellogg-Stedman created puppet/puppet-collectd-distgit: fix installed directory name + upstream sync  https://review.rdoproject.org/r/4271
15:51:46 <number80> btw, for people who want to follow RDO master status:
15:51:47 <number80> https://dashboards.rdoproject.org/rdo-dev
15:52:08 <number80> Mitaka does not look good btw :)
15:52:34 <dmsimard> I ran it yesterday
15:52:53 <number80> anything I can fix?
15:53:18 * number80 will be giving some time to mitaka
15:53:34 <larsks> I want to make some changes to the puppet-collectd spec file.  I used 'rdopkg clone' to get the spec file, modified it, and used 'rdopkg review-spec' to submit the changes for review.  Was that the correct workflow?
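(For context, the workflow larsks describes maps to roughly these commands; the package name comes from the rdogerrit notice above and the commit message is illustrative:)

    rdopkg clone puppet-collectd       # fetch distgit from review.rdoproject.org
    cd puppet-collectd
    # edit puppet-collectd.spec, then:
    git commit -a -m "fix installed directory name + upstream sync"
    rdopkg review-spec                 # submit the spec change for Gerrit review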
15:53:36 <dmsimard> I think part of the problem is that we're inconsistent in mitaka
15:53:49 <rdogerrit> Tim Rozet created puppet/puppet-systemd-distgit: Initial spec for puppet-systemd  https://review.rdoproject.org/r/4272
15:53:53 <dmsimard> number80: https://trunk.rdoproject.org/centos7-mitaka/status_report.html
15:54:05 <apevec> there is a mitaka ooo issue https://bugs.launchpad.net/tripleo/+bug/1654615
15:54:05 <openstack> Launchpad bug 1654615 in tripleo "Mitaka ha ping test failing to ping vm" [Critical,Triaged] - Assigned to Ben Nemec (bnemec)
15:54:21 <number80> ok
15:54:34 <apevec> looks like there is an issue with seabios from 7.3
15:54:43 <number80> I'll look over the FTBFS
15:54:44 <dmsimard> last consistent in mitaka is jan 4th
15:54:46 <amoralej> fyi, last job for master promotion passed
15:54:47 <apevec> details unclear esp. why it works for > mitaka
15:54:55 <amoralej> only image uploading is pending
15:55:07 <apevec> YES!
15:55:13 <dmsimard> amoralej: god I hope the image upload doesn't fail due to rsync issue
15:55:13 <number80> good
15:55:19 <apevec> weshay, you can do your dance :)
15:55:50 <number80> seems that we're still on the green side for this cycle
15:55:50 <apevec> and cross fingers for the image u/l
15:57:10 <rbowen> Awesome.
15:57:16 <jschlueter> number80: sorry, I'm in a second meeting but wanted to bring the topic up and see how we're doing on it...
15:57:34 <number80> jschlueter: we almost have a master promotion, as you may read
15:57:55 <number80> ok, little time left so no open floor
15:58:01 <number80> #topic next week chair
15:58:08 <rdogerrit> Juan Antonio Osorio created openstack/openstack-puppet-modules-distgit: Add puppet-ipaclient  https://review.rdoproject.org/r/4273
15:58:09 <number80> who wants to chair next week?
15:59:07 <jpena> I can do it
15:59:17 <number80> thanks :)
15:59:39 <number80> Thanks to everyone who joined today and see you next week!
15:59:40 <amoralej> promoted \o/ https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/1029/
15:59:46 <number80> #endmeeting