Wednesday, 2017-12-13

clarkbit was an early start for me today again and I think I've begun to run out of steam. See everyone tomorrow. I'm hoping to maybe start doing elasticsearch nodes00:04
pabelangerclarkb: ack00:06
fungilooks like puppet is okay parsing 527532 so if that merges i'll give subunit-worker01 another try during tc office hours here in a bit00:12
pabelangerfungi: ianw: mind a review on https://review.openstack.org/527447/00:16
pabelangermight try one more launch this evening00:16
fungisure thing00:17
jeblairyay the paste change landed just in time for me to eod00:52
pabelangerstill having issues launching eavesdrop01.o.o, running with --keep now and debug in the morning01:45
*** baoli has joined #openstack-sprint01:48
*** baoli_ has joined #openstack-sprint01:49
*** baoli has quit IRC01:53
*** baoli_ has quit IRC03:53
*** baoli has joined #openstack-sprint03:54
*** baoli has quit IRC04:02
*** skramaja has joined #openstack-sprint05:14
AJaegerclarkb, jeblair, regarding 526946, see changes 1 and 3 of that series; they do what you say and that's the documented way to retire a repo06:10
fricklerianw: for ethercalc I removed the npm_package_ensure because that package didn't exist in the 6.x repo, seems the command is provided by the nodejs package instead (re: https://review.openstack.org/#/c/527302/3/manifests/frontend.pp)08:35
ianwfrickler: oh, ok, so it's all bundled together?09:48
ianwdo you think that's the problem in https://review.openstack.org/527302 ?  i'm going to have to dig into that tomorrow, i'm not sure what's going on09:50
ianwi think that's basically holding up the ethercalc & status.o.o (that uses nodejs too) transitions09:50
fricklerethercalc was working fine for me with the latest version, I think we only need to look into the data migration there10:04
fricklerianw: I'll propose a version of 527302 without it and see how things go10:06
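For context, if npm really is bundled into the NodeSource nodejs package, the puppet side presumably reduces to something like the following sketch against the puppet-nodejs module interface (parameter choices here are assumptions, not the actual 527302 change):

```puppet
# Hedged sketch: with the NodeSource 6.x repository the npm command ships
# inside the nodejs package itself, so no separate npm package resource
# (and no npm_package_ensure override) should be needed.
class { '::nodejs':
  repo_url_suffix => '6.x',
}
```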
*** jkilpatr has quit IRC11:23
*** jkilpatr has joined #openstack-sprint11:54
*** skramaja_ has joined #openstack-sprint12:18
*** skramaja has quit IRC12:18
*** skramaja_ has quit IRC12:23
*** skramaja has joined #openstack-sprint12:23
*** skramaja has quit IRC13:08
fungii want to say i ran into that rebuilding another server and put in a patch to deal with it... looking for it now14:12
pabelangermorning! I'll be diving into eavesdrop01.o.o this morning again14:14
pabelangerah, think I see it.14:18
pabelangerupstart script14:18
*** baoli has joined #openstack-sprint14:26
*** openstackstatus has quit IRC14:36
*** openstack has quit IRC14:39
*** openstack has joined #openstack-sprint14:41
*** ChanServ sets mode: +o openstack14:41
fungifrickler: ianw: found it. i attempted to work around it in puppet-openstack_health with https://review.openstack.org/508564 (and if memory serves it solved the problem)14:42
*** openstack has quit IRC14:43
*** openstack has joined #openstack-sprint14:46
*** ChanServ sets mode: +o openstack14:46
fricklerfungi: yes, that description matches what I saw when testing ethercalc yesterday. updated https://review.openstack.org/527302 now accordingly14:49
fungiahh, i guess it's puppet-openstack_health you're working on, so maybe my patch wasn't a total fix14:50
fricklerfungi: seems a bit more needs to be done for xenial, yes. but ianw was implicitly reverting your patch and that made things worse14:51
fricklerclarkb: if you don't have other plans, I'd like to start doing the elasticsearch nodes with you, of course others can join if they like, but I'm assuming that it'll be one at a time anyway14:55
pabelangerremote:   https://review.openstack.org/527707 Add support for systemd init scripts15:03
pabelangerwhen people have some time15:03
*** ianychoi has quit IRC15:04
pabelangerokay, I'll be deleting files01.o.o here shortly, unless somebody objects15:16
pabelangerfiles02.o.o has been working for last 12 hours15:16
fricklerpabelanger: the init-type selection looks nicer than what ianw and me have been doing for ethercalc, minor issue though I think in 52770715:17
pabelangerlooking15:17
pabelangerah right systemd-reload15:19
pabelangerpuppet doesn't do that by default15:19
pabelangerlet me see how ethercalc did it15:19
* fungi has his fingers crossed that subunit-worker01 will boot into a fully operational condition this time around15:21
fricklerpabelanger: I just copied what I found in some other module here https://review.openstack.org/#/c/527144/13/manifests/init.pp15:22
pabelangeryah15:23
pabelangerremote:   https://review.openstack.org/527707 Add support for systemd init scripts15:24
pabelangernew patch up15:24
pabelangeralso copypasta the systemd hack15:25
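The "systemd hack" being copied around is, roughly, notifying a refreshonly daemon-reload Exec from the unit-file File resource, since puppet does not reload systemd on its own when a unit file changes. A hedged sketch (resource names and paths are invented for illustration):

```puppet
# Puppet won't run `systemctl daemon-reload` by itself, so chain an Exec
# that only fires when the unit file actually changes.
file { '/etc/systemd/system/meetbot.service':
  ensure => present,
  source => 'puppet:///modules/meetbot/meetbot.service',
  notify => Exec['meetbot-systemd-reload'],
}

exec { 'meetbot-systemd-reload':
  command     => '/bin/systemctl daemon-reload',
  refreshonly => true,
}
```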
fungiyay! subunit-worker01 came up with the daemon working, except that i need to revert the removal patch and restart iptables on logstash.o.o now15:34
fungiRevert "Remove subunit-worker01.openstack.org"  https://review.openstack.org/52772015:43
*** ianychoi has joined #openstack-sprint16:06
jeblairpaste01 seems to work except that i think it's starting too early and the database hostname doesn't resolve.  i'm sticking an "After=network.target" into the unit file to see if that does it.16:07
jeblairyep, that's got it.16:07
pabelangernice16:08
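An ordering tweak like jeblair's can be layered on as a systemd drop-in rather than by editing the unit file directly (the unit and file names below are guesses for illustration):

```ini
# /etc/systemd/system/paste.service.d/after-network.conf
# Drop-ins may *add* dependencies to a unit; this one delays startup
# until basic networking is up so the database hostname resolves.
[Unit]
After=network.target
```

A `systemctl daemon-reload` is needed before restarting the service for the drop-in to take effect.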
clarkbfrickler: ya I dont mind doing that but having a slower start today16:22
clarkbfrickler: not sure if you are still interested16:22
fricklerclarkb: interested yes, but a bit off the tracks currently, too. maybe later or tomorrow I guess16:25
clarkbfrickler: ok we will likely do them in a rolling fashion so should be nodes around tomorrow16:26
pabelangerrunning into town for errands, waiting for 527707 to pass tests16:49
pabelangerreviews would also be helpful16:49
*** jkilpatr has quit IRC17:07
clarkbthinking about the elasticsearch servers and how we might want to do their upgrades. The easy way is just boot a new instance with a new 1TB backing volume and let the cluster sync the data over. That will probably be fairly slow. Another way would be to stop shard allocations, stop es on the host, remove the cinder volume from the host, put the volume on the new host and start es there to use the cinder volume to move the bulk of the data17:15
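"Stop shard allocations" in that second method corresponds to a cluster-settings call roughly like the following; the exact setting name differs across elasticsearch versions (older 1.x clusters used cluster.routing.allocation.disable_allocation), so treat this as a sketch rather than the exact commands used:

```shell
# Tell the cluster not to start re-replicating shards while a node is down.
curl -s -XPUT http://localhost:9200/_cluster/settings -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# ... move the node / cinder volume ...

# Re-enable allocation once the node has rejoined the cluster.
curl -s -XPUT http://localhost:9200/_cluster/settings -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```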
clarkbfor the second method, what is the process of removing a volume like that? I guess remove it from lvm entirely first or is it safe to just have cinder remove it under lvm?17:15
fungithe big concern is keeping in mind which one is acting as the "main" api endpoint for logstash, right?17:16
fungiand i guess if you save it for last (or at least not first) you can adjust configuration for ls to repoint to an upgraded one first17:16
clarkbfungi: yup17:16
fungiimportant to remember that even though it's a six-way cluster, we have a spof where ls is connecting17:17
clarkbI'm leaning towards using the second method myself, it is more complicated but should go much quicker overall17:17
fungifor the second method there, yes you'd want to unmount, deactivate the vg, then cinder detach17:18
fungiunmounting just to make sure it's flushed properly and because lvm would make you use some nasty options to deactivate the vg otherwise17:19
fungideactivating the vg because nova will protest if you ask it to remove a volume it believes is still in use by the instance17:20
clarkbok once I have properly caffeinated I will start writing a process down on an etherpad17:20
fungisometimes it still doesn't figure it out, in which case i end up halting the os on it17:20
clarkbfungi: oh wow17:21
fungiif you need to go even farther, you can delete the instance and then nova will usually get the hint ;)17:21
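The detach sequence fungi describes might look like this on the old host; the volume group name, mountpoint, server name, and exact CLI forms below are all assumptions for illustration:

```shell
# On the old elasticsearch host (names illustrative):
service elasticsearch stop       # nothing should be writing to the volume
umount /opt/elasticsearch        # flush and release the filesystem cleanly
vgchange -an main                # deactivate the vg so nova sees it as unused

# Then, from a host with openstack credentials:
nova volume-detach elasticsearch02 <volume-uuid>
```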
*** jkilpatr has joined #openstack-sprint17:23
jeblairi'll change paste to a cname for paste01 now17:29
jeblairdone17:33
clarkbI got caught in the period of time where dns resolves no A record and no CNAME record for paste.openstack.org17:33
*** jkilpatr has quit IRC17:38
AJaegerclarkb, jeblair, https://review.openstack.org/526946 is ready now to merge - the noop job merged as planned17:40
AJaeger(retirement of puppet-apps_site)17:40
jeblairclarkb: i believe the negative ttl is 5m, so... should be over now?17:45
clarkbjeblair: yup resolves now /me tests17:46
clarkbhttp://paste.openstack.org/show/628884/ works17:46
fungiif https://review.openstack.org/527720 gets approved, then i'll be able to take subunit-worker02 offline and rebuild it safely to get us back to two workers again17:47
clarkbfungi: done17:48
AJaegerthanks, now we can retire apps-site completely: https://review.openstack.org/526945 and https://review.openstack.org/52694317:51
*** jkilpatr has joined #openstack-sprint17:52
*** baoli has quit IRC17:59
*** baoli has joined #openstack-sprint18:00
fungiAJaeger: thanks! chucked those into the gate now too18:08
AJaegerthanks, fungi18:08
clarkbI've got the elasticsearch upgrade process braindump in https://etherpad.openstack.org/p/elasticsearch-xenial-upgrade. Now to fill in the outline with actual commands18:15
pabelangerand back19:01
pabelangerI've just deleted files01.o.o19:03
pabelangerclarkb: jeblair: mind if I get a review: https://review.openstack.org/527707/ system scripts for xenial on meetbot19:04
clarkbpabelanger: ya I'll take a look19:12
clarkbfungi: after reading much documentation I think I got the vg migration stuff correct in https://etherpad.openstack.org/p/elasticsearch-xenial-upgrade do you mind taking a look?19:13
clarkbpabelanger: reviewed19:16
clarkbpabelanger: left a comment on a thing19:16
fungiclarkb: in my experience, the new system will have the lv known to devmapper within moments of the nova volume attach happening, but then again i wasn't doing vgexport on the old system either so ymmv19:18
clarkbfungi: gotcha thanks19:18
clarkbfungi: I can run pvs/vgs on the new side to see what state it is in before attempting the import19:19
pabelangerclarkb: looking19:19
pabelangerclarkb: yes, you are right. Fixing19:20
clarkbI'll wait to rereview pabelanger's thing, then it's brunch and start on elasticsearch after19:20
pabelangerremote:   https://review.openstack.org/527707 Add support for systemd init scripts19:21
clarkbpabelanger: +219:24
clarkbI'm in search of breakfast tacos now, back in a bit19:24
*** openstackstatus has quit IRC19:27
pabelangerthanks19:29
fungiwhat, are you in austin or something?!?19:38
fungii got the impression no other city was allowed to have breakfast tacos19:39
pabelangerSadly no breakfast tacos near me19:40
pabelangerfungi: mind a review again on 52770719:40
fungiyou must be psychic, i had just pulled it up19:40
fungipabelanger: okay, lgtm. seems a little indirect having a separate name for the file resource from its path in that case, but not especially incorrect19:45
pabelangerfungi: yah, syntax is valid, but didn't know how else to do it without creating 2 different entries19:48
fungisure, it's fine19:49
fungijust took me a bit of staring to realize the file was being referred to by something other than its path, when we already knew the path and it was parameterized19:49
pabelangeryah, maybe I should have used a large if / else block over variables to make it easier to read19:51
clarkbfungi: made them myself, nowhere near as good as what you get in austin though19:52
fungiahh19:52
*** jkilpatr has quit IRC20:03
fungisubunit-worker01 is actively taking work from gearman now, so i'm shutting down/deleting 02 and removing it from dns, then will launch its replacement20:18
pabelangerokay, launching eavesdrop01.o.o again20:21
fungiokay, old subunit-worker02 is gone and the replacement is being booted20:25
fungigonna go grab late lunch/early dinner and check on this when i get back20:25
pabelangerhey, it worked20:26
ianwif i could get a couple of eyes on the codesearch changes https://review.openstack.org/527544 and https://review.openstack.org/527557 that would be great.  i'd like it get it back to regular puppeting and then i'll upgrade it20:27
ianwi actually tested that "live" on codesearch so it's all updated now too20:27
ianwthen i'll check out frickler's nodejs stuff and see where we are with those various bits20:28
ianwfirst i need to deal with a cupcake emergency20:28
*** openstackstatus has joined #openstack-sprint20:28
*** ChanServ sets mode: +v openstackstatus20:28
pabelangerokay, I am going to get a coffee then start the process to migrate volume from eavesdrop.o.o to eavesdrop01.o.o20:30
pabelangerwhich will result in an outage of IRC logs20:31
pabelangershould maybe first confirm we don't have any meetings going20:31
clarkbpabelanger: ya we should do that20:34
* clarkb reviews ianw changes then will be attempting es07 upgrade20:35
pabelangeryah, we have a few more hours of meetings20:38
pabelangerI'll hold off until a little later to migrate the volumes20:39
*** jkilpatr has joined #openstack-sprint20:39
jeblairdeleted the old paste.openstack.org20:48
pabelangerack20:49
pabelangerI'm going to take a stab at cacti02.o.o20:50
jeblairpabelanger: lemme know if you have any questions20:51
pabelangerwill do20:52
clarkbianw: changes lgtm, left a comment but not worth a new patchset I don't think20:52
clarkbes07 replacement is launching now20:59
pabelangerHmm, php5 isn't supported on xenial, only php7 it seems. will have to see how that plays with cacti21:03
pabelangeras we try to load some php5 apache mods21:03
pabelangerbut for now, heading outside to play in the snow :D21:04
mtreinishpabelanger: it looks like it should work: https://bugs.launchpad.net/ubuntu/+source/cacti/+bug/1571432 (assuming we use the packaged version)21:11
openstackLaunchpad bug 1571432 in cacti (Ubuntu) "Cacti package is incompatible with PHP7 on Xenial" [High,Fix released] - Assigned to Nish Aravamudan (nacc)21:11
clarkbfungi: looks like launch failed for me because subunit-worker02 does not have dns records anymore which broke the firewall. I've confirmed that the firewalls on logstash workers are unhappy as well. However subunit workers have nothing to do with logstash or logstash workers so I'm now going to take a detour to figure out why our firewall rules are a bit too greedy21:15
mtreinishclarkb: they use the same gearman server so they need to have the hole open to get jobs from the server21:21
mtreinishclarkb: unless things have changed there?21:21
clarkbmtreinish: ya they need a hole to logstash.o.o over port 473021:21
clarkbbut as is we are allowing them all to talk on the elasticsearch ports too which isn't necessary21:22
mtreinishah, ok21:22
mtreinishthings are likely too greedy because I screwed up when I added the rules and just copy and pasted everything from the logstash workers for the subunit side in site.pp21:23
pabelangermtreinish: yah, I think we just need to fix the apache mod we enable21:25
clarkbmtreinish: fungi https://review.openstack.org/527787 I think that should do it21:28
mtreinishclarkb: +121:37
clarkbfungi: also I think this means we can't remove things from dns without removing it from firewall rules first21:37
clarkbfungi: because unfortunately the resulting behavior seems to be "be wide open"21:38
clarkbthough it didn't affect the trusty nodes?21:38
clarkboh! I know what it was21:38
clarkbit was the adding of 01 back in but leaving 02 as well21:38
clarkbthat caused netfilter-persistent to reload rules21:39
clarkbfungi: what I don't understand is why the rule got flushed21:41
clarkbtesting on logstash-worker01 if I remove subunit-worker02 rules load, then if I add it back again the old rules remain21:41
clarkband reading scripts that seems to be the intended effect21:41
pabelangerremote:   https://review.openstack.org/527793 Bump puppetlabs-apache to 1.11.121:46
pabelangerjeblair: clarkb: figured out php7 issue for cacti^. Seems it is our only manifest right now using puppetlabs-apache, so bumps the dependency to latest 1.x release21:46
jeblairpabelanger: know why we aren't using puppet-httpd?21:47
*** baoli has quit IRC21:48
jeblairmaybe now would be a good time to switch?21:48
*** baoli has joined #openstack-sprint21:48
pabelangerjeblair: seems like an oversight when we did the conversion, but can dig into it. However, we do have a long standing issue to migrate back to puppetlabs-apache, since we merged our forked code upstream. We just never found the time to migrate back21:48
jeblairpabelanger: oh ok.  guess it doesn't matter then :)21:49
pabelangeryah, if we are going to keep puppet around for next round of LTS, I think migrating to puppetlabs-apache is the right move. I did have a few patches up to start on some modules, but never pushed hard on them21:49
fungiokay, back now... looks like stuff is going on?22:11
ianwfrickler: so "Error: /Package[gulp]: Provider npm is not functional on this host" is the main error @ http://logs.openstack.org/02/527302/5/check/legacy-puppet-beaker-rspec-infra/39955b9/job-output.txt.gz22:13
fungii have an 02 and am adding it back to dns now22:13
clarkbfungi: thanks, I'll redo es07 boot shortly22:13
fungiclarkb: there is once again dns for 0222:13
fungibut the negative ttl will come into play for a few more minutes probably22:14
clarkbya I'll watch it22:14
fungimay just be that we can't replace some systems while we're in the middle of replacing some other systems22:15
fungii need to at least restart {iptables,netfilter}-persistent on logstash.o.o22:16
clarkbya and all the workers as well22:17
clarkbto pick up the rules22:17
fungithat seems to have fixed the ability of the replacement subunit-worker02 to pull jobs again22:18
clarkbI can help with that once I get es07 going22:18
fungiall what workers?22:18
clarkblogstash and subunit22:19
ianwfrickler: "When using the NodeSource repository, the Node.js package includes npm" so this is where i guess it's all getting confused22:19
clarkbdepending on if their iptables rules are loaded properly22:19
pabelangerI have 2 patchs up for review https://review.openstack.org/527793/ and https://review.openstack.org/527796/ related to cacti02.o.o. reviews most welcome22:19
fungithe logstash workers have iptables rules which involve the subunit workers?22:20
clarkbfungi: yes, see https://review.openstack.org/#/c/527787/22:20
clarkbit is a bug, but I've attempted to address it in ^22:20
fungiahh22:21
ianwfungi: i think npm gets installed to /usr/local/bin ... this has shades of your pip issue?22:27
fungicolor me unsurprised22:27
ianwalthough, maybe not, yours was calling with full path22:28
ianwdid we determine if /usr/local/bin was actually in the path correctly?22:28
funginope, i have generally assumed that puppet has a default shell path of ""22:28
fungithough if it's documented somewhere, i'm happy to start making new assumptions22:29
ianwfrickler: ok, my next attempt is to use nodejs 6.x ... which appears to be the oldest LTS release.  0.12 is EOL already, so maybe this is part of the failure.  also, i don't understand how nodejs versioning works :)22:31
clarkbfungi: lvm steps as described seem to work and cinder had no trouble detaching it22:36
fungicool22:36
clarkbrebooting now to make sure fstab is happyness22:36
clarkbaha found a missing step :) need to chown everything as uids/gids are not stable across installs22:38
fungioh! yeah usually that doesn't change22:38
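On the new host the attach side is the reverse of the detach, plus the ownership fix clarkb just hit; again, all names here are illustrative assumptions:

```shell
# From a control host (names illustrative):
nova volume-attach elasticsearch02 <volume-uuid>

# On the new elasticsearch host:
pvscan                           # confirm the pv is visible to lvm
vgchange -ay main                # activate the imported volume group
mount /dev/main/elasticsearch /opt/elasticsearch

# uids/gids are assigned at package-install time and differ across
# builds, so fix ownership explicitly before starting the daemon:
chown -R elasticsearch:elasticsearch /opt/elasticsearch
```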
clarkbI'm editing DNS now but think I can bring new elasticsearch07 into existing cluster shortly22:41
ianwfrickler: yay, that worked!  now, the question is if openstack_health is 100% ok with the later nodejs version.  i think we just have to assume it is and fix anything as it occurs22:49
pabelangerokay, stepping out for a few hours, but when I return I'll attempt to move eavesdrop volume to new server22:53
clarkbalright reboots seem to be what are clearing out the firewall rules22:53
clarkbfungi: ^ fyi22:53
clarkbI've not put es07 in use yet as I am trying to figure out why reboots seem to make iptables unhappy22:53
*** harlowja has quit IRC22:54
clarkbBefore=network.target <- really?22:55
clarkbinfra-root so ^ is an issue. Do you think we should install our own unit that starts it again after network.target?22:55
ianwoh firewall rules and systemd ... i feel like we've discussed this before.  maybe it was dib ...22:56
clarkbthats a fairly major regression in ubuntu, but maybe it doesn't count because the service has a new name22:57
clarkbseems like there are ways to shadow system supplied units? but I may be misremembering. Worst case I think if we just had infra-netfilter-persistent with a different dependency it would work?22:58
ianwhttps://review.openstack.org/#/c/293826/ is what i'm thinking of.  "network-pre.target is a target that may be used to order services before any network interface is configured. It's primary purpose is for usage with firewall services that want to establish a firewall before any network interface is up."22:59
clarkbproblem here is we can't configure our firewall until networking is up23:00
clarkbbecause dns resolution23:00
ianwahh23:00
fungiwell, my classic network security training says don't use dns names in firewall rules. that probably doesn't help us much23:01
clarkbya we could convert over to ip addresses everywhere23:02
clarkbit makes managing it in config management more painful though23:02
fungiyeah, mostly based on the idea that unless you hardcode ip addresses into firewall rules, your firewalls are only as secure as your nameservers23:02
clarkband ubuntu will refuse to install any rules if you use dns with default service :)23:03
ianwand then does it actually re-check the name resolution at any point?  or do you have to reload the rules anyway if the dns changes?23:03
* fungi notes that he mostly managed firewalls which protected nameservers, so priorities may have been mildly askew23:03
clarkbianw: you have to reload if dns changes23:03
ianwso it's not so much dns as ns :)23:04
fungithis is a fair point23:04
funginondynamic name service23:04
fungithe internet has changed a lot, and become a freespirited party kind of place23:05
*** baoli has quit IRC23:05
*** baoli has joined #openstack-sprint23:06
clarkbzuulv3.o.o uses names as well but has rules installed23:06
clarkb(it is xenial)23:06
clarkbjeblair: pabelanger ^ any idea if that was handled for zuul?23:06
ianwi remember pabelanger doing a swizzle where we removed the rules, brought up hosts and then re-added them23:07
clarkbfwiw I'm particularly worried about open ES on the internet because it is known for being abused; the logstash workers also exhibit this, though it is more difficult to abuse them23:08
jeblaircatching up23:10
fungiyeah, having our elasticsearch api sockets default to "go away" seems like a good thing23:11
fungialready dealt with a wiki server getting compromised within minutes of running with an exposed elasticsearch api listener23:11
fungithat's plenty for me23:12
jeblairclarkb: well, also if we ever run zuul with my repl patch installed and it answers connections from outside localhost, we, erm, would need to rotate all our secrets.23:12
clarkbjeblair: ya, I checked it and it seems happy now23:12
clarkber it being the firewall at least23:12
jeblairbut i checked before using it and firewall seemed ok.  i don't know why that is if there's a structural problem23:12
*** baoli has quit IRC23:12
clarkbI'm wondering if it was manually started to reload the rules after last reboot23:13
clarkb(I also seem to recall checking the firewall when the repl was installed a while back as well)23:13
clarkbThinking about ways to address this, installing our own unit to run the same commands with different dependencies is probably safe and reliable23:14
clarkbhowever23:15
clarkbyou'd want to order it such that it was done before "real" system services start but after networking23:15
clarkband I think after networking is sort of the common system is up start everything else so that may be tricky23:15
ianwfungi: you've looked at some "[" quoting issues ... http://logs.openstack.org/44/527144/13/check/legacy-puppet-beaker-rspec-infra/1ba8fea/job-output.txt.gz#_2017-12-13_00_11_35_740142 ring any bells23:17
ianwi don't think the ethercalc rspec test is actually testing anything :/23:17
clarkbjeblair: fungi ianw trivial fix is to replace everything with ip addrs23:17
clarkbit will add a lot of boilerplate around upgrades because ip addrs will be changing constantly23:17
fungiyeah, i'll admit even though my knee-jerk security wonk reaction is to hardcode ip addresses, it's not at all convenient23:18
fungiianw: "No examples found" is certainly a new error for me23:19
fungialmost seems like a mistranslation23:19
clarkbI feel like this problem may deserve a walk23:19
clarkbI'm not coming up with any good ideas right now23:20
jeblairthe unit file is called netfilters-persistent?23:20
clarkbjeblair: let me check23:20
fungiyes23:20
clarkbsystemctl show netfilter-persistent23:20
clarkbdrop the s in netfilters23:20
jeblairwelp, on zuulv3 it's a sysvinit script23:20
clarkboh it might be here too, I just asked systemctl for info /me looks23:20
jeblairand that's provided by the package23:21
clarkbya it is here too23:21
fungithe service on trusty was iptables-persistent and on xenial is netfilter-persistent but very well may be via systemvinit compat23:21
clarkbso how does that sysv script decide to be before networking23:21
clarkb# Required-Start:    mountkernfs $remote_fs <- it must be parsing the lsb info?23:22
clarkb$remote_fs should require networking though I think23:22
jeblairdoes it look at rcS.d?23:23
jeblairlrwxrwxrwx   1 root root    30 Jun  2  2017 S05netfilter-persistent -> ../init.d/netfilter-persistent23:23
jeblairlrwxrwxrwx   1 root root    20 May 17  2017 S12networking -> ../init.d/networking23:23
clarkbhttps://www.turnkeylinux.org/blog/debugging-systemd-sysv-init-compat doesn't say but does say where we can look for the generated compat stuff23:25
clarkbjeblair: /lib/systemd/system/netfilter-persistent.service23:27
clarkbmy xenial node has that unit23:27
jeblairclarkb: it's entirely possible zuulv3 doesn't work, and we just fixed it manually a while ago23:27
clarkbI think that supercedes any compat layer because it won't look for compat scripts if there is a unit with the same name23:27
clarkbthat file is on zuulv3 too23:28
clarkbso  Ithink the sysv init script is just noise?23:28
jeblairoh of course /lib/systemd.  silly me for thinking this would be in etc.23:28
fungifhs to the rescue! or... nope23:29
clarkbI bet if we delete the unit file the sys v init script would work though and run late23:29
clarkbhowever I don't know if it would run before or after elasticsearch23:29
jeblairclarkb: well, the symlinks have it running early too23:29
clarkb(or zuul)23:29
jeblairbefore networking23:29
clarkbah23:29
clarkboh ya 05 vs 1223:29
jeblairi also like the idea of using ip addresses, for extra robustness23:30
jeblairi wonder if we could get puppet to do the substitution for us?23:30
clarkbjeblair: I think so, we just edit the firewall rules that puppet writes and it will replace them23:31
jeblairi mean, still have hostname in config management, but have puppet do a dns query and convert them to ip addresses for us23:31
clarkboh23:31
clarkbI bet we could do something in the template at least23:32
jeblair(that would be better than our current system since then our firewalls would automatically adjust as we changed dns)23:32
clarkbya23:32
jeblairit doesn't make anything more secure than the dns servers, of course23:32
jeblairthough, i would like to make our dns servers more secure.  :)23:33
funginow that we're considering running dns servers (aside from local caches), yes for sure23:34
fungidnssec ftw23:35
fungithe proverbial "don't use dns in firewall rules" predated dnssec23:35
fungiunfortunately, so did rackspace's dns service, unfortunately23:36
* fungi adds another unfortunately or two for good measure23:36
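jeblair's "have puppet do the dns query and convert to ip addresses" idea reduces to resolving names at config-generation time and emitting literal addresses. A minimal stand-alone sketch of that approach (the function names are invented, and the openstack-INPUT chain and port range are only meant to echo the style of our rules, not reproduce them):

```python
import socket

def resolve_all(hostname):
    """Return the unique IPv4 addresses a hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def render_rules(hostnames, dport):
    """Emit iptables rule lines with literal source addresses, not names.

    Rules generated this way load fine before DNS is available at boot;
    the tradeoff is they must be regenerated whenever DNS changes.
    """
    return [
        f"-A openstack-INPUT -s {ip} -p tcp --dport {dport} -j ACCEPT"
        for host in hostnames
        for ip in resolve_all(host)
    ]

if __name__ == "__main__":
    for rule in render_rules(["localhost"], 9200):
        print(rule)
```

Wiring this into the existing iptables template would keep hostnames in config management while installing only addresses, sidestepping the netfilter-persistent/DNS ordering problem entirely.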
clarkbhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/sect-Managing_Services_with_systemd-Unit_Files#brid-Managing_Services_with_systemd-Extending_Unit_Config23:37
clarkbI'm going to test ^ with a new After config23:37
ianwEmilienM: https://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/playbooks/legacy/puppet-beaker-rspec/run.yaml#n127 <- this code doesn't work, which i can fix ... but is the intention, if on a xenial node, to run puppet 4?23:41
*** harlowja has joined #openstack-sprint23:42
EmilienMianw: ouch, do we use this role somewhere?23:42
EmilienMlet me check23:42
ianwEmilienM : the testing for https://review.openstack.org/#/c/527144/ , say23:42
EmilienMoh that's infra modules ok23:43
ianwEmilienM: http://logs.openstack.org/44/527144/13/check/legacy-puppet-beaker-rspec-infra/1ba8fea/job-output.txt.gz#_2017-12-13_00_11_35_74014223:43
EmilienMit's weird we have the playbooks in puppet-openstack-integration23:43
ianwthat's easy enough to fix, i mean we can just cut out the whole line in the new job23:43
ianwbut a) it doesn't seem to do anything and b) we'd want to test it with puppet3 for infra?23:44
EmilienMI'm not sure which version of puppet infra deploys23:44
EmilienMI hope puppet423:44
ianwkeep hoping :)23:44
fungii wouldn't be so sure23:44
ianwpuppet3 is what ships in xenial23:44
clarkbI can't seem to get that rhel docs method for updating these things to work on ubuntu23:47
clarkbDropInPaths=/etc/systemd/system/netfilter-persistent.service.d/after-network.conf <- it seems to know the config is there23:49
clarkbbut the contents of that don't seem to override what is in the root unit23:49
clarkb"Note that dependencies (After=, etc.) cannot be reset to an empty list, so dependencies can only be added in drop-ins. If you want to remove dependencies, you have to override the entire unit."23:51
clarkbwell then23:51
clarkbI'm testing a complete override now23:56
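Since a drop-in can only append to After=/Before=, a complete override means shadowing the vendor unit with a full copy in /etc/systemd/system. A hedged sketch of what that file might look like (this is not the verbatim Ubuntu unit; the Description and Exec lines are assumptions):

```ini
# /etc/systemd/system/netfilter-persistent.service
# A unit here shadows the copy in /lib/systemd/system, so the whole
# file must be reproduced, not just the changed lines.
[Unit]
Description=netfilter persistent configuration (reordered)
# The packaged unit orders itself Before=network.target, which breaks
# rules that need DNS; order after the network instead.
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/netfilter-persistent start
ExecStop=/usr/sbin/netfilter-persistent stop

[Install]
WantedBy=multi-user.target
```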
ianwEmilienM: proposed -> https://review.openstack.org/527811 puppet-beaker tests: don't use puppet 4 ... perhaps we need separate 4 tests?23:57
EmilienMianw: lgtm, I'll check with alex23:58

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!