Thursday, 2017-12-14

fungiianw: cmurphy was heading up a bunch of our puppet 4 testing in infra so it would help to get her input if possible00:08
clarkbjeblair: fungi ianw ok http://paste.openstack.org/show/628906/ seems to work00:09
clarkbannoyingly unbound is sys v00:10
* clarkb wonders what made the decision for sysv vs systemd unit in ubuntu00:10
clarkbwe probably shouldn't be rebuilding any new services that require name based firewall rules until we figure this out?00:10
clarkbor do we want to temporarily switch to ip addrs manually (thinking about zuul in particular)00:11
ianwso that replaces its init.d script?00:16
clarkbianw: nothing; I think the init.d script is ignored entirely because systemd never tries to do a compatibility lookup as it already knows the name00:17
ianwoh, ok, override00:18
clarkbanother option is to not override and just run our own unit later00:20
clarkbso you'll get the firewall early if no dns resolution is necessary, then redundantly reapply the rules00:20
clarkbor you won't have firewall until unbound is running00:20
ianwi feel like maybe it's better not to override, but not strongly00:21
clarkbfungi: jeblair ^ do you have an opinion?00:22
clarkbI'll test the supplemental run later now00:22
fungiour base rules give us access anyway without dns resolution, so waiting a little to grant anything beyond that seems fine (and safe)00:29
clarkbwell in this case the fallback behavior is no rules at all until we can apply the ones with names00:30
clarkbso wide open :/00:30
clarkbI think I have supplemental working, I'll push up a patch we can look over00:31
fungik00:31
jeblairmaybe best to do the supplemental script as a stepping stone to automatic translation to ip addrs00:43
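
The supplemental-unit idea above boils down to a second systemd service that re-applies the saved ruleset once name resolution is available, leaving the packaged netfilter-persistent unit alone. A minimal sketch of what such a unit could look like, run as root; the unit name and rules path are illustrative and not necessarily what landed in 527821:

    # illustrative: install a supplemental oneshot unit that re-runs iptables-restore
    # after unbound and the network are up, then enable it for future boots
    cat > /etc/systemd/system/iptables-resolve.service <<'EOF'
    [Unit]
    Description=Reapply iptables rules once DNS resolution is available
    After=network-online.target unbound.service
    Wants=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/sbin/iptables-restore /etc/iptables/rules.v4

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable iptables-resolve.service
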
*** harlowja has quit IRC00:45
clarkbhttps://review.openstack.org/527821 is the change to do the supplemental service00:46
clarkbI have not tested the puppet but have tested the unit00:46
clarkbbecause that seems to be working on the es07 replacement I'm going to go ahead and bring it into the es cluster now00:47
clarkbjeblair: fungi ianw if we decide to move ahead with 527821 we should watch zuulv3 closely and we can use one of the logstash-workers as a reboot tester00:47
clarkboh I didn't use the right topic on that change, will update01:03
clarkbI'm going to delete the old es07 now01:12
clarkbnew one seems to be working01:12
clarkbold es07 is deleted01:17
clarkbI need to take a break and make dinner and stuff. I'll probably try to resume early tomorrow again to help frickler with the other es servers01:17
clarkbianw: thanks for review on the firewall thing01:18
pabelangerjust getting back now, not sure I have focus to rotate out volume for eavesdrop.o.o this evening01:30
pabelangerI'll look in the morning for the next window between meetings01:30
*** jkilpatr has quit IRC02:00
*** openstackstatus has quit IRC02:32
*** openstackstatus has joined #openstack-sprint02:36
*** barjavel.freenode.net sets mode: +v openstackstatus02:36
*** skramaja has joined #openstack-sprint04:47
ianwfrickler: progress!  a real test -> http://logs.openstack.org/22/527822/5/check/legacy-puppet-beaker-rspec-infra/101b243/job-output.txt.gz#_2017-12-14_04_22_09_29187805:06
ianwso we did have the repo names wrong.  i think we're getting very very close05:06
ianwif ci comes back positive on 527822 i think the stack is good to go.  more rspec tests would be welcome if you feel like it (connect to the service, etc)05:08
ianwin terms of ethercalc migration, i looked into that.  you just move the redis db in /var/lib/ ... it's safe to do at any time, but so people don't drop edits we should shutdown apache before copying05:09
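
The ethercalc move described there is essentially a copy of the redis dump with the frontend stopped so no edits are lost mid-copy. A rough sketch, assuming the stock Debian/Ubuntu service names and default redis data directory, with the destination hostname purely illustrative:

    # on the old ethercalc host: stop the frontend, then redis, so the dump is quiescent
    sudo service apache2 stop
    sudo service redis-server stop
    # copy the redis database over to the replacement host (path and hostname illustrative)
    sudo rsync -av /var/lib/redis/ root@ethercalc02.openstack.org:/var/lib/redis/
    # on the new host: start redis then apache and check that existing pads load
    sudo service redis-server start && sudo service apache2 start
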
*** skramaja_ has joined #openstack-sprint08:56
*** skramaja has quit IRC08:57
*** skramaja has joined #openstack-sprint09:01
*** skramaja_ has quit IRC09:01
fricklerianw: small issue on https://review.openstack.org/527144 I think, but I like the rspec stuff09:36
*** jkilpatr has joined #openstack-sprint10:46
*** jkilpatr has quit IRC11:08
ianwfrickler: cool, will look at tomorrow.  hopefully that will be it!11:37
*** skramaja_ has joined #openstack-sprint12:35
*** skramaja has quit IRC12:35
*** skramaja has joined #openstack-sprint12:40
*** skramaja_ has quit IRC12:40
*** skramaja has quit IRC13:31
clarkbfrickler: good morning/afternoon. I'm around if you want to go over the elasticsearch upgrade process14:06
clarkbfrickler: wrote it down at https://etherpad.openstack.org/p/elasticsearch-xenial-upgrade and went through it with elasticsearch07 so I think it is working. The one issue I found was related to the firewall. Details at https://review.openstack.org/#/c/527821/14:08
*** baoli has joined #openstack-sprint14:35
fricklerclarkb: I looked at your notes and they seem pretty clear to me, I can give them a go with es06, then14:43
clarkbfrickler: the only thing is we need to make sure the firewall is addressed first14:43
clarkbI'd like 527821 in or something like it before we proceed just so that we can confirm that whatever the fix is is working14:44
clarkb(it is important for elasticsearch because its api is fairly easy to abuse and there are no AAA features built into the open source version)14:44
fricklerclarkb: I'd like to use the new server to test that patch once more before merging it14:45
fricklerclarkb: or you can merge if you are confident enough and I'll test it implicitly14:45
clarkbfrickler: you mean you would manually run a puppet apply with that update applied after the initial install?14:46
clarkbfrickler: that's probably a good idea if you want to proceed with es0614:46
fricklerclarkb: yes14:46
clarkbok lets do that then14:46
clarkbjust have to remember it is a reboot that we need to test (firewall rules should be in place after)14:48
fricklerclarkb: yes, launching node now14:50
fungicurious to hear if that solves the race14:53
fricklerclarkb: I think you want to add a start action into the service definition, so that it doesn't need a reboot initially. waiting for results of the reboot now15:06
clarkbfrickler: well the regular iptables application should work the first time around. The problem we are trying to address is specifically that the system provided unit does not work on boot for us15:06
clarkbso in this case I think it is ok to just have it be enabled for the next boot15:07
fricklerclarkb: on es06, there were no rules after the initial puppet run nor after running again with the patch. they were installed only after I explicitly started the new service15:08
fricklerclarkb: fungi: looking o.k. after a reboot15:09
fricklerso maybe need more investigation why the rules aren't enabled on the initial puppet run15:10
clarkbfrickler: that is a possibility. Though reading journald logs I'm fairly confident the main service is attempting to run on boot and failing15:12
clarkbfrickler: so guessing there is a separate issue with not enabling on first puppet run15:12
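
The checks being described are roughly: look at the boot journal for the packaged firewall unit, then list what actually got loaded. Something along these lines, assuming Ubuntu's netfilter-persistent packaging:

    # did the packaged unit try to load rules at boot and fail on unresolvable names?
    journalctl -b -u netfilter-persistent.service
    # what is loaded right now; an empty openstack-INPUT chain means wide open
    sudo iptables -S
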
fricklerclarkb: but I guess I could continue with stopping things on the old es06 now, o.k.?15:13
clarkbfrickler: actually, you know what, we may not have noticed because launch node does a reboot so we will always get the rules applied that way15:13
clarkbfrickler: yes if iptables looks good to you after a reboot I think you can proceed with the rest of the process15:13
clarkbfrickler: for the disable shard allocation step that needs to be run from a node currently in the cluster15:15
clarkbfrickler: not sure if I made that clear (also you can run it multiple times safely)15:15
fricklerclarkb: I assumed that, yes, but might be worth another note15:16
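
For reference, the disable-shard-allocation step is a cluster-settings API call and is safe to repeat; run from any node already in the cluster, it looks something like this (the exact setting name varies a little between elasticsearch versions):

    curl -s -XPUT 'http://localhost:9200/_cluster/settings' -d '
    {"transient": {"cluster.routing.allocation.enable": "none"}}'
    # re-enable allocation once the replacement node has joined:
    curl -s -XPUT 'http://localhost:9200/_cluster/settings' -d '
    {"transient": {"cluster.routing.allocation.enable": "all"}}'
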
fungiperhaps we just set elasticsearch to not start automatically, and so once we log in and manually start it the firewall rules have been long-since applied successfully?15:18
clarkbfungi: it already does not start automatically15:19
clarkb(this was intentional as joining cluster at improper time could cause disruption)15:19
fricklerwhew, attaching the volume took a bit of time to execute for rax ... /me was already getting nervous about it not showing up ;)15:23
clarkbthe good news is we have a complete copy of the data on all of the other volumes too15:24
clarkber15:24
clarkbit's not one complete copy on all of the other volumes, it is in aggregate one copy15:25
clarkbbut ya worst case we let elasticsearch recover itself15:25
fricklerclarkb: so just to confirm, for the "update DNS/firewalls" step, this is the same as for the worker earlier? i.e. add new rdns via cli, update existing records via webinterface, run fw-restart on all the nodes15:31
clarkbfrickler: correct, in this case you need to update the firewall on elasticsearch*15:31
clarkbso that they can talk to each other15:32
fricklerclarkb: o.k., new node seems to be running fine. how long did the recovery take for you?15:49
fricklergiven that we want to reboot the new nodes anyway before taking them into service, I'd be fine with merging https://review.openstack.org/#/c/527821/ then as is15:51
clarkbfrickler: I think recovery took a couple hours, but I ended up in a degraded state for longer than you did while I sorted out the firewall so hopefully yours has less time to recover15:52
clarkbfungi: did you want to review the firewall change too?15:52
clarkb(I think the more reviews on that one the better)15:52
fungiclarkb: which one, 527821? i can take a look once tc office hour discussion winds down15:59
clarkbfungi: yes that change16:01
clarkbfrickler: down to 2 initializing shards so I think it is close to being done16:20
fricklerclarkb: should I launch another node already? seems I could do another one and then either you take over or I continue with the remainder tomorrow my morning16:25
clarkbfrickler: once the cluster goes green you can do another one, but should wait until then16:32
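
Waiting for green is just polling the cluster health endpoint on any member node, e.g.:

    curl -s 'http://localhost:9200/_cluster/health?pretty'
    # proceed when "status" is "green" and "initializing_shards" is 0
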
fricklerclarkb: just went green, now relocating 2 shards. o.k. to also remove old es06 now?16:38
clarkbfrickler: yup16:39
fricklerclarkb: probably need to do that first, launching es05 did hit a quota error16:39
clarkbrelocating shards are ok, if cluster is green you can take the next node offline (we have n+1 redundancy so can only handle one node outage when "green")16:39
clarkbfrickler: ya these are large nodes16:40
clarkbthey are essentially a giant in-memory database for searching billions of rows of free-form text data16:40
*** jkilpatr has joined #openstack-sprint16:51
jeblairhttps://review.openstack.org/527729 could use review -- the new paste server is in place, we should merge that before it reboots :)16:52
clarkbjeblair: I think you may actually want network-online16:54
clarkbjeblair: based on https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/16:54
jeblairwelp, if we're okay with a paste outage, i can try it :)16:57
jeblairi know the current one works, and i copied that from other unit files we have16:57
clarkbjeblair: test would be a reboot?16:57
jeblairyep16:57
clarkbI'm ok with that I think. The problem with After network.target is network may not be "up" according to the docs16:58
clarkbso network.target works better as a Before16:58
jeblairok16:58
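
Per the freedesktop NetworkTarget page referenced above, network.target only orders a unit relative to network management starting, while network-online.target (when pulled in with Wants=) is the hook that actually waits for a configured network. The relevant directives, plus a way to check what a unit is ordered against (unit name illustrative):

    # in the [Unit] section of the service that needs working networking/DNS:
    #   Wants=network-online.target
    #   After=network-online.target
    systemctl show -p Wants,After myservice.service
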
fricklerclarkb: do we need to wait for shard relocation to finish, too, or will that become obsolete after the next replacement anyway?17:00
clarkbfrickler: you don't have to wait, shard relocation is just disk usage balancing. I expect that yes the upgrades will largely make that obsolete and it's the final state that we want to rebalance17:01
jeblairok that works too17:01
jeblairclarkb: https://review.openstack.org/527729 updated17:03
clarkb+217:03
clarkb(I ended up reading far too much about this stuff when sorting out the firewall thing yesterday)17:04
jeblairwe should keep an eye out for other instances17:05
clarkb++17:05
fricklerso I'll start to replace es05 now17:06
fricklerclarkb: I just noticed that the old es06 is still in state deleting, is that expected to take 30 minutes or longer? server id d1cc1c9e-0e3f-4663-aa8c-56a40873e38717:11
clarkbfrickler: no, that is odd, but also not something on our end17:12
clarkbfrickler: we will have to watch it and if it persists I guess file a rax ticket for it17:13
clarkbfrickler: I wonder if a second delete request might unstick it? mordred might know I'll ask him over in -infra17:21
*** baoli has quit IRC17:27
fricklerclarkb: seems I can still log into that node, maybe do a shutdown/poweroff?17:32
clarkbfrickler: my hunch is that it got lost somewhere on the cloud side and so the event never made it to the instance. We have had trouble with similar in the past which is why nodepool will retry deletes17:33
clarkbfrickler: we can retry the delete in a bit and if that doesn't work ya we can try the shutdown17:34
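
Retrying the delete and checking the status is just the openstack CLI against that instance id:

    openstack server show d1cc1c9e-0e3f-4663-aa8c-56a40873e387 -c status
    openstack server delete d1cc1c9e-0e3f-4663-aa8c-56a40873e387
    # if it stays in "deleting", try a poweroff from inside the guest or open a provider ticket
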
fricklero.k., new es05 seems to be doing fine now, 12 shards initializing17:35
*** baoli has joined #openstack-sprint17:38
fungiclarkb: just to clarify earlier discussion on 527821, the upshot is that the instance briefly comes up with no packet filtering because iptables failed to load the ruleset due to missing dns resolution, and then shortly thereafter this reapplies the rules when name resolution is working and at that point we're covered?17:46
fungii wonder if there's some way to make it fail closed initially until dns works17:46
fungibut regardless, we likely should acknowledge that the instances are briefly exposed/vulnerable for any services started before that point17:50
fungiif that's what's happening17:50
clarkbfungi: yes that upshot is correct17:55
clarkbfungi: I haven't checked on a trusty node but I actually think that behavior of not having a firewall until some time later must not be a regression or it would've failed on trusty too17:55
clarkbit's just that with the systemd change they are trying to clamp down on it much more and broke the dns based rules17:56
fricklerhaving a failsafe mode that only allowed ssh instead of allowing everything still seems a good idea to me, but I'd say that that would be a followup project17:57
clarkbya, I also think that would require much more testing and I'm not sure it's really a regression especially since ubuntu and debian don't firewall by default17:58
clarkbthe idea jeblair had was to have config management do the dns lookups and write ip addresses into the rules files so that the base service works17:58
clarkbthat will also address the concern17:58
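
That idea amounts to resolving each hostname when the rules file is generated and writing one address-based rule per result, so the saved ruleset loads without needing DNS at boot. The shell below is only an illustration of the approach (hostname and port illustrative), not the puppet implementation that followed:

    host=graphite.openstack.org
    for ip in $(getent ahostsv4 "$host" | awk '{print $1}' | sort -u); do
      echo "-A openstack-INPUT -m state --state NEW -m tcp -p tcp -s $ip --dport 1234 -j ACCEPT"
    done
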
jeblairi'm going to poke at that real quick18:00
jeblairthe dnslookup thing18:00
pabelangerso, just checking to see if we have any ongoing meetings right now18:01
fungiahh, right, i remember that suggestion now. yes, would be pretty cool18:02
pabelangerif not, I have the time to migrate eavesdrop.o.o volume18:02
pabelangerchecking schedule again18:02
fungiclarkb: i suppose it's not possible/easy to make unbound start before netfilter-persistent?18:02
clarkbfungi: not with netfilter-persistent saying it must start before networking does. Unbound would be up but unable to resolve anything18:03
fungior, another alternative, possible to use remote resolvers until unbound starts?18:03
clarkbfungi: that won't help either because the issue is netfilter-persistent is starting before networking18:03
fungiahh, right18:03
clarkband you can only append to the lists of dependencies with modifications to existing units18:03
clarkbwhich is why I went with a different unit entirely18:04
fungibasically we'd need [networking]>[unbound]>[netfilter-persistent]18:04
clarkbfungi: yup18:04
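
For context, the ordering baked into the packaged unit can be inspected directly, which is where the netfilter-persistent-before-networking constraint shows up; drop-in overrides can append dependencies but not remove the ones already declared:

    systemctl cat netfilter-persistent.service
    systemctl show -p Before,After,DefaultDependencies netfilter-persistent.service
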
pabelangerlooks like maybe a conflict for http://eavesdrop.openstack.org/#Group_Based_Policy_Team_Meeting18:04
pabelangerconfirming18:04
fungijeblair's idea is sounding better and better18:04
clarkbso the idea with my change is it's a quick way to give us fairly good coverage on the problem so we don't have to halt upgrades18:05
pabelangerokay, I don't see any traffic in meeting rooms, but we do have a project scheduled18:07
pabelangerI think I can hold off another hour on migration to be safe18:08
clarkbfridays tend to be very quiet as far as meetings go if today doesn't work18:09
clarkb(but getting it done is probably best)18:10
*** baoli has quit IRC18:14
*** baoli has joined #openstack-sprint18:16
*** harlowja has joined #openstack-sprint18:29
clarkbfrickler: just three more shards to go. Should I plan on deleting the old 05 node and continue with the others this afternoon or do you want to finish up tomorrow?18:38
fricklerclarkb: that's up to you, I think I could do 3+4 tomorrow and leave 2 for the grand finale, but if you want to continue today, that's fine for me, too, doesn't look like we'll run out of tasks soon ;)18:41
clarkbok18:45
clarkbI'll probably continue with them today then as they take time and getting them done is worthwhile18:45
*** clayton has quit IRC18:46
*** clayton has joined #openstack-sprint18:47
fricklero.k., good luck, I think I'm done for today anyway18:47
clarkbfrickler: ok I should delete the old 05 then?18:48
fricklerclarkb: sure, I'd have waited until the shards are done, but should be fine by now18:51
clarkboh I can wait just saying I won't wait on you to do it18:52
clarkbyou can go enjoy your evening18:52
jeblairokay, i have a working impl of the dns thing.  will push up patches shortly18:58
pabelangerokay, i think we are clear on meetings at the moment19:13
pabelangerany objections if I migrate eavesdrop volume?19:13
pabelangerinfra-root: ^19:13
funginone from me19:13
fungii concur, no meetings seem to be running in official channels right now, at least19:14
clarkbif no meetings then fine by me19:14
pabelangerokay, will shutdown eavesdrop here in a moment19:14
pabelangerthen detach volume19:14
pabelangerand reattach to eavesdrop01.o.o19:14
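
The volume rotation itself is the usual stop/detach/attach sequence; a sketch with the openstack CLI, where the volume name and mountpoint are illustrative:

    # on the old server: stop the bots/apache, then unmount the volume
    sudo umount /srv/eavesdrop
    # from a host with cloud credentials: move the volume to the replacement server
    openstack server remove volume eavesdrop.openstack.org eavesdrop-volume
    openstack server add volume eavesdrop01.openstack.org eavesdrop-volume
    # on eavesdrop01: mount it, start services, and update DNS once it checks out
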
*** openstack has joined #openstack-sprint20:32
*** ChanServ sets mode: +o openstack20:32
jeblairclarkb: thx, fixed20:33
pabelangerokay, DNS updated20:34
pabelangerI'm going to try starting a meeting, then stop it to confirm everything is working as expected20:34
pabelangersuggestions for a project to test with?20:34
clarkbjeblair: +2 thanks20:35
pabelangerclarkb: are you okay with using infra to test startmeeting commands?20:36
clarkbpabelanger: what about in here?20:37
clarkbthat keeps it on topic with the sprint20:37
pabelangerclarkb: was going to use a meeting channel20:37
clarkboh you mean infra meeting?20:37
pabelangeryah, just want to confirm bot works as expected on xenial20:38
clarkbI think you can probably do something like #startmeeting test in here or sure one of the other meeting channels20:38
pabelangerunless you are okay20:38
pabelangeryah, test works20:38
clarkb(I'd avoid adding meeting logs that aren't for proper meetings like infra)20:38
pabelangerkk20:38
pabelanger#startmeeting test20:38
openstackMeeting started Thu Dec 14 20:38:39 2017 UTC and is due to finish in 60 minutes.  The chair is pabelanger. Information about MeetBot at http://wiki.debian.org/MeetBot.20:38
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:38
*** openstack changes topic to " (Meeting topic: test)"20:38
openstackThe meeting name has been set to 'test'20:38
pabelangerthanks!20:38
pabelanger#endmeeting20:38
*** openstack changes topic to "OpenStack Infra team Xenial upgrade sprint | Coordinating at https://etherpad.openstack.org/p/infra-sprint-xenial-upgrades"20:38
openstackMeeting ended Thu Dec 14 20:38:49 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:38
openstackMinutes:        http://eavesdrop.openstack.org/meetings/test/2017/test.2017-12-14-20.38.html20:38
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/test/2017/test.2017-12-14-20.38.txt20:38
openstackLog:            http://eavesdrop.openstack.org/meetings/test/2017/test.2017-12-14-20.38.log.html20:38
pabelangerlooks to be good20:38
fungipabelanger: you want to #status log something too?20:39
clarkbthat did what I expect20:39
pabelangerfungi: yes20:39
*** pabelanger has quit IRC20:39
*** pabelanger has joined #openstack-sprint20:39
fungiheh, weren't identified to nickserv?20:40
pabelanger#status log eavesdrop01.o.o online and running xenial20:40
openstackstatuspabelanger: finished logging20:40
pabelangerwoot20:40
pabelangerhttps://wiki.openstack.org/wiki/Infrastructure_Status20:40
pabelangerI'll keep eavesdrop.o.o online for a day or so, just to be safe20:41
pabelangerbut things look good20:41
fungijeblair: commented on 528043, wondering if you tested that syntax for udp20:42
fungi(i honestly don't know whether that's valid or not)20:42
pabelangerokay, I'll try cacti02.o.o now20:44
pabelangeractually, let me confirm backups20:47
fungipabelanger: we probably want to avoid keeping the old eavesdrop server around for very long unless we do something to prevent the bots from getting started if the server gets rebooted20:49
jeblairfungi: nope, missed that, thanks20:49
jeblairfungi: so we should just stick an <if> in there i think20:50
fungiyeah, that seems easiest20:50
fungiuse "-m state --state NEW" with tcp and omit it for anything else20:51
pabelangerfungi: right, let me shutdown to be safe20:52
jeblair-A openstack-INPUT -m state --state NEW -m tcp -p tcp -s 23.253.235.223 --dport 1234 -j ACCEPT20:52
jeblair-A openstack-INPUT -m udp -p udp -s 104.239.137.167 --dport 1234 -j ACCEPT20:52
jeblairfungi: how's that look?20:52
fungilgtm!20:53
jeblairok updated20:54
jeblairremote:   https://review.openstack.org/528043 Add support for resolving hostnames in rules20:54
fungithanks, +220:55
clarkbI've also +2'd it but not approved as it looks like we haven't decided yet to approve the stack?20:56
jeblairi can help shepherd if we want to go for it20:56
clarkbI'm in favor20:57
jeblairwe should get a +2 from fungi on 52804520:57
fungii would volunteer to do it myself but i have to disappear for christine's office holiday party soon and so won't be around20:57
jeblairwhich just happened20:57
fungijust to confirm, this is piloting for graphite but then we can incrementally switch other classes over as we get to them?20:58
jeblairfungi: yep20:58
fungii'm in favor20:58
jeblairok, i'll start the +Ws20:58
clarkbya I'll push up a change for elasticsearch stuff soon20:59
pabelangeryah, the firewall changes look good21:00
fungii may not be around to review that, but sounds good regardless21:00
fungidisappearing in another hour or so21:00
clarkbjeblair: oh you know what21:01
clarkbelasticsearch needs a range of ports21:01
* clarkb checks if that is compatible with this change (should be I guess?)21:01
fungiyeah, should work fine21:01
fungias long as the destination port is just treated as a string21:01
clarkbya I can just set port to 9200:940021:02
fungisimply put in XXX:YYY21:02
fungiright21:02
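
iptables accepts a range directly in --dport, so the elasticsearch rule comes out along these lines (source address illustrative):

    -A openstack-INPUT -m state --state NEW -m tcp -p tcp -s 192.0.2.10 --dport 9200:9400 -j ACCEPT
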
*** openstack has joined #openstack-sprint21:12
*** ChanServ sets mode: +o openstack21:12
clarkbjeblair: fungi pabelanger https://review.openstack.org/#/c/528087/1 thats the first step in converting elasticsearch to this I think21:18
pabelangerI'm just looking at design-summit-prep.o.o, is that server needed still? I'm not sure how it works21:20
clarkbfungi: ^ do you recall? I want to say its a ttx thing21:20
pabelangeryah, I see ttx name in some apache configs21:21
pabelangerseems to host https://github.com/ttx/summitsched21:22
pabelangeris that something we should move to openstack-infra repo?21:22
pabelangermaybe not21:22
pabelangersummitsched - A proxy to edit the OpenStack Design Summit sched.org21:22
clarkbpabelanger: I think we ask ttx and possibly just delete it tomorrow after confirmation from ttx21:23
fungipabelanger: we're not really managing that21:23
fungiit was more that ttx needed a sandbox to stand up something on short notice21:23
pabelangerkk21:24
pabelangerwill follow up21:24
clarkbjeblair: are we just waiting on CI for the iptables changes to merge now?21:40
jeblairclarkb: ya21:40
jeblairclarkb: looks like a job ahead of it has gone kaput21:41
clarkbok, just making sure I can't help with anything else21:41
clarkbpabelanger care to review https://review.openstack.org/#/c/528087/21:42
jeblairi'm going to push commit message updates21:42
pabelangerlooking21:42
clarkboh my change actually conflicts with jeblairs so I need to rebase21:42
clarkbpabelanger: ^ so one sec21:42
* clarkb waits for new commit messages21:43
pabelangerah21:43
pabelanger-W21:43
jeblairclarkb: go for it; i popped 528038 which was stuck and reapproved.  i don't think i need to touch the iptables ones21:43
clarkbjeblair: ok21:43
clarkbpabelanger: jeblair ok rebased21:44
clarkbworking on updating the rules now21:44
pabelanger+321:50
clarkbhttps://review.openstack.org/528101 and https://review.openstack.org/528097 do the config change for elasticsearch and logstash22:11
dmsimardFYI I'm picking up logstash-worker14 through 16 which I had not yet completed22:11
pabelangerclarkb: looks good22:13
*** rwsu has quit IRC22:26
pabelangergiving cacti02.o.o another try now22:32
pabelangerokay, booted that time22:39
*** rwsu has joined #openstack-sprint22:39
jeblair2/3 iptables changes landed.  i'm going to verify it doesn't change the firewall on graphite.o.o first, then approve the third22:44
pabelangerokay, cacti02.o.o looks okay. I'm going to proceed with volume migration from cacti01.o.o22:45
jeblairiptables rules are unchanged on graphite; approving 52804522:46
clarkbjeblair: sounds good22:46
pabelangerokay, I've stopped apache2 on cacti01.o.o and placed it in emergency file22:48
pabelangergoing to comment out crontab to stop cacti from running22:48
pabelangervolume attached, rebooting server to confirm22:55
*** baoli has quit IRC22:57
ianwif i could get a review on https://review.openstack.org/526978 (update puppet nodejs) i think ethercalc is ready to go23:00
ianwi'm going to try codesearch now too, since it's back puppeting23:00
clarkbianw: I think you have the reviews you need, just a matter of approving when you can watch the js things23:00
pabelangerokay, cacti02.o.o online, I've updated the database info manually. For some reason it doesn't look to be under control of puppet23:03
pabelangerI can see crontab running now23:04
ianwif we could look at https://review.openstack.org/528120 (sysconfig xenial update for codesearch) that would help23:04
pabelangerI'll open firewall for cacti02 now23:05
ianwthanks!23:07
ianwfrickler: as penance for that typo, i added a check for it in the rspec test :)23:08
pabelangerremote:   https://review.openstack.org/528122 Add cacti02.o.o to all snmp iptables rules23:11
pabelangerclarkb: ianw: ^23:11
pabelangerI think we can also update that to new firewalls shortly23:11
pabelangerhttp://cacti02.openstack.org/cacti/graph.php?local_graph_id=2374&rra_id=all works23:12
pabelangerbut toplevel page doesn't yet23:13
pabelangermust be populating still23:13
pabelangerokay, I've updated dns to cacti.o.o to cacti02.o.o, and poweroff cacti0123:20
*** baoli has joined #openstack-sprint23:23
*** baoli has quit IRC23:28
ianwpabelanger: https://review.openstack.org/528127 is probably of interest.  removes npm mirroring bits.  noticed because i don't think we want to bother updating mirror-update to use the new puppet-nodejs23:30
pabelangerianw: ah, ya. good call23:30
pabelanger+223:31
*** harlowja has quit IRC23:32
*** harlowja has joined #openstack-sprint23:39
jeblairpabelanger: it shouldn't need to populate anything -- should just be using the same database23:39
*** harlowja has quit IRC23:40
*** jkilpatr has quit IRC23:40
*** harlowja has joined #openstack-sprint23:42
clarkbianw: looks like the npmrc thing was already not in use?23:42
clarkbI can't find any reference to that template in that repo23:42
pabelangerjeblair: yah, I see in the logs it finds the database, but trying to see why the tree isn't populated yet23:43
clarkbianw: I've approved the npm mirror cleanup23:43
pabelangerboth list view and preview seem to work23:43
jeblairpabelanger: look at the page source, the tree is there23:43
jeblairit looks like some jquery files are 404ing23:43
pabelangerah, okay23:43
pabelangercool23:43
pabelangerI can look why in a min23:44
jeblairhttp://cacti02.openstack.org/javascript/jquery/jquery.min.js23:44
jeblairiptables 3/3 merged, i'm going to poke at that now23:45
jeblairoh wow, it's already run and updated23:45
ianwclarkb: thanks, i'll keep an eye as i don't think mirror-update has puppeted 100% successfully in a while, should be fine as i guess it just skipped the npm bits23:45
jeblairold and busted: http://paste.openstack.org/raw/628994/23:46
jeblairnew hotness: http://paste.openstack.org/raw/628995/23:46
ianwclarkb: the other one is etherpad.  i noticed it doesn't have an acceptance test ... using what i've learnt i'll add one to try deploying on xenial, and we can start with that ...23:47
jeblairthe iptables stuff seems sane.  i think we're good to go there.23:47
*** harlowja has quit IRC23:50
clarkbjeblair: pabelanger ianw https://review.openstack.org/528101 should pass testing now, they are ready for review if the graphite rules came out good23:50
clarkboh jeblair just said it looks sane so ya I think we are ready to review and possibly merge that stack23:50
jeblairclarkb: oh thanks, i was just looking into that error, will refresh23:51
jeblairclarkb: that seems like a layer violation but i'm fine not fixing it now :)23:52
clarkbjeblair: ya it is preexisting...23:52
jeblair(the reference of elasticsearch_nodes from within logstash_worker)23:52
jeblair+223:53
jeblairand i +3d the parent23:53
clarkbthanks23:54
jeblairpabelanger: be sure to update the base firewall rule for the new cacti server23:55
jeblairit looks like it's not getting any data from any hosts right now23:55
pabelangerjeblair: yah, 528122 should do that23:56
