2020-09-24 §
07:49 <godog> roll restart logstash codfw, gc death [production]
01:25 <ryankemper> Root cause of sigkill of `elasticsearch_5@production-logstash-eqiad.service` appears to be OOMKill of the java process: `Killed process 1775 (java) total-vm:8016136kB, anon-rss:4888232kB, file-rss:0kB, shmem-rss:0kB`. Service appears to have restarted itself and is healthy again [production]
01:21 <ryankemper> Observed that `elasticsearch_5@production-logstash-eqiad.service` is in a `failed` state since `Thu 2020-09-24 00:53:53 UTC`; appears the process received a SIGKILL - not sure why [production]
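The SIGKILL noted above is normally confirmed from the unit state and the kernel log; a minimal sketch of that check, assuming standard systemd/journald tooling on the host (unit name taken from the entry above):

    # show the failed unit and the signal it exited on
    systemctl status elasticsearch_5@production-logstash-eqiad.service
    # look for the kernel OOM-killer record that matches the java PID
    journalctl -k --since "2020-09-24" | grep -iE 'killed process|out of memory'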
2020-09-14 §
21:30 <cdanis> T257527 ✔️ cdanis@cumin1001.eqiad.wmnet ~ 🕠🍺 sudo cumin 'R:Class ~ "(?i)profile::logstash::collector7"' 'enable-puppet "cdanis rolling out Ifa3c68e4"' [production]
21:24 <cdanis> T257527 ✔️ cdanis@cumin1001.eqiad.wmnet ~ 🕠🍺 sudo cumin 'R:Class ~ "(?i)profile::logstash::collector7"' 'disable-puppet "cdanis rolling out Ifa3c68e4"' [production]
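The two entries above show the usual cumin pattern for gating a config rollout: disable Puppet on the affected hosts, merge and verify the change, then re-enable. A sketch of the same flow (selector and message copied from the log entries):

    sudo cumin 'R:Class ~ "(?i)profile::logstash::collector7"' 'disable-puppet "cdanis rolling out Ifa3c68e4"'
    # ...merge the change and verify on one host...
    sudo cumin 'R:Class ~ "(?i)profile::logstash::collector7"' 'enable-puppet "cdanis rolling out Ifa3c68e4"'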
2020-09-09 §
10:28 <volans> restarting ferm on failed hosts: an-test-master1001.eqiad.wmnet,an-worker1116.eqiad.wmnet,db[1075,1101,1116].eqiad.wmnet,labstore1007.wikimedia.org,logstash[1025,1030].eqiad.wmnet leftover from yesterday network issue [production]
2020-09-08 §
16:34 <herron> increased elk5 logstash JVM heaps to 2g (to help decrease kafka-logging consumer lag) [production]
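Assuming the stock Logstash jvm.options layout, the 2g heap bump above amounts to setting the initial and maximum heap flags (path and exact values are illustrative):

    # /etc/logstash/jvm.options (fragment)
    -Xms2g
    -Xmx2g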
2020-08-28 §
14:22 <moritzm> installing Java security updates on kafka/main and Logstash(5) clusters [production]
2020-08-25 §
17:01 <herron> imported logstash, elasticsearch, and kibana 7.9.0-oss packages into buster-wikimedia thirdparty/elastic79 [production]
16:21 <shdubsh> restart logstash on logstash1007 -- gc duration outlier [production]
2020-08-06 §
20:47 <shdubsh> restart logstash -- pipeline appears stuck [production]
2020-08-05 §
23:02 <shdubsh> logstash in codfw looks stuck -- restarting [production]
15:08 <godog> bounce logstash on logstash100[789] - udp loss reported [production]
2020-07-18 §
21:41 <shdubsh> restart logstash on logstash200[456] [production]
21:14 <shdubsh> bounce logstash on logstash1007 [production]
21:10 <shdubsh> bounce logstash on logstash1008 [production]
21:06 <shdubsh> bounce logstash on logstash1009 [production]
2020-07-07 §
17:59 <herron> imported (logstash|kibana|elasticsearch)-oss-7.8.0 into buster-wikimedia thirdparty/elastic78 [production]
09:34 <godog> bounce logstash on logstash1023 [production]
2020-06-25 §
16:15 <Krinkle> I've deleted a "saved object" visualisation in logstash called "Production Errors & Deployments" which seemed to be corrupt and redirected random logstash dashboards to a management page. Backed up at https://phabricator.wikimedia.org/P11666 (NDA) [production]
13:28 <godog> bounce logstash on logstash1007 [production]
2020-06-22 §
09:31 <godog> roll-restart logstash in codfw/eqiad to apply configuration change [production]
2020-06-19 §
12:14 <godog> delete March indices from logstash 5 eqiad to free up space [production]
10:49 <godog> close April logstash indices on logstash 5 eqiad [production]
10:21 <godog> start closing logstash indices for 2020.03 in elastic 5 eqiad [production]
08:45 <godog> roll restart elasticsearch_5@production-logstash-eqiad [production]
08:15 <godog> roll-restart logstash elk5 for "JVM GC Old generation-s runs" alert [production]
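The close/delete entries from 2020-06-19 above map onto the standard Elasticsearch index APIs; a minimal sketch against the local elk5 node, with the index patterns as assumptions:

    # close the April 2020 indices (keeps data on disk, frees heap)
    curl -XPOST 'http://localhost:9200/logstash-2020.04.*/_close'
    # delete the March 2020 indices to free disk space
    curl -XDELETE 'http://localhost:9200/logstash-2020.03.*'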
2020-06-18 §
09:29 <godog> temp stop logstash on elk7 to test 8 pipeline workers - T255243 [production]
2020-06-17 §
15:28 <godog> temp bump logstash7 workers to 8 and temp stop logstash - T255243 [production]
08:30 <godog> start logstash on logstash7 - T255243 [production]
08:10 <godog> stop logstash temporarily on logstash7 hosts to test increased es shards - T255243 [production]
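The worker-count experiments above (T255243) correspond to Logstash's pipeline.workers setting; a sketch of the temporary bump, assuming the default config location:

    # /etc/logstash/logstash.yml (fragment)
    pipeline.workers: 8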
2020-06-15 §
09:46 <godog> run logstash benchmark on logstash1023 [production]
2020-06-13 §
12:51 <herron> restarted logstash service on logstash1007, logstash1009 [production]
12:33 <godog> bounce logstash on logstash1008, GC death [production]
2020-06-11 §
14:30 <godog> bounce logstash on logstash1009, apparent GC death spiral [production]
2020-05-13 §
07:29 <godog> roll-restart logstash in codfw/eqiad for configuration change [production]
2020-04-09 §
06:43 <XioNoX> confirmed on one host that the change didn't break logstash. Re-enable Puppet on logstash hosts - T244147 [production]
06:36 <XioNoX> disabling puppet on logstash hosts for CR deploy - T244147 [production]
2020-04-08 §
11:48 <mutante> logstash1009 - restarted logstash [production]
2020-03-09 §
14:41 <godog> roll restart logstash in codfw / eqiad - T226986 [production]
2020-03-04 §
19:52 <shdubsh> restart logstash on logstash2005 -- testing field type mismatch mitigation [production]
2020-02-27 §
21:53 <effie> depool mw1262, suspecting it might have overloaded logstash [production]
2020-02-25 §
13:42 <godog> roll-restart logstash in eqiad/codfw - T227080 [production]
2020-02-21 §
11:21 <godog> bounce logstash on logstash1023 - see if it can catch up with elastic7 kafka lag [production]
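Whether the bounced instance catches up is usually judged from Kafka consumer lag; a hedged sketch using the stock Kafka tooling (broker address and consumer group name are placeholders, not taken from the log):

    kafka-consumer-groups --bootstrap-server kafka-logging1001.eqiad.wmnet:9092 \
        --describe --group logstash7-eqiad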
2020-02-19 §
09:49 <akosiaris> T245516. Deploy mathoid chart version 0.0.27, removing logstash gelf configuration [production]
00:10 <niharika29@deploy1001> Synchronized wmf-config/logging.php: Make the logstash and authmanager-statsd Monolog handlers compatible (duration: 01m 04s) [production]
2020-02-03 §
14:44 <moritzm> restarting apache on an-tool*, cloudmetrics*, logstash*, grafana1002 to pick up libidn security update [production]
2020-01-22 §
14:53 <vgutierrez> copied python3-logstash to apt.w.o (buster) - T242093 [production]
2019-12-11 §
09:33 <godog> roll-restart logstash in codfw/eqiad after https://gerrit.wikimedia.org/r/c/operations/puppet/+/556173 [production]
2019-12-03 §
11:31 <godog> refresh kibana fields for logstash-* [production]