2016-02-29 §
18:01 <gehel> elastic2002.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
14:16 <gehel> elastic2001.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
12:16 <gehel> elastic2001.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
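(The node-by-node 1.7.5 upgrades above suggest a rolling restart. A minimal sketch of one rolling-upgrade step using the Elasticsearch 1.x settings API — the exact procedure used is not recorded in the log, and host/package details are assumptions:

    # Disable shard allocation so the cluster doesn't rebalance while the node is down.
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.enable": "none" }
    }'
    sudo service elasticsearch stop
    sudo apt-get install elasticsearch=1.7.5    # illustrative package pin
    sudo service elasticsearch start
    # Re-enable allocation and wait for green before moving to the next node.
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.enable": "all" }
    }'
    curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'
)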
2016-02-25 §
00:57 <bd808> Started crashed Logstash process on logstash1002 (systemd doesn't restart automatically due to T127677) [production]
2016-02-22 §
02:13 <bd808> Logstash process on logstash1002 died from jvm OOM [production]
2016-02-12 §
21:30 <tgr@mira> Synchronized wmf-config/InitialiseSettings.php: T125455: log session-ip channel to logstash (duration: 01m 17s) [production]
2016-02-11 §
06:02 <bd808> Raised logstash mem limit to -Xms512m on logstash1003 [production]
05:40 <bd808> logstash process on logstash1003 flapping; continuing to investigate [production]
05:25 <bd808> Restarted logstash on logstash1003; killed by OOM killer [production]
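(The 2016-02-11 entries above show the usual fix for a flapping, OOM-killed Logstash 1.x process: raising the JVM heap. A sketch, assuming heap flags are set in the environment file read by the init script — the exact file and variable name are assumptions, not confirmed by the log:

    # /etc/default/logstash (illustrative location)
    LS_HEAP_SIZE="512m"    # or pass -Xms512m explicitly via the JVM opts variable
    # Restart so the service picks up the larger heap.
    sudo service logstash restart
)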
2016-01-22 §
13:05 <ori@tin> Synchronized wmf-config/InitialiseSettings.php: If443f3c80: monolog: explicitly declare logstash as debug for sessions (duration: 00m 34s) [production]
2016-01-21 §
01:15 <bd808> Restarted logstash on logstash1003 [production]
01:14 <bd808> Restarted logstash on logstash1002 [production]
2016-01-20 §
23:06 <bd808> Restarted logstash on logstash1001 [production]
22:54 <bd808> no HHVM log events in logstash since 2015-12-31T23:59:44.000Z [production]
22:48 <bd808> HHVM log messages not being recorded in Logstash; bd808 to investigate [production]
2016-01-13 §
19:55 <ostriches> elasticsearch: wikimania2017_content was reporting as missing in logstash, ran updateSearchIndexConfig. messy aliases? Seems to be working again. [production]
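(The script referenced above is CirrusSearch's updateSearchIndexConfig.php maintenance script; on Wikimedia production it would be run roughly as below. The --wiki value is an assumption inferred from the index name in the entry:

    mwscript extensions/CirrusSearch/maintenance/updateSearchIndexConfig.php \
        --wiki=wikimania2017wiki
)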
2015-12-23 §
19:41 <mutante> logstash1002 - started logstash service [production]
2015-12-18 §
11:40 <hashar> logstash: reorganized list of dashboards per sections https://logstash.wikimedia.org/#/dashboard/elasticsearch/default [production]
2015-11-05 §
23:34 <bd808> Decreased replica count of logstash-2015.10.13 and logstash-2015.10.14 to free disk space on cluster [production]
2015-11-02 §
17:09 <bd808> Decreased replica count to 1 for logstash-2015.10.04 thru logstash-2015.10.12 to free cluster disk space; see T117438 [production]
16:30 <bd808> Deleted logstash-2015.10.03 index to free disk space on logstash1004; see T113571 [production]
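(Both disk-reclaiming actions above map to single Elasticsearch index API calls. A sketch against a local node, with index names taken from the entries and the endpoint assumed:

    # Drop the per-index replica count to 1 to reclaim disk (T117438).
    curl -XPUT 'http://localhost:9200/logstash-2015.10.04/_settings' -d '{
      "index": { "number_of_replicas": 1 }
    }'
    # Delete an old daily index outright (T113571).
    curl -XDELETE 'http://localhost:9200/logstash-2015.10.03/'
)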
2015-10-30 §
00:01 <bd808> Restarted logstash on logstash1003 again. The first try apparently didn't take [production]
2015-10-29 §
23:32 <bd808> Restarted logstash on logstash1003; died with OOM error [production]
2015-10-22 §
19:27 <bd808> Forced ELK Elasticsearch to allocate replica of logstash-2015.10.22 shard 0 on logstash1004 [production]
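(Forcing allocation of an unassigned replica onto a named node corresponds to the Elasticsearch 1.x cluster reroute API with an allocate command. A sketch using the index, shard, and node from the entry above:

    curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
      "commands": [ {
        "allocate": {
          "index": "logstash-2015.10.22",
          "shard": 0,
          "node": "logstash1004",
          "allow_primary": false
        }
      } ]
    }'
)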
2015-09-24 §
05:33 <yuvipanda> deleted logstash indexes for 08/27 and 28 too [production]
05:31 <yuvipanda> deleted indexes for 08/14, 15, 25, 26 on logstash [production]
03:02 <yuvipanda> jstack dumped logstash output onto /home/yuvipanda/stack on logstash1001 since strace seems useless [production]
02:51 <yuvipanda> restarted logstash on logstash1002 [production]
02:16 <Krinkle> Kibana/Logstash outage. Zero events received after 2015-09-23T23:59:59.999Z. [production]
01:53 <mutante> started logstash on logstash1002 again [production]
00:16 <mutante> started logstash on logstash1002 [production]
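(The jstack dump at 03:02 above is the standard way to capture JVM thread state from a wedged Java process. A sketch — the PID discovery is illustrative, not the exact command used:

    # Grab a thread dump (with lock info) from the running Logstash JVM.
    PID=$(pgrep -f 'logstash' | head -1)
    jstack -l "$PID" > /home/yuvipanda/stack
)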
2015-09-21 §
08:21 <_joe_> restarted the logstash agent on logstash1003, OOM'd [production]
2015-09-18 §
12:21 <godog> restart logstash on logstash1001, OOM in logs [production]
00:14 <ori> restarted logstash on logstash1001 [production]
2015-08-17 §
20:53 <bd808> T109369: Restarted logstash on logstash1003; parsoid gelf events not being recorded since 2015-08-15 [production]
2015-08-13 §
20:46 <mutante> killed ganglia aggregator for logstash on carbon [production]
20:40 <bd808> ganglia not getting elasticsearch jvm data for logstash cluster since 2015-08-13T12:00 -- https://ganglia.wikimedia.org/latest/?c=Logstash+cluster+eqiad&&m=es_heap_used [production]
2015-08-11 §
18:21 <bd808> logstash log event volume back to normal levels following elasticsearch upgrade [production]
18:06 <bd808> logstash cluster recovered after upgrade of elasticsearch on logstash1006 [production]
18:03 <bd808> upgraded elasticsearch to 1.7.1 on logstash1006; logstash-2015.08.11 shard recovering [production]
18:01 <bd808> logstash cluster recovered after upgrade of elasticsearch on logstash1005 [production]
17:43 <bd808> log event volume in logstash dropped dramatically again; seems to correlate with final recovery of logstash-2015.08.11 shard [production]
17:29 <bd808> upgraded elasticsearch to 1.7.1 on logstash1005; logstash-2015.08.11 shard recovering [production]
17:27 <bd808> logstash event volume recovered after restarting all 3 logstash services [production]
17:14 <bd808> log event volume in logstash dropped dramatically at 16:49; investigating [production]
17:13 <bd808> logstash cluster recovered after upgrade of elasticsearch on logstash1004 [production]
16:42 <bd808> upgraded elasticsearch to 1.7.1 on logstash1004; logstash-2015.08.11 shard recovering [production]
16:23 <bd808> logstash upgrade on logstash1003 complete [production]
16:20 <bd808> logstash upgrade on logstash1002 complete [production]
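(The recurring "shard recovering" notes during this 1.7.1 rolling upgrade can be tracked with the cluster health and cat-recovery APIs. A sketch, localhost endpoint assumed:

    # Overall cluster status plus per-shard recovery progress.
    curl 'http://localhost:9200/_cluster/health?pretty'
    curl 'http://localhost:9200/_cat/recovery?v'
    # Block until green before proceeding to the next node.
    curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=15m'
)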