2015-07-29
20:11 <urandom> bouncing cassandra on restbase1003 to apply logstash config [production]
20:04 <urandom> bouncing cassandra on restbase1002 to apply logstash config [production]
19:59 <urandom> restarting restbase1001 to apply logstash config [production]
2015-07-28
16:01 <godog> bounce cassandra on xenon to test logstash logging [production]
15:52 <bd808> installed logstash on logstash1002; forced puppet run [production]
2015-07-27
20:54 <bd808> installed libbcprov-java and restarted logstash on logstash1001 [production]
18:38 <bd808> Synchronized wmf-config/InitialiseSettings.php: logstash: change ip address for logstash1001 and logstash1003 (duration: 00m 12s) [production]
2015-07-11
04:21 <bd808> Logstash cluster upgrade complete! Kibana working again [production]
2015-07-08
23:06 <bd808> Restarted logstash on logstash1001; no hhvm input seen for last hour [production]
2015-07-05
22:30 <bd808> Restarted logstash on logstash1001; hung due to OOM errors [production]
2015-07-02
16:40 <bd808> Restarted logstash on logstash1001 due to OOM [production]
2015-06-26
23:57 <bd808> Logstash log ingestion working again after forcing recovery of replicas for logstash-2015.06.26; new logs were being rejected with only a primary shard available [production]
23:54 <bd808> re-enabled allocation on logstash elasticsearch cluster [production]
22:36 <bd808> set indices.recovery.concurrent_streams to 4 on logstash ES cluster [production]
22:36 <godog> set indices.recovery.max_bytes_per_sec to 10mb on logstash ES cluster [production]
22:25 <godog> set indices.recovery.max_bytes_per_sec to 50mb on logstash ES cluster [production]
22:09 <bd808> restarted logstash on logstash1001 [production]
20:10 <bd808> Deleted 4 corrupt indices (logstash-2015.05.30 logstash-2015.05.31 logstash-2015.06.03 logstash-2015.06.06) on logstash1004 [production]
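The recovery-throttle changes logged above (`indices.recovery.max_bytes_per_sec`, `indices.recovery.concurrent_streams`) are applied through Elasticsearch's cluster update settings API as transient settings. A minimal sketch of the payload, assuming a hypothetical cluster endpoint at `localhost:9200` (the values mirror the entries above; transient settings reset on full cluster restart):

```python
import json

# Transient settings as logged above: throttle recovery bandwidth and
# raise the number of concurrent recovery streams per node.
payload = {
    "transient": {
        "indices.recovery.max_bytes_per_sec": "10mb",
        "indices.recovery.concurrent_streams": 4,
    }
}

body = json.dumps(payload)
# The request would be sent along the lines of (hypothetical host):
#   curl -XPUT 'http://localhost:9200/_cluster/settings' -d "$body"
print(body)
```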
2015-06-04
21:27 <bd808> restarted logstash and elasticsearch on logstash100[1-3] to pick up latest jre updates [production]
2015-05-19
23:20 <bd808> Synchronized wmf-config/InitialiseSettings.php: logstash: Exclude jobrunner debug messages (duration: 00m 12s) [production]
2015-05-11
15:05 <manybubbles> Synchronized wmf-config/InitialiseSettings.php: SWAT: send all mediawiki events from all wikis to logstash (duration: 00m 12s) [production]
2015-05-06
15:04 <bd808> Synchronized wmf-config/InitialiseSettings.php: Send group0 + group1 MediaWiki events to logstash {{gerrit|209170}} (duration: 00m 16s) [production]
2015-05-04
01:13 <bd808> Started logstash cluster relocating indices off of logstash100[1-3] to logstash100[4-6] [production]
2015-04-23
22:10 <bd808> Synchronized wmf-config/logging.php: logstash: Fix log level detection (c09014d) (duration: 00m 17s) [production]
2015-04-14
15:48 <bd808> Restarted logstash on logstash1003.eqiad.wmnet; subbu reported missing parsoid log events [production]
2015-02-11
19:36 <subbu> temporarily turn off logging to logstash until logstash issues are resolved. [production]
2015-02-05
17:47 <ori> Synchronized wmf-config/logging.php: Live hack: disable Logstash logging on suspicion that it is acting up (duration: 00m 05s) [production]
00:54 <bd808> truncated redis input queues for logstash on all 3 hosts to see if cluster can keep up now with 3 elasticsearch writer threads [production]
2015-02-03
22:53 <bd808> starting rolling restart of logstash elasticsearch cluster to pick up index.merge.scheduler.max_thread_count puppet change [production]
01:51 <bd808> restarted logstash on logstash1001 [production]
2015-01-25
18:44 <bd808> trimmed Logstash redis input queues to 0 events; dropped ~4M backlogged events [production]
2015-01-24
22:40 <bd808> Emptied logstash redis lists on all 3 hosts [production]
22:27 <bd808> Full restart of logstash elasticsearch cluster [production]
2015-01-23
17:14 <bd808> logstash elasticsearch cluster split brained; logstash1002 thinks it is a lone master [production]
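A split brain like the one above, where a single node elects itself master, is normally prevented on Elasticsearch clusters of this era (1.x, zen discovery) by requiring a quorum of master-eligible nodes before an election can succeed. A hedged sketch of the relevant `elasticsearch.yml` setting, assuming all three logstash100[1-3] nodes are master-eligible:

```yaml
# elasticsearch.yml (per node) -- quorum for a 3-node, all-master-eligible
# cluster: floor(3 / 2) + 1 = 2, so a lone node cannot elect itself master.
discovery.zen.minimum_master_nodes: 2
```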
2015-01-16
23:33 <bd808> ran `LTRIM logstash -50000 9999999` on redis queues to drop ~4M events in backlog [production]
20:17 <bd808> Synchronized wmf-config/InitialiseSettings.php: Allow wgDebugLogGroups to exclude logstash append (e808e690) (duration: 00m 05s) [production]
20:17 <bd808> Synchronized wmf-config/logging.php: Allow wgDebugLogGroups to exclude logstash append (e808e690) (duration: 00m 07s) [production]
18:13 <bd808> document count not changing for logstash-2015.01.16 index [production]
17:59 <bd808> Synchronized wmf-config/logging-labs.php: beta: Allow wgDebugLogGroups to exclude logstash append (03c3ab27) (duration: 00m 06s) [production]
16:48 <bd808> Upgraded elasticsearch and restarted on all logstash nodes [production]
16:43 <bd808> shutdown whole elasticsearch cluster for logstash [production]
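The `LTRIM logstash -50000 9999999` command logged above keeps only a slice of the Redis list: LTRIM retains the elements between the start and stop indices, where negative indices count from the tail and an out-of-range stop is clamped to the last element, so everything older than the newest 50,000 queued events is dropped. A small sketch of those index semantics in Python (no Redis required; the backlog size matches the ~4M events mentioned above):

```python
def ltrim(lst, start, stop):
    """Mimic Redis LTRIM index handling on a plain Python list."""
    n = len(lst)
    # Negative indices count from the end of the list.
    if start < 0:
        start = max(n + start, 0)
    if stop < 0:
        stop = n + stop
    # An out-of-range stop is clamped to the last element.
    stop = min(stop, n - 1)
    if start > stop:
        return []  # Redis leaves an empty list in this case.
    return lst[start:stop + 1]

# A backlog of roughly 4M events, as in the log entry above:
backlog = list(range(4_000_000))
kept = ltrim(backlog, -50_000, 9_999_999)
print(len(kept), kept[0])  # 50000 3950000 -- only the newest 50k survive
```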
2015-01-15
20:04 <bd808> logstash redis queue backlog 384k events and climbing; likely related to the elasticsearch cluster flapping [production]
16:09 <bd808> Deleted 2015-12-* indices from logstash elasticsearch cluster [production]
16:01 <bd808> Elasticsearch cluster for logstash has indices for events dated 2015-12-* again [production]
2015-01-12
16:28 <bd808> deleted 2014-01-* and 2015-12-* indices from logstash elasticsearch cluster [production]
16:13 <bd808> logs on logstash1001 reporting elasticsearch connection errors; restarted logstash service [production]
16:09 <bd808> logstash elasticsearch cluster has strange indices dated 2014-01-* and 2015-12-* again [production]
15:59 <bd808> logstash not showing any events at all since 2015-01-12T13:58:59.728Z [production]
2015-01-08
16:02 <marktraceur> Synchronized wmf-config/logging.php: [SWAT] Honor log sampling and levels for logstash on group0 wikis (duration: 00m 05s) [production]
01:04 <bd808> cleaned up logstash indices dated 2014-01-* and 2015-12-* that look to have been created by some sort of syslog input parsing bug [production]