2014-07-29
16:09 <bd808> restarted logstash on logstash1001.eqiad.wmnet; log volume looked to be down from expected levels [production]
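
Several entries in this log restart logstash because "log volume looked to be down"; the log does not record how volume was checked. One plausible check, assuming Logstash's conventional daily logstash-YYYY.MM.DD indices and an Elasticsearch HTTP API reachable on logstash1001 (both assumptions, not facts from the log), is to count recently ingested events. A minimal Python sketch; the five-minute window and 1000-event threshold are placeholders:

    import json
    import urllib.request
    from datetime import datetime, timezone

    # Hypothetical endpoint; the log does not say which host served the ES HTTP API.
    ES = "http://logstash1001.eqiad.wmnet:9200"

    # Logstash conventionally writes to a daily index named logstash-YYYY.MM.DD.
    index = "logstash-" + datetime.now(timezone.utc).strftime("%Y.%m.%d")

    # Count events ingested in the last five minutes.
    query = {"query": {"range": {"@timestamp": {"gte": "now-5m"}}}}
    req = urllib.request.Request(
        f"{ES}/{index}/_count",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    count = json.load(urllib.request.urlopen(req))["count"]

    # The threshold is an arbitrary placeholder, not a value from the log.
    if count < 1000:
        print(f"only {count} events in the last 5 minutes; a logstash restart may be needed")
    else:
        print(f"{count} events in the last 5 minutes; ingestion looks healthy")
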
2014-07-15
20:12 <bd808> log volume up after logstash restart [production]
20:10 <bd808> restarted logstash on logstash1001; log volume looked to be down from "normal" [production]
2014-07-07
15:49 <bd808> Logstash event volume looks better after restart. Probably related to bug 63490. [production]
15:33 <bd808> Restarted logstash on logstash1001 because log volume looked lower than I thought it should be. [production]
2014-06-19
23:14 <bd808> Restarted logstash service on logstash1001 [production]
2014-06-12
17:41 <ottomata> restarting elasticsearch on logstash servers [production]
2014-05-28
18:06 <bd808|deploy> Restarted logstash on logstash1001; log event volume suspiciously low for the last ~35 minutes [production]
2014-05-23
17:02 <bd808> Starting rolling update of elasticsearch for logstash cluster [production]
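
The rolling update procedure itself is not recorded here. A minimal sketch of one common approach, assuming an Elasticsearch 1.x-style HTTP API on port 9200 and hypothetical node names (not taken from the log): pause shard allocation, restart one node, re-enable allocation, and wait for the cluster to return to green before moving on.

    import json
    import time
    import urllib.request

    # Hypothetical node list; the actual logstash cluster membership is not in this log.
    NODES = ["logstash1001", "logstash1002", "logstash1003"]

    def put_setting(host, body):
        # Apply a transient cluster setting via the _cluster/settings API.
        req = urllib.request.Request(
            f"http://{host}:9200/_cluster/settings",
            data=json.dumps(body).encode(),
            method="PUT",
            headers={"Content-Type": "application/json"},
        )
        return urllib.request.urlopen(req).read()

    def wait_for_green(host):
        # Poll _cluster/health until the cluster reports green status again.
        while True:
            with urllib.request.urlopen(f"http://{host}:9200/_cluster/health") as resp:
                if json.load(resp)["status"] == "green":
                    return
            time.sleep(10)

    for node in NODES:
        # Pause shard allocation so the restart does not trigger a full rebalance.
        put_setting(node, {"transient": {"cluster.routing.allocation.enable": "none"}})
        # Restart elasticsearch on the node out of band (ssh, puppet, etc.); elided here.
        input(f"Restart elasticsearch on {node}, then press Enter... ")
        # Re-enable allocation and wait for recovery before touching the next node.
        put_setting(node, {"transient": {"cluster.routing.allocation.enable": "all"}})
        wait_for_green(node)
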
2014-05-19
19:16 <bd808> Added display of exception-json events to fatalmonitor logstash dashboard [production]
2014-05-14
20:32 <bd808> Restarting logstash on logstash1001.eqiad.wmnet due to missing messages from some (all?) logs [production]
2014-05-01
23:43 <bd808> Restarted logstash on logstash1001; MaxSem noticed that many recursion-guard logs were not being completely reassembled and the JVM had one CPU maxed out. [production]
2014-03-31
16:11 <bd808> Upgraded kibana on logstash cluster to e317bc663495d0172339a4d4ace9c2a580ceed45 [production]
14:28 <bd808> Started logstash on logstash1001 [production]
14:13 <bd808> Stopped logstash on logstash1001 [production]
2014-03-20
15:00 <bd808> logstash stopped ingesting logs at 2014-03-19T22:37:54.000Z. [production]
14:57 <bd808> restarted logstash on logstash1001.eqiad.wmnet [production]
2014-03-10
22:18 <bd808> Two instances of logstash were running on logstash1001; killed both and started the service again [production]
21:55 <bd808> Restarted logstash on logstash1001; new events flowing in again now [production]
21:47 <bd808> ganglia monitoring for elasticsearch on logstash cluster seems broken. Caused by the 1.0.x upgrade not having happened there yet? [production]
21:27 <bd808> No new data in logstash since 14:56Z. Bryan will investigate. [production]
2014-02-24
16:38 <bd808> Logstash elasticsearch split-brain resulted in loss of all logs for 2014-02-24 from 00:00Z to ~16:30Z [production]
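
Split-brain in pre-2.x Elasticsearch is typically mitigated by setting discovery.zen.minimum_master_nodes to a quorum of the master-eligible nodes; the log does not say how this incident was resolved or how many nodes the cluster had. A minimal sketch of the quorum arithmetic, assuming a three-node cluster:

    # Quorum rule used by pre-2.x Elasticsearch to avoid split-brain:
    # discovery.zen.minimum_master_nodes = floor(master_eligible / 2) + 1.
    master_eligible = 3  # assumed cluster size; the actual node count is not in the log
    quorum = master_eligible // 2 + 1
    print(f"set discovery.zen.minimum_master_nodes to {quorum} "
          f"for {master_eligible} master-eligible nodes")
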
2014-02-14
18:24 <bd808> Starting ganglia-monitor on logstash1001. Filed bug 61384 about a problem in elasticsearch_monitoring.py affecting the logstash cluster. [production]