2014-09-22
21:04 <bd808> production-logstash-eqiad healed by restarting elasticsearch on logstash1002 after OOM + split brain [production]
20:52 <bd808> logstash1002 went split-brain from the rest of the logstash elasticsearch cluster; restarting [production]

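The two entries above describe nodes disagreeing about the elected master. A minimal sketch of one way to spot that condition, assuming the stock Elasticsearch /_cluster/state REST API on port 9200; logstash1001 and logstash1002 appear in this log, logstash1003 and everything else is illustrative:

    import json
    from urllib.request import urlopen

    NODES = ["logstash1001", "logstash1002", "logstash1003"]

    masters = {}
    for node in NODES:
        # Each node reports who it currently believes the elected master is.
        with urlopen(f"http://{node}:9200/_cluster/state/master_node", timeout=5) as resp:
            masters[node] = json.load(resp)["master_node"]

    if len(set(masters.values())) > 1:
        print(f"split brain, nodes disagree on master: {masters}")
    else:
        print(f"all nodes agree on master {next(iter(masters.values()))}")
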
2014-09-15
17:02 <bd808> Restarted logstash on logstash1001. I hoped this would fix the dashboards, but it looks like the backing elasticsearch cluster is too sad for them to work at the moment. [production]

2014-09-11
16:19 <bd808> Restarted logstash on logstash1001. Log empty and events not being stored in elasticsearch [production]

2014-09-10
17:10 <bd808> Restarted logstash on logstash1001 [production]

2014-08-22
15:08 <bd808> Still no apache2.log on fluorine or in logstash. Log seems to be available on fenari. [production]

2014-08-21
00:07 <manybubbles> bd808 needs to plan a logstash upgrade soon - let it be logged [production]

2014-08-18
14:31 <bd808> Restarted logstash on logstash1001; event volume was lower than expected [production]

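The entry above, and several below it, restart logstash because event volume merely looked low. A sketch of how such a check could be automated, comparing recent ingest counts against a slightly older window via the standard _search API; host, index pattern, and the 50% threshold are assumptions:

    import json
    from urllib.request import Request, urlopen

    ES = "http://localhost:9200"

    def count(gte, lte):
        # size:0 search returns only the hit count for the time range.
        q = json.dumps({"size": 0,
                        "query": {"range": {"@timestamp": {"gte": gte, "lte": lte}}}})
        req = Request(ES + "/logstash-*/_search", data=q.encode(),
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            return json.load(resp)["hits"]["total"]

    recent = count("now-10m", "now")        # events in the last ten minutes
    baseline = count("now-70m", "now-60m")  # same-size window an hour earlier
    if baseline and recent < 0.5 * baseline:
        print(f"volume drop: {recent} events vs baseline {baseline}")
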
2014-07-29
16:10 <bd808> logstash log event volume up after restart [production]
16:09 <bd808> restarted logstash on logstash1001.eqiad.wmnet; log volume looked to be down from expected levels [production]

2014-07-15
20:12 <bd808> log volume up after logstash restart [production]
20:10 <bd808> restarted logstash on logstash1001; log volume looked to be down from "normal" [production]

2014-07-07
15:49 <bd808> Logstash event volume looks better after restart. Probably related to bug 63490. [production]
15:33 <bd808> Restarted logstash on logstash1001 because log volume looked lower than I thought it should be. [production]

2014-06-19
23:14 <bd808> Restarted logstash service on logstash1001 [production]

2014-06-12
17:41 <ottomata> restarting elasticsearch on logstash servers [production]

2014-05-28
18:06 <bd808|deploy> Restarted logstash on logstash1001; log event volume suspiciously low for the last ~35 minutes [production]

2014-05-23
17:02 <bd808> Starting rolling update of elasticsearch for logstash cluster [production]

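For a rolling update like the one logged above, the usual Elasticsearch 1.x pattern is to pin shards in place, restart one node, and wait for recovery before moving on. A hedged sketch only; the node list, SSH access, and service name are assumptions:

    import json
    import subprocess
    from urllib.request import Request, urlopen

    ES = "http://localhost:9200"

    def set_allocation(value):
        body = json.dumps({"transient": {"cluster.routing.allocation.enable": value}})
        req = Request(ES + "/_cluster/settings", data=body.encode(), method="PUT",
                      headers={"Content-Type": "application/json"})
        urlopen(req).read()

    def wait_for(status):
        # The health API blocks until the cluster reaches the requested status.
        urlopen(f"{ES}/_cluster/health?wait_for_status={status}&timeout=30m").read()

    for node in ["logstash1001", "logstash1002", "logstash1003"]:
        set_allocation("none")   # stop shard shuffling during the restart
        subprocess.check_call(["ssh", node, "sudo service elasticsearch restart"])
        wait_for("yellow")       # node has rejoined
        set_allocation("all")    # allow shards to recover
        wait_for("green")        # fully recovered before touching the next node
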
2014-05-19
19:16 <bd808> Added display of exception-json events to fatalmonitor logstash dashboard [production]

2014-05-14
20:32 <bd808> Restarting logstash on logstash1001.eqiad.wmnet due to missing messages from some (all?) logs [production]

2014-05-01
23:43 <bd808> Restarted logstash on logstash1001; MaxSem noticed that many recursion-guard logs were not being completely reassembled and the JVM had one CPU maxed out. [production]

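For a JVM pegging a single core, as in the entry above, a common triage step is to map the busiest native thread to a jstack frame. A sketch assuming the logstash JVM is findable via pgrep and that jstack is installed:

    import subprocess

    pid = subprocess.check_output(["pgrep", "-f", "logstash"], text=True).split()[0]
    # List the JVM's threads with their CPU usage and pick the hottest one.
    threads = subprocess.check_output(
        ["ps", "-L", "-o", "lwp,pcpu", "--no-headers", "-p", pid], text=True)
    tid, _ = max((line.split() for line in threads.splitlines()),
                 key=lambda f: float(f[1]))
    nid = hex(int(tid))  # jstack reports native thread ids in hex ("nid=0x...")
    for frame in subprocess.check_output(["jstack", pid], text=True).split("\n\n"):
        if nid in frame:
            print(frame)  # the stack of the thread burning the CPU
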
2014-03-31
16:11 <bd808> Upgraded kibana on logstash cluster to e317bc663495d0172339a4d4ace9c2a580ceed45 [production]
14:28 <bd808> Started logstash on logstash1001 [production]
14:13 <bd808> Stopped logstash on logstash1001 [production]

2014-03-20
15:00 <bd808> logstash stopped ingesting logs at 2014-03-19T22:37:54.000Z. [production]
14:57 <bd808> restarted logstash on logstash1001.eqiad.wmnet [production]

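A quick way to pin down when ingestion stalled, as in the 15:00 entry above, is to ask elasticsearch for the newest stored event. A sketch assuming the standard _search API and a logstash-* index pattern:

    import json
    from urllib.request import Request, urlopen

    # Fetch the single most recent event by @timestamp.
    q = json.dumps({"size": 1, "sort": [{"@timestamp": {"order": "desc"}}]})
    req = Request("http://localhost:9200/logstash-*/_search", data=q.encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        hits = json.load(resp)["hits"]["hits"]
    print("newest stored event:",
          hits[0]["_source"]["@timestamp"] if hits else "none")
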
2014-03-10
22:18 <bd808> Two instances of logstash were running on logstash1001; killed both and started service again [production]
21:55 <bd808> Restarted logstash on logstash1001; new events flowing in again now [production]
21:47 <bd808> ganglia monitoring for elasticsearch on logstash cluster seems broken. Caused by 1.0.x upgrade having not happened there yet? [production]
21:27 <bd808> No new data in logstash since 14:56Z. Bryan will investigate. [production]

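A sketch of a guard against the duplicate-process situation in the 22:18 entry above; the pgrep pattern and service name are assumptions:

    import subprocess

    out = subprocess.run(["pgrep", "-f", "logstash"], capture_output=True, text=True)
    pids = out.stdout.split()
    if len(pids) > 1:
        print(f"multiple logstash processes running: {pids}")
        subprocess.run(["sudo", "service", "logstash", "stop"])
        for pid in pids:  # a stray second copy may not be init-managed
            subprocess.run(["sudo", "kill", pid])
        subprocess.run(["sudo", "service", "logstash", "start"])
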
2014-02-24
16:38 <bd808> Logstash elasticsearch split-brain resulted in loss of all logs for 2014-02-24 from 00:00Z to ~16:30Z [production]

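Elasticsearch's usual defense against the kind of split brain logged above is requiring a quorum of master-eligible nodes before electing a master. A sketch assuming a three-node cluster (so quorum is 2); discovery.zen.minimum_master_nodes is dynamically updatable in the 1.x settings API:

    import json
    from urllib.request import Request, urlopen

    # With 3 master-eligible nodes, a lone partitioned node can no longer
    # elect itself master once the quorum is set to 2.
    body = json.dumps({"persistent": {"discovery.zen.minimum_master_nodes": 2}})
    req = Request("http://localhost:9200/_cluster/settings",
                  data=body.encode(), method="PUT",
                  headers={"Content-Type": "application/json"})
    print(urlopen(req).read().decode())
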
2014-02-14
18:24 <bd808> Starting ganglia-monitor on logstash1001. Filed bug 61384 about a problem in elasticsearch_monitoring.py affecting the logstash cluster. [production]