2021-03-12 §
11:22 <hnowlan> corrected git_server for logstash-logback-encoder, cassandra/twcs and cassandra/metrics-collector on deploy1002 [production]
2021-03-10 §
21:30 <brennen> train status: 1.36.0-wmf.34 (T274938): logstash client error board was set up incorrectly; reverting earlier patch for T277094 and will proceed to group1. [production]
2021-03-01 §
21:30 <shdubsh> completed removal of kafka logging inputs to legacy logstash cluster - T234854 [production]
2021-02-17 §
16:46 <godog> roll-restart logstash to apply ulogd filter - T234565 [production]
2021-01-19 §
13:38 <godog> bounce logstash on logstash1025 to debug unindexable logs [production]
2021-01-05 §
20:04 <mutante> mw1344 - restarted apache2 - it was showing the same "partial results" error as mw1362 - no other appservers are showing up in logstash, but these were #1 and #2 source of errors [production]
2020-12-18 §
16:19 <shdubsh> restart logstash on logstash2004 [production]
2020-12-08 §
13:44 <godog> bounce logstash on logstash1023 [production]
13:44 <godog> bounce logstash on logstash102 [production]
2020-12-03 §
19:57 <shdubsh> restart logstash kafka in codfw - java updates [production]
19:44 <shdubsh> restart logstash kafka in eqiad - java updates [production]
2020-11-25 §
22:34 <shdubsh> beginning rolling restart of logstash cluster - eqiad [production]
17:44 <shdubsh> beginning rolling restart of logstash cluster - codfw [production]
2020-11-19 §
20:12 <herron> upgraded logstash-next to kibana 7.10 [production]
2020-10-13 §
15:02 <godog> bounce logstash on logstash1007, GC death [production]
2020-09-24 §
07:49 <godog> roll restart logstash codfw, gc death [production]
01:25 <ryankemper> Root cause of sigkill of `elasticsearch_5@production-logstash-eqiad.service` appears to be OOMKill of the java process: `Killed process 1775 (java) total-vm:8016136kB, anon-rss:4888232kB, file-rss:0kB, shmem-rss:0kB`. Service appears to have restarted itself and is healthy again [production]
01:21 <ryankemper> Observed that `elasticsearch_5@production-logstash-eqiad.service` is in a `failed` state since `Thu 2020-09-24 00:53:53 UTC`; appears the process received a SIGKILL - not sure why [production]
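(Aside: the resident-set figure in the OOM-kill message above can be pulled out mechanically. A minimal sketch, with the kernel log line embedded verbatim from the entry so it runs anywhere; on a live host the line would come from `journalctl -k` or `dmesg`.)

```shell
# The OOM-kill line from the log entry above, embedded so the sketch is
# self-contained rather than reading a live kernel log.
oom_line='Killed process 1775 (java) total-vm:8016136kB, anon-rss:4888232kB, file-rss:0kB, shmem-rss:0kB'

# Pull out the anon-rss figure (resident anonymous memory when killed).
anon_rss_kb=$(printf '%s\n' "$oom_line" | grep -o 'anon-rss:[0-9]*' | cut -d: -f2)
echo "java process was using ${anon_rss_kb} kB (~$((anon_rss_kb / 1024)) MB) when OOM-killed"
```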
2020-09-14 §
21:30 <cdanis> T257527 ✔️ cdanis@cumin1001.eqiad.wmnet ~ 🕠🍺 sudo cumin 'R:Class ~ "(?i)profile::logstash::collector7"' 'enable-puppet "cdanis rolling out Ifa3c68e4"' [production]
21:24 <cdanis> T257527 ✔️ cdanis@cumin1001.eqiad.wmnet ~ 🕠🍺 sudo cumin 'R:Class ~ "(?i)profile::logstash::collector7"' 'disable-puppet "cdanis rolling out Ifa3c68e4"' [production]
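(Aside: the two entries above show the usual disable-puppet / merge / enable-puppet rollout pattern. A sketch that only builds and prints the cumin invocations rather than running them; `build_cmd` is illustrative, the real commands are the ones logged.)

```shell
# Reason string and cumin host selector taken verbatim from the log entries.
reason='cdanis rolling out Ifa3c68e4'
selector='R:Class ~ "(?i)profile::logstash::collector7"'

build_cmd() {  # $1 = disable-puppet | enable-puppet
  printf "sudo cumin '%s' '%s \"%s\"'" "$selector" "$1" "$reason"
}

build_cmd disable-puppet   # run first: freeze puppet with a documented reason
echo
build_cmd enable-puppet    # run after merging: puppet re-applies with the change
echo
```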
2020-09-09 §
10:28 <volans> restarting ferm on failed hosts: an-test-master1001.eqiad.wmnet,an-worker1116.eqiad.wmnet,db[1075,1101,1116].eqiad.wmnet,labstore1007.wikimedia.org,logstash[1025,1030].eqiad.wmnet leftover from yesterday network issue [production]
2020-09-08 §
16:34 <herron> increased elk5 logstash JVM heaps to 2g (to help decrease kafka-logging consumer lag) [production]
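(Aside: the 2g heap bump above corresponds to setting matching min/max JVM heap flags. A sketch assuming the stock Logstash mechanism of `jvm.options` / the `LS_JAVA_OPTS` environment variable; only the 2g value comes from the log.)

```shell
# Matching -Xms/-Xmx avoids heap resizing pauses; 2g is the value from the log.
HEAP=2g
LS_JAVA_OPTS="-Xms${HEAP} -Xmx${HEAP}"
echo "$LS_JAVA_OPTS"
```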
2020-08-28 §
14:22 <moritzm> installing Java security updates on kafka/main and Logstash(5) clusters [production]
2020-08-25 §
17:01 <herron> imported logstash, elasticsearch, and kibana 7.9.0-oss packages into buster-wikimedia thirdparty/elastic79 [production]
16:21 <shdubsh> restart logstash on logstash1007 -- gc duration outlier [production]
2020-08-06 §
20:47 <shdubsh> restart logstash -- pipeline appears stuck [production]
2020-08-05 §
23:02 <shdubsh> logstash in codfw looks stuck -- restarting [production]
15:08 <godog> bounce logstash on logstash100[789] - udp loss reported [production]
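(Aside: `logstash100[789]` is the usual node-set shorthand for logstash1007-1009. A dry-run sketch that expands it and prints, rather than executes, a restart per host; the `.eqiad.wmnet` suffix and the `logstash` service name are assumptions.)

```shell
# Expand the [789] shorthand into individual hosts; echo keeps this a dry run.
for n in 7 8 9; do
  echo "ssh logstash100${n}.eqiad.wmnet sudo systemctl restart logstash"
done
```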
2020-07-18 §
21:41 <shdubsh> restart logstash on logstash200[456] [production]
21:14 <shdubsh> bounce logstash on logstash1007 [production]
21:10 <shdubsh> bounce logstash on logstash1008 [production]
21:06 <shdubsh> bounce logstash on logstash1009 [production]
2020-07-07 §
17:59 <herron> imported (logstash|kibana|elasticsearch)-oss-7.8.0 into buster-wikimedia thirdparty/elastic78 [production]
09:34 <godog> bounce logstash on logstash1023 [production]
2020-06-25 §
16:15 <Krinkle> I've deleted a "saved object" visualisation in logstash called "Production Errors & Deployments" which seemed to be corrupt and was redirecting random logstash dashboards to a management page. Backed up at https://phabricator.wikimedia.org/P11666 (NDA) [production]
13:28 <godog> bounce logstash on logstash1007 [production]
2020-06-22 §
09:31 <godog> roll-restart logstash in codfw/eqiad to apply configuration change [production]
2020-06-19 §
12:14 <godog> delete march indices from logstash 5 eqiad to free up space [production]
10:49 <godog> close april logstash indices on logstash 5 eqiad [production]
10:21 <godog> start closing logstash indices for 2020.03 in elastic 5 eqiad [production]
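(Aside: closing old daily indices frees heap without deleting data, via the Elasticsearch `_close` API. A dry-run sketch that prints the calls for a few 2020.03 days; the endpoint URL and `logstash-YYYY.MM.DD` index naming are assumptions.)

```shell
# ES_URL is hypothetical; echo keeps this a dry run rather than live calls.
ES_URL='http://localhost:9200'
for day in 01 02 03; do   # 2020.03.01 .. 2020.03.31 in practice
  echo "curl -XPOST ${ES_URL}/logstash-2020.03.${day}/_close"
done
```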
08:45 <godog> roll restart elasticsearch_5@production-logstash-eqiad [production]
08:15 <godog> roll-restart logstash elk5 for "JVM GC Old generation-s runs" alert [production]
2020-06-18 §
09:29 <godog> temp stop logstash on elk7 to test 8 pipeline workers - T255243 [production]
2020-06-17 §
15:28 <godog> temp bump logstash7 workers to 8 and temp stop logstash - T255243 [production]
08:30 <godog> start logstash on logstash7 - T255243 [production]
08:10 <godog> stop logstash temporarily on logstash7 hosts to test increased es shards - T255243 [production]
2020-06-15 §
09:46 <godog> run logstash benchmark on logstash1023 [production]
2020-06-13 §
12:51 <herron> restarted logstash service on logstash1007, logstash1009 [production]
12:33 <godog> bounce logstash on logstash1008, GC death [production]
2020-06-11 §
14:30 <godog> bounce logstash on logstash1009, apparent GC death spiral [production]