2022-01-07
14:52 <btullis> root@aqs1014:~# jmap -dump:live,format=b,file=/srv/cassandra-b/tmp/aqs1014-b-dump202201071450.hprof 4468 [analytics]
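A minimal sketch of taking such a heap dump, assuming the PID is resolved from the systemd unit and that the instance runs as the `cassandra` user (`-dump:live` forces a full GC so only reachable objects are written):
```
# resolve the main PID of the cassandra-b instance (assumed unit name)
pid=$(systemctl show -p MainPID --value cassandra-b.service)
# write a binary dump of live heap objects for offline analysis
sudo -u cassandra jmap -dump:live,format=b,file=/srv/cassandra-b/tmp/dump.hprof "$pid"
```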
2022-01-06
18:02 <btullis> btullis@aqs1010:~$ sudo systemctl restart cassandra-a.service [analytics]
12:22 <btullis> restarting cassandra-a service on aqs1004.eqiad.wmnet in order to troubleshoot logging. [analytics]
11:24 <btullis> restarting cassandra-a service on aqs1010.eqiad.wmnet in order to troubleshoot logging. [analytics]
08:12 <joal> Rerun failed webrequest-load-wf-text-2022-1-6-7 [analytics]
07:58 <joal> Rerun refine_event_sanitized_analytics_immediate missing hours after errors from the past days [analytics]
07:39 <joal> Rerun failed refine_eventlogging_analytics for mobilewikiappiosuserhistory schema, hours 2022-01-05T2[123]:00:00 and 2022-01-06T00:00:00, dropping malformed rows as discussed with schema owner [analytics]
2022-01-05
19:16 <joal> Rerun failed refine_eventlogging_analytics for mobilewikiappiosuserhistory schema, hours 2022-01-04T1[5789]:00:00, dropping malformed rows as discussed with schema owner [analytics]
11:37 <btullis> Upgrading hive on an-test-client1001 in order to test log4j upgrade [analytics]
11:35 <btullis> Upgrading hive packages on an-test-coord1001 to test log4j changes. [analytics]
2022-01-04
10:39 <elukey> restart cassandra-a on aqs1010 (heap fully used, high GC) [analytics]
10:20 <elukey> restart cassandra-a on aqs1015 (heap fully used, high GC) [analytics]
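A hedged sketch of the checks behind these restarts, assuming the per-instance `nodetool-a` wrapper used elsewhere in this log:
```
# confirm the symptom: heap near capacity and heavy GC activity
sudo nodetool-a info | grep -i heap
sudo nodetool-a gcstats
# then restart the affected instance
sudo systemctl restart cassandra-a.service
```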
2022-01-03
18:26 <joal> rerun cassandra-daily-wf-local_group_default_T_mediarequest_per_file-2022-1-1 [analytics]
16:08 <joal> Kill cassandra3-local_group_default_T_mediarequest_per_file-daily-2022-1-1 [analytics]
11:26 <elukey> restart cassandra-b on aqs1015 (instance not responding, probably thrashing) [analytics]
11:16 <elukey> restart cassandra-b on aqs1010 (stuck thrashing) [analytics]
10:34 <elukey> depool aqs1010 (`sudo -i depool` on the node) to allow investigation of the cassandra -b instance [analytics]
10:22 <elukey> powercycle an-worker1114 (CPU soft lockup errors in mgmt console) [analytics]
10:20 <elukey> powercycle an-worker1120 (CPU soft lockup errors in mgmt console) [analytics]
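These powercycles go through the management interface; a sketch using ipmitool, where the mgmt hostname and user are illustrative assumptions (password read from the IPMI_PASSWORD environment variable via `-E`):
```
# power-cycle the host via its BMC; host and user are assumptions for illustration
ipmitool -I lanplus -H an-worker1114.mgmt.eqiad.wmnet -U root -E chassis power cycle
```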
2021-12-22
19:13 <milimetric> Additional context on the last delete message: on an-launcher1002 which is filled up [analytics]
19:12 <milimetric> Marcel and I are deleting files from /tmp older than 60 days [analytics]
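The cleanup described above plausibly amounts to the following; the exact predicates are an assumption:
```
# delete regular files under /tmp not modified in the last 60 days (assumed criteria)
sudo find /tmp -type f -mtime +60 -delete
```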
15:55 <mforns> finished refinery deployment for anomaly detection queries [analytics]
14:54 <mforns> starting refinery deployment for anomaly detection queries [analytics]
2021-12-20
18:59 <mforns> finished deployment of refinery, adding anomaly detection hql for airflow job [analytics]
18:39 <mforns> started to deploy refinery, adding anomaly detection hql for airflow job [analytics]
2021-12-17
12:32 <btullis> Upgraded druid packages, with pool/depool on druid1004 [analytics]
11:20 <btullis> btullis@an-test-druid1001:~$ sudo apt-get install druid-broker druid-common druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
11:18 <btullis> updating reprepro with new druid packages for buster-wikimedia to pick up new log4j jar files [analytics]
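Importing the rebuilt packages into the apt repository generally looks like this with reprepro; the component and package filenames are illustrative assumptions:
```
# import rebuilt druid debs into the buster-wikimedia distribution
sudo reprepro -C main includedeb buster-wikimedia druid-common_*.deb
```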
2021-12-16
11:01 <btullis> btullis@an-test-druid1001:~$ sudo apt-get install druid-broker druid-common druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
11:01 <btullis> upgrading druid on the test cluster with new packages to test log4j changes. [analytics]
2021-12-15
08:51 <joal> Rerun failed cassandra-daily-wf-local_group_default_T_mediarequest_per_file-2021-12-13 after cluster restart [analytics]
07:20 <elukey> elukey@stat1007:~$ sudo systemctl reset-failed product-analytics-movement-metrics [analytics]
2021-12-14
19:02 <milimetric> finished deploying the weekly train as per etherpad [analytics]
18:04 <joal> Rerun failed cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2021-12-13 after cluster reboot [analytics]
17:51 <btullis> rebooting aqs1015 [analytics]
17:25 <btullis> rebooting aqs1013 [analytics]
17:19 <btullis> rebooting aqs1012 [analytics]
16:00 <btullis> rebooting aqs1011 [analytics]
15:53 <btullis> rebooting aqs1010 [analytics]
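Each of these reboots plausibly follows the usual depool/drain/reboot/repool sequence; a sketch assuming the conftool `depool`/`pool` wrappers and the per-instance nodetool wrappers seen elsewhere in this log:
```
sudo -i depool                                  # remove the node from service
sudo nodetool-a drain && sudo nodetool-b drain  # flush memtables, stop accepting writes
sudo reboot
# once both instances report UN in `nodetool-a status`:
sudo -i pool
```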
15:00 <btullis> btullis@aqs1010:~$ sudo nodetool-a repair --full system_auth [analytics]
14:59 <btullis> cassandra@cqlsh> ALTER KEYSPACE "system_auth" WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': '12'}; on aqs1010-a [analytics]
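After raising the replication factor of system_auth, the full repair run above pushes the auth data onto the new replicas. A quick verification step, assuming cqlsh credentials on the instance:
```
# confirm the keyspace now advertises the new replication settings
sudo cqlsh -u cassandra -e 'DESCRIBE KEYSPACE system_auth;'
```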
14:25 <btullis> btullis@aqs1011:~$ sudo systemctl start cassandra-b.service [analytics]
12:44 <joal> Rerun failed cassandra-hourly-wf-local_group_default_T_pageviews_per_project_v2-2021-12-14-10 [analytics]
12:42 <joal> Kill late spark cassandra loading job [analytics]
2021-12-11
10:06 <elukey> kill process 2560 on stat1005 to allow puppet to clean up the related user (offboarded) [analytics]
10:04 <elukey> kill process 2831 on stat1008 to allow puppet to clean up the related user (offboarded) [analytics]
2021-12-09
11:08 <btullis> roll restarting druid historical daemons on analytics cluster T297148 [analytics]
10:46 <btullis> roll restarting druid brokers on analytics cluster [analytics]
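A rolling restart with cumin typically serializes over hosts with a batch size and a sleep between batches; the alias and the 120s sleep here are illustrative assumptions:
```
# restart druid-historical one host at a time, pausing between hosts
sudo cumin -b 1 -s 120 'A:druid-analytics' 'systemctl restart druid-historical'
```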
2021-12-07
20:09 <ottomata> deploy wikistats2 with doc updates [analytics]
2021-12-03
17:36 <razzi> restart aqs-next to pick up new mediawiki snapshot: `razzi@cumin1001:~$ sudo cumin A:aqs-next 'systemctl restart aqs'` [analytics]