2018-01-12
10:33 <elukey> reboot analytics1066->69 for kernel updates [analytics]
09:07 <elukey> reboot analytics1063->65 for kernel updates [analytics]
2018-01-11
22:35 <ottomata> restarting kafka-jumbo brokers to apply https://gerrit.wikimedia.org/r/403774 [analytics]
22:04 <ottomata> restarting kafka-jumbo brokers to apply https://gerrit.wikimedia.org/r/#/c/403762/ [analytics]
20:57 <ottomata> restarting kafka-jumbo brokers to apply https://gerrit.wikimedia.org/r/#/c/403753/ [analytics]
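(A note on the broker restarts above: they are done one broker at a time, waiting for the cluster to settle in between. A rough per-broker sketch; the systemd unit name and the zookeeper connect string are assumptions, not taken from the log.)

    # restart the broker (unit name assumed), one host at a time
    sudo systemctl restart kafka
    # before moving to the next broker, wait until no partitions are under-replicated
    kafka-topics.sh --zookeeper conf1004.eqiad.wmnet:2181/kafka/jumbo-eqiad \
        --describe --under-replicated-partitions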
17:37 <joal> Kill manual banner-streaming job to see it restarted by cron [analytics]
17:11 <ottomata> restart kafka on kafka-jumbo1003 [analytics]
17:08 <ottomata> restart kafka on kafka-jumbo1001...something is not right with my certpath change yesterday [analytics]
14:46 <joal> Deploy refinery onto HDFS [analytics]
14:33 <joal> Deploy refinery with Scap [analytics]
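(The two deploy entries above are the usual pair: a scap deploy of the refinery repo, then a sync of its artifacts to HDFS. A sketch under assumed paths; the refinery-deploy-to-hdfs helper and its flag are taken to exist as named.)

    # on the deployment host, from the refinery deploy directory (path assumed)
    cd /srv/deployment/analytics/refinery
    scap deploy
    # then push the refreshed artifacts to HDFS (script name and flag assumed)
    sudo -u hdfs /srv/deployment/analytics/refinery/bin/refinery-deploy-to-hdfs --verbose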
14:07 <joal> Manually restarting banner streaming job to prevent alerting [analytics]
13:23 <joal> Killing banner-streaming job to have it auto-restarted from cron [analytics]
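(Killing the streaming job so cron relaunches it is a plain YARN operation; the grep pattern is an assumption.)

    # find the running banner streaming application and kill it; the cron job restarts it
    yarn application -list | grep -i banner
    yarn application -kill <application_id>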
11:45 <elukey> re-run webrequest-load-wf-text-2018-1-11-8 (failed due to reboots) [analytics]
11:39 <joal> rerun mediacounts-load-wf-2018-1-11-8 [analytics]
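(The two reruns above are Oozie coordinator action reruns; a minimal sketch, with the Oozie server URL, coordinator id and action number as placeholders.)

    # rerun the failed coordinator action by number (URL and ids are placeholders)
    oozie job -oozie http://<oozie-host>:11000/oozie \
        -rerun <coordinator_job_id> -action <action_number>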
10:48 <joal> Restarting banner-streaming job after hadoop nodes reboot [analytics]
10:01 <elukey> reboot analytics1059-61 for kernel updates [analytics]
09:34 <elukey> reboot analytics1055->1058 for kernel updates [analytics]
09:04 <elukey> reboot analytics1051->1054 for kernel updates [analytics]
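(The kernel-update reboots in this section all follow the same per-host pattern; a rough sketch, assuming stock hadoop-yarn-nodemanager / hadoop-hdfs-datanode unit names. The actual drain step is cluster-specific and not shown.)

    # stop the worker daemons before rebooting (draining of running containers assumed done)
    sudo systemctl stop hadoop-yarn-nodemanager hadoop-hdfs-datanode
    sudo reboot
    # once the host is back, confirm the new kernel and that the daemons rejoined
    uname -r
    sudo systemctl status hadoop-yarn-nodemanager hadoop-hdfs-datanode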
2018-01-10
16:56 <elukey> reboot analytics1048->50 for kernel updates [analytics]
16:23 <ottomata> restarting kafka jumbo brokers to apply java.security certpath restrictions [analytics]
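(The java.security change referred to here is the standard certpath restriction knob; the line below is illustrative only, not the content of the gerrit patches above.)

    # in $JAVA_HOME/jre/lib/security/java.security (OpenJDK 8 layout)
    jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024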
11:51 <elukey> re-run webrequest-load-wf-upload-2018-1-10-10 (failed due to reboots) [analytics]
11:27 <elukey> re-run webrequest-load-wf-text-2018-1-10-10 (failed due to reboots) [analytics]
11:26 <elukey> reboot analytics1044->47 for kernel updates [analytics]
11:03 <elukey> reboot analytics1040->43 for kernel updates [analytics]
2018-01-09
16:53 <joal> Rerun pageview-druid-hourly-wf-2018-1-9-13 [analytics]
15:33 <elukey> stop mysql on dbstore1002 as prep step for shutdown (stop all slaves, mysql stop) [analytics]
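(The prep step above maps to stopping every MariaDB replication channel and then the daemon; a sketch, with the service unit name assumed.)

    # on dbstore1002: stop all replication channels, then stop the daemon (unit name assumed)
    sudo mysql -e "STOP ALL SLAVES;"
    sudo systemctl stop mariadb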
15:10 <elukey> reboot analytics1028 (hadoop worker and hdfs journal node) for kernel updates [analytics]
15:00 <elukey> reboot kafka-jumbo1006 for kernel updates [analytics]
14:41 <elukey> reboot kafka-jumbo1005 for kernel updates [analytics]
14:33 <elukey> reboot kafka1023 for kernel updates [analytics]
14:04 <elukey> reboot kafka1022 for kernel updates [analytics]
13:51 <elukey> reboot kafka-jumbo1003 for kernel updates [analytics]
10:08 <elukey> reboot kafka-jumbo1002 for kernel updates [analytics]
09:35 <elukey> reboot kafka1014 for kernel updates [analytics]
2018-01-08
19:07 <milimetric> Deployed refinery and synced to hdfs [analytics]
15:23 <elukey> reboot kafka1013 for kernel updates [analytics]
13:40 <elukey> reboot analytics10[36-39] for kernel updates [analytics]
12:59 <elukey> reboot kafka1012 for kernel updates [analytics]
12:43 <joal> Deploy AQS from tin [analytics]
12:36 <fdans> Deploying AQS [analytics]
12:33 <joal> Update fake-data in cassandra, adding the row needed for top-by-country [analytics]
11:07 <elukey> re-run webrequest-load-wf-text-2018-1-8-8 (failed after some reboots due to kernel updates) [analytics]
10:07 <elukey> drain + reboot analytics1029,1031->1034 for kernel updates [analytics]
2018-01-07
09:01 <elukey> re-enabled puppet on db110[78] - eventlogging_sync restarted on db1108 (analytics-slave) [analytics]
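(Re-enabling puppet after maintenance is the usual agent toggle plus a manual run; standard puppet agent flags.)

    # on db1107 / db1108: re-enable the agent and trigger a run
    sudo puppet agent --enable
    sudo puppet agent --test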
2018-01-06
08:09 <elukey> re-enable eventlogging mysql consumers after database maintenance [analytics]
2018-01-05
13:18 <fdans> deploying AQS [analytics]
2018-01-04
19:54 <joal> Deploying refinery onto hadoop [analytics]
19:45 <joal> Deploy refinery using scap [analytics]
19:38 <joal> Deploy refinery-source using jenkins [analytics]
16:01 <ottomata> killing json_refine_eventlogging_analytics job application_1512469367986_81514, which started yesterday and has not completed (no executors running?). I think the cluster is just too busy: the mw-history job is running... [analytics]
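(Diagnosing and killing the stuck application is a standard YARN CLI exchange; this sketch uses the application id from the entry above.)

    # check the application's state, then kill it if it is not making progress
    yarn application -status application_1512469367986_81514
    yarn application -kill application_1512469367986_81514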