2021-02-09
20:21 <razzi@cumin1001> END (PASS) - Cookbook sre.druid.roll-restart-workers (exit_code=0) for Druid analytics cluster: Roll restart of Druid's jvm daemons. - razzi@cumin1001 [production]
20:13 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
20:12 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
20:12 <otto@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - otto@cumin1001 [production]
20:11 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
20:11 <otto@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - otto@cumin1001 [production]
20:10 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
20:09 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1299.eqiad.wmnet with reason: REIMAGE [production]
20:08 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
20:07 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1299.eqiad.wmnet with reason: REIMAGE [production]
20:06 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
20:02 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1385.eqiad.wmnet with reason: REIMAGE [production]
20:00 <twentyafterfour> prepping 1.36.0-wmf.30 [production]
20:00 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1382.eqiad.wmnet with reason: REIMAGE [production]
19:58 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1385.eqiad.wmnet with reason: REIMAGE [production]
19:58 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw2263.codfw.wmnet with reason: REIMAGE [production]
19:57 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1382.eqiad.wmnet with reason: REIMAGE [production]
19:56 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw2263.codfw.wmnet with reason: REIMAGE [production]
19:35 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2264.codfw.wmnet [production]
19:35 <razzi@cumin1001> START - Cookbook sre.druid.roll-restart-workers for Druid analytics cluster: Roll restart of Druid's jvm daemons. - razzi@cumin1001 [production]
19:27 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1383.eqiad.wmnet [production]
19:26 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:23 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:21 <ryankemper> T262211 `sudo cumin 'P{relforge*}' 'sudo run-puppet-agent'` on `ryankemper@cumin1001` [production]
19:19 <ryankemper> T262211 Attempting to bring `relforge100[3,4]` into service; merging https://gerrit.wikimedia.org/r/661229 [production]
19:15 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1300.eqiad.wmnet [production]
19:15 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2220.codfw.wmnet [production]
19:10 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:08 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:04 <elukey@cumin1001> END (FAIL) - Cookbook sre.druid.roll-restart-workers (exit_code=99) for Druid analytics cluster: Roll restart of Druid's jvm daemons. - elukey@cumin1001 [production]
19:04 <elukey@cumin1001> START - Cookbook sre.druid.roll-restart-workers for Druid analytics cluster: Roll restart of Druid's jvm daemons. - elukey@cumin1001 [production]
19:02 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:01 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1383.eqiad.wmnet [production]
19:01 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:01 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2264.codfw.wmnet [production]
18:57 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:57 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:46 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:45 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:42 <ryankemper> T267927 [WDQS Data Reload] `sudo cookbook sre.wdqs.data-reload wdqs1010.eqiad.wmnet --reuse-downloaded-dump --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --task-id T267927` on `ryankemper@cumin1001` tmux session `wdqs_data_reload_1010` [production]
18:41 <ryankemper> T267927 [WDQS Data Reload] Small typo in previous SAL log message, see subsequent SAL line for correction: [production]
18:41 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:40 <ryankemper> T267927 [WDQS Data Reload] `sudo cookbook sre.wdqs.data-reload wdqs1010.eqiad.wmnet --reuse-downloaded-dump --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --task-id T267927` on `ryankemper@cumin1001` tmux session `wdqs_data_reload_1009` [production]
18:40 <ryankemper> T267927 [WDQS Data Reload] `sudo cookbook sre.wdqs.data-reload wdqs1009.eqiad.wmnet --reuse-downloaded-dump --reload-data wikidata --skolemize --reason 'T267927: Reload wikidata jnl from fresh dumps' --task-id T267927` on `ryankemper@cumin1001` tmux session `wdqs_data_reload_1009` [production]
18:39 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
18:39 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
18:39 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:37 <ryankemper> T267927 [WDQS Data Reload] Clearing old wikidata journal file to free disk space before beginning data reload:`sudo systemctl status wdqs-blazegraph && sudo systemctl stop wdqs-blazegraph && sudo rm -fv /srv/wdqs/wikidata.jnl && sudo systemctl start wdqs-blazegraph` on `wdqs100[9,10]` [production]
18:37 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1300.eqiad.wmnet [production]
18:37 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2220.codfw.wmnet [production]