2021-02-09
19:02 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:01 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1383.eqiad.wmnet [production]
19:01 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
19:01 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2264.codfw.wmnet [production]
18:57 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:57 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:46 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:45 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:42 <ryankemper> T267927 [WDQS Data Reload] `sudo cookbook sre.wdqs.data-reload wdqs1010.eqiad.wmnet --reuse-downloaded-dump --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --task-id T267927` on `ryankemper@cumin1001` tmux session `wdqs_data_reload_1010` [production]
18:41 <ryankemper> T267927 [WDQS Data Reload] Small typo in previous SAL log message, see subsequent SAL line for correction: [production]
18:41 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:40 <ryankemper> T267927 [WDQS Data Reload] `sudo cookbook sre.wdqs.data-reload wdqs1010.eqiad.wmnet --reuse-downloaded-dump --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --task-id T267927` on `ryankemper@cumin1001` tmux session `wdqs_data_reload_1009` [production]
18:40 <ryankemper> T267927 [WDQS Data Reload] `sudo cookbook sre.wdqs.data-reload wdqs1009.eqiad.wmnet --reuse-downloaded-dump --reload-data wikidata --skolemize --reason 'T267927: Reload wikidata jnl from fresh dumps' --task-id T267927` on `ryankemper@cumin1001` tmux session `wdqs_data_reload_1009` [production]
18:39 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
18:39 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
18:39 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:37 <ryankemper> T267927 [WDQS Data Reload] Clearing old wikidata journal file to free disk space before beginning data reload: `sudo systemctl status wdqs-blazegraph && sudo systemctl stop wdqs-blazegraph && sudo rm -fv /srv/wdqs/wikidata.jnl && sudo systemctl start wdqs-blazegraph` on `wdqs100[9,10]` [production]
18:37 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1300.eqiad.wmnet [production]
18:37 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2220.codfw.wmnet [production]
18:32 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:29 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:21 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh-clients (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
18:20 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.roll-restart-workers (exit_code=0) [production]
18:14 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh-clients for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
17:43 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 20:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
17:43 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 20:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
17:37 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1300.eqiad.wmnet with reason: REIMAGE [production]
17:35 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw2220.codfw.wmnet with reason: REIMAGE [production]
17:35 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1300.eqiad.wmnet with reason: REIMAGE [production]
17:33 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw2220.codfw.wmnet with reason: REIMAGE [production]
17:13 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc-gp1001.eqiad.wmnet [production]
17:07 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc-gp1001.eqiad.wmnet [production]
17:01 <gehel@cumin1001> END (PASS) - Cookbook sre.wdqs.reboot (exit_code=0) [production]
16:47 <hashar@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.29 [production]
16:21 <moritzm> installing wireshark security updates [production]
16:20 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:14 <godog> swift eqiad-prod: decrease weight for SSDs on ms-be[1019-1026] - T272836 [production]
16:11 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
15:59 <volker-e@deploy1001> Finished deploy [design/style-guide@b9b7ee6]: Deploy design/style-guide: b9b7ee6 “Components”: Fix components overview SVG rendering glitch (#439) (duration: 00m 07s) [production]
15:59 <volker-e@deploy1001> Started deploy [design/style-guide@b9b7ee6]: Deploy design/style-guide: b9b7ee6 “Components”: Fix components overview SVG rendering glitch (#439) [production]
15:32 <papaul> power down logstash2035 for relocation [production]
15:23 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
15:21 <andrew@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:15 <papaul> power down mw2220 for maintenance [production]
15:11 <hashar@deploy1001> Synchronized php: group1 wikis to 1.36.0-wmf.29 (duration: 01m 11s) [production]
15:10 <moritzm> readding ganeti5002 to the eqsin Ganeti cluster following mainboard replacement/reinstall T261130 [production]