2021-08-30
11:48 <jelto@deploy1002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
11:47 <jelto@deploy1002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
11:31 <jelto@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:30 <jelto@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
10:55 <jelto@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
10:53 <jelto@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
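For reference: the admin 'apply' entries above are generated by running helmfile from the deployment host, staged from staging-codfw through staging-eqiad to eqiad. A minimal sketch of the likely invocation, assuming the charts checkout lives under /srv/deployment-charts and that the -e environment matches the cluster tag in brackets; the path and any wrapper script are assumptions, not confirmed by the log:

    # hypothetical reconstruction of the staged rollout above
    cd /srv/deployment-charts/helmfile.d/admin
    helmfile -e staging-codfw apply    # 10:53-10:55
    helmfile -e staging-eqiad apply    # 11:30-11:31
    helmfile -e eqiad apply            # 11:47-11:48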
10:21 <dcausse@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'rdf-streaming-updater' for release 'main'. [production]
09:51 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:46 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:34 <ladsgroup@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:703476|Set $wgIncludejQueryMigrate to false in group0 (T280944)]] (duration: 00m 57s) [production]
09:01 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on maps1006.eqiad.wmnet with reason: Resyncing from master [production]
09:01 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on maps1006.eqiad.wmnet with reason: Resyncing from master [production]
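For reference: the START/END pair above is emitted by the sre.hosts.downtime cookbook run from cumin1001. A sketch of the likely command, assuming conventional duration and reason flags (the exact flag spellings are an assumption based on the logged duration and reason, not taken from this log):

    # hypothetical; flag names depend on the cookbook's argument parser
    sudo cookbook sre.hosts.downtime --hours 5 -r "Resyncing from master" maps1006.eqiad.wmnet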
09:01 <hnowlan@cumin1001> END (FAIL) - Cookbook sre.postgresql.postgres-init (exit_code=99) [production]
09:00 <hnowlan@cumin1001> START - Cookbook sre.postgresql.postgres-init [production]
08:59 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=maps1006.eqiad.wmnet [production]
08:57 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=maps1005.eqiad.wmnet [production]
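For reference: the two conftool actions above record depooling maps1006 and pooling maps1005. A minimal sketch of the command shape that produces such log lines; the selector and set expressions mirror the logged output, but the exact CLI form is an assumption:

    # hypothetical reconstruction of the depool/pool pair
    sudo confctl select 'name=maps1006.eqiad.wmnet' set/pooled=no
    sudo confctl select 'name=maps1005.eqiad.wmnet' set/pooled=yes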
08:57 <godog> +100G to prometheus/global in codfw [production]
08:04 <vgutierrez> pool cp2027 - T289908 [production]
06:53 <elukey> drop an-airflow1001's old airflow logs to free up the almost-full root partition [production]
06:38 <godog> more weight to ms-be20[62-65] - T288458 [production]
05:44 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2110.codfw.wmnet with reason: REIMAGE [production]
05:42 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db2110.codfw.wmnet with reason: REIMAGE [production]
05:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2110 for reimage T288803', diff saved to https://phabricator.wikimedia.org/P17105 and previous config saved to /var/cache/conftool/dbconfig/20210830-052336-marostegui.json [production]
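For reference: a dbctl commit like the one above is normally preceded by a state change that the commit then publishes. A sketch of the likely sequence, assuming dbctl's instance/config subcommands; the subcommand names are an assumption based on common usage, not taken from this log:

    # hypothetical reconstruction of the depool-and-commit
    sudo dbctl instance db2110 depool
    sudo dbctl config commit -m 'Depool db2110 for reimage T288803'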
2021-08-27
16:46 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on maps1005.eqiad.wmnet with reason: Resyncing from master [production]
16:46 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on maps1005.eqiad.wmnet with reason: Resyncing from master [production]
14:50 <akosiaris> stop flink on staging cluster to verify some IOPS starvation issues [production]
14:46 <akosiaris@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'sync'. [production]
14:45 <akosiaris@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'sync'. [production]
14:44 <akosiaris@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'sync'. [production]
14:44 <akosiaris@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'sync'. [production]
14:44 <akosiaris@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'sync'. [production]
14:44 <akosiaris@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'sync'. [production]
14:39 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=maps1005.eqiad.wmnet [production]
14:38 <hnowlan@cumin1001> END (FAIL) - Cookbook sre.postgresql.postgres-init (exit_code=99) [production]
14:37 <hnowlan@cumin1001> START - Cookbook sre.postgresql.postgres-init [production]
14:30 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on maps1005.eqiad.wmnet with reason: Resyncing from master [production]
14:30 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on maps1005.eqiad.wmnet with reason: Resyncing from master [production]
13:48 <dzahn@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
12:49 <mutante> rsynced /srv/org/wikimedia/racktables from miscweb1002 to miscweb2002 (T269746) [production]
12:04 <topranks> removing peering to Wave Division Holdings / AS11404 on cr2-eqord at Equinix Chicago; the AS is no longer on the exchange [production]
10:56 <akosiaris> sudo cumin 'mw*' 'ip ro ls dev docker0 && sysctl net.ipv4.ip_forward=0' to clear up the docker remnants of the dragonfly evaluation. T286054 [production]
10:31 <godog> bounce logstash on logstash1007 [production]
10:22 <elukey> fall codfw ores back to rdb2007 after maintenance [production]
10:18 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host rdb2007.codfw.wmnet [production]