2021-03-22
09:49 <reedy@deploy1002> Synchronized wmf-config/InitialiseSettings-labs.php: Config cleanup (duration: 00m 59s) [production]
09:48 <reedy@deploy1002> Synchronized wmf-config/CommonSettings-labs.php: Config cleanup (duration: 01m 20s) [production]
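(For context: a "Synchronized wmf-config/..." entry like the two above is what scap logs after a config file is synced from the deployment host. A minimal sketch of the usual invocation, with the file path and message taken from the entries above and the working directory an assumption:)

    # on the deployment host, from the MediaWiki staging directory (assumed /srv/mediawiki-staging)
    scap sync-file wmf-config/InitialiseSettings-labs.php 'Config cleanup'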
09:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1142 for schema change', diff saved to https://phabricator.wikimedia.org/P14971 and previous config saved to /var/cache/conftool/dbconfig/20210322-093558-marostegui.json [production]
09:15 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 100%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14970 and previous config saved to /var/cache/conftool/dbconfig/20210322-091534-root.json [production]
09:00 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 75%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14969 and previous config saved to /var/cache/conftool/dbconfig/20210322-090030-root.json [production]
08:45 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 50%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14968 and previous config saved to /var/cache/conftool/dbconfig/20210322-084527-root.json [production]
08:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 25%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14967 and previous config saved to /var/cache/conftool/dbconfig/20210322-083023-root.json [production]
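(The depool/repool entries above follow the usual dbctl pattern: take the replica out of rotation, run the maintenance, then ramp its pooled percentage back up in steps, committing the config after each change. A minimal sketch run on a cumin host, assuming dbctl's instance/config subcommands; exact flag spellings are an assumption, not taken from the log:)

    # depool the replica before the schema change
    dbctl instance db1141 depool
    dbctl config commit -m 'Depool db1141 for schema change'

    # ...run the schema change, then repool gradually...
    for pct in 25 50 75 100; do
        dbctl instance db1141 pool -p "$pct"
        dbctl config commit -m "db1141 (re)pooling @ ${pct}%: Slowly repool db1141"
        sleep 900   # roughly the 15-minute spacing between the entries above
    done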
08:13 <godog> swift eqiad-prod: less weight for ms-be[1019-1026] / more weight to ms-be106[0-3] - T272836 T268435 [production]
08:13 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1158.eqiad.wmnet with reason: REIMAGE [production]
08:11 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1158.eqiad.wmnet with reason: REIMAGE [production]
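(The START/END pair above is emitted by the sre.hosts.downtime cookbook, which silences alerting for a host for a fixed window, here 2:00:00, while it is reimaged. A rough sketch of the kind of invocation that produces these entries, run from a cumin host; the flag names are assumptions, only the cookbook name and host come from the log:)

    # downtime db1158 for 2 hours while it is being reimaged (flag names illustrative)
    sudo cookbook sre.hosts.downtime --hours 2 --reason REIMAGE db1158.eqiad.wmnet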
08:02 <jayme> build and release docker-registry.discovery.wmnet/eventrouter:0.3.0-6, docker-registry.discovery.wmnet/fluent-bit:1.5.3-3, docker-registry.discovery.wmnet/ratelimit:1.5.1-s3 [production]
08:00 <marostegui> Stop MySQL on db1085 to clone db1165 (lag will appear on s6 on wiki replicas) T258361 [production]
08:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1085 to clone db1165', diff saved to https://phabricator.wikimedia.org/P14965 and previous config saved to /var/cache/conftool/dbconfig/20210322-080020-marostegui.json [production]
07:51 <elukey> stop/start mariadb instances on dbstore1004 to reduce buffer pool memory settings - T273865 [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 100%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14964 and previous config saved to /var/cache/conftool/dbconfig/20210322-073747-root.json [production]
07:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 75%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14963 and previous config saved to /var/cache/conftool/dbconfig/20210322-072243-root.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1141 for schema change', diff saved to https://phabricator.wikimedia.org/P14962 and previous config saved to /var/cache/conftool/dbconfig/20210322-071430-marostegui.json [production]
07:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 50%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14961 and previous config saved to /var/cache/conftool/dbconfig/20210322-070740-root.json [production]
06:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 25%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14960 and previous config saved to /var/cache/conftool/dbconfig/20210322-065236-root.json [production]
06:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1084 from dbctl T276302', diff saved to https://phabricator.wikimedia.org/P14959 and previous config saved to /var/cache/conftool/dbconfig/20210322-063732-marostegui.json [production]
06:11 <marostegui> Sanitize db1124 db2094 db1154: taywiki trvwiki mnwwiktionary [production]
04:28 <kartik@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'cxserver' for release 'staging'. [production]
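(The helmfile entry above records a Kubernetes service deployment: helmfile renders the cxserver chart for the staging environment and syncs the 'staging' release. A rough sketch, assuming the deployment-charts layout on the deploy host; the path is an assumption, only the service, environment and 'sync' action come from the log:)

    # on the deployment host; directory assumed, not taken from the log
    cd /srv/deployment-charts/helmfile.d/services/cxserver
    helmfile -e staging sync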
2021-03-19
21:11 <mutante> scandium - stop apache and rerun puppet which fails after reimaging because it tries to run an nginx on port 80 which is already used by apache T268248 [production]
20:31 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on scandium.eqiad.wmnet with reason: REIMAGE [production]
20:29 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on scandium.eqiad.wmnet with reason: REIMAGE [production]
20:15 <mutante> scandium - reimaging with buster [production]
20:14 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on scandium.eqiad.wmnet with reason: reimage [production]
20:14 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on scandium.eqiad.wmnet with reason: reimage [production]
20:11 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2245.codfw.wmnet [production]
19:55 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2245.codfw.wmnet [production]
19:53 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2244.codfw.wmnet [production]
19:53 <legoktm@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host lists1002.wikimedia.org [production]
19:50 <mutante> testreduce1001 - confirmed MariaDB @@datadir is /srv/data/mysql and deleting /var/lib/mysql (T277580) [production]
19:40 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2244.codfw.wmnet [production]
19:39 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw2245.codfw.wmnet [production]
19:39 <legoktm@cumin1001> START - Cookbook sre.ganeti.makevm for new host lists1002.wikimedia.org [production]
19:39 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw2244.codfw.wmnet [production]
19:37 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2252.codfw.wmnet,service=canary [production]
19:37 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2251.codfw.wmnet,service=canary [production]
19:33 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw2252.codfw.wmnet,service=canary [production]
19:33 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw2251.codfw.wmnet,service=canary [production]
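(The "conftool action" entries above are confctl's own log lines: each records one selector plus one set action. Bringing the two new codfw canaries into service and deactivating the hosts being decommissioned would look roughly like this; selectors and values are copied from the entries above, and the command shape assumes the standard confctl select syntax:)

    # set weight, then pool the new canary appservers
    confctl select 'name=mw2251.codfw.wmnet,service=canary' set/weight=1
    confctl select 'name=mw2252.codfw.wmnet,service=canary' set/weight=1
    confctl select 'name=mw2251.codfw.wmnet,service=canary' set/pooled=yes
    confctl select 'name=mw2252.codfw.wmnet,service=canary' set/pooled=yes

    # mark the hosts being decommissioned as inactive
    confctl select 'name=mw2244.codfw.wmnet' set/pooled=inactive
    confctl select 'name=mw2245.codfw.wmnet' set/pooled=inactive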
19:24 <mutante> deploy2002 - re-enabled puppet, reverted patch of scap-sync-master [production]
18:46 <mutante> deploy2002 - disable puppet, copy modified version of scap-master-sync over it that does not --exclude="**/cache/l10n/*.cdb" (for T275826) [production]
16:01 <effie> upgrade memcached on mc-gp200* [production]
12:36 <klausman@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-serve2002.codfw.wmnet with reason: REIMAGE [production]