2021-11-18
12:15 |
<kartik@deploy1002> |
Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:739550|Enable Tamil (ta) Section Translation in test wiki (T294223)]] (duration: 01m 05s) |
[production] |
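Entries of the form "Synchronized <file>: <message> (duration: ...)" are written by scap when a file is synced out from the deployment host. A minimal sketch of the invocation that would produce the entry above; the command form is an assumption, while the file path and message are taken from the log:

  # Sketch: sync a single config file to the cluster; scap logs the "Synchronized ..." line on completion
  kartik@deploy1002:~$ scap sync-file wmf-config/InitialiseSettings.php 'Config: Enable Tamil (ta) Section Translation in test wiki (T294223)'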
12:06 |
<mmandere@cumin1001> |
END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs6003.drmrs.wmnet with OS buster |
[production] |
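The sre.hosts.reimage START/END pairs in this log come from the spicerack cookbook runner on the cumin hosts. A rough sketch of the invocation behind one such pair; only the cookbook name, host, and OS come from the log, and the flag names and exact form are assumptions:

  # Sketch only: flag names assumed, not copied from the log
  mmandere@cumin1001:~$ sudo cookbook sre.hosts.reimage --os buster lvs6003.drmrs.wmnet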
11:45 |
<mmandere@cumin1001> |
END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs6002.drmrs.wmnet with OS buster |
[production] |
11:29 |
<arturo> |
aborrero@apt1001:~$ sudo -i reprepro export |
[production] |
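This export, together with the processincoming runs logged below at 11:26 and 11:08, is the usual apt1001 import flow: packages dropped into /srv/wikimedia/incoming/ are imported with processincoming, then the repository indices are republished with export. Both commands are taken verbatim from the log; only the ordering note is added:

  # Import pending .changes files from the incoming queue, then regenerate the published indices
  aborrero@apt1001:~$ sudo -i reprepro processincoming default
  aborrero@apt1001:~$ sudo -i reprepro export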
11:27 |
<mmandere@cumin1001> |
START - Cookbook sre.hosts.reimage for host lvs6003.drmrs.wmnet with OS buster |
[production] |
11:26 |
<arturo> |
aborrero@apt1001:~$ sudo -i reprepro processincoming default /srv/wikimedia/incoming/python-flask-keystone_0.2~git20201012.b5cd4da-1_amd64.changes (T295234) |
[production] |
11:08 |
<arturo> |
aborrero@apt1001:~$ sudo -i reprepro processincoming default |
[production] |
11:08 |
<jmm@cumin2002> |
END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti-test2002.codfw.wmnet |
[production] |
11:07 |
<arturo> |
added python-flask-oslolog_0.1~git20201012.7803a46-1 to bullseye-wikimedia (T295234) |
[production] |
11:06 |
<arturo> |
aborrero@apt1001:~ $ for i in $(ll /srv/wikimedia/incoming/ | grep aborrero | awk -F' ' '{print $NF}') ; do rm /srv/wikimedia/incoming/$i ; done |
[production] |
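The one-liner above parses ls output through the interactive ll alias, which is fragile outside an interactive shell. A safer equivalent, shown only as a sketch and not what was actually run, removes just the caller's files from the incoming directory with find:

  # Hypothetical alternative to the logged loop; not what was executed
  aborrero@apt1001:~$ find /srv/wikimedia/incoming/ -maxdepth 1 -user aborrero -type f -delete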
11:05 |
<mmandere@cumin1001> |
START - Cookbook sre.hosts.reimage for host lvs6002.drmrs.wmnet with OS buster |
[production] |
11:02 |
<jmm@cumin2002> |
START - Cookbook sre.hosts.reboot-single for host ganeti-test2002.codfw.wmnet |
[production] |
10:57 |
<mmandere@cumin1001> |
END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs6001.drmrs.wmnet with OS buster |
[production] |
10:38 |
<elukey@deploy1002> |
helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. |
[production] |
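The paired START/DONE helmfile lines here (and again at 10:08 and 09:49) are written by the deployment tooling on deploy1002 when the admin helmfile for the ml-serve-eqiad cluster is synced. They correspond roughly to a direct invocation of the following form; the environment and file flags are assumptions, while the helmfile.d/admin path and environment name come from the log:

  # Sketch: sync the admin charts against the ml-serve-eqiad environment
  elukey@deploy1002:~$ helmfile -e ml-serve-eqiad -f helmfile.d/admin sync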
10:38 |
<elukey@deploy1002> |
helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. |
[production] |
10:21 |
<jmm@cumin2002> |
END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ganeti-test2002.codfw.wmnet with OS buster |
[production] |
10:17 |
<mmandere@cumin1001> |
START - Cookbook sre.hosts.reimage for host lvs6001.drmrs.wmnet with OS buster |
[production] |
10:12 |
<topranks> |
Re-pooling eqiad in DNS after completing iBGP policy changes on cr1-eqiad and cr2-eqiad T295672 |
[production] |
10:08 |
<elukey@deploy1002> |
helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. |
[production] |
10:08 |
<elukey@deploy1002> |
helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. |
[production] |
10:01 |
<moritzm> |
updating perf on buster hosts |
[production] |
10:00 |
<topranks> |
Re-enabling Equinix IXP port on cr1-eqiad following iBGP changes to address T295650 |
[production] |
09:56 |
<ema> |
cp4021: repool w/ single backend experiment enabled T288106 |
[production] |
09:54 |
<jmm@cumin2002> |
START - Cookbook sre.hosts.reimage for host ganeti-test2002.codfw.wmnet with OS buster |
[production] |
09:49 |
<elukey@deploy1002> |
helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. |
[production] |
09:49 |
<elukey@deploy1002> |
helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. |
[production] |
09:41 |
<ema> |
cp4021: stop ats-be and clear its cache T288106 |
[production] |
09:35 |
<ema> |
cp4021: depool to enable single backend experiment T288106 |
[production] |
09:32 |
<vgutierrez@cumin1001> |
END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp1090.eqiad.wmnet with OS buster |
[production] |
09:32 |
<vgutierrez> |
pool cp1090 (upload) running HAProxy as TLS terminator - T290005 |
[production] |
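The pool message above (and the matching depool at 08:46) reflects a conftool state change for cp1090. A minimal sketch of how that is typically done with confctl; the selector syntax is an assumption, and only the host and the pool/depool intent come from the log:

  # Sketch: mark cp1090 as pooled; set/pooled=no would depool it again
  vgutierrez@cumin1001:~$ sudo confctl select 'name=cp1090.eqiad.wmnet' set/pooled=yes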
09:18 |
<jayme> |
systemctl start prune-production-images.service on deneb - T287222 |
[production] |
08:48 |
<vgutierrez@cumin1001> |
START - Cookbook sre.hosts.reimage for host cp1090.eqiad.wmnet with OS buster |
[production] |
08:46 |
<vgutierrez> |
depool cp1090 to be reimaged as cache::upload_haproxy - T290005 |
[production] |
08:45 |
<moritzm> |
installing mariadb-10.3 security updates on buster (as packaged in Debian, not the wmf-internal packages) |
[production] |
08:27 |
<topranks> |
Depool of eqiad seems to be OK: transit/peering/transport links changed bandwidth profile but nothing maxed out; total LVS connections are steady but have shifted to codfw. Proceeding to reconfigure iBGP policy on cr1-eqiad and cr2-eqiad manually. |
[production] |
08:01 |
<topranks> |
Depooling eqiad in authdns to allow for reconfiguration of CR routers on site (T295672) |
[production] |
07:45 |
<mwdebug-deploy@deploy1002> |
helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. |
[production] |
07:41 |
<mwdebug-deploy@deploy1002> |
helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. |
[production] |
07:35 |
<ladsgroup@deploy1002> |
Synchronized php-1.38.0-wmf.9/maintenance/migrateRevisionActorTemp.php: Backport: [[gerrit:739636|maintenance: Add waitForReplication and sleep in migrateRevisionActorTemp (T275246)]] (duration: 01m 04s) |
[production] |
07:35 |
<marostegui@cumin1001> |
dbctl commit (dc=all): 'db1112 (re)pooling @ 100%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17772 and previous config saved to /var/cache/conftool/dbconfig/20211118-073507-root.json |
[production] |
07:20 |
<marostegui@cumin1001> |
dbctl commit (dc=all): 'db1112 (re)pooling @ 75%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17771 and previous config saved to /var/cache/conftool/dbconfig/20211118-072004-root.json |
[production] |
07:06 |
<marostegui@cumin1001> |
dbctl commit (dc=all): 'Remove watchlist from s5 eqiad T263127', diff saved to https://phabricator.wikimedia.org/P17770 and previous config saved to /var/cache/conftool/dbconfig/20211118-070620-marostegui.json |
[production] |
07:05 |
<ladsgroup@cumin1001> |
dbctl commit (dc=all): 'db1156 (re)pooling @ 100%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17769 and previous config saved to /var/cache/conftool/dbconfig/20211118-070559-root.json |
[production] |
07:05 |
<marostegui@cumin1001> |
dbctl commit (dc=all): 'db1112 (re)pooling @ 50%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17768 and previous config saved to /var/cache/conftool/dbconfig/20211118-070500-root.json |
[production] |
06:50 |
<ladsgroup@cumin1001> |
dbctl commit (dc=all): 'db1156 (re)pooling @ 75%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17767 and previous config saved to /var/cache/conftool/dbconfig/20211118-065055-root.json |
[production] |
06:49 |
<marostegui@cumin1001> |
dbctl commit (dc=all): 'db1112 (re)pooling @ 40%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17766 and previous config saved to /var/cache/conftool/dbconfig/20211118-064957-root.json |
[production] |
06:35 |
<ladsgroup@cumin1001> |
dbctl commit (dc=all): 'db1156 (re)pooling @ 25%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17765 and previous config saved to /var/cache/conftool/dbconfig/20211118-063552-root.json |
[production] |
06:34 |
<marostegui@cumin1001> |
dbctl commit (dc=all): 'db1112 (re)pooling @ 25%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17764 and previous config saved to /var/cache/conftool/dbconfig/20211118-063453-root.json |
[production] |
06:31 |
<Amir1> |
revoked all grants from wikiadmin and gave back an explicit list on db1102:3312 (T249683) |
[production] |
06:20 |
<ladsgroup@cumin1001> |
dbctl commit (dc=all): 'db1156 (re)pooling @ 10%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17763 and previous config saved to /var/cache/conftool/dbconfig/20211118-062048-root.json |
[production] |
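The run of dbctl commits above records a staged repool: db1112 comes back at 25% → 40% → 50% → 75% → 100% and db1156 at 10% → 25% → 75% → 100%, with a committed diff saved at each step. A minimal sketch of a single step using dbctl directly; the ramp itself is normally driven by a helper script, and the flag names here are assumptions:

  # Sketch of one repool step: raise the instance to 25% of its weight, then commit
  marostegui@cumin1001:~$ sudo dbctl instance db1112 pool -p 25
  marostegui@cumin1001:~$ sudo dbctl config commit -m 'db1112 (re)pooling @ 25%: Repool after HW maintenance'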