2021-11-18
09:56 <ema> cp4021: repool w/ single backend experiment enabled T288106 [production]
09:54 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host ganeti-test2002.codfw.wmnet with OS buster [production]
09:49 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
09:49 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
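The paired helmfile entries above (the log runs newest-first, so DONE appears before START) record an admin-chart sync of the ml-serve-eqiad Kubernetes cluster from the deployment host. A minimal sketch of the kind of invocation behind such a pair, assuming the upstream helmfile CLI and that helmfile.d/admin is the admin helmfile in a deployment-charts checkout (WMF wraps this in its own tooling, so the real command may differ):

    # from the deployment-charts checkout on deploy1002 (assumed path)
    helmfile -e ml-serve-eqiad -f helmfile.d/admin sync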
09:41 <ema> cp4021: stop ats-be and clear its cache T288106 [production]
09:35 <ema> cp4021: depool to enable single backend experiment T288106 [production]
09:32 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp1090.eqiad.wmnet with OS buster [production]
09:32 <vgutierrez> pool cp1090 (upload) running HAProxy as TLS terminator - T290005 [production]
09:18 <jayme> systemctl start prune-production-images.service on deneb - T287222 [production]
08:48 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reimage for host cp1090.eqiad.wmnet with OS buster [production]
08:46 <vgutierrez> depool cp1090 to be reimaged as cache::upload_haproxy - T290005 [production]
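The cp1090 entries above (08:46 to 09:32) trace the standard depool, reimage, repool cycle used to convert the host to the cache::upload_haproxy role. A rough sketch of the reimage step, assuming the spicerack cookbook runner available on the cumin hosts (exact flags vary by cookbook version):

    # on cumin1001; -t links the run to the tracking task (assumed flag)
    sudo cookbook sre.hosts.reimage --os buster -t T290005 cp1090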
08:45 <moritzm> installing mariadb-10.3 security updates on buster (as packaged in Debian, not the wmf-internal packages) [production]
08:27 <topranks> De-pool of Eqiad seems to be ok, transit/peering/transport links changed BW profile but nothing maxed, total LVS connections steady but have shifted to codfw. Proceeding to reconfigure iBGP policy on cr1-eqiad and cr2-eqiad manually. [production]
08:01 <topranks> Depooling eqiad in authdns to allow for reconfiguration of CR routers on site (T295672) [production]
07:45 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
07:41 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
07:35 <ladsgroup@deploy1002> Synchronized php-1.38.0-wmf.9/maintenance/migrateRevisionActorTemp.php: Backport: [[gerrit:739636|maintenance: Add waitForReplication and sleep in migrateRevisionActorTemp (T275246)]] (duration: 01m 04s) [production]
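The Synchronized entry above is a scap deployment of a single backported maintenance script. A hedged sketch of the underlying call, assuming scap's sync-file subcommand as used for one-file backports at the time, run from the staging directory on the deploy host:

    # on deploy1002, from /srv/mediawiki-staging (assumed cwd)
    scap sync-file php-1.38.0-wmf.9/maintenance/migrateRevisionActorTemp.php \
        'Backport: [[gerrit:739636|maintenance: Add waitForReplication and sleep in migrateRevisionActorTemp (T275246)]]'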
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 100%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17772 and previous config saved to /var/cache/conftool/dbconfig/20211118-073507-root.json [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 75%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17771 and previous config saved to /var/cache/conftool/dbconfig/20211118-072004-root.json [production]
07:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove watchlist from s5 eqiad T263127', diff saved to https://phabricator.wikimedia.org/P17770 and previous config saved to /var/cache/conftool/dbconfig/20211118-070620-marostegui.json [production]
07:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 100%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17769 and previous config saved to /var/cache/conftool/dbconfig/20211118-070559-root.json [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 50%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17768 and previous config saved to /var/cache/conftool/dbconfig/20211118-070500-root.json [production]
06:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 75%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17767 and previous config saved to /var/cache/conftool/dbconfig/20211118-065055-root.json [production]
06:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 40%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17766 and previous config saved to /var/cache/conftool/dbconfig/20211118-064957-root.json [production]
06:35 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 25%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17765 and previous config saved to /var/cache/conftool/dbconfig/20211118-063552-root.json [production]
06:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 25%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17764 and previous config saved to /var/cache/conftool/dbconfig/20211118-063453-root.json [production]
06:31 <Amir1> revoked all grants from wikiadmin and gave back an explicit list on db1102:3312 (T249683) [production]
06:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 10%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17763 and previous config saved to /var/cache/conftool/dbconfig/20211118-062048-root.json [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 20%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17762 and previous config saved to /var/cache/conftool/dbconfig/20211118-061949-root.json [production]
06:17 <Amir1> revoked all grants from wikiadmin and gave back an explicit list on db1156 (T249683) [production]
06:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 10%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17761 and previous config saved to /var/cache/conftool/dbconfig/20211118-060446-root.json [production]
05:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 5%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17760 and previous config saved to /var/cache/conftool/dbconfig/20211118-054942-root.json [production]
05:47 <marostegui> Upgrade clouddb1014 [production]
05:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 1%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17759 and previous config saved to /var/cache/conftool/dbconfig/20211118-053438-root.json [production]
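The db1112 entries above walk the standard gradual repool after hardware maintenance: the pooled percentage is raised stepwise (1 → 5 → 10 → 20 → 25 → 40 → 50 → 75 → 100) at roughly 15-minute intervals so the replica can warm its caches before taking full load. Each log line corresponds to a dbctl edit plus a committed config; a sketch of a single step, assuming dbctl's instance and config subcommands as documented on Wikitech:

    # pool db1112 at 25% of its configured weight, then publish the change to etcd
    sudo dbctl instance db1112 pool -p 25
    sudo dbctl config commit -m 'db1112 (re)pooling @ 25%: Repool after HW maintenance'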
05:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1131 due to network issues (T295952)', diff saved to https://phabricator.wikimedia.org/P17758 and previous config saved to /var/cache/conftool/dbconfig/20211118-050802-ladsgroup.json [production]
04:23 <dzahn@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'miscweb' for release 'main' . [production]
02:08 <legoktm@cumin1001> conftool action : set/pooled=no; selector: name=thumbor2006.codfw.wmnet [production]
02:08 <legoktm@cumin1001> conftool action : set/pooled=no; selector: name=thumbor2005.codfw.wmnet [production]
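The two conftool entries above mark the thumbor codfw hosts as depooled in etcd, which tells PyBal/LVS to stop routing traffic to them. A minimal equivalent with the confctl CLI, assuming the standard conftool setup on the cumin hosts:

    # depool one backend; repeat with pooled=yes to bring it back
    sudo confctl select 'name=thumbor2005.codfw.wmnet' set/pooled=no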
01:56 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thumbor2006.codfw.wmnet [production]
01:48 <legoktm@cumin1001> START - Cookbook sre.hosts.reboot-single for host thumbor2006.codfw.wmnet [production]
01:47 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thumbor2005.codfw.wmnet [production]
01:42 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
01:42 <legoktm@cumin1001> START - Cookbook sre.hosts.reboot-single for host thumbor2005.codfw.wmnet [production]
01:39 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
01:35 <ladsgroup@deploy1002> Synchronized wmf-config/InitialiseSettings.php: NOOP - Config: [[gerrit:739633|Revert "Stop setting wgActorTableSchemaMigrationStage, no longer read in core" (T275246)]] (duration: 01m 04s) [production]
00:54 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thumbor2006.codfw.wmnet with OS stretch [production]
00:28 <legoktm@cumin1001> START - Cookbook sre.hosts.reimage for host thumbor2006.codfw.wmnet with OS stretch [production]
00:26 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thumbor2005.codfw.wmnet with OS stretch [production]
00:22 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
00:20 <ryankemper> T290902 Test host looks good, proceeding to rest of fleet `ryankemper@cumin1001:~$ sudo cumin -b 4 '*elastic*' 'sudo run-puppet-agent --force'` [production]
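The quoted cumin command fans the puppet run out across every host matching '*elastic*', with -b 4 capping execution at four hosts per batch so a bad change surfaces before it reaches the whole fleet. A variant that also pauses between batches, assuming cumin's -s/--batch-sleep option:

    # 4 hosts per batch, 30s pause between batches (values are illustrative)
    sudo cumin -b 4 -s 30 '*elastic*' 'sudo run-puppet-agent --force'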