2021-11-18
08:27 <topranks> Depool of eqiad seems to be OK: transit/peering/transport links changed BW profile but nothing maxed, total LVS connections steady but have shifted to codfw. Proceeding to reconfigure iBGP policy on cr1-eqiad and cr2-eqiad manually. [production]
08:01 <topranks> Depooling eqiad in authdns to allow for reconfiguration of CR routers on site (T295672) [production]
07:45 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
07:41 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
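The recurring mwdebug 'pinkunicorn' entries are automated helmfile syncs from the deploy host. A minimal sketch of the kind of invocation the deploy tooling wraps; the working directory, environment names and selector are assumptions, not taken from the log:

  # run from the mwdebug deployment directory on deploy1002 (path assumed)
  helmfile -e eqiad --selector name=pinkunicorn sync   # apply the 'pinkunicorn' release in eqiad
  helmfile -e codfw --selector name=pinkunicorn sync   # and in codfw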
07:35 <ladsgroup@deploy1002> Synchronized php-1.38.0-wmf.9/maintenance/migrateRevisionActorTemp.php: Backport: [[gerrit:739636|maintenance: Add waitForReplication and sleep in migrateRevisionActorTemp (T275246)]] (duration: 01m 04s) [production]
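"Synchronized <file>" entries are produced by scap on the deploy host. A minimal sketch of the corresponding command for this backport (the commit message is shortened here):

  # sync a single backported file to the MediaWiki fleet (sketch; run on deploy1002)
  scap sync-file php-1.38.0-wmf.9/maintenance/migrateRevisionActorTemp.php \
      'Backport: [[gerrit:739636|maintenance: Add waitForReplication and sleep in migrateRevisionActorTemp (T275246)]]'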
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 100%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17772 and previous config saved to /var/cache/conftool/dbconfig/20211118-073507-root.json [production]
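The staged '(re)pooling @ N%' entries below reflect gradually raising a replica's pooled percentage with dbctl and committing the resulting config. A minimal sketch of a single step, assuming the usual dbctl CLI (exact flags may differ):

  # raise db1112's pooled percentage, then commit the new config (sketch)
  dbctl instance db1112 pool -p 75
  dbctl config commit -m 'db1112 (re)pooling @ 75%: Repool after HW maintenance'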
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 75%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17771 and previous config saved to /var/cache/conftool/dbconfig/20211118-072004-root.json [production]
07:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove watchlist from s5 eqiad T263127', diff saved to https://phabricator.wikimedia.org/P17770 and previous config saved to /var/cache/conftool/dbconfig/20211118-070620-marostegui.json [production]
07:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 100%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17769 and previous config saved to /var/cache/conftool/dbconfig/20211118-070559-root.json [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 50%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17768 and previous config saved to /var/cache/conftool/dbconfig/20211118-070500-root.json [production]
06:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 75%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17767 and previous config saved to /var/cache/conftool/dbconfig/20211118-065055-root.json [production]
06:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 40%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17766 and previous config saved to /var/cache/conftool/dbconfig/20211118-064957-root.json [production]
06:35 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 25%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17765 and previous config saved to /var/cache/conftool/dbconfig/20211118-063552-root.json [production]
06:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 25%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17764 and previous config saved to /var/cache/conftool/dbconfig/20211118-063453-root.json [production]
06:31 <Amir1> revoked all grants from wikiadmin and gave back an explicit list on db1102:3312 (T249683) [production]
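A minimal sketch of the kind of GRANT surgery described here, run on the database host itself; the privilege list, database name and wikiadmin host pattern are illustrative assumptions, not taken from the log:

  # revoke everything from wikiadmin, then re-grant only an explicit list (sketch)
  sudo mysql -e "REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'wikiadmin'@'10.%';"
  sudo mysql -e "GRANT SELECT, INSERT, UPDATE, DELETE ON \`enwiki\`.* TO 'wikiadmin'@'10.%';"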
06:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1156 (re)pooling @ 10%: After fixing GRANTs', diff saved to https://phabricator.wikimedia.org/P17763 and previous config saved to /var/cache/conftool/dbconfig/20211118-062048-root.json [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 20%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17762 and previous config saved to /var/cache/conftool/dbconfig/20211118-061949-root.json [production]
06:17 <Amir1> revoked all grants from wikiadmin and gave back an explicit list on db1156 (T249683) [production]
06:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 10%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17761 and previous config saved to /var/cache/conftool/dbconfig/20211118-060446-root.json [production]
05:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 5%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17760 and previous config saved to /var/cache/conftool/dbconfig/20211118-054942-root.json [production]
05:47 <marostegui> Upgrade clouddb1014 [production]
05:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 1%: Repool after HW maintenance', diff saved to https://phabricator.wikimedia.org/P17759 and previous config saved to /var/cache/conftool/dbconfig/20211118-053438-root.json [production]
05:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1131 due to network issues (T295952)', diff saved to https://phabricator.wikimedia.org/P17758 and previous config saved to /var/cache/conftool/dbconfig/20211118-050802-ladsgroup.json [production]
04:23 <dzahn@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
02:08 <legoktm@cumin1001> conftool action : set/pooled=no; selector: name=thumbor2006.codfw.wmnet [production]
02:08 <legoktm@cumin1001> conftool action : set/pooled=no; selector: name=thumbor2005.codfw.wmnet [production]
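The two conftool lines above are the audit log of confctl depooling the thumbor hosts from their load-balanced service before maintenance. A minimal sketch of the commands run on the cumin host:

  # depool a host from LVS/pybal via conftool (sketch)
  sudo confctl select 'name=thumbor2005.codfw.wmnet' set/pooled=no
  sudo confctl select 'name=thumbor2006.codfw.wmnet' set/pooled=no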
01:56 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thumbor2006.codfw.wmnet [production]
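START/END pairs like this are emitted automatically by spicerack cookbooks. A minimal sketch of launching the single-host reboot cookbook from the cumin host; additional flags such as a reason string are omitted and may be required:

  # reboot one host with the single-host reboot cookbook (sketch)
  sudo cookbook sre.hosts.reboot-single thumbor2006.codfw.wmnet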
01:48 <legoktm@cumin1001> START - Cookbook sre.hosts.reboot-single for host thumbor2006.codfw.wmnet [production]
01:47 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thumbor2005.codfw.wmnet [production]
01:42 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
01:42 <legoktm@cumin1001> START - Cookbook sre.hosts.reboot-single for host thumbor2005.codfw.wmnet [production]
01:39 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
01:35 <ladsgroup@deploy1002> Synchronized wmf-config/InitialiseSettings.php: NOOP - Config: [[gerrit:739633|Revert "Stop setting wgActorTableSchemaMigrationStage, no longer read in core" (T275246)]] (duration: 01m 04s) [production]
00:54 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thumbor2006.codfw.wmnet with OS stretch [production]
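The reimage entries correspond to the sre.hosts.reimage cookbook reinstalling the thumbor hosts with Debian stretch. A minimal sketch of the invocation; the exact host-name form and any additional required flags are assumptions:

  # reinstall a host with a given OS via the reimage cookbook (sketch)
  sudo cookbook sre.hosts.reimage --os stretch thumbor2006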
00:28 <legoktm@cumin1001> START - Cookbook sre.hosts.reimage for host thumbor2006.codfw.wmnet with OS stretch [production]
00:26 <legoktm@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thumbor2005.codfw.wmnet with OS stretch [production]
00:22 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
00:20 <ryankemper> T290902 Test host looks good, proceeding to rest of fleet `ryankemper@cumin1001:~$ sudo cumin -b 4 '*elastic*' 'sudo run-puppet-agent --force'` [production]
00:18 <urbanecm> UTC late B&C finished [production]
00:18 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
00:18 <ryankemper> T290902 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/739379; running puppet agent on arbitrary elastic host: `ryankemper@elastic1051:~$ sudo run-puppet-agent --force` [production]
00:17 <ryankemper> T290902 Disabling puppet across all elastic*: `ryankemper@cumin1001:~$ sudo cumin '*elastic*' 'sudo disable-puppet "Merging https://gerrit.wikimedia.org/r/c/operations/puppet/+/739379"'` [production]
00:16 <urbanecm@deploy1002> Synchronized wmf-config/CommonSettings.php: 5110fe77bb982cca82c8d474339a2b73d02c8024: Migrate wmfHostnames to wmgHostnames (T45956) (duration: 01m 03s) [production]
00:12 <urbanecm> Purge https://en.wikipedia.org/static/images/project-logos/brwikimedia.png and respective HD variants [production]
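Purging a single static URL from the CDN caches is typically done with the purgeList.php maintenance script. A minimal sketch, assuming the script accepts full URLs on stdin and that the HD variants follow the usual -1.5x/-2x naming:

  # send CDN purge requests for the logo and its HD variants (sketch)
  printf '%s\n' \
    'https://en.wikipedia.org/static/images/project-logos/brwikimedia.png' \
    'https://en.wikipedia.org/static/images/project-logos/brwikimedia-1.5x.png' \
    'https://en.wikipedia.org/static/images/project-logos/brwikimedia-2x.png' \
    | mwscript purgeList.php --wiki=enwiki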
00:08 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
00:08 <urbanecm@deploy1002> Synchronized static/images/project-logos: 59c3fe66a0d140ae21f7269150a256a5e9786b24: Lossless optimization of the brwikimedia logo (duration: 01m 04s) [production]
00:04 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
00:00 <legoktm@cumin1001> START - Cookbook sre.hosts.reimage for host thumbor2005.codfw.wmnet with OS stretch [production]