2021-11-29
10:45 <vgutierrez@cumin1001> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host cp3064.esams.wmnet with OS buster [production]
10:02 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reimage for host cp3064.esams.wmnet with OS buster [production]
10:01 <vgutierrez> depool cp3064 to be reimaged as cache::text_haproxy - T290005 [production]
09:52 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp2041.codfw.wmnet with OS buster [production]
09:52 <vgutierrez> pool cp2041 with HAProxy as TLS terminator - T290005 [production]
09:46 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:42 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:36 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:35 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:34 <moritzm> rolling restart of mediawiki canaries to pick up ICU security updates [production]
09:34 <urbanecm@deploy1002> Synchronized wmf-config/InitialiseSettings.php: NOOP: 3a892860b2e1e2ac7b60fc1c4dbdb2035d6af950: foundationwiki: Do not enable wmgUsePageViewInfo explicitly (duration: 00m 55s) [production]
09:32 <urbanecm> [urbanecm@mwmaint1002 ~]$ mwscript emptyUserGroup.php --wiki=foundationwiki 'inactive' # removing nonexistent group; backup left at P17888 [production]
09:30 <urbanecm@deploy1002> Synchronized wmf-config/InitialiseSettings.php: 786313c06188d5d63700d7e46384ef99a9297b57: foundationwiki: Clear group add/remove declarations (duration: 00m 55s) [production]
09:29 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:28 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
09:27 <urbanecm@deploy1002> Synchronized wmf-config/InitialiseSettings.php: c3f47dc55b67d2b53ec27bb610978ff8165aa6ca: foundationwiki: Disable hard redirects (duration: 00m 57s) [production]
08:59 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reimage for host cp2041.codfw.wmnet with OS buster [production]
08:56 <vgutierrez> depool cp2041 to be reimaged as cache::text_haproxy - T290005 [production]
08:54 <moritzm> installing ICU security updates on buster [production]
08:33 <moritzm> installing bluez security updates [production]
08:26 <moritzm> installing libvpx security updates [production]
08:19 <moritzm> installing libntlm security updates [production]
08:07 <elukey@deploy1002> Finished deploy [ores/deploy@69ed061]: Upgrade of mwparserfromhell - T296563 (duration: 07m 01s) [production]
08:00 <marostegui> Restart db2078 and db1117 [production]
08:00 <elukey@deploy1002> Started deploy [ores/deploy@69ed061]: Upgrade of mwparserfromhell - T296563 [production]
07:31 <elukey@deploy1002> Finished deploy [ores/deploy@69ed061]: Canary upgrade of mwparserfromhell - T296563 (second attempt; git submodules were not updated the first time) (duration: 00m 04s) [production]
07:31 <elukey@deploy1002> Started deploy [ores/deploy@69ed061]: Canary upgrade of mwparserfromhell - T296563 (second attempt; git submodules were not updated the first time) [production]
06:11 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host pc2014.codfw.wmnet with OS bullseye [production]
05:39 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host pc2014.codfw.wmnet with OS bullseye [production]
2021-11-27
19:55 <andrew@deploy1002> Finished deploy [horizon/deploy@6115b3b]: network UI updates for T296548 (duration: 04m 14s) [production]
19:51 <andrew@deploy1002> Started deploy [horizon/deploy@6115b3b]: network UI updates for T296548 [production]
19:47 <andrew@deploy1002> Finished deploy [horizon/deploy@6115b3b]: network UI tests in codfw1dev (duration: 02m 01s) [production]
19:45 <andrew@deploy1002> Started deploy [horizon/deploy@6115b3b]: network UI tests in codfw1dev [production]
12:22 <elukey> drop /var/tmp/core files from ores100[2,4], root partition full [production]
12:10 <elukey> drop /var/tmp/core files from ores1009, root partition full [production]
11:55 <elukey> disable coredumps for ORES celery units (will cause a roll restart of all celeries) - T296563 [production]
11:46 <elukey> drop ores coredumps from ores1008 [production]
09:56 <elukey> powercycle analytics1071, soft lockup stacktraces in the tty [production]
09:51 <elukey> move ores coredump files from /var/cache/tmp to /srv/coredumps on ores100[6,7,8] and ores2003 to free space on the root partition [production]
2021-11-26
16:11 <arnoldokoth> drain kubestage1002 node in prep for decommissioning [production]
16:05 <arnoldokoth> drain kubestage1001 node in prep for decommissioning [production]
15:46 <elukey> move /var/tmp/core/* to /srv/coredumps on ores1008 to free root space [production]
14:30 <jelto@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
14:25 <jelto@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
14:21 <jelto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
13:48 <jelto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
13:46 <jelto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'miscweb' for release 'main'. [production]
13:25 <akosiaris@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]