2022-03-28
ยง
|
11:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1147 (T298556)', diff saved to https://phabricator.wikimedia.org/P23397 and previous config saved to /var/cache/conftool/dbconfig/20220328-112352-marostegui.json [production]
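
The depool entries in this log are the output of dbctl config commits run from a cumin host. A minimal sketch of the commands behind such a log line, assuming the dbctl CLI as documented on Wikitech (exact flag spellings are an assumption, not taken from this log):

```bash
# Hedged sketch: depool a database replica and commit the change.
# Instance name and commit message mirror the log entry above; flags are assumptions.
dbctl instance db1147 depool                         # mark db1147 as depooled in the pending config
dbctl config commit -m "Depooling db1147 (T298556)"  # push the change; dbctl prints the diff/backup paths logged above
```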
11:23 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1147.eqiad.wmnet with reason: Maintenance [production]
11:23 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1147.eqiad.wmnet with reason: Maintenance [production]
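
The START/END pairs above come from the sre.hosts.downtime Spicerack cookbook, which silences alerting for the maintenance window. A minimal sketch of an equivalent manual invocation from a cumin host; the option names are assumptions, so check the cookbook's --help before relying on them:

```bash
# Hedged sketch: downtime db1147 for 8 hours while it is under maintenance.
# --hours/--reason spellings are assumptions based on the durations/reasons shown in the log.
sudo cookbook sre.hosts.downtime --hours 8 --reason "Maintenance" db1147.eqiad.wmnet
```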
11:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314 (T298556)', diff saved to https://phabricator.wikimedia.org/P23396 and previous config saved to /var/cache/conftool/dbconfig/20220328-112345-marostegui.json [production]
11:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314', diff saved to https://phabricator.wikimedia.org/P23395 and previous config saved to /var/cache/conftool/dbconfig/20220328-110839-marostegui.json [production]
10:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314', diff saved to https://phabricator.wikimedia.org/P23394 and previous config saved to /var/cache/conftool/dbconfig/20220328-105333-marostegui.json [production]
10:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314 (T298556)', diff saved to https://phabricator.wikimedia.org/P23393 and previous config saved to /var/cache/conftool/dbconfig/20220328-103828-marostegui.json [production]
10:34 <hashar> contint2001 and contint1001: pruning obsolete branches from the zuul-merger: `sudo -H -u zuul find /srv/zuul/git -type d -name .git -print -execdir git -c url."https://gerrit.wikimedia.org/r/".insteadOf="ssh://jenkins-bot@gerrit.wikimedia.org:29418/" remote prune origin \;` T220606 [releng]
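
The zuul-merger prune one-liner above, unchanged but split out with comments for readability:

```bash
# Same command as logged at 10:34, reformatted only.
sudo -H -u zuul \
  find /srv/zuul/git -type d -name .git -print \
  -execdir git \
    -c url."https://gerrit.wikimedia.org/r/".insteadOf="ssh://jenkins-bot@gerrit.wikimedia.org:29418/" \
    remote prune origin \;
# The one-shot url.<base>.insteadOf config rewrites the jenkins-bot SSH remote to anonymous HTTPS,
# so the prune runs without the SSH key; `git remote prune origin` then drops remote-tracking refs
# for branches already deleted on Gerrit (T220606).
```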
10:29 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1169 (T300775)', diff saved to https://phabricator.wikimedia.org/P23392 and previous config saved to /var/cache/conftool/dbconfig/20220328-102915-marostegui.json [production]
10:29 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
10:29 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
10:25 <hashar> Changed the `Trainsperiment Survey Questions` survey permissions to be open outside of WMF and limited to 1 answer (forcing sign-in) https://docs.google.com/forms/u/0/d/e/1FAIpQLSd0Nc2jGkAGW-5rTiKN2EHWzfw2HeHm13N-ZCw1xUdE3z6woQ/formrestricted [releng]
10:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 75%: After schema change', diff saved to https://phabricator.wikimedia.org/P23391 and previous config saved to /var/cache/conftool/dbconfig/20220328-102014-root.json [production]
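
The '(re)pooling @ N%' commits above (and the 50/25/10% steps further down) record the staged repool pattern: after maintenance, traffic is ramped back in increments rather than all at once. A minimal sketch of such a ramp with dbctl; the `pool -p` syntax, step sizes, and sleep interval are assumptions inferred from the log timestamps, and the real wrapper on the cumin hosts may differ:

```bash
# Hedged sketch of a staged repool: ramp db1169 back to full weight in steps.
for pct in 10 25 50 75 100; do
    dbctl instance db1169 pool -p "$pct"
    dbctl config commit -m "db1169 (re)pooling @ ${pct}%: After schema change"
    sleep 900  # roughly 15 minutes between steps, matching the timestamps in this log
done
```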
10:18 <hashar> contint2001 and contint1001: pruning all git reflog entries from the zuul-merger: `sudo -u zuul find /srv/zuul/git -name .git -type d -execdir git reflog expire --expire=all --all`. They are useless and no longer generated since https://gerrit.wikimedia.org/r/c/operations/puppet/+/757943 [releng]
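
The same reflog-expiry command, reformatted with comments and with the `\;` terminator that find's `-execdir` requires (the logged line omits it):

```bash
# Same operation as logged at 10:18, split for readability.
sudo -u zuul \
  find /srv/zuul/git -name .git -type d \
  -execdir git reflog expire --expire=all --all \;
# Expires every reflog entry in each repository under /srv/zuul/git; the zuul-merger stopped
# writing reflogs with https://gerrit.wikimedia.org/r/c/operations/puppet/+/757943, so the
# remaining entries are dead weight.
```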
10:17 <mmandere> pool cp2033 with HAProxy as TLS termination layer - T290005 [production]
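
Pooling and depooling an edge cache host such as cp2033 goes through conftool. A minimal sketch, assuming the standard confctl selector syntax; the operator may equally have used a host-local pool/depool wrapper, so treat this as illustrative only:

```bash
# Hedged sketch: mark cp2033 as pooled so it receives traffic again after the reimage.
sudo confctl select 'name=cp2033.codfw.wmnet' set/pooled=yes
# The matching depool before the reimage (the 09:11 entry further down) would be set/pooled=no.
```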
10:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1144:3314 (T298556)', diff saved to https://phabricator.wikimedia.org/P23390 and previous config saved to /var/cache/conftool/dbconfig/20220328-101712-marostegui.json [production]
10:17 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
10:17 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
10:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T298556)', diff saved to https://phabricator.wikimedia.org/P23389 and previous config saved to /var/cache/conftool/dbconfig/20220328-101704-marostegui.json [production]
10:13 <mmandere@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp2033.codfw.wmnet with OS buster [production]
10:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 50%: After schema change', diff saved to https://phabricator.wikimedia.org/P23387 and previous config saved to /var/cache/conftool/dbconfig/20220328-100511-root.json [production]
10:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P23386 and previous config saved to /var/cache/conftool/dbconfig/20220328-100159-marostegui.json [production]
09:53 <hashar> Tag Quibble 1.4.5 @ abe16d574 | T291549 [releng]
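
Tagging a release at a specific commit, as in the Quibble entry above; a minimal sketch with plain git, where the tag name format and whether tags are annotated or GPG-signed are assumptions:

```bash
# Hedged sketch: create and publish the 1.4.5 tag at commit abe16d574.
git tag -a -m "Quibble 1.4.5" 1.4.5 abe16d574   # use `git tag -s` instead if release tags are signed
git push origin 1.4.5
```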
09:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 25%: After schema change', diff saved to https://phabricator.wikimedia.org/P23385 and previous config saved to /var/cache/conftool/dbconfig/20220328-095007-root.json [production]
09:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P23384 and previous config saved to /var/cache/conftool/dbconfig/20220328-094653-marostegui.json [production]
09:46 <moritzm> installing Linux 4.9.303 on Stretch hosts [production]
09:45 <mmandere@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cp2033.codfw.wmnet with reason: host reimage [production]
09:43 <mmandere@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cp2033.codfw.wmnet with reason: host reimage [production]
09:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 10%: After schema change', diff saved to https://phabricator.wikimedia.org/P23383 and previous config saved to /var/cache/conftool/dbconfig/20220328-093503-root.json [production]
09:32 <wm-bot> cleaned up grid queue errors on tools-sgegrid-master.tools.eqiad1.wikimedia.cloud (T304816) - cookbook ran by arturo@nostromo [tools]
09:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T298556)', diff saved to https://phabricator.wikimedia.org/P23382 and previous config saved to /var/cache/conftool/dbconfig/20220328-093148-marostegui.json [production]
09:24 <mmandere@cumin1001> START - Cookbook sre.hosts.reimage for host cp2033.codfw.wmnet with OS buster [production]
09:13 <moritzm> installing Linux 4.19.235 on Buster hosts [production]
09:11 <mmandere> depool cp2033 for reimage - T290005 [production]
09:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1143 (T298556)', diff saved to https://phabricator.wikimedia.org/P23379 and previous config saved to /var/cache/conftool/dbconfig/20220328-091041-marostegui.json [production]
09:10 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
09:10 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
09:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142 (T298556)', diff saved to https://phabricator.wikimedia.org/P23378 and previous config saved to /var/cache/conftool/dbconfig/20220328-091033-marostegui.json [production]
09:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 100%: After downgrade', diff saved to https://phabricator.wikimedia.org/P23377 and previous config saved to /var/cache/conftool/dbconfig/20220328-090445-root.json [production]
09:03 <moritzm> installing Linux 5.10.106 on Bullseye hosts [production]
08:56 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
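
The mwdebug-deploy entries above and below record automated helmfile applies of the mwdebug service in each data centre. A minimal sketch of an equivalent manual invocation; the checkout path on the deploy host is an assumption, while the environments mirror the [eqiad]/[codfw] markers in the log lines:

```bash
# Hedged sketch: apply the mwdebug helmfile release per data centre.
cd /srv/deployment-charts/helmfile.d/services/mwdebug   # assumed path on deploy1002
helmfile -e eqiad apply
helmfile -e codfw apply
```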
08:55 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
08:55 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
08:55 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142', diff saved to https://phabricator.wikimedia.org/P23376 and previous config saved to /var/cache/conftool/dbconfig/20220328-085528-marostegui.json [production]
08:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 100%: After schema change', diff saved to https://phabricator.wikimedia.org/P23375 and previous config saved to /var/cache/conftool/dbconfig/20220328-085507-root.json [production]
08:53 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
08:50 <jynus> deploy new alerting (0.7.1) for db backups at alert1001 T138562 [production]
08:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 75%: After downgrade', diff saved to https://phabricator.wikimedia.org/P23374 and previous config saved to /var/cache/conftool/dbconfig/20220328-084941-root.json [production]
08:47 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
08:47 <marostegui> dbmaint s1@eqiad T304812 [production]