2022-01-24
ยง
|
16:43 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
16:43 <jdrewniak@deploy1002> Synchronized portals: Wikimedia Portals Update: [[gerrit:756622| Bumping portals to master (T128546)]] (duration: 00m 49s) [production]
16:42 <jdrewniak@deploy1002> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:756622| Bumping portals to master (T128546)]] (duration: 00m 50s) [production]
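The two "Synchronized portals…" entries above are emitted by scap on deploy1002. A minimal sketch of the kind of invocation that produces such a line, assuming the standard scap sync-file form (the path and commit message are taken from the log; the exact command and flags actually used are not recorded here):

    scap sync-file portals/wikipedia.org/assets 'Wikimedia Portals Update: Bumping portals to master (T128546)'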
16:35 <elukey@deploy1002> helmfile [staging] DONE helmfile.d/services/api-gateway: sync on staging [production]
16:33 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P19071 and previous config saved to /var/cache/conftool/dbconfig/20220124-163302-marostegui.json [production]
16:28 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on restbase2011.codfw.wmnet with reason: bad disk [production]
16:28 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on restbase2011.codfw.wmnet with reason: bad disk [production]
16:25 <elukey@deploy1002> helmfile [staging] DONE helmfile.d/services/api-gateway: sync on production [production]
16:25 <elukey@deploy1002> helmfile [staging] START helmfile.d/services/api-gateway: sync on staging [production]
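The helmfile START/DONE pairs above come from the Kubernetes service deployment workflow on deploy1002. A rough sketch of the likely commands, assuming the usual /srv/deployment-charts checkout (the service and environment names are from the log; the path and flags are assumptions):

    cd /srv/deployment-charts/helmfile.d/services/api-gateway
    helmfile -e staging apply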
16:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T285149)', diff saved to https://phabricator.wikimedia.org/P19070 and previous config saved to /var/cache/conftool/dbconfig/20220124-161757-marostegui.json [production]
16:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3318 (T285149)', diff saved to https://phabricator.wikimedia.org/P19069 and previous config saved to /var/cache/conftool/dbconfig/20220124-161549-marostegui.json [production]
16:15 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
16:15 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
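The db1101:3318 entries between 16:15 and 16:17 follow the usual depool → downtime → maintain → repool cycle. A sketch of the underlying commands on cumin1001, assuming current dbctl and cookbook syntax (host, section and commit messages come from the log; flag names and exact subcommand forms are assumptions):

    sudo cookbook sre.hosts.downtime --hours 6 -r 'Maintenance' 'db1101.eqiad.wmnet'
    dbctl instance db1101:3318 depool
    dbctl config commit -m 'Depooling db1101:3318 (T285149)'
    # ...maintenance work, then repool (sometimes staged over several commits)...
    dbctl instance db1101:3318 pool
    dbctl config commit -m 'Repooling after maintenance db1101:3318 (T285149)'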
16:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177 (T285149)', diff saved to https://phabricator.wikimedia.org/P19068 and previous config saved to /var/cache/conftool/dbconfig/20220124-161540-marostegui.json [production]
16:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P19067 and previous config saved to /var/cache/conftool/dbconfig/20220124-160035-marostegui.json [production]
15:49 <jbond> enable abuse_network blocking globally gerrit:756611 [production]
15:48 <ladsgroup@deploy1002> Synchronized php-1.38.0-wmf.18/extensions/AbuseFilter/includes/ServiceWiring.php: Backport: [[gerrit:756083|Use MainStash instead of db-replicated (T272512)]] (duration: 00m 49s) [production]
15:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P19066 and previous config saved to /var/cache/conftool/dbconfig/20220124-154531-marostegui.json [production]
15:37 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
15:36 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
15:36 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
15:35 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
15:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177 (T285149)', diff saved to https://phabricator.wikimedia.org/P19065 and previous config saved to /var/cache/conftool/dbconfig/20220124-153026-marostegui.json [production]
15:29 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
15:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1177 (T285149)', diff saved to https://phabricator.wikimedia.org/P19064 and previous config saved to /var/cache/conftool/dbconfig/20220124-152820-marostegui.json [production]
15:28 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1177.eqiad.wmnet with reason: Maintenance [production]
15:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1177.eqiad.wmnet with reason: Maintenance [production]
15:28 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on 12 hosts with reason: Maintenance [production]
15:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on 12 hosts with reason: Maintenance [production]
15:28 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2079.codfw.wmnet with reason: Maintenance [production]
15:27 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2079.codfw.wmnet with reason: Maintenance [production]
15:27 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
15:27 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
15:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172 (T285149)', diff saved to https://phabricator.wikimedia.org/P19063 and previous config saved to /var/cache/conftool/dbconfig/20220124-152748-marostegui.json [production]
15:27 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
15:27 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
15:25 <elukey@cumin1001> END (PASS) - Cookbook sre.ores.roll-restart-workers (exit_code=0) for ORES eqiad cluster: Roll restart of ORES's daemons. [production]
15:22 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
15:17 <ladsgroup@deploy1002> Synchronized wmf-config/CommonSettings.php: Config: [[gerrit:752134|Update wikitech etcd readonly exemption]] (duration: 00m 49s) [production]
15:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172', diff saved to https://phabricator.wikimedia.org/P19062 and previous config saved to /var/cache/conftool/dbconfig/20220124-151243-marostegui.json [production]
15:05 <elukey@cumin1001> START - Cookbook sre.ores.roll-restart-workers for ORES eqiad cluster: Roll restart of ORES's daemons. [production]
15:04 <elukey@cumin1001> END (PASS) - Cookbook sre.ores.roll-restart-workers (exit_code=0) for ORES codfw cluster: Roll restart of ORES's daemons. [production]
14:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172', diff saved to https://phabricator.wikimedia.org/P19061 and previous config saved to /var/cache/conftool/dbconfig/20220124-145738-marostegui.json [production]
14:57 <marostegui@cumin1001> dbctl commit (dc=all): 'es1033 (re)pooling @ 100%: repooling after reimage', diff saved to https://phabricator.wikimedia.org/P19060 and previous config saved to /var/cache/conftool/dbconfig/20220124-145712-root.json [production]
14:48 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=restbase1030.eqiad.wmnet [production]
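The "conftool action" line above records restbase1030 being repooled after its reimage. An equivalent confctl invocation on puppetmaster1001 would look roughly like this (the selector and action are taken verbatim from the log; the exact wrapper used is an assumption):

    sudo confctl select 'name=restbase1030.eqiad.wmnet' set/pooled=yes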
14:46 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host restbase1030.eqiad.wmnet with OS buster [production]
14:44 <elukey@cumin1001> START - Cookbook sre.ores.roll-restart-workers for ORES codfw cluster: Roll restart of ORES's daemons. [production]
14:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172 (T285149)', diff saved to https://phabricator.wikimedia.org/P19059 and previous config saved to /var/cache/conftool/dbconfig/20220124-144234-marostegui.json [production]
14:42 <marostegui@cumin1001> dbctl commit (dc=all): 'es1033 (re)pooling @ 75%: repooling after reimage', diff saved to https://phabricator.wikimedia.org/P19058 and previous config saved to /var/cache/conftool/dbconfig/20220124-144208-root.json [production]
14:34 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host es2034.codfw.wmnet with OS bullseye [production]
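The reimage entries for restbase1030 (buster) and es2034 (bullseye) are produced by the sre.hosts.reimage cookbook run from a cumin host. A sketch of a typical invocation (the OS names come from the log; the flag name and host argument form are assumptions):

    sudo cookbook sre.hosts.reimage --os bullseye es2034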