2022-01-12
20:17 <mutante> applying firewall change on phabricator (VCS, git-ssh), second attempt, first codfw-only [production]
20:11 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1105:3311 (T297191)', diff saved to https://phabricator.wikimedia.org/P18701 and previous config saved to /var/cache/conftool/dbconfig/20220112-201114-marostegui.json [production]
20:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1105:3311 (T297191)', diff saved to https://phabricator.wikimedia.org/P18700 and previous config saved to /var/cache/conftool/dbconfig/20220112-200806-marostegui.json [production]
20:08 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1105.eqiad.wmnet with reason: Maintenance [production]
20:08 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1105.eqiad.wmnet with reason: Maintenance [production]
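    The depool/downtime/repool cycle for db1105:3311 above is driven by dbctl on the cumin host; a minimal sketch of the equivalent commands (flag names and the pooling-percentage value are assumed from typical dbctl usage, not taken from this log):
    $ dbctl instance db1105:3311 depool
    $ dbctl config commit -m 'Depooling db1105:3311 (T297191)'
    # ... maintenance runs while the instance is downtimed ...
    $ dbctl instance db1105:3311 pool -p 100    # assumed repool weight
    $ dbctl config commit -m 'Repooling after maintenance db1105:3311 (T297191)'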
20:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1119 (T297191)', diff saved to https://phabricator.wikimedia.org/P18699 and previous config saved to /var/cache/conftool/dbconfig/20220112-200759-marostegui.json [production]
19:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1119', diff saved to https://phabricator.wikimedia.org/P18698 and previous config saved to /var/cache/conftool/dbconfig/20220112-195254-marostegui.json [production]
19:52 <hashar> Restarting CI Jenkins once more to apply the Gearman plugin update T298691 [production]
19:44 <hashar> Clearing /srv partition on integration-castor03 [production]
19:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1119', diff saved to https://phabricator.wikimedia.org/P18697 and previous config saved to /var/cache/conftool/dbconfig/20220112-193749-marostegui.json [production]
19:34 <hashar> Upgrading CI Jenkins and Gearman plugin T298691 [production]
19:29 <mutante> wdqs2003 - one power supply failed so it's not redundant anymore, says Icinga [production]
19:29 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:28 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
19:28 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:26 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
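    The mwdebug entries above come from the helmfile-based service deployment on deploy1002; a rough sketch of what one such apply looks like (the deployment-charts path is an assumption based on the standard layout, not something shown in this log):
    $ cd /srv/deployment-charts/helmfile.d/services/mwdebug
    $ helmfile -e eqiad apply    # logged as "helmfile [eqiad] START ... apply"; the codfw run is analogous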
19:25 <cwhite> begin eqiad opensearch upgrade T288621 [production]
19:22 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1119 (T297191)', diff saved to https://phabricator.wikimedia.org/P18696 and previous config saved to /var/cache/conftool/dbconfig/20220112-192244-marostegui.json [production]
19:22 <mutante> deneb - for some reason the "package builder clean up build directory"-service fails T287222 [production]
19:21 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:21 <cjming> end of UTC evening backport & config window [production]
19:21 <mutante> [deneb:~] $ sudo systemctl start package_builder_Clean_up_build_directory.service [production]
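    Inspecting and re-running a failed systemd unit like the one on deneb typically looks like the sketch below (unit name taken from the entry above; the journalctl step is an assumption about how the failure would be investigated):
    $ sudo systemctl status package_builder_Clean_up_build_directory.service
    $ sudo journalctl -u package_builder_Clean_up_build_directory.service -n 50   # check why the last run failed
    $ sudo systemctl start package_builder_Clean_up_build_directory.service       # manual re-run, as logged above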
19:20 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
19:20 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:19 <cjming@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:753187|Add new vector skin key to RelatedArticlesFooterAllowedSkins. (T298916)]] (duration: 01m 21s) [production]
19:18 <mutante> pybal-test2002 - apt-get clean after icinga alert about disk space running out [production]
19:17 <mutante> zookeeper-test1002 - CRITICAL - degraded: The following units failed: ifup@ens5.service - for this issue see T273026 (T268074) [production]
19:16 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
19:14 <mutante> elastic10180 - one power supply seemingly failed - see icinga IPMI alert - [Status = Critical, PS Redundancy = Critical] T294805 [production]
19:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1119 (T297191)', diff saved to https://phabricator.wikimedia.org/P18695 and previous config saved to /var/cache/conftool/dbconfig/20220112-191436-marostegui.json [production]
19:14 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1119.eqiad.wmnet with reason: Maintenance [production]
19:14 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1119.eqiad.wmnet with reason: Maintenance [production]
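    The sre.hosts.downtime cookbook entries above are started from the cumin host roughly as follows (the duration and reason flags are assumed from the cookbook's usual interface, not quoted from this log):
    $ sudo cookbook sre.hosts.downtime --hours 6 -r 'Maintenance' 'db1119.eqiad.wmnet'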
19:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1106 (T297191)', diff saved to https://phabricator.wikimedia.org/P18694 and previous config saved to /var/cache/conftool/dbconfig/20220112-191428-marostegui.json [production]
19:13 <cjming@deploy1002> Synchronized php-1.38.0-wmf.17/includes/export/WikiExporter.php: Backport: [[gerrit:753085|Partial revert of I1a691f01cd82e60bf41207d32501edb4b9835e37 to unbreak dumps (T299020)]] (duration: 01m 22s) [production]
19:12 <mutante> mirror1001 - CRITICAL - degraded: The following units failed: update-ubuntu-mirror.service - T286898 [production]
19:09 <hashar> Upgraded releases Jenkins from 2.319.1 to 2.319.2 # T298691 [production]
19:06 <moritzm> imported jenkins 2.319.2 to thirdparty/ci for buster-wikimedia [production]
19:05 <mutante> [mwmaint1002:~] $ sudo systemctl status mediawiki_job_updatequerypages_mostlinked_s3@13.service (running fine but had failed for unknown reason last time it was supposed to run automatically) [production]
18:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1106', diff saved to https://phabricator.wikimedia.org/P18693 and previous config saved to /var/cache/conftool/dbconfig/20220112-185923-marostegui.json [production]
18:55 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=phab2001-vcs.codfw.wmnet [production]
18:51 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=phab2001-vcs.codfw.wmnet [production]
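    The two conftool actions above (depool of phab2001-vcs at 18:51, repool at 18:55 around the firewall change) map to confctl invocations of roughly this shape (the "confctl select" form is assumed from standard conftool usage):
    $ sudo confctl select 'name=phab2001-vcs.codfw.wmnet' set/pooled=no
    # ... apply and verify the firewall change ...
    $ sudo confctl select 'name=phab2001-vcs.codfw.wmnet' set/pooled=yes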
18:44 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1106', diff saved to https://phabricator.wikimedia.org/P18692 and previous config saved to /var/cache/conftool/dbconfig/20220112-184418-marostegui.json [production]
18:40 <mutante> phab1001 - temp disabling puppet - deployed firewall change on phab2001 - debugging - no impact [production]
18:29 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1106 (T297191)', diff saved to https://phabricator.wikimedia.org/P18691 and previous config saved to /var/cache/conftool/dbconfig/20220112-182913-marostegui.json [production]
18:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1106 (T297191)', diff saved to https://phabricator.wikimedia.org/P18690 and previous config saved to /var/cache/conftool/dbconfig/20220112-182806-marostegui.json [production]
18:28 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on clouddb[1013,1017,1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
18:27 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on clouddb[1013,1017,1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
18:27 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1106.eqiad.wmnet with reason: Maintenance [production]
18:27 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1106.eqiad.wmnet with reason: Maintenance [production]
18:27 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on 14 hosts with reason: Maintenance [production]