2021-07-06
11:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1118', diff saved to https://phabricator.wikimedia.org/P16771 and previous config saved to /var/cache/conftool/dbconfig/20210706-115732-marostegui.json [production]
11:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 75%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16770 and previous config saved to /var/cache/conftool/dbconfig/20210706-114739-root.json [production]
11:32 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 50%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16769 and previous config saved to /var/cache/conftool/dbconfig/20210706-113235-root.json [production]
11:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 25%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16768 and previous config saved to /var/cache/conftool/dbconfig/20210706-111731-root.json [production]
11:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2071', diff saved to https://phabricator.wikimedia.org/P16767 and previous config saved to /var/cache/conftool/dbconfig/20210706-111635-marostegui.json [production]
10:19 <moritzm> installing jackson-databind security updates on buster [production]
09:01 <_joe_> repooling wdqs1007 now that lag has caught up [production]
08:43 <moritzm> installing libuv1 security updates on buster [production]
07:06 <marostegui> Upgrade db1104 kernel [production]
06:54 <moritzm> installing PHP 7.3 security updates on buster [production]
06:50 <marostegui> Upgrade db1122 kernel [production]
06:35 <marostegui> Upgrade db1138 kernel [production]
06:31 <marostegui> Upgrade db1160 kernel [production]
00:56 <eileen> process-control config revision is 8d46b52ed4 [production]
2021-07-05
17:40 <legoktm> published fixed docker-registry.discovery.wmnet/nodejs10-devel:0.0.4 image (T286212) [production]
15:24 <_joe_> leaving wdqs1007 depooled so that the updater can recover faster, now at 16.5 hours of lag [production]
14:01 <moritzm> uploaded nginx 1.13.9-1+wmf3 for stretch-wikimedia [production]
12:50 <marostegui> Stop MySQL on db1117:3321 to clone db1125 T286042 [production]
11:29 <moritzm> installing openexr security updates on stretch [production]
11:07 <moritzm> installing tiff security updates on stretch [production]
10:48 <moritzm> upgrading PHP on miscweb* [production]
10:37 <jbond> enable puppet fleet wide to post puppetdb change [production]
10:29 <marostegui> Optimize ruwiki.logging on s6 eqiad with replication T286102 [production]
10:27 <jbond> disable puppet fleet wide to perform puppetdb change [production]
08:15 <moritzm> rolling out debmonitor-client 0.3.0 [production]
08:03 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on releases1002.eqiad.wmnet with reason: bump CPU count [production]
08:03 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 0:30:00 on releases1002.eqiad.wmnet with reason: bump CPU count [production]
07:55 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on releases2002.codfw.wmnet with reason: bump CPU count [production]
07:55 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 0:30:00 on releases2002.codfw.wmnet with reason: bump CPU count [production]
07:04 <_joe_> restarting blazegraph, then restarting the updater again [production]
06:48 <moritzm> start rasdaemon on sretest1001, didn't start after last reboot from a week ago [production]
06:47 <_joe_> restart wdqs-updater on wdqs1007 [production]
00:53 <eileen> process-control config revision is a1717c7fde [production]
00:47 <eileen> process-control config revision is 24565578f7 [production]
2021-07-04
17:43 <brennen@deploy1002> Synchronized php-1.37.0-wmf.12/extensions/AbuseFilter/includes/AbuseFilterHooks.php: Backport: [[gerrit:702957|Revert "Replace depricating method IContextSource::getWikiPage to WikiPageFactory usage" (T286140)]] (duration: 01m 06s) [production]
08:02 <elukey> repool eqsin after equinix maintenance - T286113 [production]
2021-07-03
17:46 <elukey> depool eqsin due to loss of power redundancy (equinix maintenance) - T286113 [production]
09:11 <Amir1> restarting mailman3-web on lists1001 to pick up patches for T283659 [production]
08:53 <Amir1> patching postorius and mailmanclient on lists1001 for T283659 [production]
2021-07-02
22:06 <foks> removing three files for legal compliance [production]
19:53 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
19:49 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:52 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:41 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:38 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:36 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:24 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:22 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:08 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:04 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]