2024-05-21
08:43 <moritzm> installing ghostscript security updates [production]
08:41 <matthiasmullie> UTC morning backports done [production]
08:41 <mlitn@deploy1002> Finished scap: Backport for [[gerrit:1032888|Allow async (job queue based) chunked upload on all wikis (T364644)]] (duration: 17m 32s) [production]
08:40 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) sretest2002.wikimedia.org on all recursors [production]
08:40 <cmooney@cumin1002> START - Cookbook sre.dns.wipe-cache sretest2002.wikimedia.org on all recursors [production]
08:38 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:38 <cmooney@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add dns for sretest2002 - cmooney@cumin1002" [production]
08:38 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host db2210.codfw.wmnet [production]
08:37 <effie> enable puppet on all mw* baremetal hosts [production]
08:37 <cmooney@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add dns for sretest2002 - cmooney@cumin1002" [production]
08:35 <marostegui> Deploy schema change on s8 eqiad, this will cause a few hours of replication lag in s8 clouddb replicas T364299 [production]
08:34 <cmooney@cumin1002> START - Cookbook sre.dns.netbox [production]
08:34 <cmooney@cumin1002> END (ERROR) - Cookbook sre.dns.netbox (exit_code=97) [production]
08:34 <cmooney@cumin1002> START - Cookbook sre.dns.netbox [production]
08:33 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1167.eqiad.wmnet with reason: Long schema change [production]
08:32 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1167.eqiad.wmnet with reason: Long schema change [production]
08:32 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on clouddb[1016,1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Long schema change [production]
08:32 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on clouddb[1016,1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Long schema change [production]
08:32 <marostegui@cumin1002> dbctl commit (dc=all): 'db1221 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P62764 and previous config saved to /var/cache/conftool/dbconfig/20240521-083212-root.json [production]
08:30 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1167 for a schema change', diff saved to https://phabricator.wikimedia.org/P62763 and previous config saved to /var/cache/conftool/dbconfig/20240521-083053-root.json [production]
08:28 <marostegui@cumin1002> dbctl commit (dc=all): 'db1237 (re)pooling @ 100%: After reimage', diff saved to https://phabricator.wikimedia.org/P62762 and previous config saved to /var/cache/conftool/dbconfig/20240521-082842-root.json [production]
08:27 <mlitn@deploy1002> mlitn and bawolff: Continuing with sync [production]
08:26 <mlitn@deploy1002> mlitn and bawolff: Backport for [[gerrit:1032888|Allow async (job queue based) chunked upload on all wikis (T364644)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:23 <mlitn@deploy1002> Started scap: Backport for [[gerrit:1032888|Allow async (job queue based) chunked upload on all wikis (T364644)]] [production]
08:22 <mlitn@deploy1002> Finished scap: Backport for [[gerrit:1032824|Remove complicated synchronization of caption/description inputs (T365119)]] (duration: 17m 40s) [production]
08:19 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1216.eqiad.wmnet with reason: Maintenance [production]
08:19 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1216.eqiad.wmnet with reason: Maintenance [production]
08:19 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1213 (T352010)', diff saved to https://phabricator.wikimedia.org/P62761 and previous config saved to /var/cache/conftool/dbconfig/20240521-081930-ladsgroup.json [production]
08:18 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1221.eqiad.wmnet with OS bookworm [production]
08:17 <marostegui@cumin1002> dbctl commit (dc=all): 'db1221 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P62760 and previous config saved to /var/cache/conftool/dbconfig/20240521-081706-root.json [production]
08:14 <effie> enable puppet on mediawiki codfw servers [production]
08:13 <marostegui@cumin1002> dbctl commit (dc=all): 'db1237 (re)pooling @ 75%: After reimage', diff saved to https://phabricator.wikimedia.org/P62759 and previous config saved to /var/cache/conftool/dbconfig/20240521-081336-root.json [production]
08:09 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host db2206.codfw.wmnet [production]
08:09 <mlitn@deploy1002> mlitn: Continuing with sync [production]
08:07 <mlitn@deploy1002> mlitn: Backport for [[gerrit:1032824|Remove complicated synchronization of caption/description inputs (T365119)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:04 <mlitn@deploy1002> Started scap: Backport for [[gerrit:1032824|Remove complicated synchronization of caption/description inputs (T365119)]] [production]
08:04 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1213', diff saved to https://phabricator.wikimedia.org/P62758 and previous config saved to /var/cache/conftool/dbconfig/20240521-080422-ladsgroup.json [production]
08:04 <mlitn@deploy1002> Finished scap: Backport for [[gerrit:1032823|Fix automatic numbering of copied titles (T365107)]] (duration: 17m 02s) [production]
08:01 <marostegui@cumin1002> dbctl commit (dc=all): 'db1182 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P62757 and previous config saved to /var/cache/conftool/dbconfig/20240521-080145-root.json [production]
07:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db1237 (re)pooling @ 50%: After reimage', diff saved to https://phabricator.wikimedia.org/P62756 and previous config saved to /var/cache/conftool/dbconfig/20240521-075830-root.json [production]
07:56 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1221.eqiad.wmnet with reason: host reimage [production]
07:54 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1221.eqiad.wmnet with reason: host reimage [production]
07:52 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host db2206.codfw.wmnet [production]
07:51 <moritzm> installing nginx security updates [production]
07:50 <mlitn@deploy1002> mlitn: Continuing with sync [production]
07:49 <mlitn@deploy1002> mlitn: Backport for [[gerrit:1032823|Fix automatic numbering of copied titles (T365107)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
07:49 <effie> disable puppet on all mediawiki hardware hosts - T345740 [production]
07:49 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host db2179.codfw.wmnet [production]
07:49 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1213', diff saved to https://phabricator.wikimedia.org/P62755 and previous config saved to /var/cache/conftool/dbconfig/20240521-074914-ladsgroup.json [production]
07:47 <mlitn@deploy1002> Started scap: Backport for [[gerrit:1032823|Fix automatic numbering of copied titles (T365107)]] [production]