2023-07-25
13:42 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 15 days, 0:00:00 on parse1002.eqiad.wmnet with reason: T339340 - hw troubleshooting [production]
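The sre.hosts.downtime START/END pairs throughout this log are emitted automatically when an operator runs the cookbook from a cumin host; the cookbook schedules the monitoring downtime and logs here on its own. A minimal sketch of the kind of invocation behind the parse1002 entry above, with flag names treated as assumptions rather than a verified signature:

    $ sudo cookbook sre.hosts.downtime --days 15 \
          --reason "T339340 - hw troubleshooting" 'parse1002.eqiad.wmnet'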
13:41 <urbanecm@deploy1002> urbanecm and dreamyjazz: Backport for [[gerrit:940927|Enable write new on testwiki for CheckUser event tables migration (T330158)]] synced to the testservers mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
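Backport entries like the one above are produced by scap backport on the deployment host: it cherry-picks the listed Gerrit changes, syncs them to the mwdebug servers and the mw-debug Kubernetes deployment for verification, and then completes the sync to the full fleet. A rough sketch, assuming the change is addressed by its Gerrit number:

    $ scap backport 940927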
13:41 <cgoubert@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "re-run to fix mw1486 - cgoubert@cumin1001" [production]
13:40 <cgoubert@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "re-run to fix mw1486 - cgoubert@cumin1001" [production]
13:40 <urbanecm@deploy1002> Started scap: Backport for [[gerrit:940927|Enable write new on testwiki for CheckUser event tables migration (T330158)]] [production]
13:38 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host mw1486.eqiad.wmnet with OS buster [production]
13:38 <cgoubert@cumin1001> END (FAIL) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=99) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - cgoubert@cumin1001" [production]
13:38 <urbanecm@deploy1002> Finished scap: Backport for [[gerrit:941414|Add support for writing both new and old to Hooks.php (T341934 T341586)]], [[gerrit:941400|Follow-up: Add support for writing both new and old to Hooks.php (T341586)]] (duration: 07m 28s) [production]
13:30 <urbanecm@deploy1002> Started scap: Backport for [[gerrit:941414|Add support for writing both new and old to Hooks.php (T341934 T341586)]], [[gerrit:941400|Follow-up: Add support for writing both new and old to Hooks.php (T341586)]] [production]
13:21 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114 (T342617)', diff saved to https://phabricator.wikimedia.org/P49704 and previous config saved to /var/cache/conftool/dbconfig/20230725-132121-ladsgroup.json [production]
13:20 <godog> powercycle parse1002 - T339340 [production]
13:17 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
13:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114', diff saved to https://phabricator.wikimedia.org/P49702 and previous config saved to /var/cache/conftool/dbconfig/20230725-130615-ladsgroup.json [production]
12:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114', diff saved to https://phabricator.wikimedia.org/P49701 and previous config saved to /var/cache/conftool/dbconfig/20230725-125109-ladsgroup.json [production]
12:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114 (T342617)', diff saved to https://phabricator.wikimedia.org/P49700 and previous config saved to /var/cache/conftool/dbconfig/20230725-123602-ladsgroup.json [production]
12:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2114 (T342617)', diff saved to https://phabricator.wikimedia.org/P49699 and previous config saved to /var/cache/conftool/dbconfig/20230725-120641-ladsgroup.json [production]
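The db2114 entries between 12:06 and 13:21 are the standard dbctl maintenance cycle: depool the replica, run the maintenance, then repool in stages, committing after each change; each commit is what writes the Phabricator diff paste and the conftool cache file referenced in the log lines. A rough sketch of the underlying commands, with subcommand spellings treated as assumptions:

    $ sudo dbctl instance db2114 depool
    $ sudo dbctl config commit -m 'Depooling db2114 (T342617)'
    # ... maintenance on db2114 ...
    $ sudo dbctl instance db2114 pool -p 100
    $ sudo dbctl config commit -m 'Repooling after maintenance db2114 (T342617)'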
12:06 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2114.codfw.wmnet with reason: Maintenance [production]
12:06 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2114.codfw.wmnet with reason: Maintenance [production]
11:49 <akosiaris@deploy1002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
11:49 <akosiaris@deploy1002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
11:48 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
11:48 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
11:48 <akosiaris@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:47 <akosiaris@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
11:46 <akosiaris@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
11:45 <akosiaris@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
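The four START/DONE pairs above are one helmfile apply of the admin release per cluster, in the staging-first order visible in the timestamps: staging-codfw, staging-eqiad, codfw, eqiad. A rough sketch of what produces each pair, with the checkout path taken as an assumption:

    $ cd /srv/deployment-charts/helmfile.d/admin
    $ helmfile -e staging-codfw apply
    $ helmfile -e staging-eqiad apply
    $ helmfile -e codfw apply
    $ helmfile -e eqiad apply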
11:37 <aborrero@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:37 <aborrero@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: openstack - aborrero@cumin1001" [production]
11:36 <aborrero@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: openstack - aborrero@cumin1001" [production]
11:33 <aborrero@cumin1001> START - Cookbook sre.dns.netbox [production]
11:32 <akosiaris> T340087 wikidiff2 rollout done. 1 host is unreachable and will need to be reimaged or upgraded manually to pick this up, parse1002.eqiad.wmnet [production]
11:29 <akosiaris> T340087 starting wikidiff2 1.41.1 rollout to eqiad. codfw already done. [production]
11:28 <akosiaris> restart php on mw1457 [production]
11:25 <akosiaris> T340087 keep a copy php-wikidiff2_1.13.0-1_amd64.deb in apt1001:/home/akosiaris/wd/ in case of emergency [production]
11:24 <akosiaris> T340087 starting wikidiff2 1.41.1 rollout to codfw [production]
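Keeping the old php-wikidiff2 package on apt1001 gives a quick manual rollback path if the 1.41.1 rollout misbehaves: copy the saved .deb to an affected appserver, downgrade with dpkg, and restart PHP. A hedged sketch of that fallback only; the PHP service name is an assumption:

    $ sudo dpkg -i php-wikidiff2_1.13.0-1_amd64.deb
    $ sudo systemctl restart php7.4-fpm   # service name is an assumption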
10:51 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 31 days, 0:00:00 on lvs[1013-1015].eqiad.wmnet with reason: test hosts [production]
10:50 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime for 31 days, 0:00:00 on lvs[1013-1015].eqiad.wmnet with reason: test hosts [production]
09:50 <elukey> restart kafka on kafka-main1001 to pick up the new changes - T341558 [production]
09:47 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
09:46 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
09:06 <slyngs> Restart Tomcat / Apereo CAS on idp1002 [production]
09:01 <jnuche@deploy1002> rebuilt and synchronized wikiversions files: group0 wikis to 1.41.0-wmf.19 refs T340247 [production]
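The "rebuilt and synchronized wikiversions files" message is the standard log line from the wikiversions step of the deployment train: the wikiversions map is updated so group0 wikis point at 1.41.0-wmf.19, then synced to the fleet. A rough sketch, with command names and argument order treated as assumptions since the train tooling wraps them:

    $ scap update-wikiversions group0 1.41.0-wmf.19
    $ scap sync-wikiversions 'group0 wikis to 1.41.0-wmf.19 refs T340247'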
08:59 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
08:59 <oblivian@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
08:51 <jnuche@deploy1002> Pruned MediaWiki: 1.41.0-wmf.17 (duration: 02m 11s) [production]
08:49 <jnuche@deploy1002> Finished scap: testwikis wikis to 1.41.0-wmf.19 refs T340247 (duration: 52m 35s) [production]
08:35 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
08:35 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49696 and previous config saved to /var/cache/conftool/dbconfig/20230725-080326-root.json [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3315 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49695 and previous config saved to /var/cache/conftool/dbconfig/20230725-080315-root.json [production]