2023-07-25
13:20 <godog> powercycle parse1002 - T339340 [production]
13:17 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
13:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114', diff saved to https://phabricator.wikimedia.org/P49702 and previous config saved to /var/cache/conftool/dbconfig/20230725-130615-ladsgroup.json [production]
12:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114', diff saved to https://phabricator.wikimedia.org/P49701 and previous config saved to /var/cache/conftool/dbconfig/20230725-125109-ladsgroup.json [production]
12:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2114 (T342617)', diff saved to https://phabricator.wikimedia.org/P49700 and previous config saved to /var/cache/conftool/dbconfig/20230725-123602-ladsgroup.json [production]
12:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2114 (T342617)', diff saved to https://phabricator.wikimedia.org/P49699 and previous config saved to /var/cache/conftool/dbconfig/20230725-120641-ladsgroup.json [production]
12:06 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2114.codfw.wmnet with reason: Maintenance [production]
12:06 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2114.codfw.wmnet with reason: Maintenance [production]
11:49 <akosiaris@deploy1002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
11:49 <akosiaris@deploy1002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
11:48 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
11:48 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
11:48 <akosiaris@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:47 <akosiaris@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
11:46 <akosiaris@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
11:45 <akosiaris@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
11:37 <aborrero@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:37 <aborrero@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: openstack - aborrero@cumin1001" [production]
11:36 <aborrero@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: openstack - aborrero@cumin1001" [production]
11:33 <aborrero@cumin1001> START - Cookbook sre.dns.netbox [production]
11:32 <akosiaris> T340087 wikidiff2 rollout done. 1 host is unreachable and will need to be reimaged or upgraded manually to pick this up, parse1002.eqiad.wmnet [production]
11:29 <akosiaris> T340087 starting wikidiff2 1.41.1 rollout to eqiad. codfw already done. [production]
11:28 <akosiaris> restart php on mw1457 [production]
11:25 <akosiaris> T340087 keep a copy php-wikidiff2_1.13.0-1_amd64.deb in apt1001:/home/akosiaris/wd/ in case of emergency [production]
11:24 <akosiaris> T340087 starting wikidiff2 1.41.1 rollout to codfw [production]
10:51 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 31 days, 0:00:00 on lvs[1013-1015].eqiad.wmnet with reason: test hosts [production]
10:50 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime for 31 days, 0:00:00 on lvs[1013-1015].eqiad.wmnet with reason: test hosts [production]
09:50 <elukey> restart kafka on kafka-main1001 to pick up the new changes - T341558 [production]
09:47 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
09:46 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
09:06 <slyngs> Restart Tomcat / Apereo CAS on idp1002 [production]
09:01 <jnuche@deploy1002> rebuilt and synchronized wikiversions files: group0 wikis to 1.41.0-wmf.19 refs T340247 [production]
08:59 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
08:59 <oblivian@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
08:51 <jnuche@deploy1002> Pruned MediaWiki: 1.41.0-wmf.17 (duration: 02m 11s) [production]
08:49 <jnuche@deploy1002> Finished scap: testwikis wikis to 1.41.0-wmf.19 refs T340247 (duration: 52m 35s) [production]
08:35 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
08:35 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on kafka-main1001.eqiad.wmnet with reason: Apply a new setting to the Kafka broker [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49696 and previous config saved to /var/cache/conftool/dbconfig/20230725-080326-root.json [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3315 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49695 and previous config saved to /var/cache/conftool/dbconfig/20230725-080315-root.json [production]
07:57 <jnuche@deploy1002> Started scap: testwikis wikis to 1.41.0-wmf.19 refs T340247 [production]
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 75%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49694 and previous config saved to /var/cache/conftool/dbconfig/20230725-074821-root.json [production]
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3315 (re)pooling @ 75%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49693 and previous config saved to /var/cache/conftool/dbconfig/20230725-074810-root.json [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 50%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49692 and previous config saved to /var/cache/conftool/dbconfig/20230725-073317-root.json [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3315 (re)pooling @ 50%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49691 and previous config saved to /var/cache/conftool/dbconfig/20230725-073305-root.json [production]
07:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49690 and previous config saved to /var/cache/conftool/dbconfig/20230725-071812-root.json [production]
07:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3315 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49689 and previous config saved to /var/cache/conftool/dbconfig/20230725-071801-root.json [production]
07:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49688 and previous config saved to /var/cache/conftool/dbconfig/20230725-070307-root.json [production]
07:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3315 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49687 and previous config saved to /var/cache/conftool/dbconfig/20230725-070256-root.json [production]
06:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1213:3316 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49686 and previous config saved to /var/cache/conftool/dbconfig/20230725-064802-root.json [production]