2022-06-02
20:59 <andrew@cumin1001> START - Cookbook sre.hosts.reimage for host clouddumps1001.wikimedia.org with OS bullseye [production]
20:45 <andrew@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host clouddumps1001.wikimedia.org with OS bullseye [production]
20:26 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host backup1009.mgmt.eqiad.wmnet with reboot policy FORCED [production]
20:16 <andrew@cumin1001> START - Cookbook sre.hosts.reimage for host clouddumps1001.wikimedia.org with OS bullseye [production]
20:16 <ryankemper> T306449 Marked `elastic1097` as `Staged` in Netbox (was previously failed, but fixed in https://phabricator.wikimedia.org/T306449#7888260) [production]
20:14 <andrew@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host clouddumps1001.wikimedia.org with OS bullseye [production]
20:14 <andrew@cumin1001> START - Cookbook sre.hosts.reimage for host clouddumps1001.wikimedia.org with OS bullseye [production]
20:14 <andrew@cumin1001> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=93) for host clouddumps1001.wikimedia.org with OS bullseye [production]
20:09 <cmjohnson@cumin1001> START - Cookbook sre.hosts.provision for host backup1009.mgmt.eqiad.wmnet with reboot policy FORCED [production]
20:08 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
20:07 <brennen> no patches and no new trainees; closing utc late backport & config window [production]
20:04 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
19:53 <andrew@cumin1001> START - Cookbook sre.hosts.reimage for host clouddumps1001.wikimedia.org with OS bullseye [production]
19:53 <ryankemper> T294805 Marked `elastic10[68-83]` as Active in netbox (all except `elastic10[77,80]` were erroneously marked as `Staged`) [production]
19:45 <herron@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
19:10 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
19:08 <ryankemper> T305646 T308647 Unbanned `elastic2033` and `elastic2054` from clusters; also pooled `elastic2033` [production]
19:07 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=0) [production]
19:07 <bking@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
19:07 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T60674)', diff saved to https://phabricator.wikimedia.org/P29360 and previous config saved to /var/cache/conftool/dbconfig/20220602-190701-ladsgroup.json [production]
18:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P29359 and previous config saved to /var/cache/conftool/dbconfig/20220602-185155-ladsgroup.json [production]
18:43 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=99) [production]
18:43 <bking@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
18:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P29358 and previous config saved to /var/cache/conftool/dbconfig/20220602-183650-ladsgroup.json [production]
18:21 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T60674)', diff saved to https://phabricator.wikimedia.org/P29357 and previous config saved to /var/cache/conftool/dbconfig/20220602-182145-ladsgroup.json [production]
18:15 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
18:15 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
18:14 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
18:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1174 (T60674)', diff saved to https://phabricator.wikimedia.org/P29356 and previous config saved to /var/cache/conftool/dbconfig/20220602-181434-ladsgroup.json [production]
18:14 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1174.eqiad.wmnet with reason: Maintenance [production]
18:14 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1174.eqiad.wmnet with reason: Maintenance [production]
18:10 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
18:08 <jhuneidi@deploy1002> rebuilt and synchronized wikiversions files: all wikis to 1.39.0-wmf.14 refs T308067 [production]
18:04 <herron@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
17:39 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
17:39 <cwhite> rolling restart of eqiad logstash cluster [production]
17:19 <herron@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-logging-eqiad cluster: Roll restart of jvm daemons. [production]
17:11 <cwhite> rolling restart of codfw logstash cluster [production]
17:09 <cwhite> restart logstash on apifeatureusage hosts [production]
16:59 <mutante> mx1001 - deleted certain mails from the mail queue - reacting to mx alert [production]
16:47 <mutante> deleting expired globalsign and digicert TLS certificates [production]
16:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 12 hosts with reason: Maintenance [production]
16:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 12 hosts with reason: Maintenance [production]
16:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2110.codfw.wmnet with reason: Maintenance [production]
16:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2110.codfw.wmnet with reason: Maintenance [production]
16:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3314 (T298560)', diff saved to https://phabricator.wikimedia.org/P29355 and previous config saved to /var/cache/conftool/dbconfig/20220602-164158-ladsgroup.json [production]
16:33 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
16:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3314', diff saved to https://phabricator.wikimedia.org/P29354 and previous config saved to /var/cache/conftool/dbconfig/20220602-162653-ladsgroup.json [production]
16:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1181 (re)pooling @ 100%: 10', diff saved to https://phabricator.wikimedia.org/P29353 and previous config saved to /var/cache/conftool/dbconfig/20220602-162053-ladsgroup.json [production]
16:19 <herron@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-logging-eqiad cluster: Roll restart of jvm daemons. [production]