2022-06-02
19:07 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T60674)', diff saved to https://phabricator.wikimedia.org/P29360 and previous config saved to /var/cache/conftool/dbconfig/20220602-190701-ladsgroup.json [production]
18:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P29359 and previous config saved to /var/cache/conftool/dbconfig/20220602-185155-ladsgroup.json [production]
18:43 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=99) [production]
18:43 <bking@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
18:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P29358 and previous config saved to /var/cache/conftool/dbconfig/20220602-183650-ladsgroup.json [production]
18:21 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T60674)', diff saved to https://phabricator.wikimedia.org/P29357 and previous config saved to /var/cache/conftool/dbconfig/20220602-182145-ladsgroup.json [production]
18:15 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
18:15 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
18:14 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
18:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1174 (T60674)', diff saved to https://phabricator.wikimedia.org/P29356 and previous config saved to /var/cache/conftool/dbconfig/20220602-181434-ladsgroup.json [production]
18:14 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1174.eqiad.wmnet with reason: Maintenance [production]
18:14 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1174.eqiad.wmnet with reason: Maintenance [production]
18:10 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
18:08 <jhuneidi@deploy1002> rebuilt and synchronized wikiversions files: all wikis to 1.39.0-wmf.14 refs T308067 [production]
18:04 <herron@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
17:39 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
17:39 <cwhite> rolling restart of eqiad logstash cluster [production]
17:19 <herron@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-logging-eqiad cluster: Roll restart of jvm daemons. [production]
17:11 <cwhite> rolling restart of codfw logstash cluster [production]
17:09 <cwhite> restart logstash on apifeatureusage hosts [production]
16:59 <mutante> mx1001 - deleted certain mails from the mail queue - reacting to mx alert [production]
16:47 <mutante> deleting expired globalsign and digicert TLS certificates [production]
16:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 12 hosts with reason: Maintenance [production]
16:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 12 hosts with reason: Maintenance [production]
16:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2110.codfw.wmnet with reason: Maintenance [production]
16:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2110.codfw.wmnet with reason: Maintenance [production]
16:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3314 (T298560)', diff saved to https://phabricator.wikimedia.org/P29355 and previous config saved to /var/cache/conftool/dbconfig/20220602-164158-ladsgroup.json [production]
16:33 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
16:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3314', diff saved to https://phabricator.wikimedia.org/P29354 and previous config saved to /var/cache/conftool/dbconfig/20220602-162653-ladsgroup.json [production]
16:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1181 (re)pooling @ 100%: 10', diff saved to https://phabricator.wikimedia.org/P29353 and previous config saved to /var/cache/conftool/dbconfig/20220602-162053-ladsgroup.json [production]
16:19 <herron@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-logging-eqiad cluster: Roll restart of jvm daemons. [production]
16:15 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
16:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3314', diff saved to https://phabricator.wikimedia.org/P29352 and previous config saved to /var/cache/conftool/dbconfig/20220602-161145-ladsgroup.json [production]
16:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1181 (re)pooling @ 75%: 10', diff saved to https://phabricator.wikimedia.org/P29351 and previous config saved to /var/cache/conftool/dbconfig/20220602-160550-ladsgroup.json [production]
15:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3314 (T298560)', diff saved to https://phabricator.wikimedia.org/P29350 and previous config saved to /var/cache/conftool/dbconfig/20220602-155640-ladsgroup.json [production]
15:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1181 (re)pooling @ 25%: 10', diff saved to https://phabricator.wikimedia.org/P29349 and previous config saved to /var/cache/conftool/dbconfig/20220602-155046-ladsgroup.json [production]
15:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:50 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:23 <moritzm> installing cups security updates (client-side libs only) [production]
15:15 <moritzm> installing openssl security updates on stretch [production]
15:14 <herron@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-logging-codfw cluster: Roll restart of jvm daemons. [production]
15:12 <mutante> gitlab migration to new hardware in progress [production]
15:06 <jelto> start migration to gitlab1004 - T307142 [production]
14:59 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: restart to enable S3 plugin - bking@cumin1001 - T309720 [production]
14:56 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
14:56 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]