2022-03-03
15:53 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:47 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:46 <ladsgroup@cumin1001> START - Cookbook sre.hosts.reimage for host db1148.eqiad.wmnet with OS bullseye [production]
15:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1148 (T302950)', diff saved to https://phabricator.wikimedia.org/P21798 and previous config saved to /var/cache/conftool/dbconfig/20220303-152242-ladsgroup.json [production]
15:22 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1148.eqiad.wmnet with reason: Maintenance [production]
15:22 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1148.eqiad.wmnet with reason: Maintenance [production]
15:21 <moritzm> restarting FPM/Apache on mw job runners to pick up expat security updates [production]
15:08 <mutante> T296022 - phabricator - disabled git cloning over ssh for 'stewardscripts' repo - stewards have been asked via mailing list [production]
14:48 <godog> force a puppet run on cp6011 to unblock icinga and disable puppet again, cc bblack [production]
14:48 <Lucas_WMDE> UTC afternoon backport window done [production]
14:46 <lucaswerkmeister-wmde@deploy1002> Finished scap: Backport: [[gerrit:767690|GLAM event: Update landing page content (T301097)]] (full sync because of i18n change) (duration: 09m 45s) [production]
14:37 <lucaswerkmeister-wmde@deploy1002> Started scap: Backport: [[gerrit:767690|GLAM event: Update landing page content (T301097)]] (full sync because of i18n change) [production]
14:26 <XioNoX> merge Icinga: use parent switch shortname [production]
14:17 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host people1003.eqiad.wmnet [production]
14:14 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host people1003.eqiad.wmnet [production]
14:04 <volans> upgraded spicerack to v2.1.0 on cumin1001/cumin2002 [production]
14:03 <akosiaris@cumin1001> END (PASS) - Cookbook sre.ores.roll-restart-workers (exit_code=0) for ORES eqiad cluster: Roll restart of ORES's daemons. [production]
13:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1149 (T302950)', diff saved to https://phabricator.wikimedia.org/P21794 and previous config saved to /var/cache/conftool/dbconfig/20220303-135737-ladsgroup.json [production]
13:54 <akosiaris@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
13:54 <akosiaris> switch changeprop, changeprop-jobqueue to use rdb1011. T281217 [production]
13:53 <akosiaris@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
13:53 <akosiaris@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]
13:53 <akosiaris@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop: apply [production]
13:53 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop: apply [production]
13:52 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop: apply [production]
13:52 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/changeprop: apply [production]
13:51 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/changeprop: apply [production]
13:45 <akosiaris> roll restart ores uwsgi and celery for rdb1005 decommissioning. T281217 [production]
13:44 <akosiaris@cumin1001> START - Cookbook sre.ores.roll-restart-workers for ORES eqiad cluster: Roll restart of ORES's daemons. [production]
13:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1149', diff saved to https://phabricator.wikimedia.org/P21793 and previous config saved to /var/cache/conftool/dbconfig/20220303-134232-ladsgroup.json [production]
13:20 <moritzm> restarting FPM/Apache on mw app servers to pick up expat security updates [production]
13:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1149 (T302950)', diff saved to https://phabricator.wikimedia.org/P21791 and previous config saved to /var/cache/conftool/dbconfig/20220303-131223-ladsgroup.json [production]
13:05 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1149.eqiad.wmnet with OS bullseye [production]
12:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1149.eqiad.wmnet with reason: host reimage [production]
12:47 <hashar> Upgrading Quibble on CI Jenkins jobs from 1.3.0 to 1.4.3 https://gerrit.wikimedia.org/r/c/integration/config/+/767749/ [production]
12:47 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1149.eqiad.wmnet with reason: host reimage [production]
12:35 <ladsgroup@cumin1001> START - Cookbook sre.hosts.reimage for host db1149.eqiad.wmnet with OS bullseye [production]
12:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1149 (T302950)', diff saved to https://phabricator.wikimedia.org/P21790 and previous config saved to /var/cache/conftool/dbconfig/20220303-123030-ladsgroup.json [production]
12:30 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1149.eqiad.wmnet with reason: Maintenance [production]
12:30 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1149.eqiad.wmnet with reason: Maintenance [production]
11:49 <volans> uploaded spicerack_2.1.0 to apt.wikimedia.org buster-wikimedia,bullseye-wikimedia [production]
11:33 <kormat@cumin1001> dbctl commit (dc=all): 'db1126 (re)pooling @ 100%: Repooling to 100% after incident', diff saved to https://phabricator.wikimedia.org/P21789 and previous config saved to /var/cache/conftool/dbconfig/20220303-113304-kormat.json [production]
11:18 <kormat@cumin1001> dbctl commit (dc=all): 'db1126 (re)pooling @ 75%: Repooling to 100% after incident', diff saved to https://phabricator.wikimedia.org/P21788 and previous config saved to /var/cache/conftool/dbconfig/20220303-111801-kormat.json [production]
11:02 <kormat@cumin1001> dbctl commit (dc=all): 'db1126 (re)pooling @ 50%: Repooling to 100% after incident', diff saved to https://phabricator.wikimedia.org/P21787 and previous config saved to /var/cache/conftool/dbconfig/20220303-110257-kormat.json [production]
11:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1160 (T302950)', diff saved to https://phabricator.wikimedia.org/P21786 and previous config saved to /var/cache/conftool/dbconfig/20220303-110224-ladsgroup.json [production]
11:02 <kormat@cumin1001> dbctl commit (dc=all): 'Start repooling db1126 to full weight', diff saved to https://phabricator.wikimedia.org/P21785 and previous config saved to /var/cache/conftool/dbconfig/20220303-110220-kormat.json [production]
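The kormat@cumin1001 entries above record a staged repool of db1126 (50%, then 75%, then 100%, with a separate `dbctl commit` at each stage). As an illustration only, a small sketch of such a ramp schedule; this is a hypothetical helper, not part of WMF's actual dbctl tooling:

```python
def repool_steps(start: int = 25, target: int = 100, increment: int = 25):
    """Yield the percentage stages of a gradual repool, e.g. 25, 50, 75, 100.

    Hypothetical helper for illustration. In the log above, each stage
    corresponds to one `dbctl commit` that raises the host's weight.
    """
    pct = start
    while pct < target:
        yield pct
        pct += increment
    yield target

# The stages seen for db1126 in this log:
print(list(repool_steps(start=50)))  # [50, 75, 100]
```

Ramping in steps, with time between commits, lets replication lag and error rates be watched before the host takes full traffic, which is why the three db1126 commits are spaced roughly fifteen minutes apart.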