2022-03-03
14:14 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host people1003.eqiad.wmnet [production]
14:04 <volans> upgraded spicerack to v2.1.0 on cumin1001/cumin2002 [production]
14:03 <akosiaris@cumin1001> END (PASS) - Cookbook sre.ores.roll-restart-workers (exit_code=0) for ORES eqiad cluster: Roll restart of ORES's daemons. [production]
13:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1149 (T302950)', diff saved to https://phabricator.wikimedia.org/P21794 and previous config saved to /var/cache/conftool/dbconfig/20220303-135737-ladsgroup.json [production]
13:54 <akosiaris@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
13:54 <akosiaris> switch changeprop, changeprop-jobqueue to use rdb1011. T281217 [production]
13:53 <akosiaris@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
13:53 <akosiaris@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]
13:53 <akosiaris@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop: apply [production]
13:53 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop: apply [production]
13:52 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop: apply [production]
13:52 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/changeprop-jobqueue: apply [production]
13:52 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/changeprop: apply [production]
13:51 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/changeprop: apply [production]
13:45 <akosiaris> roll restart ores uwsgi and celery for rdb1005 decommissioning. T281217 [production]
13:44 <akosiaris@cumin1001> START - Cookbook sre.ores.roll-restart-workers for ORES eqiad cluster: Roll restart of ORES's daemons. [production]
13:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1149', diff saved to https://phabricator.wikimedia.org/P21793 and previous config saved to /var/cache/conftool/dbconfig/20220303-134232-ladsgroup.json [production]
13:20 <moritzm> restarting FPM/Apache on mw app servers to pick up expat security updates [production]
13:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1149 (T302950)', diff saved to https://phabricator.wikimedia.org/P21791 and previous config saved to /var/cache/conftool/dbconfig/20220303-131223-ladsgroup.json [production]
13:05 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1149.eqiad.wmnet with OS bullseye [production]
12:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1149.eqiad.wmnet with reason: host reimage [production]
12:47 <hashar> Upgrading Quibble on CI Jenkins jobs from 1.3.0 to 1.4.3 https://gerrit.wikimedia.org/r/c/integration/config/+/767749/ [production]
12:47 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1149.eqiad.wmnet with reason: host reimage [production]
12:35 <ladsgroup@cumin1001> START - Cookbook sre.hosts.reimage for host db1149.eqiad.wmnet with OS bullseye [production]
12:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1149 (T302950)', diff saved to https://phabricator.wikimedia.org/P21790 and previous config saved to /var/cache/conftool/dbconfig/20220303-123030-ladsgroup.json [production]
12:30 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1149.eqiad.wmnet with reason: Maintenance [production]
12:30 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1149.eqiad.wmnet with reason: Maintenance [production]
11:49 <volans> uploaded spicerack_2.1.0 to apt.wikimedia.org buster-wikimedia,bullseye-wikimedia [production]
11:33 <kormat@cumin1001> dbctl commit (dc=all): 'db1126 (re)pooling @ 100%: Repooling to 100% after incident', diff saved to https://phabricator.wikimedia.org/P21789 and previous config saved to /var/cache/conftool/dbconfig/20220303-113304-kormat.json [production]
11:18 <kormat@cumin1001> dbctl commit (dc=all): 'db1126 (re)pooling @ 75%: Repooling to 100% after incident', diff saved to https://phabricator.wikimedia.org/P21788 and previous config saved to /var/cache/conftool/dbconfig/20220303-111801-kormat.json [production]
11:02 <kormat@cumin1001> dbctl commit (dc=all): 'db1126 (re)pooling @ 50%: Repooling to 100% after incident', diff saved to https://phabricator.wikimedia.org/P21787 and previous config saved to /var/cache/conftool/dbconfig/20220303-110257-kormat.json [production]
11:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1160 (T302950)', diff saved to https://phabricator.wikimedia.org/P21786 and previous config saved to /var/cache/conftool/dbconfig/20220303-110224-ladsgroup.json [production]
11:02 <kormat@cumin1001> dbctl commit (dc=all): 'Start repooling db1126 to full weight', diff saved to https://phabricator.wikimedia.org/P21785 and previous config saved to /var/cache/conftool/dbconfig/20220303-110220-kormat.json [production]
10:58 <ladsgroup@deploy1002> Synchronized php-1.38.0-wmf.23/includes/libs/rdbms/loadbalancer/LoadBalancer.php: Backport: [[gerrit:767692|rdbms: Change getConnectionRef to return with getLazyConnectionRef (T255493)]] (duration: 00m 50s) [production]
10:50 <ladsgroup@deploy1002> Synchronized php-1.38.0-wmf.24/includes/libs/rdbms/loadbalancer/LoadBalancer.php: Backport: [[gerrit:767691|rdbms: Change getConnectionRef to return with getLazyConnectionRef (T255493)]] (duration: 00m 51s) [production]
10:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1160', diff saved to https://phabricator.wikimedia.org/P21784 and previous config saved to /var/cache/conftool/dbconfig/20220303-104713-ladsgroup.json [production]
10:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317 (T300992)', diff saved to https://phabricator.wikimedia.org/P21783 and previous config saved to /var/cache/conftool/dbconfig/20220303-103659-ladsgroup.json [production]
10:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1160', diff saved to https://phabricator.wikimedia.org/P21782 and previous config saved to /var/cache/conftool/dbconfig/20220303-103209-ladsgroup.json [production]
10:30 <XioNoX> repool ulsfo [production]
10:21 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317', diff saved to https://phabricator.wikimedia.org/P21781 and previous config saved to /var/cache/conftool/dbconfig/20220303-102154-ladsgroup.json [production]
10:18 <elukey> kubectl cordon kubernetes200[1-4] to avoid scheduling pods on nodes that will be decommed during the next weeks - T302208 [production]
10:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1160 (T302950)', diff saved to https://phabricator.wikimedia.org/P21780 and previous config saved to /var/cache/conftool/dbconfig/20220303-101704-ladsgroup.json [production]
10:09 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1160.eqiad.wmnet with OS bullseye [production]
10:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317', diff saved to https://phabricator.wikimedia.org/P21779 and previous config saved to /var/cache/conftool/dbconfig/20220303-100649-ladsgroup.json [production]
09:53 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1160.eqiad.wmnet with reason: host reimage [production]
09:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317 (T300992)', diff saved to https://phabricator.wikimedia.org/P21778 and previous config saved to /var/cache/conftool/dbconfig/20220303-095145-ladsgroup.json [production]
09:48 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1160.eqiad.wmnet with reason: host reimage [production]