2021-11-23
12:26 <lucaswerkmeister-wmde@deploy1002> Synchronized php-1.38.0-wmf.9/extensions/ProofreadPage/modules/page/ext.proofreadpage.page.edit.js: Backport: [[gerrit:740778|OSD: Add a ready hook for scripts (T180569)]] (duration: 00m 56s) [production]
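(A "Synchronized <path>: <message>" line like the one above is scap's own log output for a single-file sync from the deployment host. A minimal sketch of the equivalent manual step, assuming the backport was already pulled onto deploy1002 and the command is run from the MediaWiki staging directory:

    # push one file to all app servers; the message is what ends up in this log
    scap sync-file php-1.38.0-wmf.9/extensions/ProofreadPage/modules/page/ext.proofreadpage.page.edit.js \
        'Backport: [[gerrit:740778|OSD: Add a ready hook for scripts (T180569)]]'
)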
12:24 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
12:21 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'apple-search' for release 'main' . [production]
12:12 <oblivian@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'apple-search' for release 'main' . [production]
12:09 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'apple-search' for release 'main' . [production]
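(The helmfile lines above are emitted automatically by the service deployment tooling; roughly, they correspond to invocations like the sketch below. The working directory and the release selector are assumptions for illustration only:

    # run from the apple-search service directory under helmfile.d on the deploy host
    helmfile -e staging --selector name=main sync   # staging cluster first
    helmfile -e codfw   --selector name=main sync   # then the codfw production cluster
)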
12:04 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM pybal-test2003.codfw.wmnet [production]
12:01 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM pybal-test2003.codfw.wmnet [production]
11:54 <btullis@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching O:aqs: restarting to pick up new JRE - btullis@cumin1001 [production]
11:51 <btullis@cumin1001> END (ERROR) - Cookbook sre.aqs.roll-restart (exit_code=97) for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. [production]
11:51 <btullis@cumin1001> START - Cookbook sre.aqs.roll-restart for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. [production]
11:41 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM pybal-test2002.codfw.wmnet [production]
11:41 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM pybal-test2002.codfw.wmnet [production]
11:25 <godog> powercycle ms-be2058 - down and nothing on console [production]
11:17 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp5012.eqsin.wmnet with OS buster [production]
11:15 <vgutierrez> pool cp5012 (text) using HAProxy as TLS terminator - T290005 [production]
11:08 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
11:08 <Amir1> start of mwscript migrateRevisionActorTemp.php --wiki=testwiki --sleep=5 (T275246) [production]
11:05 <jayme> cordoned kubestage1003.eqiad.wmnet kubestage1004.eqiad.wmnet (we have issues with POD IP prefix allocation) - T293729 [production]
11:05 <jayme> uncordoned kubestage1001.eqiad.wmnet kubestage1002.eqiad.wmnet (we have issues with POD IP prefix allocation) - T293729 [production]
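(The cordon/uncordon entries above map to standard kubectl node-scheduling commands against the staging cluster; kubeconfig/context selection is omitted here. See T293729 for the POD IP prefix allocation issue being worked around:

    # mark the newly added nodes unschedulable while the IP allocation issue is investigated
    kubectl cordon kubestage1003.eqiad.wmnet
    kubectl cordon kubestage1004.eqiad.wmnet
    # allow workloads to schedule again on the previously cordoned pair
    kubectl uncordon kubestage1001.eqiad.wmnet
    kubectl uncordon kubestage1002.eqiad.wmnet
)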
11:04 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
11:02 <ladsgroup@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:740807|Set test wikis to write both for actor temp table migration (T275246)]] (duration: 00m 56s) [production]
10:38 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
10:31 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
10:30 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:30 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet with reason: Maintenance T296143 [production]
10:28 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet with reason: Maintenance T296143 [production]
10:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1121 (T296143)', diff saved to https://phabricator.wikimedia.org/P17800 and previous config saved to /var/cache/conftool/dbconfig/20211123-102234-ladsgroup.json [production]
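(The dbctl entry above records a depool committed through conftool; a sketch of the usual two-step flow, with the commit message taken from the log. The exact option spelling is an assumption:

    dbctl instance db1121 depool                          # mark the replica as depooled
    dbctl config commit -m 'Depooling db1121 (T296143)'   # write the new config to etcd
)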
10:22 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1121.eqiad.wmnet with reason: Maintenance T296143 [production]
10:22 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on db1121.eqiad.wmnet with reason: Maintenance T296143 [production]
10:19 <urbanecm@deploy1002> Finished scap: c98acaa2ab27e630c0a1b55a64fb81b131c087f9: Backport localisation updates (duration: 11m 06s) [production]
10:19 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:18 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
10:08 <urbanecm@deploy1002> Started scap: c98acaa2ab27e630c0a1b55a64fb81b131c087f9: Backport localisation updates [production]
10:08 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reimage for host cp5012.eqsin.wmnet with OS buster [production]
10:01 <vgutierrez> depool cp5012 to be reimaged as cache::text_haproxy - T290005 [production]
09:57 <jayme> cordoned kubestage1001.eqiad.wmnet kubestage1002.eqiad.wmnet - T293729 [production]
09:52 <kharlan@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'linkrecommendation' for release 'staging' . [production]
09:37 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1124.eqiad.wmnet with OS bullseye [production]
09:27 <Amir1> dropping useless GRANTs on s6 eqiad replicas without replication (T296274) [production]
09:16 <Amir1> dropping useless GRANTs on s6 eqiad master without replication (T296274) [production]
09:09 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host db1124.eqiad.wmnet with OS bullseye [production]
09:05 <Amir1> fixing incorrect grants of wikiadmin on localhost in s6 master in codfw with replication [production]
07:52 <topranks> Adjusting BGP on cr1-eqiad and cr2-eqiad to keep MED unchanged in iBGP. [production]
07:08 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1125.eqiad.wmnet with OS bullseye [production]
06:41 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host db1125.eqiad.wmnet with OS bullseye [production]
05:28 <ryankemper> T295705 Downtimed `elastic2044` for one hour and doing a full reboot for good measure. Already ran the plugin upgrade: `DEBIAN_FRONTEND=noninteractive sudo apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install elasticsearch-oss wmf-elasticsearch-search-plugins` [production]
05:26 <ryankemper> T295705 Rolling restart of `codfw` complete. `elastic2044` was manually restarted earlier today, so the cookbook didn't restart it (because we pass in a datetime cutoff threshold); I'm manually upgrading and restarting that host [production]