2025-01-30
14:11 <hashar@deploy2002> urbanecm, hashar: Continuing with sync [production]
14:11 <marostegui@cumin1002> dbctl commit (dc=all): 'db2172 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72872 and previous config saved to /var/cache/conftool/dbconfig/20250130-141104-root.json [production]
14:10 <aokoth@deploy2002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
14:10 <hashar@deploy2002> urbanecm, hashar: Backport for [[gerrit:1115062|migrateConfigToCommunity: Handle false BabelMainCategory (T384941)]], [[gerrit:1115059|migrateConfigToCommunity: Handle false BabelMainCategory (T384941)]], [[gerrit:1115336|migrateConfigToCommunity: Include an edit summary (T385024)]], [[gerrit:1115337|migrateConfigToCommunity: Include an edit summary (T385024)]] synced to the testservers (https://wik [production]
14:06 <hashar@deploy2002> Started scap sync-world: Backport for [[gerrit:1115062|migrateConfigToCommunity: Handle false BabelMainCategory (T384941)]], [[gerrit:1115059|migrateConfigToCommunity: Handle false BabelMainCategory (T384941)]], [[gerrit:1115336|migrateConfigToCommunity: Include an edit summary (T385024)]], [[gerrit:1115337|migrateConfigToCommunity: Include an edit summary (T385024)]] [production]
14:05 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2194 (T384592)', diff saved to https://phabricator.wikimedia.org/P72871 and previous config saved to /var/cache/conftool/dbconfig/20250130-140553-marostegui.json [production]
14:04 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2029.codfw.wmnet [production]
13:55 <marostegui@cumin1002> dbctl commit (dc=all): 'db2172 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72870 and previous config saved to /var/cache/conftool/dbconfig/20250130-135559-root.json [production]
13:50 <jayme@cumin1002> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
13:50 <marostegui@cumin1002> dbctl commit (dc=all): 'db2224 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72869 and previous config saved to /var/cache/conftool/dbconfig/20250130-135024-root.json [production]
13:45 <elukey@deploy2002> helmfile [eqiad] DONE helmfile.d/services/kartotherian: sync [production]
13:44 <elukey@deploy2002> helmfile [eqiad] START helmfile.d/services/kartotherian: sync [production]
13:36 <hashar@deploy2002> Finished scap sync-world: Backport for [[gerrit:1115344|Fix response error handling in FlickrBlacklist (T385143)]] (duration: 11m 54s) [production]
13:35 <marostegui@cumin1002> dbctl commit (dc=all): 'db2224 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72868 and previous config saved to /var/cache/conftool/dbconfig/20250130-133519-root.json [production]
13:30 <hashar@deploy2002> hashar: Continuing with sync [production]
13:27 <hashar@deploy2002> hashar: Backport for [[gerrit:1115344|Fix response error handling in FlickrBlacklist (T385143)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
13:24 <hashar@deploy2002> Started scap sync-world: Backport for [[gerrit:1115344|Fix response error handling in FlickrBlacklist (T385143)]] [production]
13:20 <marostegui@cumin1002> dbctl commit (dc=all): 'db2224 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72867 and previous config saved to /var/cache/conftool/dbconfig/20250130-132014-root.json [production]
13:15 <aokoth@deploy2002> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
13:09 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on maps-test2001.codfw.wmnet with reason: host reimage [production]
13:06 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on maps-test2001.codfw.wmnet with reason: host reimage [production]
13:05 <marostegui@cumin1002> dbctl commit (dc=all): 'db2224 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72866 and previous config saved to /var/cache/conftool/dbconfig/20250130-130509-root.json [production]
13:05 <aokoth@deploy2002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
13:03 <jayme@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on 7 hosts with reason: K8s update [production]
12:50 <marostegui@cumin1002> dbctl commit (dc=all): 'db2224 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72865 and previous config saved to /var/cache/conftool/dbconfig/20250130-125004-root.json [production]
12:45 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host maps-test2001.codfw.wmnet with OS bookworm [production]
12:28 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P72864 and previous config saved to /var/cache/conftool/dbconfig/20250130-122856-root.json [production]
12:24 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2194 (T384592)', diff saved to https://phabricator.wikimedia.org/P72863 and previous config saved to /var/cache/conftool/dbconfig/20250130-122416-marostegui.json [production]
12:24 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2194.codfw.wmnet with reason: Maintenance [production]
12:23 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2190 (T384592)', diff saved to https://phabricator.wikimedia.org/P72862 and previous config saved to /var/cache/conftool/dbconfig/20250130-122354-marostegui.json [production]
12:13 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P72861 and previous config saved to /var/cache/conftool/dbconfig/20250130-121351-root.json [production]
12:11 <hnowlan@deploy2002> Finished scap sync-world: testing removal of scap proxies (duration: 03m 18s) [production]
12:08 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2190', diff saved to https://phabricator.wikimedia.org/P72860 and previous config saved to /var/cache/conftool/dbconfig/20250130-120847-marostegui.json [production]
12:08 <hnowlan@deploy2002> Started scap sync-world: testing removal of scap proxies [production]
11:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P72859 and previous config saved to /var/cache/conftool/dbconfig/20250130-115846-root.json [production]
11:53 <marostegui@cumin1002> dbctl commit (dc=all): 'db1183 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72858 and previous config saved to /var/cache/conftool/dbconfig/20250130-115348-root.json [production]
11:53 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2190', diff saved to https://phabricator.wikimedia.org/P72857 and previous config saved to /var/cache/conftool/dbconfig/20250130-115340-marostegui.json [production]
11:52 <elukey@deploy2002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-editquality-damaging' for release 'main' . [production]
11:43 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P72856 and previous config saved to /var/cache/conftool/dbconfig/20250130-114341-root.json [production]
11:40 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2029.codfw.wmnet [production]
11:40 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2029.codfw.wmnet [production]
11:39 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2029.codfw.wmnet [production]
11:38 <marostegui@cumin1002> dbctl commit (dc=all): 'db1183 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P72855 and previous config saved to /var/cache/conftool/dbconfig/20250130-113842-root.json [production]
11:38 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2190 (T384592)', diff saved to https://phabricator.wikimedia.org/P72854 and previous config saved to /var/cache/conftool/dbconfig/20250130-113833-marostegui.json [production]
11:35 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2224.codfw.wmnet with reason: Index rebuild [production]
11:34 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2224.codfw.wmnet [production]
11:29 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db2224.codfw.wmnet [production]
11:28 <ladsgroup@deploy2002> Synchronized portals/wikipedia.org/assets: Bump portals (second try) (duration: 11m 07s) [production]
11:28 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2224', diff saved to https://phabricator.wikimedia.org/P72853 and previous config saved to /var/cache/conftool/dbconfig/20250130-112853-marostegui.json [production]
11:28 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P72852 and previous config saved to /var/cache/conftool/dbconfig/20250130-112836-root.json [production]