2024-12-12
14:03 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker2127.codfw.wmnet [production]
14:03 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker2127.codfw.wmnet [production]
14:03 <elukey@deploy2002> helmfile [codfw] START helmfile.d/services/tegola-vector-tiles: sync [production]
14:01 <btullis@cumin1002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-flink-codfw cluster: Roll restart of jvm daemons. [production]
14:01 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2127.codfw.wmnet with OS bookworm [production]
13:55 <btullis@cumin1002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-flink-codfw cluster: Roll restart of jvm daemons. [production]
13:53 <elukey@deploy2002> helmfile [eqiad] DONE helmfile.d/services/tegola-vector-tiles: sync [production]
13:52 <elukey@deploy2002> helmfile [eqiad] START helmfile.d/services/tegola-vector-tiles: sync [production]
13:52 <btullis@cumin1002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-flink-eqiad cluster: Roll restart of jvm daemons. [production]
13:48 <elukey@deploy2002> helmfile [staging] DONE helmfile.d/services/tegola-vector-tiles: sync [production]
13:48 <elukey@deploy2002> helmfile [staging] START helmfile.d/services/tegola-vector-tiles: sync [production]
13:48 <marostegui@cumin1002> dbctl commit (dc=all): 'db1169 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P71704 and previous config saved to /var/cache/conftool/dbconfig/20241212-134824-root.json [production]
13:48 <elukey@deploy2002> helmfile [staging-codfw] DONE helmfile.d/admin 'sync'. [production]
13:47 <elukey@deploy2002> helmfile [staging-codfw] START helmfile.d/admin 'sync'. [production]
13:47 <elukey@deploy2002> helmfile [staging-eqiad] DONE helmfile.d/admin 'sync'. [production]
13:46 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db1169.eqiad.wmnet with reason: maintenance [production]
13:46 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on db1169.eqiad.wmnet with reason: maintenance [production]
13:46 <elukey@deploy2002> helmfile [staging-eqiad] START helmfile.d/admin 'sync'. [production]
13:46 <btullis@cumin1002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-flink-eqiad cluster: Roll restart of jvm daemons. [production]
13:41 <moritzm> installing Python 3.11 security updates [production]
13:41 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2127.codfw.wmnet with reason: host reimage [production]
13:39 <elukey@deploy2002> helmfile [staging] DONE helmfile.d/services/tegola-vector-tiles: sync [production]
13:38 <jelto@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on wikikube-worker2127.codfw.wmnet with reason: host reimage [production]
13:36 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1169 (T381532)', diff saved to https://phabricator.wikimedia.org/P71703 and previous config saved to /var/cache/conftool/dbconfig/20241212-133633-marostegui.json [production]
13:36 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
13:36 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
13:32 <moritzm> rebalance Ganeti cluster in codfw/D following server refresh T376594 [production]
13:29 <elukey@deploy2002> helmfile [staging] START helmfile.d/services/tegola-vector-tiles: sync [production]
13:19 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on pc[2014,2016].codfw.wmnet with reason: maintenance [production]
13:19 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on pc[2014,2016].codfw.wmnet with reason: maintenance [production]
13:18 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.move-vlan (exit_code=0) for host wikikube-worker2127 [production]
13:18 <jelto@cumin1002> START - Cookbook sre.hosts.move-vlan for host wikikube-worker2127 [production]
13:18 <jelto@cumin1002> START - Cookbook sre.hosts.reimage for host wikikube-worker2127.codfw.wmnet with OS bookworm [production]
13:16 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host wikikube-worker2127.codfw.wmnet [production]
13:15 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node depool for host wikikube-worker2127.codfw.wmnet [production]
13:15 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on pc[1013,1017].eqiad.wmnet with reason: maintenance [production]
13:15 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on pc[1013,1017].eqiad.wmnet with reason: maintenance [production]
13:11 <mszabo@deploy2002> Finished scap sync-world: Backport for [[gerrit:1102813|Enable IRS in the Project namespace on ptwiki (T382061)]] (duration: 09m 41s) [production]
13:06 <mszabo@deploy2002> mszabo: Continuing with sync [production]
13:05 <mszabo@deploy2002> mszabo: Backport for [[gerrit:1102813|Enable IRS in the Project namespace on ptwiki (T382061)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
13:02 <mszabo@deploy2002> Started scap sync-world: Backport for [[gerrit:1102813|Enable IRS in the Project namespace on ptwiki (T382061)]] [production]
12:36 <btullis@cumin1002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: Restarting to pick up new JRE for T377938 - btullis@cumin1002 - T377938 [production]
12:31 <btullis@cumin1002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: Restarting to pick up new JRE for T377938 - btullis@cumin1002 - T377938 [production]
12:30 <btullis@cumin1002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: Restarting to pick up new JRE for T377938 - btullis@cumin1002 - T377938 [production]
12:29 <btullis@cumin1002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: Restarting to pick up new JRE for T377938 - btullis@cumin1002 - T377938 [production]
12:15 <hnowlan@deploy1003> helmfile [codfw] DONE helmfile.d/services/mw-videoscaler: apply [production]
12:15 <hnowlan@deploy1003> helmfile [codfw] START helmfile.d/services/mw-videoscaler: apply [production]
12:15 <hnowlan@deploy1003> helmfile [eqiad] DONE helmfile.d/services/mw-videoscaler: apply [production]
12:15 <hnowlan@deploy1003> helmfile [eqiad] START helmfile.d/services/mw-videoscaler: apply [production]
12:10 <hnowlan@deploy2002> Finished scap sync-world: syncing changes to mediawiki chart vendor dependencies (duration: 09m 30s) [production]