2024-09-09
07:37 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host kubestage2002.codfw.wmnet with OS bookworm [production]
07:36 <jayme@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubestage2002.codfw.wmnet [production]
07:33 <jayme@cumin1002> START - Cookbook sre.k8s.pool-depool-node depool for host kubestage2002.codfw.wmnet [production]
07:33 <jayme@cumin1002> START - Cookbook sre.k8s.renumber-node Renumbering for host kubestage2002.codfw.wmnet [production]
07:33 <jayme@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host kubestage2001.codfw.wmnet [production]
07:33 <jayme@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host kubestage2001.codfw.wmnet [production]
07:17 <moritzm> installing Linux 5.10.223 on bullseye hosts [production]
07:06 <moritzm> roll out debmonitor-client 0.4.0-2+deb11u1 on bullseye hosts [production]
06:56 <moritzm> installing aom security updates [production]
02:55 <wmbot~anticomposite@tools-bastion-13> Deploy d4122ba [tools.krinklebot]
2024-09-08
21:07 <wmbot~deltaquad@tools-bastion-12> ./stewardbots/StewardBot/manage.sh restart # Disconnected [tools.stewardbots]
21:06 <wmbot~deltaquad@tools-sgebastion-10> Restarted StewardBot/SULWatcher because of a connection loss [tools.stewardbots]
09:49 <wmbot~bsadowski1@tools-bastion-13> Restarted StewardBot/SULWatcher because of a connection loss [tools.stewardbots]
06:18 <wmbot~deltaquad@tools-bastion-12> ./stewardbots/StewardBot/manage.sh restart # Disconnected [tools.stewardbots]
2024-09-07
14:44 <ebernhardson@deploy1003> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
14:44 <ebernhardson@deploy1003> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
14:38 <dani@deploy1003> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
14:38 <dani@deploy1003> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
14:38 <dani@deploy1003> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
14:38 <dani@deploy1003> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
14:38 <dani@deploy1003> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
14:38 <dani@deploy1003> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
14:34 <dani@deploy1003> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
14:34 <dani@deploy1003> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
14:34 <dani@deploy1003> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
14:34 <dani@deploy1003> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
14:34 <dani@deploy1003> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
14:34 <dani@deploy1003> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
10:28 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on db1246.eqiad.wmnet with reason: https://phabricator.wikimedia.org/T374215 → server depooled has hardware issues [production]
10:28 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on db1246.eqiad.wmnet with reason: https://phabricator.wikimedia.org/T374215 → server depooled has hardware issues [production]
2024-09-06
23:18 <dcaro@cloudcumin1001> END (FAIL) - Cookbook wmcs.ceph.osd.drain_node (exit_code=99) (T373986) [admin]
22:35 <ebernhardson@deploy1003> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
22:35 <ebernhardson@deploy1003> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
19:07 <dduvall@deploy1003> Finished deploy [releng/jenkins-deploy@6ca00a7] (releasing): (no justification provided) (duration: 00m 43s) [production]
19:06 <dduvall@deploy1003> Started deploy [releng/jenkins-deploy@6ca00a7] (releasing): (no justification provided) [production]
19:01 <ebernhardson@deploy1003> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
19:00 <ebernhardson@deploy1003> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
18:57 <ebernhardson@deploy1003> helmfile [codfw] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
18:57 <ebernhardson@deploy1003> helmfile [codfw] START helmfile.d/services/cirrus-streaming-updater: apply [production]
18:32 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 7:00:00 on db2200.codfw.wmnet with reason: Maintenance [production]
18:32 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 7:00:00 on db2200.codfw.wmnet with reason: Maintenance [production]
18:17 <dcaro@cloudcumin1001> START - Cookbook wmcs.ceph.osd.drain_node (T373986) [admin]
18:12 <brett> Import ncmonitor 1.2.1-1 into bookworm-wikimedia apt archive [production]
17:58 <dcaro@cloudcumin1001> END (PASS) - Cookbook wmcs.ceph.osd.drain_node (exit_code=0) (T373986) [admin]
17:55 <brett> Import corto 0.3.1-1 into bookworm-wikimedia apt archive [production]
16:46 <kamila_> ran homer on cr*codfw* for T372878 [production]
16:30 <kamila@cumin1002> END (PASS) - Cookbook sre.k8s.renumber-node (exit_code=0) Renumbering for host wikikube-worker2103.codfw.wmnet [production]
16:30 <kamila@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker2103.codfw.wmnet [production]
16:30 <kamila@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker2103.codfw.wmnet [production]
16:24 <kamila@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2103.codfw.wmnet with OS bullseye [production]