2024-09-18 §
15:19 <bking@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
15:19 <sukhe> rolling out TLS1.3 cipher suite priority order change CR 1073798 to all cp hosts [production]
15:19 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host mw2444.codfw.wmnet [production]
15:19 <bking@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
15:19 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2051.codfw.wmnet [production]
15:19 <aokoth@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on vrts2002.codfw.wmnet with reason: Migration [production]
15:18 <aokoth@cumin1002> START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on vrts2002.codfw.wmnet with reason: Migration [production]
15:18 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2051.codfw.wmnet [production]
15:18 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2050.codfw.wmnet [production]
15:17 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2050.codfw.wmnet [production]
15:17 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2049.codfw.wmnet [production]
15:17 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2049.codfw.wmnet [production]
15:16 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2048.codfw.wmnet [production]
15:16 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2048.codfw.wmnet [production]
15:16 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2024.codfw.wmnet [production]
15:15 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2024.codfw.wmnet [production]
15:15 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2014.codfw.wmnet [production]
15:14 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2014.codfw.wmnet [production]
15:14 <swfrench@cumin2002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2013.codfw.wmnet [production]
15:14 <swfrench@cumin2002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2013.codfw.wmnet [production]
15:08 <denisse> Resolve alerts DNS queries to alert1002 - T372418 [production]
15:03 <_joe_> uploading conftool 3.2.4 to apt T375059 [production]
15:02 <sukhe> sudo cumin "A:cp" 'disable-puppet "merging CR 1073798"': T365327 [production]
15:01 <denisse> Make alert1002 the active host - T372418 [production]
15:00 <denisse> Disable meta-monitoring for the alert hosts - T372418 [production]
14:55 <elukey> restart poolcounter on poolcounter100[4,5] (depooled nodes) to clear old/stale TCP conns for port 7531 [production]
14:54 <dcausse@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:54 <dcausse@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:54 <dcausse@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:54 <dcausse@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:53 <ayounsi@cumin1002> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 55655 [production]
14:52 <ayounsi@cumin1002> START - Cookbook sre.network.peering with action 'configure' for AS: 55655 [production]
14:50 <dcausse@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:49 <dcausse@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:47 <dcausse@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:46 <dcausse@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:45 <dcausse@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:45 <dcausse@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:42 <elukey@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ganeti1052.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART [production]
14:40 <sukhe@cumin1002> END (PASS) - Cookbook sre.dns.roll-restart-reboot-wikimedia-dns (exit_code=0) rolling restart_daemons on A:wikidough and A:wikidough [production]
14:36 <elukey@cumin1002> START - Cookbook sre.hosts.provision for host ganeti1052.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART [production]
14:26 <sukhe@cumin1002> START - Cookbook sre.dns.roll-restart-reboot-wikimedia-dns rolling restart_daemons on A:wikidough and A:wikidough [production]
14:25 <bking@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:24 <sukhe> run puppet agent on A:wikidough [production]
14:23 <bking@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:19 <bking@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:19 <bking@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
14:07 <bking@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
14:07 <bking@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
13:53 <elukey@deploy1003> Finished scap sync-world: Backport for [[gerrit:1073503|Swap poolcounter1005 with poolcounter1007 (T332015)]] (duration: 07m 23s) [production]