2023-02-20
08:39 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
08:08 <moritzm> updating openjdk-11 on elastic* servers T329957 [production]
07:44 <moritzm> imported jenkins 2.375.3 to thirdparty/ci T330045 [production]
07:41 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:41 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:40 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:40 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:40 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:39 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:39 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:39 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:27 <Amir1> running migrateTagTemplate.php on all wikis (T329766) [production]
06:40 <hashar> Restarting Gerrit [production]
2023-02-18
08:29 <elukey> kill leftover processes of user `mepps` (offboarded) from stat100[4,5] to unblock puppet [production]
08:24 <elukey> delete /var/log/{syslog,messages,user.log}.1 on kubestagetcd1005 to free space [production]
08:22 <elukey> delete /var/log/{messages,user.log}.1 on kubestagetcd1006 to free space [production]
08:21 <elukey> delete /var/log/syslog.1 on kubestagetcd1006 to free space [production]
2023-02-17
22:45 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart - bking@cumin1001 - T329957 [production]
22:09 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart - bking@cumin1001 - T329957 [production]
22:06 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart - bking@cumin1001 - T329957 [production]
22:05 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart - bking@cumin1001 - T329957 [production]
19:30 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host restbase1016.eqiad.wmnet [production]
19:02 <eevans@cumin1001> START - Cookbook sre.hosts.reboot-single for host restbase1016.eqiad.wmnet [production]
18:46 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching restbase10*.eqiad.wmnet: Restarting Cassandra to apply JVM 1.8.0_362 - eevans@cumin1001 [production]
17:49 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2187.codfw.wmnet with OS bullseye [production]
17:49 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
17:48 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
17:33 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2187.codfw.wmnet with reason: host reimage [production]
17:30 <pt1979@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2187.codfw.wmnet with reason: host reimage [production]
17:27 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2185.codfw.wmnet with OS bullseye [production]
17:27 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
17:19 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
17:10 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: relforge cluster restart - bking@cumin1001 - T329957 [production]
17:10 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host db2187.codfw.wmnet with OS bullseye [production]
17:06 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: relforge cluster restart - bking@cumin1001 - T329957 [production]
17:06 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: relforge cluster restart - bking@cumin1001 - T329957 [production]
17:06 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: relforge cluster restart - bking@cumin1001 - T329957 [production]
17:04 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2185.codfw.wmnet with reason: host reimage [production]
17:01 <pt1979@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2185.codfw.wmnet with reason: host reimage [production]
16:42 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host db2185.codfw.wmnet with OS bullseye [production]
16:40 <pt1979@cumin2002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host db2185.codfw.wmnet with OS bullseye [production]
16:31 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host db2185.codfw.wmnet with OS bullseye [production]
16:20 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['db2187'] [production]
16:20 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db2187'] [production]
16:02 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['db2187'] [production]
16:01 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db2187'] [production]
16:00 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host db2187.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:53 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host db2187.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:50 <elukey@cumin1001> END (PASS) - Cookbook sre.k8s.wipe-cluster (exit_code=0) Wipe the K8s cluster ml-staging-codfw: T327767 [production]
15:45 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]