2022-08-01
20:07 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:06 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:06 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:05 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:03 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2054.codfw.wmnet with OS bullseye [production]
19:41 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2054.codfw.wmnet with reason: host reimage [production]
19:35 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2054.codfw.wmnet with reason: host reimage [production]
19:12 <ryankemper@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2054.codfw.wmnet with OS bullseye [production]
18:56 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2031.codfw.wmnet with OS bullseye [production]
18:44 <mutante> gitlab - moved data_persistence group to new parent, under /repos/ [production]
18:34 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2031.codfw.wmnet with reason: host reimage [production]
18:32 <mutante> gitlab - created group 'data_persistence' - added Ladsgroup and upgraded from member to maintainer [production]
18:27 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2031.codfw.wmnet with reason: host reimage [production]
18:12 <ryankemper@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2031.codfw.wmnet with OS bullseye [production]
17:58 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2025.codfw.wmnet with OS bullseye [production]
17:37 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2025.codfw.wmnet with reason: host reimage [production]
17:31 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2025.codfw.wmnet with reason: host reimage [production]
17:18 <ryankemper> T289135 T314078 Manually reimaging remaining codfw stretch hosts (`elastic[2025,2031,2054,2059-2060]`) to bullseye, one host at a time, waiting for green cluster status to return between each run. `ryankemper@cumin1001` tmux session `codfw_reimage` [production]
17:16 <ryankemper@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2025.codfw.wmnet with OS bullseye [production]
17:08 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.REIMAGE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reimage (bullseye upgrade) - bking@cumin1001 - T289135 [production]
17:08 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REIMAGE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reimage (bullseye upgrade) - bking@cumin1001 - T289135 [production]
17:06 <mutante> alert1001 - systemctl restart nsca - pinged by fundraising tech because fundraising hosts have the "passive check is awol" issue again (T196336) [production]
16:25 <moritzm> installing tcpdump updates from bullseye point release [production]
16:23 <cwhite@puppetmaster1001> conftool action : set/pooled=yes; selector: dc=codfw,cluster=kibana7,name=logstash2023.codfw.wmnet [production]
16:16 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dbproxy1018.eqiad.wmnet with OS bullseye [production]
16:10 <cwhite@puppetmaster1001> conftool action : set/pooled=no; selector: dc=codfw,cluster=kibana7,name=logstash2023.codfw.wmnet [production]
15:57 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dbproxy1018.eqiad.wmnet with reason: host reimage [production]
15:54 <btullis@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on dbproxy1018.eqiad.wmnet with reason: host reimage [production]
15:41 <btullis@cumin1001> START - Cookbook sre.hosts.reimage for host dbproxy1018.eqiad.wmnet with OS bullseye [production]
15:39 <mvernon@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching restbase1016.eqiad.wmnet: Canary testing of 3.11.13 on Restbase T309896 - mvernon@cumin1001 [production]
15:33 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:29 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:29 <mvernon@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching restbase1016.eqiad.wmnet: Canary testing of 3.11.13 on Restbase T309896 - mvernon@cumin1001 [production]
15:14 <lucaswerkmeister-wmde@deploy1002> Synchronized wmf-config/InitialiseSettings-labs.php: Config: [[gerrit:818127|Beta: add configuration for redirect badges (T313896)]] (2/2, should be a no-op) (duration: 03m 30s) [production]
15:11 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
15:11 <lucaswerkmeister-wmde@deploy1002> Synchronized wmf-config/Wikibase.php: Config: [[gerrit:818127|Beta: add configuration for redirect badges (T313896)]] (1/2, should be a no-op) (duration: 03m 15s) [production]
15:10 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
15:10 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
15:09 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
14:54 <btullis@puppetmaster1001> conftool action : set/pooled=no; selector: cluster=wikireplicas-a,name=dbproxy1018.eqiad.wmnet [production]
14:53 <btullis@puppetmaster1001> conftool action : set/pooled=yes; selector: cluster=wikireplicas-a,name=dbproxy1019.eqiad.wmnet [production]
14:42 <moritzm> installing openjdk-11 security updates [production]
14:39 <btullis@puppetmaster1001> conftool action : set/pooled=inactive; selector: cluster=wikireplicas-a,name=dbproxy1019.eqiad.wmnet [production]
14:39 <btullis@puppetmaster1001> conftool action : set/pooled=yes; selector: cluster=wikireplicas-a,name=dbproxy1018.eqiad.wmnet [production]
14:38 <btullis@puppetmaster1001> conftool action : set/pooled=no; selector: cluster=wikireplicas-a,name=dbproxy1018.eqiad.wmnet [production]
14:34 <btullis@puppetmaster1001> conftool action : set/pooled=yes; selector: cluster=wikireplicas-a,name=dbproxy1019.eqiad.wmnet [production]
14:30 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
14:30 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
14:29 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
14:29 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]