2022-04-07
18:43 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1030.eqiad.wmnet [production]
18:39 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1029.eqiad.wmnet [production]
18:38 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1031.eqiad.wmnet [production]
18:35 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1030.eqiad.wmnet [production]
18:34 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1032.eqiad.wmnet [production]
18:33 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:32 <ryankemper> [Elastic] Pooled `elastic1052` (it was likely left depooled erroneously after https://phabricator.wikimedia.org/P19885) [production]
18:29 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1031.eqiad.wmnet [production]
18:29 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1033.eqiad.wmnet [production]
18:28 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:25 <razzi@cumin1001> START - Cookbook sre.hadoop.reboot-workers for Hadoop test cluster [production]
18:22 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1032.eqiad.wmnet [production]
18:22 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1034.eqiad.wmnet [production]
18:17 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1033.eqiad.wmnet [production]
18:17 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1035.eqiad.wmnet [production]
18:12 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dse-k8s-worker1004.eqiad.wmnet with OS bullseye [production]
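The sre.hosts.reimage runs logged here (this one and the other dse-k8s-worker reimages further down) are started from a cumin host. A sketch of one such invocation, assuming the cookbook's usual interface (the exact flag names are assumptions, not taken from this log):

    # on cumin1001; sketch only, --os flag name assumed
    sudo cookbook sre.hosts.reimage --os bullseye dse-k8s-worker1004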
18:09 <ryankemper> [WDQS Deploy] Deploy complete. Successful test query placed on query.wikidata.org, there are no relevant criticals in Icinga, and Grafana looks good [production]
18:08 <ryankemper> [WCQS Deploy] Successful test query placed on commons-query.wikimedia.org, there are no relevant criticals in Icinga, and Grafana looks good. WCQS deploy complete [production]
18:08 <ryankemper> [WCQS Deploy] Restarted `wcqs-updater` across all hosts [production]
18:08 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1034.eqiad.wmnet [production]
18:07 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1035.eqiad.wmnet [production]
18:07 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1037.eqiad.wmnet [production]
18:04 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dse-k8s-worker1002.eqiad.wmnet with OS bullseye [production]
18:02 <dzahn@cumin2002> conftool action : set/pooled=yes; selector: dc=eqiad,name=wtp1036.eqiad.wmnet [production]
18:01 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@0d95eca] (wcqs): Deploy 0.3.110 to WCQS (duration: 01m 58s) [production]
18:01 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dse-k8s-worker1003.eqiad.wmnet with OS bullseye [production]
18:00 <ryankemper> [WCQS Deploy] Tests look good following deploy of `0.3.110` to `wcqs1003.eqiad.wmnet`, proceeding to the rest of the fleet [production]
17:59 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@0d95eca] (wcqs): Deploy 0.3.110 to WCQS [production]
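The Started/Finished deploy entries from deploy1002 are emitted by scap. A sketch of the kind of invocation behind this one, assuming a standard scap deployment checkout (the deployment path and the environment flag are assumptions):

    # on deploy1002; sketch only, path and -e flag assumed
    cd /srv/deployment/wdqs/wdqs
    scap deploy -e wcqs 'Deploy 0.3.110 to WCQS'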
17:58 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dse-k8s-worker1001.eqiad.wmnet with OS bullseye [production]
17:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1171.eqiad.wmnet with reason: Maintenance [production]
17:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1171.eqiad.wmnet with reason: Maintenance [production]
17:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on 10 hosts with reason: Maintenance [production]
17:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on 10 hosts with reason: Maintenance [production]
17:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
17:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
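The downtime START/END pairs above come from the sre.hosts.downtime cookbook. A sketch of one such run, with the duration and reason flag names assumed from the logged parameters rather than confirmed:

    # on cumin1001; sketch only, --hours and -r flag names are assumptions
    sudo cookbook sre.hosts.downtime --hours 6 -r "Maintenance" db1171.eqiad.wmnet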
17:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T305300)', diff saved to https://phabricator.wikimedia.org/P24270 and previous config saved to /var/cache/conftool/dbconfig/20220407-175730-ladsgroup.json [production]
17:52 <mutante> rebooting wtp103* servers [production]
17:52 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1037.eqiad.wmnet [production]
17:51 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dse-k8s-worker1004.eqiad.wmnet with reason: host reimage [production]
17:50 <ryankemper> T293862 Removed touched files so that it'll be easier to see when the new jvmquake threshold is crossed: `ryankemper@cumin1001:~$ sudo -E cumin 'A:wdqs-public' "rm -fv '/tmp/wdqs_blazegraph_jvmquake_warn_gc'"` [production]
17:47 <cmjohnson@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on dse-k8s-worker1004.eqiad.wmnet with reason: host reimage [production]
17:46 <dzahn@cumin2002> conftool action : set/pooled=no; selector: dc=eqiad,name=wtp1036.eqiad.wmnet [production]
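The dzahn@cumin2002 `conftool action` entries throughout this block are the depool/repool steps of the wtp103* reboot rotation noted at 17:52. On a cumin host each step is roughly the following (sketch, reusing a host name from the entries above):

    # depool a parsoid host before reboot, repool it afterwards (sketch)
    sudo confctl select 'dc=eqiad,name=wtp1036.eqiad.wmnet' set/pooled=no
    sudo confctl select 'dc=eqiad,name=wtp1036.eqiad.wmnet' set/pooled=yes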
17:44 <ryankemper> T293862 Rolling restart of wdqs public is complete; the new jvmquake settings have been picked up on wdqs public hosts: `-agentpath:/usr/lib/libjvmquake.so=1000,5,0,warn=60,touch=/tmp/wdqs_blazegraph_jvmquake_warn_gc` [production]
17:43 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dse-k8s-worker1002.eqiad.wmnet with reason: host reimage [production]
17:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P24269 and previous config saved to /var/cache/conftool/dbconfig/20220407-174224-ladsgroup.json [production]
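The ladsgroup dbctl commits here record stepwise repooling of db1158 after maintenance. A single step would look roughly like the following (sketch; the pooling percentage and exact flags are assumptions):

    # raise db1158's pooled percentage, then commit the config change (sketch)
    sudo dbctl instance db1158 pool -p 100
    sudo dbctl config commit -m 'Repooling after maintenance db1158 (T305300)'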
17:41 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dse-k8s-worker1003.eqiad.wmnet with reason: host reimage [production]
17:40 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
17:40 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
17:40 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
17:38 <cmjohnson@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on dse-k8s-worker1002.eqiad.wmnet with reason: host reimage [production]