2024-02-28
10:34 <arnaudb@cumin1002> dbctl commit (dc=all): 'Depooling db1188 (T357189)', diff saved to https://phabricator.wikimedia.org/P58036 and previous config saved to /var/cache/conftool/dbconfig/20240228-103442-arnaudb.json [production]
10:34 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
10:34 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
10:34 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T357189)', diff saved to https://phabricator.wikimedia.org/P58035 and previous config saved to /var/cache/conftool/dbconfig/20240228-103419-arnaudb.json [production]
10:32 <claime> Lowered the weight of small disk videoscalers [production]
10:31 <cgoubert@cumin2002> conftool action : set/weight=15; selector: name=mw(2259|226[3-6]|2278|2279|2281).codfw.wmnet,cluster=videoscaler [production]
10:31 <moritzm> copy cas from bullseye-wikimedia to bookworm-wikimedia T357748 [production]
10:19 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P58034 and previous config saved to /var/cache/conftool/dbconfig/20240228-101913-arnaudb.json [production]
10:18 <volans> installed spicerack 8.4.0 on cumin1002 [production]
10:12 <claime> clearing up leftover boxedcommand media files on mw2281 - sudo find . -type f \( -name '*.wav' -o -name '*.ogg' -o -name '*.webm' -o -name '*.mov' -o -name '*.mp4' \) -mmin +1200 -exec sh -c "lsof {} || rm {}" \; [production]
10:12 <ladsgroup@cumin1002> START - Cookbook sre.mysql.clone of db2156.codfw.wmnet onto db2177.codfw.wmnet [production]
10:07 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1165 (T352010)', diff saved to https://phabricator.wikimedia.org/P58033 and previous config saved to /var/cache/conftool/dbconfig/20240228-100720-ladsgroup.json [production]
10:07 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
10:07 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
10:06 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1165.eqiad.wmnet with reason: Maintenance [production]
10:06 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1165.eqiad.wmnet with reason: Maintenance [production]
10:04 <claime> clearing up leftover boxedcommand media files on mw2278 - sudo find . -type f \( -name '*.wav' -o -name '*.ogg' -o -name '*.webm' -o -name '*.mov' -o -name '*.mp4' \) -mmin +1200 -exec sh -c "lsof {} || rm {}" \; [production]
10:04 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P58032 and previous config saved to /var/cache/conftool/dbconfig/20240228-100406-arnaudb.json [production]
10:03 <ayounsi@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on testvm2006.codfw.wmnet with reason: host reimage [production]
10:00 <ayounsi@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on testvm2006.codfw.wmnet with reason: host reimage [production]
09:54 <ladsgroup@deploy2002> Finished scap: Backport for [[gerrit:1006853|Set three more wikis to read new on pagelinks migration (T351237)]] (duration: 10m 03s) [production]
09:49 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T357189)', diff saved to https://phabricator.wikimedia.org/P58030 and previous config saved to /var/cache/conftool/dbconfig/20240228-094900-arnaudb.json [production]
09:46 <ayounsi@cumin2002> START - Cookbook sre.hosts.reimage for host testvm2006.codfw.wmnet with OS bookworm [production]
09:46 <ladsgroup@deploy2002> ladsgroup: Continuing with sync [production]
09:46 <joal@deploy2002> Finished deploy [analytics/refinery@dba67fd] (hadoop-test): Additional analytics weekly train - TEST [analytics/refinery@dba67fd6] (duration: 03m 33s) [production]
09:46 <ayounsi@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM testvm2006.codfw.wmnet - ayounsi@cumin2002" [production]
09:45 <ladsgroup@deploy2002> ladsgroup: Backport for [[gerrit:1006853|Set three more wikis to read new on pagelinks migration (T351237)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
09:45 <ayounsi@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM testvm2006.codfw.wmnet - ayounsi@cumin2002" [production]
09:45 <ayounsi@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) testvm2006.codfw.wmnet on all recursors [production]
09:44 <ayounsi@cumin2002> START - Cookbook sre.dns.wipe-cache testvm2006.codfw.wmnet on all recursors [production]
09:44 <ayounsi@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:44 <ayounsi@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM testvm2006.codfw.wmnet - ayounsi@cumin2002" [production]
09:44 <ladsgroup@deploy2002> Started scap: Backport for [[gerrit:1006853|Set three more wikis to read new on pagelinks migration (T351237)]] [production]
09:42 <ayounsi@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM testvm2006.codfw.wmnet - ayounsi@cumin2002" [production]
09:42 <joal@deploy2002> Started deploy [analytics/refinery@dba67fd] (hadoop-test): Additional analytics weekly train - TEST [analytics/refinery@dba67fd6] [production]
09:42 <joal@deploy2002> Finished deploy [analytics/refinery@dba67fd] (thin): Additional analytics weekly train - THIN [analytics/refinery@dba67fd6] (duration: 00m 05s) [production]
09:42 <joal@deploy2002> Started deploy [analytics/refinery@dba67fd] (thin): Additional analytics weekly train - THIN [analytics/refinery@dba67fd6] [production]
09:41 <joal@deploy2002> Finished deploy [analytics/refinery@dba67fd]: Additional analytics weekly train [analytics/refinery@dba67fd6] (duration: 13m 16s) [production]
09:41 <filippo@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
09:41 <filippo@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
09:41 <arnaudb@cumin1002> dbctl commit (dc=all): 'Depooling db1182 (T357189)', diff saved to https://phabricator.wikimedia.org/P58029 and previous config saved to /var/cache/conftool/dbconfig/20240228-094103-arnaudb.json [production]
09:41 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
09:41 <filippo@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
09:41 <ayounsi@cumin2002> START - Cookbook sre.dns.netbox [production]
09:41 <ayounsi@cumin2002> START - Cookbook sre.ganeti.makevm for new host testvm2006.codfw.wmnet [production]
09:40 <filippo@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
09:40 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
09:40 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1156 (T357189)', diff saved to https://phabricator.wikimedia.org/P58028 and previous config saved to /var/cache/conftool/dbconfig/20240228-094041-arnaudb.json [production]
09:40 <filippo@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
09:39 <filippo@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]