2023-09-25
09:06 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 13 hosts with reason: Maintenance [production]
09:06 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
09:06 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
08:59 <Amir1> by the power vested in me by Chris Albon and ML team, I now pronounce ORES dead. [production]
08:58 <elukey> migrate ores.wikimedia.org's ATS backend to ores-legacy.discovery.wmnet (k8s app) - This will drain traffic to ORES bare metal nodes - T341696 [production]
08:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 15 hosts with reason: Maintenance [production]
08:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 15 hosts with reason: Maintenance [production]
08:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1162.eqiad.wmnet with reason: Maintenance [production]
08:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1162.eqiad.wmnet with reason: Maintenance [production]
08:56 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on 16 hosts with reason: Schema change [production]
08:56 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on 16 hosts with reason: Schema change [production]
08:43 <jayme> jayme@cumin1001 conftool action : set/pooled=no; selector: name=kubernetes2010.* - T347267 [production]
08:43 <jayme@cumin1001> conftool action : set/pooled=no; selector: name=kubernetes2010.* [production]
08:39 <jayme@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on kubernetes2010.codfw.wmnet with reason: host is down [production]
08:39 <jayme@cumin2002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on kubernetes2010.codfw.wmnet with reason: host is down [production]
08:27 <jayme> draining kubernetes2010.codfw.wmnet - T347267 [production]
08:01 <jayme> cordoning kubernetes2010 [production]
07:49 <taavi> drop cloudmetrics exceptions from cr firewall ACLs https://gerrit.wikimedia.org/r/c/operations/homer/public/+/960027 T326266 [production]
07:47 <taavi@deploy2002> Finished scap: Backport for [[gerrit:959986|Make sure different key values are handled while submitting (T345496)]] (duration: 30m 55s) [production]
07:37 <taavi@deploy2002> taavi and soda: Continuing with sync [production]
07:37 <XioNoX> update eqsin-ulsfo transport link ospf metrics to match the new latency of 175ms [production]
07:29 <taavi@deploy2002> taavi and soda: Backport for [[gerrit:959986|Make sure different key values are handled while submitting (T345496)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
07:22 <kevinbazira@deploy2002> helmfile [ml-serve-codfw] 'sync' command on namespace 'recommendation-api-ng' for release 'main' . [production]
07:20 <kevinbazira@deploy2002> helmfile [ml-serve-eqiad] 'sync' command on namespace 'recommendation-api-ng' for release 'main' . [production]
07:16 <taavi@deploy2002> Started scap: Backport for [[gerrit:959986|Make sure different key values are handled while submitting (T345496)]] [production]
07:06 <XioNoX> roll out "Block inbound RAs on the routers" - T334916 [production]
06:14 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 35008 [production]
06:14 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 35008 [production]
05:27 <kart_> Updated cxserver to 2023-09-13-074325-production (T346045) [production]
05:22 <kartik@deploy2002> helmfile [codfw] DONE helmfile.d/services/cxserver: apply [production]
05:22 <kartik@deploy2002> helmfile [codfw] START helmfile.d/services/cxserver: apply [production]
05:13 <kartik@deploy2002> helmfile [eqiad] DONE helmfile.d/services/cxserver: apply [production]
05:12 <kartik@deploy2002> helmfile [eqiad] START helmfile.d/services/cxserver: apply [production]
05:08 <kartik@deploy2002> helmfile [staging] DONE helmfile.d/services/cxserver: apply [production]
05:08 <kartik@deploy2002> helmfile [staging] START helmfile.d/services/cxserver: apply [production]
2023-09-24
23:05 <arnaudb@cumin1001> dbctl commit (dc=all): 'Depooling db2122 (T343198)', diff saved to https://phabricator.wikimedia.org/P52595 and previous config saved to /var/cache/conftool/dbconfig/20230924-230515-arnaudb.json [production]
23:05 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2122.codfw.wmnet with reason: Maintenance [production]
23:04 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2122.codfw.wmnet with reason: Maintenance [production]
23:04 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2121 (T343198)', diff saved to https://phabricator.wikimedia.org/P52594 and previous config saved to /var/cache/conftool/dbconfig/20230924-230443-arnaudb.json [production]
22:49 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2121', diff saved to https://phabricator.wikimedia.org/P52593 and previous config saved to /var/cache/conftool/dbconfig/20230924-224936-arnaudb.json [production]
22:34 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2121', diff saved to https://phabricator.wikimedia.org/P52592 and previous config saved to /var/cache/conftool/dbconfig/20230924-223430-arnaudb.json [production]
22:19 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2121 (T343198)', diff saved to https://phabricator.wikimedia.org/P52591 and previous config saved to /var/cache/conftool/dbconfig/20230924-221923-arnaudb.json [production]
10:28 <arnaudb@cumin1001> dbctl commit (dc=all): 'Depooling db2121 (T343198)', diff saved to https://phabricator.wikimedia.org/P52590 and previous config saved to /var/cache/conftool/dbconfig/20230924-102809-arnaudb.json [production]
10:28 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
10:27 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
10:27 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120 (T343198)', diff saved to https://phabricator.wikimedia.org/P52589 and previous config saved to /var/cache/conftool/dbconfig/20230924-102747-arnaudb.json [production]
10:12 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120', diff saved to https://phabricator.wikimedia.org/P52588 and previous config saved to /var/cache/conftool/dbconfig/20230924-101241-arnaudb.json [production]
09:57 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120', diff saved to https://phabricator.wikimedia.org/P52587 and previous config saved to /var/cache/conftool/dbconfig/20230924-095734-arnaudb.json [production]
09:42 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120 (T343198)', diff saved to https://phabricator.wikimedia.org/P52586 and previous config saved to /var/cache/conftool/dbconfig/20230924-094227-arnaudb.json [production]