2024-02-15
09:38 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1244:3314 (T352010)', diff saved to https://phabricator.wikimedia.org/P56827 and previous config saved to /var/cache/conftool/dbconfig/20240215-093850-ladsgroup.json [production]
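These dbctl lines are written automatically when an operator commits a database pool change; the commit is what produces the Phabricator paste and the cached previous config referenced in the message. As a rough sketch of the depool/repool cycle behind entries like this one, assuming the usual dbctl subcommands on a cumin host (the percentage flag and the commit messages below are illustrative, not copied from this log):

    # Take the instance out of rotation before maintenance (illustrative).
    sudo dbctl instance db1244:3314 depool
    sudo dbctl config commit -m 'Depooling db1244:3314 (T352010)'
    # ... maintenance happens here ...
    # Put it back and commit; the commit generates the diff paste seen above.
    sudo dbctl instance db1244:3314 pool -p 100
    sudo dbctl config commit -m 'Repooling after maintenance db1244:3314 (T352010)'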
09:29 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: eventlogging::analytics [production]
08:50 <moritzm> rebalance Ganeti codfw/A now that the switch maintenance for A5 and A6 is completed T355864 T355863 [production]
08:39 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host restbase1036.eqiad.wmnet [production]
08:35 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host restbase1036.eqiad.wmnet [production]
08:32 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: apifeatureusage::logstash [production]
08:18 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: apifeatureusage::logstash [production]
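The START / END (PASS) pairs above are emitted by the cookbook runner itself; the exit_code in the END line is the cookbook's return code, with 0 logged as PASS. As a sketch, the Puppet migrations recorded here would have been launched on a cumin host roughly like this (argument shapes are assumptions, not copied from the log):

    # Illustrative invocations; check each cookbook's --help for the real arguments.
    sudo cookbook sre.puppet.migrate-role apifeatureusage::logstash
    sudo cookbook sre.puppet.migrate-host restbase1036.eqiad.wmnet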
05:43 <kart_> Update cxserver to 2023-12-04-083437-production (T344982, T338432, T351138) [production]
05:40 <kartik@deploy2002> helmfile [eqiad] DONE helmfile.d/services/cxserver: apply [production]
05:39 <kartik@deploy2002> helmfile [eqiad] START helmfile.d/services/cxserver: apply [production]
05:39 <kartik@deploy2002> helmfile [codfw] DONE helmfile.d/services/cxserver: apply [production]
05:38 <kartik@deploy2002> helmfile [codfw] START helmfile.d/services/cxserver: apply [production]
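The cxserver deploy follows the usual helmfile flow on the deployment host: apply to codfw first, then eqiad, which matches the START/DONE ordering of the four entries above. A minimal sketch, assuming the standard chart layout (the path and the interactive -i flag are assumptions):

    # Run from the deployment host; path is an assumption based on the logged chart name.
    cd /srv/deployment-charts/helmfile.d/services/cxserver
    helmfile -e codfw -i apply   # deploy to codfw first
    helmfile -e eqiad -i apply   # then to eqiad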
04:45 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1243', diff saved to https://phabricator.wikimedia.org/P56823 and previous config saved to /var/cache/conftool/dbconfig/20240215-044554-ladsgroup.json [production]
04:30 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1243 (T352010)', diff saved to https://phabricator.wikimedia.org/P56822 and previous config saved to /var/cache/conftool/dbconfig/20240215-043047-ladsgroup.json [production]
04:30 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on dbstore1007.eqiad.wmnet with reason: Maintenance [production]
04:29 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on dbstore1007.eqiad.wmnet with reason: Maintenance [production]
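sre.hosts.downtime silences monitoring for the host for the stated duration so planned maintenance does not page anyone. An illustrative invocation matching the dbstore1007 entry (option names are assumptions):

    # Downtime dbstore1007 for one day; flags are illustrative.
    sudo cookbook sre.hosts.downtime --days 1 --reason "Maintenance" dbstore1007.eqiad.wmnet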
02:31 <jclark@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host restbase1036.eqiad.wmnet with OS bullseye [production]
02:31 <jclark@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jclark@cumin1002" [production]
02:29 <jclark@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jclark@cumin1002" [production]
02:14 <jclark@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on restbase1036.eqiad.wmnet with reason: host reimage [production]
02:11 <jclark@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on restbase1036.eqiad.wmnet with reason: host reimage [production]
01:55 <jclark@cumin1002> START - Cookbook sre.hosts.reimage for host restbase1036.eqiad.wmnet with OS bullseye [production]
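The entries between 01:55 and 02:31 are all one reimage run: sre.hosts.reimage schedules its own 2-hour downtime and triggers sre.puppet.sync-netbox-hiera, as the quoted "Triggered by cookbooks.sre.hosts.reimage" reasons show. The operator-facing command is roughly of this shape (the OS value mirrors the log; the hostname form and any other flags are assumptions):

    # Illustrative reimage of restbase1036 to Debian bullseye.
    sudo cookbook sre.hosts.reimage --os bullseye restbase1036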
01:46 <aokoth@cumin1002> END (FAIL) - Cookbook sre.ganeti.reboot-vm (exit_code=99) for VM vrts1002.eqiad.wmnet [production]
01:37 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "d"} and A:restbase and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
00:45 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "d"} and A:restbase and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
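The roll-restart selector combines a PuppetDB query with Cumin host aliases: P{P:cassandra%rack = "d"} matches hosts by a Puppet parameter (here, Cassandra nodes in rack d), while A:restbase and A:codfw restrict it to the restbase cluster in codfw, so the restarts proceed rack by rack. The matched host set can be previewed on a cumin host with something like:

    # Dry run: print the hosts the selector resolves to (no command is executed).
    sudo cumin 'P{P:cassandra%rack = "d"} and A:restbase and A:codfw'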
2024-02-14
23:57 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1243 (T352010)', diff saved to https://phabricator.wikimedia.org/P56821 and previous config saved to /var/cache/conftool/dbconfig/20240214-235725-ladsgroup.json [production]
23:57 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1243.eqiad.wmnet with reason: Maintenance [production]
23:57 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1243.eqiad.wmnet with reason: Maintenance [production]
23:57 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242 (T352010)', diff saved to https://phabricator.wikimedia.org/P56820 and previous config saved to /var/cache/conftool/dbconfig/20240214-235703-ladsgroup.json [production]
23:41 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242', diff saved to https://phabricator.wikimedia.org/P56819 and previous config saved to /var/cache/conftool/dbconfig/20240214-234157-ladsgroup.json [production]
23:32 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "c"} and A:restbase and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
23:26 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242', diff saved to https://phabricator.wikimedia.org/P56818 and previous config saved to /var/cache/conftool/dbconfig/20240214-232651-ladsgroup.json [production]
23:14 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply new master settings - bking@cumin2002 - T355617 [production]
23:11 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242 (T352010)', diff saved to https://phabricator.wikimedia.org/P56817 and previous config saved to /var/cache/conftool/dbconfig/20240214-231144-ladsgroup.json [production]
23:10 <eileen> civicrm upgraded from 3ee91f59 to 84ba0ccf [production]
22:51 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply new master settings - bking@cumin2002 - T355617 [production]
22:50 <bking@cumin2002> conftool action : set/pooled=yes; selector: name=cloudelastic1008.eqiad.wmnet [production]
22:50 <bking@cumin2002> conftool action : set/pooled=yes; selector: name=cloudelastic1007.eqiad.wmnet [production]
22:49 <bking@cumin2002> conftool action : set/weight=10; selector: name=cloudelastic1008.eqiad.wmnet [production]
22:49 <bking@cumin2002> conftool action : set/weight=10; selector: name=cloudelastic1007.eqiad.wmnet [production]
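The conftool actions above correspond to confctl commands of roughly this shape on a cumin host (the selectors and values are copied from the log; sudo and quoting are assumptions):

    # Give the node a weight, then pool it; repeat for cloudelastic1008.
    sudo confctl select 'name=cloudelastic1007.eqiad.wmnet' set/weight=10
    sudo confctl select 'name=cloudelastic1007.eqiad.wmnet' set/pooled=yes

Setting a weight and then pooling brings cloudelastic1007 and cloudelastic1008 back into service ahead of the rolling restart logged at 22:51.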
22:48 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply new master settings - bking@cumin2002 - T355617 [production]
22:39 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "c"} and A:restbase and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
22:33 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply new master settings - bking@cumin2002 - T355617 [production]
22:20 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: cloudelastic1005*,cloudelastic1006* for IP migration - bking@cumin2002 - T355617 [production]
22:20 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: cloudelastic1005*,cloudelastic1006* for IP migration - bking@cumin2002 - T355617 [production]
22:19 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in cloudelastic [production]
22:19 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in cloudelastic [production]
22:13 <urandom> restarting Cassandra: restbase/codfw, row b — T353550 [production]
22:10 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudelastic1007.eqiad.wmnet with OS bullseye [production]
22:10 <bking@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]