2021-03-24
10:06 <akosiaris@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
10:06 <akosiaris@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
10:03 <jynus> restart db1139 T271913 [production]
09:56 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1160 for schema change', diff saved to https://phabricator.wikimedia.org/P15072 and previous config saved to /var/cache/conftool/dbconfig/20210324-095655-marostegui.json [production]
09:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 100%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15071 and previous config saved to /var/cache/conftool/dbconfig/20210324-095606-root.json [production]
09:51 <jynus> restart db1116 T271913 [production]
09:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 75%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15070 and previous config saved to /var/cache/conftool/dbconfig/20210324-094102-root.json [production]
09:28 <jayme@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
09:28 <jayme@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
09:25 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 50%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15069 and previous config saved to /var/cache/conftool/dbconfig/20210324-092558-root.json [production]
09:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1149 (re)pooling @ 25%: Slowly repool db1149 after schema change', diff saved to https://phabricator.wikimedia.org/P15068 and previous config saved to /var/cache/conftool/dbconfig/20210324-091055-root.json [production]
08:29 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=sessionstore [production]
08:16 <gehel> restarting wdqs updater on all nodes for config change [production]
08:14 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=eventgate-analytics [production]
08:14 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=eventgate-analytics-external [production]
08:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1086 (re)pooling @ 75%: Slowly repool db1086 after schema change', diff saved to https://phabricator.wikimedia.org/P15066 and previous config saved to /var/cache/conftool/dbconfig/20210324-081057-root.json [production]
08:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 100%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P15065 and previous config saved to /var/cache/conftool/dbconfig/20210324-080725-root.json [production]
08:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1149 for schema change', diff saved to https://phabricator.wikimedia.org/P15064 and previous config saved to /var/cache/conftool/dbconfig/20210324-080223-marostegui.json [production]
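The db1149 and db1086 entries around this time show the staged repool pattern: depool, run the schema change, then repool at 25%, 50%, 75%, and 100% with a pause between steps. A minimal sketch of that pattern, assuming a generic `set_weight` callback (the real workflow shells out to `dbctl`; this is not its API):

```python
import time

def gradual_repool(set_weight, host, stages=(25, 50, 75, 100), wait_s=900):
    """Repool `host` in steps, pausing between each weight increase.

    `set_weight` is a hypothetical callback standing in for the real
    dbctl commit; `wait_s` gives replication lag and load time to settle.
    """
    applied = []
    for pct in stages:
        set_weight(host, pct)
        applied.append(pct)
        if pct != stages[-1]:
            time.sleep(wait_s)  # let the previous step settle first
    return applied
```

In the log above each step is a separate `dbctl commit` roughly fifteen minutes apart, which matches a conservative `wait_s`.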
08:01 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=eventgate-main [production]
08:01 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=eventgate-logging-external [production]
08:01 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=zotero [production]
07:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1086 (re)pooling @ 50%: Slowly repool db1086 after schema change', diff saved to https://phabricator.wikimedia.org/P15063 and previous config saved to /var/cache/conftool/dbconfig/20210324-075553-root.json [production]
07:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 75%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P15062 and previous config saved to /var/cache/conftool/dbconfig/20210324-075221-root.json [production]
07:50 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=eventgate-main [production]
07:50 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=eventgate-logging-external [production]
07:50 <jayme@cumin1001> conftool action : set/pooled=true; selector: name=eqiad,dnsdisc=zotero [production]
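The conftool actions logged here apply `set/pooled=true` to every object matching a comma-separated tag selector such as `name=eqiad,dnsdisc=zotero`. A toy model of that selector semantics (illustrative only; the real conftool stores objects in etcd and has its own query language):

```python
def matches(selector: str, tags: dict) -> bool:
    """Every key=value pair in the selector must match the object's tags."""
    pairs = (kv.split("=", 1) for kv in selector.split(","))
    return all(tags.get(k) == v for k, v in pairs)

# Hypothetical objects; real ones live in etcd and carry more fields.
objects = [
    {"name": "eqiad", "dnsdisc": "zotero", "pooled": "false"},
    {"name": "codfw", "dnsdisc": "zotero", "pooled": "true"},
]
for obj in objects:
    if matches("name=eqiad,dnsdisc=zotero", obj):
        obj["pooled"] = "true"  # the set/pooled=true action
```

Only the eqiad object is touched; the codfw one falls outside the selector, which is why pooling each discovery service per-datacenter takes one action per site.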
07:41 <elukey@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host ml-etcd2002.codfw.wmnet [production]
07:40 <marostegui@cumin1001> dbctl commit (dc=all): 'db1086 (re)pooling @ 25%: Slowly repool db1086 after schema change', diff saved to https://phabricator.wikimedia.org/P15061 and previous config saved to /var/cache/conftool/dbconfig/20210324-074050-root.json [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 50%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P15060 and previous config saved to /var/cache/conftool/dbconfig/20210324-073718-root.json [production]
07:27 <elukey@cumin1001> START - Cookbook sre.ganeti.makevm for new host ml-etcd2002.codfw.wmnet [production]
07:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1086 for schema change', diff saved to https://phabricator.wikimedia.org/P15059 and previous config saved to /var/cache/conftool/dbconfig/20210324-072319-marostegui.json [production]
07:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 25%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P15058 and previous config saved to /var/cache/conftool/dbconfig/20210324-072214-root.json [production]
07:20 <elukey@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts ml-etcd2002.codfw.wmnet [production]
07:10 <elukey@cumin1001> START - Cookbook sre.hosts.decommission for hosts ml-etcd2002.codfw.wmnet [production]
07:09 <moritzm> installing squid security updates [production]
06:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1181 to dbctl, depooled T275633', diff saved to https://phabricator.wikimedia.org/P15057 and previous config saved to /var/cache/conftool/dbconfig/20210324-063459-marostegui.json [production]
06:24 <root@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db1084.eqiad.wmnet [production]
06:14 <root@cumin1001> START - Cookbook sre.hosts.decommission for hosts db1084.eqiad.wmnet [production]
05:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1141', diff saved to https://phabricator.wikimedia.org/P15056 and previous config saved to /var/cache/conftool/dbconfig/20210324-055246-marostegui.json [production]
04:44 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=99) [production]
03:41 <ryankemper> T274204 `sudo -i cookbook sre.elasticsearch.rolling-upgrade search_codfw "codfw cluster reboot" --task-id T274204 --nodes-per-run 3 --start-datetime 2021-03-24T02:29:39` on `ryankemper@cumin1001` tmux session `elasticsearch_rolling_upgrade_reboots` [production]
03:41 <ryankemper> T274204 Restarting the `codfw` reboot run; the timestamp argument should prevent it from wasting time on nodes that have been rebooted already [production]
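The `--start-datetime` argument mentioned above lets a restarted run skip nodes whose last reboot is already newer than the cutoff. A hedged sketch of that filtering idea (hostnames and the `last_reboot` mapping are hypothetical, not the cookbook's real data model):

```python
from datetime import datetime

def nodes_to_reboot(last_reboot: dict, start: datetime) -> list:
    """Return hosts whose last reboot predates the --start-datetime cutoff."""
    return sorted(h for h, t in last_reboot.items() if t < start)

cutoff = datetime(2021, 3, 24, 2, 29, 39)
seen = {
    "elastic2037": datetime(2021, 3, 24, 3, 0, 0),  # rebooted after cutoff: skip
    "elastic2038": datetime(2021, 3, 20, 0, 0, 0),  # still needs a reboot
}
```

Under this model `nodes_to_reboot(seen, cutoff)` returns only `elastic2038`, which is how the rerun avoids repeating work from the earlier attempt.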
03:40 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
03:39 <ryankemper> T274204 Timed out waiting for write queues to empty: `[59/60, retrying in 60.00s] Attempt to run 'spicerack.elasticsearch_cluster.ElasticsearchClusters.wait_for_all_write_queues_empty' raised: Write queue not empty (had value of 241631) for partition 0 of topic codfw.cpjobqueue.partitioned.mediawiki.job.cirrusSearchElasticaWrite.` [production]
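The `[59/60, retrying in 60.00s]` fragment above implies a poll-until-empty loop that gives up after a fixed number of attempts. A minimal sketch of that retry shape, assuming a generic `queue_depth` callable in place of the real Spicerack/Kafka lag check:

```python
import time

class WriteQueueNotEmpty(Exception):
    pass

def wait_for_empty(queue_depth, tries=60, delay_s=60.0, sleep=time.sleep):
    """Poll until the queue drains; raise after `tries` failed attempts.

    `queue_depth` is a stand-in for the real write-queue lag check;
    returns the attempt number on success.
    """
    for attempt in range(1, tries + 1):
        depth = queue_depth()
        if depth == 0:
            return attempt
        if attempt < tries:
            sleep(delay_s)  # back off before re-checking, as in the log
    raise WriteQueueNotEmpty(f"Write queue not empty (had value of {depth})")
```

With 60 attempts at 60-second intervals this gives the queue roughly an hour to drain, matching the timeout behaviour the log entry describes.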
03:38 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=99) [production]
02:38 <ryankemper> T274204 `sudo -i cookbook sre.elasticsearch.rolling-upgrade search_codfw "codfw cluster reboot" --task-id T274204 --nodes-per-run 3 --start-datetime 2021-03-24T02:29:39` on `ryankemper@cumin1001` tmux session `elasticsearch_rolling_upgrade_reboots` [production]
02:31 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
01:59 <ryankemper> T274204 For now I'll proceed to the reboots of `codfw` [production]
01:58 <ryankemper> T274204 `ctrl+c`'d out of run; relforge is relying on outdated config that is trying to talk to `relforge1002` which no longer exists. Need to refactor so that config no longer lives in spicerack [production]
01:58 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.elasticsearch.rolling-upgrade-reboot (exit_code=97) [production]