2020-11-26
08:40 <elukey> roll restart cassandra on aqs10* for openjdk upgrades [production]
08:40 <elukey@cumin1001> START - Cookbook sre.cassandra.roll-restart [production]
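For context, SRE cookbooks like this one are launched from a cumin host via the spicerack `cookbook` entry point; a minimal sketch of the invocation (the cookbook name is from the log, any target selector or reason argument below is a hypothetical placeholder, not taken from the log):

    # Run from a cumin host (the log shows cumin1001); argument names/values
    # are assumptions since the exact cookbook signature is not in the log.
    sudo cookbook sre.cassandra.roll-restart 'aqs'   # hypothetical target alias for the aqs10* Cassandra cluster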
08:09 <godog> swift eqiad-prod: add weight to ms-be106[0-3] - T268435 [production]
08:08 <marostegui> Deploy schema change on s7 codfw - there will be lag on s7 codfw - T268004 [production]
07:25 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 100%: After schema change', diff saved to https://phabricator.wikimedia.org/P13430 and previous config saved to /var/cache/conftool/dbconfig/20201126-072506-root.json [production]
07:15 <marostegui@cumin1001> dbctl commit (dc=all): 'db1085 (re)pooling @ 100%: After cloning clouddb hosts', diff saved to https://phabricator.wikimedia.org/P13429 and previous config saved to /var/cache/conftool/dbconfig/20201126-071514-root.json [production]
07:12 <marostegui> Enable GTID on clouddb1018:3317 clouddb1014:3317 T267090 [production]
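For reference, enabling GTID on a MariaDB replica such as these is normally done by switching replication to the GTID slave position; a minimal sketch of the standard MariaDB procedure (connection details for the multi-instance port, e.g. 3317, are assumptions):

    # On the replica instance (e.g. clouddb1018:3317); use the instance's
    # socket or -P port on multi-instance hosts.
    mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_USE_GTID = slave_pos; START SLAVE;"
    mysql -e "SHOW SLAVE STATUS\G" | grep -i gtid   # expect Using_Gtid: Slave_Pos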
07:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 75%: After schema change', diff saved to https://phabricator.wikimedia.org/P13428 and previous config saved to /var/cache/conftool/dbconfig/20201126-071003-root.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'db1085 (re)pooling @ 75%: After cloning clouddb hosts', diff saved to https://phabricator.wikimedia.org/P13427 and previous config saved to /var/cache/conftool/dbconfig/20201126-070010-root.json [production]
06:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 50%: After schema change', diff saved to https://phabricator.wikimedia.org/P13426 and previous config saved to /var/cache/conftool/dbconfig/20201126-065500-root.json [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'db1085 (re)pooling @ 50%: After cloning clouddb hosts', diff saved to https://phabricator.wikimedia.org/P13425 and previous config saved to /var/cache/conftool/dbconfig/20201126-064507-root.json [production]
06:39 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 25%: After schema change', diff saved to https://phabricator.wikimedia.org/P13424 and previous config saved to /var/cache/conftool/dbconfig/20201126-063956-root.json [production]
06:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es1016 from dbctl', diff saved to https://phabricator.wikimedia.org/P13423 and previous config saved to /var/cache/conftool/dbconfig/20201126-063234-marostegui.json [production]
06:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db1085 (re)pooling @ 25%: After cloning clouddb hosts', diff saved to https://phabricator.wikimedia.org/P13422 and previous config saved to /var/cache/conftool/dbconfig/20201126-063003-root.json [production]
06:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1016 for decommissioning', diff saved to https://phabricator.wikimedia.org/P13421 and previous config saved to /var/cache/conftool/dbconfig/20201126-062811-marostegui.json [production]
06:17 <marostegui> Stop mysql on db1124:3315 to clone clouddb1016:3315 T267090 [production]
06:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1138 for schema change', diff saved to https://phabricator.wikimedia.org/P13420 and previous config saved to /var/cache/conftool/dbconfig/20201126-061552-marostegui.json [production]
06:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1143', diff saved to https://phabricator.wikimedia.org/P13419 and previous config saved to /var/cache/conftool/dbconfig/20201126-061459-marostegui.json [production]
06:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1143 for schema change', diff saved to https://phabricator.wikimedia.org/P13418 and previous config saved to /var/cache/conftool/dbconfig/20201126-061432-marostegui.json [production]
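The dbctl entries above follow the usual cycle: depool a replica, run the maintenance, then repool it in steps (25/50/75/100%), committing the config after each change. A rough sketch of the commands behind those log lines, with flags recalled from memory rather than taken from the log:

    # On a cumin host; each config commit produces the diff/paste URLs seen above.
    dbctl instance db1138 depool
    dbctl config commit -m "Depool db1138 for schema change"
    # ... run the schema change on db1138 ...
    dbctl instance db1138 pool -p 25
    dbctl config commit -m "db1138 (re)pooling @ 25%: After schema change"
    # repeat with -p 50, 75 and 100 once replication lag and load look healthy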
06:08 <ryankemper> T268770 [eqiad] Finished rolling restart of cirrus eqiad. All cirrus elasticsearch restarts are now complete (cloudelastic, relforge, eqiad, codfw) [production]
06:05 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-restart (exit_code=0) [production]
04:24 <ryankemper> T268770 [eqiad] Begin rolling restart of cirrus eqiad, 3 nodes at a time [production]
04:17 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
03:07 <krinkle@deploy1001> Synchronized wmf-config/mc.php: I805699ecfa (duration: 00m 58s) [production]
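"Synchronized wmf-config/mc.php" entries are produced by a scap file sync on the deployment host; a minimal sketch, assuming `scap sync-file` is the command behind this log line:

    # On the deployment host (deploy1001), from the staging directory:
    scap sync-file wmf-config/mc.php 'I805699ecfa'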
2020-11-25
23:28 <akosiaris@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
22:55 <mutante> mwdebug1003 - scap pull - which rsyncs from deploy1001 and runs php-fpm restart check script (T245757) [production]
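As the entry notes, `scap pull` on a debug host fetches the currently deployed MediaWiki state from the deployment server and runs the php-fpm restart check; a minimal sketch:

    # On mwdebug1003; rsyncs /srv/mediawiki from the active deployment host
    # (deploy1001 at the time) and triggers the php-fpm restart check (T245757).
    scap pull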
22:47 <ejegg> increased Ingenico API call timeout [production]
22:34 <shdubsh> beginning rolling restart of logstash cluster - eqiad [production]
22:23 <akosiaris@cumin1001> START - Cookbook sre.ganeti.makevm [production]
22:14 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
22:14 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime [production]
22:12 <akosiaris@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
21:19 <akosiaris@cumin1001> START - Cookbook sre.ganeti.makevm [production]
20:49 <krinkle@deploy1001> Synchronized php-1.36.0-wmf.18/includes/libs/CSSMin.php: I26ed3e5e9a - fix T268308 (duration: 00m 59s) [production]
20:43 <mutante> LDAP added user duminasi to group wmf (T266791) [production]
20:06 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-restart (exit_code=0) [production]
18:44 <elukey> upload new hive* packages 2.2.3-2 to stretch-wikimedia - thirdparty/bigtop14 component [production]
18:42 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
18:38 <mutante> LDAP adding swagoel to NDA T267314#6625628 [production]
18:31 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-restart (exit_code=99) [production]
18:05 <ryankemper> T268770 [cloudelastic] Thawed writes to cloudelastic cluster following restarts: `/usr/local/bin/mwscript extensions/CirrusSearch/maintenance/FreezeWritesToCluster.php --wiki=enwiki --cluster=cloudelastic --thaw` on `mwmaint1002` [production]
18:01 <ryankemper> [cloudelastic] (forgot to mention this) Thawed writes to cloudelastic cluster following restarts: `/usr/local/bin/mwscript extensions/CirrusSearch/maintenance/FreezeWritesToCluster.php --wiki=enwiki --cluster=cloudelastic --thaw` on `mwmaint1002` [production]
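The thaw command above undoes a freeze issued before the cloudelastic restarts began; a sketch of the paired workflow, assuming the maintenance script freezes writes when run without `--thaw` (only the thaw form is confirmed by the log):

    # On mwmaint1002, before restarting cloudelastic nodes (assumed freeze form):
    /usr/local/bin/mwscript extensions/CirrusSearch/maintenance/FreezeWritesToCluster.php \
        --wiki=enwiki --cluster=cloudelastic
    # ... roll-restart the cluster, one instance at a time ...
    # After restarts, thaw writes (exact command from the log):
    /usr/local/bin/mwscript extensions/CirrusSearch/maintenance/FreezeWritesToCluster.php \
        --wiki=enwiki --cluster=cloudelastic --thaw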
17:58 <ryankemper> T268770 [cloudelastic] restarts complete, service is healthy. This is done. [production]
17:55 <ryankemper> T268770 [cloudelastic] restarts on `cloudelastic1006` complete and all 3 elasticsearch clusters are green, all cloudelastic instances are now complete [production]
17:49 <ryankemper> T268770 [cloudelastic] restarts on `cloudelastic1005` complete and all 3 elasticsearch clusters are green, proceeding to next instance [production]
17:44 <shdubsh> beginning rolling restart of logstash cluster - codfw [production]
17:44 <ryankemper> T268770 [cloudelastic] restarts on `cloudelastic1004` complete and all 3 elasticsearch clusters are green, proceeding to next instance [production]
17:39 <ryankemper> T268770 [cloudelastic] restarts on `cloudelastic1003` complete and all 3 elasticsearch clusters are green, proceeding to next instance [production]
17:39 <ryankemper> T268770 [cloudelastic] restarts on `cloudelastic1002` complete and all 3 elasticsearch clusters are green, proceeding to next instance [production]
17:28 <ryankemper> T268770 [cloudelastic] restarts on `cloudelastic1001` complete and all 3 elasticsearch clusters are green, proceeding to next instance [production]