2016-04-20
14:55 <ottomata> started puppet on analytics1003 [production]
14:52 <jynus@tin> Synchronized wmf-config/db-codfw.php: Repool es2019 (duration: 00m 38s) [production]
14:37 <ottomata> stopping puppet on analytics1015 and analytics1003 in prep for migration [production]
13:54 <elukey> puppet disabled on analytics1027 to stop Camus [production]
13:50 <_joe_> rolling restart of ocg servers [production]
13:21 <moritzm> rebooting rdb1002,rdb1003,rdb1004,rdb1006,rdb1007,rdb1008 for upgrade to Linux 4.4 [production]
13:17 <jynus> [switchover-maintenance] Changing DB slave topology for shard s1 on eqiad T111654 [production]
12:59 <volans> [switchover-maintenance] Changing DB slave topology for shard s4 on eqiad T111654 [production]
12:54 <volans> [switchover-maintenance] Changing DB slave topology for shard s5 on eqiad T111654 [production]
12:48 <jynus> [switchover-maintenance] Changing DB slave topology for shard s3 on eqiad T111654 [production]
12:37 <volans> [switchover-maintenance] Changing DB slave topology for shard s6 on eqiad T111654 [production]
12:17 <volans> [switchover-maintenance] Changing DB slave topology for shard s7 on eqiad T111654 [production]
10:51 <godog> pool restbase1014 [production]
10:42 <volans> [switchover-maintenance] Restarting db1028 (s7) [production]
10:33 <jynus> [switchover-maintenance] Restarting db1018 [production]
10:31 <volans> [switchover-maintenance] Upgrading TLS for shard s7 on eqiad databases [production]
09:13 <jynus> backfilling recentchanges on enwiki API servers [production]
09:12 <godog> stop compactions on restbase1014-[ab] [production]
08:54 <elukey> deployed the new puppet compiler - version 0.1.4 (hosts sorted in the HTML output, minor change) [production]
02:31 <l10nupdate@tin> ResourceLoader cache refresh completed at Wed Apr 20 02:31:20 UTC 2016 (duration 8m 46s) [production]
02:22 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.21) (duration: 09m 37s) [production]

2016-04-19
22:42 <ori> killing rc insert query on db1065 and db1066 [production]
21:27 <ori> running rebuildrecentchanges.php --from=20160419144741 --to=20160419151018 on all wikis [production]
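A sketch of how that run is typically invoked across all wikis from the deployment host, assuming the standard foreachwiki wrapper around mwscript; the timestamps are the ones given in the log entry above:

    foreachwiki rebuildrecentchanges.php --from=20160419144741 --to=20160419151018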
21:12 <paravoid> clearing the exim4 retry database on mx2001 [production]
20:44 <ori> on all wikis, deleting from recentchanges where rc_timestamp > 20160419144741 and rc_timestamp < 20160419151018 [production]
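Per wiki, that cleanup amounts to a single DELETE against the recentchanges table; a minimal sketch for one wiki, assuming direct access to its master database (the host placeholder and wiki name are illustrative):

    mysql -h <s1-master> enwiki -e "DELETE FROM recentchanges WHERE rc_timestamp > '20160419144741' AND rc_timestamp < '20160419151018';"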
20:09 <ori> ran `mwscript rebuildrecentchanges.php --wiki=testwiki --from=20160419144741 --to=20160419151018` [production]
19:53 <paravoid> staggered varnish bans for 'obj.http.server ~ "^mw2.+"' as a workaround for T133069 [production]
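The ban expression quoted above can be issued on a cache host through the Varnish admin CLI; a minimal sketch for a single host (the staggering across hosts is omitted):

    varnishadm ban 'obj.http.server ~ "^mw2.+"'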
19:51 <ori@tin> Synchronized php-1.27.0-wmf.21/maintenance/rebuildrecentchanges.php: Ie9799f5ea: rebuildrecentchanges: Allow rebuilding specified time range only (duration: 00m 28s) [production]
19:43 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Revert "Depool one db server from each shard as a backup" (duration: 00m 27s) [production]
19:12 <AaronSchulz> Cleared enwiki 'enqueue' queue (T133089) [production]
19:06 <legoktm> purging sidebar cache across all wikis (T133069) [production]
18:27 <mutante> reinstalling kraz.codfw as kraz.wikimedia [production]
18:00 <_joe_> running rebuildEntityPerPage.php on wikidata, T133048 [production]
17:49 <jynus> setting binlog_format=ROW on old x1-master at eqiad (db1029) to re-enable replication [production]
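That change is a single global variable update on db1029; a minimal sketch, assuming an account with sufficient privileges on that host:

    mysql -h db1029 -e "SET GLOBAL binlog_format = 'ROW';"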
17:25 <demon@tin> Synchronized php-1.27.0-wmf.21/extensions/CentralAuth: forgot something (duration: 00m 42s) [production]
17:17 <volans> Deleting pc1003* and pc1006* binlog from pc2006 to make some space [production]
17:12 <volans> Deleting pc1005* binlog from pc2005 to make some space [production]
17:08 <volans> Deleting pc1002* old binlog from pc2005 to make some space [production]
17:01 <ostriches> ytterbium: stopped puppet for a bit, testing host key mess. [production]
16:55 <ostriches> restarting gerrit to pick up furud's rsa key [production]
15:49 <bblack> [traffic codfw switch #4] - puppet change complete - done [production]
15:49 <bblack> [traffic codfw switch #2] - confirmed bulk of traffic moved after ~10min for DNS TTL, rates levelling out on eqiad+codfw front network stats [production]
15:46 <bblack> [traffic codfw switch #4] - salting puppet change [production]
15:45 <bblack> [traffic codfw switch #4] - puppet merging eqiad text -> codfw [production]
15:41 <bblack> [traffic codfw switch #3] - puppet change complete - done [production]
15:39 <bblack> [traffic codfw switch #3] - salting puppet change [production]
15:38 <bblack> [traffic codfw switch #3] - puppet merging esams text -> codfw [production]
15:35 <bblack> [traffic codfw switch #2] - authdns-update complete, user traffic to eqiad frontends should start dropping off now [production]
15:34 <bblack> [traffic codfw switch #1] - puppet change complete - done [production]
15:31 <bblack> [traffic codfw switch #1] - salting puppet change [production]