2010-03-12
19:23 <fvassard> synchronized php-1.5/wmf-config/CommonSettings.php 'Enabling .odg extension for uploads' [production]
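In MediaWiki, allowing a new upload type is normally a one-line addition to the file-extension whitelist. A minimal sketch of what this CommonSettings.php change might have looked like; $wgFileExtensions is the standard core setting, but the exact wmf-config layout is an assumption:

    // Allow OpenDocument Drawing uploads (sketch; surrounding layout assumed)
    $wgFileExtensions[] = 'odg';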
18:49 <Fred> moved stats.w.o from Zwinger to Spence [production]
12:00 <RoanKattouw> storage2 (download.wikimedia.org) was down between 11:40 and ~11:53 UTC, coincides with load spike [production]
07:43 <apergos> really edited the right copy of worker.py this time (the copy in /backups :-P) [production]
06:19 <apergos> edited worker.py in place on snapshot3 to remove --opt and replace all except the lock options individually (it was running with that on db12 again, with high lag). Shot that worker process; lag and thread count going down now [production]
02:49 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
02:49 <tstarling> synchronized php-1.5/wmf-config/db.php 'reducing load on db12 again, seems to be lagging again' [production]
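These db.php syncs were adjusting per-server read-load weights. A hedged sketch of what such an edit looks like in a MediaWiki load-balancer configuration; the array layout and weight values are illustrative assumptions, not the actual wmf-config contents:

    // Relative read weights per replica (sketch; values assumed).
    // Lowering a lagging server's weight sheds read traffic from it.
    $wgDBservers = array(
        array( 'host' => 'db26', 'load' => 200 ),  // healthy replica
        array( 'host' => 'db12', 'load' => 50 ),   // reduced while lagging
    );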
02:38 <tstarling> synchronized php-1.5/wmf-config/db.php 'increased general load on db12' [production]
02:34 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
02:32 <Tim> db12 caught up, repooling [production]
02:15 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'subpages for Grants namespace on meta (bug 22810)' [production]
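Subpage support is controlled per namespace by MediaWiki's $wgNamespacesWithSubpages. A sketch of how the bug 22810 entry might look in InitialiseSettings.php; the namespace index (200 here) and the per-wiki array layout are assumptions:

    // Enable subpages for the Grants namespace on Meta only
    // (sketch; namespace index 200 is assumed)
    'wgNamespacesWithSubpages' => array(
        'metawiki' => array(
            200 => true,  // Grants
        ),
    ),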
02:02 <tomaszf> killing 20100312 xml snapshot run for enwiki due to high load on db12 [production]
01:43 <tstarling> synchronized php-1.5/wmf-config/db.php 'moving db12 back into the "dump" query group so we don't kill db26 too' [production]
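MediaWiki's load balancer can route specific query groups, such as dump traffic, to designated replicas via a per-server 'groupLoads' key. A sketch of what assigning db12 to the "dump" group could look like; the weights and surrounding layout are assumptions:

    // Keep db12 out of the general read pool but make it the target
    // for "dump" group queries (sketch; weights assumed)
    $wgDBservers = array(
        array( 'host' => 'db26', 'load' => 200 ),  // general pool
        array(
            'host'       => 'db12',
            'load'       => 0,                     // no general read traffic
            'groupLoads' => array( 'dump' => 1 ),  // dump queries only
        ),
    );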
01:41 <Tim> note: mysqldump query for pagelinks table is running on db12, presumably that is the cause of the slowness and network spike [production]
01:37 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
01:32 <tstarling> synchronized php-1.5/wmf-config/db.php 'brought db26 into rotation as enwiki watchlist/contribs server, to replace db12 after cache warming' [production]
01:30 <Tim> network spike on db12 at 01:00. Going to depool it in case it's going to try the same trick as last time [production]
01:21 <ariel> synchronized php-1.5/wmf-config/InitialiseSettings.php 'resyncing for bug 22810 (stupid ssh-agent :-P)' [production]
00:47 <ariel> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Grants namespace on meta, bug 22810' [production]
00:17 <tfinc> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Removing foundation wiki from wmgDonationInterface' [production]
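Settings with the wmg prefix are per-wiki switches in InitialiseSettings.php, keyed by database name. Only the setting name comes from the log; the array contents below are an illustrative assumption of what removing foundationwiki looks like:

    'wmgDonationInterface' => array(
        'default' => false,
        // 'foundationwiki' => true,  // line removed by this deploy (sketch)
        'donatewiki' => true,         // illustrative entry, assumed
    ),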
2010-03-11
22:03 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22669 rollback' [production]
22:02 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22669' [production]
21:36 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22404' [production]
21:19 <ariel> synchronized php-1.5/wmf-config/InitialiseSettings.php 'subpages for templates on foundationwiki (#22484)' [production]
20:37 <atglenn> re-enabled replication on media server (ms7 -> ms8), we are back in business [production]
16:43 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22089' [production]
12:27 <andrew> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version' [production]
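A "style version" bump is MediaWiki's standard cache-busting mechanism: $wgStyleVersion is appended as a query string to CSS/JS URLs, so incrementing it forces clients to refetch updated assets. The numbers below are illustrative:

    // Force clients to refetch stylesheets/scripts (sketch; values assumed)
    $wgStyleVersion = 246;  // was 245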
11:22 <andrew> synchronized php-1.5/extensions/LiquidThreads/classes/View.php 'Deploy r63591, fixes for LiquidThreads comment digestion' [production]
11:21 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Deploy r63591, fixes for LiquidThreads comment digestion' [production]
01:34 <Tim> reinstalled wikimedia-task-appserver on fenari to fix symlinks /apache and /usr/local/apache/common [production]
00:26 <atglenn> playing catch-up on replication from ms7 to ms8; zfs send/recvs running as root [production]
2010-03-09
23:32 <andrew> synchronized php-1.5/extensions/LiquidThreads/classes/View.php 'Merge r63523 into LiquidThreads production' [production]
23:26 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63524 into LiquidThreads alpha' [production]
23:26 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63523 into LiquidThreads production' [production]
23:24 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63524 into LiquidThreads alpha' [production]
23:24 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63523 into LiquidThreads production' [production]
23:15 <andrew> synchronized php-1.5/extensions/LiquidThreads/classes/View.php 'Merge r63154 into LiquidThreads production' [production]
23:13 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63154 into LiquidThreads alpha' [production]
20:22 <rainman-sr> this morning's search API outage appears to be connected with an index update on search8, which triggered high GC load and stuck threads; removing the enwp spell index from search8 [production]
15:37 <hcatlin> the message before was referring to the mobile site. [production]
15:37 <hcatlin> deployed new languages; cs homepages. [production]