2010-03-12 §
07:43 <apergos> really edited the right copy of worker.py this time (the copy in /backups :-P) [production]
06:19 <apergos> edited worker.py in place on snapshot3 to remove --opt and put back all of its options individually except the lock options (it was running with --opt on db12 again, with high lag). shot that worker process, lag and thread count going down now [production]
02:49 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
02:49 <tstarling> synchronized php-1.5/wmf-config/db.php 'reducing load on db12 again, seems to be lagging again' [production]
02:38 <tstarling> synchronized php-1.5/wmf-config/db.php 'increased general load on db12' [production]
02:34 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
02:32 <Tim> db12 caught up, repooling [production]
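The db.php syncs above adjust per-slave read weights. A rough sketch of that kind of edit, assuming the LBFactory_Multi layout used by wmf-config/db.php (host names other than db12/db26 and all weights are illustrative, not the deployed values):

    // Illustrative sketch, not the deployed edit.
    $wgLBFactoryConf['sectionLoads']['s1'] = array(
        'db36' => 0,    // master: weight 0, takes no general read traffic
        'db12' => 50,   // weight lowered or raised as db12 lags or catches up
        'db26' => 200,
    );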
02:15 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'subpages for Grants namespace on meta (bug 22810)' [production]
02:02 <tomaszf> killing 20100312 xml snapshot run for enwiki due to high load on db12 [production]
01:43 <tstarling> synchronized php-1.5/wmf-config/db.php 'moving db12 back into the "dump" query group so we don't kill db26 too' [production]
01:41 <Tim> note: mysqldump query for pagelinks table is running on db12, presumably that is the cause of the slowness and network spike [production]
01:37 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
01:32 <tstarling> synchronized php-1.5/wmf-config/db.php 'brought db26 into rotation as enwiki watchlist/contribs server, to replace db12 after cache warming' [production]
01:30 <Tim> network spike on db12 at 01:00. Going to depool it in case it's going to try the same trick as last time [production]
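The depool and query-group moves above correspond to edits like the following in wmf-config/db.php; again a sketch assuming the LBFactory_Multi layout, with the group and server names taken from the log entries and the weights made up:

    // Illustrative sketch, not the deployed edit.
    $wgLBFactoryConf['sectionLoads']['s1']['db12'] = 0;   // depool db12 from general reads

    // Special-purpose query groups pin expensive traffic to specific slaves:
    $wgLBFactoryConf['groupLoadsBySection']['s1'] = array(
        'dump'          => array( 'db12' => 1 ),   // dump queries stay on db12, so they don't hit db26
        'watchlist'     => array( 'db26' => 1 ),   // db26 replaces db12 for watchlist/contribs
        'contributions' => array( 'db26' => 1 ),
    );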
01:21 <ariel> synchronized php-1.5/wmf-config/InitialiseSettings.php 'resyncing for bug 22810 (stupid ssh-agent :-P)' [production]
00:47 <ariel> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Grants namespace on meta, bug 22810' [production]
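The bug 22810 change above enables subpages for the new Grants namespace on meta; in InitialiseSettings.php that is normally a wgNamespacesWithSubpages entry. A sketch assuming the usual '+' merge convention, with placeholder namespace IDs:

    // Illustrative sketch; 200/201 are placeholders for the real namespace IDs.
    'wgNamespacesWithSubpages' => array(
        '+metawiki' => array(   // '+' merges with the defaults instead of replacing them
            200 => true,        // Grants:
            201 => true,        // Grants_talk:
        ),
    ),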
00:17 <tfinc> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Removing foundation wiki from wmgDonationInterface' [production]
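wmg* keys in InitialiseSettings.php are per-wiki feature switches; removing a wiki from one looks roughly like this (structure assumed, not copied from the deployed file):

    // Illustrative sketch only.
    'wmgDonationInterface' => array(
        'default'        => false,
        // 'foundationwiki' => true,   // entry removed, turning the feature off there
        // ...wikis where it stays enabled
    ),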
2010-03-11 §
22:03 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22669 rollback' [production]
22:02 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22669' [production]
21:36 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22404' [production]
21:19 <ariel> synchronized php-1.5/wmf-config/InitialiseSettings.php 'subpages for templates on foundationwiki (#22484)' [production]
20:37 <atglenn> re-enabled replication on media server (ms7 -> ms8), we are back in business [production]
16:43 <root> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 22089' [production]
12:27 <andrew> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version' [production]
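"Bump style version" usually means incrementing $wgStyleVersion in CommonSettings.php so that skin CSS/JS URLs change and clients refetch them after a deploy; roughly:

    // Illustrative sketch; the actual number is just the previous value plus one.
    $wgStyleVersion = 265;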
11:22 <andrew> synchronized php-1.5/extensions/LiquidThreads/classes/View.php 'Deploy r63591, fixes for LiquidThreads comment digestion' [production]
11:21 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Deploy r63591, fixes for LiquidThreads comment digestion' [production]
01:34 <Tim> reinstalled wikimedia-task-appserver on fenari to fix symlinks /apache and /usr/local/apache/common [production]
00:26 <atglenn> playing catch-up on ms7 -> ms8 replication now; zfs send/recvs running as root [production]
2010-03-10 §
22:59 <atglenn> patched and rebooted ms8, working on snapshot issue [production]
21:07 <catrope> synchronized php-1.5/extensions/ContactPage/ContactPage.i18n.php 'r63575' [production]
21:07 <catrope> synchronized php-1.5/extensions/ContactPage/ContactPage.php 'r63575' [production]
21:07 <catrope> synchronized php-1.5/extensions/ContactPage/SpecialContact.php 'r63575' [production]
18:54 <Fred> removed duplicate ACL for 208.80.152.157 from text-squids [production]
2010-03-09 §
23:32 <andrew> synchronized php-1.5/extensions/LiquidThreads/classes/View.php 'Merge r63523 into LiquidThreads production' [production]
23:26 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63524 into LiquidThreads alpha' [production]
23:26 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63523 into LiquidThreads production' [production]
23:24 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63524 into LiquidThreads alpha' [production]
23:24 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63523 into LiquidThreads production' [production]
23:15 <andrew> synchronized php-1.5/extensions/LiquidThreads/classes/View.php 'Merge r63154 into LiquidThreads production' [production]
23:13 <andrew> synchronized php-1.5/extensions/LiquidThreads_alpha/classes/View.php 'Merge r63154 into LiquidThreads alpha' [production]
20:22 <rainman-sr> this morning's search API outage seems to be connected with an index update on search8 which triggered high GC load and stuck threads; removing enwp spell index from search8 [production]
15:37 <hcatlin> the message before was referring to the mobile site. [production]
15:37 <hcatlin> deployed new languages. cs homepages. [production]
14:58 <domas> forgot to log - yesterday's API fail was repeated today too, all API nodes were blocked on talking to lucene, and search cluster was idle too, and... [production]
14:54 <catrope> synchronized php-1.5/languages/messages/MessagesEt.php 'force recache to solve l10ncache issue' [production]
14:51 <catrope> synchronized php-1.5/includes/LocalisationCache.php 'r63417' [production]
00:41 <aaron> synchronized php-1.5/extensions/FlaggedRevs_alpha/FlaggedRevs.hooks.php 'deployed r63447' [production]
00:40 <aaron> synchronized php-1.5/extensions/FlaggedRevs_alpha/FlaggedRevs.class.php 'deployed r63447' [production]
2010-03-08 §
23:59 <tstarling> synchronized php-1.5/includes/Sanitizer.php 'r63426' [production]
23:57 <aaron> synchronized php-1.5/wmf-config/flaggedrevs.php [production]