2010-01-12
14:50 <Rob> taking down db10 to relocate from pmtpa-b1 to sdtpa-a2 [production]
14:50 <Rob> fixed issues with transcode2 and transcode3, completing base installation. [production]
05:48 <tstarling> synchronized php-1.5/includes/HTMLCacheUpdate.php [production]
05:48 <tstarling> synchronized php-1.5/includes/BacklinkCache.php [production]
05:47 <Tim> deploying r60962 [production]
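The "synchronized" lines above are the log output of Wikimedia's per-file deploy helper; a sketch of the invocations behind them (the exact argument form and log message are assumptions):

    # push r60962's changed files out to all apaches (sketch):
    sync-file php-1.5/includes/BacklinkCache.php 'deploying r60962'
    sync-file php-1.5/includes/HTMLCacheUpdate.php 'deploying r60962'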
01:17 <Tim> on streber: removed a corrupt torrus DB file so it could be rebuilt; torrus should be working now [production]
00:57 <Tim> killed frozen torrus cron jobs and ran "torrus compile --tree=Network --force" [production]
00:51 <Tim> maybe torrus collector is still broken, trying /etc/init.d/torrus-common force-reload [production]
00:46 <Tim> with mpm-prefork, managed to debug it fairly easily. Moved away the permanently locked DB file render_cache.db; torrus.wikimedia.org is now fixed [production]
00:39 <Fred> restarting pdns on ns1 [production]
00:38 <Tim> switching streber to apache2-mpm-prefork, can't work out why it's not working [production]
00:22 <Tim> trying "apache2 -X" on streber [production]
00:00 <Tim> restarting apache on streber [production]
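Read bottom-up, the streber entries above trace a single incident: an opaque apache hang, foreground debugging, a switch to mpm-prefork, then frozen torrus cron jobs and two locked/corrupt BerkeleyDB files. Roughly, in chronological order (the render_cache.db location is an assumption based on stock torrus packaging):

    apache2 -X                                  # run a single worker in the foreground to debug the hang
    mv render_cache.db render_cache.db.bak      # move the permanently locked render cache aside
    /etc/init.d/torrus-common force-reload      # restart the wedged collector
    torrus compile --tree=Network --force       # rebuild the compiled tree after removing the corrupt DB file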
2010-01-11
23:38 <domas> logging the fact that we had a cache-layer meltdown at some point during the day [production]
22:30 <domas> leaving bits.pmtpa on db19's varnish; in case of trouble, uncomment the bits.pmtpa .2 record in /etc/powerdns/templates/wikimedia.org and run authdns-update [production]
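The fallback left in place above amounts to a two-step runbook; a sketch (file and script names as quoted in the entry):

    $EDITOR /etc/powerdns/templates/wikimedia.org   # uncomment the bits.pmtpa .2 record
    authdns-update                                  # regenerate zones and push to the nameservers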
19:43 <fvassard> synchronized php-1.5/wmf-config/mc.php 'Swapped memcached from srv125 to srv232' [production]
19:06 <Rob> new apaches srv255, srv257 deployed. Updated node groups and synced nagios [production]
19:03 <Rob> new apache server srv254 deployed [production]
18:24 <atglenn> copy backlog of image data from ms1 to ms7 (running in screen as root on both boxes) [production]
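A minimal sketch of that kind of catch-up copy; rsync and the paths are assumptions, not necessarily what was actually used:

    screen -S ms1-to-ms7                                # survive SSH disconnects; run as root
    rsync -a --partial /export/upload/ ms7:/export/upload/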
14:43 <mark> Rebooting fuchsia, locked up again [production]
14:24 <mark> Increased load on knsq16-22 by upping lvs weight from 10 to 15 [production]
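Under LVS's weighted scheduling, raising a realserver's weight from 10 to 15 gives it half again as much of the traffic relative to its peers. If applied with raw ipvsadm rather than the balancer's own config ($VIP stands in for the real service address):

    for rs in knsq16 knsq17 knsq18 knsq19 knsq20 knsq21 knsq22; do
        ipvsadm -e -t $VIP:80 -r $rs:80 -w 15           # edit each realserver entry in place
    done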
2010-01-10
23:02 <midom> synchronized php-1.5/wmf-config/lucene.php 'rainman asked, rainman guilty, hehehe' [production]
23:01 <midom> synchronized php-1.5/wmf-config/secure.php [production]
17:36 <rainman-sr> search limit raised to 500 again, interwiki search re-enabled for "other" wikis [production]
16:07 <kate> synchronized php-1.5/wmf-config/db.php 'take ixia back out' [production]
16:06 <kate> synchronized php-1.5/wmf-config/db.php 'put ixia back' [production]
16:06 <rainman-sr> restarting search cluster to deploy search13-19 [production]
15:58 <domas> all bits serving switched back to text cluster, we have problems with all threads blocking on write(): http://p.defau.lt/?dOBxveiHj_ukjzupEBX3rA [production]
15:19 <rainman-sr> configuring search13-19, will leave search20 as spare [production]
14:22 <domas> apparently varnish worker threads are blocking on network output, ... :) [production]
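One way to confirm a diagnosis like this is to dump every worker thread's stack and count how many are parked in write(); a generic sketch, not the commands actually used:

    gdb -p <varnishd-child-pid> -batch -ex 'thread apply all bt' > /tmp/threads.txt
    grep -c write /tmp/threads.txt      # most threads sitting in write() = blocked on network output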
12:40 <domas> full bits pmtpa load sent to sq1 [production]
12:04 <domas> sending half of bits load to sq1 [production]
12:02 <domas> set up separate geo balancing for bits via bits-geo.wikimedia.org [production]
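With bits on its own geo-balanced hostname, the split is visible from outside with plain dig (a sketch; the exact record layout is an assumption):

    dig +short bits.wikimedia.org       # expect a CNAME through bits-geo.wikimedia.org
    dig +short bits-geo.wikimedia.org   # the answer varies with the querying resolver's region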
10:45 <midom> synchronized php-1.5/wmf-config/CommonSettings.php 'setting extension asset path to bits.wm' [production]
10:42 <midom> synchronized php-1.5/extensions/UsabilityInitiative/UsabilityInitiative.hooks.php [production]
10:29 <midom> synchronized php-1.5/includes/Setup.php [production]
10:29 <midom> synchronized php-1.5/includes/DefaultSettings.php [production]
09:36 <domas> moving over static assets to 'bits.wikimedia.org' [production]
09:11 <midom> synchronized php-1.5/wmf-config/CommonSettings.php [production]
09:09 <midom> synchronized php-1.5/wmf-config/secure.php [production]
08:26 <kate> synchronized php-1.5/wmf-config/db.php [production]
08:25 <river> taking ixia out of rotation to dump commons [production]
2010-01-09
17:31 <mark> Upgraded pdns-recursor to 3.1.7.2 on dobson, mchenry, lily [production]
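A rolling package upgrade like that is a one-liner per host; roughly (invocation assumed; the new version is taken to be available from the configured apt sources):

    for h in dobson mchenry lily; do
        ssh $h 'apt-get install -y pdns-recursor'   # postinst restarts the daemon
    done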
15:32 <mark> Temporarily filtering all prefixes from 1299 on br1-knams, due to some balanced link blackholing issue [production]
13:10 <midom> ran sync-common-all [production]
12:45 <domas> restarted, that is :) [production]
12:44 <domas> fixed ns1, was deadlocked [production]
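The two entries above, read bottom-up, record a deadlocked authoritative pdns on ns1 that needed a restart; a generic liveness check plus recovery, as a sketch:

    pdns_control ping || /etc/init.d/pdns restart   # restart only if the daemon no longer answers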
2010-01-08
21:48 <Rob> nagios is flapping errors for esams hosts, but they are still up and functional; perhaps due to the new transit setup earlier today. [production]
20:10 <Rob> pushing dns update for flaggedrevssandbox project [production]
16:09 <Rob> finished mobile2 initial setup, gave hcatlin sudo rights to server [production]