2009-10-26
20:16 <Andrew> Going to update LiquidThreads to trunk state in a few minutes [production]
16:08 <rainman-sr> overloads all around, turned off en/de/fr wiki highlighting so that searches don't time out [production]
11:10 <hcatlin> reworked mobile1's config so that it's more standardized and more of the config lives in the repo [production]
08:53 <domas> updated nagios to reflect changed server roles [production]
08:43 <domas> dewiki is now separate cluster, s5, replication switch over done at http://p.defau.lt/?kfvvlNOc4TkJ_6SCAVe6mg [production]
08:42 <midom> synchronized php-1.5/wmf-config/CommonSettings.php 'dewiki readwrite' [production]
08:40 <midom> synchronized php-1.5/wmf-config/db.php 'restructuring s2dewiki into s5' [production]
08:38 <midom> synchronized php-1.5/wmf-config/CommonSettings.php 'dewiki read-only' [production]
07:57 <midom> synchronized php-1.5/wmf-config/db.php 'entirely separating dewiki slaves' [production]
06:54 <midom> synchronized php-1.5/wmf-config/db.php 'taking out db4 for copy to db23' [production]
05:45 <midom> synchronized php-1.5/wmf-config/db.php [production]
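The 05:45–08:43 entries above record splitting dewiki out of the shared s2 cluster into its own cluster, s5: depool a slave for the copy, set the wiki read-only, repoint the wiki-to-cluster map in db.php, then re-enable writes. A minimal sketch of the section-map idea, with hypothetical names (the real mapping lives in PHP in wmf-config/db.php, not in this form):

```python
# Hypothetical wiki -> DB-cluster ("section") map, loosely modelled on
# how wmf-config/db.php assigns wikis to database clusters.
sections_by_db = {
    "dewiki": "s2",   # before the split: dewiki shares s2
}

def section_for(wiki, default="s3"):
    """Return the DB cluster a wiki lives on; unlisted wikis use the
    default section."""
    return sections_by_db.get(wiki, default)

# The split amounts to repointing the map while the wiki is read-only,
# once the new cluster's slaves have a consistent copy of the data:
sections_by_db["dewiki"] = "s5"
```

The read-only window (08:38 to 08:42 in the log) exists so no writes land on the old cluster after the copy is taken.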
2009-10-25
15:23 <domas> converting usability initiative tables to InnoDB... [production]
13:23 <domas> set up snapshot rotation on db10 [production]
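"Snapshot rotation" here typically means keeping only the N newest backup snapshots and deleting the rest so the snapshot pool doesn't grow without bound. A hedged sketch of that retention policy (names and the keep count are illustrative, not db10's actual settings):

```python
def rotate_snapshots(snapshots, keep=7):
    """Given snapshot names carrying sortable timestamps, return the
    snapshots to delete so that only the `keep` newest remain."""
    newest_first = sorted(snapshots, reverse=True)
    return newest_first[keep:]

# Example: sixteen daily snapshots, keep the newest seven.
snaps = [f"db10-2009-10-{day:02d}" for day in range(10, 26)]
to_delete = rotate_snapshots(snaps, keep=7)
```

Embedding the timestamp in the snapshot name makes lexical sort order equal chronological order, which keeps the rotation logic trivial.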
12:36 <hcatlin> mobile1: created init.d/cluster to correct the USR1 signal problem, fully updated the sysops docs on wikitech [production]
12:03 <domas> Mark, I'm sure you'll like that! ;-p~ [production]
12:02 <domas> started sq43 without /dev/sdd COSS store (manual conf hack) [production]
11:54 <domas> removed ns3 from nagios, added ns1 [production]
11:45 <domas> bounced ns1 too; it was affected by the selective-answer leak ages ago (same count as ns0, btw: 507!), just never noticed by nagios. this seems to resolve some slowness I noticed a few times. [production]
11:41 <domas> bounced pdns on ns0, was affected by the selective-answer leak [production]
2009-10-24
16:49 <rainman-sr> decreasing the maximum number of search hits per request (i.e. per page) to 50 [production]
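Capping hits per request is a standard defence after the overloads logged the day before: however many results a client asks for, the backend never returns more than the cap. A minimal sketch of that clamp (the constant comes from the entry above; the function name is hypothetical):

```python
MAX_HITS_PER_REQUEST = 50  # cap from the log entry above

def clamp_hits(requested):
    """Clamp a client-requested result count to the server-side cap,
    and never below one result."""
    return max(1, min(requested, MAX_HITS_PER_REQUEST))
```

The lower bound of 1 guards against zero or negative values from malformed requests.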
16:40 <apergos> re-enabled zfs replication from ms1 to ms5, set to 20 minute intervals now, keeping an eye on it to see if we have failures in running to completion [production]
13:28 <rainman-sr> finished restructuring en.wp, continuing with normal incremental search updates [production]
11:50 <domas> removed hardy-backports from fenari sources.list, added bzr ppa to sources.list.d/bzrppa.list [production]
2009-10-23
23:37 <tstarling> synchronized wmf-deployment/cache/trusted-xff.cdb [production]
23:31 <tstarling> synchronized wmf-deployment/cache/trusted-xff.cdb [production]
23:24 <Tim> updating TrustedXFF (bolt browser) [production]
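TrustedXFF maintains a list of known proxies (here, the BOLT mobile browser's transcoding proxies) whose X-Forwarded-For headers may be believed, so that edits arrive attributed to the real client IP instead of the proxy. A hedged sketch of the general technique, not the extension's actual API (the addresses are documentation examples, not real entries):

```python
# Example proxy addresses only; the real list is compiled into trusted-xff.cdb.
TRUSTED_PROXIES = {"203.0.113.10", "203.0.113.11"}

def client_ip(peer_ip, xff_header=None):
    """Walk the X-Forwarded-For chain right to left past trusted proxies;
    the first untrusted hop is treated as the real client address."""
    if peer_ip not in TRUSTED_PROXIES or not xff_header:
        return peer_ip  # untrusted peer: ignore the header entirely
    hops = [h.strip() for h in xff_header.split(",")]
    for hop in reversed(hops):
        if hop not in TRUSTED_PROXIES:
            return hop
    return hops[0]  # every hop trusted: fall back to the leftmost entry
```

Only walking past *trusted* hops matters: an attacker can put anything in X-Forwarded-For, so the header is ignored unless the connecting peer itself is on the list.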
22:36 <domas> db28 has multiple fan failures (LOM is finally able to do something :) - still needs datacenter ops [production]
22:20 <domas> db28 is toast, needs a cold restart by datacenter ops, LOM not able to do anything [production]
22:20 <midom> synchronized php-1.5/wmf-config/db.php 'db28 dead' [production]
11:17 <domas> Fixed skip-list of cached query pages, was broken for past two months :) [production]
10:54 <midom> synchronized php-1.5/thumb.php 'removing livehack' [production]
10:52 <domas> rotating logs becomes difficult once they get too big, so they just keep growing indefinitely! db20's / is nearly full, lots of /var/log/remote ;-) [production]
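The failure mode above is the classic one: rotation triggered only on a schedule lets a busy log outgrow what a copy-and-compress rotation can handle, after which rotation fails and the file grows forever. A size-based check, run often, avoids the trap. A minimal sketch (the threshold is an arbitrary example, not db20's configuration):

```python
import os
import tempfile

def needs_rotation(path, max_bytes=500 * 1024 * 1024):
    """Size-based rotation check: rotate before a log grows too big to
    handle. Missing files simply report False."""
    try:
        return os.path.getsize(path) >= max_bytes
    except OSError:
        return False

# Demo on a throwaway 1 KiB file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
small_ok = needs_rotation(f.name)                   # under the default cap
big_hit = needs_rotation(f.name, max_bytes=1024)    # at the 1 KiB cap
os.unlink(f.name)
```

In practice the same idea is what logrotate's size-based directives express declaratively; the point is that the trigger must be file size, not just the calendar.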
10:39 <domas> who watches the watchers? :) rrdtool process on spence was using 8G of memory. :-)))) [production]
10:24 <domas> semaphore leaks made some apaches fail; a failed apache in the rendering farm was not depooled, so the 404 handler served plenty of "can't connect to host" broken thumbs. [production]
10:13 <domas> apparently there are intermittent connection failures from ms4 to the scalers [production]
09:56 <midom> synchronized php-1.5/thumb.php 'error header livehack' [production]
04:04 <domas> noticed intermittent network failure inside pmtpa [production]
04:01 <domas> swapped the jobs table on db22 with an empty one; the old one had just a few no-op entries and five million invalidated rows. hit an interesting (but probably easy to fix) performance problem in the mtr_memo_release/mtr_commit code inside MySQL :) [production]
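Swapping a bloated table for an empty one is usually done with renames rather than deletes, since dropping millions of rows one by one is far costlier than dropping a whole table. In MySQL this is a single atomic `RENAME TABLE jobs TO jobs_old, jobs_new TO jobs`. A sketch of the same idea using sqlite3 from the standard library (sqlite needs two renames, so it isn't atomic the way MySQL's is):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("noop",)] * 1000)  # stand-in for the bloated table

# Build an empty table with the same schema, then swap names.
conn.execute("CREATE TABLE jobs_new (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("ALTER TABLE jobs RENAME TO jobs_old")
conn.execute("ALTER TABLE jobs_new RENAME TO jobs")
conn.execute("DROP TABLE jobs_old")  # one cheap drop, not a row-by-row purge

remaining = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
```

Readers see a table named `jobs` throughout; only the brief window between the two renames (absent in MySQL's atomic form) is visible.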
03:17 <Fred> restarted powerdns on ns2 to kill some zombies with a double tap :p [production]
2009-10-22
21:48 <aaron> synchronized php-1.5/extensions/FlaggedRevs/language/FlaggedRevs.i18n.php 'deploy r58038' [production]
16:25 <Andrew> Updating LiquidThreads to trunk state again [production]
13:34 <Andrew> Updating LiquidThreads to trunk state, scapping. [production]
2009-10-21
22:30 <Tim> upgraded libpoppler2 on all apaches [production]
21:21 <Tim> updating ubuntu mirror [production]
20:50 <Tim> apt-get upgrade on pdf1 for USN-850-1 [production]
19:59 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php [production]
13:37 <rainman-sr> restarting search incremental update on all wikis, will sync to search servers when updates catch up [production]
2009-10-20
20:16 <RoanKattouw> Brion synced r57957 [production]
19:59 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php '21023 Allow bureaucrats to remove sysop flag on Simple English Wikiquote' [production]
15:54 <hcatlin> Deployed changes to S60. Stopping redirect for Nokia Series60, because it needs more work. [production]