2009-02-08
20:17 <domas> we were hitting APC lock contention after some CPU peak. Dear Ops Team, please upgrade to APC with localcache support. :))))) [production]
2009-02-07
22:49 <domas> db17 came up, but it crashed with different symptoms than other boxes, and it was running 2.6.28.1 kernel. might be previous hardware problems resurfacing [production]
21:23 <domas> db17 down [production]
2009-02-06
12:33 <brion> stopped that process since it was taking a while and just saved it as an hourly cronjob. :) log to /opt/mwlib/var/log/cache-cleaning [production]
12:28 <brion> running mw-serve cache cleanup for files older than 24h [production]
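(A minimal sketch of what that hourly cronjob could look like; the cache directory is an assumption, only the 24h cutoff and the log path are from the entries above.)
    # hypothetical /etc/cron.d entry; /opt/mwlib/var/cache is a guess at the cache dir
    0 * * * * root find /opt/mwlib/var/cache -type f -mmin +1440 -delete >> /opt/mwlib/var/log/cache-cleaning 2>&1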
2009-02-05
18:19 <brion> put ulimit back with -v 1024000 that's better :D [production]
18:18 <brion> removed the ulimit; was unable to reach server with it in place [production]
18:15 <brion> hacked mw-serve to ulimit -v 102400 on erzurumi, see if this helps with the leaks for now [production]
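(Roughly what the hack amounts to, as a sketch; the mw-serve start path is hypothetical, the two -v values are from the entries above.)
    # cap mw-serve's virtual memory before launching it
    # first try, ulimit -v 102400 (~100 MB), made the server unreachable; 1024000 (~1 GB) stuck
    ulimit -v 1024000
    exec /opt/mwlib/bin/mw-serve    # hypothetical daemon path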
16:56 <domas> rebooted erzurumi, placed swap-watchdog ( http://p.defau.lt/?mELQFcwRSvYRYdiIR9pvKQ ) into rc.local [production]
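(The pasted script isn't reproduced here; a minimal sketch of such a swap watchdog, with a hypothetical threshold and action:)
    # rc.local snippet: reboot if swap usage crosses a threshold
    ( while sleep 60; do
        used_kb=$(awk '/SwapTotal/{t=$2} /SwapFree/{f=$2} END{print t-f}' /proc/meminfo)
        [ "$used_kb" -gt 524288 ] && /sbin/reboot    # >512 MB swapped: hypothetical cutoff
      done ) &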
16:03 <mark> Added Qatar (634) to the list of esams countries [production]
01:27 <Tim> migrated arzwiki upload directory from amane to ms1 [production]
01:00 <Tim> fixed arzwiki upload directory permissions [production]
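(A hedged sketch of such a migration; the mount points and ownership are guesses, only the wiki and the two hosts are from the entries above.)
    # run on amane: copy the arzwiki upload tree to ms1, then fix ownership there
    rsync -a /export/upload/arzwiki/ ms1:/export/upload/arzwiki/
    ssh ms1 chown -R apache:apache /export/upload/arzwiki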
00:56 <Tim> moved most cron jobs from admin user cron tabs to /etc/cron.d on hume [production]
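(For reference: unlike per-user crontabs, /etc/cron.d entries carry an explicit user field; the job below is hypothetical.)
    # /etc/cron.d/example  -  min hour dom mon dow user command
    */30 * * * *  apache  /usr/local/bin/run-maintenance.sh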
2009-02-04
22:33 <tomaszf> Adding cron for torblock under tfinc@hume [production]
22:20 <tomaszf> ran loadExitNodes() to update tor block list [production]
18:36 <brion> running TorBlock/loadExitNodes.php [production]
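(A sketch of that cron line; the schedule and the path prefix are assumptions, the maintenance script name is from the entry above.)
    # hypothetical entry in tfinc's crontab on hume
    0 */6 * * * php /home/wikipedia/common/php/extensions/TorBlock/loadExitNodes.php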
17:25 <brion> stripped BOM from en.planet config.ini; re-running. [production]
17:24 <brion_> attempting to run planet update for en.planet manually..... there's a config error [production]
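(One way to strip a UTF-8 BOM like that, assuming GNU sed; the file name is from the entry above.)
    # delete the three BOM bytes (EF BB BF) from the first line, in place
    sed -i '1s/^\xEF\xBB\xBF//' config.ini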
16:30 <domas> stealing db27 for moar tests [production]
2009-02-03
13:05 <mark> Remote-hands replaced some cables, fuchsia is back up but idling [production]
06:57 <Tim> doing some schema changes on the otrs database. Some fields should be blobs and are text instead, perhaps due to a previous 4.0 -> 5.0 MySQL upgrade [production]
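(The shape of such a change; the table and column names below are placeholders, not the actual OTRS schema objects touched.)
    # convert a wrongly-typed text column back to a blob
    mysql otrs -e "ALTER TABLE some_table MODIFY some_column LONGBLOB;"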
01:49 <Tim> added blob_tracking table to ukwikimedia [production]
01:42 <Tim> repooled db3 and db4 [production]
00:34 <mark> Moved traffic back [production]
00:28 <mark> Shutdown switchport of fuchsia in order to prevent it from interfering with mint (which took up text LVS as well as upload) [production]
00:20 <mark> Moved European traffic to pmtpa - text LVS unreachable [production]
2009-02-02
23:54 <domas> took out db29 for some testing [production]
22:07 <mark> Modified Exim configuration on williams to deliver spam-recognized messages to [[OTRS]] with an X-OTRS-Queue: Junk header and the SpamAssassin headers, rather than discarding them [production]
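(A sketch of the Exim idea rather than the actual williams config: on the OTRS delivery transport, tag SpamAssassin-flagged mail instead of dropping it.)
    # add the queue-routing header when SA has flagged the message
    headers_add = ${if eq{$h_X-Spam-Flag:}{YES}{X-OTRS-Queue: Junk}{}}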
21:35 <brion> reverting change to Cite_body.php [production]
21:28 <brion> caching for cite refs is known to cause problems with links randomly being replaced by other links; likely a strip marker problem. Andrew is investigating [production]
19:59 <domas> merged in Andrew's Cite cache to live site [production]
16:47 <brion-sick> syncing update to Collection to do more efficient sidebar lookups [production]
16:18 <brion-sick> large spike in text backend service times [production]
16:15 <brion-sick> secure.wikimedia.org is returning 503 Service Temporarily Unavailable [production]
08:11 <Tim> removing ancient static HTML dump from srv31 [production]
08:05 <Tim> removed cluster13 and cluster14 from db.php, will watch exception.log for attempted connections [production]
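(One way to watch for those attempted connections; the log path is assumed from the /home/wikipedia/logs directory seen below.)
    # look for stray connection attempts to the removed clusters
    tail -f /home/wikipedia/logs/exception.log | grep -Ei 'cluster1[34]'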
08:02 <Tim> removed srv130 from LVS and the apaches node group, not accessible by ssh but still serving pages [production]
07:56 <Tim> find /home/wikipedia/logs -size 0 -delete [production]
07:43 <Tim> re-added db22 to s1 rotation, no explanation for its removal in server admin log [production]
06:39 <Tim> dropped the otrs_test database [production]
06:38 <Tim> moved the OTRS database from otrs_real back to otrs. Updated exim4 config on mchenry [production]
04:23 <Tim> db10's relay log was corrupted, did a flush slave/change master [production]
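(The standard recipe behind that one-liner; the binlog file and position are placeholders to be read from SHOW SLAVE STATUS.)
    mysql -e "STOP SLAVE; SHOW SLAVE STATUS\G"    # note Relay_Master_Log_File / Exec_Master_Log_Pos
    mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='db1-bin.000123', MASTER_LOG_POS=456789; START SLAVE;"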
01:11 <Tim> started mysqld on db23, doing recovery [production]
00:59 <Tim> rebooted db23 [production]
00:56 <Tim> db23 down, depooled [production]
00:05 <Tim> adjusted innodb configuration on db10, restarted, starting replication [production]
2009-02-01
23:40 <Tim> OTRS recovery script done [production]
21:25 <Tim> running script to copy deleted OTRS data from db10 [production]
03:52 <Tim> done 1 and 2 [production]
02:52 <Tim> patched GenericAgent.pm to prevent ticket deletion [production]