2009-02-09 §
15:53 <mark> Moved upload.esams LVS back to mint [production]
15:37 <mark> Moved upload.esams LVS from mint to hawthorn [production]
15:28 <mark> Reinstalled server hawthorn with Hardy 8.04 [production]
13:55 <domas> fixed ganglia group for srv159 (it is scaler, not appserv) [production]
13:51 <domas> brought srv182 up [production]
13:32 <domas> repooled srv104 and srv105, after a few months of vacation [production]
13:20 <domas> killed a few orphaned tidy processes that had been very, very busy since Feb 1 [production]
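A hedged sketch of how such stale tidy processes might be found before being killed by hand (the actual selection was done manually; the commands below are an assumption):
  # list tidy processes with their start times; anything running since Feb 1 is a candidate
  ps -eo pid,lstart,etime,comm | grep '[t]idy'
  # then kill the stale PIDs one by one with kill <pid>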
13:13 <domas> heeheee, extorted this: [15:11] <rainman-sr> so, srv77,79,80, rose, coronelli and maurus could be converted to apaches [production]
12:36 <Tim> trying apc.localcache=1 on srv176 [production]
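A minimal sketch of what trying that setting could look like on one apache, assuming a Debian/Ubuntu-style PHP layout (the INI path is an assumption):
  # enable APC's per-process local cache to reduce shared-lock contention
  echo 'apc.localcache = 1' >> /etc/php5/conf.d/apc.ini
  apache2ctl graceful    # reload workers so the new setting takes effect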
04:27 <Tim> patching in r46936 [production]
03:48 <Tim> attempting to reproduce APC lock contention on srv188 [production]
2009-02-08 §
22:43 <brion> may or may not have fixed that -- log file was unwritable. hard to test the command since 'su' bitches about the apache user not being loginable on hume :P [production]
22:39 <brion> investigating why centralnotice update is still broken. getting fatal php errors wtf? [production]
20:17 <domas> we were hitting APC lock contention after some CPU peak. Dear Ops Team, please upgrade to APC with localcache support. :))))) [production]
2009-02-07 §
22:49 <domas> db17 came up, but it crashed with different symptoms than the other boxes, and it was running a 2.6.28.1 kernel. might be previous hardware problems resurfacing [production]
21:23 <domas> db17 down [production]
2009-02-06 §
12:33 <brion> stopped that process since it was taking a while and just saved it as an hourly cronjob. :) log to /opt/mwlib/var/log/cache-cleaning [production]
12:28 <brion> running mw-serve cache cleanup for files older than 24h [production]
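A minimal sketch of the hourly cleanup cronjob described in the two entries above, in /etc/cron.d style; the cache path and find options are assumptions, only the log path comes from the entry:
  # delete mw-serve cache files older than 24h, once an hour
  0 * * * * root find /opt/mwlib/var/cache -type f -mmin +1440 -delete >> /opt/mwlib/var/log/cache-cleaning 2>&1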
2009-02-05 §
18:19 <brion> put ulimit back with -v 1024000; that's better :D [production]
18:18 <brion> removed the ulimit; was unable to reach server with it in place [production]
18:15 <brion> hacked mw-serve to ulimit -v 102400 on erzurumi, see if this helps with the leaks for now [production]
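A hedged sketch of the ulimit wrapper around mw-serve described in the three entries above; the daemon invocation is an assumption. The first cap (-v 102400, ~100 MB) made the server unreachable and was relaxed to -v 1024000 (~1 GB):
  # cap per-process virtual memory so a leaking mw-serve fails instead of swapping the box
  ulimit -v 1024000          # in kB; 102400 proved too tight
  exec /usr/bin/mw-serve     # actual path and arguments are an assumption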
16:56 <domas> rebooted erzurumi, placed swap-watchdog ( http://p.defau.lt/?mELQFcwRSvYRYdiIR9pvKQ ) into rc.local [production]
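The pasted script is no longer available; purely as an illustration, a swap watchdog dropped into rc.local might look like this (interval, threshold and target process are all assumptions):
  # every minute, if free swap drops below ~256 MB, kill the suspected leaker
  while sleep 60; do
      free_swap=$(awk '/SwapFree/ {print $2}' /proc/meminfo)    # in kB
      [ "$free_swap" -lt 262144 ] && pkill -f mw-serve           # target is an assumption
  done &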
16:03 <mark> Added Qatar (634) to the list of esams countries [production]
01:27 <Tim> migrated arzwiki upload directory from amane to ms1 [production]
01:00 <Tim> fixed arzwiki upload directory permissions [production]
00:56 <Tim> moved most cron jobs from admin user cron tabs to /etc/cron.d on hume [production]
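For reference, a hypothetical /etc/cron.d entry; unlike a per-user crontab, each line carries an explicit user field (the job, schedule and paths below are made up):
  # /etc/cron.d/example-job
  */15 * * * * apache /usr/local/bin/example-maintenance-job >> /var/log/example-job.log 2>&1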
2009-02-04 §
22:33 <tomaszf> Adding cron for torblock under tfinc@hume [production]
22:20 <tomaszf> ran loadExitNodes() to update tor block list [production]
18:36 <brion> running TorBlock/loadExitNodes.php [production]
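A hedged sketch of the exit-node refresh and the cron added for it; the MediaWiki install path and the interval are assumptions, only the script name comes from the entries above:
  # one-off run
  php /home/wikipedia/common/php/extensions/TorBlock/loadExitNodes.php
  # crontab entry under tfinc@hume (interval assumed)
  0 */6 * * * php /home/wikipedia/common/php/extensions/TorBlock/loadExitNodes.php >/dev/null 2>&1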
17:25 <brion> stripped BOM from en.planet config.ini; re-running. [production]
17:24 <brion_> attempting to run planet update for en.planet manually..... there's a config error [production]
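One way to strip a UTF-8 BOM like the one that broke en.planet's configuration (the exact file path is an assumption):
  # remove the three-byte UTF-8 BOM from the start of the file, in place
  sed -i '1s/^\xEF\xBB\xBF//' config.ini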
16:30 <domas> stealing db27 for moar tests [production]
2009-02-03 §
13:05 <mark> Remote-hands replaced some cables, fuchsia is back up but idling [production]
06:57 <Tim> doing some schema changes on the otrs database. Some fields should be blobs and are text instead, perhaps due to a previous 4.0 -> 5.0 MySQL upgrade [production]
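The concrete change would be ALTER TABLE statements converting the affected text columns to blobs; a hypothetical example (table and column names are made up, not the real OTRS schema):
  # convert a text column that should hold binary data into a blob
  mysql otrs -e "ALTER TABLE example_table MODIFY example_field LONGBLOB"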
01:49 <Tim> added blob_tracking table to ukwikimedia [production]
01:42 <Tim> repooled db3 and db4 [production]
00:34 <mark> Moved traffic back [production]
00:28 <mark> Shut down fuchsia's switchport in order to prevent it from interfering with mint (which took over text LVS as well as upload) [production]
00:20 <mark> Moved European traffic to pmtpa - text LVS unreachable [production]
2009-02-02 §
23:54 <domas> took out db29 for some testing [production]
22:07 <mark> Modified Exim configuration on williams to not discard but delivered spam-recognized messages to [[OTRS]] with an X-OTRS-Queue: Junk header, as well as SpamAssassin headers [production]
21:35 <brion> reverting change to Cite_body.php [production]
21:28 <brion> caching for cite refs is known to cause problems with links randomly being replaced by other links; likely a strip marker problem. andrew is investigating [production]
19:59 <domas> merged in Andrew's Cite cache to live site [production]
16:47 <brion-sick> syncing update to Collection to do more efficient sidebar lookups [production]
16:18 <brion-sick> large spike in text backend service times [production]
16:15 <brion-sick> secure.wikimedia.org is returning 503 Service Temporarily Unavailable [production]
08:11 <Tim> removing ancient static HTML dump from srv31 [production]
08:05 <Tim> removed cluster13 and cluster14 from db.php, will watch exception.log for attempted connections [production]
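A hedged one-liner for that watch (the exception.log location is an assumption):
  tail -F /home/wikipedia/logs/exception.log | grep -Ei 'cluster1[34]'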
08:02 <Tim> removed srv130 from LVS and the apaches node group, not accessible by ssh but still serving pages [production]