2011-04-05 §
22:58 <Ryan_Lane> clearing up some space on searchidx1 [production]
22:20 <notpeter> crammed an etherpad db into db9's mysql hole. [production]
17:57 <Ryan_Lane> restarting llsearchd on all search boxes [production]
17:45 <RoanKattouw> Restarted morebots, running on wikitech as catrope [production]
17:45 <Ryan_Lane> changing the udp log location for search to emery [production]
12:16 <catrope> synchronized php-1.17/wmf-config/InitialiseSettings.php 'Undo $wgForceUIMsgAsContentMsg change on incubator from last night per DannyB' [production]
2011-04-04 §
23:28 <Ryan_Lane> uploading ircecho package to lucid-wikimedia repo, for nagios irc bot [production]
22:22 <Ryan_Lane> upgrading wikimedia-task-appserver package on srv281 [production]
22:22 <Ryan_Lane> uploading new version of wikimedia-task-appserver to lucid-wikimedia repo; merges back in 1.17 changes that were missing [production]
22:04 <RobH> updated noc robots entry in its apache config on fenari [production]
21:58 <Ryan_Lane> srv281 is acting as a temporary scaling server for testing of lucid imagescalers, and to help with thumbs load. [production]
21:27 <Ryan_Lane> depooling srv281 from appservers [production]
21:21 <Ryan_Lane> syncing apaches to get configuration pushed to srv281 [production]
21:17 <Ryan_Lane> rebooting srv281 [production]
21:01 <Ryan_Lane> adding srv281 to rendering cluster in pybal via fenari [production]
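PyBal pools like the rendering cluster are driven by server-list files in which each line is a Python-style dict. A hedged sketch of what adding srv281 might have looked like; the file path, hostname, and field values here are assumptions, not taken from the log:

```
# hypothetical pybal server list entry (e.g. in a "rendering" pool file);
# exact path and fields are illustrative
{ 'host': 'srv281.pmtpa.wmnet', 'weight': 10, 'enabled': True }
```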
20:32 <Ryan_Lane> uploading a new version of wikimedia-task-appserver fixing a problem with sync-common [production]
20:13 <catrope> synchronized php-1.17/wmf-config/InitialiseSettings.php 'Add mainpage to $wgForceUIMsgAsContentMsg for incubatorwiki' [production]
19:55 <Ryan_Lane> srv281 successfully ran imagescaler puppet class. ready for testing. [production]
19:47 <Ryan_Lane> adding php5-fss to lucid-wikimedia repo [production]
19:11 <Ryan_Lane> adding wikimedia-task-appserver to lucid-wikimedia repo [production]
18:58 <RobH> bugzilla updates complete [production]
18:50 <RobH> updating bugzilla per rt#718 bz#28409 bz#28402 [production]
18:42 <notpeter> added cname etherpad for hooper.wikimedia.org [production]
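In BIND-style zone-file terms, the CNAME added above would look something like the following sketch (zone layout and TTL handling are illustrative; only the names come from the log entry):

```
; etherpad as an alias for hooper, in the wikimedia.org zone
etherpad    IN  CNAME   hooper.wikimedia.org.
```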
18:00 <Ryan_Lane> added the wikimedia-fonts package to lucid-wikimedia repo [production]
17:29 <notpeter> adding self to nagios group. rebooterizing nagios. [production]
05:58 <apergos> cleaned up perms on commons/thumb/a/af, left over from interrupted rsync test last night [production]
05:50 <tstarling> synchronized php-1.17/wmf-config/InitialiseSettings.php 'enabling pool counter on all wikis' [production]
04:12 <tstarling> synchronized php-1.17/wmf-config/InitialiseSettings.php 'enabling PoolCounter on testwiki and test2wiki' [production]
01:22 <Tim> apache CPU overload lasted ~10 mins; very high backend request rate, cause unknown, seems to have stopped now [production]
2011-04-03 §
18:42 <apergos> 8 rsyncs of ms4 thumbs restarted with better perms so scalers can write... in screen as root on ms5. If we start seeing nfs timeouts in the scaler logs please shoot a couple [production]
17:14 <mark> Deployed max-connections on all cache peers for esams.upload squids to their florida parents (current limit 200) [production]
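In squid.conf, a per-peer connection cap is set with the `max-conn=` option on the `cache_peer` line. A hedged sketch of the kind of change described above; the hostname, ports, and peering options are illustrative, only the 200-connection limit comes from the log:

```
# cap connections from an esams upload squid to a (hypothetical) Florida
# parent at 200 concurrent connections
cache_peer sq41.pmtpa.wmnet parent 3128 3130 carp max-conn=200
```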
17:00 <mark> Removed the carp weights on the esams backends again, as the weighting was completely screwed up [production]
16:59 <mark> Started knsq13 backend [production]
14:27 <catrope> ran sync-common-all [production]
14:26 <RoanKattouw> Running sync-common-all to deploy r85256 [production]
13:03 <apergos> shot rsyncs on ms5, setting 777 dir perms on all thumbnail dirs (eg e/ef/blablah.jpg) so scalers can write into them [production]
12:53 <apergos> did same for rest of projects and subdirs (777 on hash dirs) [production]
12:47 <apergos> chmod 777 on commons/thumb/*/* on ms5 so that scalers can create directories in there (mismatch of uid apache vs www-data) [production]
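The perms fix above (world-writable hash dirs so scalers can create files regardless of the apache/www-data uid mismatch) can be sketched as below, run against a scratch copy of the thumb tree rather than the real ms5 mount; all paths are illustrative:

```shell
# build a miniature thumb tree standing in for commons/thumb on ms5
mkdir -p /tmp/thumbfix/commons/thumb/e/ef
# open up the hash dirs (the */* level) so any uid can create files in them
chmod 777 /tmp/thumbfix/commons/thumb/*/*
# show the resulting mode on one hash dir
stat -c '%a' /tmp/thumbfix/commons/thumb/e/ef
```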
11:12 <mark> Raised per-squid connection limit to ms5 from 200 to 400 connections [production]
11:05 <mark> Raised per-squid connection limit to ms5 from 100 to 200 connections [production]
10:55 <mark> Fixed squid loop, the pmtpa.upload squids were using the esams squids as "CARP parents for distant content" [production]
10:29 <mark> Fixed puppet on sq42/43 [production]
09:44 <mark> Lowered FCGI thumb handlers from 90 to 60 again, to reduce concurrency [production]
08:08 <mark> Started 4 more rsyncs (8 total now) [production]
07:49 <mark> Removed mlocate from ms5, puppetising [production]
07:42 <mark> Started 4 rsyncs from ms4 to ms5 (--ignore-existing) [production]
07:32 <mark> increased thumb handler count from 60 to 90 [production]
07:11 <mark> Doubled the amount of fcgi thumb handlers [production]
07:08 <mark> Turned off logging of 404s to nginx error.log [production]
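nginx's `log_not_found` directive is the standard way to keep "file not found" (404) noise out of error.log; a hedged sketch of the sort of change described above (the actual ms5 config and location block may have differed):

```
# suppress "No such file or directory" entries in error.log for 404s
location / {
    log_not_found off;
}
```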
06:50 <mark> Restarted Apache on the image scalers [production]