2011-04-04
21:21 <Ryan_Lane> syncing apaches to get configuration pushed to srv281 [production]
21:17 <Ryan_Lane> rebooting srv281 [production]
21:01 <Ryan_Lane> adding srv281 to rendering cluster in pybal via fenari [production]
20:32 <Ryan_Lane> uploading a new version of wikimedia-task-appserver fixing a problem with sync-common [production]
20:13 <catrope> synchronized php-1.17/wmf-config/InitialiseSettings.php 'Add mainpage to $wgForceUIMsgAsContentMsg for incubatorwiki' [production]
19:55 <Ryan_Lane> srv281 successfully ran imagescaler puppet class. ready for testing. [production]
19:47 <Ryan_Lane> adding php5-fss to lucid-wikimedia repo [production]
19:11 <Ryan_Lane> adding wikimedia-task-appserver to lucid-wikimedia repo [production]
18:58 <RobH> bugzilla updates complete [production]
18:50 <RobH> updating bugzilla per rt#718 bz#28409 bz#28402 [production]
18:42 <notpeter> added cname etherpad for hooper.wikimedia.org [production]
18:00 <Ryan_Lane> added the wikimedia-fonts package to lucid-wikimedia repo [production]
17:29 <notpeter> adding self to nagios group. rebooterizing nagios. [production]
05:58 <apergos> cleaned up perms on commons/thumb/a/af, left over from interrupted rsync test last night [production]
05:50 <tstarling> synchronized php-1.17/wmf-config/InitialiseSettings.php 'enabling pool counter on all wikis' [production]
04:12 <tstarling> synchronized php-1.17/wmf-config/InitialiseSettings.php 'enabling PoolCounter on testwiki and test2wiki' [production]
01:22 <Tim> apache CPU overload lasted ~10 mins, v. high backend request rate, don't know cause, seems to have stopped now [production]
2011-04-03
18:42 <apergos> 8 rsyncs of ms4 thumbs restarted with better perms so scalers can write... in screen as root on ms5. If we start seeing nfs timeouts in the scaler logs please shoot a couple [production]
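A minimal sketch of what one of these rsync runs might look like inside screen on ms5; the hostnames, paths, and session name are assumptions, only the --ignore-existing flag (from the 07:42 entry further down) comes from the log:

 # hypothetical: one detached screen session per rsync; source/destination paths assumed
 screen -dmS thumbsync-1 \
   rsync -a --ignore-existing ms4.pmtpa.wmnet:/export/thumbs/ /export/thumbs/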
17:14 <mark> Deployed max-connections on all cache peers for esams.upload squids to their florida parents (current limit 200) [production]
17:00 <mark> Removed the carp weights on the esams backends again, as the weighting was completely screwed up [production]
16:59 <mark> Started knsq13 backend [production]
14:27 <catrope> ran sync-common-all [production]
14:26 <RoanKattouw> Running sync-common-all to deploy r85256 [production]
13:03 <apergos> shot rsyncs on ms5, setting 777 dir perms on all thumbnail dirs (eg e/ef/blablah.jpg) so scalers can write into them [production]
12:53 <apergos> did same for rest of projects and subdirs (777 on hash dirs) [production]
12:47 <apergos> chmod 777 on commons/thumb/*/* on ms5 so that scalers can create directories in there (mismatch of uid apache vs www-data) [production]
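A sketch of the permission fix described in the three entries above, assuming the thumbnails live under /export/thumbs on ms5 (the export path mentioned in the 06:33 entry below); the exact directory prefix is an assumption:

 # open up the hash directories so the scalers (running as a different uid) can create/write files
 chmod 777 /export/thumbs/commons/thumb/*/*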
11:12 <mark> Raised per-squid connection limit to ms5 from 200 to 400 connections [production]
11:05 <mark> Raised per-squid connection limit to ms5 from 100 to 200 connections [production]
10:55 <mark> Fixed squid loop, the pmtpa.upload squids were using the esams squids as "CARP parents for distant content" [production]
10:29 <mark> Fixed puppet on sq42/43 [production]
09:44 <mark> Lowered FCGI thumb handlers from 90 to 60 again, to reduce concurrency [production]
08:08 <mark> Started 4 more rsyncs (8 total now) [production]
07:49 <mark> Removed mlocate from ms5, puppetising [production]
07:42 <mark> Started 4 rsyncs from ms4 to ms5 (--ignore-existing) [production]
07:32 <mark> increased thumb handler count from 60 to 90 [production]
07:11 <mark> Doubled the amount of fcgi thumb handlers [production]
07:08 <mark> Turned off logging of 404s to nginx error.log [production]
06:50 <mark> Restarted Apache on the image scalers [production]
06:49 <mark> Reconfigured ms5 to use the 404 thumb handler [production]
06:48 <Ryan_Lane> disabling nfs on ms4 [production]
06:33 <mark> Running puppet on all apaches to fix fstab and mount ms5.pmtpa.wmnet:/export/thumbs [production]
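The resulting mount on the apaches would look roughly like this; only the export path and the /mnt/thumbs mount point appear in the log, the mount options are assumptions:

 # NFS mount pushed out via puppet/fstab (options assumed)
 mount -t nfs ms5.pmtpa.wmnet:/export/thumbs /mnt/thumbs -o rw,hard,intr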
06:32 <mark> Unmounting /mnt/thumbs on all mediawiki-installation servers [production]
06:30 <mark> Remounted NFS /mnt/thumbs on the scalers to ms5 [production]
06:28 <Ryan_Lane> bringing nfs back up [production]
06:28 <Ryan_Lane> brought ms4 back up. stopping the web server service and nfs [production]
06:20 <mark> Setup NFS kernel server on ms5 [production]
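A rough sketch of the NFS server setup on ms5, assuming the Debian/Ubuntu nfs-kernel-server package; the export options and client range are assumptions, only the /export/thumbs path comes from the surrounding entries:

 apt-get install nfs-kernel-server
 # hypothetical export line; the real options and allowed clients are not in the log
 echo '/export/thumbs 10.0.0.0/8(rw,sync,no_subtree_check)' >> /etc/exports
 exportfs -ra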
06:18 <Ryan_Lane> powercycling ms4 [production]
05:29 <Ryan_Lane> rebooting ms4 with -d to get a coredump [production]
05:14 <apergos> re-enabling webserver on ms4 for testing [production]
04:45 <apergos> stopping web service on ms4 for the moment [production]