2011-04-07 §
08:16 <RoanKattouw> I meant srv196, not srv193 [production]
08:15 <RoanKattouw> Deploying UploadWizard for real this time, forgot to svn up first. sync-common-all then clearMessageBlobs.php [production]
08:14 <RoanKattouw> Commenting out srv193 in mediawiki-installation node list because its timeouts take forever [production]
08:10 <RoanKattouw> srv196 is not responding to SSH or syncs from fenari (they time out after a looong time) but Nagios says SSH is fine. Should be fixed or temporarily depooled [production]
08:08 <RoanKattouw> Clearing message blobs [production]
08:07 <catrope> ran sync-common-all [production]
08:04 <RoanKattouw> Scap broke with sudo stuff AGAIN, running sync-common-all [production]
08:01 <RoanKattouw> Running scap to deploy UploadWizard changes [production]
07:11 <apergos> turned em off again, started seeing timeouts. bah [production]
06:39 <apergos> and two more... [production]
06:31 <apergos> restarted two of the 8 rsyncs on ms5, keeping an eye on them [production]
01:31 <domas> added nobarrier to xfs mount options on db32 and db37 [production]
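(For context: disabling XFS write barriers is done with the `nobarrier` mount option, either via `mount -o remount,nobarrier` or persistently in fstab. A minimal sketch; the device and mount point below are hypothetical, not taken from the log:)

```
# /etc/fstab sketch -- device and mount point are assumptions, not db32/db37's actual layout
/dev/sda4  /a  xfs  defaults,noatime,nobarrier  0  0
```

This trades crash safety for write throughput, which is why it is typically only used on hosts with battery-backed RAID caches.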
2011-04-06 §
20:38 <RobH> updated puppet with a svn::client class (rt#721) [production]
20:18 <RobH> pulled wm09schols, wm10schols, and wm10reg out of enabled sites on singer [production]
20:05 <apergos> suspended all rsyncs on ms5, we were seeing nfs timeouts on the renderers all of a sudden [production]
18:50 <apergos> killed morebots and let the restart script start it up again [production]
2011-04-05 §
23:00 <Ryan_Lane> restarting search indexer on searchidx to free space held by deleted logs [production]
22:58 <Ryan_Lane> clearing up some space on searchidx1 [production]
22:20 <notpeter> crammed an etherpad db into db9's mysql hole. [production]
17:57 <Ryan_Lane> restarting llsearchd on all search boxes [production]
17:45 <RoanKattouw> Restarted morebots, running on wikitech as catrope [production]
17:45 <Ryan_Lane> changing the udp log location for search to emery [production]
12:16 <catrope> synchronized php-1.17/wmf-config/InitialiseSettings.php 'Undo $wgForceUIMsgAsContentMsg change on incubator from last night per DannyB' [production]
2011-04-04 §
23:28 <Ryan_Lane> uploading ircecho package to lucid-wikimedia repo, for nagios irc bot [production]
22:22 <Ryan_Lane> upgrading wikimedia-task-appserver package on srv281 [production]
22:22 <Ryan_Lane> uploading new version of wikimedia-task-appserver to lucid-wikimedia repo; merges back in 1.17 changes that were missing [production]
22:04 <RobH> updated noc robots entry in its apache config on fenari [production]
21:58 <Ryan_Lane> srv281 is acting as a temporary scaling server for testing of lucid imagescalers, and to help with thumbs load. [production]
21:27 <Ryan_Lane> depooling srv281 from appservers [production]
21:21 <Ryan_Lane> syncing apaches to get configuration pushed to srv281 [production]
21:17 <Ryan_Lane> rebooting srv281 [production]
21:01 <Ryan_Lane> adding srv281 to rendering cluster in pybal via fenari [production]
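(For context: PyBal server lists are files of one Python dict literal per line, edited on the config host and picked up by the load balancer. A minimal sketch of such an entry; the FQDN and weight below are assumptions for illustration:)

```
# pybal rendering pool sketch -- hostname suffix and weight are hypothetical
{'host': 'srv281.pmtpa.wmnet', 'weight': 10, 'enabled': True}
```

Depooling (as in the 21:27 entry) is then a matter of flipping `'enabled': False` or commenting the line out.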
20:32 <Ryan_Lane> uploading a new version of wikimedia-task-appserver fixing a problem with sync-common [production]
20:13 <catrope> synchronized php-1.17/wmf-config/InitialiseSettings.php 'Add mainpage to $wgForceUIMsgAsContentMsg for incubatorwiki' [production]
19:55 <Ryan_Lane> srv281 successfully ran imagescaler puppet class. ready for testing. [production]
19:47 <Ryan_Lane> adding php5-fss to lucid-wikimedia repo [production]
19:11 <Ryan_Lane> adding wikimedia-task-appserver to lucid-wikimedia repo [production]
18:58 <RobH> bugzilla updates complete [production]
18:50 <RobH> updating bugzilla per rt#718 bz#28409 bz#28402 [production]
18:42 <notpeter> added cname etherpad for hooper.wikimedia.org [production]
18:00 <Ryan_Lane> added the wikimedia-fonts package to lucid-wikimedia repo [production]
17:29 <notpeter> adding self to nagios group. rebooterizing nagios. [production]
05:58 <apergos> cleaned up perms on commons/thumb/a/af, left over from interrupted rsync test last night [production]
05:50 <tstarling> synchronized php-1.17/wmf-config/InitialiseSettings.php 'enabling pool counter on all wikis' [production]
04:12 <tstarling> synchronized php-1.17/wmf-config/InitialiseSettings.php 'enabling PoolCounter on testwiki and test2wiki' [production]
01:22 <Tim> apache CPU overload lasted ~10 mins, v. high backend request rate, don't know cause, seems to have stopped now [production]
2011-04-03 §
18:42 <apergos> 8 rsyncs of ms4 thumbs restarted with better perms so scalers can write... in screen as root on ms5. If we start seeing nfs timeouts in the scaler logs please shoot a couple [production]
17:14 <mark> Deployed max-connections on all cache peers for esams.upload squids to their florida parents (current limit 200) [production]
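(For context: per-peer connection limits in Squid are set with the `max-conn` option on the `cache_peer` directive. A minimal sketch with the limit from the log entry; the hostname and ports below are hypothetical:)

```
# squid.conf sketch -- peer hostname and ports are assumptions
cache_peer sq41.wikimedia.org parent 3128 3130 max-conn=200
```

Capping connections to the Florida parents keeps a slow or overloaded parent from tying up all of an esams squid's backend slots.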
17:00 <mark> Removed the carp weights on the esams backends again, as the weighting was completely screwed up [production]
16:59 <mark> Started knsq13 backend [production]