2011-12-21
03:53 <LocalisationUpdate> completed (1.18) at Wed Dec 21 03:56:58 UTC 2011 [production]
03:48 <Tim> doing a manual run of l10nupdate to check recache timings [production]
03:27 <tstarling> synchronized php-1.18/includes/LocalisationCache.php '[[rev:106927|r106927]]' [production]
02:40 <tstarling> synchronized wmf-config/InitialiseSettings.php 'LC recache log' [production]
02:38 <tstarling> synchronized php-1.18/includes/LocalisationCache.php '[[rev:106922|r106922]]' [production]
02:03 <LocalisationUpdate> completed (1.18) at Wed Dec 21 02:06:08 UTC 2011 [production]
01:51 <reedy> synchronized php-1.18/resources/mediawiki 'creating empty mediawiki.debug.css/js' [production]
01:50 <K4-713> synchronized payments cluster to [[rev:106917|r106917]] [production]
01:16 <K4-713> synchronized payments cluster to [[rev:106909|r106909]] [production]
2011-12-20
23:57 <Ryan_Lane> re-added the /dev/sda2 partition on streber; it had somehow been deleted, borking the raidset [production]
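For reference, a minimal sketch of how a dropped partition is normally put back into a Linux software RAID mirror. The array name, the surviving disk and the exact commands are assumptions; the entry only records the outcome.
 sfdisk -d /dev/sdb | sfdisk /dev/sda    # copy the partition table back from the intact mirror disk (disk names assumed)
 mdadm /dev/md1 --re-add /dev/sda2       # return the recreated partition to the degraded array (array name assumed)
 cat /proc/mdstat                        # watch the resync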
23:20 <Ryan_Lane> rebooting streber [production]
23:00 <LeslieCarr> creating a new logical volume on streber called syslog for syslog-ng purposes [production]
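A hedged sketch of what creating that volume typically looks like; only the LV name "syslog" and the host come from the entry, while the volume group, size and mount point are illustrative.
 lvcreate -n syslog -L 50G vg0              # new logical volume for syslog-ng output (VG name and size assumed)
 mkfs.ext3 /dev/vg0/syslog                  # filesystem choice assumed
 mount /dev/vg0/syslog /var/log/syslog-ng   # mount point assumed; add to /etc/fstab to persist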
21:08 <awjr> synchronizing CiviCRM instance on grosley and aluminium to [[rev:1037|r1037]] [production]
19:23 <reedy> synchronized php-1.18/extensions/CentralAuth/ '[[rev:106840|r106840]]' [production]
19:14 <reedy> synchronized php-1.18/extensions/Contest/ '[[rev:106838|r106838]]' [production]
17:05 <mutante> spence: according to [http://nagios.manubulon.com/traduction/docs25en/tuning.html] we should even double that if we have "high latency values (> 10 or 15 seconds)", and ours are > 1000 [production]
17:04 <mutante> spence: check out "nagios -s /etc/nagios/nagios.cfg" for performance data - it suggests "Value for 'max_concurrent_checks' option should be >= 1231" [production]
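The tuning in the two entries above comes down to one nagios.cfg setting plus a reload; a hedged sketch, where the 2500 figure only illustrates "at least 1231, then doubled for high latency" and is not what was actually applied:
 # /etc/nagios/nagios.cfg
 max_concurrent_checks=2500
 # validate the config, then reload the daemon
 nagios -v /etc/nagios/nagios.cfg
 /etc/init.d/nagios reload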
16:56 <Jeff_Green> manually rotated spence:/var/log/nagios/nagios.log because nagios log rotation appears broken and the file is ~2.6G [production]
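A hedged sketch of that manual rotation, assuming the daemon is restarted afterwards so it reopens the file; Nagios normally rotates this log itself via log_rotation_method, so that setting is worth checking too:
 mv /var/log/nagios/nagios.log /var/log/nagios/nagios.log.old
 gzip /var/log/nagios/nagios.log.old
 /etc/init.d/nagios restart                        # make nagios reopen a fresh nagios.log
 grep log_rotation_method /etc/nagios/nagios.cfg   # built-in rotation should normally handle this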
16:43 <catrope> synchronized php-1.18/resources/startup.js 'touch' [production]
16:32 <catrope> synchronized wmf-config/InitialiseSettings.php 'Underscores -> spaces in wmgArticleFeedbackBlacklistCategories' [production]
16:14 <apergos> restarting scp on ds2; it seems the connection renegotiates after 64GB and that was failing, fixed [production]
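scp runs over ssh, which rekeys the session after a set amount of data, so a stall at a fixed byte count points at the renegotiation; a hedged sketch of two common workarounds for copies this size (paths and option values are illustrative, the entry does not say what was actually changed):
 scp -o RekeyLimit=2G -o ServerAliveInterval=30 /data/dump.tar ds1:/data/   # adjust the rekey threshold and keep the session alive
 rsync --partial --append --progress -e ssh /data/dump.tar ds1:/data/       # or resume the interrupted copy instead of restarting from zero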
15:25 <apergos> thumbs cleaner on ms5 complete. (don't worry, a new job will start up tomorrow) [production]
15:16 <mutante> installing security upgrades on tarin (includes perl and php) [production]
14:10 <apergos> another couple binlogs gone on ds9 [production]
13:41 <mutante> added testswarm package to repo and installed it on gallium [production]
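A hedged sketch of that two-step process, assuming a reprepro-managed APT repository; the repository path and distribution are assumptions, only the package name and target host come from the entry:
 reprepro -b /srv/wikimedia includedeb lucid testswarm_*.deb   # on the repo host: add the package to the distribution
 apt-get update && apt-get install testswarm                   # on gallium: pick it up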
13:15 <catrope> synchronized wmf-config/InitialiseSettings.php 'Use the correct interwiki prefix' [production]
13:14 <catrope> synchronized wmf-config/InitialiseSettings.php 'Configure $wgImportSources on en_labswikimedia' [production]
12:59 <catrope> synchronized wmf-config/InitialiseSettings.php 'Whitelist Category:Article_Feedback_5_Additional_Articles for AFTv5 and blacklist it for AFTv4 on enwiki and en_labswikimedia' [production]
12:57 <catrope> synchronized php-1.18/extensions/ArticleFeedbackv5/ '[[rev:106794|r106794]]' [production]
11:21 <nikerabbit> synchronized php-1.18/extensions/WebFonts/resources/ext.webfonts.css 'bugfix [[rev:106781|r106781]]' [production]
11:20 <nikerabbit> synchronized php-1.18/extensions/Narayam/resources/ext.narayam.core.css 'bugfix [[rev:106781|r106781]]' [production]
10:46 <apergos> ds2 scp to ds1 stalled in the same place, looking into it [production]
09:31 <apergos> should have logged this earlier: probably about 2 hours ago I removed 3 more binlogs from db9, as we were getting crowded again. [production]
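When binlogs are dropped to free space, the safe route is PURGE on the master rather than deleting files, so the binlog index stays consistent; a hedged sketch (the entry does not say which method was used, and the file name below is illustrative):
 mysql -e "SHOW BINARY LOGS"                        # see what is taking the space
 mysql -e "SHOW SLAVE HOSTS"                        # make sure no slave still needs the older logs
 mysql -e "PURGE BINARY LOGS TO 'db9-bin.000123'"   # drop everything older than this file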
09:30 <apergos> first attempt at scp from ds2 to ds1 failed after 64GB; nothing useful in the log, the process on ds1 was hung at "restarting system call"... shot it and it's running again, from screen as root on ds2. [production]
07:53 <apergos> after some playing around on ms5 (which is responsible for the little io utilization spikes, but I'm done now), the thumb cleaner is back at work for what should be its last day [production]
06:07 <Tim> removed mobile1, srv124, srv159, srv183, srv186 from /etc/dsh/group/apaches: not in mediawiki-installation [production]
06:00 <Tim> removed srv162, srv174 from /etc/dsh/group/job-runners: not in puppet jobrunners class [production]
05:57 <Tim> removed srv159, srv183 and srv186 from /etc/dsh/group/job-runners on fenari and stopped mw-job-runner on them, see https://bugzilla.wikimedia.org/show_bug.cgi?id=31576#c23 [production]
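The dsh group files involved are plain one-hostname-per-line lists, so the cleanup above amounts to editing them and stopping the runners; a hedged sketch of equivalent commands (host names from the entries, everything else illustrative):
 sed -i '/^srv159$/d; /^srv183$/d; /^srv186$/d' /etc/dsh/group/job-runners     # drop the decommissioned hosts
 dsh -g job-runners -M -- uptime                                               # confirm every remaining host still answers
 for h in srv159 srv183 srv186; do ssh $h /etc/init.d/mw-job-runner stop; done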
02:22 <Ryan_Lane> fixed labsconsole. reverted aws-sdk to 1.4 [production]
02:04 <LocalisationUpdate> completed (1.18) at Tue Dec 20 02:07:54 UTC 2011 [production]
00:52 <Ryan_Lane> seems I broke labsconsole :( [production]
00:52 <Ryan_Lane> ran svn up on openstackmanager on virt1 [production]
00:05 <asher> synchronized wmf-config/db.php 'putting db50 into rotation for s6' [production]
2011-12-19
23:59 <binasher> started replicating db50 from db47 [production]
22:56 <binasher> resolved apt issues on db13,17 [production]
22:50 <binasher> running a hot xtrabackup of db47 to db50 [production]
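A hedged sketch of how a hot XtraBackup copy is typically streamed to the new slave and then pointed at the master; the data directory, credentials and binlog coordinates are placeholders, not what was actually run:
 innobackupex --stream=tar /tmp | ssh db50 'tar -x -i -f - -C /a/sqldata'   # on db47: stream a consistent hot copy (datadir assumed)
 innobackupex --apply-log /a/sqldata                                        # on db50: replay the redo log
 chown -R mysql:mysql /a/sqldata && /etc/init.d/mysql start
 cat /a/sqldata/xtrabackup_binlog_info                                      # master coordinates to plug in below
 mysql -e "CHANGE MASTER TO MASTER_HOST='db47', MASTER_USER='repl', MASTER_LOG_FILE='db47-bin.000001', MASTER_LOG_POS=107; START SLAVE"   # values illustrative; credentials omitted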
22:17 <RobH> db50/db51 online for asher to deploy into s6 [production]
22:12 <Jeff_Green> deployed two new cron jobs on hume via /etc/cron.d/mw-fundraising-stats, temporary, will puppetize once we see that the scripts are working properly [production]
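For context, files in /etc/cron.d carry a user field that a normal crontab does not; a hedged illustration of what /etc/cron.d/mw-fundraising-stats could look like (schedule, user and script paths are all assumptions):
 # m  h  dom mon dow  user      command
 */15 *  *   *   *    www-data  php /usr/local/bin/fundraising_stats.php >> /var/log/fundraising-stats.log 2>&1
 30   0  *   *   *    www-data  php /usr/local/bin/fundraising_rollup.php >> /var/log/fundraising-stats.log 2>&1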
21:41 <awjrichards> synchronizing Wikimedia installation... : Deploying ContributionReporting fixes to use summary tables ([[rev:106696|r106696]]), disabling ContributionReporting everywhere except test and foundationwikis [production]
21:25 <asher> synchronized wmf-config/db.php 'pulling db43 which died' [production]