2011-12-20
10:46 <apergos> ds2 scp to ds1 stalled in the same place, looking into it [production]
09:31 <apergos> should have logged this earlier, prolly about 2 hours ago removed 3 more binlogs from db9, we were getting crowded again. [production]
09:30 <apergos> first attempt at scp from ds2 to ds1 failed after 64gb, nothing useful in log, process on ds1 was hung at "restarting system call"... shot it and running again, from screen as root on ds2. [production]
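A minimal sketch of the retry described above, assuming hypothetical paths and session names (only the hostnames and the "restarting system call" symptom come from the log):

    # on ds1: the previous receiver hung in "restarting system call"; kill it first
    pkill -9 -f 'scp.*xmldatadumps'    # match pattern is hypothetical

    # on ds2, as root: rerun the copy inside a detached screen session
    # so it survives a dropped SSH connection
    screen -dmS ds1copy scp -r /data/xmldatadumps ds1:/data/    # paths hypothetical
    screen -r ds1copy    # reattach later to watch progress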
07:53 <apergos> after some playing around on ms5 (which is responsible for the little io utilization spikes, but I'm done now), thumb cleaner is back at work for what should be its last day [production]
06:07 <Tim> removed mobile1, srv124, srv159, srv183, srv186 from /etc/dsh/group/apaches: not in mediawiki-installation [production]
06:00 <Tim> removed srv162, srv174 from /etc/dsh/group/job-runners: not in puppet jobrunners class [production]
05:57 <Tim> removed srv159, srv183 and srv186 from /etc/dsh/group/job-runners on fenari and stopped mw-job-runner on them, see https://bugzilla.wikimedia.org/show_bug.cgi?id=31576#c23 [production]
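The dsh cleanup above amounts to editing the group files on fenari and stopping the runner on each removed host. A hedged sketch (file locations and the mw-job-runner service name come from the log; everything else is an assumption):

    # on fenari: drop the hosts from the dsh group file
    sudo sed -i -e '/srv159/d' -e '/srv183/d' -e '/srv186/d' /etc/dsh/group/job-runners

    # stop the job runner on each removed host
    dsh -m srv159,srv183,srv186 -- 'sudo /etc/init.d/mw-job-runner stop'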
02:22 <Ryan_Lane> fixed labsconsole. reverted aws-sdk to 1.4 [production]
02:04 <LocalisationUpdate> completed (1.18) at Tue Dec 20 02:07:54 UTC 2011 [production]
00:52 <Ryan_Lane> seems I broke labsconsole :( [production]
00:52 <Ryan_Lane> ran svn up on openstackmanager on virt1 [production]
00:05 <asher> synchronized wmf-config/db.php 'putting db50 into rotation for s6' [production]
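The "synchronized wmf-config/db.php" entries throughout this log are emitted by the deploy tooling. A sketch of the invocation that would produce one, assuming the 2011-era sync-file script and deploy directory on fenari:

    # on fenari, from the common deploy checkout (path assumed)
    cd /home/wikipedia/common
    sync-file wmf-config/db.php 'putting db50 into rotation for s6'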
2011-12-19
23:59 <binasher> started replicating db50 from db47 [production]
22:56 <binasher> resolved apt issues on db13,17 [production]
22:50 <binasher> running a hot xtrabackup of db47 to db50 [production]
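A hot xtrabackup from db47 to db50, followed by the replication start logged above, would look roughly like this. This is a sketch: the stream mode, datadir path, and replication credentials are assumptions; only the hosts come from the log.

    # on db47: stream a non-blocking backup of the running database to db50
    innobackupex --stream=tar ./ | ssh db50 'tar -xif - -C /srv/sqldata'

    # on db50: apply the InnoDB logs so the datadir is consistent
    innobackupex --apply-log /srv/sqldata

    # then point db50 at db47 using the coordinates recorded in
    # xtrabackup_binlog_info (file/position here are placeholders)
    mysql -e "CHANGE MASTER TO MASTER_HOST='db47', MASTER_USER='repl',
              MASTER_LOG_FILE='db47-bin.000123', MASTER_LOG_POS=456;
              START SLAVE;"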
22:17 <RobH> db50/db51 online for asher to deploy into s6 [production]
22:12 <Jeff_Green> deployed two new cron jobs on hume via /etc/cron.d/mw-fundraising-stats, temporary, will puppetize once we see that the script is working properly [production]
21:41 <awjrichards> synchronizing Wikimedia installation... : Deploying ContributionReporting fixes to use summary tables ([[rev:106696|r106696]]), disabling ContributionReporting everywhere except test and foundationwikis [production]
21:25 <asher> synchronized wmf-config/db.php 'pulling db43 which died' [production]
21:13 <apergos> note that dataset1 appears to be keeping time ok [production]
21:12 <apergos> started an scp (to make ds1 work a tiny bit harder) of some files from ds2. running on ds2 in screen session as root. [production]
21:01 <catrope> synchronized php-1.18/extensions/ArticleFeedbackv5/ [production]
20:45 <RoanKattouw> ...on enwiki that is [production]
20:45 <catrope> synchronized wmf-config/InitialiseSettings.php 'Re-enable ArticleFeedbackv5, hopefully fixed now' [production]
20:43 <catrope> synchronized php-1.18/extensions/ArticleFeedbackv5/ [production]
20:36 <catrope> synchronized wmf-config/InitialiseSettings.php '...and disable for now' [production]
20:34 <catrope> synchronized wmf-config/InitialiseSettings.php 'Enable ArticleFeedbackv5 on enwiki' [production]
20:22 <catrope> synchronized php-1.18/extensions/ArticleFeedbackv5/ [production]
20:08 <catrope> synchronized php-1.18/extensions/ClickTracking/ApiClickTracking.php '[[rev:106681|r106681]]' [production]
19:41 <Jeff_Green> dropping several db's from db9 which have already been migrated to fundraisingdb cluster [production]
19:40 <notpeter> powercycling maerlant [production]
19:05 <nikerabbit> synchronized wmf-config/InitialiseSettings.php 'Translate with tables' [production]
18:34 <nikerabbit> synchronized wmf-config/InitialiseSettings.php 'Translate needs tables' [production]
18:28 <nikerabbit> synchronizing Wikimedia installation... : I18ndeploy [[rev:106667|r106667]] and new extensions on mediawiki.org [production]
16:42 <RobH> dataset1 new data partition ready and setup to automount [production]
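Setting the new data partition to automount would come down to an fstab entry. A sketch only: the device, filesystem, and mount point below are all assumptions.

    # /etc/fstab on dataset1: mount the new data partition at boot
    /dev/sdb1  /data  xfs  defaults  0  2

    # verify it mounts cleanly without a reboot
    mount -a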
15:49 <RobH> dataset1 reinstalled and has had puppet run. Now to see if it can keep time [production]
15:46 <RoanKattouw> maerlant is fried, load avg is 500+, linearly increasing since Friday. Rejects SSH login attempts [production]
15:45 <notpeter> restarting indexer on searchidx2 [production]
14:16 <apergos> thumb cleaner to bed for the night... for the last time? [production]
13:15 <mutante> truncated spence.cfg in ./puppet_checks.d/ - it had multiple dupe service definitions for all checks on spence [production]
13:11 <mutante> commented check_job_queue stuff from non-puppetized files on spence (hosts.cfg, conf.php) to get rid of "duplicate definition" now that it's been puppetized [production]
12:35 <mutante> deleted snapshot4 files from /var/lib/puppet/yaml/node and ./yaml/facts on sockpuppet and stafford, they got recreated and fixed puppet run on sn4 [production]
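Clearing stale client state on the puppetmasters, as done for snapshot4 above, is just deleting the cached node and fact YAML so the next agent run regenerates it. A sketch (directory names come from the log; the filename glob is an assumption):

    # on sockpuppet and stafford
    rm /var/lib/puppet/yaml/node/snapshot4*.yaml
    rm /var/lib/puppet/yaml/facts/snapshot4*.yaml
    # the next puppet run on snapshot4 recreates both files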
10:08 <apergos> a few more binlogs on db9 gone. eking out another 12 hours or so [production]
06:57 <apergos> thumb cleaner awake for the day. poor thing, slaving away but soon it will be able to retire [production]
01:57 <LocalisationUpdate> failed (1.18) at Mon Dec 19 02:00:11 UTC 2011 [production]
2011-12-18
16:41 <notpeter> removing about 4G of binlogs from db9. everything more than 24 hours old. [production]
15:12 <apergos> thumb cleaner sleeping it off for the night [production]
07:38 <jeremyb> 17 mins ago <apergos> thumb cleaner to work for the day [production]
01:57 <LocalisationUpdate> failed (1.18) at Sun Dec 18 02:00:04 UTC 2011 [production]
2011-12-17
22:49 <RobH> Anytime db9 hits 98 or 99% someone needs to remove binlogs to bring it back down to 94 or 95% [production]
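The recurring db9 cleanup described here is a MySQL binlog purge. A minimal sketch, assuming the datadir path; the cutoff is whatever brings disk use back to 94-95%:

    # on db9: check disk use, then drop binlogs older than 24 hours
    df -h /var/lib/mysql    # path assumed
    mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 1 DAY;"

PURGE BINARY LOGS has the advantage of keeping the binlog index file consistent, which deleting the files by hand does not.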