2012-05-06
07:03 <apergos> manually rotates udplogs on locke, copying destined_for_storage3 off to hume:/archive/emergencyfromlocke/ (jeff, this note's for you in particular) [production]
06:36 <apergos> bringing up storage3 with neither /a nor /archive mounted, saw "The disk drive for /archive is not ready yet or not present" etc on boot, waited a long time, finally skipped them [production]
06:12 <apergos> and powercycling the box instead. grrrr [production]
06:05 <apergos> rebooting storage3: we have messages like May 6 05:45:12 storage3 kernel: [465081.410025] Filesystem "dm-0": xfs_log_force: error 5 returned. in the log, and the raid is inaccessible, megacli doesn't run either [production]
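For context on the storage3 entries above, a rough sketch of the kind of checks involved; the megacli invocations and device names are assumptions based on common LSI tooling, not commands taken from the log:
  # error 5 (EIO) from xfs_log_force means the block device under dm-0 has gone away
  dmesg | grep -iE 'xfs|raid|megasas' | tail -n 50
  # query the RAID controller; if this hangs or errors out, the controller itself is suspect
  megacli -AdpAllInfo -aALL
  megacli -LDInfo -Lall -aALL               # logical drive state (Optimal/Degraded/Offline)
  megacli -PDList -aALL | grep -i 'firmware state'
  # only once the array is healthy again: check the filesystem, then remount
  xfs_repair -n /dev/mapper/VG-LV           # dry run first; device name is a placeholder
  mount /a && mount /archive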
02:00 <LocalisationUpdate> failed: git pull of extensions failed [production]
2012-05-05
09:37 <mutante> squids - upgrading in the sq5x range (upload) [production]
08:53 <apergos> disabling mod_compress temporarily for lighttpd on dataset2 (live hack), let's see if that stops it dying. could be an issue similar to http://redmine.lighttpd.net/issues/2391 [production]
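A minimal sketch of what temporarily disabling mod_compress on a Debian-style lighttpd install looks like; the exact config layout on dataset2 isn't recorded here, so treat the helper and paths as assumptions:
  # Debian helper, if the module was enabled that way:
  lighty-disable-mod compress && /etc/init.d/lighttpd force-reload
  # or, as a live hack, comment the module out of the server.modules list in lighttpd.conf:
  #   "mod_compress",
Either way the mod_compress code path stays out of the picture while watching whether the daemon still dies.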
06:45 <mutante> squids - upgrading sq44,48 (upload) [production]
05:23 <mutante> squids - finishing a couple reboots in the sq7x range [production]
03:04 <binasher> rebooting db1006 as well [production]
03:04 <binasher> rebooting db1038, kernel uptime scheduler chaos [production]
02:00 <LocalisationUpdate> failed: git pull of extensions failed [production]
00:21 <reedy> synchronized php-1.20wmf2/extensions/GlobalBlocking/GlobalBlocking.class.php [production]
2012-05-04
23:46 <reedy> synchronized php-1.20wmf2/extensions/GlobalBlocking/GlobalBlocking.class.php [production]
23:45 <reedy> synchronized php-1.20wmf1/extensions/GlobalBlocking/GlobalBlocking.class.php [production]
22:35 <aaron> synchronized php-1.20wmf2/includes/filerepo/backend/FSFileBackend.php 'deployed a807624' [production]
22:34 <LeslieCarr> clearing varnish cache and reloading varnish on mobile [production]
21:14 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
21:13 <reedy> ran sync-common-all [production]
20:18 <catrope> synchronized wmf-config/InitialiseSettings.php 'Fix typo (cswikquote vs cswikiquote)' [production]
20:06 <asher> synchronized wmf-config/db.php 'setting s2 writable' [production]
20:05 <binasher> performing mysql replication steps for s2 master switch to db52 [production]
20:04 <asher> synchronized wmf-config/db.php 'setting s2 read-only, db52 (still ro) as master, db13 removed' [production]
19:49 <asher> synchronized wmf-config/db.php 'setting db52 weight to 0 in prep for making new s2 master' [production]
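The 19:49-20:06 entries are the standard read-only master rotation for s2. As a sketch only (the slave hostname, domain suffix, binlog file and position below are placeholders, not values from the log), the replication step between the two db.php edits looks roughly like:
  # on the old master (db13), with s2 read-only: note the final binlog position
  mysql -h db13 -e 'SHOW MASTER STATUS\G'
  # on each remaining s2 slave, once it has replayed up to that position, repoint it at db52
  mysql -h SLAVE -e "STOP SLAVE;
    CHANGE MASTER TO MASTER_HOST='db52.pmtpa.wmnet',
      MASTER_LOG_FILE='db52-bin.000001', MASTER_LOG_POS=4;
    START SLAVE;"
Only after every slave is replicating from db52 does the final db.php edit flip s2 back to writable.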
19:32 <binasher> powering off db24 [production]
18:08 <LeslieCarr> reloaded mobile varnish caches and purged them [production]
18:02 <Ryan_Lane> gerrit upgrade is done [production]
17:55 <Ryan_Lane> starting gerrit [production]
17:32 <Ryan_Lane> installing gerrit package on manganese [production]
17:28 <Ryan_Lane> adding gerrit 2.3 package to the repo [production]
17:25 <Ryan_Lane> shutting down gerrit so that everything can be backed up [production]
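A rough sketch of the upgrade cycle behind the gerrit entries above; the package name, init script and site path are assumptions, not taken from the log:
  # stop gerrit and snapshot its site directory (and database) before touching anything
  /etc/init.d/gerrit stop
  rsync -a /var/lib/gerrit/ /var/lib/gerrit.pre-2.3/
  # install the 2.3 package, run gerrit's own schema/site upgrade, then bring it back
  apt-get install gerrit
  java -jar /var/lib/gerrit/bin/gerrit.war init -d /var/lib/gerrit
  /etc/init.d/gerrit start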
16:45 <apergos> lighty on dataset2 is running under gdb in screen session as root, if it dies please leave that alone (or look at it if you want to investigate) [production]
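The point of the screen+gdb setup is to catch a backtrace if lighttpd dies again; a sketch of that setup (process-matching details assumed):
  screen -S lighty-gdb                      # detachable session, survives logging out
  gdb -p "$(pgrep -o lighttpd)"             # attach to the running lighttpd process
  (gdb) continue                            # let it run; on a crash: bt full / thread apply all bt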
16:26 <notpeter> turning off db30 (former s2 db, still on hardy, will ask asher what to do with it) to test noise in DC [production]
15:50 <mutante> rebooting sq67 (bits) [production]
15:42 <mutante> going through sq7x servers (text), full upgrades [production]
15:32 <notpeter> removing srv281 from rendering pool until we figure out what's going on with it [production]
15:23 <notpeter> putting srv224 back into pybal pool [production]
15:09 <notpeter> removing srv224 from pybal pool for repartitioning [production]
14:56 <notpeter> putting srv223 back into pybal pool [production]
14:50 <mutante> going through sq6x (text), full upgrades [production]
14:08 <notpeter> removing srv223 from pybal pool for repartitioning [production]
14:02 <notpeter> putting srv222 back into pybal pool [production]
13:50 <notpeter> removing srv222 from pybal pool for repartitioning [production]
13:43 <notpeter> putting srv221 back into pybal pool [production]
13:30 <notpeter> removing srv221 from pybal pool for repartitioning [production]
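The 13:30-15:23 srv22x entries are one depool/repool cycle per host. A sketch of the depool side in PyBal's pool definitions (file location and exact fields assumed):
  # mark the host disabled in the pool file PyBal reads; the weight stays for when it comes back
  {'host': 'srv221.pmtpa.wmnet', 'weight': 10, 'enabled': False}
  # after repartitioning, flip 'enabled' back to True and PyBal repools it on its next config read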
13:16 <mutante> going through sq80 to sq86 (upload), full upgrade & reboot [production]
12:56 <mutante> maximum uptime in the sq* group is down to 171 days, so we have about a month now for the rest. stopping upgrades for the time being. [production]
12:54 <notpeter> starting script to move /usr/local/apache to /a partition on all remaining non-imagescaler apaches [production]
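The script itself isn't in the log; one plausible per-host sequence for the move (paths and the symlink approach are assumptions):
  rsync -a /usr/local/apache/ /a/apache/    # copy the tree onto the larger /a partition
  mv /usr/local/apache /usr/local/apache.old
  ln -s /a/apache /usr/local/apache         # keep the old path working via a symlink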
12:47 <mutante> (just) new kernels & reboot - sq45,sq49 (upload) [production]
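The various "full upgrades" / "new kernels & reboot" squid passes above boil down to the same per-host cycle; a sketch (the depool mechanism is assumed, not shown in the log):
  # take the squid out of rotation first, then:
  apt-get update && apt-get -y dist-upgrade # full upgrade; kernel-only passes install just the new kernel
  reboot
  # once squid is serving again, put the host back into rotation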
12:30 <mark> Sending ALL non-european upload traffic to eqiad [production]