2012-05-04
23:46 <reedy> synchronized php-1.20wmf2/extensions/GlobalBlocking/GlobalBlocking.class.php [production]
23:45 <reedy> synchronized php-1.20wmf1/extensions/GlobalBlocking/GlobalBlocking.class.php [production]
22:35 <aaron> synchronized php-1.20wmf2/includes/filerepo/backend/FSFileBackend.php 'deployed a807624' [production]
22:34 <LeslieCarr> clearing varnish cache and reloading varnish on mobile [production]
21:14 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
21:13 <reedy> ran sync-common-all [production]
20:18 <catrope> synchronized wmf-config/InitialiseSettings.php 'Fix typo (cswikquote vs cswikiquote)' [production]
20:06 <asher> synchronized wmf-config/db.php 'setting s2 writable' [production]
20:05 <binasher> performing mysql replication steps for s2 master switch to db52 [production]
20:04 <asher> synchronized wmf-config/db.php 'setting s2 read-only, db52 (still ro) as master, db13 removed' [production]
19:49 <asher> synchronized wmf-config/db.php 'setting db52 weight to 0 in prep for making new s2 master' [production]
19:32 <binasher> powering off db24 [production]
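Read bottom-up, the three db.php syncs above form one master-switch sequence: drain db52 (19:49), hold s2 read-only while mastership moves to db52 and db13 is dropped (20:04), then re-enable writes (20:06). A hedged sketch of what those edits might look like in wmf-config/db.php, assuming the usual $wgLBFactoryConf layout; the host list, weights, and comment text here are illustrative, not the actual file:

```php
<?php
// Illustrative fragment only -- the real wmf-config/db.php has many more sections.
$wgLBFactoryConf = array(
	'sectionLoads' => array(
		's2' => array(
			'db52' => 0,    // 19:49: weight 0; first entry is treated as master
			// 'db13' => 200,  // 20:04: old master removed from rotation
			'db53' => 200,  // replica names/weights are made up for this sketch
		),
	),
	// 20:04: the section is held read-only during the replication switch ...
	'readOnlyBySection' => array(
		's2' => 'Master switch in progress',
	),
	// ... and the 20:06 sync simply removes that entry to set s2 writable again.
);
```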
18:08 <LeslieCarr> reloaded mobile varnish caches and purged them [production]
18:02 <Ryan_Lane> gerrit upgrade is done [production]
17:55 <Ryan_Lane> starting gerrit [production]
17:32 <Ryan_Lane> installing gerrit package on manganese [production]
17:28 <Ryan_Lane> adding gerrit 2.3 package to the repo [production]
17:25 <Ryan_Lane> shutting down gerrit so that everything can be backed up [production]
16:45 <apergos> lighty on dataset2 is running under gdb in a screen session as root; if it dies, please leave it alone (or look at it if you want to investigate) [production]
16:26 <notpeter> turning off db30 (former s2 db, still on hardy; will ask asher what to do with it) to test noise in DC [production]
15:50 <mutante> rebooting sq67 (bits) [production]
15:42 <mutante> going through sq7x servers (text), full upgrades [production]
15:32 <notpeter> removing srv281 from rendering pool until we figure out what's going on with it [production]
15:23 <notpeter> putting srv224 back into pybal pool [production]
15:09 <notpeter> removing srv224 from pybal pool for repartitioning [production]
14:56 <notpeter> putting srv223 back into pybal pool [production]
14:50 <mutante> going through sq6x (text), full upgrades [production]
14:08 <notpeter> removing srv223 from pybal pool for repartitioning [production]
14:02 <notpeter> putting srv222 back into pybal pool [production]
13:50 <notpeter> removing srv222 from pybal pool for repartitioning [production]
13:43 <notpeter> putting srv221 back into pybal pool [production]
13:30 <notpeter> removing srv221 from pybal pool for repartitioning [production]
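The srv221–srv224 entries above repeat the same per-host cycle: depool from pybal, repartition while no traffic hits the box, repool. A minimal dry-run sketch of that loop; the action strings are placeholders echoing the log wording, not the actual depooling tooling used in production:

```python
# Dry-run sketch of the per-server repartition cycle seen in the log above.
# The action strings are hypothetical placeholders, not real WMF commands.

def repartition_cycle(hosts):
    """Yield the actions in order, one complete depool/work/repool cycle per host."""
    steps = [
        "remove {h} from pybal pool",    # take traffic off the box first
        "repartition {h}",               # do the disk work while depooled
        "put {h} back into pybal pool",  # restore traffic before the next host
    ]
    for h in hosts:
        for step in steps:
            yield step.format(h=h)

actions = list(repartition_cycle(["srv221", "srv222"]))
for a in actions:
    print(a)
```

Handling one host at a time, as the log shows, keeps pool capacity loss to a single server at any moment.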
13:16 <mutante> going through sq80 to sq86 (upload), full upgrade & reboot [production]
12:56 <mutante> maximum uptime in the sq* group is down to 171 days, so we have about a month for the rest. stopping upgrades for the time being. [production]
12:54 <notpeter> starting script to move /usr/local/apache to /a partition on all remaining non-imagescaler apaches [production]
12:47 <mutante> (just) new kernels & reboot - sq45, sq49 (upload) [production]
12:30 <mark> Sending ALL non-European upload traffic to eqiad [production]
12:23 <mutante> (just) new kernels & reboot - sq63 to sq66 (209 days up) [production]
12:06 <mutante> dist-upgrade & kernel & reboot - sq42, sq43 - rebooting upload squids one by one [production]
11:48 <mutante> powercycling srv266 one more time, but creating an RT ticket for it this time; it already showed a CPU issue once before its recent reinstall [production]
11:13 <apergos> restarted lighty on dataset2 about half an hour ago. stupid case sensitivity [production]
10:02 <apergos> tossed knsq1 through knsq7 from the squid_knams dsh nodegroups file; probably lots more cleanup where that came from [production]