2012-01-19
03:01 <Ryan_Lane> rebooting virt1 to ensure hardware virtualization is enabled in the bios [production]
02:30 <awjrichards> synchronized php/extensions/CongressLookup/SpecialCongressLookup.php '[[rev:109477|r109477]]' [production]
02:29 <awjrichards> synchronized php/extensions/CongressLookup/CongressLookup.i18n.php '[[rev:109477|r109477]]' [production]
02:06 <Ryan_Lane> rebalance of gluster volume completed [production]
02:05 <LocalisationUpdate> completed (1.18) at Thu Jan 19 02:05:55 UTC 2012 [production]
02:05 <Ryan_Lane> rebalancing instance gluster volume. network may get saturated for a while. [production]
01:55 <Ryan_Lane> added virt1 and virt4 to instance volume for gluster [production]
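The three Gluster entries above (add bricks at 01:55, rebalance at 02:05, completion at 02:06) correspond to the usual `gluster` CLI workflow. A dry-run sketch, printing the commands rather than executing them; the volume name `instances` and the brick paths are assumptions, only the hosts virt1/virt4 come from the log:

```shell
# Dry-run sketch: print the gluster commands instead of executing them,
# since they need a live Gluster cluster. Volume/brick names are assumed.
run() { printf '+ %s\n' "$*"; }

# Add the new servers' bricks to the instance volume (01:55 entry)
run gluster volume add-brick instances virt1:/export/instances virt4:/export/instances

# Spread existing data onto the new bricks (02:05 entry); this copy is
# what can saturate the network while the rebalance runs.
run gluster volume rebalance instances start
run gluster volume rebalance instances status   # poll until completed
```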
01:17 <Reedy> Leaving cleanupUploadStash.php running against commonswiki in a screen session as me on hume [production]
01:16 <binasher> removing extra mobile varnish capacity - it wasn't needed [production]
01:15 <awjr> updated zip code/representative data on enwiki to [[rev:109465|r109465]] [production]
01:01 <Ryan_Lane> installed python-argparse on stat1 [production]
00:54 <binasher> running a hot backup of db32, streaming to db52 [production]
00:22 <Ryan_Lane> removing virt1 cname [production]
00:21 <Ryan_Lane> rebuilding virt1 as a nova compute node [production]
00:20 <LeslieCarr> changed vlan for virt1 eth0 [production]
00:18 <Ryan_Lane> cleared lighttpd logs on brewster and restarted squid and lighttpd [production]
00:05 <asher> synchronized wmf-config/db.php 'returning db32 to normal weight' [production]
2012-01-18
23:59 <asher> synchronized wmf-config/db.php 'returning db32 at a low weight' [production]
23:50 <binasher> rebooting db32 for mysql/kernel upgrades [production]
23:49 <asher> synchronized wmf-config/db.php 'pulling db32 from s1 for mysql/kernel upgrades' [production]
23:44 <awjrichards> synchronized php/extensions/CongressLookup/SpecialCongressLookup.php '[[rev:109457|r109457]]' [production]
23:02 <maplebed> increased the size of db11's logical volume for /a from 500G to 800G. [production]
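Growing a logical volume as in the db11 entry above is normally a two-step operation: extend the LV, then grow the filesystem inside it. A dry-run sketch; the VG/LV device path and the use of an ext filesystem are assumptions, only the 500G-to-800G sizes come from the log:

```shell
# Dry-run sketch: print the commands instead of executing them, since
# lvextend/resize2fs need root and the real device. Device path assumed.
run() { printf '+ %s\n' "$*"; }

run lvextend --size 800G /dev/db11/a   # grow the LV from 500G to 800G
run resize2fs /dev/db11/a              # grow the filesystem to fill the LV
```

Running `resize2fs` without a size argument grows the filesystem to the full size of the underlying device, so no number needs to be repeated in the second step.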
22:27 <binasher> enwiki master changed to db36 - MASTER_LOG_FILE='db36-bin.000599', MASTER_LOG_POS=15773827 [production]
22:26 <asher> synchronized wmf-config/db.php 'done swapping s1 master to db36' [production]
22:25 <binasher> swapping s1 master to db36 [production]
22:24 <asher> synchronized wmf-config/db.php 'starting swap of s1 master to db36, s1 in read-only' [production]
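The four entries above record a standard master promotion: put s1 in read-only, let replicas catch up, note the new master's binlog coordinates, repoint replicas, then lift read-only. A sketch of the repoint step, building the statement rather than executing it so it is safe to run anywhere; the binlog file and position come from the 22:27 entry, while feeding the SQL to a particular replica host is an assumption:

```shell
# Coordinates recorded in the log for the new s1 master (db36)
new_master=db36
log_file='db36-bin.000599'
log_pos=15773827

# Emit the CHANGE MASTER statement each replica would need.
change_master_sql() {
  printf "STOP SLAVE; CHANGE MASTER TO MASTER_HOST='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d; START SLAVE;" \
    "$new_master" "$log_file" "$log_pos"
}
change_master_sql
# The statement would then be piped to each replica, e.g.:
#   change_master_sql | mysql -h db52
```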
22:13 <asher> synchronized wmf-config/db.php 'returning db36 to normal weight' [production]
22:07 <asher> synchronized wmf-config/db.php 'returning db36 at a low weight' [production]
21:59 <awjrichards> synchronized php/extensions/CongressLookup/SpecialCongressLookup.php '[[rev:109440|r109440]]' [production]
21:58 <binasher> rebooting db36, upgrading kernel + mysql [production]
21:56 <asher> synchronized wmf-config/db.php 'pulling db36 from s1 for mysql/kernel upgrades' [production]
21:54 <Ryan_Lane> installing python-wurfl on stat1 [production]
21:35 <Ryan_Lane> installing geoip-bin geoip-database libgeoip1 python-geoip on stat1 [production]
21:13 <asher> synchronized wmf-config/db.php 'returning db38 at prior weight' [production]
21:05 <Reedy> Run patch-ug_group-length-increase.sql on all wikis [production]
21:04 <Reedy> Run patch-uploadstash_chunk.sql on all wikis [production]
21:03 <Reedy> Run patch-jobs-add-timestamp.sql on all wikis [production]
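Applying a schema patch on all wikis, as in the three entries above, is typically a loop over the wiki database list feeding each patch to the `sql.php` maintenance script. A dry-run sketch; the wiki list and the exact invocation are assumptions based on standard MediaWiki tooling, while the patch file names come from the log:

```shell
# Dry-run sketch: print what would be run instead of executing it.
run() { printf '+ %s\n' "$*"; }

# In production the list would come from a dblist file, e.g.
#   wikis=$(cat all.dblist)
wikis="enwiki commonswiki"   # small sample for the sketch

for wiki in $wikis; do
  for patch in patch-jobs-add-timestamp.sql \
               patch-uploadstash_chunk.sql \
               patch-ug_group-length-increase.sql; do
    # sql.php applies a SQL file to one wiki's database
    run php maintenance/sql.php --wiki="$wiki" "$patch"
  done
done
```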
20:55 <awjr> update cl_zip5 table for CongressLookup to data in r109408 [production]
20:43 <Reedy> Manually running cleanupUploadStash.php against commonswiki [production]
20:42 <Reedy> Manually ran cleanupUploadStash.php against enwiki [production]
20:31 <binasher> db38 in service at a low weight with new lucid kernel and current mysql build [production]
20:30 <RobH> shutting down db17, confirmed not in db rotation and has no mysql instance active [production]
20:30 <asher> synchronized wmf-config/db.php 'returning db38 at a lower weight' [production]
20:28 <asher> synchronized wmf-config/db.php 'pulling db38 again' [production]
20:26 <asher> synchronized wmf-config/db.php 'returning db38 to service' [production]
20:17 <LeslieCarr> rebooting spence as it's once again gone crazy [production]
20:11 <binasher> pulled db38, rebooting for kernel and mysql upgrades [production]
20:11 <asher> synchronized wmf-config/db.php 'pulling db38 from s1 for upgrade' [production]
20:04 <RobH> mw1102 coming down for mainboard replacement [production]
20:03 <LeslieCarr> killing puppet processes on spence [production]