2009-09-16
20:22 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php 'stratwiki api final fix i hope' [production]
20:21 <robh> synchronized php-1.5/wmf-config/CommonSettings.php 'removing private wiki overrides for api use' [production]
20:03 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php 'stratwiki api tinkering' [production]
19:57 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php 'enabling writeapi on stratappswiki' [production]
18:14 <domas> set up 5xx logging (without upload and old query interface) at locke:/a/squid/5xx.log [production]
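A minimal sketch of such a filter, assuming the usual squid access-log layout where the cache/HTTP status appears as a TCP_MISS/503-style field; everything here except the output path is an assumption, not the recorded setup:

    # keep only 5xx responses, dropping upload and the old query interface
    tail -F /var/log/squid/access.log \
      | awk '$4 ~ /\/5[0-9][0-9]$/' \
      | grep -v -e 'upload.wikimedia.org' -e 'query.php' \
      >> /a/squid/5xx.log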
17:59 <brion> synchronized php-1.5/wmf-config/CommonSettings.php [production]
17:57 <aZaFred_> snapshot[1..3] have been puppetized [production]
17:56 <brion> mediawiki-installation group troubles have been worked out. thx rob & fred! [production]
17:56 <brion> synchronized php-1.5/wmf-config/InitialiseSettings.php 'updating w/ config preps for code update' [production]
17:39 <brion> sync system is currently broken. bogus digits (9, 7, 6, 8) and not-quite-set-up snapshot* machines in mediawiki-installation group [production]
17:10 <Rob> removed some outdated security plugins on blogs, updated some others [production]
15:51 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php [production]
15:44 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Updating logo for stratappswiki' [production]
14:54 <mark> Moving traffic back to Europe - Florida squids overloaded [production]
14:39 <mark> Capacity test, DNS scenario knams-down [production]
14:37 <Rob> moved masters from db13 to db15 with some major assistance from Tim (basically he did it himself ;) [production]
14:34 <tstarling> synchronized php-1.5/wmf-config/db.php [production]
14:22 <Rob> script had some issues, Tim is debugging [production]
14:22 <Rob> yep, switching masters because db13's RAID battery is dead. [production]
14:20 <robh> synchronized php-1.5/wmf-config/db.php 'switching masters from db13 to db15' [production]
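A master switch like this pairs the db.php push with steps on the MySQL side; a rough sketch, with the hostnames taken from the log but every command an assumed reconstruction rather than what was actually run:

    mysql -h db13 -e "SET GLOBAL read_only = 1"   # stop writes on the old master
    mysql -h db15 -e "SHOW MASTER STATUS\G"       # note db15's binlog file/position
    # repoint the remaining replicas at db15 via CHANGE MASTER TO,
    # then push wmf-config/db.php so MediaWiki sends writes to db15
    mysql -h db15 -e "SET GLOBAL read_only = 0"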
14:18 <robh> synchronized php-1.5/wmf-config/db.php [production]
2009-09-15
23:13 <brion> applying patch-log_user_text.sql to newly created wikis: mhrwiki strategywiki uawikimedia cowikimedia ckbwiki pnbwiki mwlwiki acewiki trwikinews flaggedrevs_labswikimedia readerfeedback_labswikimedia strategyappswiki strategyappwiki [production]
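A plausible way to run that patch across the list, assuming a plain mysql client invocation per wiki database; whatever wrapper script was actually used isn't recorded here:

    for db in mhrwiki strategywiki uawikimedia cowikimedia ckbwiki pnbwiki \
              mwlwiki acewiki trwikinews flaggedrevs_labswikimedia \
              readerfeedback_labswikimedia strategyappswiki strategyappwiki; do
        # apply the schema patch shipped with MediaWiki to each new wiki's database
        mysql "$db" < maintenance/archives/patch-log_user_text.sql
    done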
23:10 <brion> adding stub l10n_cache table to all wikis [production]
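The stub is presumably the l10n_cache definition from MediaWiki's tables.sql; a sketch of creating it on a single wiki, with column types recalled from core (worth checking against the deployed release) and 'wikidb' a placeholder database name:

    mysql wikidb <<'SQL'
    CREATE TABLE l10n_cache (
      lc_lang  varbinary(32) NOT NULL,   -- language code
      lc_key   varchar(255)  NOT NULL,   -- cache key
      lc_value mediumblob    NOT NULL    -- serialized value
    );
    SQL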
23:02 <brion> checking to confirm log_page/log_user_text update is applied on all wikis [production]
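One way such a check might look, assuming the all.dblist convention for enumerating every wiki database:

    # report any wiki still missing the log_user_text column
    for db in $(cat all.dblist); do
        mysql -N "$db" -e "SHOW COLUMNS FROM logging LIKE 'log_user_text'" \
            | grep -q . || echo "$db: log_user_text missing"
    done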
23:01 <tomaszf> installed memcache on sage.knams [production]
21:31 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php 'changing settings for readerfeedback on stratapps' [production]
21:12 <atglenn> /home NFS-mounted and added to fstab on srv124, the new test.wikipedia [production]
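The mount itself reduces to an fstab entry plus a mount; the NFS server name and options below are illustrative, not the real ones:

    # 'homeserver' stands in for the actual NFS server exporting /home
    echo 'homeserver:/home  /home  nfs  defaults  0 0' >> /etc/fstab
    mount /home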
21:09 <domas> where is my attribution ;-D [production]
21:08 <Rob> test.wikipedia.org fixed by mounting NFS [production]
20:55 <Rob> set up new private wiki, added to dns as well as configuration files [production]
20:38 <robh> ran sync-common-all [production]
19:58 <Rob> servers running wipe were burdening the logging host; added iptables drop rules on db20 to refuse those servers access, since ssh doesn't work while wipe is destroying things [production]
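The rules were presumably of this shape; the source addresses are placeholders for the hosts being wiped:

    # drop all traffic from a host that is mid-wipe and unreachable over ssh
    iptables -I INPUT -s 10.0.0.35 -j DROP
    iptables -I INPUT -s 10.0.0.52 -j DROP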
19:24 <Rob> depooled srv124 to use as test.wikipedia.org, then updated squid config and pushed [production]
19:09 <midom> synchronized php-1.5/wmf-config/db.php 'oh well' [production]
17:14 <Rob> db12 is back online with mysql running [production]
16:41 <Rob> installed python-pyfribidi on pdf1 [production]
16:06 <aZaFred_> snapshot[1..3] wikimedia-task-appserver install completed. Added hosts to dsh nodegroup for mediawiki-installation so common updates get pushed to them [production]
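dsh node groups are plain host lists, so the addition likely amounted to appending three lines; the group file path follows dsh's convention but is an assumption here:

    # add the snapshot hosts to the mediawiki-installation dsh group
    for h in snapshot1 snapshot2 snapshot3; do
        echo "$h" >> /etc/dsh/group/mediawiki-installation
    done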
16:04 <Rob> srv245 bad power supply, swapped with on-site spare [production]
15:59 <Rob> rebooting srv245 to fix its DRAC access [production]
15:33 <Rob> removed some odd invalid entries from the dsh node lists [production]
15:24 <Rob> running wipe on srv35, srv52, srv54, srv56 [production]
15:20 <Rob> srv66 running wipe [production]
15:12 <mark> Started MySQL on ms2 and restarted replication [production]
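A sketch of that sequence; the init script name, and the assumption that ms2 replicates from another host, are mine rather than the log's:

    /etc/init.d/mysql start    # bring MySQL back up on ms2
    mysql -e "START SLAVE"     # resume replication from the master
    mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'   # both should say Yes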
15:11 <mark> Redistributed the spare drives on ms2 back into the spare pool (/dev/md1) [production]
15:08 <Rob> shutting down db12 for RAID battery swap [production]
15:04 <mark> Swapped drive c3t6d0 in ms2, re-added it to /dev/md14 and moved the spare /dev/sdao back into the spare pool (/dev/md1) [production]
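With Linux md, that shuffle is a few mdadm calls; /dev/sdao, /dev/md14, and /dev/md1 are from the log, but /dev/sdX is a placeholder and the exact commands are an assumed reconstruction:

    mdadm /dev/md14 --add /dev/sdX       # sdX: the replacement drive in slot c3t6d0
    mdadm /dev/md14 --remove /dev/sdao   # release the borrowed spare once the rebuild finishes
    mdadm /dev/md1 --add /dev/sdao       # return it to the spare pool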
14:42 <mark> Shutting down MySQL on ms2 [production]
14:33 <Rob> removed a number of decommissioned servers from nagios [production]
14:30 <Rob> wipe running on srv44, srv45, srv47 [production]
14:26 <Rob> srv31, srv32, srv33 running wipe in screen sessions, do not try to use them ;] [production]
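Running a destructive wipe inside a detached screen session keeps it going if the ssh connection drops; something like the following, with the target device and flags purely illustrative:

    # start a detached screen session named 'wipe', quick-wiping the data disk
    screen -dmS wipe wipe -q /dev/sda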