2012-02-03
19:23 <asher> synchronized wmf-config/db.php 'pulling dbs 13,18,25,26 for upgrades' [production]
19:11 <RobH> manutius installed and ready for use [production]
17:26 <RobH> updated dns for manutius.mgmt [production]
17:15 <reedy> synchronized wmf-config/InitialiseSettings.php 'touch' [production]
17:11 <reedy> synchronized wmf-config/InitialiseSettings.php 'touch' [production]
16:08 <RobH> db41 being reinstalled, appears down but logging to be safe [production]
15:20 <mark> Around 14:50 UTC, removed the 3 remaining esams upload squids in the knsq8-15 range from the config. This made ms5 unhappy. [production]
15:13 <reedy> synchronized wmf-config/db.php 'Add comment that db40 is parsercache' [production]
13:53 <mutante> resetting stats on new wikis per bz 34184: updateArticleCount.php vepwiki --update; updateArticleCount.php pnbwiktionary --update [production]
13:42 <mark> Disabled knsq1-15 in PyBal, preparing for decommissioning [production]
03:53 <maplebed> moved all the individual puppet files out of place, stopped nagios, and re-ran puppet (at now minus 1.5hrs) [production]
02:24 <LocalisationUpdate> completed (1.18) at Fri Feb 3 02:24:54 UTC 2012 [production]
00:57 <K4-713> re-enabled the donations queue consumer via Jenkins [production]
00:42 <K4-713> updated production civicrm to [[rev:1293|r1293]] [production]
00:23 <asher> synchronized wmf-config/db.php 'moving watchlist/recentchanges back to db12, returning db24 to s2' [production]
00:09 <K4-713> Disabled donations queue consumption on aluminium [production]
2012-02-02
23:51 <K4-713> updated production civicrm to [[rev:1291|r1291]] [production]
23:44 <binasher> db12 back up with lucid + current mysql [production]
23:32 <binasher> rebooting db12 [production]
23:08 <asher> synchronized wmf-config/db.php 'pulling db12 from enwiki, temporarily moving watchlist/recentchanges to db54' [production]
23:02 <pgehres> K4-713 synchronized production CiviCRM to [[rev:1288|r1288]] on Aluminium [production]
22:59 <binasher> db24 upgraded to lucid and current mysql build [production]
22:52 <binasher> rebooted db24 [production]
22:44 <reedy> synchronized wmf-config/ 'Disable VariablePage completely' [production]
22:26 <binasher> pulled db24 from s2, preparing to upgrade to lucid [production]
22:19 <asher> synchronized wmf-config/db.php 'pulling db24 from s2 for upgrade' [production]
21:37 <apergos> started rsync from dataset2 to dataset1001 in screen session as root on dataset1001 [production]
21:07 <reedy> synchronized wmf-config/InitialiseSettings.php 'Drop FundraiserPortal config' [production]
21:07 <reedy> synchronized wmf-config/CommonSettings.php 'Drop FundraiserPortal config' [production]
21:06 <RobH> dataset1001 is alive, mostly [production]
19:15 <asher> synchronized wmf-config/db.php 'raising db55 weight' [production]
19:08 <asher> synchronized wmf-config/db.php 'add db55 - new s5 slave' [production]
18:06 <notpeter> doing initial run of puppet on cp1001-1020 [production]
17:33 <notpeter> reimaging cp1002 and imaging cp1001 and cp1003-1020 [production]
16:06 <cmjohnson1> disk 15 swap complete on db11 [production]
16:05 <cmjohnson1> replacing disk 15 on db11 [production]
15:55 <mark> Running apt-get update && apt-get dist-upgrade && reboot on lvs1 [production]
15:40 <mark> Running apt-get update && apt-get dist-upgrade && reboot on lvs2 [production]
15:10 <reedy> synchronized php-1.18/extensions/CodeReview/api/ '[[rev:110574|r110574]]' [production]
14:21 <hashar> synchronized php-1.18/includes/UserMailer.php 'work around [[bugzilla:34158|bug 34158]]' [production]
14:19 <catrope> synchronized php-1.18/extensions/LocalisationUpdate/LocalisationUpdate.class.php '[[rev:110570|r110570]]' [production]
14:10 <RoanKattouw> Finally fixed ownership of cache/l10n on scalers; sync-l10nupdate only throws the expected errors, no more perms errors on the scalers [production]
14:09 <RoanKattouw> Scalers now have disk space available because php-1.17-test is gone [production]
13:59 <catrope> synchronizing Wikimedia installation... : Deleted php-1.17-test on fenari, running scap to delete it on the Apaches as well [production]
13:49 <RoanKattouw> Deleting /home/wikipedia/common/php-1.17-test, has been unused for a long time [production]
13:45 <RoanKattouw> Deleting /tmp/mw-cache-1.17 on srv219 and srv223 [production]
13:44 <RoanKattouw> srv219-224 have a full disk according to rsync [production]
13:38 <RoanKattouw> Fixing ownership of /usr/local/apache/common-local/php-1.18/cache/l10n on srv191, srv199, srv219-224 [production]
13:35 <RoanKattouw> Running sync-l10nupdate again to investigate rsync errors [production]
13:34 <LocalisationUpdate> completed (1.18) at Thu Feb 2 13:34:53 UTC 2012 [production]