2014-05-15
18:52 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikipedias to 1.24wmf4 [production]
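This entry matches the output of the sync-wikiversions deployment script. A minimal sketch of the equivalent invocation, assuming the script takes a free-text log message like the other sync-* tools:

    # Rebuild wikiversions.cdb from the wikiversions source files and
    # push it to the app servers; the quoted text becomes the SAL message.
    sync-wikiversions "Wikipedias to 1.24wmf4"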
18:46 <reedy> Finished scap: testwiki to 1.24wmf5 and build l10n cache (duration: 27m 47s) [production]
18:32 <mutante> mw1053 was already disabled in pybal, though; see RT 7408, 7435 [production]
18:31 <mutante> mw1053 sits at the disk partitioning dialog (via mgmt) [production]
18:29 <Reedy> mw1053 is pingable but not ssh-able [production]
18:18 <reedy> Started scap: testwiki to 1.24wmf5 and build l10n cache [production]
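The Started/Finished pair above is scap logging its own run. A hedged sketch of the invocation that produces it (the quoted message is the operator's):

    # Full scap: sync MediaWiki code to the cluster and rebuild the
    # localisation (l10n) cache, logging start and finish to the SAL.
    scap "testwiki to 1.24wmf5 and build l10n cache"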
17:53 <Jeff_Green> adjusted exim conf on mchenry to route donate.wm.o mail to barium instead of aluminium [production]
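A change like this is typically a small edit to a manualroute router in the exim configuration on the mail relay. The router name and file layout below are hypothetical; only the domain and hosts come from the entry:

    # Hypothetical exim router stanza; previously routed to aluminium.
    donate_route:
      driver = manualroute
      domains = donate.wikimedia.org
      route_list = * barium.wikimedia.org
      transport = remote_smtp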
16:43 <mwalker> disabled qc and set site_offline and maintenance_mode to true on civicrm [production]
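site_offline and maintenance_mode are Drupal variable names (Drupal 6 and 7 respectively). Assuming the CiviCRM instance runs on Drupal with drush available, the change could look like this (the site root path is hypothetical):

    # Put the Drupal/CiviCRM site into maintenance mode.
    drush -r /srv/org.wikimedia.civicrm vset site_offline 1
    drush -r /srv/org.wikimedia.civicrm vset maintenance_mode 1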
15:20 <anomie> synchronized php-1.24wmf4/extensions/MultimediaViewer 'SWAT: Deploy change 133475 to fix bug 65225 in MultimediaViewer' [production]
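The "synchronized <path> ..." entries throughout this log are the standard output of the sync-dir/sync-file deployment scripts. A sketch mirroring the entry above, assuming sync-dir with a free-text log message:

    # Push one extension directory to all app servers; the quoted text
    # is recorded in the SAL.
    sync-dir php-1.24wmf4/extensions/MultimediaViewer \
        "SWAT: Deploy change 133475 to fix bug 65225 in MultimediaViewer"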
14:03 <springle> xtrabackup clone db1056 to db1070 [production]
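The entry does not say how the clone was run; a generic Percona XtraBackup streaming clone between two hosts might look like this (the port, datadir path, and use of netcat are assumptions):

    # On the target (db1070): listen and unpack the incoming stream.
    nc -l -p 9210 | xbstream -x -C /srv/sqldata

    # On the source (db1056): stream a backup across the network.
    innobackupex --user=root --stream=xbstream /tmp | nc db1070 9210

    # Back on db1070: apply the redo log so the datadir is consistent.
    innobackupex --apply-log /srv/sqldata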
13:59 <springle> synchronized wmf-config/db-eqiad.php 'depool db1056 while cloning' [production]
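In this setup, depooling a replica generally means commenting it out of the load array in db-eqiad.php and pushing the file. The array layout and weight below are illustrative only:

    # Illustrative edit to wmf-config/db-eqiad.php: comment the host out
    # of its section's load list, e.g.
    #   'db1056' => 200,   becomes   # 'db1056' => 200, # cloning
    # then push the change to the cluster:
    sync-file wmf-config/db-eqiad.php "depool db1056 while cloning"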
13:44 <cmjohnson1> sodium going down again for a different disk replacement [production]
13:16 <cmjohnson1> shutting down sodium to replace sdb [production]
12:56 <godog> restarting gerrit on ytterbium; clones over https seemingly stuck [production]
11:56 <godog> installed openjdk-7-jdk on ytterbium to attempt a gerrit thread dump [production]
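openjdk-7-jdk ships jstack, the usual tool for taking a thread dump of a running JVM such as Gerrit's (the JRE alone does not include it). The pid-discovery pattern below is an assumption:

    # Find the Gerrit JVM and dump its threads for analysis.
    pgrep -f GerritCodeReview                  # hypothetical match pattern
    jstack -l <gerrit_pid> > /tmp/gerrit-threads.txt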
10:16 <springle> synchronized wmf-config/db-eqiad.php 'depool db1009 for raid tests' [production]
06:44 <springle> synchronized wmf-config/db-eqiad.php 'move s5 api traffic to db1005' [production]
05:20 <springle> synchronized wmf-config/db-eqiad.php 'move s4 commonswiki api traffic to db1042' [production]
04:20 <springle> installed db1073 [production]
03:15 <LocalisationUpdate> ResourceLoader cache refresh completed at Thu May 15 03:14:04 UTC 2014 (duration 14m 3s) [production]
02:27 <LocalisationUpdate> completed (1.24wmf4) at 2014-05-15 02:26:09+00:00 [production]
02:15 <LocalisationUpdate> completed (1.24wmf3) at 2014-05-15 02:14:31+00:00 [production]
2014-05-14
23:42 <mwalker> synchronized wmf-config/InitialiseSettings.php 'Poking settings to try and apply them' [production]
23:29 <mwalker> synchronized visualeditor.dblist 'Another part of {{gerrit|132409}} (visual editor)' [production]
23:27 <K4-713> updated payments from 78cc4285bdeb6eecba3efc75e4a04c8b886561e4 to 5e24b953dcff5305099e152139e6e93daba8aeec [production]
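The payments cluster is deployed from git, so moving between the two commits recorded above could be a fetch-and-reset on each target (the remote name and deploy directory are assumptions):

    # Hypothetical deploy step on a payments host; the hash is the one
    # recorded in the log entry.
    cd /srv/payments && git fetch origin \
        && git reset --hard 5e24b953dcff5305099e152139e6e93daba8aeec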
23:27 <mwalker> synchronized wmf-config/ 'SWAT of {{gerrit|132409}} (visual editor) and {{gerrit|130274}} (abuse filter)' [production]
22:04 <maxsem> synchronized php-1.24wmf3/extensions/MobileFrontend/ 'bug 65042' [production]
22:03 <marktraceur> cscott deployed a Jenkins job change that pushes Parsoid git files to beta labs for versioning purposes [production]
22:03 <maxsem> synchronized php-1.24wmf4/extensions/MobileFrontend/ 'bug 65042' [production]
20:38 <awight> updated crm from 3fd3b94834f94529841ad4a695ecd73c98e487bc to 7a23465e620211739421cce3ad57c62597eb8cc3 [production]
20:32 <bd808> Restarting logstash on logstash1001.eqiad.wmnet due to missing messages from some (all?) logs [production]
19:58 <demon> synchronized wmf-config/InitialiseSettings.php 'No more LQT on wikimania2011wiki' [production]
18:32 <Krinkle> integration-slave1001 had its 8GB root filesystem (/dev/vda1) 100% full; purging /tmp/perf-*.map brought it back to 41% [production]
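A sketch of the diagnosis and cleanup described above (the device name comes from the entry; the rest is the standard df/du/rm routine):

    df -h /                    # /dev/vda1 at 100%
    du -sh /tmp/* | sort -h    # find the biggest offenders
    rm -f /tmp/perf-*.map      # perf's per-pid JIT symbol maps; safe to purge
    df -h /                    # back to ~41% after the purge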
18:25 <Krinkle> integration-slave1001 is having issues writing to disk [production]
17:50 <yurik> synchronized php-1.24wmf4/extensions/ZeroRatedMobileAccess/ [production]
17:47 <yurik> synchronized php-1.24wmf3/extensions/ZeroRatedMobileAccess/ [production]
17:30 <yurik> synchronized wmf-config/CommonSettings.php [production]
15:44 <chasemp> disabling puppet on tungsten to try tweaking carbon settings to reduce queue drops [production]
14:28 <cmjohnson1> mw1053 going down for disk replacement [production]
13:27 <bblack> restarting pybals on lvs300x [production]
12:30 <_joe_> restarted uwsgi on tungsten [production]
09:46 <mark> Started PyBal on lvs300* and established BGP sessions with the routers [production]
09:43 <mark> Set up BGP configuration for lvs300* on cr1-esams and cr2-knams, with elevated MEDs to keep them as last resorts [production]
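cr1-esams and cr2-knams are Juniper routers, so a Junos-style sketch: bumping the MED on routes learned from the new PyBal sessions makes the routers prefer the existing paths and use lvs300* only as a last resort. The group name, neighbor address, policy name, and metric value are all hypothetical:

    # Hypothetical Junos statements on cr1-esams:
    set policy-options policy-statement PYBAL-LAST-RESORT then metric add 50
    set protocols bgp group pybal neighbor 91.198.174.10 import PYBAL-LAST-RESORT
    commit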
04:18 <Tim> deploying apache configuration change with fixes [production]
03:11 <LocalisationUpdate> ResourceLoader cache refresh completed at Wed May 14 03:10:36 UTC 2014 (duration 10m 35s) [production]
03:01 <Tim> reverting apache change [production]
02:53 <Tim> deploying apache configuration change https://gerrit.wikimedia.org/r/106109 [production]
02:26 <LocalisationUpdate> completed (1.24wmf4) at 2014-05-14 02:25:08+00:00 [production]
02:21 <springle> upgrade db1043, rebuild as m3 master [production]
02:14 <LocalisationUpdate> completed (1.24wmf3) at 2014-05-14 02:13:23+00:00 [production]