2013-12-17
21:31 <csteipp> synchronized php-1.23wmf6/includes 'bug58088' [production]
21:30 <csteipp> synchronized php-1.23wmf7/includes 'bug58088' [production]
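
A "synchronized <path>" entry means the named path was pushed from the deployment host to every application server. The real tool was sync-dir from the scap suite; below is only a minimal Python sketch of the pattern, with placeholder hostnames and a plain rsync standing in for the actual implementation.

    #!/usr/bin/env python
    # Hypothetical sketch of a sync-dir-style push: rsync one directory from
    # the deployment host to each app server. Host list, paths, and flags are
    # illustrative assumptions, not the real scap tooling.
    import subprocess

    DEPLOY_ROOT = "/srv/mediawiki"           # assumed staging root
    APP_SERVERS = ["mw1001.example.net",     # placeholder hostnames
                   "mw1002.example.net"]

    def sync_dir(rel_path, message):
        """Push one directory to all app servers, mirroring deletions."""
        for host in APP_SERVERS:
            subprocess.check_call([
                "rsync", "-a", "--delete",
                "%s/%s/" % (DEPLOY_ROOT, rel_path),
                "%s:%s/%s/" % (host, DEPLOY_ROOT, rel_path),
            ])
        print("synchronized %s '%s'" % (rel_path, message))

    if __name__ == "__main__":
        sync_dir("php-1.23wmf6/includes", "bug58088")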
21:23 <reedy> synchronized wmf-config/ 'Enable GWToolset on commonswiki' [production]
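
Enabling an extension on a single wiki is done with a per-wiki flag in wmf-config: a default value plus overrides for individual wikis. The real config lives in PHP (InitialiseSettings.php); the Python model below is purely illustrative, and the flag name is an assumption.

    # Illustrative model of wmf-config-style per-wiki settings resolution.
    SETTINGS = {
        "wmgUseGWToolset": {       # assumed flag name for the GWToolset extension
            "default": False,
            "commonswiki": True,   # the override deployed in this log entry
        },
    }

    def get_setting(name, wiki):
        """Resolve a setting for one wiki: a wiki override wins over the default."""
        values = SETTINGS[name]
        return values.get(wiki, values["default"])

    assert get_setting("wmgUseGWToolset", "commonswiki") is True
    assert get_setting("wmgUseGWToolset", "enwiki") is False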
21:16 <reedy> updated /a/common to {{Gerrit|I2d1c666e1}}: Production configuration for GWToolset [production]
21:14 <reedy> finished scap: Rebuild l10n cache for GWToolset, remove AFT, no wikis moving onto GWToolset at this point (commented out) [production]
20:45 <manybubbles> finished the rolling restart of the elasticsearch cluster. it could have been done more quickly but there was no hurry. [production]
20:45 <reedy> started scap: Rebuild l10n cache for GWToolset, remove AFT, no wikis moving onto GWToolset at this point (commented out) [production]
20:05 <Ryan_Lane> deploying change 102285 to OpenStackManager on virt0 [production]
19:58 <reedy> synchronized php-1.23wmf7/extensions/GWToolset 'Staging' [production]
19:54 <reedy> synchronized php-1.23wmf7/includes/filerepo/LocalRepo.php 'Fix fatal I12513b40453573124e838d54a72a2f9a2d3de338' [production]
19:32 <reedy> synchronized wmf-config/ 'Revert Revert Cross-wiki backlink purging for commons file changes' [production]
19:27 <reedy> synchronized wmf-config/ 'Revert Cross-wiki backlink purging for commons file changes' [production]
19:26 <reedy> updated /a/common to {{Gerrit|Id4124ef28}}: All non wikipedias to 1.23wmf7 [production]
19:13 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: All non wikipedias to 1.23wmf7 [production]
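
The wikiversions files map each wiki database name to the MediaWiki branch it should run; the .cdb file is a compiled constant database built from that list so web servers can look versions up quickly. A simplified Python sketch of the mapping, with a dict standing in for the CDB and illustrative entries consistent with this deploy (non-Wikipedias moving to 1.23wmf7):

    # Simplified sketch of the wikiversions mapping: a plain-text source of
    # "<dbname> <version>" lines compiled into a fast lookup structure.
    WIKIVERSIONS_DAT = """\
    enwiki php-1.23wmf6
    commonswiki php-1.23wmf7
    wikidatawiki php-1.23wmf7
    """

    def parse_wikiversions(text):
        """Build the dbname -> version mapping from the .dat-style text."""
        versions = {}
        for line in text.splitlines():
            dbname, version = line.split()
            versions[dbname] = version
        return versions

    versions = parse_wikiversions(WIKIVERSIONS_DAT)
    assert versions["commonswiki"] == "php-1.23wmf7"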
19:11 <reedy> synchronized wmf-config/ [production]
18:44 <reedy> synchronized wmf-config/ [production]
18:19 <reedy> updated /a/common to {{Gerrit|I5ae36ae21}}: EasyTimeline support for private wikis via img_auth [production]
18:11 <mutante> deleted php-1.22wmf17 from tin per reedy [production]
18:01 <reedy> synchronized php-1.23wmf7/includes/revisiondelete/RevisionDelete.php 'I0bd4a5fe9687c4261ca0f57e30f723e8bf2589ac' [production]
17:59 <reedy> synchronized php-1.23wmf6/includes/revisiondelete/RevisionDelete.php 'I0bd4a5fe9687c4261ca0f57e30f723e8bf2589ac' [production]
16:58 <hashar> Jenkins is back up [production]
16:51 <hashar> Jenkins is busy reloading, should be back around 5:20pm UTC. Don't kill it in the meantime; it is busy reading a bunch of files :-( [production]
16:27 <hashar> restarting Jenkins (stuck) [production]
16:17 <hashar> Jenkins web service threads are all hung, busy waiting for a long request to complete. Caused by myself :/ [production]
16:10 <hashar> Jenkins is in a bad mood for some unknown reason :( [production]
15:00 <hashar> manually configured the Jenkins job mediawiki-core-phpunit-misc to be runnable concurrently [production]
14:11 <manybubbles> the first pass of index building for all wikisources is complete, starting the second pass [production]
13:31 <manybubbles> performing a rolling restart of the elasticsearch cluster to pick up new settings [production]
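
A rolling restart cycles through the cluster one node at a time so search stays available while each node picks up the new settings. Below is a minimal Python sketch of that loop, assuming the requests library; hostnames, the ssh restart command, and the shard-allocation setting name are assumptions (allocation settings in particular differ across Elasticsearch versions).

    # Sketch of a rolling Elasticsearch restart: for each node, pause shard
    # allocation, restart the service, re-enable allocation, wait for green.
    import subprocess
    import time

    import requests

    NODES = ["elastic1001.example.net", "elastic1002.example.net"]  # placeholders

    def set_allocation(node, enabled):
        """Toggle shard allocation cluster-wide via the settings API."""
        requests.put(
            "http://%s:9200/_cluster/settings" % node,
            json={"transient": {"cluster.routing.allocation.enable":
                                "all" if enabled else "none"}},
        ).raise_for_status()

    def wait_for_green(node):
        """Poll cluster health until all shards are allocated again."""
        while True:
            health = requests.get("http://%s:9200/_cluster/health" % node).json()
            if health["status"] == "green":
                return
            time.sleep(10)

    for node in NODES:
        set_allocation(node, False)        # stop shards shuffling during restart
        subprocess.check_call(["ssh", node, "sudo", "service",
                               "elasticsearch", "restart"])
        set_allocation(node, True)         # let shards reallocate
        wait_for_green(node)               # recover fully before the next node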
10:27 <mark> Depooled eqiad Squids in PyBal [production]
10:24 <mark> Depooled esams Squids in PyBal [production]
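
PyBal reads pool membership from per-service host lists; depooling a host is typically done by flipping its 'enabled' flag to False and letting PyBal pick up the change. PyBal's classic config format is one Python dict literal per line, along these lines (hostnames and weights are placeholders, not the real squid pool):

    # Illustrative PyBal-style server list: one Python dict literal per host.
    # 'enabled': False depools the host from the load balancer.
    {'host': 'sq31.esams.example.net', 'weight': 10, 'enabled': False}
    {'host': 'sq32.esams.example.net', 'weight': 10, 'enabled': False}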
06:43 <Krinkle> Increased number of Jenkins executors for 'integration-slave01' from 2 to 3 [production]
04:53 <tstarling> synchronized wmf-config/InitialiseSettings.php [production]
02:58 <Krinkle> Reloading Zuul to deploy I4e18cb2dc2a7f4 [production]
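
Reloading Zuul re-reads its layout configuration without dropping queued changes; in Zuul of this era that was typically triggered by sending the server process a SIGHUP. A small Python sketch, where the pidfile path is an assumption:

    # Sketch: reload a daemon's configuration by sending SIGHUP, reading the
    # pid from a conventional pidfile. The path is an assumed location.
    import os
    import signal

    PIDFILE = "/var/run/zuul/zuul.pid"  # assumption

    with open(PIDFILE) as f:
        pid = int(f.read().strip())

    os.kill(pid, signal.SIGHUP)  # ask the running server to re-read its layout
    print("sent SIGHUP to zuul (pid %d)" % pid)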
02:49 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Dec 17 02:49:29 UTC 2013 [production]
02:30 <LocalisationUpdate> completed (1.23wmf7) at Tue Dec 17 02:30:08 UTC 2013 [production]
02:25 <manybubbles> manually set elastic1008 to be master-eligible so we have three master-eligible machines in production until the puppet code that does this properly is merged [production]
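
In Elasticsearch of this era, master eligibility was a per-node setting (node.master: true in elasticsearch.yml); keeping at least three master-eligible nodes lets the cluster retain a quorum if one fails. A hedged Python sketch that counts master-eligible nodes via the nodes info API; the "roles" field follows more recent Elasticsearch (2013-era versions exposed this differently), and the hostname is a placeholder:

    # Sketch: count master-eligible nodes via the nodes info API.
    import requests

    resp = requests.get("http://elastic1001.example.net:9200/_nodes")
    resp.raise_for_status()
    nodes = resp.json()["nodes"]

    eligible = [info["name"] for info in nodes.values()
                if "master" in info.get("roles", [])]
    print("%d master-eligible nodes: %s" % (len(eligible), ", ".join(eligible)))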
02:16 <LocalisationUpdate> completed (1.23wmf6) at Tue Dec 17 02:16:11 UTC 2013 [production]
01:57 <Krinkle> Reloading Zuul to deploy I80dafe3457c65 [production]
00:27 <apergos> elastic1007 failed to come back up after several attempts, a soft DRAC reset, and some more attempts; leaving it powered off [production]
00:15 <apergos> depooled elastic1007 in pybal [production]
00:07 <apergos> ERROR: Timeout while waiting for server to perform requested power action. (from attempt to powercycle elastic1007) [production]
00:05 <apergos> powercycled elastic1007; it was inaccessible via ssh and the mgmt console [production]
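
When a host is unreachable over both ssh and its management console, the remaining lever is an out-of-band power cycle through the BMC (DRAC on Dell hardware), for example with ipmitool. A hedged Python wrapper sketch; the management hostname pattern, user, and password file location are assumptions:

    # Sketch: out-of-band power cycle via IPMI. "chassis power cycle" is a
    # real ipmitool subcommand; the mgmt DNS convention, user, and password
    # file are illustrative assumptions.
    import subprocess

    def powercycle(host, user="root", password_file="/etc/ipmi_password"):
        """Power cycle a host through its BMC over the IPMI LAN interface."""
        mgmt = "%s.mgmt.example.net" % host   # assumed management DNS pattern
        subprocess.check_call([
            "ipmitool", "-I", "lanplus",
            "-H", mgmt, "-U", user, "-f", password_file,
            "chassis", "power", "cycle",
        ])

    powercycle("elastic1007")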