2013-12-17
18:01 <reedy> synchronized php-1.23wmf7/includes/revisiondelete/RevisionDelete.php 'I0bd4a5fe9687c4261ca0f57e30f723e8bf2589ac' [production]
17:59 <reedy> synchronized php-1.23wmf6/includes/revisiondelete/RevisionDelete.php 'I0bd4a5fe9687c4261ca0f57e30f723e8bf2589ac' [production]
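These "synchronized" entries are produced by the deployment tooling on the deploy host; a minimal sketch of the likely invocation, assuming the sync-file script of that era, with the path and change-id taken from the entry above:

    # Push a single file to the application servers and log the action to the SAL.
    sync-file php-1.23wmf7/includes/revisiondelete/RevisionDelete.php \
        'I0bd4a5fe9687c4261ca0f57e30f723e8bf2589ac'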
16:58 <hashar> Jenkins is back up [production]
16:51 <hashar> Jenkins is busy reloading; it should be back around 5:20pm UTC. Don't kill it in the meantime, it is busy reading a bunch of files :-( [production]
16:27 <hashar> restarting Jenkins (stuck) [production]
16:17 <hashar> Jenkins web service threads are all hung, busy waiting for a long request to complete. Caused by myself :/ [production]
16:10 <hashar> Jenkins is in a bad mood for some unknown reason :( [production]
15:00 <hashar> Manually configured the Jenkins mediawiki-core-phpunit-misc job to be runnable concurrently [production]
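For context, "runnable concurrently" corresponds to the concurrentBuild flag in the Jenkins job definition; a sketch assuming the default job home (normally this is toggled on the job's Configure page):

    # Concurrent execution is a per-job flag in the job's config.xml.
    grep concurrentBuild /var/lib/jenkins/jobs/mediawiki-core-phpunit-misc/config.xml
    # expected after the change: <concurrentBuild>true</concurrentBuild>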
14:11 <manybubbles> the first pass of index building for all wikisources is complete; starting the second pass [production]
13:31 <manybubbles> performing a rolling restart of the elasticsearch cluster to pick up new settings [production]
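A rolling restart bounces one node at a time so the cluster stays available; a sketch of one node's turn, assuming the 0.90-era Elasticsearch settings API (the allocation setting name changed in later releases):

    # Pause shard reallocation so the cluster doesn't rebalance while
    # this node is down.
    curl -XPUT localhost:9200/_cluster/settings \
        -d '{"transient": {"cluster.routing.allocation.disable_allocation": true}}'
    service elasticsearch restart
    # Once the node rejoins, re-enable allocation and wait for green
    # before moving on to the next node.
    curl -XPUT localhost:9200/_cluster/settings \
        -d '{"transient": {"cluster.routing.allocation.disable_allocation": false}}'
    curl 'localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'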
10:27 <mark> Depooled eqiad Squids in PyBal [production]
10:24 <mark> Depooled esams Squids in PyBal [production]
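Depooling in PyBal amounts to flipping the enabled flag in the service's server list, which holds one Python-literal dict per backend; the hostname, weight, and file path below are illustrative assumptions:

    # A backend line looks like:
    #   { 'host': 'sq60.esams.wikimedia.org', 'weight': 10, 'enabled': True }
    # Setting 'enabled' to False takes it out of rotation when PyBal reloads.
    sed -i "/'sq60.esams/s/'enabled': True/'enabled': False/" /etc/pybal/esams/squids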
06:43 <Krinkle> Increased number of Jenkins executors for 'integration-slave01' from 2 to 3 [production]
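Executor count is part of the node's configuration; a sketch assuming it is inspected on disk (the storage path varies by Jenkins version, and the change itself is normally made via Manage Jenkins -> Nodes -> Configure):

    # numExecutors is the per-node executor count.
    grep numExecutors /var/lib/jenkins/nodes/integration-slave01/config.xml
    # expected after the change: <numExecutors>3</numExecutors>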
04:53 <tstarling> synchronized wmf-config/InitialiseSettings.php [production]
02:58 <Krinkle> Reloading Zuul to deploy I4e18cb2dc2a7f4 [production]
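Zuul of this era re-reads its layout on SIGHUP without dropping queued jobs, which is what a "reload" here means; the pidfile path is an assumption:

    # Signal the zuul-server process to reload its configuration.
    kill -HUP "$(cat /var/run/zuul/zuul.pid)"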
02:49 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Dec 17 02:49:29 UTC 2013 [production]
02:30 <LocalisationUpdate> completed (1.23wmf7) at Tue Dec 17 02:30:08 UTC 2013 [production]
02:25 <manybubbles> manually setting elastic1008 to be master-eligible so that we have three master-eligible machines in production until the puppet code that will do this properly is merged [production]
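Master eligibility is the static node.master setting, read at startup, so the manual version of this change looks roughly like the following (config path per the Debian packaging; the pending puppet code would render the same line):

    # Mark the node master-eligible and restart so the setting takes effect.
    echo 'node.master: true' >> /etc/elasticsearch/elasticsearch.yml
    service elasticsearch restart
    # With three master-eligible nodes, discovery.zen.minimum_master_nodes
    # should be 2 to keep a quorum through a single failure.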
02:16 <LocalisationUpdate> completed (1.23wmf6) at Tue Dec 17 02:16:11 UTC 2013 [production]
01:57 <Krinkle> Reloading Zuul to deploy I80dafe3457c65 [production]
00:27 <apergos> elastic1007 failed to come back up after several attempts, a soft DRAC reset, and some more attempts; leaving it in the powered-off state [production]
00:15 <apergos> depooled elastic1007 in pybal [production]
00:07 <apergos> ERROR: Timeout while waiting for server to perform requested power action. (from attempt to powercycle elastic1007) [production]
00:05 <apergos> powercycled elastic1007, which was inaccessible via ssh or the mgmt console [production]
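Power actions on an unreachable host go through its management controller; a sketch assuming IPMI over the mgmt network (the .mgmt naming convention follows WMF practice; the "soft drac reset" mentioned above would be the equivalent racadm racreset on Dell hardware):

    # Power-cycle the wedged host via its BMC, then check the result.
    ipmitool -I lanplus -H elastic1007.mgmt.eqiad.wmnet -U root chassis power cycle
    ipmitool -I lanplus -H elastic1007.mgmt.eqiad.wmnet -U root chassis power status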
2013-12-16
22:55 <Ryan_Lane> rebooting virt0 [production]
22:51 <Ryan_Lane> dist-upgrading virt0 [production]
22:44 <Ryan_Lane> rebooting virt1000 [production]
22:34 <Ryan_Lane> dist-upgrading virt1000 [production]
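The dist-upgrade-then-reboot pairs above follow the standard pattern: the upgrade pulls in held-back packages (kernel included), and the reboot activates them:

    # Apply all pending upgrades, allowing new dependencies in, then reboot.
    apt-get update && apt-get dist-upgrade -y
    reboot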
21:50 <gwicke> updated Parsoid config to use the API cluster directly for most wikis [production]
21:19 <subbu> deployed parsoid 7684df12 [production]
21:13 <mutante> creating wikilovesearth mailing list for bug 52705 [production]
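List creation on a Mailman 2 host is a one-liner; a sketch, assuming lists.wikimedia.org ran Mailman 2 at the time:

    # newlist prompts for the list admin's address and an initial password.
    newlist wikilovesearth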
19:27 <ottomata> stopping puppet on cp1048 to test varnishkafka ganglia module [production]
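Stopping puppet prevents the next scheduled agent run from reverting local test changes; a sketch of the usual disable/enable cycle (the reason string is optional on Puppet 3+):

    puppet agent --disable 'testing varnishkafka ganglia module'
    # ... test the module on cp1048 ...
    puppet agent --enable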
19:05 <Krinkle> Reloading Zuul to deploy I01d349bf21b20ce94 [production]
18:27 <yurik> synchronized php-1.23wmf7/extensions/ZeroRatedMobileAccess/ [production]
18:24 <yurik> synchronized php-1.23wmf6/extensions/ZeroRatedMobileAccess/ [production]
17:41 <manybubbles> synchronized wmf-config/ 'update cirrus configuration' [production]
17:31 <manybubbles> synchronized php-1.23wmf6/extensions/CirrusSearch/ 'update cirrus to master' [production]
17:28 <manybubbles> synchronized php-1.23wmf7/extensions/CirrusSearch/ 'update cirrus to master' [production]
15:08 <mark> Depooled all pmtpa Squids in PyBal [production]
14:48 <hashar> Zuul: made the gate-and-submit pipeline a dependent pipeline. Changes will thus be tested in parallel whenever a repo has several +2'ed changes attempting to land, which should speed up the gating process. See also {{bug|48419}} and {{gerrit|101839}} [production]
14:06 <springle> synchronized wmf-config/db-pmtpa.php [production]
14:06 <paravoid> ganglia-monitor restart on srv*/mw*; gmond bug with swapoff [production]
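A fleet-wide daemon restart like this would go through a mass-execution tool; a sketch using Salt, which WMF ran at the time (the glob targets mirror the srv*/mw* note above):

    # Bounce gmond on all matching hosts to work around the swapoff bug.
    salt 'srv*' cmd.run 'service ganglia-monitor restart'
    salt 'mw*' cmd.run 'service ganglia-monitor restart'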
13:42 <springle> synchronized wmf-config/db-pmtpa.php [production]
13:26 <springle> synchronized wmf-config/db-pmtpa.php [production]
11:28 <springle> synchronized wmf-config/db-pmtpa.php [production]
06:42 <springle> synchronized wmf-config/db-eqiad.php 'repool db1026 after schema changes, LB lowered during warm up' [production]
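"LB lowered" refers to the slave's load-balancer weight in the MediaWiki database config; a sketch of the shape of the change, with the section name and weights purely illustrative:

    # In wmf-config/db-eqiad.php each section maps slaves to read weights;
    # repooling at a reduced weight lets db1026's caches warm up first:
    #   's1' => array(
    #       'db1026' => 50,   // lowered from a nominal 100 during warm-up
    #   ),
    sync-file wmf-config/db-eqiad.php \
        'repool db1026 after schema changes, LB lowered during warm up'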
06:27 <tstarling> synchronized docroot/secure/404.html [production]
05:35 <ori> synchronized wmf-config/InitialiseSettings.php 'If79a9443a: Add MassMessage to ' [production]
05:34 <ori> updated /a/common to {{Gerrit|If79a9443a}}: Add MassMessage to $wgDebugLogGroups [production]
05:34 <ori> synchronized php-1.23wmf6/extensions/MassMessage/MassMessageJob.php 'Iec240623a: Add debug logging for bug 57464' [production]