2013-11-01

18:31 <reedy> Finished syncing Wikimedia installation... : testwiki and test2wiki to 1.23wmf2 [production]
18:17 <reedy> Started syncing Wikimedia installation... : testwiki and test2wiki to 1.23wmf2 [production]
18:14 <paravoid> reenabling mw1114-mw1148 [production]
18:12 <reedy> synchronized wmf-config/ [production]
18:08 <paravoid> swapoff on all api appservers [production]
17:53 <reedy> synchronized wmf-config/ [production]
17:26 <paravoid> depooling mw1114-mw1148; balancing is unfair, boxes overloaded, mw1189-mw1208 capable of handling the load [production]
16:01 <apergos> shot one more forceSearchIndex on arsenic because we were back in swap, only one left... [production]
13:51 <hasharEat> gallium: Jenkins console log compression completed, saved 80G out of 500G total disk space. [production]
10:57 <apergos> shot one more forceSearchIndex on arsenic, only two left... (using about 12gb between them) [production]
10:51 <springle> synchronized wmf-config/db-pmtpa.php 'depool first batch of pmtpa boxes to be decommissioned/shipped' [production]
09:08 <hashar> Jenkins: fixing up a race condition in MobileFrontend qunit tests. Both variants were using the same path, and the first job to complete would break the other one by deleting the mediawiki install. {{gerrit|93030}} [production]
08:18 <apergos> shot two of the forceSearchIndex on arsenic, they were using 7gb between them and arsenic was in swapdeath [production]
03:01 <LocalisationUpdate> ResourceLoader cache refresh completed at Fri Nov 1 03:01:11 UTC 2013 [production]
02:15 <LocalisationUpdate> completed (1.23wmf1) at Fri Nov 1 02:15:15 UTC 2013 [production]
01:25 <LeslieCarr> reverted the ulsfo traffic check, due to editors appearing to come from 127.0.0.1 [production]
01:21 <catrope> synchronized wmf-config/CommonSettings.php 'Clean up 127.0.0.1 logging code' [production]
01:11 <catrope> synchronized wmf-config/CommonSettings.php 'Debugging 127.0.0.1 issue: add XFP logging' [production]
01:04 <catrope> synchronized wmf-config/CommonSettings.php 'Debugging 127.0.0.1 issue' [production]
2013-10-31

23:55 <cmjohnson1> dns update [production]
23:42 <mutante> delete dsh groups 'nagios' and 'misc-servers' from tin/fenari. they are gone from puppet and unused (just not actively deleted) [production]
23:33 <cmjohnson1> dns update [production]
23:32 <reedy> synchronized php-1.23wmf1/extensions/MobileFrontend 'touch' [production]
23:31 <reedy> synchronized php-1.23wmf1/resources 'touch' [production]
23:30 <reedy> synchronized wmf-config/ 'touch' [production]
22:31 <RobH> restarted parsoid on wtp1011 [production]
22:17 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Bye bye MW 1.22 [production]
21:38 <demon> Finished syncing Wikimedia installation... : Unbreak cluster. Like most things, the sequel wasn't as good as the original [production]
21:33 <demon> synchronized wmf-config/CommonSettings.php 'Fixing for the last time' [production]
21:33 <demon> synchronized wmf-config/InitialiseSettings.php 'Fixing for the last time' [production]
21:21 <demon> Started syncing Wikimedia installation... : Unbreak cluster. Like most things, the sequel wasn't as good as the original [production]
21:02 <demon> rebuilt wikiversions.cdb and synchronized wikiversions files: [production]
20:55 <demon> rebuilt wikiversions.cdb and synchronized wikiversions files: group0 wikis to 1.23wmf2 [production]
18:31 <cmjohnson1> removing decommissioned servers alsted, amaranth, durant, spence, williams from dsh group misc-servers on tin [production]
18:30 <demon> synchronized wmf-config/ [production]
18:24 <demon> synchronized wmf-config/ 'Proper poolcounter config for Cirrus' [production]
18:20 <hashar> synchronized wmf-config/InitialiseSettings.php 'touch' [production]
18:16 <hashar> synchronized php-1.22wmf22/extensions/PoolCounter/ 'making sure all apaches got all the files.' [production]
18:16 <hashar> synchronized php-1.23wmf1/extensions/PoolCounter/ 'making sure all apaches got all the files.' [production]
18:11 <cmjohnson1> dns update [production]
18:01 <demon> synchronized w/ 'Cluster to known good state' [production]
17:59 <demon> synchronized wmf-config/ 'Cluster to known good state' [production]
16:52 <reedy> synchronized wmf-config/ [production]
16:30 <reedy> synchronized docroot and w [production]
16:29 <reedy> synchronized php-1.23wmf2 'Staging php-1.23wmf2' [production]
14:52 <mark> Sending text (all wikis) traffic from OC to ulsfo [production]
13:51 <mark> synchronized wmf-config/squid.php 'Update cache list with ulsfo text caches' [production]
13:39 <cmjohnson> dns update [production]
12:59 <akosiaris> added python-pbr backported from saucy to apt.wikimedia.org [production]
11:23 <akosiaris> added python-gear package by hashar to apt.wikimedia.org [production]