2016-02-09
14:53 <godog> reboot ms-be1004, xfs hosed [production]
14:51 <hashar> Cutting branches 1.27.0-wmf.13 [production]
14:46 <elukey> re-enabled puppet on mc1004.eqiad [production]
14:45 <bblack> resuming cpNNNN rolling kernel reboots [production]
14:41 <_joe_> setting mw1026-1050 as inactive in the appservers pool (T126242) [production]
13:58 <hashar> shutting down jenkins finally, and restarting it [production]
13:51 <hashar> Restarting Jenkins. It cannot manage to add slaves [production]
13:15 <paravoid> upgrading lvs1001/lvs1007/lvs1002/lvs1008/lvs1003/lvs1009 to 4.4.0 [production]
13:11 <akosiaris> reboot serpens to apply memory increase of 2G [production]
13:07 <paravoid> installing linux 4.4.0 on lvs1001 [production]
13:01 <hashar> Jenkins disabled again :( [production]
12:53 <akosiaris> reboot seaborgium to apply memory increase of 2G [production]
12:47 <hashar> Updated faulty script that caused 'php' to loop infinitely. Jenkins back up. [production]
12:36 <hashar> Jenkins no longer accepts new jobs until the slaves are fixed :/ [production]
12:33 <hashar> all CI slaves looping to death because of a php loop [production]
11:43 <paravoid> upgrading lvs2001, lvs2002, lvs2003 to kernel 4.4.0 [production]
11:36 <paravoid> reverting lvs2005 to 3.19 and rebooting, test is over and was successful [production]
11:19 <paravoid> stopping pybal on lvs2002 [production]
11:05 <paravoid> installing linux-image-4.4.0 on lvs2005 and rebooting for testing [production]
10:53 <apergos> salt minions on labs instances that respond to labcontrol1001 will be coming back up over the next half hour as puppet runs (salt master key fixes) [production]
10:45 <elukey> disabled puppet, redis and memcached on mc1004 for jessie migration [production]
10:33 <_joe_> pybal updated everywhere [production]
10:32 <gehel> elasticsearch codfw: cleanup leftover logs /var/log/elasticsearch/*.[2-7] [production]
10:24 <gehel> elasticsearch eqiad: cleanup leftover logs /var/log/elasticsearch/*.[2-7] [production]
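The two gehel entries above remove rotated Elasticsearch logs matching /var/log/elasticsearch/*.[2-7]. The exact command used is not recorded in the log; a minimal Python sketch of that kind of cleanup, assuming the matching files can simply be deleted, would be:

    #!/usr/bin/env python3
    # Sketch of the cleanup described above; the real command gehel ran is not in the SAL.
    import glob
    import os

    # Same glob as the log entries: rotated logs whose names end in .2 through .7
    for path in glob.glob("/var/log/elasticsearch/*.[2-7]"):
        print("removing", path)
        os.remove(path)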
10:09 <_joe_> upgrading pybal on active nodes in esams and eqiad [production]
10:04 <_joe_> depooling elastic1021.eqiad.wmnet as RAM has failed [production]
09:56 <jynus> running table engine conversion script on db1069 (potential small lag on labs for 1 day) [production]
09:40 <moritzm> restarted cassandra-a service on praseodymium [production]
09:21 <ema> restarted hhvm on mw1132 [production]
08:49 <_joe_> installing the new pybal package in esams and eqiad backups [production]
08:23 <moritzm> restarted cassandra-a service on praseodymium [production]
07:11 <_joe_> manually touched (with -h) the wmf-config/PrivateSettings.php symlink on all mw* hosts [production]
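The 07:11 entry uses touch -h, which updates the timestamp of the symlink itself rather than of its target. A rough single-host Python equivalent, where the /srv/mediawiki path and the fleet-wide execution mechanism are assumptions not recorded in the log:

    import os

    # Assumed path of the PrivateSettings.php symlink on an mw* host.
    link = "/srv/mediawiki/wmf-config/PrivateSettings.php"

    # follow_symlinks=False bumps the mtime of the link itself,
    # which is what `touch -h` does.
    os.utime(link, None, follow_symlinks=False)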
07:02 <tgr@mira> Synchronized wmf-config/PrivateSettings.php: Mass logout via $wgAuthenticationTokenVersion - T124440#2010709 (duration: 01m 20s) [production]
07:01 <tgr@mira> Synchronized private/PrivateSettings.php: Mass logout via $wgAuthenticationTokenVersion - T124440#2010709 (duration: 01m 19s) [production]
01:58 <legoktm> added SMalyshev to wikidata-query gerrit group [production]
01:34 <krenair@mira> Synchronized php-1.27.0-wmf.12/extensions: https://gerrit.wikimedia.org/r/#/c/269344/ and https://gerrit.wikimedia.org/r/#/c/269293/1 (duration: 01m 51s) [production]
01:27 <krenair@mira> Synchronized php-1.27.0-wmf.12/extensions/OAuth/frontend/specialpages/SpecialMWOAuthManageConsumers.php: https://gerrit.wikimedia.org/r/#/c/269333/ (duration: 01m 19s) [production]
01:24 <krenair@mira> Synchronized php-1.27.0-wmf.12/resources/src: https://gerrit.wikimedia.org/r/#/c/269140/ (duration: 01m 19s) [production]
00:54 <krenair@mira> Synchronized wmf-config/CommonSettings.php: https://gerrit.wikimedia.org/r/#/c/266509/ (duration: 01m 17s) [production]
00:52 <krenair@mira> Synchronized docroot/noc: https://gerrit.wikimedia.org/r/#/c/266509/8 (duration: 01m 17s) [production]
00:50 <krenair@mira> Synchronized wmf-config/ProductionServices.php: https://gerrit.wikimedia.org/r/#/c/266509/8 (duration: 01m 18s) [production]
00:42 <hashar> killed Zuul scheduler. On gallium edited /usr/share/python/zuul/local/lib/python2.7/site-packages/zuul/trigger/gerrit.py and modified replication_timeout = 300 -> replication_timeout = 10. Started Zuul [production]
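The 00:42 entry describes a live, in-place edit of the installed Zuul code on gallium. As reported in the log, the change was a single constant in zuul/trigger/gerrit.py; a reconstruction of the one-line edit, not a patch taken from the Zuul source tree:

    # Before: Zuul waited up to 300 seconds for Gerrit replication.
    replication_timeout = 300

    # After hashar's edit on gallium: give up after 10 seconds instead.
    replication_timeout = 10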
00:26 <krenair@mira> Synchronized portals: https://gerrit.wikimedia.org/r/#/c/268849/ (duration: 01m 18s) [production]
00:24 <krenair@mira> Synchronized portals/prod/wikipedia.org/assets: https://gerrit.wikimedia.org/r/#/c/268849/ (duration: 01m 18s) [production]
00:17 <krenair@mira> Synchronized portals: https://gerrit.wikimedia.org/r/#/c/268849/ (duration: 01m 17s) [production]
00:15 <krenair@mira> Synchronized portals/prod/wikipedia.org/assets: https://gerrit.wikimedia.org/r/#/c/268849/ (duration: 01m 16s) [production]