2016-02-04
14:41 <moritzm> rebooting db203[45] for kernel update [production]
14:09 <hoo> Restarted blazegraph on wdqs1001 [production]
13:56 <godog> powercycle ms-be2020 [production]
13:39 <moritzm> continue rolling reboot of maps cluster for kernel update (2002-2004) [production]
12:21 <jynus> starting mysql at db2009 [production]
12:08 <moritzm> rebooting db2001 to db2019 for kernel update [production]
11:44 <jynus> dropping echo_* tables from labs [production]
11:18 <dcausse> elastic codfw: resuming writes and setting cluster.routing.allocation.balance.threshold back to default (1%) [production]
10:35 <dcausse> elastic codfw: freezing writes and setting cluster.routing.allocation.balance.threshold to 100% (fast recovery test) [production]
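The threshold change dcausse logs is a transient cluster-wide setting; one way to apply it is a PUT to Elasticsearch's `/_cluster/settings` endpoint with a body like the sketch below (the value 100 mirrors the log entry; the use of a transient setting, rather than persistent, is an assumption):

```json
{
  "transient": {
    "cluster.routing.allocation.balance.threshold": 100
  }
}
```

Setting it back to the default (1.0) afterwards, as the 11:18 entry records, restores normal shard-rebalancing sensitivity.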
10:34 <hashar@mira> Synchronized php-1.27.0-wmf.12/.gitmodules: Set branch in .gitmodules for extensions/Wikidata https://gerrit.wikimedia.org/r/#/c/268218/ (duration: 02m 08s) [production]
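The change hashar synchronizes sets the `branch` key for a submodule in `.gitmodules`; a hypothetical fragment of what such an entry looks like (the `url` and `branch` values here are illustrative, not taken from the log or the linked Gerrit change):

```ini
[submodule "extensions/Wikidata"]
	path = extensions/Wikidata
	url = https://gerrit.wikimedia.org/r/mediawiki/extensions/Wikidata
	branch = wmf/1.27.0-wmf.12
```

With `branch` set, `git submodule update --remote` tracks that branch instead of the remote's default.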
10:16 <moritzm> rolling reboot of maps cluster for kernel update [production]
10:14 <jynus> testing new replication filters from production's testwiki [production]
10:13 <elukey> running smartctl -t long on kafka1012 (kafka not running, host de-pooled from the broker list) [production]
10:11 <moritzm> repooling restbase2006 [production]
10:01 <jynus> applying live on the 7 sanitarium instances the newly puppet-configured labs replication filters [production]
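The labs replication filters jynus applies are MariaDB replica-side options; a hedged my.cnf sketch of the mechanism (the table patterns are illustrative, though the 11:44 entry about dropping `echo_*` tables from labs suggests filters of roughly this shape):

```ini
[mysqld]
# Do not replicate privacy-sensitive tables to the labs-facing replicas
# (patterns hypothetical, not copied from the actual puppet config)
replicate-wild-ignore-table = %.echo\_%
replicate-wild-ignore-table = %.user\_newtalk
```

These filters can also be set live with `CHANGE REPLICATION FILTER` on recent MariaDB/MySQL versions, which matches the "applying live" wording in the entry.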
09:57 <moritzm> repooling restbase2005, depooling restbase2006 for kernel reboot/Java update [production]
09:46 <dcausse> elastic in codfw: reducing the number of replicas from 0-3 to 0-2 for commonswiki_file [production]
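The "0-3 to 0-2" notation suggests this entry refers to Elasticsearch's `index.auto_expand_replicas` setting (a lower-upper bound range) rather than a fixed replica count; reading it that way is an interpretation, not stated in the log. A sketch of the index-settings body that would make that change:

```json
{
  "index": {
    "auto_expand_replicas": "0-2"
  }
}
```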
09:46 <moritzm> repooling restbase2004, depooling restbase2005 for kernel reboot/Java update [production]
09:39 <ema> re-enabling puppet on mw1161 [production]
09:34 <moritzm> depooling restbase2004 for kernel reboot/Java update [production]
09:10 <jynus> converting remaining InnoDB tables (s3) to TokuDB on db1069 [production]
08:14 <chasemp> iridium puppet agent --enable && puppet agent --disable "DO NOT ENABLE AS IT WILL BREAK THINGS CONTACT MUKUNDA" [production]
07:51 <twentyafterfour> phabricator repositories checked out to these revisions: http://pastebin.com/JxEaYKiW [production]
07:49 <chasemp> git checkout tag release/2015-11-18/1 for phab & libphutil on iridium [production]
07:35 <andrewbogott> disabling puppet on iridium to prevent it from smashing phabricator (as it seems to do now and then) [production]
07:00 <andrewbogott> on iridium in /srv/deployment/phabricator/deploy/phabricator, naming the currently detached git branch 'andrewfounditlikethis' [production]
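Naming a detached HEAD, as andrewbogott logs, amounts to creating a branch at the commit you are currently on. A toy reproduction in a scratch repository (paths and history are illustrative, not the state of iridium):

```shell
set -e
rm -rf /tmp/sal-demo && git init -q /tmp/sal-demo && cd /tmp/sal-demo
git -c user.name=demo -c user.email=demo@example.org commit -q --allow-empty -m first
git -c user.name=demo -c user.email=demo@example.org commit -q --allow-empty -m second
git checkout -q HEAD~1                     # detach HEAD at the older commit
git checkout -q -b andrewfounditlikethis   # give the detached state a branch name
git symbolic-ref --short HEAD              # prints the new branch name
```

This preserves whatever commit the deploy directory happened to be on, so it can be inspected later without being garbage-collected.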
06:49 <robh> phabricator down with errors during repo updates in phd daemon log [production]
02:12 <mutante> OTRS - changed motd message in /opt/otrs/Kernel/Output/HTML/Templates/Standard/Motd.tt - admins can turn it on and off [production]
01:04 <krenair@mira> Synchronized php-1.27.0-wmf.12/tests: https://gerrit.wikimedia.org/r/#/c/268332/ (duration: 02m 08s) [production]
01:01 <krenair@mira> Synchronized php-1.27.0-wmf.12/includes/parser: https://gerrit.wikimedia.org/r/#/c/268332/ (duration: 02m 25s) [production]
01:00 <moritzm> rebooting iridium (phabricator host) for kernel update [production]
00:42 <YuviPanda> yuvipanda@labstore2001:~$ sudo lvremove backup/maps20160121040005 [production]
00:41 <YuviPanda> yuvipanda@labstore2001:~$ sudo lvremove backup/tools20160121020007 [production]
00:04 <thcipriani@mira> rebuilt wikiversions.php and synchronized wikiversions files: group1 wikis to 1.27.0-wmf.12 [production]
2016-02-03
23:53 <moritzm> repooling restbase2002, depooling restbase2003 for kernel/Java update [production]
23:39 <moritzm> repooling restbase2001, depooling restbase2002 for kernel/Java update [production]
23:36 <thcipriani@mira> rebuilt wikiversions.php and synchronized wikiversions files: group0 to 1.27.0-wmf.12 [production]
23:29 <hashar> passing wmf12 responsibility to thcipriani. Crashing to bed myself. [production]
23:22 <moritzm> depooling restbase2001 for kernel/Java update [production]
23:15 <moritzm> rebooting wdqs1002 for kernel update [production]
23:08 <hashar> Full script of my deployment session is on mira.codfw.wmnet:/home/hashar/wmf12-deploy.script [production]
23:07 <hashar@mira> rebuilt wikiversions.php and synchronized wikiversions files: Clarify only testwiki and test2wiki are on php-1.27.0-wmf.12 [production]
23:07 <moritzm> rebooting wdqs1001 for kernel update [production]
22:51 <hashar> test / test2 wikis are incredibly slow. Filed https://phabricator.wikimedia.org/T125727 [production]
22:47 <subbu> finished deploying parsoid sha 98619f7f [production]
22:43 <hashar> sync-wikiversions "test2wiki to php-1.27.0-wmf.12" [production]
22:43 <hashar@mira> rebuilt wikiversions.php and synchronized wikiversions files: test2wiki to php-1.27.0-wmf.12 [production]
22:41 <moritzm> repooling restbase1009 [production]
22:38 <hashar@mira> Finished scap: to properly sync other master tin due to l10nupdate ui mismatch (duration: 24m 27s) [production]
22:34 <moritzm> repooling restbase1006, depooling restbase1009 for kernel/Java update [production]