2016-03-23
ยง
|
21:38 <bblack> depooling cp3042 - T125485 [production]
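Depooling a cache host like cp3042 is normally done through conftool rather than by hand-editing config; a minimal sketch follows, assuming the confctl selector syntax and the esams site suffix, both of which are assumptions here.

    # mark the host as not pooled in etcd (selector fields are assumed)
    sudo confctl select 'name=cp3042.esams.wmnet' set/pooled=no
    # or run the depool wrapper on the host itself
    sudo depool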
20:53 <chasemp> reboot labvirt1003 [production]
20:40 <mdholloway> found an issue with the mobileapps deployment, reverting to 85856f7 [production]
20:29 <mdholloway> starting mobileapps deployment [production]
20:18 <chasemp> reboot labvirt1002 [production]
20:14 <subbu> finished deploying parsoid version 5538d868 [production]
20:10 <subbu> synced code. restarted parsoid on wtp1002 (~4 minutes back) as a canary [production]
20:03 <subbu> starting parsoid deploy [production]
19:56 <andrewbogott> rebooting labvirt1001 [production]
19:38 <chasemp> rebooting labvirt1011 [production]
19:36 <andrewbogott> rebooting labvirt1009 [production]
19:27 <ori@tin> Synchronized php-1.27.0-wmf.18/includes/Revision.php: I77575d6d0ea: Request-local caching of revision text (duration: 00m 28s) [production]
19:25 <andrewbogott> rebooting labvirt1008 [production]
19:05 <thcipriani@tin> rebuilt wikiversions.php and synchronized wikiversions files: group1 wikis to 1.27.0-wmf.18 [production]
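The 19:05 entry is the weekly MediaWiki train step: group1 wikis are switched to the 1.27.0-wmf.18 branch by updating the wikiversions mapping and syncing it out. A minimal sketch, assuming the wikiversions.json layout and the scap sync-wikiversions subcommand; the exact train tooling in use that day may have differed.

    # on the deployment host, in /srv/mediawiki-staging
    # 1. point the group1 wikis at 1.27.0-wmf.18 in wikiversions.json
    # 2. rebuild wikiversions.php and push it to the cluster
    scap sync-wikiversions 'group1 wikis to 1.27.0-wmf.18'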
19:01 <legoktm> restarting zuul [production]
18:12 <urandom> Removing compaction throughput throttling on restbase2004.codfw.wmnet : T130254 [production]
18:05 <urandom> Increasing compactionthroughput to 200MB/s on restbase2004.codfw.wmnet : T130254 [production]
17:54 <urandom> Removing old heap dumps on restbase2004.codfw.wmnet : T130254 [production]
17:48 <urandom> Increasing compactionthroughput to 120MB/s on restbase2004.codfw.wmnet : T130254 [production]
17:36 <urandom> Increasing compactionthroughput to 100MB/s on restbase2004.codfw.wmnet : T130254 [production]
17:24 <urandom> Starting scrub of parsoid_html on restbase2004.codfw.wmnet : T130254 [production]
17:21 <urandom> Disabling gossip and binary transport on restbase2004.codfw.wmnet : T130254 [production]
17:18 <urandom> Starting Cassandra on restbase2004.codfw.wmnet : T130254 [production]
17:16 <urandom> Disabling puppet on restbase2004.codfw.wmnet to override compactor concurrency : T130254 [production]
17:14 <urandom> Cancelling offline scrubs on restbase2004.codfw.wmnet : T130254 [production]
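The 17:14-18:12 entries are one Cassandra maintenance pass on restbase2004: cancel the offline scrubs, bring Cassandra back up with gossip and client traffic disabled, run an online scrub, and step the compaction throughput cap up until the backlog clears. These knobs map onto standard nodetool subcommands; a sketch follows, where the keyspace name is an assumption (RESTBase prefixes its table names per storage group).

    # keep the node out of cluster/client traffic while it catches up
    nodetool disablegossip
    nodetool disablebinary
    # online scrub of the parsoid_html table (keyspace name assumed)
    nodetool scrub <keyspace> parsoid_html
    # raise the compaction throughput cap step by step; 0 removes the throttle
    nodetool setcompactionthroughput 100
    nodetool setcompactionthroughput 120
    nodetool setcompactionthroughput 200
    nodetool setcompactionthroughput 0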
17:02 <bd808@tin> Synchronized wmf-config/InitialiseSettings.php: touched (duration: 00m 25s) [production]
16:57 <bd808@tin> Synchronized wmf-config/InitialiseSettings.php: Logging: add ApiAction kafka logging (34f236c) (T108618) (duration: 00m 28s) [production]
16:56 <bd808@tin> Synchronized wmf-config/event-schemas: Logging: add ApiAction kafka logging (34f236c) (duration: 00m 31s) [production]
16:51 <bd808@tin> Synchronized php-1.27.0-wmf.17/includes/api/ApiMain.php: Rename ApiRequest to ApiAction (4dc12de) (duration: 00m 47s) [production]
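The "Synchronized <path>: <message> (duration: ...)" lines are what the deployment tooling logs automatically when a single file or directory is pushed from the staging copy on tin to the app servers. A sketch of the 16:57 sync, assuming the sync-file command of that era (the modern equivalent is scap sync-file):

    # on tin, from /srv/mediawiki-staging
    sync-file wmf-config/InitialiseSettings.php 'Logging: add ApiAction kafka logging (34f236c) (T108618)'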
15:48 <elukey> updated puppet-compiler to version 0.1.2 (added submodule support) [production]
15:18 <urandom> CORRECTION: Starting cleanups on restbase10{08,10,11}-{a,b}.eqiad.wmnet : T125842 [production]
15:17 <urandom> Starting cleanups on restbase10{08,12,13}-{a,b}.eqiad.wmnet : T125842 [production]
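"Cleanups" here most likely refers to Cassandra's cleanup compaction, which discards data for token ranges a node no longer owns (typically after new nodes have been bootstrapped). The -a/-b suffixes are the multi-instance Cassandra instances on each restbase host, so each instance is targeted separately; a sketch per instance, with the keyspace argument optional:

    nodetool cleanup              # all keyspaces on this instance
    nodetool cleanup <keyspace>   # or restrict to one keyspace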
14:33 <godog> rolling-restart restbase after https://gerrit.wikimedia.org/r/279112 [production]
12:29 <godog> pool restbase1012 / restbase1013 [production]
12:27 <godog> halt restbase1003 / restbase1004 [production]
12:17 <moritzm> installing various security updates on mediawiki eqiad servers (along with HHVM restarts): graphite2, libldap, pixman, sqlite, pygments, gnutls26 (already running fine on canaries since yesterday) [production]
11:54 <godog> swift eqiad-prod ms-be1020 / ms-be1021 to weight 3500 [production]
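The 11:54 entry brings two new Swift backends up to full ring weight in eqiad. With stock Swift tooling this is a set_weight plus rebalance on each ring builder; at WMF the weights may instead be driven from puppet, so treat the device search values below as assumptions.

    # repeat for the account, container and object builders
    swift-ring-builder object.builder set_weight <search-value-for-ms-be1020> 3500
    swift-ring-builder object.builder set_weight <search-value-for-ms-be1021> 3500
    swift-ring-builder object.builder rebalance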
10:59 <jynus> stopping and restarting db1015 for upgrade and clone to db1077 [production]
10:44 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1015, increase weight of db1044 (duration: 00m 25s) [production]
10:38 <godog> depool restbase1003 / restbase1004 prior to deprovisioning the hardware [production]
10:29 <moritzm> installing various security updates on mediawiki codfw servers (along with HHVM restarts): graphite2, libldap, pixman, sqlite, pygments, gnutls26 (already running fine on canaries since yesterday) [production]
09:21 <jynus> start mysql on es2019 at es2018-bin.000044:287914983 [production]
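"start mysql on es2019 at es2018-bin.000044:287914983" reads as resuming replication on es2019 from an explicit coordinate in the master's binlog (es2018 being the master, judging by the binlog file name). A sketch of doing that by hand, assuming the master host and replication credentials are already configured on the replica:

    # point the replica at the recorded binlog position and restart replication
    mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='es2018-bin.000044', MASTER_LOG_POS=287914983; START SLAVE;"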
08:38 <jynus> starting mysql at es2019 [production]
08:37 <jynus@tin> Synchronized wmf-config/db-codfw.php: Add db2008 (x1) depooled, depool es2019 (duration: 00m 26s) [production]
08:30 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1044, pool db1075 (duration: 00m 25s) [production]
08:27 <jynus@tin> Synchronized wmf-config/db-codfw.php: Add db1075 (duration: 00m 40s) [production]
07:59 <jynus> powercycling es2019 - it was down [production]
07:19 <_joe_> progressively activating cross-dc replication and encryption between the jobqueue redises [production]
03:34 <mutante> tin - re-arm keyholder [production]
03:26 <mutante> tin - restart keyholder - re: <twentyafterfour> 1 mismatch. I guess that keyholder-proxy needs to be restarted on tin [production]
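keyholder is the agent on the deployment host that holds the shared deployment SSH keys; after its proxy is restarted, the keys have to be re-armed (unlocked with their passphrases) before deployments work again. A sketch of the 03:26/03:34 sequence, assuming the keyholder CLI and the keyholder-proxy service name mentioned in the log:

    sudo service keyholder-proxy restart
    sudo keyholder arm       # prompts for the key passphrases
    sudo keyholder status    # verify the agent reports the keys as armed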