2017-04-27
ยง
|
17:08 <_joe_> stop pybal on lvs1006 to stop announcing via BGP [production]
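    Note: PyBal on the LVS hosts announces service IPs over BGP, so withdrawing the routes comes down to stopping the service. A minimal sketch, assuming PyBal runs as a regular system service on lvs1006:
        # stop PyBal so its BGP session drops and the announced routes are withdrawn
        sudo service pybal stop
        # verify it is no longer running before starting the maintenance
        sudo service pybal status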
17:08 <demon@naos> Pruned MediaWiki: 1.29.0-wmf.16 (duration: 00m 13s) [production]
17:04 <demon@naos> Synchronized scap/plugins/clean.py: One last fix (duration: 01m 04s) [production]
16:53 <gehel> unbanning all elasticsearch servers in eqiad row D - T148506 [production]
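    Note: "banning" here refers to excluding hosts from shard allocation, so unbanning clears that exclusion. A minimal sketch against the cluster settings API (the exclusion attribute and empty value are assumptions; the actual setting used may differ):
        # clear the transient allocation exclusion so the row D nodes receive shards again
        curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
          "transient": { "cluster.routing.allocation.exclude._host": "" }
        }'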
16:48 <demon@naos> Synchronized scap/plugins/clean.py: --keep-static is nice now. Also need a co-master sync (duration: 01m 28s) [production]
16:45 <andrewbogott> re-enabling labs instance creation/deletion [production]
16:42 <demon@naos> Pruned MediaWiki: 1.29.0-wmf.19 [keeping static files] (duration: 00m 15s) [production]
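    Note: the pruning above is done with the scap clean plugin being fixed in the earlier syncs. A hypothetical invocation, assuming the plugin takes the branch name plus the flag mentioned above:
        # drop the old 1.29.0-wmf.19 checkout but keep its static assets
        scap clean 1.29.0-wmf.19 --keep-static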
16:32 <gehel> unbanning elasticsearch servers in eqiad row D - elastic10(17|18|19|20) - T148506 [production]
15:56 <elukey> restart of jmxtrans on all the hadoop worker nodes [production]
15:51 <andrewbogott> disabling labs instance create/delete to avoid hilarity during network maintenance [production]
15:50 <elukey> forced 'service ferm start' on the failed analytics hosts [production]
15:46 <marostegui> Upgrade db1091 mariadb from 10.0.23 to 10.0.28 [production]
15:39 <marostegui> Upgrade db1089 mariadb from 10.0.23 to 10.0.28 [production]
15:34 <marostegui> Upgrade db1090 mariadb from 10.0.23 to 10.0.28 [production]
15:22 <jynus> stopping all replication channels on dbstore1001 for topology changes [production]
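    Note: dbstore1001 uses MariaDB multi-source replication, so all channels can be stopped with one statement. A minimal sketch, assuming MariaDB 10 syntax and local root access:
        # stop every replication channel before the topology change
        sudo mysql -e "STOP ALL SLAVES;"
        # resume once the topology change is complete
        sudo mysql -e "START ALL SLAVES;"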
14:34 <ema> upgrade upload-codfw to varnish 4.1.5-1wm4 T145661 [production]
14:29 <marostegui> Stop MySQL and shutdown es2019 for HW replacement - T149526 [production]
14:26 <ema> varnish 4.1.5-1wm4 uploaded to apt.w.o T145661 [production]
14:08 <marostegui> Deploy alter table labswiki.revision on labtestweb2001 - T132416 [production]
14:04 <marostegui> Deploy alter table labswiki.revision on silver - T132416 [production]
13:57 <_joe_> restarting HHVM on mw2213, stuck in HPHP::Treadmill::getAgeOldestRequest [production]
13:52 <ladsgroup@naos> Synchronized wmf-config/Wikibase-production.php: SWAT: Set echoIcon for notification of wikibase in test wikis (T142102) (duration: 00m 57s) [production]
13:52 <Amir1> start of scap sync-file wmf-config/Wikibase-production.php 'SWAT: Set echoIcon for notification of wikibase in test wikis (T142102)' [production]
13:45 <ladsgroup@naos> Synchronized portals: (no justification provided) (duration: 01m 05s) [production]
13:44 <ladsgroup@naos> Synchronized portals/prod/wikipedia.org/assets: (no justification provided) (duration: 01m 21s) [production]
13:43 <Amir1> ladsgroup@naos:/srv/mediawiki-staging$ portals/sync-portals (T128546) [production]
12:53 <volans> disabled puppet on rdb* [production]
12:06 <marostegui> Upgrade es1011 and es1014 from mariadb 10.0.22 to mariadb 10.0.28 [production]
11:50 <marostegui> Upgrade mariadb from 10.0.22 to 10.0.28 on es1015 [production]
09:46 <moritzm> upgrading mysql on bohrium/piwik [production]
09:25 <_joe_> restarting all redis instances for jobqueues in eqiad to force a full resync with masters in codfw T163337 [production]
08:55 <jynus> deploying alter table to all wikis on s6 T163979 [production]
08:54 <_joe_> restarting redis rdb1001:6380 after cleaning up the current AOF files for investigation of T163337 [production]
08:50 <moritzm> installing django security updates [production]
08:29 <godog> ms-be1039 issue "controller slot=3 pd 1I:1:5 modify disablepd" to force failed sdc - T163690 [production]
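    Note: the quoted controller command marks the physical drive behind sdc as disabled so it is reported as failed. A minimal sketch, assuming HP's hpssacli utility on ms-be1039:
        # disable physical drive 1I:1:5 on the controller in slot 3 (the drive backing /dev/sdc)
        sudo hpssacli controller slot=3 pd 1I:1:5 modify disablepd
        # check the resulting drive status
        sudo hpssacli controller slot=3 pd 1I:1:5 show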
08:25 <ema> restart varnish-be on cp2024 with expiry thread RT experiment enabled [production]
08:19 <ema> upgrade varnish to 4.1.5-1wm3 on cp2024 [production]
07:56 <elukey> aqs100[69] back serving AQS traffic [production]
07:55 <ema> varnish 4.1.5-1wm3 uploaded to apt.w.o T145661 [production]
07:16 <marostegui@naos> Synchronized wmf-config/db-eqiad.php: Repool hosts that needed to be moved for the network maintenance - T162681 (duration: 02m 32s) [production]
06:53 <marostegui> Reboot es1014 for kernel upgrade - T162029 [production]
06:50 <elukey> executed kafka preferred-replica-election to rebalance topic leaders in the analytics cluster after maintenance [production]
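    Note: after a maintenance some partitions end up led by brokers other than their preferred replica. A minimal sketch of the rebalance using the stock Kafka tooling (the ZooKeeper endpoint is a placeholder):
        # move partition leadership back to each partition's preferred replica
        kafka-preferred-replica-election.sh --zookeeper ZOOKEEPER_HOST:2181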
06:45 <marostegui> Reboot es1011 for kernel upgrade - T162029 [production]
06:39 <marostegui> Logging for the record: drop table hashs from s2, s3 and s7 (only places where it existed) - T54927 [production]
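    Note: the drop itself is a single statement per wiki on the affected sections. A minimal sketch, with the host and wiki name as placeholders:
        # drop the obsolete table where it still exists (s2, s3 and s7 wikis only)
        mysql -h DB_HOST WIKI_DB -e "DROP TABLE IF EXISTS hashs;"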
06:23 <_joe_> moving orphaned objects in ms-be1039's root partition in sdc1/stale_root to save space [production]
06:17 <marostegui> Deploy schema change on s7 metawiki.pagelinks to remove partitioning on db1041 - T153300 [production]
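    Note: removing partitioning is a plain ALTER in MariaDB. A minimal sketch of the statement behind this schema change (exact options, and whether it is wrapped in any online-schema-change tooling, are not stated in the log):
        # collapse the partitioned pagelinks table back into a single unpartitioned table
        mysql -h db1041 metawiki -e "ALTER TABLE pagelinks REMOVE PARTITIONING;"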
06:14 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1049 - T163548 [production]
06:14 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1070 (running locally instead of neodymium as this host will be affected by the network maintenance) - T163548 [production]
06:11 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1070 (running locally instead of neodymium as this host will be affected by the network maintenance) - T130067 T162539 [production]
06:08 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1049 - T130067 T162539 [production]