2019-02-28
12:18 <zfilipin@deploy1001> Synchronized wmf-config/: SWAT: [[gerrit:491959|Show referencePreviews on group0 wikis as beta feature (T214905)]] (duration: 00m 56s) [production]
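
The SWAT entry above enables a beta feature on the group0 wikis only. In wmf-config this kind of switch is usually a per-dblist setting in InitialiseSettings.php; the sketch below is only illustrative, with a hypothetical setting name, and the real change is the linked gerrit:491959 patch.

    // Hypothetical sketch of a group0-only toggle in the style of
    // wmf-config/InitialiseSettings.php; the setting name and values here
    // are illustrative, not copied from gerrit:491959.
    'wmgUseReferencePreviewsBetaFeature' => [
        'default' => false, // keep the feature off everywhere else
        'group0'  => true,  // enable as a beta feature on group0 wikis only
    ],
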
11:59 <jbond42> rolling openssl security updates to jessie systems [production]
11:32 <akosiaris> remove sca1003, sca1004, sca2003, sca2004 from the fleet. Celebrate!!!! [production]
11:28 <elukey> pause cleanup of 20k+ zookeeper nodes on conf100[4-6] (old Hadoop Yarn state) - T216952 [production]
10:00 <_joe_> executing a rolling puppet run (2 servers at a time per cluster, per DC) in eqiad and codfw, as an HHVM restart will be triggered [production]
09:37 <gilles@deploy1001> Synchronized php-1.33.0-wmf.19/extensions/NavigationTiming/modules/ext.navigationTiming.js: T217210 Don't assume PerformanceObserver entry types are supported (duration: 00m 54s) [production]
09:30 <elukey> start cleanup of 20k+ zookeeper nodes on conf100[4-6] (old Hadoop Yarn state) - T216952 [production]
09:26 <moritzm> installed php security updates on netmon1002 and people1001 [production]
09:22 <marostegui> Stop MySQL on db1125 (sanitarium) to upgrade; this will generate lag on labs on s2, s4, s6, s7 [production]
09:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1121 (duration: 00m 54s) [production]
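
Several entries this morning (Depool db1121, Repool db1121, Slowly repool db1079, Increase API traffic db1079) are edits to replica weights in wmf-config/db-eqiad.php followed by a sync. A minimal sketch of what such a weight change can look like, assuming the LBFactoryMulti-style 'sectionLoads' / 'groupLoadsBySection' layout; hosts other than db1121 are omitted and all weights are made up:

    // Illustrative excerpt in the style of wmf-config/db-eqiad.php
    // (LBFactoryMulti configuration); weights are invented for the example.
    'sectionLoads' => [
        's4' => [
            // ... other s4 replicas ...
            'db1121' => 100, // repooled: weight restored after the MySQL upgrade
            // 'db1121' => 0, // depooled: zero weight (or removed) so it takes no traffic
        ],
    ],
    // API traffic is pooled per query group, which is why "Increase API
    // traffic" shows up as its own step separate from the general repool.
    'groupLoadsBySection' => [
        's4' => [
            'api' => [
                'db1121' => 50, // ramped up gradually after maintenance
            ],
        ],
    ],
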
09:08 <marostegui> Stop MySQL on db1121 for upgrade; this will generate lag on labsdb:s4 [production]
09:08 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1121 (duration: 00m 53s) [production]
08:59 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1079 (duration: 00m 53s) [production]
08:32 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase API traffic db1079 after mysql upgrade (duration: 00m 53s) [production]
08:31 <elukey> roll restart of Yarn Resource Managers on an-master100[1,2] to pick up new settings [production]
08:22 <marostegui> Change abuse_filter_log indexes on s3 codfw; lag will appear on codfw - T187295 [production]
08:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1079 after mysql upgrade (duration: 00m 54s) [production]
08:06 <moritzm> installing glibc security updates for stretch [production]
07:47 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1079 in API after mysql upgrade (duration: 00m 53s) [production]
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1079 after mysql upgrade (duration: 00m 56s) [production]
07:08 <marostegui> Stop MySQL on db1079 for mysql upgrade [production]
06:50 <marostegui> Deploy schema change on db1079; this will generate lag on s7 on labs - T86342 [production]
06:23 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1079 (duration: 00m 55s) [production]
06:18 <kart_> Finished manual run of unpublished ContentTranslation draft purge script (T216983) [production]
05:56 <marostegui> Upgrade MySQL on db1124 (Sanitarium); lag will be generated on s1, s3, s5, s8 [production]
03:03 <kart_> Manual run of unpublished ContentTranslation draft purge script (T216983) [production]
02:08 <bstorm_> clouddb1002 is now in place to replace labsdb1004 as the replica for toolsdb, but not yet for wikilabels postgres T193264 [production]
01:43 <twentyafterfour> phabricator upgrade completed without issues (actually completed at 01:23 UTC but I failed to hit enter and submit this message) [production]
01:20 <twentyafterfour> deploying phabricator update 2019-02-27 [production]
01:03 <twentyafterfour> preparing to deploy phabricator-2019-02-27 [production]
00:55 <ebernhardson@deploy1001> Synchronized php-1.33.0-wmf.19/vendor/: vendor/ruflin/Elastica: Remove scalar return type hints (duration: 01m 33s) [production]
00:22 <ebernhardson@deploy1001> Synchronized vendor/: Remove scalar type hints from ruflin/Elastica (duration: 00m 58s) [production]
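
The two vendor/ syncs above remove scalar return type hints from the bundled ruflin/Elastica library, presumably to keep the shared vendor tree compatible with the runtimes then in production. Schematically the change looks like the sketch below; the class and method names are hypothetical stand-ins, not an actual Elastica diff:

    class Result {           // hypothetical class standing in for an Elastica class
        private $size = 0;

        // Before the change the method carried a PHP 7 scalar return type hint:
        //     public function getSize(): int
        // After the change the hint is gone and the body is untouched; callers
        // see the same value, it is simply no longer enforced by the engine.
        public function getSize() {
            return $this->size;
        }
    }
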
00:10 <ebernhardson@deploy1001> Synchronized wmf-config/CommonSettings.php: T215725 Remove mediawikiwiki from wgCentralAuthAutoCreateWikis (duration: 00m 54s) [production]
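
The CommonSettings.php sync above (T215725) trims one entry from $wgCentralAuthAutoCreateWikis, the CentralAuth list of wikis on which an account is auto-created for a global user. A sketch of the shape of that edit; the wikis listed are examples, not the production value of the setting:

    // Illustrative only: example list members, not the real production list.
    $wgCentralAuthAutoCreateWikis = [
        'loginwiki',
        'metawiki',
        // 'mediawikiwiki', // removed per T215725
    ];
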
00:07 <ebernhardson@deploy1001> Synchronized wmf-config/: T215684 Add config for switching Wikibase search to WikibaseCirrusSearch codebase (duration: 00m 55s) [production]
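
The wmf-config sync above (T215684) adds a switch so Wikibase search can be routed to the separate WikibaseCirrusSearch codebase instead of the search code shipped with Wikibase. The flag name below is hypothetical and only illustrates the usual wmf-config pattern of a per-wiki feature flag; see T215684 for the real change:

    // Hypothetical flag name, shown only to illustrate the per-wiki switch
    // pattern in wmf-config; not the actual setting from T215684.
    'wmgUseWikibaseCirrusSearchCodebase' => [
        'default' => false,          // keep using the search code bundled with Wikibase
        'testwikidatawiki' => true,  // exercise the WikibaseCirrusSearch codebase on a test wiki first
    ],
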
2019-02-27
21:57 <XioNoX> delete local pref for peering sessions in eqiad - T204281 [production]
21:44 <eileen> civicrm revision is c81fe7a4fd, config revision is 050abdf9e8 [production]
21:26 <XioNoX> delete local pref for peering sessions in eqord - T204281 [production]
20:53 <XioNoX> delete local pref for peering sessions in codfw/eqdfw - T204281 [production]
20:50 <hashar> 1.33.0-wmf.19 not rolled to group1. Pending T217285 (Wikibase raising exception on commonswiki). To be figured out during European daytime. [production]
20:50 <eileen> civicrm revision changed from 224bf15206 to c81fe7a4fd, config revision is d1826e371b [production]
20:14 <hashar@deploy1001> rebuilt and synchronized wikiversions files: (no justification provided) [production]
20:04 <hashar@deploy1001> Synchronized php: group1 wikis to 1.33.0-wmf.19 (duration: 00m 53s) [production]
20:04 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.33.0-wmf.19 [production]
19:49 <bstorm_> stopped slave on labsdb1004 for T193264 [production]
19:43 <bstorm_> downtimed labsdb1004 to stop mysql for transferring data for T193264 [production]
19:32 <SMalyshev> repooled wdqs1005, caught up [production]
19:26 <herron> replacing kafka on logstash1004 with logstash1010 T213898 [production]
18:56 <SMalyshev> depooled wdqs1005 to let it catch up [production]
18:36 <smalyshev@deploy1001> Finished deploy [wdqs/wdqs@465673b]: Redeploy GUI for T217161 (duration: 10m 51s) [production]
18:28 <cmjohnson1> powering off mw126[3-6] one at a time to move to a different rack (A5) T212348 [production]