2019-03-28
13:21 <ppchelko@deploy1001> Started deploy [cpjobqueue/deploy@c120b38]: Partition htmlCacheUpdate topic, explicitly exclude htmlCacheUpdate T219159 [production]
13:14 <ppchelko@deploy1001> Finished deploy [cpjobqueue/deploy@17285f8]: Partition htmlCacheUpdate topic, step 1 T219159 (duration: 01m 46s) [production]
13:12 <ppchelko@deploy1001> Started deploy [cpjobqueue/deploy@17285f8]: Partition htmlCacheUpdate topic, step 1 T219159 [production]
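The Started/Finished lines above are what scap's service deployment tooling logs to SAL on the deployer's behalf. A minimal sketch of how such a deploy is typically kicked off from the deployment host; the repository path is an assumption, not taken from the log:

    # on deploy1001, from the service's deploy checkout (path assumed)
    cd /srv/deployment/cpjobqueue/deploy
    git log --oneline -1                                    # confirm the revision about to ship, e.g. c120b38
    scap deploy 'Partition htmlCacheUpdate topic T219159'   # scap logs the Started/Finished deploy entries to SAL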
12:20 <moritzm> removing php 7.0 packages from snapshot1005-1007/1009, dumps are only using 7.2 (T218193) [production]
12:13 <jbond42> move git from the jessie-wikimedia backports repo to components/ci [production]
12:02 <lucaswerkmeister-wmde@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [[gerrit:499756|Revert "SDC: Enable both new-style and old-style Wikibase federation on Commons" (T219450)]] (duration: 00m 57s) [production]
11:54 <moritzm> upgrading snapshot1005-1007/1009 to component/php72 (T218193) [production]
11:53 <ladsgroup@deploy1001> rebuilt and synchronized wikiversions files: Revert T212597 [production]
11:51 <ladsgroup@deploy1001> Synchronized dblists: Revert T212597 (duration: 00m 58s) [production]
11:29 <ladsgroup@deploy1001> rebuilt and synchronized wikiversions files: T212597 [production]
11:27 <ladsgroup@deploy1001> Synchronized dblists: T212597 (duration: 00m 56s) [production]
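The two pairs of ladsgroup entries above follow the usual MediaWiki config pattern: sync the changed dblists, then rebuild and push wikiversions (and the same again for the revert). A rough sketch of the corresponding commands on the deployment host; the staging path and the exact sync subcommand are assumptions:

    # on deploy1001, in /srv/mediawiki-staging (path assumed)
    scap sync-file dblists 'T212597'     # produces the "Synchronized dblists:" SAL entry
    scap sync-wikiversions 'T212597'     # produces the "rebuilt and synchronized wikiversions files" entry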
11:01 <godog> test copying prometheus metrics on bast3002 [production]
10:54 <gehel> restarting elasticsearch-psi on elastic20[35,36,53] (shards stuck in recovery) - T218878 [production]
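These restarts target a single instance of the multi-instance Elasticsearch setup on each host (the main cluster plus the omega and psi clusters). A rough sketch of the check-then-restart pattern behind them; the HTTPS port and the systemd unit name are assumptions, not taken from the log:

    # list shards still in recovery on the psi cluster (port assumed)
    curl -sk 'https://localhost:9643/_cat/recovery?active_only=true&v'
    # restart the instance holding the stuck shards (unit name assumed)
    sudo systemctl restart elasticsearch-psi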
10:22 <gehel> restarting elasticsearch on elastic20[34,36,50] (shards stuck in recovery) - T218878 [production]
10:15 <addshore@deploy1001> Synchronized php-1.33.0-wmf.23/extensions/Wikibase/lib: T219452 [[gerrit:499738|Revert: Use enableModuleContentVersion() for Wikibase\lib\SitesModule]] (duration: 01m 06s) [production]
10:11 <gehel> restarting elasticsearch-omega on elastic2050 (shards stuck in recovery) - T218878 [production]
09:56 <gehel> restarting elasticsearch-omega on elastic2031 (shards stuck in recovery) - T218878 [production]
09:42 <gehel> restarting elasticsearch on elastic20[28,29,41] (shards stuck in recovery) - T218878 [production]
09:37 <gehel> restarting elasticsearch-psi on elastic20[39,40] (shards stuck in recovery) - T218878 [production]
09:33 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1123 (duration: 00m 56s) [production]
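At this point depooling or repooling a replica like db1123 meant changing its entry in wmf-config/db-eqiad.php and syncing the file, which is what produces the Synchronized lines above and below. A rough sketch; the staging path and the exact way the weight is changed are assumptions:

    # on deploy1001, in /srv/mediawiki-staging (path assumed)
    # edit wmf-config/db-eqiad.php: set db1123's load to 0 (or comment it out) to depool,
    # restore the original weight to repool, then push the change:
    scap sync-file wmf-config/db-eqiad.php 'Repool db1123'   # emits the SAL entry seen above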
09:28 <gehel> restarting elasticsearch on elastic20[25,27] (shards stuck in recovery) - T218878 [production]
09:19 <gehel> restarting elasticsearch-omega on elastic20[38,50] (shards stuck in recovery) - T218878 [production]
09:14 <godog> install rsyslog 8.1901.0-1~bpo8+wmf1 on phab1001 and copper [production]
09:09 <gehel> restarting elasticsearch-omega on elastic2050 (shards stuck in recovery) - T218878 [production]
09:05 <gehel> restarting elasticsearch-psi on elastic20[35,36,53] (shards stuck in recovery) - T218878 [production]
09:00 <gehel> restarting elasticsearch-psi on elastic2036 (shards stuck in recovery) - T218878 [production]
08:57 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1123 (duration: 00m 55s) [production]
08:43 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool pc2007 after upgrade (duration: 00m 57s) [production]
08:38 <gehel> retry shard allocation on elasticsearch codfw all clusters (curl -k -XPOST 'https://localhost:9243/_cluster/reroute?pretty&explain=true&retry_failed') - T218878 [production]
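The reroute call in the entry above asks Elasticsearch to retry shard allocations that previously hit their failure limit. To cover all three codfw clusters it can simply be repeated per HTTPS endpoint; 9243 comes from the log, the other two ports are assumptions:

    for port in 9243 9443 9643; do    # 9243 is from the log entry; the other two cluster ports are assumed
        curl -sk -XPOST "https://localhost:${port}/_cluster/reroute?pretty&explain=true&retry_failed"
    done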
08:37 <gehel> retry shard allocation on elasticsearch codfw (curl -k -XPOST 'https://localhost:9243/_cluster/reroute?pretty&explain=true&retry_failed') [production]
08:33 <elukey> move hadoop yarn configuration from hdfs back to zookeeper - T218758 [production]
08:32 <marostegui> Upgrade pc2007 [production]
08:31 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool pc2007 for upgrade (duration: 00m 56s) [production]
08:23 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool pc2009 after upgrade (duration: 00m 57s) [production]
08:12 <marostegui> Upgrade pc2009 [production]
08:11 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool pc2009 for upgrade (duration: 00m 57s) [production]
08:10 <gehel@cumin2001> END (PASS) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=0) [production]
08:07 <gehel@cumin2001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
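The START/END pair above is the standard SAL output of a spicerack cookbook run. Invoking it on the cumin host looks roughly like this; any arguments the cookbook may take are omitted here and would be assumptions:

    # on cumin2001
    sudo cookbook sre.elasticsearch.force-shard-allocation
    # the cookbook framework logs START and END (PASS/FAIL, exit_code) to SAL by itself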
07:32 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool pc2008 after upgrade (duration: 00m 57s) [production]
07:22 <marostegui> Upgrade pc2008 [production]
07:22 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool pc2008 for upgrade (duration: 00m 57s) [production]
07:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Clean up old unused entries (duration: 01m 04s) [production]
06:27 <marostegui> Deploy schema change on s3 codfw; replication lag will be generated there. [production]
05:39 <marostegui> Restart apache on phab1001 - Phabricator is down [production]
02:50 <chaomodus> restarted pdfrender on scb1004 to address flapping errors [production]
01:45 <XioNoX> add AS specific policy-statements to cr2-eqsin (but don't apply them yet) - T211930 [production]
01:20 <XioNoX> progressive jnt push to standardize cr* [production]
01:15 <XioNoX> remove sandbox-out6 filter from all routers [production]
00:56 <XioNoX> jnt push to standardize asw* [production]
00:32 <XioNoX> jnt push to standardize mr1-* [production]