2018-06-01
19:21 <ebernhar1son> enable query phase slow logging and increase thresholds for fetch phase slow logging for content/general indices on eqiad and codfw elasticsearch clusters [production]
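Elasticsearch's search slow log is controlled per index by dynamic settings, applied with `PUT /<index>/_settings`. A hedged sketch of the kind of settings body such a change would send — the setting names are Elasticsearch's, but the threshold values here are illustrative, not the ones actually applied to the content/general indices:

```json
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.info": "800ms"
}
```

Queries or fetches slower than a threshold are written to the index's slow log at the corresponding level, which is what makes "increase thresholds" reduce fetch-phase log volume.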
19:14 <mutante> zh.planet - fixed issue with corrupt state file and permissions - updated and using new design as well now [production]
17:34 <mutante> deployment.eqiad/codfw DNS names switched from tin to deploy1001 [production]
17:06 <thcipriani@deploy1001> Synchronized README: noop test of new deployment server (duration: 00m 53s) [production]
16:39 <mutante> deploy2001 - also fixing file permissions. files owned by 996 -> mwdeploy, files owned by 997 -> trebuchet [production]
16:21 <mutante> deployment server has switched from tin to deploy1001. set global scap lock on deploy1001, re-enabled puppet and ran puppet, disabled tin as deployment server (T175288) [production]
16:13 <herron> enabled new logstash tcp input with TLS enabled for syslog on port 16514 T193766 [production]
15:51 <gehel> elasticsearch cluster restart on codfw completed - T193734 [production]
15:47 <mutante> @deploy1001:/srv/deployment# find . -uid 997 -exec chown trebuchet {} \; [production]
15:41 <mutante> root@deploy1001:/srv/mediawiki-staging# find . -uid 996 -exec chown mwdeploy {} \; [production]
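A runnable demo of the find-by-numeric-uid pattern used in the two commands above. After the server switch, files were left owned by raw uids (996/997) with no matching passwd entry on the new host; `find -uid` selects them so `chown` can reassign ownership. Chowning to mwdeploy/trebuchet needs root, so this demo only does the selection step, with the current uid standing in for the orphaned one:

```shell
# Create a scratch directory with one file owned by the current user.
dir=$(mktemp -d)
touch "$dir/orphaned-file"

# Select files by numeric uid, exactly as the log entries do with 996/997.
# On deploy1001 the full fix was: find . -uid 996 -exec chown mwdeploy {} \;
find "$dir" -type f -uid "$(id -u)" -print
```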
15:17 <mutante> [deploy1001:~] $ scap pull-master tin.eqiad.wmnet [production]
15:12 <mutante> tin: umask 022 && echo 'switching deploy servers' > /var/lock/scap-global-lock [production]
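The scap global lock above is just a file whose contents are shown as the reason to anyone who tries to deploy while it exists. A runnable sketch of the same pattern, writing to a temp path instead of the real /var/lock/scap-global-lock:

```shell
# umask 022 keeps the lock file world-readable, so other deployers can be
# shown the message; a temp file stands in for /var/lock/scap-global-lock.
lock=$(mktemp)
( umask 022 && echo 'switching deploy servers' > "$lock" )
cat "$lock"
```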
15:05 <mutante> rsyncing /srv/mediawiki-staging to /srv/mediawiki-staging-before-backup/ on tin as a backup [production]
14:52 <mutante> deploy1001 - scap pull [production]
14:30 <elukey> killed pt-heartbeat-wikimedia after https://gerrit.wikimedia.org/r/436748 on db1107 [production]
14:24 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1083 fully (duration: 01m 02s) [production]
14:08 <reedy@tin> Synchronized php-1.32.0-wmf.6/extensions/FlaggedRevs: T196139 (duration: 01m 08s) [production]
13:32 <marostegui> Deploy schema change on dbstore1002:s5 - T191316 T192926 T89737 T195193 [production]
13:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1096:3315 (duration: 01m 03s) [production]
10:59 <_joe_> disabling puppet on all hosts with role::mediawiki::common while installing mcrouter everywhere [production]
10:35 <marostegui> Deploy schema change on db1096:3315 - T191316 T192926 T89737 T195193 [production]
10:34 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1096:3315 (duration: 01m 03s) [production]
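The depool/repool entries work by editing a host's load weight in wmf-config/db-eqiad.php and syncing the file to the app servers (the "Synchronized wmf-config/db-eqiad.php" lines). A runnable sketch of the mechanics against a temp copy — the array fragment below is an assumed, simplified stand-in for the real replica load layout, not the actual file contents:

```shell
# Hypothetical fragment standing in for the replica load list in
# wmf-config/db-eqiad.php: a positive weight means the replica is pooled.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
'db1096:3315' => 100,
'db1113:3315' => 100,
EOF

# Depool db1096:3315 by zeroing its weight; in production the edited file
# is then synced to all app servers (e.g. with scap sync-file).
sed -i "s/'db1096:3315' => 100,/'db1096:3315' => 0,/" "$cfg"
grep db1096 "$cfg"
```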
10:22 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1113:3315 db1096:3315 (duration: 01m 02s) [production]
10:04 <Amir1> ladsgroup@terbium:~$ foreachwikiindblist medium deleteAutoPatrolLogs.php --sleep 2 --check-old [production]
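foreachwikiindblist runs a MediaWiki maintenance script once per wiki listed in a dblist file (here, the `medium` list). A runnable sketch of that run-per-entry pattern — the wiki names and the echoed command are illustrative stand-ins, not the real dblist contents:

```shell
# Stand-in dblist: one wiki database name per line.
dblist=$(mktemp)
printf '%s\n' dewiki frwiki jawiki > "$dblist"

# Run the script once per listed wiki; in production this step would be
# something like: mwscript deleteAutoPatrolLogs.php --wiki="$wiki" --sleep 2
while read -r wiki; do
  echo "would run deleteAutoPatrolLogs.php on $wiki"
done < "$dblist"
```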
09:53 <volans@tin> Finished deploy [debmonitor/deploy@fe8df6e]: Release v0.1.1 (duration: 00m 33s) [production]
09:53 <volans@tin> Started deploy [debmonitor/deploy@fe8df6e]: Release v0.1.1 [production]
09:37 <marostegui> Stop replication in sync on db1113:3315 and db1096:3315 for data checks [production]
09:35 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1113:3315 db1096:3315 (duration: 01m 03s) [production]
09:30 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1083 with low load (duration: 01m 03s) [production]
08:38 <pnorman@tin> Finished deploy [tilerator/deploy@709ca69] (cleartables): reenable v3view on 2004 (duration: 04m 53s) [production]
08:33 <pnorman@tin> Started deploy [tilerator/deploy@709ca69] (cleartables): reenable v3view on 2004 [production]
08:30 <jynus> reimage db1083 [production]
08:30 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1113:3315 after alter table (duration: 01m 03s) [production]
08:24 <jynus> temporarily reducing s7-codfw-master consistency to alleviate lag (binlog_sync, flush_log) [production]
08:22 <joal@tin> Finished deploy [analytics/refinery@7a72241]: Regular weekly deploy (duration: 12m 31s) [production]
08:20 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1083 (duration: 01m 05s) [production]
08:10 <joal@tin> Started deploy [analytics/refinery@7a72241]: Regular weekly deploy [production]
06:15 <marostegui> Stop MySQL on db2059 to clone db2075 - T190704 [production]
06:15 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2059 (duration: 00m 56s) [production]
05:36 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2092 and db2062 in s1 (duration: 00m 59s) [production]
05:27 <marostegui> Deploy schema change on db1113:3315 - T191316 T192926 T89737 T195193 [production]
05:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1113:3315 for alter table (duration: 00m 57s) [production]
01:34 <pnorman@tin> Finished deploy [tilerator/deploy@709ca69] (cleartables): Redeploy to 2004 to try to reproduce error (duration: 00m 33s) [production]
01:33 <pnorman@tin> Started deploy [tilerator/deploy@709ca69] (cleartables): Redeploy to 2004 to try to reproduce error [production]