2018-06-04
12:18 |
<krinkle@deploy1001> |
Started deploy [performance/navtiming@b229f75]: (no justification provided) |
[production] |
11:38 |
<akosiaris> |
rebalance row_A, row_C nodegroups in ganeti01.svc.eqiad.wmnet cluster |
[production] |
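Rebalancing Ganeti node groups is normally done with the htools balancer from the cluster master; a minimal sketch, assuming hbal over the Luxi backend (flags illustrative, not the exact invocation used):
hbal -L -G row_A -X   # compute a balanced layout for the row_A node group and submit the migration jobs
hbal -L -G row_C -X   # same for row_C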
10:51 |
<akosiaris> |
reimage ganeti1004, ganeti1008 to stretch |
[production] |
10:39 |
<_joe_> |
rolling restart of apache on the jobrunners to pick up the changed PrivateTmp setting, rotating logs |
[production] |
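A rolling restart like this is typically driven from the cluster management host with cumin; a sketch under assumed batching (the host query and batch size are illustrative, not the exact command used):
sudo cumin -b 1 -s 30 'R:Class = role::mediawiki::jobrunner' 'systemctl restart apache2'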
10:14 |
<jdrewniak@deploy1001> |
Synchronized portals: Wikimedia Portals Update: [[gerrit:437209|Bumping portals to master (T128546)]] (duration: 00m 49s) |
[production] |
10:13 |
<jdrewniak@deploy1001> |
Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:437209|Bumping portals to master (T128546)]] (duration: 00m 51s) |
[production] |
09:39 |
<marostegui> |
Reload haproxy on dbproxy1010 to depool labsdb1010 - https://phabricator.wikimedia.org/T190704 |
[production] |
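On the dbproxy hosts the backend list lives in the puppet-managed haproxy configuration, so depooling a backend amounts to removing it there and reloading; a hedged sketch (config layout assumed):
haproxy -c -f /etc/haproxy/haproxy.cfg   # sanity-check the new config first
systemctl reload haproxy                 # graceful reload; in-flight connections finish on the old process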
09:34 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Repool db1097:3315 after alter table (duration: 00m 49s) |
[production] |
09:04 |
<addshore> |
addshore@terbium:~$ for i in {1..2500}; do echo Lexeme:L$i; done | mwscript purgePage.php --wiki wikidatawiki |
[production] |
08:56 |
<marostegui> |
Deploy schema change on db1097:3315 - T191316 T192926 T89737 T195193 |
[production] |
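The usual pattern behind these entries is: depool the replica (the db-eqiad.php sync below), run the ALTER directly on the instance, then repool it. A hedged sketch for the s5 instance on port 3315 (socket path and DDL are illustrative only; the real DDL is tracked in the linked tasks):
mysql -S /run/mysqld/mysqld.s5.sock dewiki \
  -e "SET SESSION sql_log_bin=0; ALTER TABLE revision ADD INDEX rev_example (rev_timestamp);"
# sql_log_bin=0 keeps the change out of this instance's binlog so it does not propagate further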
08:56 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Depool db1097:3315 for alter table (duration: 00m 49s) |
[production] |
08:53 |
<jynus@deploy1001> |
Synchronized wmf-config/db-codfw.php: Depool pc2005 (duration: 00m 50s) |
[production] |
08:10 |
<jynus> |
restarting icinga due to ongoing check/downtime issues |
[production] |
07:57 |
<marostegui> |
Stop replication on db2094:3315 for testing |
[production] |
07:29 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Repool db1082 after alter table (duration: 00m 51s) |
[production] |
07:11 |
<gehel> |
starting elasticsearch cluster restart on eqiad - T193734 |
[production] |
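Rolling Elasticsearch restarts usually disable shard allocation around each node to avoid unnecessary shard shuffling; a minimal per-node sketch (host/port illustrative):
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
systemctl restart elasticsearch
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'
# wait for _cluster/health to report green before moving on to the next node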
06:18 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-codfw.php: Repool db2059, db2075 - T190704 (duration: 00m 49s) |
[production] |
06:05 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Repool db1121 - T190704 (duration: 00m 49s) |
[production] |
05:52 |
<marostegui> |
Stop replication in sync on db1121 and db2051 - T190704 |
[production] |
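"Stop replication in sync" means halting two replicas at the same logical position so their data can be compared; for two replicas of the same master, one manual way (WMF tooling aside; file/position values are illustrative) is:
# on db1121: stop replication and record where it stopped
mysql -e "STOP SLAVE; SHOW SLAVE STATUS\G" | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'
# on db2051: replicate up to exactly that point, then stop there
mysql -e "STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE='db1070-bin.002345', MASTER_LOG_POS=123456;"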
05:50 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Depool db1121 - T190704 (duration: 00m 49s) |
[production] |
05:29 |
<marostegui> |
Deploy schema change on db1082 with replication (this will generate lag on labs for s5) - T191316 T192926 T89737 T195193 |
[production] |
05:24 |
<marostegui@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Depool db1082 for alter table (duration: 00m 53s) |
[production] |
02:53 |
<l10nupdate@deploy1001> |
ResourceLoader cache refresh completed at Mon Jun 4 02:53:16 UTC 2018 (duration 10m 14s) |
[production] |
02:43 |
<l10nupdate@deploy1001> |
scap sync-l10n completed (1.32.0-wmf.6) (duration: 14m 33s) |
[production] |
2018-06-01
19:21 |
<ebernhar1son> |
enable query phase slow logging and increase thresholds for fetch phase slow logging for content/general indices on eqiad and codfw elasticsearch clusters |
[production] |
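Search slow logging in Elasticsearch is controlled per index through dynamic settings; a hedged sketch of the kind of change described (index name and thresholds are illustrative):
curl -XPUT 'localhost:9200/enwiki_content/_settings' -H 'Content-Type: application/json' \
  -d '{"index.search.slowlog.threshold.query.warn": "10s", "index.search.slowlog.threshold.fetch.warn": "5s"}'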
19:14 |
<mutante> |
zh.planet - fixed an issue with a corrupt state file and permissions; also updated it, and it is now using the new design |
[production] |
17:34 |
<mutante> |
deployment.eqiad/codfw DNS names switched from tin to deploy1001 |
[production] |
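Assuming deployment.eqiad.wmnet and deployment.codfw.wmnet are plain CNAMEs in the operations/dns zone templates, the switch can be verified with:
dig +short deployment.eqiad.wmnet CNAME   # should now return deploy1001.eqiad.wmnet.
dig +short deployment.codfw.wmnet CNAME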
17:06 |
<thcipriani@deploy1001> |
Synchronized README: noop test of new deployment server (duration: 00m 53s) |
[production] |
16:39 |
<mutante> |
deploy2001 - also fixing file permissions. files owned by 996 -> mwdeploy, files owned by 997 -> trebuchet |
[production] |
16:21 |
<mutante> |
deployment server switched from tin to deploy1001: set the global scap lock on deploy1001, re-enabled and ran puppet, and disabled tin as deployment server (T175288) |
[production] |
16:13 |
<herron> |
enabled new logstash tcp input with TLS enabled for syslogs on port 16514 T193766 |
[production] |
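In logstash terms this is a tcp input with TLS enabled; a minimal sketch (certificate paths are placeholders, option names as in the logstash tcp input plugin of that era):
input {
  tcp {
    port       => 16514
    type       => "syslog"
    ssl_enable => true
    ssl_cert   => "/etc/logstash/ssl/cert.pem"
    ssl_key    => "/etc/logstash/ssl/key.pem"
  }
}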
15:51 |
<gehel> |
elasticsearch cluster restart on codfw completed - T193734 |
[production] |
15:47 |
<mutante> |
root@deploy1001:/srv/deployment# find . -uid 997 -exec chown trebuchet {} \; |
[production] |
15:41 |
<mutante> |
root@deploy1001:/srv/mediawiki-staging# find . -uid 996 -exec chown mwdeploy {} \; |
[production] |
15:17 |
<mutante> |
[deploy1001:~] $ scap pull-master tin.eqiad.wmnet |
[production] |
15:12 |
<mutante> |
on tin: umask 022 && echo 'switching deploy servers' > /var/lock/scap-global-lock |
[production] |
15:05 |
<mutante> |
rsyncing /srv/mediawiki-staging to /srv/mediawiki-staging-before-backup/ on tin as a backup |
[production] |
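A sketch of the backup step described above (flags assumed; -a preserves permissions, ownership and timestamps):
rsync -a /srv/mediawiki-staging/ /srv/mediawiki-staging-before-backup/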
14:52 |
<mutante> |
deploy1001 - scap pull |
[production] |
14:30 |
<elukey> |
killed pt-heartbeat-wikimedia after https://gerrit.wikimedia.org/r/436748 on db1107 |
[production] |
14:24 |
<jynus@tin> |
Synchronized wmf-config/db-eqiad.php: Repool db1083 fully (duration: 01m 02s) |
[production] |
14:08 |
<reedy@tin> |
Synchronized php-1.32.0-wmf.6/extensions/FlaggedRevs: T196139 (duration: 01m 08s) |
[production] |
13:32 |
<marostegui> |
Deploy schema change on dbstore1002:s5 - T191316 T192926 T89737 T195193 |
[production] |
13:26 |
<marostegui@tin> |
Synchronized wmf-config/db-eqiad.php: Repool db1096:3315 (duration: 01m 03s) |
[production] |
10:59 |
<_joe_> |
disabling puppet on all hosts with role::mediawiki::common while installing mcrouter everywhere |
[production] |
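Fleet-wide puppet disables like this are normally issued through cumin; a sketch (query and disable message are illustrative):
sudo cumin 'R:Class = role::mediawiki::common' "puppet agent --disable 'installing mcrouter'"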
10:35 |
<marostegui> |
Deploy schema change on db1096:3315 - T191316 T192926 T89737 T195193 |
[production] |
10:34 |
<marostegui@tin> |
Synchronized wmf-config/db-eqiad.php: Depool db1096:3315 (duration: 01m 03s) |
[production] |
10:22 |
<marostegui@tin> |
Synchronized wmf-config/db-eqiad.php: Repool db1113:3315 db1096:3315 (duration: 01m 02s) |
[production] |
10:04 |
<Amir1> |
ladsgroup@terbium:~$ foreachwikiindblist medium deleteAutoPatrolLogs.php --sleep 2 --check-old |
[production] |