2018-04-09
08:29 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1106 after alter table (duration: 00m 58s) [production]
07:56 <marostegui> Remove /var/log/wikidata/rebuildTermSqlIndex.log* as per Amir1's request [production]
07:48 <moritzm> upgrading mw1276-1279 (API canaries) to ICU 57-enabled HHVM build (T189295) [production]
07:42 <_joe_> repooling mw1300 now with ICU 57-enabled HHVM build (T189295) [production]
07:38 <_joe_> upgrading mw1300 to ICU 57-enabled HHVM build (T189295) [production]
07:32 <moritzm> upgrading mw1262-1265 to ICU 57-enabled HHVM build (T189295) [production]
07:24 <moritzm> repooling mw1261 after upgrade to ICU 57-enabled HHVM build (T189295) [production]
07:17 <moritzm> upgrading mw1261 to ICU 57-enabled HHVM build (T189295) [production]
07:09 <elukey> upgrade burrow to 1.0 on kafkamon[12]* - T188719 [production]
06:57 <Amir1> start of ladsgroup@terbium:~$ mwscript deleteAutoPatrolLogs.php --wiki=zhwiktionary --check-old --before 20180223210426 --sleep 2 (T184485) [production]
06:43 <marostegui> Reboot db2072 for kernel upgrade [production]
06:41 <marostegui> Stop MySQL on db2072 to clone db2092 from it - T170662 [production]
06:38 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2072 - T170662 (duration: 00m 59s) [production]
06:24 <elukey> upgrade burrow 1.0.0 to stretch/jessie wikimedia [production]
06:21 <marostegui> Reboot db2092 for mariadb and kernel upgrade [production]
06:04 <marostegui@tin> Synchronized wmf-config/db-codfw.php: db2079 is now s8 candidate master (duration: 00m 59s) [production]
05:54 <marostegui> Stop MySQL on db2079 to change its binlog format [production]
05:34 <marostegui> Deploy schema change on db1106 with replication enabled (this will generate lag on labs replicas) - T187089 T185128 T153182 [production]
05:34 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 for alter table (duration: 01m 00s) [production]
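(Note on the db1106 entries above: the 05:34 depool and the 08:29 repool are weight changes in the synchronized config file. The sketch below is illustrative only; the array is loosely modeled on the sectionLoads-style host/weight maps in wmf-config/db-eqiad.php, and the section label, peer hosts, and weights are assumptions, not the real values.)

```php
<?php
// Illustrative sketch only, not the real wmf-config contents: the general
// shape of a per-section replica weight map. The section label 'sX', the
// placeholder peer hosts, and all weights are assumptions.
$sectionLoads = [
    'sX' => [
        'db10AA' => 0,   // master: weight 0, not used for general read traffic
        'db10BB' => 200,
        'db1106' => 200, // depool: drop this line (or set its weight to 0),
                         // then synchronize the file; re-adding it repools the host
    ],
];
```

Once the alter table finishes on the depooled host, the entry is restored and the file synchronized again, which is what the 08:29 "Repool db1106" line records.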
02:36 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.28) (duration: 05m 57s) [production]
2018-04-06
22:35 <mutante> rsyncing bugzilla-static raw html from eqiad to codfw VM [production]
19:59 <herron> moved rhodium:/var/lib/git/operations/puppet away and triggered puppet agent run to re-create [production]
19:43 <ottomata> running puppet-merge on rhodium after clash between puppet-merge and new patch submitted [production]
19:23 <demon@tin> Finished scap: Forcing full scap. Mostly no-op, consistency, paranoia, that sort of thing (duration: 11m 51s) [production]
19:13 <bd808> wiki replicas: ran maintain-views --database mediawikiwiki --clean on labsdb10{09,10,11} for T191387 [production]
19:11 <demon@tin> Started scap: Forcing full scap. Mostly no-op, consistency, paranoia, that sort of thing [production]
19:02 <demon@tin> scap aborted: Forcing full scap, removed clean plugin updates (duration: 11m 03s) [production]
19:00 <herron> depooled rhodium (puppet master backend) again https://gerrit.wikimedia.org/r/#/c/424646/ [production]
18:51 <demon@tin> Started scap: Forcing full scap, removed clean plugin updates [production]
18:49 <demon@tin> scap failed: average error rate on 5/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/2cc7028226a539553178454fc2f14459 for details) [production]
18:47 <demon@tin> Pruned MediaWiki: 1.31.0-wmf.26 [keeping static files] (duration: 01m 51s) [production]
14:37 <herron> repooled rhodium (puppet master backend) [production]
14:08 <herron> upgraded apache on fermium for security updates [production]
14:07 <anomie> Running populateArchiveRevId.php for group2 for T191307 [production]
14:03 <herron> apache updated on puppet masters, re-enabling puppet agents [production]
13:55 <herron> temporarily disabling puppet agents for apache security update on puppet masters [production]
13:14 <moritzm> installing apache security updates on thorium (running several analytics web services) [production]
12:38 <moritzm> installing apache security updates on the Kibana nodes of the logstash cluster [production]
11:50 <Amir1> start of ladsgroup@terbium:~$ mwscript deleteAutoPatrolLogs.php --wiki=fawiki --before 20180223210426 --sleep 2 (T184485) [production]
10:21 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Restore original weight for db1114 (duration: 01m 00s) [production]
09:45 <moritzm> installing apache security updates on graphite hosts [production]
09:39 <marostegui> Deploy test alter table on db2038 to test osc_host.py in core [production]
09:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1114 (duration: 00m 59s) [production]
09:24 <moritzm> installing apache security updates on planet1001/planet.wikimedia.org [production]
09:00 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1114 (duration: 00m 59s) [production]
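(Note on the db1114 entries above: the 09:00 and 09:26 "Increase traffic" syncs followed by the 10:21 "Restore original weight" sync look like a staged ramp-up of the replica's read weight, typically done in steps so a host does not take its full query load all at once. The sketch below is illustrative only; the section label, peer hosts, and all numbers are hypothetical.)

```php
<?php
// Illustrative sketch only: a staged weight change for db1114 in a
// sectionLoads-style map. Section label, peer hosts, and numbers are
// hypothetical, not the real wmf-config values.
$sectionLoads = [
    'sX' => [
        'db10AA' => 0,      // master
        'db10BB' => 300,
        'db1114' => 50,     // 09:00 sync: "Increase traffic for db1114"
        // 'db1114' => 100, // 09:26 sync: second increase
        // 'db1114' => 300, // 10:21 sync: "Restore original weight for db1114"
    ],
];
```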
08:57 <no_justification> gerrit: restarting services to pick up openjdk updates [production]
08:50 <moritzm> installing apache security updates on prometheus hosts [production]