2019-03-04
15:55 <jijiki> Running puppet on sbc* and kubernetes* - T213194 [production]
15:44 <jijiki> Disabling puppet on sbc* and kubernetes* - T213194 [production]
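The two puppet entries above are the usual disable / maintain / re-enable cycle. A minimal single-host sketch, assuming the stock puppet agent CLI; the logged change covered sbc* and kubernetes* fleet-wide, presumably fanned out with a tool such as cumin:

    # disable the agent while the maintenance happens, then re-enable and force a run
    sudo puppet agent --disable "maintenance - T213194"
    sudo puppet agent --enable
    sudo puppet agent --test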
15:22 <otto@deploy1001> Synchronized wmf-config/CommonSettings.php: no-op: Remove unused legacy EventBus config settings (duration: 00m 49s) [production]
15:11 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1097:3314 after changing index on logging table (duration: 00m 51s) [production]
14:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1089 and db1100 after changing index on logging table (duration: 00m 49s) [production]
14:20 <elukey> update puppet compiler's facts [production]
14:20 <marostegui> Change indexes on logging table on db1100 (s5) and db1097:3314 (commonswiki) - T217397 [production]
14:06 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1097:3314, db1100 to change indexes on logging table (duration: 00m 50s) [production]
13:57 <gehel> restarting blazegraph on wdqs eqiad [production]
12:23 <moritzm> testing component/php72 on mw2224 [production]
11:04 <akosiaris@deploy1001> scap-helm citoid finished [production]
11:04 <akosiaris@deploy1001> scap-helm citoid cluster codfw completed [production]
11:04 <akosiaris@deploy1001> scap-helm citoid upgrade -f citoid-codfw-values.yaml production stable/citoid [namespace: citoid, clusters: codfw] [production]
11:04 <akosiaris@deploy1001> scap-helm citoid cluster eqiad completed [production]
11:04 <akosiaris@deploy1001> scap-helm citoid finished [production]
11:04 <akosiaris@deploy1001> scap-helm citoid upgrade -f citoid-eqiad-values.yaml production stable/citoid [namespace: citoid, clusters: eqiad] [production]
11:04 <akosiaris@deploy1001> scap-helm citoid finished [production]
11:04 <akosiaris@deploy1001> scap-helm citoid cluster staging completed [production]
11:04 <akosiaris@deploy1001> scap-helm citoid upgrade -f citoid-staging-values.yaml staging stable/citoid [namespace: citoid, clusters: staging] [production]
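The nine 11:04 entries above are a single citoid release rolled out through staging and then the eqiad and codfw production clusters, each with its own values file. Roughly the Helm 2 command the scap-helm wrapper issues per cluster; release name, chart and values file are taken from the logged invocation, while any extra flags the wrapper adds are an assumption:

    helm upgrade production stable/citoid -f citoid-codfw-values.yaml --namespace citoid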
10:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More weight to db1089 (duration: 00m 48s) [production]
10:38 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:494191| Bumping portals to master (T128546)]] (duration: 00m 50s) [production]
10:37 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:494191| Bumping portals to master (T128546)]] (duration: 00m 50s) [production]
09:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1089 with low weight (duration: 00m 48s) [production]
09:27 <ariel@deploy1001> Finished deploy [dumps/dumps@932bf7e]: make misc dumps failure message nicer (duration: 00m 09s) [production]
09:27 <ariel@deploy1001> Started deploy [dumps/dumps@932bf7e]: make misc dumps failure message nicer [production]
09:22 <godog> temporarily stop prometheus on prometheus2004 to take a snapshot [production]
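Stopping Prometheus before snapshotting keeps the on-disk TSDB consistent while it is copied. A sketch, assuming a per-instance systemd unit as used on WMF Prometheus hosts; the instance name "ops" is an assumption, not in the log:

    # on prometheus2004: stop the instance, copy its storage directory, then restart it
    sudo systemctl stop prometheus@ops
    # ... take the snapshot ...
    sudo systemctl start prometheus@ops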
08:45 <gilles@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T216499 Undo enabling Priority Hints origin trial on ruwiki (duration: 00m 49s) [production]
08:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1103:3314 (duration: 00m 49s) [production]
08:38 <gilles@deploy1001> scap failed: average error rate on 7/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/db09a36be5ed3e81155041f7d46ad040 for details) [production]
08:29 <marostegui> Change logging indexes on db1089 to leave the indexes exactly like the ones on tables.sql - T217397 [production]
08:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1089 - T217397 (duration: 00m 49s) [production]
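Read bottom-up, the T217397 entries form one depool / schema change / repool cycle on db1089: depool at 08:14, index changes at 08:29, repool with low weight at 09:44 and full weight at 10:53. A sketch of the deploy-side steps, assuming scap's sync-file subcommand; the actual ALTER statements live in T217397 and are left as a placeholder here:

    # on deploy1001: push the depool (a weight change edited into wmf-config/db-eqiad.php)
    scap sync-file wmf-config/db-eqiad.php 'Depool db1089 - T217397'
    # on the replica: apply the index change (exact statement per T217397, not shown)
    sudo mysql <wiki_db> -e 'ALTER TABLE logging ...'
    # repool at low weight, watch error rates, then restore full weight via further syncs
    scap sync-file wmf-config/db-eqiad.php 'Repool db1089 with low weight'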
07:48 <ema> cp3032/cp3042: restart varnish-be due to mbox lag [production]
07:42 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1103:3314 for schema change (duration: 00m 49s) [production]
07:39 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1097:3314 (duration: 00m 53s) [production]
07:33 <marostegui> Reload haproxy on dbproxy1010 to repool labsdb1010 [production]
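The labsdb1010 depool (06:05 below) and repool here happen at the proxy layer: the backend set in dbproxy1010's haproxy configuration is changed (that edit is presumably puppet-managed and not part of this log) and haproxy is reloaded so existing connections drain gracefully. A minimal sketch, assuming a standard systemd-managed haproxy:

    # on dbproxy1010, once the backend change is in place
    sudo systemctl reload haproxy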
07:17 <kart_> Finished manual run of unpublished ContentTranslation draft purge script (T217310) [production]
07:13 <marostegui> Remove dbstore1002 from tendril and zarcillo - T216491 [production]
07:05 <marostegui> Upgrade MySQL on db2088 and db2091 [production]
06:46 <marostegui> Stop MySQL on dbstore1002 for decommission T210478 T172410 T216491 T215589 [production]
06:38 <marostegui> Stop MySQL on labsdb1010 for mysql upgrade [production]
06:34 <gtirloni> downtimed cloudstore1008/9 (T209527) [production]
06:13 <marostegui> Upgrade MySQL on db2041 db2049 db2056 db2095 [production]
06:06 <marostegui> Run analyze table logging on db2038 and db2059 - T71222 [production]
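ANALYZE TABLE refreshes the optimizer's index statistics for the logging table on those two codfw replicas; the target database on db2038/db2059 is not named in the log, so a placeholder stands in below:

    sudo mysql <wiki_db> -e 'ANALYZE TABLE logging;'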
06:05 <marostegui> Reload haproxy on dbproxy1010 to depool labsdb1010 [production]
06:04 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1094:3314 for schema change (duration: 01m 11s) [production]
05:18 <kart_> Started manual run of unpublished ContentTranslation draft purge script (T217310) [production]