2020-03-03
ยง
|
11:08 <addshore> START warm cache for db1111 & db1126 for Q15-20 million T219123 (pass 2) [production]
11:04 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
11:01 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
11:01 <elukey@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-mirror-maker (exit_code=0) [production]
10:54 <arturo> deleted VMs `tools-worker-[1003-1020]` (legacy k8s cluster) (T246689) [tools]
10:51 <arturo> cordoned/drained all legacy k8s worker nodes except 1001/1002 (T246689) [tools]
10:50 <elukey> restarted kafka jumbo (kafka + mirror maker) for openjdk upgrades [analytics]
10:49 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-mirror-maker [production]
10:47 <elukey@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) [production]
10:47 <vgutierrez@cumin2001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
10:46 <vgutierrez> running the decommission cookbook against lvs2002 - T246756 [production]
10:46 <vgutierrez@cumin2001> START - Cookbook sre.hosts.decommission [production]
10:44 <vgutierrez> replace lvs2002 with lvs2008 - T196560 [production]
10:14 <addshore> START warm cache for db1111 & db1126 for Q15-20 million T219123 (pass 1) [production]
10:10 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q15M for the new term store everywhere (was Q12M) + warm db1126 & db1111 caches (T219123) cache bust (duration: 00m 56s) [production]
10:09 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q15M for the new term store everywhere (was Q12M) + warm db1126 & db1111 caches (T219123) (duration: 00m 56s) [production]
10:06 <addshore> END warm cache for db1111 & db1126 for Q12-15 million T219123 (pass 4) [production]
10:05 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
10:03 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:57 <marostegui> es4 deployment window finished [production]
09:49 <wm-bot> <jeanfred> webservice restart for new edition (africa2020) to be displayed (T246696) [tools.wikiloves]
09:44 <wm-bot> <jeanfred> Deploy latest from Git master: f02938a (T246696) [tools.wikiloves]
09:43 <vgutierrez> reimage lvs3005 with buster - T245984 [production]
09:36 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Enable es4 as new writable external store section - T246072 (duration: 00m 56s) [production]
09:35 <addshore> START warm cache for db1111 & db1126 for Q12-15 million T219123 (pass 4) [production]
09:33 <marostegui@deploy1001> sync-file aborted: Enable es4 as new writable external store section - T246072 (duration: 00m 02s) [production]
09:33 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Enable es4 as new writable external store section - T246072 (duration: 00m 56s) [production]
09:32 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Enable es4 as new writable external store section - T246072 (duration: 00m 57s) [production]
09:22 <joal> Rerunning failed mediawiki-history jobs for 2020-02 after mediawiki-history-denormalize issue [analytics]
09:16 <joal> Manually restarting mediawiki-history-denormalize with new patch to try [analytics]
09:09 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Add es4 to the available es sections, not in use yet - T246072 (duration: 00m 57s) [production]
09:07 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Add es4 to the available es sections, not in use yet - T246072 (duration: 00m 57s) [production]
08:53 <addshore> START warm cache for db1111 & db1126 for Q12-15 million T219123 (pass 3) [production]
08:36 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers [production]
08:36 <elukey> roll restart kafka-jumbo for openjdk upgrades [analytics]
08:34 <elukey> re-enable timers on an-coord1001 after maintenance [analytics]
08:30 <joal> Correct previous message: Kill mediawiki-history (not mediawiki-history-reduced) as it is failing [analytics]
08:30 <joal> Kill mediawiki-history-reduced as it is failing [analytics]
08:25 <addshore> addshore@mwmaint1002:~$ time mwscript extensions/Wikibase/repo/maintenance/rebuildItemTerms.php --wiki=wikidatawiki --batch-size=25 --sleep=1 --file=27feb1125-40to50-holes # T219123 [production]
08:22 <elukey> hive metastore/server2 now running without zookeeper settings and without DBTokenStore (in-memory one used instead, the default) [analytics]
08:19 <elukey> restart oozie/hive daemons on an-coord1001 for openjdk upgrades [analytics]
08:13 <addshore> START warm cache for db1111 & db1126 for Q12-15 million T219123 (pass 2) [production]
08:11 <elukey@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) [production]
08:08 <addshore> addshore@mwmaint1002:~$ time mwscript extensions/Wikibase/repo/maintenance/rebuildItemTerms.php --wiki=wikidatawiki --batch-size=25 --sleep=1 --file=27feb1125-30to40-holes # T219123 [production]
08:05 <elukey@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper [production]
07:55 <elukey@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) [production]
07:48 <elukey@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper [production]
07:45 <addshore> START warm cache for db1111 & db1126 for Q12-15 million T219123 (pass 1) [production]
07:41 <vgutierrez> Re-enable BGP in lvs3006 - T245984 [production]
07:39 <elukey@cumin1001> END (PASS) - Cookbook sre.druid.roll-restart-workers (exit_code=0) [production]