2019-06-13
23:25 <SMalyshev> depooled wdqs1006 to let it catch up quicker [production]
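For context, depooling a WDQS host such as wdqs1006 is normally done through conftool; the selector below is an illustrative assumption, not a record of the exact command run:

    # mark the host as not pooled behind the load balancer (selector is an assumption)
    sudo -i confctl select 'name=wdqs1006.eqiad.wmnet' set/pooled=no
    # or, on the host itself, the conftool wrapper
    sudo depool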
18:10 <fdans@deploy1001> Finished deploy [analytics/refinery@67b34fe]: retrying deployment of analytics refinery (duration: 00m 19s) [production]
18:10 <fdans@deploy1001> Started deploy [analytics/refinery@67b34fe]: retrying deployment of analytics refinery [production]
18:01 <fdans@deploy1001> Finished deploy [analytics/refinery@67b34fe]: deploying refinery source 0.0.92 into refinery (duration: 16m 45s) [production]
17:44 <fdans@deploy1001> Started deploy [analytics/refinery@67b34fe]: deploying refinery source 0.0.92 into refinery [production]
17:34 <bstorm_> T203254 set cpu scaling governor to performance on labstore1004 and labstore1005 [production]
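A sketch of how a scaling-governor change like the one above is typically applied on a Linux host (the hosts come from the entry; the exact commands used are an assumption):

    # via cpupower, for all CPUs
    sudo cpupower frequency-set -g performance
    # or directly through sysfs
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor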
16:02 <gehel> restart blazegraph on wdqs public cluster completed [production]
15:58 <gehel> restart blazegraph on wdqs public cluster [production]
15:36 <gehel> restarting blazegraph on wdqs-internal / eqiad (just in case) [production]
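Restarting Blazegraph on a WDQS host is normally a systemd unit restart, run per host (often fanned out with cumin); the unit name below is an assumption based on the usual WDQS setup:

    # assumed unit name on wdqs hosts
    sudo systemctl restart wdqs-blazegraph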
08:09 <jynus> reloading proxies for wikireplicas to rebalance load [production]
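The wikireplicas proxies sit in front of the labsdb replica databases; picking up rebalanced backend weights would look roughly like the sketch below (the haproxy assumption and the dbproxy host role are illustrative, not taken from the entry):

    # assuming the proxy software is haproxy on the dbproxy hosts
    sudo systemctl reload haproxy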
07:00 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1077 after recovering from a crash (duration: 00m 50s) [production]
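The "Synchronized wmf-config/db-eqiad.php" entries in this log are emitted automatically by scap when a single file is synced to the cluster; a hand-typed equivalent would look like this sketch (assuming scap sync-file with the log message as the commit note):

    # sync one config file; the message becomes the SAL entry above
    scap sync-file wmf-config/db-eqiad.php 'More traffic to db1077 after recovering from a crash'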
00:45 <paravoid> setting the CPU governor to performance for ms-be1036 (a while ago) [production]
2019-06-12
18:15 <krinkle@deploy1001> Synchronized php-1.34.0-wmf.8/thumb.php: T225197 / 06b631fae5 (duration: 00m 47s) [production]
18:13 <krinkle@deploy1001> Synchronized php-1.34.0-wmf.8/extensions/ArticlePlaceholder/includes/: T207235 / a42aa1599a131c55304 (duration: 00m 49s) [production]
16:06 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-restart (exit_code=0) [production]
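The START/END cookbook lines in this log are written automatically by Spicerack cookbooks run from the cumin hosts; an invocation would look roughly like the sketch below (any arguments the rolling-restart cookbook takes are omitted here and would be an assumption):

    # run from a cumin host; cookbook arguments omitted
    sudo cookbook sre.elasticsearch.rolling-restart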
15:49 <gehel@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
15:37 <legoktm> re-enabled bawolff's gerrit account [production]
15:14 <gehel@cumin1001> END (ERROR) - Cookbook sre.elasticsearch.rolling-restart (exit_code=97) [production]
14:38 <marostegui> Start replication on all threads on labsdb1010 - T222978 [production]
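labsdb1010 is a multi-source MariaDB replica, so "all threads" refers to every configured replication connection; a sketch of the equivalent statement, assuming it was run as root on the host:

    # start every configured replication connection on a multi-source MariaDB replica
    sudo mysql -e "START ALL SLAVES;"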
14:35 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1077 after recovering from a crash (duration: 00m 47s) [production]
13:19 <gehel@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
11:55 <godog> swift eqiad-prod: put back ms-be1033 - T223518 [production]
10:52 <godog> force-upgrade mtail to 3.0.0~rc24.1-1 on wezen - T225604 [production]
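Force-upgrading a single package to a pinned version is usually done through apt; the sketch below assumes that exact mtail version is available from the configured repositories on wezen:

    # upgrade only mtail, pinning the version named in the log entry
    sudo apt-get install --only-upgrade mtail=3.0.0~rc24.1-1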
10:36 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1077 after recovering from a crash (duration: 00m 47s) [production]
10:18 <akosiaris@deploy1001> scap-helm zotero finished [production]
10:18 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
10:17 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
10:17 <akosiaris@deploy1001> scap-helm zotero upgrade --dry-run --debug production stable/zotero [namespace: zotero, clusters: eqiad,codfw] [production]
10:01 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1077 after a crash (duration: 00m 48s) [production]
09:51 <gehel@cumin2001> END (PASS) - Cookbook sre.elasticsearch.rolling-restart (exit_code=0) [production]
08:59 <hashar> Gracefully stopping Zuul (kill -SIGUSR1) to prepare for the restart of the CI Jenkins T225322 [production]
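With Zuul v2, SIGUSR1 asks the scheduler to stop accepting new events and exit once running jobs finish, which is the graceful stop the entry above describes; a sketch of sending it (the process name matched by pgrep is an assumption):

    # signal the zuul scheduler process to wind down gracefully
    sudo kill -SIGUSR1 "$(pgrep -f zuul-server)"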
08:41 <onimisionipe> pool map2003. reimage and setup is complete - T224395 [production]
08:31 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-restart [production]
06:49 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1077 after a crash (duration: 00m 49s) [production]