2020-05-14

11:07 <matthiasmullie> EU swat done [production]
11:05 <mlitn@deploy1001> Synchronized php-1.35.0-wmf.32/extensions/WikibaseMediaInfo/: [MediaInfo] Enable media search for all users by default (duration: 01m 12s) [production]
11:04 <vgutierrez> upgrade ats to version 8.0.7-1wm7 on cp3064 [production]
10:31 <fdans@deploy1001> Finished deploy [analytics/refinery@6f13979]: Regular analytics weekly train (duration: 17m 14s) [production]
10:14 <fdans@deploy1001> Started deploy [analytics/refinery@6f13979]: Regular analytics weekly train [production]
09:58 <elukey> remove matomo 3.11 from the main component of stretch-wikimedia [production]
09:56 <elukey> upgrade matomo on matomo1001 to 3.13.3 (latest upstream) - T252741 [production]
09:30 <jayme@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'zotero' for release 'production'. [production]
09:29 <elukey> upload matomo-3.13.3 to thirdparty/matomo on stretch|buster-wikimedia [production]
09:22 <jayme@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'zotero' for release 'production'. [production]
08:57 <elukey> imported gpg key 1FD752571FE36FF23F78F91B81E2E78B66FED89E in apt1001 (Matomo public debian repo) [production]
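The Matomo apt entries above (the thirdparty/matomo upload and the GPG key import on apt1001) record refreshing a third-party package on the apt server. As a rough, hedged illustration only: a .deb can be placed into a reprepro component along the lines below. The wrapper actually used on apt1001, the import mechanism, and the .deb filename are assumptions; only the distribution and component names come from the log.

    # Hypothetical sketch: import a locally downloaded matomo .deb into the
    # thirdparty/matomo component for both distributions named in the log.
    import subprocess

    for distro in ("stretch-wikimedia", "buster-wikimedia"):
        subprocess.run(
            ["sudo", "reprepro", "-C", "thirdparty/matomo",
             "includedeb", distro, "matomo_3.13.3-1_all.deb"],  # filename assumed
            check=True,
        )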
08:56 <moritzm> installing Java security updates on Presto [production]
08:43 <jayme> updated helm: 2.12.2-1 -> 2.16.7-1 on deploy[1,2]001 and contint1001. 2.12.2-4 -> 2.16.7-1 on contint2001 [production]
08:39 <jayme> imported helm 2.16.7-1 to main for jessie-wikimedia [production]
08:32 <moritzm> installing Java security updates on Hadoop/AQS/Druid [production]
08:20 <jayme@deploy2001> helmfile [STAGING] Ran 'sync' command on namespace 'zotero' for release 'staging'. [production]
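The three helmfile entries above (STAGING, then CODFW and EQIAD, all for the 'zotero' namespace) record the usual staging-then-production rollout of a service chart. As a minimal sketch only, assuming a helmfile.d layout and environment names that the log itself does not show (only the namespace and release names come from the log), such a rollout could be driven like this:

    # Sketch of running `helmfile sync` per environment for one service.
    # The helmfile.d path and environment names are assumptions for illustration.
    import subprocess

    def helmfile_sync(environment: str, service: str) -> None:
        """Run `helmfile sync` for one environment of one service."""
        subprocess.run(
            ["helmfile", "-e", environment, "sync"],
            cwd=f"/srv/deployment-charts/helmfile.d/services/{service}",  # assumed path
            check=True,
        )

    for env in ("staging", "codfw", "eqiad"):
        helmfile_sync(env, "zotero")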
08:00 <vgutierrez> upgrade ats to version 8.0.7-1wm7 on cp5011 [production]
07:03 <moritzm> installing apt security updates [production]
06:33 <ryankemper> Pooled wdqs2005 following successful test queries [production]
04:46 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
04:02 <ryankemper@cumin2001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
02:59 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
02:59 <ryankemper> wdqs1005 has been de-pooled pending wdqs data xfer [production]
02:57 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-transfer [production]
02:57 <ryankemper> wdqs1004 was repooled after successful test queries [production]
02:55 <ryankemper> wdqs2006 was repooled after successful test queries [production]
01:32 <ryankemper> depooled wdqs2006 while waiting for lag to recover [production]
00:54 <foks> change password for "Python eggs" [production]
00:37 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
00:31 <ryankemper@cumin2001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
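The wdqs entries above follow one repeating cycle: depool a host, run the sre.wdqs.data-transfer cookbook against it, wait for update lag to recover and test queries to pass, then repool. A minimal sketch of that cycle, assuming conftool-style pooling and illustrative cookbook arguments (the log does not show how the cookbook was invoked), might look like:

    # Hypothetical sketch of the depool -> transfer -> verify -> repool cycle.
    # Hostnames are taken from the log; the confctl selector and the cookbook's
    # argument names are assumptions for illustration only.
    import subprocess

    def set_pooled(host: str, pooled: bool) -> None:
        """Pool or depool a host via conftool."""
        state = "yes" if pooled else "no"
        subprocess.run(
            ["sudo", "confctl", "select", f"name={host}", f"set/pooled={state}"],
            check=True,
        )

    def transfer_data(source: str, dest: str) -> None:
        """Copy query-service data from a healthy host to the depooled one."""
        subprocess.run(
            ["sudo", "cookbook", "sre.wdqs.data-transfer",
             "--source", source, "--dest", dest],  # argument names assumed
            check=True,
        )

    set_pooled("wdqs1005.eqiad.wmnet", False)                     # depool pending xfer
    transfer_data("wdqs1004.eqiad.wmnet", "wdqs1005.eqiad.wmnet")
    # ...wait for update lag to recover and run test queries...
    set_pooled("wdqs1005.eqiad.wmnet", True)                      # repool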
00:08 <twentyafterfour> phabricator update appears to be stable. [production]
00:05 <twentyafterfour> updating phabricator. 1 patch + new translations. Expect only brief downtime. [production]

2020-05-13

23:46 <cstone> SmashPig revision changed from cd1a49da5f to 2702b04329 [production]
23:43 <ejegg> updated payments-wiki from dabba1804c to 3c465cb11c [production]
23:36 <ejegg> rolled back payments-wiki to dabba1804c [production]
23:29 <ejegg> updated payments-wiki from dabba1804c to 3c465cb11c [production]
22:40 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
22:39 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-transfer [production]
22:36 <ryankemper> Depooled wdqs1004 for subsequent wdqs data xfer [production]
22:29 <ryankemper> Pooled wdqs2005 given that lag has returned to normal levels and the instance is responding to queries correctly [production]
22:26 <ryankemper> Pooled wdqs1008 given that lag has returned to normal levels and the instance is responding to queries correctly [production]
21:30 <elukey> powercycle analytics1055 [production]
21:05 <eileen> civicrm revision changed from cfb6101e39 to ed4c9522ac, config revision is 2eb75f8dff [production]
20:16 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: T242430 Stop loading the ParsoidBatchAPI extension (duration: 01m 08s) [production]
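Entries such as the CommonSettings.php sync above are emitted by scap runs on the deployment host. As a hedged example, syncing a single config file is commonly done with scap's sync-file subcommand; the working directory below is an assumption, and the log message is taken from the entry above:

    # Sketch of syncing one MediaWiki config file with scap from the deploy host.
    import subprocess

    subprocess.run(
        ["scap", "sync-file", "wmf-config/CommonSettings.php",
         "T242430 Stop loading the ParsoidBatchAPI extension"],
        cwd="/srv/mediawiki-staging",  # assumed working directory
        check=True,
    )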
19:09 <hashar@deploy1001> Synchronized php: group1 wikis to 1.35.0-wmf.32 (duration: 01m 05s) [production]
19:08 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.35.0-wmf.32 [production]
18:54 <twentyafterfour> restarted php-fpm on phab1001 [production]
18:53 <thcipriani> restarting gerrit [production]
18:52 <twentyafterfour> restarting apache on phab1001 for lack of a better idea [production]
18:50 <herron> restarted kafka broker on kafka-main1001 for java security updates [production]