2016-05-02
14:52 <moritzm> rolling restart of zookeeper to pick up Java update [production]
14:22 <bblack> starting gdnsd on esams (esams is marked down there) [production]
14:20 <bblack> stopped gdnsd on eeden [production]
13:13 <jynus> stopping db1040 mysql for backup before cloning [production]
12:15 <elukey> deployed Varnish change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [production]
12:13 <elukey> deployed Varnish cache::misc change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [production]
12:12 <elukey> Merged Varnish cache::misc change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage. [production]
11:21 <elukey> deployed the latest version of EventLogging from tin. Service also restarted. [production]
11:06 <moritzm> rolling restart of hhvm in eqiad for pcre security update [production]
10:42 <moritzm> rolling restart of hhvm in codfw for pcre security update [production]
09:58 <moritzm> uploaded openldap 2.4.41+wmf1 for jessie-wikimedia to carbon (T130593) [production]
08:14 <hashar> Restarted stuck Jenkins (due to IRC plugin) [production]
07:44 <moritzm> rebooting hasseleh/hassium for kernel upgrade to 4.4 [production]
07:10 <moritzm> installing poppler security updates [production]
06:46 <_joe_> rebooting serpens from ganeti, unreachable [production]
02:30 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon May 2 02:30:33 UTC 2016 (duration 9m 18s) [production]
02:21 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.22) (duration: 09m 31s) [production]
2016-05-01
19:37 <SMalyshev> enabled wdqs1002, put wdqs1001 in maintenance mode for reload [production]
16:20 <volans> changing live configuration of db1042 thread_pool_stall_limit to 10 to avoid connection timeout errors [production]
16:18 <volans> changing live configuration of db1042 thread_pool_stall_limit back to 100 to test impact on connection timeout [production]
16:08 <volans> changing live configuration of db1042 thread_pool_stall_limit to 10 to test impact on connection timeout [production]
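The three `thread_pool_stall_limit` changes above could be applied with statements along these lines (a hedged sketch in MariaDB syntax; the host db1042 and the values 10 and 100 come from the log, but the exact session the operator ran is not recorded):

```sql
-- Illustrative only: the values 10 and 100 are from the log entries;
-- the actual session is not part of the SAL record.
SET GLOBAL thread_pool_stall_limit = 10;   -- try a lower stall limit
SHOW GLOBAL VARIABLES LIKE 'thread_pool_stall_limit';
SET GLOBAL thread_pool_stall_limit = 100;  -- revert to the previous value
```

`thread_pool_stall_limit` is a dynamic variable, so `SET GLOBAL` takes effect without a server restart, which is why the log describes these as "live configuration" changes.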
15:24 <jynus> alter table puppet.fact_values to a bigint unsigned for m1 T107753 [production]
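The schema change logged at 15:24 would take roughly this form (hedged sketch; only the table `puppet.fact_values` and the target type `BIGINT UNSIGNED` appear in the log entry, and the column name `id` is an assumption, not stated in the log):

```sql
-- Hypothetical reconstruction: the column being widened is assumed to be `id`;
-- the log only records the table and the target type.
ALTER TABLE puppet.fact_values MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
```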
15:07 <volans@tin> Synchronized wmf-config/db-eqiad.php: Depool db1040 for investigation T134114 (duration: 01m 22s) [production]
14:44 <volans> truncated puppet.fact_values table to fix puppet (as documented on wikitech) [production]
10:58 <godog> reboot furud.codfw.wmnet, ganeti instance with increasing load and 100% iowait, kvm/ganeti idle instance bug likely T134098 [production]
2016-04-30
13:41 <elukey> disabled puppet on analytics1047 and scheduled downtime for the host, IO errors in the dmesg for /dev/sdd. Also stopped Hadoop daemons to remove it from the cluster temporarily (not sure how to do it properly, will write docs). [production]
10:45 <volans> Reset slave on sanitarium:3311 due to corrupted relay log after skipping query for duplicate key T132416 [production]
10:19 <volans> restarted slave on dbstore1001 skipping missing database T132837 [production]
08:28 <gehel> restarting elasticsearch server elastic1031.eqiad.wmnet (T110236) [production]
07:15 <gehel> restarting elasticsearch server elastic1030.eqiad.wmnet (T110236) [production]
06:32 <gehel> restarting elasticsearch server elastic1029.eqiad.wmnet (T110236) [production]
06:16 <gehel> restarting elasticsearch server elastic1028.eqiad.wmnet (T110236) [production]
01:15 <aude> applied Ibd302e1 to terbium for debugging broken wikidata rdf dumps [production]
2016-04-29
22:57 <mutante> DNS - forced authdns-gen-zones etc from https://phabricator.wikimedia.org/T97051#1994679 on ns0/ns1/ns2 to get new language added [production]
20:59 <gehel> restarting elasticsearch server elastic1027.eqiad.wmnet (T110236) [production]
19:56 <urandom> (Re)starting cleanup on restbase1009-{a,b}.eqiad.wmnet [production]
19:56 <catrope@tin> Synchronized php-1.27.0-wmf.22/extensions/CentralNotice/: T133971 (duration: 00m 41s) [production]
19:29 <gehel> restarting elasticsearch server elastic1026.eqiad.wmnet (T110236) [production]
19:07 <gehel> restarting elasticsearch server elastic1025.eqiad.wmnet (T110236) [production]
18:21 <jzerebecki@tin> Synchronized php-1.27.0-wmf.22/extensions/Wikidata/extensions/Wikibase/repo/includes/Hooks/OutputPageBeforeHTMLHookHandler.php: wmf.22 fc20c54f7915b94ec0d15ef17e207c116910623d 2 of 2 T132645 (duration: 00m 28s) [production]
18:20 <jzerebecki@tin> Synchronized php-1.27.0-wmf.22/extensions/Wikidata/extensions/Wikibase/repo/includes/Dumpers/DumpGenerator.php: wmf.22 fc20c54f7915b94ec0d15ef17e207c116910623d 1 of 2 T133924 (duration: 00m 29s) [production]
18:14 <jzerebecki@tin> Synchronized php-1.27.0-wmf.22/extensions/Wikidata/extensions/Wikibase/repo/includes/Hooks/OutputPageBeforeHTMLHookHandler.php: wmf.22 fc20c54f7915b94ec0d15ef17e207c116910623d 2 of 2 T132645 (duration: 00m 34s) [production]
18:14 <robh> started all slaves via dbstore2001 this time. [production]
18:12 <jzerebecki@tin> Synchronized php-1.27.0-wmf.22/extensions/Wikidata/extensions/Wikibase/repo/includes/Dumpers/DumpGenerator.php: wmf.22 fc20c54f7915b94ec0d15ef17e207c116910623d 1 of 2 T133924 (duration: 00m 44s) [production]
18:07 <robh> started all slaves via dbstore2002 per Jaime's request [production]
17:45 <gehel> restarting elasticsearch server elastic1024.eqiad.wmnet (T110236) [production]
16:56 <gehel> restarting elasticsearch server elastic1023.eqiad.wmnet (T110236) [production]
16:22 <gehel> restarting elasticsearch server elastic1022.eqiad.wmnet (T110236) [production]
15:29 <jynus@tin> Synchronized wmf-config/db-codfw.php: Repool db2047 and db2068. Depool db2008, db2009. Pool db2033 as the new x1 node. (duration: 00m 27s) [production]
15:17 <gehel> restarting elasticsearch server elastic1021.eqiad.wmnet (T110236) [production]