2015-07-06
21:02 <mutante> purging static-bz URL on varnish ... [production]
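(A one-off purge like this is usually issued as a ban through varnishadm; a minimal sketch, assuming the static-bz content sits under a /static-bz path prefix and that no instance-selection flags are needed — both assumptions:)

    # ban every cached object whose URL starts with the static-bz prefix (prefix is illustrative)
    varnishadm "ban req.url ~ ^/static-bz"
    # confirm the ban is registered
    varnishadm ban.list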
20:39 <akosiaris> upload php5_5.3.10-1ubuntu3.19-wmf1 on apt.wikimedia.org/precise-wikimedia [production]
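(apt.wikimedia.org is a reprepro-managed repository, so an upload like the one above is typically imported from a signed .changes file; a minimal sketch — the repository base directory and the .changes filename are illustrative:)

    # import the build into the precise-wikimedia distribution on the apt host
    reprepro -vb /srv/wikimedia include precise-wikimedia php5_5.3.10-1ubuntu3.19-wmf1_amd64.changes
    # check that the new version is now published
    reprepro -b /srv/wikimedia list precise-wikimedia php5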
20:15 <gwicke> restart cassandra instance on 1005 [production]
20:04 <mobrovac> restbase restart cassandra on rb1005 [production]
19:28 <krenair> Synchronized wmf-config: https://gerrit.wikimedia.org/r/#/c/223040/ (duration: 00m 12s) [production]
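(The "Synchronized ..." entries in this log come from the MediaWiki config deployment tooling on tin; a minimal sketch of the usual sequence, assuming the staging checkout lives in /srv/mediawiki-staging and that sync-file/sync-dir are the commands in use — all assumptions here:)

    cd /srv/mediawiki-staging
    git pull                      # bring the config checkout up to date
    # push one changed file to the app servers; the quoted message is what gets logged here
    sync-file wmf-config/InitialiseSettings.php 'https://gerrit.wikimedia.org/r/#/c/223040/'
    # or push a whole directory
    sync-dir wmf-config 'config change'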
19:11 <gwicke> reduced compaction throughput from 160 to 100 mb/s across the cassandra cluster via 'nodetool -h <host> setcompactionthroughput 100' [production]
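(This kind of change takes effect live and does not survive a restart; a minimal sketch of applying it cluster-wide with stock nodetool — the host list is illustrative:)

    # cap compaction throughput at 100 MB/s on every node (hosts are illustrative)
    for host in restbase100{1..6}; do
        nodetool -h "$host" setcompactionthroughput 100
        nodetool -h "$host" getcompactionthroughput    # read back the new value
    done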
18:51 <gwicke> restarted cassandra on restbase1001 with jdk8, see T104888 [production]
18:22 <gwicke> restarted cassandra on restbase1004 with jdk8 [production]
17:54 <Jeff_Green> authdns-update for new rigel A record [production]
17:42 <jynus> Synchronized wmf-config/db-codfw.php: increase db2029 traffic to normal levels (duration: 00m 12s) [production]
17:37 <gwicke> upgraded restbase1005 to jdk8 [production]
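(The jdk8 moves above boil down to installing the new runtime, repointing the java alternative, and restarting Cassandra; a minimal sketch, assuming OpenJDK packages and the Debian/Ubuntu alternatives system — package name and path are illustrative:)

    apt-get install -y openjdk-8-jdk
    update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
    service cassandra restart
    java -version    # should now report 1.8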
17:35 <gwicke> restarting cassandra instance on restbase1005: out of heap [production]
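(For an out-of-heap node the usual first steps are to confirm the heap pressure and bounce the instance; a minimal sketch using stock tooling — the log path is the Cassandra default and may differ:)

    nodetool info | grep -i heap                                    # heap used vs. capacity
    grep -i OutOfMemoryError /var/log/cassandra/system.log | tail   # confirm the OOM
    service cassandra restart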
17:10 <jynus> Synchronized wmf-config/db-codfw.php: repool db2029 again after conf upgrade (2/2) (duration: 00m 11s) [production]
17:09 <jynus> Synchronized wmf-config/db-codfw.php: repool db2029 again after conf upgrade (duration: 00m 11s) [production]
16:38 <jynus> upgrade and restart of db2029 [production]
16:35 <ori> depooled mw1152 [production]
15:29 <krenair> Finished scap: https://gerrit.wikimedia.org/r/#/c/222993/ (duration: 22m 09s) [production]
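(Unlike a single-file sync, a full scap rebuilds localisation caches and pushes the whole staging tree, which is why it runs for ~22 minutes here; a minimal sketch of starting one from tin, assuming the command takes the log message as its argument:)

    cd /srv/mediawiki-staging
    scap 'https://gerrit.wikimedia.org/r/#/c/222993/'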
15:21 <_joe_> repooling mw1152 [production]
15:20 <_joe_> attempting dump-apc on mw1060 [production]
15:09 <_joe_> depooled the HHVM imagescaler again [production]
15:07 <krenair> Started scap: https://gerrit.wikimedia.org/r/#/c/222993/ [production]
15:02 <krenair> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/222617/ (duration: 00m 12s) [production]
14:48 <moritzm> installed python security updates on analytics*, lab* and virt* [production]
14:46 <moritzm> added python-diskimage-builder 0.1.46-1+wmf1 for jessie-wikimedia on carbon [production]
14:43 <_joe_> depooled the HHVM imagescaler, spitting 503s again. [production]
14:18 <mobrovac> restbase started thinning out parsoid data (local_group_wikipedia_T_parsoid_dataDVIsgzJSne8k) for >= 22 days [production]
14:07 <YuviPanda> restart apache on labcontrol1001 to pick up parser function change [production]
12:57 <moritzm> installed python security updates on mw*, es* and db* [production]
12:18 <hoo> Synchronized wmf-config/: Enable WikibaseQuality and WikibaseQualityConstraints on wikidata (duration: 00m 13s) [production]
12:15 <hoo> Finished scap: Update WikibaseQuality and WikibaseQualityConstraints (duration: 25m 56s) [production]
11:49 <hoo> Started scap: Update WikibaseQuality and WikibaseQualityConstraints [production]
11:40 <hoo> Created the `wbqc_constraints` table on wikidatawiki [production]
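(Extension tables like wbqc_constraints are normally created by running the extension's SQL patch through the sql.php maintenance script; a minimal sketch, assuming WMF's mwscript wrapper and an illustrative path to the patch file:)

    mwscript sql.php --wiki=wikidatawiki extensions/WikibaseQualityConstraints/sql/create_wbqc_constraints.sql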
09:02 <_joe_> restarted the appserver on mw1059 with hhvm.server.apc.expire_on_sets = true, restarted the heap profiling to confirm my hypothesis on T104769 [production]
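(hhvm.server.apc.expire_on_sets is an HHVM ini option, so enabling it means editing the server's ini file and restarting HHVM; a minimal sketch — the ini path is an assumption:)

    echo 'hhvm.server.apc.expire_on_sets = true' >> /etc/hhvm/php.ini
    service hhvm restart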
08:31 <_joe_> restarted cassandra on rb1004. again. [production]
05:01 <springle> Synchronized wmf-config/db-eqiad.php: repool db1034, depool db1041 (duration: 00m 12s) [production]
05:00 <springle> stash/pull/apply CommonSettings.php on tin, which was left with modifications [production]
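(The stash/pull/apply above is ordinary git housekeeping for a checkout that had uncommitted local edits; a minimal sketch, assuming the staging checkout on tin:)

    cd /srv/mediawiki-staging
    git stash          # park the local modifications
    git pull           # update the checkout
    git stash apply    # re-apply the parked changes on top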
04:35 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Jul 6 04:35:45 UTC 2015 (duration 35m 44s) [production]
02:22 <LocalisationUpdate> completed (1.26wmf12) at 2015-07-06 02:22:12+00:00 [production]
02:18 <l10nupdate> Synchronized php-1.26wmf12/cache/l10n: (no message) (duration: 06m 07s) [production]
2015-07-05
22:30 <bd808> Restarted logstash on logstash1001; hung due to OOM errors [production]
22:03 <mobrovac> restbase rolling restart of restbase [production]
18:12 <krenair> Synchronized docroot/noc: https://gerrit.wikimedia.org/r/#/c/222932/ (duration: 00m 12s) [production]
17:49 <krenair> Synchronized docroot/noc/conf: https://gerrit.wikimedia.org/r/#/c/222290/ (duration: 00m 13s) [production]
17:44 <krenair> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/221600/ (duration: 00m 12s) [production]
15:16 <YuviPanda> restarted nutcracker on silver. [production]
12:55 <mobrovac> restbase rolling restart of cassandra to apply the 16G heap change https://gerrit.wikimedia.org/r/222899 [production]
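(A heap size change like https://gerrit.wikimedia.org/r/222899 lives in cassandra-env.sh and only takes effect on restart, hence the rolling restart; a minimal sketch of the change and the per-node restart — the file path and new-gen size are illustrative:)

    # /etc/cassandra/cassandra-env.sh (excerpt)
    MAX_HEAP_SIZE="16G"
    HEAP_NEWSIZE="800M"
    # then, one node at a time:
    service cassandra restart
    nodetool status | grep UN    # wait for the node to be Up/Normal before moving on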
11:21 <_joe_> restarted cassandra on restbase1004 (again), seemingly crashed due to a bad request [production]
11:03 <_joe_> restarting cassandra on rb1003,4 and restbase on rb1002,3 [production]
09:43 <bblack> restarted restbase on restbase1005 [production]
08:40 <_joe_> collecting heaps on an api appserver, mw1115, as comparison [production]