2016-01-26
22:25 <dduvall@mira> rebuilt wikiversions.php and synchronized wikiversions files: group0 to 1.27.0-wmf.11, for real this time [production]
22:17 <dduvall@mira> rebuilt wikiversions.php and synchronized wikiversions files: group0 to 1.27.0-wmf.11 [production]
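The two entries above are the standard message scap prints for its sync-wikiversions step, which rebuilds wikiversions.php from wikiversions.json and pushes both files out to the cluster. A minimal sketch of the usual invocation from the deployment host, with the log message taken from the entry; the exact arguments used here are not recorded:
    sync-wikiversions 'group0 to 1.27.0-wmf.11'   # rebuild wikiversions.php from wikiversions.json and sync it out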
22:15 <dduvall@mira> Synchronized php-1.27.0-wmf.11: syncing wmf.11 backports of session fixes (duration: 03m 55s) [production]
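The 22:15 entry is the completion line printed by sync-dir after pushing a directory. A sketch of how it is typically run from /srv/mediawiki-staging on the deployment host, assuming the directory and message are passed positionally as in other sync-dir uses of the period:
    sync-dir php-1.27.0-wmf.11 'syncing wmf.11 backports of session fixes'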
21:55 <ori@mira> Synchronized docroot and w: I9b054d847a: New set of speed experiments (duration: 01m 29s) [production]
21:41 <marxarelli> filed https://phabricator.wikimedia.org/T124828 for fatal in extensions/Echo [production]
21:22 <marxarelli> Fatal error: Cannot redeclare class CallbackFilterIterator in /srv/mediawiki-staging/php-1.27.0-wmf.11/extensions/Echo/includes/iterator/CallbackFilterIterator.php on line 24 [production]
21:21 <marxarelli> lint error found when running sync-dir 'Errors parsing /srv/mediawiki-staging/php-1.27.0-wmf.11/extensions/Echo/includes/iterator/CallbackFilterIterator.php' [production]
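The 'Errors parsing …' text in the 21:21 entry is the failure output of php -l, which the sync tooling runs over staged files before pushing them; the fatal from the 21:22 entry (likely a collision with PHP's own CallbackFilterIterator, a built-in class since PHP 5.4) can be reproduced against the staging copy. A sketch:
    php -l /srv/mediawiki-staging/php-1.27.0-wmf.11/extensions/Echo/includes/iterator/CallbackFilterIterator.php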
21:11 <marxarelli> sync-dir php linting failed [production]
21:02 <marxarelli> resuming sync-dir and ignoring error as a known issue [production]
20:59 <marxarelli> getting 'Lost parent, LightProcess exiting' when running sync-dir [production]
20:57 <chasemp> drop labstore1001 nfs threads down to 192 [production]
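The 20:57 entry above and the 20:42/20:40 labstore1001 entries below amount to an NFS kernel server restart followed by lowering the thread count. A sketch of equivalent commands on a Debian-style host; whether the init script was used or rpc.nfsd was called directly is not recorded:
    service nfs-kernel-server stop    # 20:40 stopping nfs on labstore1001
    service nfs-kernel-server start   # 20:42 starting nfsd on labstore1001
    rpc.nfsd 192                      # 20:57 drop nfs threads down to 192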
20:42 <chasemp> starting nfsd on labstore1001 [production]
20:40 <marxarelli> modified wikiversions.php locally on mw1017 to promote all wikis to wmf.11 for initial testing [production]
20:40 <chasemp> stopping nfs on labstore1001 [production]
20:18 <marxarelli> locally modified wikiversions.php and wikiversions.json on mw1017 for testing [production]
20:14 <marxarelli> running 'sync-common --verbose deployment.eqiad.wmnet' on mw1017 to sync wmf.11 for initial testing [production]
20:01 <marxarelli> proceeding with train deploy. wmf.11 to mw1017, then group0 [production]
19:46 <akosiaris> issuing a varnish ban on all esams mobile frontend varnish for req.http.host .*wikimedia.org [production]
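The ban entries from here down to 19:14 apply the same regex ban per tier (backend and frontend) and per site; per the 19:35 note, the earlier 19:14–19:28 batch was against cache_text, and the 19:36–19:46 batch against the mobile caches. A sketch of one such ban as it might be issued on a single cache host; whether varnishadm was run directly or wrapped in a salt one-liner is not recorded:
    varnishadm 'ban req.http.host ~ .*wikimedia.org'               # backend instance
    varnishadm -n frontend 'ban req.http.host ~ .*wikimedia.org'   # frontend instance (assuming it is named "frontend")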
19:45 <akosiaris> issuing a varnish ban on all esams mobile backend varnish for req.http.host .*wikimedia.org [production]
19:43 <akosiaris> issuing a varnish ban on all ulsfo mobile frontend varnish for req.http.host .*wikimedia.org [production]
19:43 <akosiaris> issuing a varnish ban on all ulsfo mobile backend varnish for req.http.host .*wikimedia.org [production]
19:43 <akosiaris> issuing a varnish ban on all codfw mobile frontend varnish for req.http.host .*wikimedia.org [production]
19:36 <akosiaris> issuing a varnish ban on all codfw mobile backend varnish for req.http.host .*wikimedia.org [production]
19:36 <akosiaris> issuing a varnish ban on all eqiad mobile frontend varnish for req.http.host .*wikimedia.org [production]
19:36 <akosiaris> issuing a varnish ban on all eqiad mobile backend varnish for req.http.host .*wikimedia.org [production]
19:35 <akosiaris> all of the above referred to cache_text [production]
19:28 <akosiaris> all of the above already done, back logging [production]
19:28 <akosiaris> issuing a varnish ban on all esams frontend varnish for req.http.host .*wikimedia.org [production]
19:28 <akosiaris> issuing a varnish ban on all esams backend varnish for req.http.host .*wikimedia.org [production]
19:28 <akosiaris> issuing a varnish ban on all ulsfo backend varnish for req.http.host .*wikimedia.org [production]
19:28 <akosiaris> issuing a varnish ban on all ulsfo frontend varnish for req.http.host .*wikimedia.org [production]
19:28 <akosiaris> issuing a varnish ban on all ulsfo backend varnish for req.http.host .*wikimedia.org [production]
19:27 <akosiaris> issuing a varnish ban on all codfw frontend varnish for req.http.host .*wikimedia.org [production]
19:27 <akosiaris> issuing a varnish ban on all codfw backend varnish for req.http.host .*wikimedia.org [production]
19:27 <akosiaris> issuing a varnish ban on all eqiad frontend varnish for req.http.host .*wikimedia.org [production]
19:14 <akosiaris> issuing a varnish ban on all eqiad backend varnish for req.http.host .*wikimedia.org [production]
19:02 <marxarelli> backports to wmf.11 ready on mira but delaying train due to wikimedia.org outage [production]
18:44 <_joe_> running salt --batch-size=20 -C 'G@cluster:appserver and G@site:eqiad' cmd.run 'puppet agent -t --tags mw-apache-config' [production]
18:27 <robh> i broke icinga, but then i fixed it, icinga back to normal. [production]
18:21 <robh> icinga is broken, it seems it was from a change before mine, but my forced reload broke it [production]
18:18 <legoktm> running mwscript updateArticleCount.php --wiki=jawiki --update=1 [production]
18:14 <cmjohnson1> starting puppet on mw cluster [production]
18:14 <robh> i broke icinga, fixing [production]
18:08 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Pool new parsercache pc1005 after cloning it from pc1002 (duration: 01m 28s) [production]
17:43 <thcipriani> ltwiki collation updated 503623 rows processed [production]
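The 17:43 entry reads like the end-of-run summary of a category collation rebuild; a sketch of how such a run is typically started, assuming the standard updateCollation.php maintenance script (the entry itself does not name the script):
    mwscript updateCollation.php --wiki=ltwiki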
17:35 <mutante> mw1258 - restart hhvm [production]
17:20 <cmjohnson> disabling puppet on mw cluster [production]
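A sketch of how puppet is commonly disabled and later re-enabled across the mw cluster (the 17:20 entry above and the 18:14 'starting puppet' entry earlier), assuming salt grain targeting along the lines of the 18:44 entry; the actual commands used are not recorded:
    salt -b 20 -G 'cluster:appserver' cmd.run 'puppet agent --disable "mw maintenance"'   # 17:20 disable
    salt -b 20 -G 'cluster:appserver' cmd.run 'puppet agent --enable'                     # 18:14 re-enable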