2016-02-24
19:46 <chasemp> runonce apply for https://gerrit.wikimedia.org/r/#/c/272891/ for labs VMs (only affects NFS clients) [production]
19:46 <legoktm@tin> Synchronized wmf-config/InitialiseSettings-labs.php: https://gerrit.wikimedia.org/r/273032 (duration: 01m 41s) [production]
19:41 <cmjohnson1> db1021 replacing disk 8 [production]
19:04 <legoktm@tin> Synchronized wmf-config/InitialiseSettings-labs.php: https://gerrit.wikimedia.org/r/273017 (duration: 01m 37s) [production]
18:52 <papaul> es201[1-9]: signing puppet certs and salt keys, initial puppet run [production]
18:39 <mutante> restart gitblit [production]
18:10 <bblack> disabling nginx keepalives on remaining clusters (upload, misc, maps) [production]
18:07 <ori> hafnium did not have enough disk space for mongo to execute db.repairDatabase(), which is necessary for reclaiming disk space. Since existing profile data can be tossed, ran `db.dropDatabase(); db.repairDatabase();`. Need to think this through better, obviously. [production]
18:02 <ori> mongodb on hafnium: ran `db.results.remove( { "meta.SERVER.REQUEST_URI": "/wiki/Special:BlankPage" } ); db.repairDatabase();` to drop profiles of PyBal requests and compact the database. [production]
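The two hafnium entries above (18:10 and 18:02) record reclaiming disk space from the profiling MongoDB: unwanted profile documents are removed, then db.repairDatabase() rewrites the data files so the freed space actually goes back to the filesystem. A minimal sketch of the 18:02 variant, assuming the legacy mongo shell and a hypothetical database name profilerdb (the log does not name the database):

    # Hypothetical database name; the log only shows the "results" collection.
    mongo profilerdb --eval '
        // Drop profiles generated by PyBal health checks against Special:BlankPage.
        db.results.remove({ "meta.SERVER.REQUEST_URI": "/wiki/Special:BlankPage" });
        // repairDatabase() rewrites the data files, which is what returns the
        // freed space to the OS; it needs roughly the data size in free scratch space.
        db.repairDatabase();
    '

When that scratch space is not available, as in the 18:10 entry, dropping the whole database first (db.dropDatabase()) sidesteps the copy, at the cost of all existing profile data.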
17:44 <demon@tin> Synchronized wmf-config/: poolcounter config simplification (duration: 01m 39s) [production]
17:21 <demon@tin> Synchronized wmf-config/InitialiseSettings.php: Re-apply "Set $wgResourceBasePath to /w for medium wikis" (duration: 01m 42s) [production]
17:16 <demon@tin> Synchronized wmf-config/: service entries for initialisesettings + fix (duration: 01m 45s) [production]
16:59 <papaul> es201[1-9] disabling/revoking puppet and salt keys for re-image [production]
16:57 <papaul> es200[1-9] disabling/revoking puppet and salt keys for re-image [production]
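The es200[1-9]/es201[1-9] entries at 16:57/16:59 and the follow-up at 18:52 are the usual re-image cycle: revoke the host's puppet certificate and salt key before the install, then sign/accept the new ones and run puppet for the first time afterwards. A rough sketch of the commands involved, using es2011.codfw.wmnet as a hypothetical example host (the exact FQDNs and where each command is run are assumptions):

    # Before the re-image, on the puppetmaster and salt master respectively:
    puppet cert clean es2011.codfw.wmnet    # revoke the old client certificate
    salt-key -d es2011.codfw.wmnet          # delete the old minion key
    # After the re-image:
    puppet cert sign es2011.codfw.wmnet     # sign the freshly generated CSR
    salt-key -a es2011.codfw.wmnet          # accept the new minion key
    # On the host itself, kick off the initial puppet run:
    puppet agent --test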
16:53 <bd808> https://wmflabs.org/sal/production missing SAL data since 2016-02-21T14:39 due to bot crash; needs to be backfilled from wikitech data [production]
16:43 <hashar> SAL on Elasticsearch is stalled https://phabricator.wikimedia.org/T127981 [production]
16:41 <_joe_> started nutcracker on mw1099 [production]
16:39 <bblack> +do_gzip done for all cache_text [production]
16:38 <demon@tin> Synchronized wmf-config/: Rationalize services definitions for labs too. (duration: 01m 45s) [production]
16:16 <demon@tin> Synchronized wmf-config/CommonSettings.php: Don't yet allow wikidatasparql graph urls (duration: 01m 37s) [production]
16:12 <demon@tin> Synchronized wmf-config/throttle.php: New throttle settings for Edit-a-thon workshop for orwiki (urgent) (duration: 01m 29s) [production]
16:09 <demon@tin> Synchronized wmf-config/InitialiseSettings.php: Revert "Set $wgResourceBasePath to "/w" for medium wikis" (duration: 01m 30s) [production]
15:20 <krinkle@tin> Synchronized php-1.27.0-wmf.13/extensions/wikihiero: Ia0990f5f (duration: 01m 33s) [production]
15:18 <krinkle@tin> Synchronized php-1.27.0-wmf.14/extensions/wikihiero: Ia0990f5f (duration: 01m 33s) [production]
15:15 <krinkle@tin> Synchronized php-1.27.0-wmf.13/includes/OutputPage.php: Iad94bb2 (duration: 01m 50s) [production]
15:13 <krinkle@tin> Synchronized php-1.27.0-wmf.14/includes/OutputPage.php: Iad94bb2 (duration: 01m 43s) [production]
15:07 <hasharAW> beta app servers have lost access to memcached due to bad nutcracker conf | T127966 [production]
14:55 <godog> nodetool-a repair -pr on restbase1008 T108611 [production]
14:46 <bblack> cache_text: -do_gzip experiment live on all [production]
14:43 <godog> bump max reconstruction speed on restbase2001 to 0 T127951 [production]
14:41 <hashar> beta: we have lost a memcached server at 11:51 UTC [production]
14:08 <krinkle@tin> Synchronized wmf-config/InitialiseSettings.php: T99096: Enable wmfstatic for medium wikis (duration: 01m 40s) [production]
13:46 <godog> bump max reconstruction speed on restbase2001 to 1 T127951 [production]
13:15 <godog> restart cassandra on restbase2001, throttle raid rebuild speed to 8MB/s [production]
11:06 <godog> reboot restbase2001 [production]
11:01 <godog> mdadm errors on restbase2001 while growing the raid0, load increasing [production]
10:53 <godog> grow restbase2001 raid0 to include a 5th disk [production]
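The restbase2001 entries from 10:53 through 14:43 (read bottom-up) are one operation: add a fifth disk to the RAID0, ride out mdadm errors and a reboot, then throttle the reshape so Cassandra keeps up. A rough sketch of the md-level side, with a hypothetical array /dev/md2 and new disk /dev/sdf (the log names neither):

    # Reshape the RAID0 across a fifth disk; mdadm does this via a temporary
    # conversion to raid4 while the data is redistributed.
    mdadm --grow /dev/md2 --raid-devices=5 --add /dev/sdf
    # Throttle the reshape/rebuild to roughly 8 MB/s (value is in KiB/s) so that
    # Cassandra I/O is not starved.
    sysctl -w dev.raid.speed_limit_max=8192
    # Watch progress.
    cat /proc/mdstat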
08:42 <moritzm> installing libssh2 security updates across the cluster [production]
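For cluster-wide security updates like the libssh2 one above, the fix is pushed to every host rather than waiting on each machine's own upgrade schedule. A hedged sketch using salt's cmd.run in small batches, assuming the Debian package name libssh2-1 and apt on all targets (both assumptions):

    # Upgrade only the already-installed libssh2 package, 10% of minions at a time.
    salt --batch-size 10% '*' cmd.run 'apt-get -q -y install --only-upgrade libssh2-1'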
05:46 <demon@tin> Synchronized docroot/: removing skel-1.5 symlinks (duration: 01m 41s) [production]
05:37 <demon@tin> Synchronized wmf-config/InitialiseSettings.php: import meta to wikitechwiki (duration: 01m 45s) [production]
05:28 <twentyafterfour> applied https://secure.phabricator.com/rP03d6e7f1b699d89c829e92ba0da2178b41ad1d6a on iridium to fix visibility on pastes [production]
05:11 <ori> Restarting HHVM on codfw app servers to make sure they pick up a file-scope change to stop profiling PyBal health-checks [production]
05:04 <ori@mira> Synchronized wmf-config/StartProfiler.php: I0e7be0b5: Never profile PyBal health-checks (duration: 03m 12s) [production]
03:19 <l10nupdate@tin> ResourceLoader cache refresh completed at Wed Feb 24 03:19:41 UTC 2016 (duration 8m 46s) [production]
03:10 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.14) (duration: 18m 05s) [production]
02:40 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Add default to fix notices about wmgUseFlow (duration: 01m 36s) [production]
02:37 <catrope@tin> Synchronized php-1.27.0-wmf.13/extensions/Echo: SWAT (duration: 01m 40s) [production]
02:35 <catrope@tin> Synchronized php-1.27.0-wmf.14/extensions/Echo: SWAT (duration: 01m 42s) [production]
02:33 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.13) (duration: 13m 47s) [production]
01:50 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Use Flow dblists for deciding which wikis have Flow (duration: 01m 38s) [production]