2016-02-24
22:09 <gehel> reboot logstash1006 for kernel and elasticsearch update [production]
22:07 <demon@tin> Finished scap: group0 to wmf.14 (duration: 47m 50s) [production]
21:19 <demon@tin> Started scap: group0 to wmf.14 [production]
21:19 <subbu> finished deploying parsoid version 581a43c75 [production]
21:08 <subbu> synced code; restarted parsoid on wtp1001 as a canary [production]
21:01 <subbu> starting parsoid deploy [production]
20:49 <moritzm> reboot logstash1004 for kernel/elasticsearch update [production]
20:39 <gehel> reboot logstash1003 for kernel and elasticsearch update [production]
20:28 <gehel> reboot logstash1002 for kernel and elasticsearch update [production]
20:15 <chasemp> reboot labstore1002 to ensure io scheduler grub options work [production]
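A minimal sketch of the kind of change being verified on labstore1002: setting an I/O scheduler on the kernel command line and confirming it after reboot. The elevator value and device name are assumptions, not taken from the log.

    # /etc/default/grub (scheduler choice is illustrative)
    # GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"
    sudo update-grub       # regenerate grub.cfg with the new kernel command line
    sudo reboot
    # after reboot, confirm the option took effect (active scheduler is shown in brackets)
    cat /proc/cmdline
    cat /sys/block/sda/queue/scheduler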
20:13 <moritzm> reboot logstash1001 for kernel update [production]
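For context on the logstash reboots above: a minimal sketch of an Elasticsearch-aware node reboot, assuming the standard cluster-settings API of the era; the exact procedure used is not recorded in the log.

    # disable shard allocation so the cluster does not rebalance during the reboot
    curl -s -XPUT 'http://localhost:9200/_cluster/settings' \
      -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'
    sudo reboot
    # once the node is back: re-enable allocation and wait for the cluster to go green
    curl -s -XPUT 'http://localhost:9200/_cluster/settings' \
      -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'
    curl -s 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'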
19:46 <chasemp> runonce apply for https://gerrit.wikimedia.org/r/#/c/272891/ for labs VMs (only affects NFS clients) [production]
19:46 <legoktm@tin> Synchronized wmf-config/InitialiseSettings-labs.php: https://gerrit.wikimedia.org/r/273032 (duration: 01m 41s) [production]
19:41 <cmjohnson1> db1021 replacing disk 8 [production]
19:04 <legoktm@tin> Synchronized wmf-config/InitialiseSettings-labs.php: https://gerrit.wikimedia.org/r/273017 (duration: 01m 37s) [production]
18:52 <papaul> es201[1-9] - signing puppet certs and salt keys; initial run [production]
18:39 <mutante> restart gitblit [production]
18:10 <bblack> disabling nginx keepalives on remaining clusters (upload, misc, maps) [production]
18:07 <ori> hafnium did not have enough disk space for mongo to execute db.repairDatabase(), which is necessary for reclaiming disk space. Since existing profile data can be tossed, ran `db.dropDatabase(); db.repairDatabase();`. Need to think this through better, obviously. [production]
18:02 <ori> mongodb on hafnium: ran `db.results.remove( { "meta.SERVER.REQUEST_URI": "/wiki/Special:BlankPage" } ); db.repairDatabase();` to drop profiles of PyBal requests and compact the database. [production]
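The two hafnium entries above boil down to: delete the unwanted profile documents, then compact. A sketch of the same sequence run non-interactively with the legacy mongo shell; the database name is hypothetical, as the log does not name it.

    # drop profiles of PyBal health-check requests, then compact to reclaim disk space
    # (database name "profiler" is an assumption)
    mongo profiler --eval '
      db.results.remove({ "meta.SERVER.REQUEST_URI": "/wiki/Special:BlankPage" });
      db.repairDatabase();   // needs free disk roughly equal to the data set size
    '
    # if there is not enough free space for repairDatabase(), dropping the whole
    # database first (as was done on hafnium) avoids the on-disk copy:
    mongo profiler --eval 'db.dropDatabase(); db.repairDatabase();'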
17:44 <demon@tin> Synchronized wmf-config/: poolcounter config simplification (duration: 01m 39s) [production]
17:21 <demon@tin> Synchronized wmf-config/InitialiseSettings.php: Re-apply "Set $wgResourceBasePath to /w for medium wikis" (duration: 01m 42s) [production]
17:16 <demon@tin> Synchronized wmf-config/: service entries for initialisesettings + fix (duration: 01m 45s) [production]
16:59 <papaul> es201[1-9] disabling/revoking puppet and salt keys for re-image [production]
16:57 <papaul> es200[1-9] disabling/revoking puppet and salt keys for re-image [production]
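A minimal sketch of the cert/key handling around a re-image as logged for the es20xx hosts (revoke before the re-image, sign and accept afterwards, then a first agent run). Commands run on the puppet and salt masters respectively; the hostname is illustrative.

    # before re-image: revoke the old identities
    sudo puppet cert clean es2011.codfw.wmnet
    sudo salt-key -d es2011.codfw.wmnet -y
    # after re-image: sign the new puppet cert, accept the new salt key, initial run
    sudo puppet cert sign es2011.codfw.wmnet
    sudo salt-key -a es2011.codfw.wmnet -y
    sudo salt 'es2011*' cmd.run 'puppet agent --test'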
16:53 <bd808> https://wmflabs.org/sal/production missing SAL data since 2016-02-21T14:39 due to bot crash; needs to be backfilled from wikitech data [production]
16:43 <hashar> SAL on Elasticsearch is stale https://phabricator.wikimedia.org/T127981 [production]
16:41 <_joe_> started nutcracker on mw1099 [production]
16:39 <bblack> +do_gzip done for all cache_text [production]
16:38 <demon@tin> Synchronized wmf-config/: Rationalize services definitions for labs too. (duration: 01m 45s) [production]
16:16 <demon@tin> Synchronized wmf-config/CommonSettings.php: Don't yet allow wikidatasparql graph urls (duration: 01m 37s) [production]
16:12 <demon@tin> Synchronized wmf-config/throttle.php: New throttle settings for Edit-a-thon workshop for orwiki (urgent) (duration: 01m 29s) [production]
16:09 <demon@tin> Synchronized wmf-config/InitialiseSettings.php: Revert "Set $wgResourceBasePath to "/w" for medium wikis" (duration: 01m 30s) [production]
15:20 <krinkle@tin> Synchronized php-1.27.0-wmf.13/extensions/wikihiero: Ia0990f5f (duration: 01m 33s) [production]
15:18 <krinkle@tin> Synchronized php-1.27.0-wmf.14/extensions/wikihiero: Ia0990f5f (duration: 01m 33s) [production]
15:15 <krinkle@tin> Synchronized php-1.27.0-wmf.13/includes/OutputPage.php: Iad94bb2 (duration: 01m 50s) [production]
15:13 <krinkle@tin> Synchronized php-1.27.0-wmf.14/includes/OutputPage.php: Iad94bb2 (duration: 01m 43s) [production]
15:07 <hasharAW> beta app servers have lost access to memcached due to bad nutcracker conf | T127966 [production]
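On the bad nutcracker config noted above: nutcracker can validate a configuration before a restart, which is a cheap guard against exactly this failure. The config path is the Debian default, an assumption here.

    nutcracker --test-conf --conf-file /etc/nutcracker/nutcracker.yml && \
      sudo service nutcracker restart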
14:55 <godog> nodetool-a repair -pr on restbase1008 T108611 [production]
14:46 <bblack> cache_text: -do_gzip experiment live on all [production]
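The +do_gzip/-do_gzip entries refer to toggling gzip handling on the cache_text Varnish layer. A quick client-side way to see what encoding the caches return (the URL is just an example):

    # request with gzip accepted and inspect the response headers
    curl -sI -H 'Accept-Encoding: gzip' https://en.wikipedia.org/wiki/Special:BlankPage \
      | grep -iE '^(content-encoding|x-cache)'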
14:43 <godog> bump max reconstruction speed on restbase2001 to 0 T127951 [production]
14:41 <hashar> beta: we have lost a memcached server at 11:51am UTC [production]
14:08 <krinkle@tin> Synchronized wmf-config/InitialiseSettings.php: T99096: Enable wmfstatic for medium wikis (duration: 01m 40s) [production]
13:46 <godog> bump max reconstruction speed on restbase2001 to 1 T127951 [production]
13:15 <godog> restart cassandra on restbase2001, throttle raid rebuild speed to 8MB/s [production]
11:06 <godog> reboot restbase2001 [production]
11:01 <godog> mdadm errors on restbase2001 while growing the raid0, load increasing [production]
10:53 <godog> grow restbase2001 raid0 to include a 5th disk [production]
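The restbase2001 sequence above (grow the RAID0, then throttle the rebuild to 8MB/s) corresponds roughly to the following sketch; the md device and disk names are assumptions, and the cap maps to the kernel's speed_limit_max sysctl in KB/s.

    # add a 5th member to the RAID0 (mdadm reshapes via a temporary RAID4)
    sudo mdadm --grow /dev/md2 --raid-devices=5 --add /dev/sde1
    cat /proc/mdstat                                  # watch reshape progress
    # throttle the reshape to ~8MB/s so Cassandra keeps enough I/O headroom
    sudo sysctl -w dev.raid.speed_limit_max=8192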
08:42 <moritzm> installing libssh2 security updates across the cluster [production]
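A hedged sketch of rolling a package security update across a salt-managed fleet; the batching, targeting, and tooling here are assumptions for illustration, as the log only records that libssh2 updates were installed.

    # upgrade only the affected package, 10% of minions at a time
    sudo salt -b '10%' '*' cmd.run \
      'DEBIAN_FRONTEND=noninteractive apt-get -q -y install --only-upgrade libssh2-1'
    # confirm the installed version afterwards
    sudo salt '*' cmd.run 'dpkg-query -W libssh2-1'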
05:46 <demon@tin> Synchronized docroot/: removing skel-1.5 symlinks (duration: 01m 41s) [production]