2016-02-26
15:17 <urandom> blocking CQL native port on restbase1009.eqiad.wmnet : https://phabricator.wikimedia.org/P2677 [production]
15:14 <urandom> disabling puppet on restbase1009.eqiad to preserve local changes during a quick experiment [production]
15:14 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" T128191 [releng]
15:03 <hashar> Switched MediaWiki core npm test to Nodepool instance T119143 [production]
14:44 <hashar> (since it started, don't be that scared!) [releng]
14:44 <hashar> Nodepool has triggered 40 000 instances [releng]
13:59 <krinkle@tin> Synchronized wmf-config/InitialiseSettings.php: T99096: Enable wmgUseWmfstatic on remaining wikis (duration: 00m 50s) [production]
13:54 <moritzm> rebooting lithium for kernel update [production]
13:26 <godog> launch swiftrepl continuous replication for unsharded containers on ms-fe1003 T128096 [production]
12:31 <elukey> added mc1017/mc1018 back to the redis/memcached pools after maintenance [production]
11:53 <hashar> Restarted memcached on deployment-memc02 T128177 [releng]
11:53 <hashar> memcached process on deployment-memc02 seems to have a nice leak of sockets (per lsof) and plainly refuses connections (bunch of CLOSE_WAIT) T128177 [releng]
11:42 <godog> run swiftrepl eqiad -> codfw for unsharded containers [production]
11:40 <hashar> deployment-memc04 find /etc/apt -name '*proxy' -delete (prevented apt-get update) [releng]
11:26 <hashar> beta: salt -v '*' cmd.run 'apt-get -y install ruby-msgpack' . I am tired of seeing puppet debug messages: "Debug: Failed to load library 'msgpack' for feature 'msgpack'" [releng]
11:24 <hashar> puppet keep restarting nutcracker apparently T128177 [releng]
11:20 <hashar> Memcached error for key "enwiki:flow_workflow%3Av2%3Apk:63dc3cf6a7184c32477496d63c173f9c:4.8" on server "127.0.0.1:11212": SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY [releng]
11:01 <elukey> removed mc1018/1017 from the redis memcached pools for maintenance [production]
09:46 <elukey> mc1016.eqiad re-added to the memcached/redis pools after maintenance [production]
08:12 <elukey> removed mc1016.eqiad from the redis/memcached pools for maintenance [production]
08:01 <moritzm> blacklisting aufs kernel module [production]
02:32 <l10nupdate@tin> ResourceLoader cache refresh completed at Fri Feb 26 02:32:19 UTC 2016 (duration 7m 42s) [production]
02:24 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.14) (duration: 10m 34s) [production]
01:53 <bd808> Setup initial wiki farm on am-01.authmanager.eqiad.wmflabs (T125320) [authmanager]
01:06 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Lower survey rate again (duration: 01m 05s) [production]
00:33 <catrope@tin> Synchronized php-1.27.0-wmf.14/extensions/MobileFrontend/: SWAT (duration: 01m 05s) [production]
00:31 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Raise file upload limit to 2047MB (duration: 01m 02s) [production]
00:22 <catrope@tin> Synchronized wmf-config/CommonSettings.php: Add plumbing for wmgUseGraphWithJsonNamespace (duration: 01m 03s) [production]
00:21 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Add wmgUseGraphWithJsonNamespace (duration: 01m 04s) [production]
00:15 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Run reader segmentation survey at 1:500 to test DNT (duration: 01m 21s) [production]
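The deployment-memc02 entries above describe sockets piling up in CLOSE_WAIT until memcached refuses connections. A minimal way to quantify that on a Linux host (a hypothetical diagnostic, not a command recorded in this log) is to read /proc/net/tcp, where state code 08 means CLOSE_WAIT:

```shell
# Count TCP sockets currently stuck in CLOSE_WAIT.
# Column 4 of /proc/net/tcp is the socket state; 08 = CLOSE_WAIT.
# Hypothetical diagnostic; the log does not record which command was used.
close_wait=$(awk 'NR > 1 && $4 == "08"' /proc/net/tcp | wc -l)
echo "CLOSE_WAIT sockets: $close_wait"
```

A steadily growing count here, while the application holds the matching file descriptors open, is the descriptor-leak symptom the log describes.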
2016-02-25
23:16 <bblack> turning puppet back on for cp*, pushing changes through https://gerrit.wikimedia.org/r/#/c/273385/ [production]
22:38 <hashar> beta: maybe deployment-jobrunner01 is processing jobs a bit faster now. Seems like hhvm went wild [releng]
22:33 <bblack> disabling puppet on caches for more scary VCL merges [production]
22:33 <ori@tin> Synchronized php-1.27.0-wmf.14/extensions/CentralAuth: I2cfcbf98f3: Reduce memcache traffic for central session storage (duration: 01m 21s) [production]
22:23 <hashar> beta: jobrunner01 had apache/hhvm killed somehow... Blame me [releng]
22:06 <bblack> turning puppet back on for cp*, pushing changes through https://gerrit.wikimedia.org/r/273217 to all [production]
21:59 <thcipriani@tin> Synchronized php-1.27.0-wmf.14/extensions/CirrusSearch/includes/InterwikiSearcher.php: Fix undefined variable $term in InterwikiSearcher [[gerrit:273369]] (duration: 01m 08s) [production]
21:56 <hashar> beta: stopped jobchron / jobrunner on deployment-jobrunner01 and restarting them by running puppet [releng]
21:49 <ori> Upgraded Grafana to v3.0.0-pre1. [production]
21:49 <hashar> beta: did a git-deploy of jobrunner/jobrunner, hoping to fix the puppet run on deployment-jobrunner01, and apparently it did! T126846 [releng]
21:28 <thcipriani@tin> rebuilt wikiversions.php and synchronized wikiversions files: all wikis to 1.27.0-wmf.14 [production]
21:23 <bd808> Deleted pywikibot cookie jar files ./.pywikibot/pywikibot.lwp ./pycore/pywikibot.lwp [tools.rezabot]
21:22 <andrewbogott> disabled quite a lot of tools and crons, as per anomie’s request: "It has broken code that's making bad login attempts at a rate of several per second, and has been for weeks despite the operator being pinged multiple times on various wikis." [tools.rezabot]
21:21 <ebernhardson@tin> Synchronized php-1.27.0-wmf.14/extensions/CirrusSearch/includes/Searcher.php: Update file that wasn't synced properly (duration: 01m 50s) [production]
20:57 <bblack> disabling puppet on caches for scarier VCL merges [production]
20:46 <urandom> starting bootstrap of restbase1008-a T119935 [production]
20:31 <andrewbogott> running webservice restart [tools.wsexport]
20:31 <thcipriani@tin> rebuilt wikiversions.php and synchronized wikiversions files: rollback wmf.14 [production]
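Several entries above are one-line filesystem cleanups, such as the `find /etc/apt -name '*proxy' -delete` run on deployment-memc04. The same pattern can be rehearsed safely against a scratch directory before touching a real host (the file names below are made up for illustration):

```shell
# Rehearse the '*proxy' cleanup in a throwaway directory instead of /etc/apt.
tmpdir=$(mktemp -d)
touch "$tmpdir/99proxy" "$tmpdir/99keepme"
find "$tmpdir" -name '*proxy' -delete   # deletes only names matching '*proxy'
remaining=$(ls "$tmpdir")
echo "$remaining"   # prints 99keepme
```

Running `find` with `-print` first, then swapping in `-delete`, is a common way to confirm the match set before anything is removed.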