2015-08-24
16:23 <bd808@tin> Purged l10n cache for 1.26wmf18 [production]
16:23 <bd808@tin> Purged l10n cache for 1.26wmf17 [production]
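Note: the "Purged l10n cache" entries are written automatically by the deployment tooling on tin; the underlying step amounts to deleting the generated localisation CDB files for a branch that is no longer deployed. A rough sketch, assuming the standard staging layout on tin (the path is not taken from the log):
  # assumption: /srv/mediawiki-staging is the deployment staging directory on tin
  rm -v /srv/mediawiki-staging/php-1.26wmf17/cache/l10n/l10n_cache-*.cdb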
16:05 <andrewbogott> rebooting labnet1001 [production]
15:53 <_joe_> restarted nutcracker on mw1010, which was holding a 150 GB deleted logfile open [production]
15:47 <Krenair> running sync-common on mw1010 to bring it up to date after clearing some space [production]
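Note on the mw1010 disk-space incident above: space held by a deleted-but-still-open file is only returned once the owning process closes the file descriptor, which is why restarting nutcracker freed the 150 GB. A minimal sketch for spotting such files, assuming lsof is available:
  # list open files whose link count is 0 (unlinked but still held open)
  lsof +L1 | sort -nrk7 | head
  # restarting the offending service releases the descriptor and the space
  service nutcracker restart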
15:44 <krenair@tin> Purged l10n cache for 1.26wmf16 [production]
15:41 <krenair@tin> Purged l10n cache for 1.26wmf15 [production]
15:38 <krenair@tin> Synchronized php-1.26wmf19/extensions/Wikidata: https://gerrit.wikimedia.org/r/#/c/233411/1 (duration: 00m 49s) [production]
15:37 <hashar> stopped and restarted Zuul [production]
15:31 <krenair@tin> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/232919/ and https://gerrit.wikimedia.org/r/#/c/232915/ (duration: 01m 34s) [production]
15:29 <krenair@tin> Synchronized w/static/images/project-logos/knwikiquote.png: https://gerrit.wikimedia.org/r/#/c/232919/ (duration: 02m 04s) [production]
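Note: the "Synchronized ..." entries are logged automatically by the sync tooling on tin. A config deployment such as the InitialiseSettings.php change above is normally pushed from the staging directory with sync-file; a minimal sketch, with an illustrative log message:
  cd /srv/mediawiki-staging
  sync-file wmf-config/InitialiseSettings.php 'Add knwikiquote logo (illustrative message)'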
15:19 <Krenair> No space left on mw1010, cannot ping or ssh to mw2180 [production]
15:16 <krenair@tin> Synchronized docroot/noc/db.php: https://gerrit.wikimedia.org/r/#/c/232920/ (duration: 01m 34s) [production]
15:14 <hashar> apt-get upgrade on gallium [production]
14:48 <andrewbogott> forcing wikitech logouts in order to flush everyone’s service catalog [production]
14:18 <ottomata> starting to move kafka topic-partitions to new brokers (and off of analytics1021) [production]
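Note: moving topic-partitions off analytics1021, as above, is done with Kafka's partition-reassignment tool; the ZooKeeper address and plan file below are illustrative:
  # execute a reassignment plan whose replica lists exclude analytics1021's broker id
  kafka-reassign-partitions.sh --zookeeper zk1:2181 --reassignment-json-file reassign.json --execute
  # check progress later
  kafka-reassign-partitions.sh --zookeeper zk1:2181 --reassignment-json-file reassign.json --verify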
14:12 <yurik> git deploy synced kartotherian [production]
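Note: service deployments such as kartotherian used the Trebuchet "git deploy" workflow at the time; a rough sketch from the deployment host, with an assumed repository path:
  cd /srv/deployment/kartotherian/kartotherian   # assumed path
  git deploy start
  git deploy sync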
13:55 <akosiaris> disable puppet on fermium preparing for reinstallation [production]
13:55 <akosiaris> disable puppet on fermium [production]
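Note: disabling the Puppet agent before a reinstall keeps it from re-applying configuration mid-maintenance; standard agent commands, with an illustrative message:
  puppet agent --disable 'preparing fermium for reinstallation'
  # re-enable once the work is done
  puppet agent --enable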
12:54 <akosiaris> stop etcd on etcd1002.eqiad.wmnet. Already removed from the cluster [production]
11:58 <_joe_> stopping etcd on etcd1001 [production]
11:50 <_joe_> restarting etcd on etcd1001 [production]
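Note: for the etcd maintenance above, a node is removed from the cluster membership before its daemon is stopped for good; a sketch using etcdctl, with an illustrative member ID:
  etcdctl member list                      # find the ID of the node being retired
  etcdctl member remove 8e9e05c52164694d   # illustrative ID
  service etcd stop                        # then stop the daemon on that node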
09:00 <YuviPanda> starting up replicate for tools on labstore1002 [production]
09:00 <YuviPanda> cleaning up lockdir on labstore for maps and tools [production]
09:00 <YuviPanda> others replication on labstore1002 completed successfully [production]
08:31 <YuviPanda> cleaned up others lockdir for replication on labstore1002 and started it manually [production]
06:43 <jynus> reloading dbproxy1003 service [production]
02:21 <l10nupdate@tin> Synchronized php-1.26wmf19/cache/l10n: l10nupdate for 1.26wmf19 (duration: 06m 36s) [production]
2015-08-23
16:54 <urandom> bouncing Cassandra on restbase1001 to apply temporary GC settings [production]
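Note: temporary GC settings, as in the restbase1001 entries, are applied by changing the Cassandra JVM options and bouncing the node; the actual flags were not logged, so the ones below are purely illustrative:
  # illustrative: add GC logging flags to JVM_OPTS in /etc/cassandra/cassandra-env.sh
  #   JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
  nodetool drain             # flush memtables and stop accepting writes
  service cassandra restart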
02:19 <l10nupdate@tin> Synchronized php-1.26wmf19/cache/l10n: l10nupdate for 1.26wmf19 (duration: 06m 23s) [production]
2015-08-22
23:08 <krenair@tin> Synchronized php-1.26wmf19/extensions/AbuseFilter/maintenance/addMissingLoggingEntries.php: (no message) (duration: 01m 05s) [production]
19:41 <YuviPanda> manually remove old snapshots from labstore1002 [production]
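Note: assuming the labstore1002 snapshots are LVM snapshots, removing old ones by hand looks roughly like this (volume group and snapshot names are illustrative):
  lvs                                    # list logical volumes and their snapshots
  lvremove /dev/labstore/tools-snap-old  # illustrative snapshot name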
17:28 <chasemp> tweaking apache on iridium T109941 [production]
16:45 <chasemp> scratch that as we have mpm_prefork enabled :) [production]
16:32 <chasemp> raising values in mpm_worker.conf for iridium to debug and hopefully head off further crashing [production]
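Note: the 16:45 entry above refers to the fact that mpm_worker.conf has no effect while the prefork MPM is loaded; which MPM is actually in use can be confirmed before tuning:
  apache2ctl -M | grep -i mpm    # e.g. prints mpm_prefork_module (shared)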
14:44 <twentyafterfour> restarted apache2 on iridium. Segfault again. This time I at least got one clue in the log: "zend_mm_heap corrupted" [production]
09:18 <twentyafterfour> phabricator seems stable now; restarting apache2 on iridium did the trick, though unfortunately we didn't learn why [production]
08:36 <twentyafterfour> restarted phd on iridium [production]
08:35 <twentyafterfour> restarted apache2 on iridium [production]
02:20 <l10nupdate@tin> Synchronized php-1.26wmf19/cache/l10n: l10nupdate for 1.26wmf19 (duration: 06m 09s) [production]
00:26 <mutante> deleting blog.sh and blog_pageviews crontab from stat1003 [production]
2015-08-21
23:34 <urandom> restarting Cassandra on restbase1001 to restore baseline settings [production]
23:11 <yurik> synced kartotherian [production]
22:35 <mutante> deleting held messages on mailman that are older than 1 year [production]
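Note: held moderation messages in Mailman 2 live as heldmsg-*.pck files under the data directory; a sketch of the age-based cleanup above, with an assumed path and cutoff:
  find /var/lib/mailman/data -name 'heldmsg-*' -mtime +365 -print -delete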
21:44 <mutante> had to reset list creator password for mailman - ask me if you think you should have it and don't (this is not the master pass) [production]
20:37 <ori@tin> Synchronized php-1.26wmf19/includes: I1eb8dfc: Revert Count API and hook calls, with 1:1000 sampling (duration: 01m 09s) [production]
16:06 <jynus> checksumming dewiki database, higher write rate/dbstore lag expected temporarily [production]
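Note: checksumming a full wiki database to verify replica consistency, as above, is typically done with Percona's pt-table-checksum, which is also why a temporarily higher write rate and dbstore lag are expected; host and options are illustrative:
  pt-table-checksum --databases dewiki --chunk-time 0.5 h=db1052.eqiad.wmnet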
15:10 <ottomata> rebooting kafka broker analytics1021 to hopefully pick up the new disk at /dev/sdg; also turning on hyperthreading [production]
14:13 <ottomata> rebooting analytics1056 after upgrading kernel to linux-image-3.13.0-61-generic [production]
13:57 <urandom> restarting restbase1001 to apply temporary GC setting [production]