2015-01-04 §
15:59 <springle> upgrade db1057 trusty [production]
15:23 <springle> limiting exim/otrs concurrent connections on m2-master to 250 [production]
14:29 <springle> xtrabackup clone db1020 to db2011 [production]
13:49 <springle> dbproxy1002 failed m2-master traffic over to m2-slave. services up. investigating cause [production]
2015-01-03 §
23:23 <subbu> Try #2: hotfix synced to parsoid cores (to return 500 for urwiki:نام_مقامات_اے); git sha 85d8818ec1b692aaab440630a119c539d63d5ca5 [production]
22:38 <YuviPanda> restarted parsoid on wtp1010 [production]
22:38 <YuviPanda> restarted parsoid on wtp1006 [production]
22:37 <YuviPanda> restarted parsoid on wtp1004 [production]
22:29 <subbu> hotfix synced to parsoid cores (to return 500 for urwiki:نام_مقامات_اے); restart coming next [production]
22:15 <YuviPanda> restarted parsoid on wtp* hosts again [production]
21:19 <YuviPanda> restarting parsoid on wtp* hosts again [production]
20:46 <YuviPanda> restarting parsoid on wtp* again [production]
20:29 <YuviPanda> manually restarted parsoid on wtp1012 [production]
20:12 <YuviPanda> restarting parsoid on all wtp* hosts [production]
20:06 <YuviPanda> restarting parsoid on wtp1008 [production]
17:13 <_joe_> restarting parsoid across the cluster [production]
2015-01-02 §
21:19 <qchris> Ran kafka leader re-election to bring analytics1021 back into the set of leaders [production]
11:48 <godog> reboot es2004, debugging gmond stuck on start/stop [production]
04:59 <springle> Synchronized wmf-config/db-eqiad.php: repool db1061, warm up (duration: 00m 06s) [production]
03:29 <springle> clone and deploy es2002 es2003 es2004 [production]
2015-01-01 §
15:19 <springle> upgrade db1061 trusty [production]
2014-12-31 §
23:45 <springle> Synchronized wmf-config/db-eqiad.php: depool db1061 (duration: 00m 05s) [production]
14:02 <springle> Synchronized wmf-config/db-eqiad.php: repool db1065, warm up (duration: 00m 06s) [production]
09:47 <godog> updating precise-wikimedia from third-party repo (hwraid) [production]
09:45 <godog> previous reprepro update also accidentally updated elasticsearch in trusty-wikimedia to 1.3.7 [production]
09:43 <godog> updating trusty-wikimedia from third-party repo (hwraid) [production]
02:22 <springle> upgrade db1065 trusty [production]
02:16 <springle> Synchronized wmf-config/db-eqiad.php: depool db1065 (duration: 00m 05s) [production]
02:03 <awight> updated payments from 78b72063e4e0cc76b7e168be1e626d5e10e34d4a to 62c81d4574e5e994ff8f3cac7115eff335bd5265 [production]
00:52 <bd808> restarted elasticsearch on logstash1001 [production]
00:49 <awight> updated payments from e81f473acc5b31b49dd27714c40f9b71c3462e26 to 78b72063e4e0cc76b7e168be1e626d5e10e34d4a [production]
00:42 <bd808> log2udp events still not making it into logstash; possibly related to earlier elasticsearch cluster issues; I don't want to restart elasticsearch on logstash1001 while the cluster is still recovering from that. [production]
00:33 <bd808> restarted logstash on logstash1001; log2udp events not being recorded in elasticsearch [production]
2014-12-30 §
21:52 <bd808> restarted elasticsearch on logstash1002; it had dropped from the cluster [production]
20:46 <yurik> Synchronized wmf-config/CommonSettings.php: ZeroPortal 182227 (duration: 00m 06s) [production]
19:06 <paravoid> manually stopping acct on neon and setting /etc/default/acct ACCT_ENABLE to 0 [production]
16:38 <godog> killing uwsgi on tungsten, blew memory [production]
14:46 <Nemo_bis> morebots is being rude today [production]
14:36 <hoo> Synchronized wmf-config/CommonSettings.php: Enable unregistered users editing on it.m.wikipedia.org after Dec 31 (duration: 00m 06s) [production]
2014-12-29 §
20:19 <awight> payments updated from ce7fb9af37c4bba2a84668387b61729df4f9723c to e81f473acc5b31b49dd27714c40f9b71c3462e26 [production]
10:35 <godog> reboot ms-be2011, stuck while removing an LD, no console [production]
2014-12-27 §
23:33 <paravoid> restarting puppetmasters [production]
20:29 <gwicke> dropped old keyspaces titan{,2,3} on xenon to free space for titan4 [production]
19:53 <ori> gallium: restarted jenkins [production]
16:19 <Reedy> jenkins started again... [production]
16:17 <Reedy> jenkins killed [production]
16:12 <Reedy> attempting to kill jenkins [production]
16:11 <Reedy> jenkins is hung with high cpu/memory usage [production]
12:55 <springle> Synchronized wmf-config/db-eqiad.php: bump up s1 api load sent to db1066 (duration: 00m 06s) [production]