2014-10-28
14:51 <godog> rolling-restart of eqiad ms-fe* after https://gerrit.wikimedia.org/r/#/c/167310/ [production]
14:04 <godog> reload swift frontend in eqiad after password rotation [production]
14:04 <demon> Synchronized wmf-config/PrivateSettings.php: (no message) (duration: 00m 04s) [production]
13:48 <manybubbles> Synchronized php-1.25wmf5/extensions/CirrusSearch/: (no message) (duration: 00m 05s) [production]
13:47 <manybubbles> Synchronized php-1.25wmf4/extensions/CirrusSearch/: (no message) (duration: 00m 11s) [production]
01:01 <demon> Synchronized wmf-config/InitialiseSettings.php: Turn Cirrus back on basically everywhere. If Elasticsearch freaks out again just revert I73ae276e to get back to lsearchd again (duration: 00m 04s) [production]
00:43 <ori> Synchronized php-1.25wmf4/extensions/WikimediaEvents/WikimediaEventsHooks.php: I4adffaa26: Actually unset the HHVM cookie (duration: 00m 03s) [production]
00:43 <ori> Synchronized php-1.25wmf5/extensions/WikimediaEvents/WikimediaEventsHooks.php: I4adffaa26: Actually unset the HHVM cookie (duration: 00m 03s) [production]
00:27 <awight> reenabling recurring GlobalCollect job [production]
00:07 <awight> updated crm from 9bb50403616d80aa8d39a89ab59965f53e9e3f3d to ffa543cab3eb508fa38b94c6de2643d168b0d507 [production]
2014-10-27
23:52 <bd808> Restarted logstash service on logstash1001 because I was not seeing any events from MW make it into kibana [production]
23:27 <maxsem> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/169229/ for reals now (duration: 00m 04s) [production]
23:27 <Reedy> restarted logstash on logstash1001 [production]
23:23 <maxsem> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/169229/ (duration: 00m 04s) [production]
23:22 <Tim> on mw1114: disabled puppet, enabled Eval.PerfPidMap, restarted hhvm [production]
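    (Eval.PerfPidMap is the HHVM runtime option that writes a /tmp/perf-<pid>.map symbol file so Linux perf can resolve JIT-compiled frames. A rough sketch of enabling it by hand, assuming the usual ini spelling of the option and an illustrative config path:
        # assumed ini key and path; puppet is disabled first so the change isn't reverted
        echo 'hhvm.eval.perf_pid_map = true' >> /etc/hhvm/php.ini
        service hhvm restart
        perf top -p "$(pgrep -o -x hhvm)"   # perf resolves JIT frames via the pid map
    )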
23:21 <awight> updated crm from 5b395c37dc596736ecafceeb156221e3751bfe37 to 9bb50403616d80aa8d39a89ab59965f53e9e3f3d [production]
23:21 <awight> disabling recurring globalcollect job [production]
23:20 <maxsem> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/168771/ (duration: 00m 04s) [production]
23:17 <maxsem> Synchronized php-1.25wmf4/extensions/VisualEditor/: (no message) (duration: 00m 04s) [production]
23:16 <maxsem> Synchronized php-1.25wmf4/extensions/MobileFrontend/: (no message) (duration: 00m 05s) [production]
23:14 <maxsem> Synchronized php-1.25wmf5/extensions/MobileFrontend/: (no message) (duration: 00m 04s) [production]
23:14 <maxsem> Synchronized php-1.25wmf5/extensions/VisualEditor/: (no message) (duration: 00m 05s) [production]
23:14 <maxsem> Synchronized php-1.25wmf5/extensions/Wikidata/: (no message) (duration: 00m 10s) [production]
23:13 <maxsem> Synchronized php-1.25wmf3/extensions/Wikidata/: (no message) (duration: 00m 12s) [production]
23:06 <maxsem> Synchronized wmf-config/Wikibase.php: https://gerrit.wikimedia.org/r/#/c/169192/ (duration: 00m 04s) [production]
22:58 <awight> reenabling recurring globalcollect job [production]
22:54 <awight> rollback civicrm from 9bb50403616d80aa8d39a89ab59965f53e9e3f3d to 5b395c37dc596736ecafceeb156221e3751bfe37 [production]
22:53 <awight> updated civicrm from 5b395c37dc596736ecafceeb156221e3751bfe37 to 9bb50403616d80aa8d39a89ab59965f53e9e3f3d [production]
22:50 <aaron> Synchronized wmf-config/flaggedrevs.php: Removed $wgFlaggedRevsProtectQuota for enwiki (duration: 00m 03s) [production]
22:46 <awight> disabling recurring GlobalCollect job [production]
22:45 <Tim> activated heap profiling on mw1114 [production]
22:21 <AaronSchulz> Running cleanupBlocks.php on all wikis [production]
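    (Running a maintenance script on all wikis is typically fanned out with the foreachwiki wrapper; the exact invocation here is assumed, roughly:
        # run the script once per wiki in the dblist
        foreachwiki cleanupBlocks.php
    )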
22:18 <aaron> Synchronized php-1.25wmf4/maintenance: 64fe61e0dbfea84d2bab4c17bf01f5dfdf5cc3b5 (duration: 00m 04s) [production]
22:12 <aaron> Synchronized wmf-config/CommonSettings.php: Stop GWT wgJobBackoffThrottling values from getting lost (duration: 00m 03s) [production]
20:35 <subbu> deploy parsoid sha 617e9e61 [production]
20:27 <cscott> updated OCG to version 60b15d9985f881aadaa5fdf7c945298c3d7ebeac [production]
20:10 <maxsem> Synchronized php-1.25wmf4/extensions/GeoData: GeoData back to normal (duration: 00m 03s) [production]
19:39 <manybubbles> after restarting elasticsearch we expected to get memory errors again. no such luck so far.... [production]
18:57 <manybubbles> completed restarting elasticsearch cluster. now it'll make a useful file on out of memory errors. raised the recovery throttling so it'll recover fast enough to cause oom errors [production]
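    (The recovery throttle referred to here is the indices.recovery.max_bytes_per_sec cluster setting, which can be changed on a live cluster through the transient cluster-settings API. A sketch, with host and value purely illustrative:
        curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
          "transient": { "indices.recovery.max_bytes_per_sec": "200mb" }
        }'
    )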
18:48 <maxsem> Synchronized php-1.25wmf4/extensions/GeoData: live hack to disable geosearch (duration: 00m 04s) [production]
18:37 <manybubbles> note that this is a restart without waiting for the cluster to go green after each restart. I expect lots of whining from icinga. This will cause us to lose some updates but should otherwise be safe. [production]
18:34 <manybubbles> restarting elasticsearch servers to pick up new gc logging and to reset them into a "working" state so they can have their gc problem again and we can log it properly this time. [production]
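    (The new GC logging, and presumably the "useful file on out of memory errors" noted at 18:57, are standard HotSpot JVM options. A sketch of the flags involved, assuming they are passed via ES_JAVA_OPTS or the service defaults file, with illustrative paths:
        # GC logging plus an automatic heap dump when the JVM hits OOM
        export ES_JAVA_OPTS="-verbose:gc -Xloggc:/var/log/elasticsearch/gc.log \
          -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
          -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch/"
    )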
18:15 <aaron> Synchronized wmf-config/CommonSettings.php: Remove obsolete flags (all of them) from $wgAntiLockFlags (duration: 00m 07s) [production]
17:53 <cmjohnson> replacing disk /dev/sdl slot 11 ms-be1013 [production]
17:37 <_joe_> uploaded a version of jemalloc for trusty with --enable-prof [production]
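    (A jemalloc built with --enable-prof exposes heap profiling through the MALLOC_CONF environment variable, which is presumably what the 22:45 heap-profiling activation above relied on. A generic sketch for a process started from a shell; the option values and analysis command are illustrative:
        # turn on jemalloc heap profiling and dump a profile periodically
        export MALLOC_CONF='prof:true,prof_prefix:/tmp/jeprof,lg_prof_interval:30'
        # inspect dumps with the pprof script bundled with jemalloc (jeprof in later releases)
        pprof --text /usr/bin/hhvm /tmp/jeprof.*.heap
    )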
16:31 <^d> elasticsearch: temporarily raised node_concurrent_recoveries from 3 to 5. [production]
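    (node_concurrent_recoveries is likewise a dynamic setting, applied through the same transient cluster-settings API sketched above:
        curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
          "transient": { "cluster.routing.allocation.node_concurrent_recoveries": 5 }
        }'
    )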
15:32 <demon> Synchronized wmf-config/InitialiseSettings.php: Enable Cirrus as secondary everywhere, brings back GeoData (duration: 00m 04s) [production]
15:08 <manybubbles> It's unclear how much of the master going haywire is something that'll be fixed in elasticsearch 1.4. They've done a lot of work there on the cluster state communication. [production]
15:03 <manybubbles> restarting gmond on all elasticsearch systems because stats aren't updating properly in ganglia and usually that helps [production]
15:02 <manybubbles> restarted a bunch of the elasticsearch nodes that had their heap full. wasn't able to get a heap dump on any of them because they all froze while trying to get the heap dump. [production]
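    (The heap dumps referred to here are ordinary JVM heap dumps, normally requested with jmap; on a responsive node that is roughly, with an illustrative path:
        jmap -dump:format=b,file=/tmp/es-heap.hprof <elasticsearch-pid>
    A JVM whose heap is already exhausted often cannot service the dump request, which is consistent with the freezes described above.
    )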