2016-12-19
19:41 <nuria@tin> Starting deploy [analytics/refinery@711a572]: (no message) [production]
19:36 <nuria@tin> Finished deploy [analytics/refinery@711a572]: (no message) (duration: 00m 04s) [production]
19:36 <nuria@tin> Starting deploy [analytics/refinery@711a572]: (no message) [production]
19:35 <nuria@tin> Finished deploy [analytics/refinery@711a572]: (no message) (duration: 27m 10s) [production]
19:16 <mutante> analytics1027 - out of disk, apt-get clean to free about 500M [production]
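For context, freeing space this way boils down to dropping the cached .deb files under /var/cache/apt/archives. A minimal sketch, assuming a standard Debian/Ubuntu host (the ~500M figure is from the entry above, the exact invocation is not):

    # see which filesystem is full
    df -h
    # drop cached .deb packages; freed roughly 500M on analytics1027
    sudo apt-get clean
    # confirm the space came back
    df -h /var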
19:08 <nuria@tin> Starting deploy [analytics/refinery@711a572]: (no message) [production]
16:17 <marostegui> Run lots of small optimize tables on db1015 as it needs to get some space back urgently [production]
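Reclaiming InnoDB space "urgently" with many small OPTIMIZE TABLE runs looks roughly like the sketch below; the schema and table names are placeholders, not taken from the log:

    # rebuild small tables one by one so each run stays short
    # (schema/table names are placeholders)
    for t in table_a table_b table_c; do
        sudo mysql somedb -e "OPTIMIZE TABLE ${t};"
    done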
16:04 <andrewbogott> upgrading to python-urllib3_1.19 on scb1001 [production]
14:02 <jynus> deploying new firewall rules to labsdb1009/10/11 [production]
13:39 <elukey> Manually raise hhvm.server.connection_timeout_seconds on mw1259 to one day [production]
13:15 <_joe_> restarted hhvm, apache on mw1260, raised the apache timeout to 1 day, restarted the jobrunner, disabled puppet [production]
11:47 <_joe_> disabling puppet, reconfiguring timeout on apache, restarting HHVM on mw1259 [production]
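The three entries above are one intervention on mw1259/mw1260: stop puppet from reverting the change, raise the HHVM and Apache timeouts to one day, and restart the services. A rough sketch; the HHVM ini key is taken from the log, while the config path and service names are assumptions:

    # keep puppet from reverting the manual change
    sudo puppet agent --disable "raising timeouts for a long-running request"
    # raise the HHVM connection timeout to one day (86400 s); ini path is an assumption
    echo "hhvm.server.connection_timeout_seconds = 86400" | sudo tee -a /etc/hhvm/server.ini
    # raise Apache's Timeout directive to 86400 in its config, then restart both
    sudo service hhvm restart
    sudo service apache2 restart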
10:16 <elukey> reimaging mw1168 and mw1169 to Trusty - T153488 [production]
09:38 <elukey> stopping jobrunner/jobchron daemons on mw116[89] as prep step for repurpose to videoscalers - T153488 [production]
09:23 <marostegui> Stop mysql db2048 (depooled) for maintenance - T149553 [production]
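Stopping a depooled replica for maintenance is typically just halting replication and shutting the daemon down; a minimal sketch, with the service name (mysql vs mariadb) an assumption:

    # halt replication cleanly first
    sudo mysql -e "STOP SLAVE;"
    # stop the daemon; the unit may be named mysql or mariadb on this host
    sudo service mysql stop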
09:20 <elukey> killing irc-echo [production]
09:04 <ariel@tin> Finished deploy [dumps/dumps@c8fb9a1]: table jobs to yaml config; stop dumping private tables completely (duration: 00m 01s) [production]
09:04 <ariel@tin> Starting deploy [dumps/dumps@c8fb9a1]: table jobs to yaml config; stop dumping private tables completely [production]
06:44 <marostegui> Deploy innodb compression dbstore2001 on dewiki and wikidatawiki - T151552 [production]
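Deploying InnoDB compression per wiki means rebuilding tables with a compressed row format. A sketch, assuming a 2016-era MariaDB where this requires file-per-table and the Barracuda file format (table name and block size are illustrative, not from the log):

    # requires innodb_file_per_table=ON and innodb_file_format=Barracuda
    # (table name and KEY_BLOCK_SIZE are illustrative)
    sudo mysql -e "ALTER TABLE dewiki.revision ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"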
02:23 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon Dec 19 02:23:18 UTC 2016 (duration 4m 23s) [production]
02:18 <l10nupdate@tin> scap sync-l10n completed (1.29.0-wmf.6) (duration: 06m 39s) [production]
00:33 <mobrovac> starting back cassandra on restbase1011 [production]
2016-12-18
22:34 <ariel@tin> Finished deploy [dumps/dumps@92946f0]: make monitoring more robust (duration: 00m 01s) [production]
22:34 <ariel@tin> Starting deploy [dumps/dumps@92946f0]: make monitoring more robust [production]
22:17 <ariel@tin> Finished deploy [dumps/dumps@2a35e23]: fix checkpoint prefetch jobs (duration: 00m 02s) [production]
22:17 <ariel@tin> Starting deploy [dumps/dumps@2a35e23]: fix checkpoint prefetch jobs [production]
18:32 <WMFlabs> Testing [production]
16:45 <elukey> starting cassandra instances on restbase1009, restbase1011 and restbase1013 (one at a time) - T153588 [production]
12:38 <mobrovac> started back cassandra restbase1009-a [production]
12:27 <mobrovac> started back cassandra restbase1011-c [production]
12:17 <mobrovac> started back cassandra restbase1013-c [production]
12:08 <mobrovac> disabling puppet on restbase1009, restbase1011 and restbase1013 due to cassandra OOMs [production]
08:57 <elukey> forced restart of cassandra-c on restbase1011 [production]
08:51 <elukey> forced restart of cassandra-b/c on restbase1013 (b not really needed, my error) [production]
08:49 <elukey> forced restart for cassandra-a on restbase1009 (still OOMs) [production]
08:43 <elukey> forced puppet on restbase1009 to bring up cassandra-a (stopped due to OOM issues) [production]
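The restbase entries above are all the same pattern on multi-instance Cassandra hosts (instances a/b/c per node) hitting OOMs: disable puppet so it does not restart things mid-investigation, then bring instances back one at a time. A sketch, assuming per-instance systemd units named cassandra-a, cassandra-b, and so on:

    # keep puppet from starting instances back up mid-intervention
    sudo puppet agent --disable "cassandra OOM investigation"
    # restart one instance and check it comes back before touching the next
    sudo systemctl restart cassandra-a
    sudo journalctl -u cassandra-a -n 50
    # re-enable puppet once the node looks healthy
    sudo puppet agent --enable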
07:07 <godog> force git-fat pull for twcs on restbase1* to restore twcs jar [production]
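git-fat keeps large binaries (here the TWCS jar) out of the git history and pulls them from a store on demand, so restoring a missing jar comes down to re-running the pull in the deployed checkout; the path below is a placeholder:

    # placeholder path to the deployed cassandra/twcs checkout
    cd /srv/deployment/cassandra/twcs
    # re-fetch the binary objects referenced by the git-fat stubs (the twcs jar)
    git fat pull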
02:23 <l10nupdate@tin> ResourceLoader cache refresh completed at Sun Dec 18 02:23:11 UTC 2016 (duration 4m 20s) [production]
02:18 <l10nupdate@tin> scap sync-l10n completed (1.29.0-wmf.6) (duration: 06m 39s) [production]
2016-12-17
09:38 <elukey> ran apt-get clean and removed some /tmp files on stat1002 to free some space [production]
09:24 <elukey> restarted stuck hhvm on mw1168 (forgot to run hhvm-dump-debug) [production]
02:37 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Dec 17 02:37:21 UTC 2016 (duration 4m 30s) [production]
02:32 <l10nupdate@tin> scap sync-l10n completed (1.29.0-wmf.6) (duration: 13m 14s) [production]
2016-12-16
23:53 <mutante> same fix for other broken 'mw-canary' hosts mw1261 - 1268 - killed dpkg, dpkg --configure -a, apt-get install php5 (after the upgrade to 5.6.29 in combination with php-pear hangs at postinst) [production]
23:45 <mutante> mw1262 - killed dpkg, dpkg --configure -a, apt-get install php5 [production]
23:41 <mutante> tungsten - fixed hanging dpkg install, killed, dpkg-reconfigure libapache2-mod-php5 [production]
23:19 <mutante> upgrading php5 to 5.6.29 on mw canary (DSA-3737-1) [production]
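The four entries above are one incident: the php5 5.6.29 security upgrade (DSA-3737-1) hung in php-pear's postinst, leaving dpkg wedged on tungsten and several mw hosts. The recovery steps named in the log, sketched out (the pid is found by hand):

    # find and kill the hung dpkg/postinst process
    pgrep -af 'dpkg|php-pear'
    sudo kill <pid-from-above>
    # let dpkg finish configuring the half-installed packages
    sudo dpkg --configure -a
    # retry the package that triggered the hang
    sudo apt-get install php5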
22:39 <eevans@tin> Finished deploy [cassandra/twcs@0b0c838]: (no message) (duration: 00m 05s) [production]
22:39 <eevans@tin> Starting deploy [cassandra/twcs@0b0c838]: (no message) [production]
22:38 <mobrovac> restbase deployed the latest code and pooled restbase1018 [production]