2017-06-27
21:24 <bblack> cp1074: restart backend (mailbox lag) [production]
21:03 <twentyafterfour@tin> rebuilt wikiversions.php and synchronized wikiversions files: group0 wikis to 1.30.0-wmf.7 refs T167536 [production]
20:46 <twentyafterfour@tin> Finished scap: sync 1.30.0-wmf.7 and promote to test wikis - refs T167536 (duration: 30m 44s) [production]
20:16 <twentyafterfour@tin> Started scap: sync 1.30.0-wmf.7 and promote to test wikis - refs T167536 [production]
18:41 <godog> switch thumbor back on with a fix for T168949 [production]
18:35 <godog> upgrade thumbor to 0.1.41 [production]
18:25 <gehel> reduce cluster_concurrent_rebalance to 8 and node_concurrent_recoveries to 4 on elasticsearch eqiad [production]
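The entry above adjusts shard-movement throttles on the eqiad Elasticsearch cluster. A minimal sketch of how such a change is typically applied through the cluster settings API, assuming a hypothetical coordinating node and the transient (non-persistent) scope; only the two values (8 and 4) come from the log entry:

```python
import json
import requests

# Hypothetical coordinating node; not named in the log entry.
ES = "http://elastic1017.eqiad.wmnet:9200"

settings = {
    "transient": {
        # at most 8 shards being rebalanced across the whole cluster at once
        "cluster.routing.allocation.cluster_concurrent_rebalance": 8,
        # at most 4 concurrent shard recoveries per node
        "cluster.routing.allocation.node_concurrent_recoveries": 4,
    }
}

resp = requests.put(f"{ES}/_cluster/settings",
                    headers={"Content-Type": "application/json"},
                    data=json.dumps(settings))
resp.raise_for_status()
print(resp.json())
```

Transient settings clear on a full cluster restart; a persistent block would be used instead if the throttles were meant to outlive one.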
18:05 <hashar> Some CI jobs are broken with "tidy.so: cannot open shared object file: No such file or directory" see T169004 [production]
17:52 <twentyafterfour> branching 1.30.0-wmf.7 - T167536 [production]
17:44 <bblack> restart pybal on lvs4004 [production]
16:37 <mutante> releases1001 - setting boot parameters to network, rebooting [production]
16:26 <mutante> rebooting ganeti instance releases1001 - which is down network-wise but was running [production]
16:23 <godog> revert back to imagescalers for thumbs - T168949 [production]
16:22 <twentyafterfour> restarted apache on iridium, phabricator was running an old version of libphutil [production]
14:22 <elukey> stop jobcron/jobrunner on mw1300 and mw1301 and reboot the hosts for kernel updates [production]
13:52 <marostegui> Rename table enwiki.localisation_file_hash on db1089 - T119811 [production]
12:35 <marostegui> Deploy alter table on s4 directly on codfw master (db2019) to let it replicate - T168661 [production]
12:19 <marostegui> Deploy alter table on s5 directly on codfw master (db2023) to let it replicate - T168661 [production]
12:06 <elukey> stop jobcron/jobrunner on mw1167 and mw1299 and reboot the hosts for kernel updates [production]
11:58 <marostegui> Deploy alter table on s6 directly on codfw master (db2028) to let it replicate - T168661 [production]
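The three schema-change entries above (db2019 for s4, db2023 for s5, db2028 for s6) follow the same pattern: run the ALTER once on the codfw section master with binary logging left on, so the change replicates to that section's codfw replicas rather than being applied host by host. A minimal sketch of that pattern, assuming pymysql and placeholder host, credentials, database and DDL; the actual change tracked in T168661 is not shown in the log:

```python
import pymysql

# Placeholders throughout: host, credentials, schema and the DDL itself
# are illustrative, not taken from the log or from T168661.
conn = pymysql.connect(host="db2028.codfw.wmnet", user="admin",
                       password="...", database="examplewiki")
try:
    with conn.cursor() as cur:
        # Keep the statement in the binlog so it flows to the replicas.
        cur.execute("SET SESSION sql_log_bin = 1")
        cur.execute("ALTER TABLE example_table "
                    "ADD COLUMN example_flag TINYINT NOT NULL DEFAULT 0")
finally:
    conn.close()
```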
11:54 <elukey> stop nova-spiceproxy and neutron-metadata-agent on labtestnet2001 to keep the root partition from filling up [production]
11:48 <akosiaris> upload apertium-spa-cat_2.1.0~r79717-1 to apt.wikimedia.org/jessie-wikimedia/main [production]
11:36 <elukey> stop jobcron/jobrunner on mw116[56] and reboot the hosts for kernel updates [production]
11:36 <akosiaris> upload apertium-spa_1.1.0~r79716-1+wmf1 to apt.wikimedia.org/jessie-wikimedia/main [production]
11:36 <akosiaris> upload apertium-cat_2.2.0~r79715-1+wmf1 to apt.wikimedia.org/jessie-wikimedia/main [production]
10:29 <elukey> stop jobcron/jobrunner on mw116[34] and reboot the hosts for kernel updates [production]
10:25 <elukey> re-enabled puppet and eventlogging_sync on db1047 [production]
09:49 <marostegui> executing alter tables on the log database on dbstore1002 for https://phabricator.wikimedia.org/T167162#3340421 [production]
09:43 <bawolff@tin> Synchronized php-1.30.0-wmf.6/api.php: Use redirect for api requests with pathinfo (duration: 00m 43s) [production]
09:24 <gehel> restart of maps eqiad cluster completed [production]
08:59 <elukey> stop puppet and eventlogging_sync on db1047 [production]
08:46 <elukey> executing alter tables on the log database on db1047 for https://phabricator.wikimedia.org/T167162#3340421 [production]
08:44 <gehel> reboot maps eqiad cluster [production]
08:33 <gehel> restart of maps codfw cluster completed [production]
08:25 <akosiaris> upload etherpad-lite_1.6.0-3 to apt.wikimedia.org/jessie-wikimedia/main [production]
08:18 <elukey> stop jobcron/jobrunner on mw116[12] and reboot the hosts for kernel updates [production]
08:14 <marostegui> Re-enable event scheduler on dbstore2001 - T168354 [production]
08:08 <godog> roll-restart swift-proxy on ms-fe1* to pick up thumbor changes [production]
07:57 <gehel> reboot maps codfw cluster [production]
07:16 <marostegui> Temporarily disable event scheduler on dbstore2001 - T168354 [production]
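This 07:16 entry and the 08:14 re-enable above bracket a maintenance window during which the MariaDB event scheduler on dbstore2001 is kept quiet. The scheduler is a dynamic global variable, so it can be toggled without a restart; a minimal sketch, assuming pymysql and placeholder credentials:

```python
import pymysql

# Host is from the log; user and password are placeholders.
conn = pymysql.connect(host="dbstore2001.codfw.wmnet", user="admin", password="...")
with conn.cursor() as cur:
    cur.execute("SET GLOBAL event_scheduler = OFF")  # pause scheduled events
    # ... run the maintenance that needs the scheduler quiet (T168354) ...
    cur.execute("SET GLOBAL event_scheduler = ON")   # restore normal operation
conn.close()
```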
07:11 <marostegui> Deploy alter table on db1034 - T166208 [production]
06:48 <marostegui> Deploy alter table s7 on labsdb1001 - T166208 [production]
06:47 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1034 - T166208 (duration: 00m 43s) [production]
06:40 <marostegui> Deploy alter table s7 - dbstore1002 - no_replicate_T166208.sh [production]
05:58 <elukey> restored rdb2004 as slave of rdb2003 (end of experiment) [production]
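Re-attaching rdb2004 as a replica of rdb2003 amounts to a single SLAVEOF command issued on rdb2004. A minimal sketch using redis-py, with port and password as assumptions (the log does not describe the instance layout):

```python
import redis

# Hostnames are from the log entry; port and password are assumptions.
r = redis.StrictRedis(host="rdb2004.codfw.wmnet", port=6379, password="...")
r.slaveof("rdb2003.codfw.wmnet", 6379)    # start replicating from rdb2003
print(r.info("replication")["role"])      # should now report "slave"
```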
05:08 <marostegui> Global rename of Green Cardamom → GreenC - T168776 [production]
05:04 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1079 - T166208 (duration: 00m 43s) [production]
03:43 <mutante> smokeping on stretch means version 2.6.11-3 vs the 2.6.9-1 we had before [production]
03:35 <mutante> smokeping - stop/rsync/fix permissions/start one more time to minimize gaps in graphs - now fully migrated netmon1001->netmon1002, historic data has been copied (T159756) [production]
03:28 <mutante> netmon1002 - ganglia apache_status.py broken in stretch (?), ganglia deprecated, stopping gmond, aggregator role got removed, was for torrus [production]