2016-02-03

15:30 <hashar> MediaWiki 1.27.0-wmf.12, from 1.27.0-wmf.12, successfully checked out. [production]
15:23 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Depool db1060 (duration: 00m 43s) [production]
15:21 <hashar> mira: cloning 1.27.0-wmf.12 (no link updates) [production]
15:15 <bblack> rebooting cp1060 (depooled/downtimed) [production]
15:11 <bblack> depooling cp1060 temporarily from cache_mobile varnish backends [production]
14:55 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Repool db1054 with low weight, repool db1067 with original weight (duration: 01m 22s) [production]
14:50 <bblack> rebooting cp1008 for kernel [production]
14:28 <godog> investigating uwsgi processes for graphite-web not coming up after reboot [production]
14:10 <moritzm> rebooting graphite1001 for kernel update [production]
13:41 <godog> powercycle ms-be2015 [production]
13:39 <jynus> restarting and reconfiguring mysql at db1054 [production]
13:27 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Repool db1067 at low weight; depool db1054 (duration: 01m 16s) [production]
11:45 <jynus> restarting and reconfiguring mysql at db1067 [production]
11:11 <moritzm> repooling restbase1001 [production]
11:04 <akosiaris> OTRS database upgraded to 3.3, moving on with 4.0 [production]
11:00 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Repool db1063 at 100% load; depool db1067 for maintenance (duration: 01m 16s) [production]
10:48 <moritzm> depooling restbase1001 for kernel/Java update [production]
10:37 <_joe_> ending the load test on the eqiad apaches [production]
10:11 <moritzm> reboot francium for kernel update [production]
09:53 <jynus> m2 backup finished on /srv/backups/2016-02-03_08-51-06, filename 'db1020-bin.000842', position 220103947 [production]
09:50 <moritzm> restarting neodymium for kernel update [production]
09:49 <_joe_> doing some basic load test on appservers in eqiad [production]
08:52 <akosiaris> stop otrs-daemon on mendelevium [production]
08:51 <jynus> starting mysql backup on db1020 (/srv/backups) [production]
08:44 <akosiaris> stop slave on db2011, db1020's (m2-master) slave, for OTRS migration. DO NOT ENABLE [production]
08:40 <akosiaris> stop exim4, cron, apache2 on iodine, mendelevium [production]
08:39 <akosiaris> disabling puppet on iodine, mendelevium, OTRS migration [production]
08:24 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Repool db1063 with low weight (duration: 01m 20s) [production]
2016-02-02

23:13 <demon@mira> Finished scap: everything re-sync one more time for good measure (duration: 17m 04s) [production]
22:56 <demon@mira> Started scap: everything re-sync one more time for good measure [production]
22:50 <bblack> repooling scap proxies: mw10033, mw1070, mw1097, mw1216 [production]
22:45 <chasemp> restart hhvm & apache2 on mw1235.eqiad.wmnet [production]
22:44 <_joe_> restarted hhvm on mw1231, stat_cache again [production]
22:42 <demon@mira> Finished scap: resync final batch with master (duration: 06m 48s) [production]
22:35 <demon@mira> Started scap: resync final batch with master [production]
22:31 <demon@mira> Finished scap: re-sync batch of mw1136-50, mw1190-1220, mw2150-mw2200 with master (duration: 09m 33s) [production]
22:22 <demon@mira> Started scap: re-sync batch of mw1136-50, mw1190-1220, mw2150-mw2200 with master [production]
22:20 <ori> restarted HHVM on mw1243. Lock-up. Backtrace in /tmp/hhvm.2897.bt [production]
22:20 <demon@mira> Finished scap: re-sync batch of mw1101-1135, 1240-1260, 2101-2150 with master (duration: 12m 51s) [production]
22:07 <demon@mira> Started scap: re-sync batch of mw1101-1135, 1240-1260, 2101-2150 with master [production]
22:00 <demon@mira> Finished scap: re-sync batch of mw1151-mw1225, mw2174-mw2214 with master (duration: 11m 24s) [production]
21:48 <demon@mira> Started scap: re-sync batch of mw1151-mw1225, mw2174-mw2214 with master [production]
21:45 <demon@mira> Finished scap: re-sync batch of mw1051-1100, mw2051-2100 with master (duration: 13m 41s) [production]
21:31 <demon@mira> Started scap: re-sync batch of mw1051-1100, mw2051-2100 with master [production]
21:28 <demon@mira> Finished scap: re-sync batch of mw1025-1050 and mw2007-mw2050 with master (2nd try) (duration: 14m 33s) [production]
21:27 <_joe_> depooling eqiad scap-proxies [production]
21:13 <demon@mira> Started scap: re-sync batch of mw1025-1050 and mw2007-mw2050 with master (2nd try) [production]
21:04 <demon@mira> scap aborted: re-sync batch of mw1025-1050 and mw2007-mw2050 with master (duration: 10m 11s) [production]
20:54 <demon@mira> Started scap: re-sync batch of mw1025-1050 and mw2007-mw2050 with master [production]
20:32 <hashar> mw1114-mw1119 are canary api appservers Finished syncing [production]