2018-11-05

13:57 <jynus_> increase consistency of db2050, dbstore2002 s3 after they caught up on replication T208462 [production]
12:33 <ladsgroup@deploy1001> Finished deploy [ores/deploy@096ffb3]: T208577 T181632 T208608 (duration: 22m 58s) [production]
12:23 <zeljkof> EU SWAT finished [production]
12:23 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:471088|Increase wikidata dispatchers to 3]] (duration: 00m 54s) [production]
12:16 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:470985|Set wgForeignUploadTargets to [] for zhwiki (T208397)]] (duration: 00m 54s) [production]
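
(Note: the "Synchronized wmf-config/..." lines above are the messages scap logs when a config file is synced to the cluster; a minimal sketch of the underlying command, assuming the standard scap workflow on the deployment host:)

  scap sync-file wmf-config/InitialiseSettings.php "SWAT: [[gerrit:470985|Set wgForeignUploadTargets to [] for zhwiki (T208397)]]"
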
12:10 <ladsgroup@deploy1001> Started deploy [ores/deploy@096ffb3]: T208577 T181632 T208608 [production]
12:05 <zfilipin@deploy1001> Synchronized static/images/project-logos/: SWAT: [[gerrit:469027|Revert "Anniversary logo for cswiki" (T207589)]] (duration: 00m 58s) [production]
10:02 <godog> reformat xfs filesystems on ms-be1040 - T199198 [production]
09:17 <elukey@deploy1001> Finished deploy [analytics/refinery@9d39efa]: fixing stat1004 (duration: 00m 04s) [production]
09:17 <elukey@deploy1001> Started deploy [analytics/refinery@9d39efa]: fixing stat1004 [production]
09:08 <joal@deploy1001> Finished deploy [analytics/refinery@9d39efa]: regular analytics weekly deploy (duration: 05m 21s) [production]
09:02 <joal@deploy1001> Started deploy [analytics/refinery@9d39efa]: regular analytics weekly deploy [production]
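
(Note: the paired "Started deploy"/"Finished deploy [repo@sha]" lines above are emitted by scap3 deployments; a minimal sketch, assuming the usual deployment-host layout - the checkout path below is an assumption, not taken from the log:)

  cd /srv/deployment/analytics/refinery    # assumed checkout path on deploy1001
  scap deploy "regular analytics weekly deploy"
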
2018-11-04

23:42 <jynus_> deleting the same row on all broken s8 servers [production]
23:39 <jynus_> deleting one row on db1104 [production]
20:38 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.2/extensions/FlaggedRevs/frontend/specialpages/reports/ProblemChanges_body.php: T176232 - Ia43626584e (duration: 01m 17s) [production]
18:32 <jynus_> temporarily reduce consistency level of s4, s5, and s6 codfw masters to prevent excessive lag due to ongoing MediaWiki core maintenance [production]
08:42 <eileen> process-control config revision is e832b5a04a - re-enable running job list (all jobs on again now) [production]
08:38 <eileen> process-control config revision is e16b2c1c61 - re-enable jobs [production]
02:00 <eileen> I think I got the rest of the jobs off; process-control config revision is 4422254128 [production]
01:52 <eileen> process-control config revision is 6ec67b3d01 - also turn off omnirecipient repair job [production]
01:40 <eileen> process-control config revision is 5b72cfe874 - re-apply turning off q jobs [production]

2018-11-02

17:04 <thcipriani> rollback group2 wikis to 1.33.0-wmf.1 on mwdebug100{1,2} [production]
16:54 <thcipriani> deploying 1.33.0-wmf.2 to group2 wikis on mwdebug1002 [production]
16:43 <_joe_> live-hacking removal of time limit on mwdebug1001 [production]
16:32 <thcipriani> deploying 1.33.0-wmf.2 to group2 wikis on mwdebug1001 [production]
15:12 <jynus> restarting replication @ db2074 after db2094:s3 table fix T208565 [production]
15:00 <jynus> stopping replication on db2074 to fix db2094:s3 T208565 [production]
14:01 <vgutierrez> reimaging eeden.wikimedia.org as jessie test system - T208583 [production]
11:43 <jynus> ignoring cawikimedia.archive replication on db2094:s3 until a reimport happens T208565 [production]
11:29 <jijiki> Rebooting mw2244 (spare system) for maintenance [production]
10:52 <ema> restart varnish-be on cp3032 T208574 [production]
08:19 <jynus> performing alter table on dbstore2002 s3 and reducing consistency to improve recovery time T208462 T204006 [production]
08:01 <jynus> reducing consistency on db2050 to improve recovery time T208462 [production]
07:59 <jynus> performing alter table on db2050 T208462 T204006 [production]
07:38 <godog> reformat ms-be1043 xfs filesystems - T199198 [production]
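
(Note: "reformat ... xfs filesystems" generally amounts to re-creating the filesystem on each affected data disk; an illustrative sketch only - the device and mount point below are placeholders, not taken from the log:)

  umount /srv/swift-storage/sdb1            # placeholder mount point
  mkfs.xfs -f /dev/sdb1                     # re-create the filesystem on the placeholder device
  mount /dev/sdb1 /srv/swift-storage/sdb1
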
07:38 <jynus> reducing consistency temporarily (flush, binlog sync) at db2040 to prevent lagging [production]
07:26 <jynus> reducing consistency temporarily (flush, binlog sync) at db2035 to prevent lagging [production]
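
(Note: "reducing consistency (flush, binlog sync)" in the entries above most likely refers to relaxing MariaDB durability settings while a host catches up on replication; a sketch with illustrative values - the exact settings used are not recorded in the log:)

  sudo mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2; SET GLOBAL sync_binlog = 0;"
  # ... and back to fully durable defaults once the host has caught up:
  sudo mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1; SET GLOBAL sync_binlog = 1;"
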
2018-11-01

23:01 <shdubsh> restart hhvm on mw1261 [production]
22:29 <ejegg> restarted fundraising queue consumer jobs [production]
22:21 <ejegg> updated fundraising CiviCRM from 65130ef3dd to 042eeaeca9 [production]
22:18 <ejegg> turned off fundraising queue jobs for civi update [production]
22:12 <_joe_> rolling restart of hhvm on appservers and api in eqiad [production]
22:09 <shdubsh> cumin -b 2 -s 30 "O:mediawiki::appserver and *.eqiad.wmnet" "restart-hhvm" [production]
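
(Note: for readers unfamiliar with cumin, a hedged reading of the invocation above, based on cumin's command-line options:)

  # -b 2  : batch size - run the command on 2 hosts at a time
  # -s 30 : sleep 30 seconds between batches
  # the first quoted argument is the host-selection query (MediaWiki appservers in eqiad),
  # the second is the command executed on each selected host
  cumin -b 2 -s 30 "O:mediawiki::appserver and *.eqiad.wmnet" "restart-hhvm"
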
22:05 <_joe_> restarting hhvm on mw1238,1240 [production]
22:02 <_joe_> restart hhvm on mw1244 [production]
21:52 <shdubsh> restart hhvm on mw1247 [production]
21:49 <_joe_> depooling mw1238 for debugging [production]
21:09 <thcipriani@deploy1001> rebuilt and synchronized wikiversions files: group2 back to 1.33.0-wmf.1 [production]
20:55 <hoo> Restarted hhvm on mwdebug2002 [production]