2018-11-05
18:05 <XioNoX> disable ae2 on cr2-eqiad - T183585 [production]
18:02 <XioNoX> set vrrp priority 70 on cr2-eqiad:ae2 to failover VIP to cr1 - T183585 [production]
16:49 <XioNoX> Update LLDP config on cr3-ulsfo - T208630 [production]
16:48 <vgutierrez> uploaded certcentral 0.5 to apt.wikimedia.org (stretch) - T208572 T208378 [production]
16:06 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting MCR to read-new on all wikis (T198308) (duration: 00m 55s) [production]
13:57 <jynus_> increase consistency of db2050, dbstore2002 s3 after they caught up on replication T208462 [production]
12:33 <ladsgroup@deploy1001> Finished deploy [ores/deploy@096ffb3]: T208577 T181632 T208608 (duration: 22m 58s) [production]
12:23 <zeljkof> EU SWAT finished [production]
12:23 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:471088|Increase wikidata dispatchers to 3]] (duration: 00m 54s) [production]
12:16 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:470985|Set wgForeignUploadTargets to [] for zhwiki (T208397)]] (duration: 00m 54s) [production]
12:10 <ladsgroup@deploy1001> Started deploy [ores/deploy@096ffb3]: T208577 T181632 T208608 [production]
12:05 <zfilipin@deploy1001> Synchronized static/images/project-logos/: SWAT: [[gerrit:469027|Revert "Anniversary logo for cswiki" (T207589)]] (duration: 00m 58s) [production]
10:02 <godog> reformat xfs filesystems on ms-be1040 - T199198 [production]
09:17 <elukey@deploy1001> Finished deploy [analytics/refinery@9d39efa]: fixing stat1004 (duration: 00m 04s) [production]
09:17 <elukey@deploy1001> Started deploy [analytics/refinery@9d39efa]: fixing stat1004 [production]
09:08 <joal@deploy1001> Finished deploy [analytics/refinery@9d39efa]: regular analytics weekly deploy (duration: 05m 21s) [production]
09:02 <joal@deploy1001> Started deploy [analytics/refinery@9d39efa]: regular analytics weekly deploy [production]
2018-11-04
23:42 <jynus_> deleting the same row on all s8 broken servers [production]
23:39 <jynus_> deleting one row on db1104 [production]
20:38 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.2/extensions/FlaggedRevs/frontend/specialpages/reports/ProblemChanges_body.php: T176232 - Ia43626584e (duration: 01m 17s) [production]
18:32 <jynus_> reduce temp. consistency level of s4, s5, and s6 codfw masters to prevent excessive lagging due to ongoing mediawiki core maintenance [production]
08:42 <eileen> process-control config revision is e832b5a04a, re-enable running job list (all jobs on again now) [production]
08:38 <eileen> process-control config revision is e16b2c1c61, re-enable jobs [production]
02:00 <eileen> I think I got the rest of the jobs off; process-control config revision is 4422254128 [production]
01:52 <eileen> process-control config revision is 6ec67b3d01 - also turn off omnirecipient repair job [production]
01:40 <eileen> process-control config revision is 5b72cfe874 - reapply turning off of q jobs [production]
2018-11-02
17:04 <thcipriani> rollback group2 wikis to 1.33.0-wmf.1 on mwdebug100{1,2} [production]
16:54 <thcipriani> deploying 1.33.0-wmf.2 to group2 wikis on mwdebug1002 [production]
16:43 <_joe_> live-hacking removal of time limit on mwdebug1001 [production]
16:32 <thcipriani> deploying 1.33.0-wmf.2 to group2 wikis on mwdebug1001 [production]
15:12 <jynus> restarting replication @ db2074 after db2094:s3 table fix T208565 [production]
15:00 <jynus> stopping replication on db2074 to fix db2094:s3 T208565 [production]
14:01 <vgutierrez> reimaging eeden.wikimedia.org as jessie test system - T208583 [production]
11:43 <jynus> ignoring cawikimedia.archive replication on db2094:s3 until a reimport happens T208565 [production]
11:29 <jijiki> Rebooting mw2244 (spare system) for maintenance [production]
10:52 <ema> restart varnish-be on cp3032 T208574 [production]
08:19 <jynus> performing alter table on dbstore2002 s3 and reducing consistency to improve recovery time T208462 T204006 [production]
08:01 <jynus> reducing consistency on db2050 to improve recovery time T208462 [production]
07:59 <jynus> performing alter table on db2050 T208462 T204006 [production]
07:38 <godog> reformat ms-be1043 xfs filesystems - T199198 [production]
07:38 <jynus> reducing consistency temporarily (flush, binlog sync) at db2040 to prevent lagging [production]
07:26 <jynus> reducing consistency temporarily (flush, binlog sync) at db2035 to prevent lagging [production]