2018-12-11

10:47 <fsero> pooling mw1272 [production]
10:42 <fsero> scap pull mw1272 [production]
09:30 <ema> mw1272 down for the past 12h. Nothing in console, power-cycling [production]
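
Read bottom to top, the three entries above are the usual recovery of a crashed app server: power-cycle it through its management interface, re-sync the MediaWiki tree with scap pull on the host, then pool it again. A minimal sketch, assuming standard ipmitool/confctl tooling; the .mgmt hostname and the conftool selector are assumptions:

    # power-cycle the unresponsive host via its management interface (hostname assumed)
    ipmitool -I lanplus -H mw1272.mgmt.eqiad.wmnet -U root chassis power cycle
    # once it boots, re-sync MediaWiki code from the deployment server
    ssh mw1272.eqiad.wmnet 'scap pull'
    # put it back into the serving pools (selector assumed)
    sudo confctl select 'name=mw1272.eqiad.wmnet' set/pooled=yes
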
09:08 <marostegui> Deploy schema change on db1087 with replication (this will generate lag on labsdb:s8) T202167 T86338 [production]
09:08 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1087 T86338 T202167 (duration: 00m 46s) [production]
09:04 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1092 T86338 T202167 (duration: 00m 46s) [production]
09:01 <marostegui> Depool labsdb1010 - T86338 [production]
08:15 <marostegui> Deploy schema change on db1092 T202167 T86338 [production]
08:15 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1092 T86338 T202167 (duration: 00m 46s) [production]
08:10 <godog> decommissioning cassandra-b, restbase2005 -- T210843 [production]
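
restbase2005 hosts more than one Cassandra instance (cassandra-a, cassandra-b); decommissioning an instance streams its token ranges to the rest of the ring before it leaves. A minimal sketch using stock nodetool commands; the per-instance wrapper name is an assumption:

    # on restbase2005: stream instance b's data away and remove it from the ring
    nodetool-b decommission
    # watch streaming progress until the instance reports it has left
    nodetool-b netstats
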
07:56 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1104 T86338 T202167 (duration: 00m 46s) [production]
07:32 <oblivian@deploy1001> Synchronized wmf-config/PhpAutoPrepend.php: Hotfix for logging on php7 (2/2) (duration: 02m 51s) [production]
07:29 <oblivian@puppetmaster1001> conftool action : set/pooled=inactive; selector: name=mw1272.* [production]
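
The 07:29 line is conftool's own audit format; a command producing an entry like it would look roughly as follows (a sketch assuming the confctl CLI, with the selector taken from the log):

    # take every mw1272 service out of the load-balancer configuration
    sudo confctl select 'name=mw1272.*' set/pooled=inactive
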
07:28 <oblivian@deploy1001> Synchronized wmf-config/php7.php: Hotfix for logging on php7 (1/2) (duration: 02m 50s) [production]
07:06 <marostegui> Deploy schema change on db1104 T202167 T86338 [production]
07:06 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1104 T86338 T202167 (duration: 02m 51s) [production]
06:59 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1109 T86338 T202167 (duration: 02m 52s) [production]
06:45 <marostegui> Rename flaggedrevs tables on srwikinews on db1078 - T209761 [production]
06:13 <marostegui> Deploy schema change on db1109 T202167 T86338 [production]
06:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1109 T86338 T202167 (duration: 02m 55s) [production]
05:57 <marostegui> Deploy schema change on s4 primary master (db1068) T202167 T86338 [production]
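
The db1087/db1092/db1104/db1109 entries through the morning are one rolling schema change (T202167, T86338): each replica is depooled by editing wmf-config/db-eqiad.php and syncing it out, altered, then repooled. A sketch of a single iteration, assuming scap sync-file; the DDL itself is not in the log and is left as a placeholder:

    # 1. comment the replica out of wmf-config/db-eqiad.php, then push the config
    scap sync-file wmf-config/db-eqiad.php 'Depool db1104 T86338 T202167'
    # 2. run the ALTER TABLE for T202167 on the depooled replica
    #    (the actual statement is tracked in the task, not in this log)
    # 3. restore the host in db-eqiad.php and push again
    scap sync-file wmf-config/db-eqiad.php 'Repool db1104 T86338 T202167'
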
01:03 <catrope@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Configure localized logos for nywiki (T211570) (duration: 01m 36s) [production]
01:01 <catrope@deploy1001> Synchronized static/images/project-logos/: Add localised logos for nywiki (T211570) (duration: 01m 00s) [production]
00:52 <catrope@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Use new HD logos for zhwiktionary, zhwikivoyage, zhwikinews (T150618) (duration: 01m 16s) [production]
00:50 <RoanKattouw> mw1272 is down (does not respond to ping), but scap still tries to deploy to it [production]
00:50 <catrope@deploy1001> Synchronized static/images/project-logos/: Add HD logos for zhwikinews, zhwikivoyage, zhwiktionary (T150618) (duration: 02m 30s) [production]
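
The ordering of these pairs is deliberate: the logo files are synced before the InitialiseSettings.php change that starts referencing them, so no wiki ever points at a missing image. A sketch of one pair, assuming scap sync-dir and sync-file:

    # push the new image files first
    scap sync-dir static/images/project-logos/ 'Add localised logos for nywiki (T211570)'
    # then push the configuration that references them
    scap sync-file wmf-config/InitialiseSettings.php 'Configure localized logos for nywiki (T211570)'
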
00:15 <mutante> icinga2001 - killed all nagios processes, restarted the nsca service; something is different from icinga1001, since the service failed when trying to restart (T211641) [production]

2018-12-10

23:51 <andrewbogott> silencing the kvm process count alert on cloudvirt1023 until I can figure out why it's misfiring [production]
22:13 <mutante> Welcome new MediaWiki deployer Christoph 'WMDE-Fisch' Jauera (T211014) [production]
21:29 <arlolra@deploy1001> Finished deploy [parsoid/deploy@dc9b3a1]: Updating Parsoid to 19560da (duration: 11m 15s) [production]
21:20 <ladsgroup@deploy1001> Finished deploy [ores/deploy@03b9c98]: Add celery4 configs back to the deploy repo (duration: 15m 25s) [production]
21:19 <mholloway-shell@deploy1001> Finished deploy [mobileapps/deploy@9f4b567]: More internal promisification and other performance tweaks (T202642) (duration: 04m 17s) [production]
21:17 <arlolra@deploy1001> Started deploy [parsoid/deploy@dc9b3a1]: Updating Parsoid to 19560da [production]
21:14 <mholloway-shell@deploy1001> Started deploy [mobileapps/deploy@9f4b567]: More internal promisification and other performance tweaks (T202642) [production]
21:05 <ladsgroup@deploy1001> Started deploy [ores/deploy@03b9c98]: Add celery4 configs back to the deploy repo [production]
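
The Started/Finished pairs above come from scap's service deployment flow (Parsoid, ORES, mobileapps): scap logs when a deploy begins and again once every target group has finished. A minimal sketch, assuming the conventional /srv/deployment checkout on deploy1001:

    # on deploy1001, from the service's deploy repository
    cd /srv/deployment/parsoid/deploy
    git pull && git submodule update --init --recursive
    # scap emits the Started/Finished log lines itself
    scap deploy 'Updating Parsoid to 19560da'
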
20:35 <cdanis> T210416: grafana.wikimedia.org switched to point to grafana1001.eqiad.wmnet (running grafana 5.4.1) [production]
20:32 <jforrester@deploy1001> Synchronized wmf-config/extension-list: Uninstall the ParserMigration extension, Part III I332939809 (duration: 00m 46s) [production]
20:30 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Uninstall the ParserMigration extension, Part II I1f7266f55a (duration: 00m 46s) [production]
20:29 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: Uninstall the ParserMigration extension, Part I I338a3d8a87fd (duration: 00m 47s) [production]
20:26 <cdanis> T210416: switching grafana.wikimedia.org to point to grafana1001.eqiad.wmnet [production]
20:25 <robh> messing with ulsfo power for 103.02.23 tower b, shouldn't disrupt anything T209101 [production]
20:20 <cdanis> T210416: setting grafana.wikimedia.org (currently served by krypton) to read-only and copying to grafana1001 (serving grafana-beta) [production]
20:13 <urandom> decommissioning cassandra-a, restbase2005 -- T210843 [production]
19:58 <cdanis> T210416: updating grafana to 5.4.1 in stretch-wikimedia: reprepro --restrict grafana update stretch-wikimedia [production]
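
The reprepro invocation imports grafana 5.4.1 into the internal stretch-wikimedia repository; upgrading the host is a separate step. A sketch of the likely follow-up on grafana1001, assuming plain apt (the package normally restarts grafana-server on its own):

    # on grafana1001: pick up the refreshed repository index and upgrade
    sudo apt-get update
    sudo apt-get install grafana
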
18:15 <onimisionipe@deploy1001> Finished deploy [wdqs/wdqs@dcde39f]: GUI Update (duration: 09m 31s) [production]
18:05 <onimisionipe@deploy1001> Started deploy [wdqs/wdqs@dcde39f]: GUI Update [production]
17:59 <banyek> restarting mysql instance on labsdb1004 to restore replication filters to the original state - T211210 [production]
17:58 <banyek> restarting mysql instance on labsdb1004 to restore replication filters to the original state [production]
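
The entries above restart the instance to restore its replication filters, which suggests the filters (replicate-wild-ignore-table and similar settings) live in the server configuration and are read at startup. A minimal sketch, assuming a systemd-managed MariaDB service on labsdb1004; the unit name is an assumption:

    # after restoring the replicate-* lines in the instance's my.cnf
    sudo systemctl restart mariadb
    # confirm the filters are back to the original state
    sudo mysql -e 'SHOW SLAVE STATUS\G' | grep -i replicate
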
17:29 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: T211527 Hot-deploy Disable ParserMigration now that Raggett has been dropped (duration: 00m 47s) [production]
16:03 <moritzm> installing PHP updates on netmon1002 [production]