2018-10-15
05:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1109 (duration: 00m 50s) [production]
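A depool of this kind is done by editing the db config and syncing that single file from the deploy host; a minimal sketch, with the staging path assumed and the edit itself left out:

    cd /srv/mediawiki-staging                                # mediawiki-config checkout on the deploy host (assumed path)
    $EDITOR wmf-config/db-eqiad.php                          # comment out db1109 in its section
    scap sync-file wmf-config/db-eqiad.php 'Depool db1109'   # push just this file to the app servers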
05:11 <marostegui> Stop MySQL on db1116:3318 to use it to clone db1109 [production]
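Cloning a replica from a stopped source instance is, at its core: quiesce the source's datadir, stream it to the target, then start MySQL and let replication catch up. A rough sketch, assuming systemd multi-instance units and a plain netcat copy (unit name, paths and port are illustrative, not taken from the log):

    # on db1116 (source): stop the :3318 instance so its datadir is consistent
    sudo systemctl stop mariadb@s8
    # on db1109 (target): listen for the incoming datadir
    nc -l -p 4444 | tar -x -C /srv/sqldata
    # on db1116 (source): stream the datadir across
    tar -c -C /srv/sqldata.s8 . | nc db1109.eqiad.wmnet 4444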
05:10 <kart_> Update cxserver to b51f363 [releng]
04:00 <thcipriani> deployment-deploy01: sudo rm -rf /tmp/scap_l10n* [releng]
03:18 <kartik@deploy1001> Finished deploy [cxserver/deploy@5a70ef1]: Update cxserver to 47a864b (T205420, T203077, T205700, T205616) (duration: 04m 44s) [production]
03:14 <kartik@deploy1001> Started deploy [cxserver/deploy@5a70ef1]: Update cxserver to 47a864b (T205420, T203077, T205700, T205616) [production]
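The Started/Finished pair above is what scap3 logs for a service deploy; run from the deploy host it looks roughly like this (repository path assumed, message illustrative):

    cd /srv/deployment/cxserver/deploy        # scap3 deploy repo for cxserver (assumed path)
    git pull && git submodule update --init   # pick up the new cxserver revision
    scap deploy 'Update cxserver to 47a864b'  # emits the Started/Finished log lines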
00:45 <krinkle@deploy1001> Synchronized multiversion/MWRealm.php: I79fb3d194a58: use env.php (duration: 00m 49s) [production]
00:08 <krinkle@deploy1001> Synchronized wmf-config/: I79fb3d194a: add env.php file (not yet used) (duration: 00m 50s) [production]
2018-10-14
23:42 <krinkle@deploy1001> Synchronized multiversion/getMWVersion: Ice9a74e73481 no-op (duration: 00m 49s) [production]
23:21 <krinkle@deploy1001> Synchronized wmf-config/ProductionServices.php: If4d8faa4 (duration: 00m 48s) [production]
21:48 <krinkle@deploy1001> Synchronized multiversion/MWMultiVersion.php: I83b2bdd53c13e (duration: 00m 50s) [production]
20:47 <krinkle@deploy1001> Synchronized wmf-config/import.php: beta-only (duration: 00m 54s) [production]
19:26 <Cam11598> 12:25:33 PM <ChanServ> Flags +AV were set on Operator873 in #cvn-sw. [cvn]
16:34 <volans> forcing a puppet run on all eqsin hosts with batch 1 to clear most of the alarms - T206861 [production]
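A batched Puppet run across a site is normally driven with cumin from a cluster-management host; a minimal sketch, with the host selection alias and the agent wrapper command as assumptions:

    sudo cumin -b 1 'A:eqsin' 'run-puppet-agent'   # -b 1: one host at a time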
09:15 <elukey> restart yarn resource manager on an-coord1002 (failover happened due to jvm issues) [analytics]
09:15 <elukey> restart apps-session-metrics with spark 2.3.1 oozie libs (modified the coordinator.properties file manually on disk) [analytics]
08:54 <elukey> restart Yarn resource manager on an-master1002 to force an-master1001 to take the leadership back - T206943 [production]
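With an active/standby ResourceManager pair, restarting the currently active one on an-master1002 forces failover back to an-master1001; a rough sketch, with the service unit and RM id as assumptions:

    # on an-master1002: restart the active ResourceManager
    sudo systemctl restart hadoop-yarn-resourcemanager
    # confirm which ResourceManager is active afterwards
    yarn rmadmin -getServiceState rm1   # rm id depends on yarn-site.xml (assumed id)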
08:34 <elukey> powercycle restbase1015 (frozen, no ssh, no metrics, no root console via serial available) [production]
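With no ssh and no serial console, the remaining option is an out-of-band power cycle through the management controller; a minimal sketch, assuming IPMI over LAN and the usual .mgmt DNS naming:

    ipmitool -I lanplus -H restbase1015.mgmt.eqiad.wmnet -U root -E chassis power cycle   # -E reads the password from IPMI_PASSWORD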
00:48 <krinkle@deploy1001> Synchronized php-1.32.0-wmf.24/extensions/CentralAuth/includes/specials/SpecialGlobalGroupMembership.php: T203767 - If2bfa092b (duration: 00m 50s) [production]
2018-10-12
21:13 <legoktm> deployed https://gerrit.wikimedia.org/r/465049 [releng]
20:32 <Krinkle> deploy01 /usr/local/bin/mwscript update.php --wiki=simplewiki --quick [releng]
20:32 <Krinkle> deploy01 /usr/local/bin/mwscript update.php --wiki=ruwiki --quick [releng]
20:22 <Krinkle> Upgrade mono packages to latest from mono-project (5.16) [cvn]
20:21 <Krinkle> Adding mono-project to apt sources on cvn-app8 and cvn-app9 [cvn]
20:14 <mutante> contint1001 - gzip merge-debug.log.2018-* debug.log.2018-* etc. (all in /var/log/zuul/); there seems to be no compression configured (but max age is 1 month) [releng]
20:11 <mutante> contint1001 - gzip zuul.log.2018-* in /var/log/ to prevent running out of disk [releng]
20:08 <mutante> contint1001 - apt-get clean - frees 1GB to bring it _just_ under the Icinga warning threshold :p [releng]
19:23 <legoktm> rebuilding docker images for https://gerrit.wikimedia.org/r/465606 [releng]
18:56 <brion> restarted vp9 background transcodes in eqiad, via mwmaint1002 [production]
18:37 <addshore> modified attachLatest.php script finished running over 9395 pages T206743 [production]
18:25 <addshore> running modified attachLatest.php script over ~9000 pages on wikidatawiki (with added wait for slaves) T206743 [production]
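attachLatest.php is a core MediaWiki maintenance script that fixes pages whose page_latest no longer points at their newest revision; the "added wait for slaves" is a local tweak that pauses for replication between writes so a ~9000-page run does not build up lag. A rough invocation sketch via mwscript, assuming the script's --fix option for a non-dry run:

    mwscript attachLatest.php --wiki=wikidatawiki --fix   # without --fix the script only reports what it would change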
17:46 <wikibugs> Updated channels.yaml to: 3a91169fd100b09dd7619b39bcc966e9c3f56f9f Fix phab board name for #wikimedia-dev-africa [tools.wikibugs]
17:18 <marxarelli> bringing integration-slave-docker-1021 back online [releng]
17:15 <marxarelli> killing 7 long-running containers (> 1 hour) on docker integration nodes (T198517) [releng]
16:42 <marxarelli> killing 9 long-running containers (> 1 hour) on docker integration nodes (T198517) [releng]
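One way to find containers that have been running longer than an hour is to compare each container's StartedAt with the current time; a minimal sketch of the one-hour cutoff used above:

    for id in $(docker ps -q); do
        started=$(docker inspect --format '{{.State.StartedAt}}' "$id")
        age=$(( $(date +%s) - $(date -d "$started" +%s) ))   # seconds the container has been running
        [ "$age" -gt 3600 ] && docker kill "$id"
    done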
16:31 <marxarelli> taking integration-slave-docker-1021 offline (host is unresponsive) [releng]
16:25 <marxarelli> deploying https://gerrit.wikimedia.org/r/c/integration/config/+/465671 for 321 affected jobs (T198517) [releng]
16:05 <marxarelli> deploying https://gerrit.wikimedia.org/r/c/integration/config/+/465671 for 1 job mediawiki-quibble-vendor-mysql-php70-docker (T198517) [releng]
15:50 <mutante> repair /dev/sde1 on ms-be2041 - T199198 [production]
15:48 <mutante> repair /dev/sdh1 on ms-be1043 - T199198 [production]
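The ms-be (Swift backend) hosts keep each data disk on its own XFS filesystem, so a broken partition like these is typically unmounted, checked with xfs_repair, and remounted; a rough sketch for the /dev/sde1 case, with the mount point as an assumption:

    umount /srv/swift-storage/sde1   # assumed mount point for this disk
    xfs_repair /dev/sde1             # add -L only if the log cannot be replayed
    mount /srv/swift-storage/sde1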