2017-12-11
08:52 <marostegui> Stop replication in sync on db1034 and db1039 - T163190 [production]
08:12 <elukey> powercycle ganeti1008 - all vms stuck, console com2 showed a ton of printks without a clear indicator of the root cause [production]
07:49 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1034 - T182556 (duration: 00m 45s) [production]
07:44 <_joe_> restarting hhvm on mw1189,mw1229,mw1235,mw1282,mw1285,mw1315,mw1316, all stuck with a kernel hang [production]
06:59 <_joe_> restarted hhvm, nginx on mw1280, hanging kernel operations [production]
06:45 <marostegui> Deploy schema change on s2 db1060 with replication enabled, this will generate some lag on s2 on labs - T174569 [production]
06:45 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1060 - T174569 (duration: 00m 44s) [production]
06:22 <marostegui> Compress s6 on db1096 - T178359 [production]
06:21 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1096:3316 to compress InnoDB there - T178359 (duration: 00m 45s) [production]
02:43 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.11) (duration: 09m 21s) [production]
2017-12-10
20:33 <elukey> execute restart-hhvm on mw1312 - hhvm stuck multiple times queueing requests [production]
20:01 <elukey> ran kafka preferred-replica-election for the kafka analytics cluster (1012->1022) to re-add kafka1012 to the kafka brokers acting as partition leaders (will spread the load in a better way) [production]
05:52 <zhuyifei1999_> deployed e835a46 to quarry-main-01 and restarted uwsgi T165169 [quarry]
05:49 <zhuyifei1999_> quarry-main-01: `ALTER IGNORE TABLE star ADD UNIQUE INDEX star_user_query_index (user_id, query_id);` Records: 728 Duplicates: 17 Warnings: 0 T165169 [quarry]
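(Context for the entry above — a hedged sketch, not part of the original log. The statement relies on old MySQL `ALTER IGNORE` semantics, which silently delete rows that would violate the new unique key; that is what the reported "Duplicates: 17" refers to.)

```sql
-- The statement as logged (table and column names from the entry above):
ALTER IGNORE TABLE star
  ADD UNIQUE INDEX star_user_query_index (user_id, query_id);

-- Note: the IGNORE clause was removed in MySQL 5.7. On modern servers the
-- equivalent is to delete duplicate rows first, then add the index normally.
```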
2017-12-09
17:38 <Amir1> ladsgroup@wikilabels-01:/srv/wikilabels/config$ less ~/eswikiquote.revisions_for_review.5k_2017.json | sudo -u www-data ../venv/bin/wikilabels task_inserts 64 (T177762) [wikilabels]
17:38 <Amir1> ladsgroup@wikilabels-01:/srv/wikilabels/config$ sudo -u www-data /srv/wikilabels/venv/bin/wikilabels new_campaign eswikiquote "Editar calidad (20k muestra aleatoria, 2017)" damaging_and_goodfaith DiffToPrevious 1 50 (T177762) [wikilabels]
17:00 <apergos> restarted hhvm on mw1276, the same old hang with the same old symptoms [production]
16:10 <awight@tin> Finished deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity (take 4!) (duration: 03m 01s) [production]
16:07 <awight@tin> Started deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity (take 4!) [production]
16:02 <awight@tin> Finished deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity (duration: 05m 58s) [production]
15:56 <awight@tin> Started deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity [production]
15:55 <awight@tin> Finished deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity (duration: 00m 17s) [production]
15:55 <awight@tin> Started deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity [production]
15:53 <awight@tin> Finished deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity (duration: 00m 31s) [production]
15:53 <awight@tin> Started deploy [ores/deploy@1c0ede0]: Reducing ORES Celery log verbosity [production]
15:53 <apergos> did same on scb1002,3,4 [production]
15:48 <awight> Making an emergency deployment to ORES logging config to reduce verbosity. [production]
15:45 <apergos> on scb1001 moved daemon.log out of the way, did "service rsyslog rotate", saved the last 5000 entries for use by ores team, removed the log [production]
11:44 <apergos> that server list: mw1278, 1277, 1226, 1234, 1230 [production]
11:42 <apergos> restarted hhvm on api servers after lockup [production]
11:19 <legoktm@tin> Synchronized wmf-config/InitialiseSettings.php: Disable ORES in fawiki - T182354 (duration: 00m 45s) [production]
03:13 <legoktm> deployed https://gerrit.wikimedia.org/r/396486 [releng]
00:11 <Jamesofur> removed 2FA from EVinente after verification T182373 [production]
2017-12-08
23:23 <hashar> force ran puppet on contint2001 [production]
23:08 <Sagan> CAC: Rerunning update.php foreachwiki after fixing config error [rcm]
23:08 <Sagan> CAC: Updating composer [rcm]
23:04 <Sagan> Xenon: Updating Phabricator [rcm]
23:02 <Sagan> CAC: Updating vagrant (vagrant git-update) [rcm]
23:01 <Sagan> CAC: Updating vagrant (git pull of vagrant dir) [rcm]
23:00 <Sagan> Neon: Updating packages [rcm]
22:56 <Sagan> Tin: Updating packages [rcm]
22:55 <Sagan> CAC: Updating packages [rcm]
22:55 <Sagan> Oxygen: Updating packages [rcm]
22:55 <Sagan> Xenon: Updating packages [rcm]
22:15 <madhuvishy> Kicked off rsync of /data/xmldatadumps/public to labstore1006 & 7 [production]
22:05 <smalyshev@tin> Finished deploy [wdqs/wdqs@353b3cb]: temporary fix for T182464, better fix coming soon (duration: 05m 55s) [production]
21:59 <smalyshev@tin> Started deploy [wdqs/wdqs@353b3cb]: temporary fix for T182464, better fix coming soon [production]
21:20 <legoktm> deployed https://gerrit.wikimedia.org/r/396480 [releng]
20:22 <aaron@tin> Synchronized php-1.31.0-wmf.11/includes/Setup.php: a319c3e7ab61 - disable cpPosTime injection (duration: 00m 45s) [production]
19:47 <legoktm> deployed https://gerrit.wikimedia.org/r/396453 [releng]