2020-07-05
21:46 <qchris> Restarting gerrit on gerrit2001 to pick up new war and jars. [production]
21:45 <qchris@deploy1001> Finished deploy [gerrit/gerrit@fbd0684]: Bump gerrit to 3.2.2-102-g3bbb138e13, zuul plugin to master-0-g7accc67, and gitiles to v3.2.2-1-g00c5ca0-with-0e3b533 on gerrit2001 (duration: 00m 10s) [production]
21:45 <qchris@deploy1001> Started deploy [gerrit/gerrit@fbd0684]: Bump gerrit to 3.2.2-102-g3bbb138e13, zuul plugin to master-0-g7accc67, and gitiles to v3.2.2-1-g00c5ca0-with-0e3b533 on gerrit2001 [production]
21:32 <qchris> Restarting gerrit on gerrit1002 to pick up new wars and jars. [production]
21:32 <qchris@deploy1001> Finished deploy [gerrit/gerrit@fbd0684]: Bump gerrit to 3.2.2-102-g3bbb138e13 and zuul plugin to master-0-g7accc67 (duration: 00m 08s) [production]
21:32 <qchris@deploy1001> Started deploy [gerrit/gerrit@fbd0684]: Bump gerrit to 3.2.2-102-g3bbb138e13 and zuul plugin to master-0-g7accc67 [production]
21:20 <qchris> Enable puppet on gerrit1002 (gerrit-test) again to let it catch up [production]
16:01 <gehel> restart elastic-psi on elastic1052 (high GC rate) [production]
15:56 <gehel> restart blazegraph + updater on wdqs1007 and depool to allow catching up on lag [production]
2020-07-04
23:51 <Amir1> deleted deployment-sentry01 (T106915) [releng]
19:23 <qchris@deploy1001> Finished deploy [gerrit/gerrit@b78914b]: Bump gitiles to v3.2.2-1-g00c5ca0-with-0e3b533 on gerrit1002 (duration: 00m 08s) [production]
19:23 <qchris@deploy1001> Started deploy [gerrit/gerrit@b78914b]: Bump gitiles to v3.2.2-1-g00c5ca0-with-0e3b533 on gerrit1002 [production]
16:04 <wm-bot> <lucaswerkmeister> deployed cbf5ad6440 (Norwegian Bokmål) [tools.lexeme-forms]
14:05 <qchris> Disable puppet on gerrit1002 (gerrit-test) to deploy Gerrit UI updates there to gather feedback [production]
12:42 <reedy@deploy1001> Synchronized wmf-config/interwiki.php: Update interwiki cache (duration: 02m 24s) [production]
10:52 <joal> Rerun mediawiki-geoeditors-monthly-wf-2020-06 after heisenbug (patch provided for long-term fix) [analytics]
08:56 <hashar> Fixed Jenkins collapsible section parsing for Quibble. A logger changed from quibble.cmd to quibble.commands. # T220586 [releng]
02:28 <reedy@deploy1001> Synchronized php-1.35.0-wmf.39/extensions/Score/includes/Score.php: Short circuit lilypond version check to allow usage of cached files T257066 (duration: 00m 55s) [production]
2020-07-03
21:49 <reedy@deploy1001> Synchronized php-1.35.0-wmf.39/extensions/Score/: Sync maintenance script (duration: 00m 58s) [production]
21:44 <RhinosF1> decom sopel.bot [tools.zppixbot]
19:20 <joal> restart failed webrequest-load job webrequest-load-wf-text-2020-7-3-17 with higher thresholds - error due to burst of requests in ulsfo [analytics]
19:13 <joal> restart mediawiki-history-denormalize oozie job using 0.0.115 refinery-job jar [analytics]
19:05 <joal> kill manual execution of mediawiki-history to save an-coord1001 (too big of a spark-driver) [analytics]
18:53 <joal> restart webrequest-load-wf-text-2020-7-3-17 after hive server failure [analytics]
18:52 <joal> restart data_quality_stats-wf-event.navigationtiming-useragent_entropy-hourly-2020-7-3-15 after hive server failure [analytics]
18:51 <joal> restart virtualpageview-hourly-wf-2020-7-3-15 after hive-server failure [analytics]
18:47 <cdanis> ✔️ cdanis@an-coord1001.eqiad.wmnet ~ 🕒☕ sudo systemctl restart hive-server2.service [production]
16:51 <krinkle@deploy1001> Synchronized wmf-config/CommonSettings.php: Ifa929b2ad4 (duration: 00m 57s) [production]
16:41 <joal> Rerun mediawiki-history-check_denormalize-wf-2020-06 after having cleaned up wrong files and restarted a job without deterministic skewed join [analytics]
16:02 <reedy@deploy1001> Synchronized wmf-config/CommonSettings.php: Rename wgRestrictionMethod to wgShellRestrictionMethod (duration: 00m 58s) [production]
15:46 <jayme@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
15:43 <jayme@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
15:43 <jynus@cumin1001> dbctl commit (dc=all): 'Reduce db1118 weight to spread load more evenly', diff saved to https://phabricator.wikimedia.org/P11730 and previous config saved to /var/cache/conftool/dbconfig/20200703-154337-jynus.json [production]
15:40 <jayme@cumin1001> START - Cookbook sre.ganeti.makevm [production]
15:38 <jayme@cumin1001> START - Cookbook sre.ganeti.makevm [production]
15:09 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.stop-cluster (exit_code=0) [production]
15:02 <elukey@cumin1001> START - Cookbook sre.hadoop.stop-cluster [production]
14:11 <elukey@cumin1001> END (FAIL) - Cookbook sre.hadoop.stop-cluster (exit_code=99) [production]
14:11 <_joe_> restarted php-fpm on wtp1033, stuck in SIGILL [production]
13:59 <elukey@cumin1001> START - Cookbook sre.hadoop.stop-cluster [production]
12:51 <arturo> [codfw1dev] galera cluster should be up and running, openstack happy (T256283) [admin]
12:41 <hashar> Restarting Zuul / CI [production]
11:44 <arturo> [codfw1dev] restoring glance database backup from bacula into cloudcontrol2001-dev (T256283) [admin]
11:39 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
11:39 <arturo> [codfw1dev] stopped mysql database in the galera cluster T256283 [admin]
11:36 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
11:36 <arturo> [codfw1dev] dropped glance database in the galera cluster T256283 [admin]
11:32 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
11:29 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
11:29 <moritzm> rebooting urldownloader standby hosts for kernel updates (1002/2002) [production]