2016-08-08
02:20 <mwdeploy@tin> scap sync-l10n completed (1.28.0-wmf.13) (duration: 09m 21s) [production]
2016-08-07
23:39 <reedy@tin> Synchronized wmf-config: last 2 to wfLoadExtension (duration: 00m 59s) [production]
23:12 <reedy@tin> Synchronized wmf-config: Handful more extensions to wfLoadExtension (duration: 00m 49s) [production]
22:57 <reedy@tin> Synchronized wmf-config: RestbaseUpdateJobs to wfLoadExtension (duration: 00m 51s) [production]
22:55 <reedy@tin> Synchronized wmf-config/CommonSettings.php: Image Area to 100MP (duration: 00m 48s) [production]
22:44 <reedy@tin> Synchronized wmf-config/extension-list: 2 more to extension.json (duration: 00m 48s) [production]
22:35 <reedy@tin> Synchronized docroot/noc/conf: Cleanup! (duration: 00m 50s) [production]
22:30 <reedy@tin> Synchronized docroot: trusted-xff symlink updates (duration: 00m 50s) [production]
22:28 <reedy@tin> Synchronized wmf-config/: Swap trusted-xff from cdb to php (duration: 00m 51s) [production]
20:49 <legoktm@tin> Synchronized php-1.28.0-wmf.13/extensions/GlobalBlocking/extension.json: Adding globalblock-exempt grant for OAuth - T142306 (duration: 00m 57s) [production]
16:22 <cwd|afk> disabled globalcollect recurring donations [production]
16:13 <akosiaris> restarted apache2 on palladium for full depool to take place [production]
12:47 <hashar> root cause of CI outage is T126552 [production]
12:41 <hashar> CI fully back. Root cause was that Jenkins could not create slave configs due to: Could not create rootDir /var/lib/jenkins/config-history/xxxx . Fixed by deleting the stale entries via find /var/lib/jenkins/config-history/nodes/ -path '*_deleted_*' -delete [production]
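The cleanup step in the entry above can be sketched as follows. This is a hedged, safe-to-run illustration: a throwaway temp tree stands in for /var/lib/jenkins/config-history, and the directory names under it are made up for the demo.

```shell
# Stand-in for /var/lib/jenkins/config-history with one stale "_deleted_"
# node directory (the kind that blocked Jenkins from recreating slave
# configs) and one live node directory.
dir=$(mktemp -d)
mkdir -p "$dir/nodes/slave01_deleted_20160807" "$dir/nodes/slave02"

# Same find invocation as in the log entry, pointed at the stand-in tree.
# -delete implies depth-first traversal, so the matching (empty) stale
# directory is removed while slave02 is left alone.
find "$dir/nodes/" -path '*_deleted_*' -delete

ls "$dir/nodes"
```

Running this prints only `slave02`, confirming that the `-path '*_deleted_*'` predicate restricts deletion to the stale entries.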
12:12 <hashar> CI stuck spawning instances via Nodepool, apparently due to: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) --- though there are only 8 instances ... [production]
02:24 <l10nupdate@tin> ResourceLoader cache refresh completed at Sun Aug 7 02:24:55 UTC 2016 (duration 5m 51s) [production]
02:19 <mwdeploy@tin> scap sync-l10n completed (1.28.0-wmf.13) (duration: 08m 55s) [production]
2016-08-06
23:09 <yuvipanda> cleaned and re-accepted salt-key for labvirt1014, minion back up now [production]
22:49 <yuvipanda> ran 'service mariadb start' on labsdb1003; the puppet run didn't do anything [production]
19:43 <andrewbogott> rebooting labvirt1012 for a kernel downgrade [production]
19:12 <andrewbogott> rebooting labvirt1013 for kernel downgrade [production]
09:36 <akosiaris> reverted to the old backed-up bayes database on mendelevium.eqiad.wmnet (OTRS) to get bayes training working again [production]
02:26 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Aug 6 02:26:00 UTC 2016 (duration 5m 48s) [production]
02:20 <mwdeploy@tin> scap sync-l10n completed (1.28.0-wmf.13) (duration: 08m 46s) [production]
01:02 <andrewbogott> re-imaging labvirt1014 [production]
2016-08-05
23:39 <tgr@tin> Synchronized php-1.28.0-wmf.13/includes/api/ApiLogin.php: temporarily re-add dropped API feature to unbreak Pywikibot T142155 (duration: 00m 48s) [production]
22:37 <andrewbogott> rebooting labvirt1014 as part of a protracted iptables/nova-compute investigation [production]
21:03 <reedy@tin> Synchronized wmf-config/CommonSettings.php: Add transitionary timeline config primarily for beta (duration: 00m 57s) [production]
18:26 <andrewbogott> restarting rabbitmq-server on labcontrol1001 [production]
17:27 <ejegg> rolled back SmashPig to 26a475bf5ae03d88ebc4c2fe9707d562d8e3afe3 [production]
17:25 <ejegg> updated SmashPig from 26a475bf5ae03d88ebc4c2fe9707d562d8e3afe3 to 2e8a2f4c92840bd999a8742211e0a65d484fde00 [production]
16:03 <akosiaris> T107306 uploaded to apt.wikimedia.org jessie-wikimedia: apertium-spa-arg_0.4.0~r64399-1+wmf1 [production]
16:02 <akosiaris> T107306 uploaded to apt.wikimedia.org jessie-wikimedia: apertium-arg-cat_0.1.0~r64925-1+wmf1 [production]
15:16 <akosiaris> T135176 pool wtp1019-wtp1024 [production]
15:13 <akosiaris@palladium> conftool action : set/pooled=yes; selector: wtp1024.eqiad.wmnet (tags: ['dc=eqiad', 'cluster=parsoid', 'service=parsoid']) [production]
15:13 <akosiaris@palladium> conftool action : set/pooled=yes; selector: wtp1023.eqiad.wmnet (tags: ['dc=eqiad', 'cluster=parsoid', 'service=parsoid']) [production]
15:13 <akosiaris@palladium> conftool action : set/pooled=yes; selector: wtp1022.eqiad.wmnet (tags: ['dc=eqiad', 'cluster=parsoid', 'service=parsoid']) [production]
15:13 <akosiaris@palladium> conftool action : set/pooled=yes; selector: wtp1021.eqiad.wmnet (tags: ['dc=eqiad', 'cluster=parsoid', 'service=parsoid']) [production]
15:13 <akosiaris@palladium> conftool action : set/pooled=yes; selector: wtp1020.eqiad.wmnet (tags: ['dc=eqiad', 'cluster=parsoid', 'service=parsoid']) [production]
15:13 <akosiaris@palladium> conftool action : set/pooled=yes; selector: wtp1019.eqiad.wmnet (tags: ['dc=eqiad', 'cluster=parsoid', 'service=parsoid']) [production]
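The six set/pooled=yes actions above look like one pooling pass over the new parsoid hosts wtp1019-wtp1024 (matching the "pool wtp1019-wtp1024" summary entry). A minimal sketch of such a loop, assuming conftool's confctl CLI; the exact selector syntax is an assumption, and the commands are only echoed here rather than executed, since conftool is not assumed to be installed:

```shell
# Sketch only: print (rather than run) one pooling command per new parsoid
# host. Remove the echo to actually issue the actions on a host where
# conftool/confctl is configured.
cmds=$(for i in $(seq 1019 1024); do
  echo "confctl select name=wtp${i}.eqiad.wmnet set/pooled=yes"
done)
echo "$cmds"
```

Each printed command corresponds to one of the logged conftool actions, covering all six hosts in a single pass.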
13:56 <akosiaris> strontium has issues, see https://phabricator.wikimedia.org/T142187 [production]
13:53 <moritzm> uploaded gerrit 2.12.2-wmf2 for jessie-wikimedia to apt.wikimedia.org [production]
13:49 <ostriches> gerrit: quick restart to pick up apache and java updates [production]
13:22 <paravoid> started spamassassin/exim4 on mendelevium [production]
12:19 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Pool db1079 as an api server; reduce main load (duration: 00m 49s) [production]
12:14 <akosiaris> just encountered https://wikitech.wikimedia.org/wiki/OTRS#SpamAssassin_stops_reporting_Bayes_results. Recovered the db with sa-learn --sync, then forced spam/ham runs via the web interface [production]
11:06 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Document T142135 and apply workaround (duration: 00m 52s) [production]
11:03 <jynus@tin> Synchronized wmf-config/db-codfw.php: Document T142135 (duration: 00m 56s) [production]
10:54 <gehel> killing elasticsearch on logstash1004 (stuck during shutdown) [production]