2019-10-09
08:39 <vgutierrez> Switch cp1082 from nginx to ats-tls - T231433 [production]
08:24 <moritzm> draining ganeti1006 for upcoming reboot (combined kernel/qemu security updates) [production]
08:18 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:18 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
08:18 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:18 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
08:14 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:14 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
08:01 <vgutierrez> Switch cp2011 from nginx to ats-tls - T231433 [production]
07:48 <moritzm> reduced RAM assignment for boron to 8G [production]
07:38 <vgutierrez> Switch cp3038 from nginx to ats-tls - T231433 [production]
07:37 <hashar> Build Quibble 0.0.37 docker containers [releng]
07:29 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:29 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
07:28 <hashar> Tag Quibble 0.0.37 @ 387d33c13 [releng]
06:34 <vgutierrez> switching from nginx to ats-tls on cp4024 - T231433 [production]
05:47 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
05:47 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
05:45 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool es1013, es1014 T227536 (duration: 01m 00s) [production]
05:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1085 for schema change - lag will be generated on s6 labs', diff saved to https://phabricator.wikimedia.org/P9274 and previous config saved to /var/cache/conftool/dbconfig/20191009-051911-marostegui.json [production]
05:11 <marostegui> Restart gerrit as it is down [production]
04:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1105:3312 for schema change', diff saved to https://phabricator.wikimedia.org/P9273 and previous config saved to /var/cache/conftool/dbconfig/20191009-045941-marostegui.json [production]
04:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1103:3312', diff saved to https://phabricator.wikimedia.org/P9272 and previous config saved to /var/cache/conftool/dbconfig/20191009-044752-marostegui.json [production]
04:39 <vgutierrez> switching cp5004 from nginx to ats-tls - T231433 [production]
2019-10-08
23:28 <mutante> phab1001 - replacing tin.eqiad.wmnet with deploy1001.eqiad.wmnet in phabricator/deployment-cache/.config:git_server - wondering if we can ever get rid of tin (T190568) [production]
23:05 <ebernhardson@deploy1001> Synchronized wmf-config/: [cirrus] drop support for HHVM connection pooling (duration: 00m 59s) [production]
22:13 <James_F> Zuul: Actually restrict mwcore-codehealth-master-non-voting and mwext-codehealth-master-non-voting to master [releng]
21:58 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: Split out the CSP configuration so it can be more easily overridden (duration: 00m 59s) [production]
21:28 <XenoRyet> updated payments-wiki from d2e2637275 to 8a65f57874 [production]
21:09 <chaomodus> restarted nagios-nrpe-server on notebook1003 [production]
20:38 <mutante> labweb1001 - disabled 2fa for myself on Wikitech using disableOATHAuthForUser.php --wiki=labswiki to debug T234996 [production]
20:24 <mutante> labweb1001 - edit /srv/mediawiki/wmf-config/wikitech.php and change "false" to "true" on line 52 to enable LDAP debug logging for T234996 [production]
20:19 <hashar> Tag Quibble 0.0.36 @ 4b46156 # T181942 T190829 T199939 T230340 T233140 T233143 [releng]
19:51 <marxarelli> 1.35.0-wmf.1 promoted to group0, cc: T233849. no rise in error rates. no new relevant errors [production]
19:43 <dduvall@deploy1001> rebuilt and synchronized wikiversions files: group0 to 1.35.0-wmf.1 [production]
19:39 <bstorm_> drained tools-worker-1007/8 to rebalance the cluster [tools]
19:38 <dduvall@deploy1001> Synchronized php-1.35.0-wmf.1/skins/MinervaNeue/: sync T233521 backport prior to group0 (duration: 00m 59s) [production]
19:34 <bstorm_> drained tools-worker-1009 and then 1014 for rebalancing [tools]
19:29 <shdubsh> adding swagger exporter to apt repo [production]
19:27 <bstorm_> drained tools-worker-1005 for rebalancing (and put these back in service as I went) [tools]
19:24 <bstorm_> drained tools-worker-1003 and 1009 for rebalancing [tools]
19:13 <dduvall@deploy1001> Finished scap: testwiki to php-1.35.0-wmf.1 and rebuild l10n cache (duration: 19m 21s) [production]
18:54 <dduvall@deploy1001> Started scap: testwiki to php-1.35.0-wmf.1 and rebuild l10n cache [production]
18:53 <godog> codfw-prod: more weight to ms-be205[1-6] - T233638 [production]
18:45 <dduvall@deploy1001> Pruned MediaWiki: 1.34.0-wmf.24 (duration: 08m 24s) [production]
17:32 <marxarelli> cutting wmf/1.35.0-wmf.1 [production]
16:27 <James_F> Docker: [mediawiki-phan] Fix phan version validation [releng]
16:17 <cstone> civicrm revision changed from db7ef10bfa to 2ba100486e [production]
16:00 <@> helmfile [CODFW] Ran 'apply' command on namespace 'wikifeeds' for release 'production' . [production]
15:58 <@> helmfile [EQIAD] Ran 'apply' command on namespace 'wikifeeds' for release 'production' . [production]