2019-09-03
11:05 <joal> Kill-restart data-quality bundle [analytics]
11:01 <joal> Kill-restart cassandra bundle (beginning of month) [analytics]
10:56 <joal> Hotfixing webrequest-load job to prevent redeploying [analytics]
10:50 <joal> Kill/restart webrequest bundle [analytics]
10:17 <ema> cp1083: varnish-backend-restart -- mbox lag, fetch failures [production]
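(The cp1083/cp1085/cp1089 entries in this section record the same operation; a minimal sketch of the invocation on the host, assuming the wrapper script named in the entry takes no arguments and handles depool/repool itself:

    # on cp1083; varnish-backend-restart is the wrapper named in the entry,
    # assumed here to depool the host, restart the varnish backend, and repool it
    sudo varnish-backend-restart
)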
09:59 <_joe_> removing old lvs-related scripts from ores* [production]
09:46 <moritzm> moved uid=smalyshev from cn=wmf to cn=nda [production]
09:46 <mutante> install1002 - import GPG key for getenvoy repo, importing envoy for jessie with reprepro update [production]
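(A hedged sketch of the two steps on install1002; the key file name and the reprepro base directory are assumptions:

    gpg --import getenvoy-release.key    # hypothetical file name for the upstream signing key
    reprepro -vb /srv/wikimedia update   # pull the configured upstream into the local mirror; basedir is an assumption
)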
09:17 <hashar> Reloaded Zuul for https://gerrit.wikimedia.org/r/533916 "Apply REL based pipelines" [releng]
09:16 <hashar> Deploy refactor of Zuul pipelines which might mean that some repos/branches would miss jobs or have extra unwanted jobs. In such case please fill in a task against #continuous-integration-config [releng]
09:16 <hashar> Deploy refactor of Zuul pipelines which might mean that some repos/branches would miss jobs or have extra unwanted jobs. In such case please fill in a task against #continuous-integration-config [production]
09:04 <ema> cp1085: varnish-backend-restart, mbox lag and fetch failures [production]
09:03 <gehel> reset kartotherian password - T231842 [production]
08:54 <ema> cp1089: varnish-backend-restart due to mbox lag and fetch failures [production]
08:49 <ema@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp1075.eqiad.wmnet,service=ats-be [production]
08:49 <ema> cp1075: pool ats-be with caching enabled T228629 [production]
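(The conftool entry above is the auto-logged form of a confctl run; roughly, assuming the standard confctl CLI:

    # pool the ats-be service on cp1075 (selector and action taken from the logged entry)
    sudo confctl select 'name=cp1075.eqiad.wmnet,service=ats-be' set/pooled=yes
)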
08:32 <hashar> reloading zuul for https://gerrit.wikimedia.org/r/#/c/integration/config/+/533905/ "Remove wmf pipelines from non mediawiki repos" [releng]
08:32 <hashar> err [releng]
08:32 <hashar> reloading zuul for https://gerrit.wikimedia.org/r/#/c/integration/config/+/533909/ "zuul: add pipelines for MediaWiki releases" [releng]
08:26 <marostegui> Add REPLICATION grant to wikiuser and wikiadmin on db1073 with replication enabled - T229657 [production]
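(A minimal sketch of the grant above; the exact replication privilege and the client host pattern are assumptions:

    # run on db1073 with binary logging left on, so the grant replicates ("with replication enabled")
    sudo mysql -e "GRANT REPLICATION CLIENT ON *.* TO 'wikiuser'@'10.%'; GRANT REPLICATION CLIENT ON *.* TO 'wikiadmin'@'10.%';"
)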
08:24 <JeanFred> Deploy latest from Git master: 39aec68, 59e2ffb [tools.heritage]
08:21 <gehel> purging maps / info.json from cache - T231842 [production]
08:21 <joal> Kill-restart mediawiki-load and geoeditors-load jobs after corrective deploy [analytics]
08:10 <joal> Deploy refinery onto HDFS [analytics]
08:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1133 with weight 0 T229657', diff saved to https://phabricator.wikimedia.org/P9031 and previous config saved to /var/cache/conftool/dbconfig/20190903-080958-marostegui.json [production]
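(The dbctl entries in this section are auto-logged commits; a hedged sketch of the sequence behind this one, assuming the usual instance/config subcommands (the weight-setting step may differ):

    sudo dbctl instance db1133 pool                                   # pool the instance; weight handling not shown, assumed set separately
    sudo dbctl config commit -m 'Pool db1133 with weight 0 T229657'   # commit the staged change with the logged message
)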
08:04 <joal@deploy1001> Finished deploy [analytics/refinery@4810dfa]: Regular weekly analytics deploy train - Second try (duration: 00m 27s) [production]
08:03 <joal@deploy1001> Started deploy [analytics/refinery@4810dfa]: Regular weekly analytics deploy train - Second try [production]
08:02 <joal@deploy1001> deploy aborted: Regular weekly analytics deploy train (duration: 27m 47s) [production]
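(The three refinery lines above are scap's own started/aborted/finished messages; the underlying command is roughly, assuming the usual deployment directory on deploy1001:

    cd /srv/deployment/analytics/refinery    # path is an assumption
    scap deploy 'Regular weekly analytics deploy train - Second try'
)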
07:16 <marostegui> Change min_replicas to 6 on s1 for eqiad and codfw T231019 [production]
06:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1133 with weight 0 T229657', diff saved to https://phabricator.wikimedia.org/P9029 and previous config saved to /var/cache/conftool/dbconfig/20190903-063932-marostegui.json [production]
06:10 <mutante> running puppet on cp-text_eqiad to switch people.wm.org to https backend [production]
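(A hedged sketch of the fleet-wide puppet run above, assuming a cumin alias for the eqiad text caches (the alias name is hypothetical):

    sudo cumin 'A:cp-text and A:eqiad' 'run-puppet-agent'
)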
06:04 <marostegui> Change min_replicas to 4 on s7 for eqiad and codfw T231019 [production]
05:53 <mutante> people.wikimedia.org - switching to TLS termination with envoy [production]
05:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Reorganize s7 codfw T230106', diff saved to https://phabricator.wikimedia.org/P9028 and previous config saved to /var/cache/conftool/dbconfig/20190903-055234-marostegui.json [production]
05:47 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Reorganize s7 codfw T230106 (duration: 00m 54s) [production]
05:22 <marostegui> Rename tables on the puppet database on m1 master - T231539 [production]
05:17 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Promote db2118 to s7 codfw master (db2047 -> db2118) T230106 (duration: 00m 54s) [production]
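(The Synchronized lines in this section come from scap; this one corresponds roughly to the following, run from the mediawiki-config checkout on deploy1001:

    scap sync-file wmf-config/db-codfw.php 'Promote db2118 to s7 codfw master (db2047 -> db2118) T230106'
)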
05:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2047 old master from s7 T230106', diff saved to https://phabricator.wikimedia.org/P9027 and previous config saved to /var/cache/conftool/dbconfig/20190903-051619-marostegui.json [production]
05:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Promote db2118 to s7 codfw master (db2047 -> db2118) T230106', diff saved to https://phabricator.wikimedia.org/P9026 and previous config saved to /var/cache/conftool/dbconfig/20190903-051450-marostegui.json [production]
05:02 <marostegui> Promote db2118 to s7 codfw master (db2047 -> db2118) T230106 [production]
04:50 <marostegui> Drop filejournal table on s3 - T51195 [production]
04:49 <vgutierrez> repooling cp2002 - T231433 [production]
04:36 <vgutierrez> upgrading ATS to 8.0.5-1wm4 on cp2002 - T231433 [production]
04:28 <vgutierrez> Switching cp2002 from nginx to ats-tls - T231433 [production]
2019-09-02
22:08 <ebernhardson> ban elastic1027 from production-search-chi [production]
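(Banning a node is normally done with Elasticsearch's allocation-exclusion setting; a sketch of the raw API call, noting that WMF tooling may wrap it and the port for production-search-chi is an assumption:

    curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
    {"transient": {"cluster.routing.allocation.exclude._name": "elastic1027*"}}'
)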
20:48 <ebernhardson> restart production-search-eqiad on elastic1027 again [production]
20:33 <mbsantos@deploy1001> Finished deploy [kartotherian/deploy@453ee8a]: Make osm-pbf source private (T231842) (duration: 02m 09s) [production]
20:31 <mbsantos@deploy1001> Started deploy [kartotherian/deploy@453ee8a]: Make osm-pbf source private (T231842) [production]
20:05 <wm-bot> <lucaswerkmeister> deployed c601f81355 (retry earlier after wiki read-only) [tools.quickcategories]
19:54 <ebernhardson> restart elasticsearch_6@production-search-eqiad on elastic1027 [production]
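(The unit name is given in the entry, so the restart is plain systemd on elastic1027:

    sudo systemctl restart elasticsearch_6@production-search-eqiad.service
)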