2019-09-03
10:17 <ema> cp1083: varnish-backend-restart -- mbox lag, fetch failures [production]
09:59 <_joe_> removing old lvs-related scripts from ores* [production]
09:46 <moritzm> moved uid=smalyshev from cn=wmf to cn=nda [production]
09:46 <mutante> install1002 - import GPG key for getenvoy repo, importing envoy for jessie with reprepro update [production]
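(For context, importing an upstream package into the WMF apt repo with reprepro generally means trusting the upstream signing key and then pulling from the configured update source. A minimal sketch of what the logged step could look like on install1002; the key file name and the jessie-wikimedia codename are assumptions inferred from "jessie" in the entry:)
    # import the getenvoy signing key so reprepro can verify the upstream packages (file name illustrative)
    gpg --import getenvoy-release.asc
    # pull the envoy packages from the configured upstream source into the local repo
    reprepro update jessie-wikimedia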
09:16 <hashar> Deploy refactor of Zuul pipelines, which might mean that some repos/branches miss jobs or have extra unwanted jobs. In such a case please file a task against #continuous-integration-config [production]
09:04 <ema> cp1085: varnish-backend-restart, mbox lag and fetch failures [production]
09:03 <gehel> reset kartotherian password - T231842 [production]
08:54 <ema> cp1089: varnish-backend-restart due to mbox lag and fetch failures [production]
08:49 <ema@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp1075.eqiad.wmnet,service=ats-be [production]
08:49 <ema> cp1075: pool ats-be with caching enabled T228629 [production]
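(The automated conftool entry above is the bot-logged form of a confctl invocation run by ema; a minimal sketch of the underlying command, using the selector and action exactly as logged:)
    # pool the ats-be service on cp1075 with caching enabled (T228629)
    sudo confctl select 'name=cp1075.eqiad.wmnet,service=ats-be' set/pooled=yes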
08:26 <marostegui> Add REPLICATION grant to wikiuser and wikiadmin on db1073 with replication enabled - T229657 [production]
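(The grant change on db1073 would be an ordinary MariaDB GRANT statement; a hedged sketch, assuming REPLICATION CLIENT is the privilege meant and an internal-network host pattern, neither of which is stated in the log:)
    # grant replication-related privileges to the MediaWiki DB users (privilege and host pattern are illustrative)
    sudo mysql -e "GRANT REPLICATION CLIENT ON *.* TO 'wikiuser'@'10.%'; GRANT REPLICATION CLIENT ON *.* TO 'wikiadmin'@'10.%';"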
08:21 <gehel> purging maps / info.json from cache - T231842 [production]
08:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1133 with weight 0 T229657', diff saved to https://phabricator.wikimedia.org/P9031 and previous config saved to /var/cache/conftool/dbconfig/20190903-080958-marostegui.json [production]
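(The dbctl entries here are the automatically logged commits; the manual workflow behind them is roughly an instance edit followed by a config commit. A minimal sketch, where using `dbctl instance ... edit` to set the weight is an assumption:)
    # adjust db1133's weight to 0 in its section, then commit the change (T229657)
    sudo dbctl instance db1133 edit
    sudo dbctl config commit -m 'Pool db1133 with weight 0 T229657'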
08:04 <joal@deploy1001> Finished deploy [analytics/refinery@4810dfa]: Regular weekly analytics deploy train - Second try (duration: 00m 27s) [production]
08:03 <joal@deploy1001> Started deploy [analytics/refinery@4810dfa]: Regular weekly analytics deploy train - Second try [production]
08:02 <joal@deploy1001> deploy aborted: Regular weekly analytics deploy train (duration: 27m 47s) [production]
07:16 <marostegui> Change min_replicas to 6 on s1 for eqiad and codfw T231019 [production]
06:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1133 with weight 0 T229657', diff saved to https://phabricator.wikimedia.org/P9029 and previous config saved to /var/cache/conftool/dbconfig/20190903-063932-marostegui.json [production]
06:10 <mutante> running puppet on cp-text_eqiad to switch people.wm.org to https backend [production]
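(Running Puppet across a host group like this is usually driven from a cumin host with the run-puppet-agent wrapper; a sketch where treating "cp-text_eqiad" as a cumin alias is an assumption:)
    # trigger a puppet run on the eqiad cache_text hosts (alias form is illustrative)
    sudo cumin 'A:cp-text_eqiad' 'run-puppet-agent'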
06:04 <marostegui> Change min_replicas to 4 on s7 for eqiad and codfw T231019 [production]
05:53 <mutante> people.wikimedia.org - switching to TLS termination with envoy [production]
05:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Reorganize s7 codfw T230106', diff saved to https://phabricator.wikimedia.org/P9028 and previous config saved to /var/cache/conftool/dbconfig/20190903-055234-marostegui.json [production]
05:47 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Reorganize s7 codfw T230106 (duration: 00m 54s) [production]
05:22 <marostegui> Rename tables on the puppet database on m1 master - T231539 [production]
05:17 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Promote db2118 to s7 codfw master (db2047 -> db2118) T230106 (duration: 00m 54s) [production]
05:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2047 old master from s7 T230106', diff saved to https://phabricator.wikimedia.org/P9027 and previous config saved to /var/cache/conftool/dbconfig/20190903-051619-marostegui.json [production]
05:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Promote db2118 to s7 codfw master (db2047 -> db2118) T230106', diff saved to https://phabricator.wikimedia.org/P9026 and previous config saved to /var/cache/conftool/dbconfig/20190903-051450-marostegui.json [production]
05:02 <marostegui> Promote db2118 to s7 codfw master (db2047 -> db2118) T230106 [production]
04:50 <marostegui> Drop filejournal table on s3 - T51195 [production]
04:49 <vgutierrez> repooling cp2002 - T231433 [production]
04:36 <vgutierrez> upgrading ATS to 8.0.5-1wm4 on cp2002 - T231433 [production]
04:28 <vgutierrez> Switching cp2002 from nginx to ats-tls - T231433 [production]
2019-09-02
22:08 <ebernhardson> ban elastic1027 from production-search-chi [production]
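(Banning a node from an Elasticsearch cluster is normally done via the cluster allocation-exclusion setting; a hedged sketch against the standard Elasticsearch settings API, where the port is an assumption since the chi cluster may not listen on the default 9200:)
    # exclude elastic1027 from shard allocation in the production-search-chi cluster
    curl -XPUT 'http://localhost:9200/_cluster/settings' \
         -H 'Content-Type: application/json' \
         -d '{"transient": {"cluster.routing.allocation.exclude._name": "elastic1027*"}}'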
20:48 <ebernhardson> restart production-search-eqiad on elastic1027 again [production]
20:33 <mbsantos@deploy1001> Finished deploy [kartotherian/deploy@453ee8a]: Make osm-pbf source private (T231842) (duration: 02m 09s) [production]
20:31 <mbsantos@deploy1001> Started deploy [kartotherian/deploy@453ee8a]: Make osm-pbf source private (T231842) [production]
19:54 <ebernhardson> restart elasticsearch_6@production-search-eqiad on elastic1027 [production]
17:57 <mateusbs17> regenerating tiles from z0 to z9 in eqiad and codfw - T231691, T230511 [production]
15:08 <moritzm> installing libssh2 security updates [production]
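(Fleet-wide security updates at WMF are typically rolled out with debdeploy rather than ad-hoc apt runs; since the actual invocation is not in the log, this is only a generic, illustrative single-host equivalent:)
    # upgrade just the libssh2 runtime package to the patched version (illustrative)
    sudo apt-get install --only-upgrade -y libssh2-1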
14:36 <moritzm> installing ghostscript updates on thumbor1001 [production]
14:24 <@> helmfile [STAGING] Ran 'apply' command on namespace 'sessionstore' for release 'staging' . [production]
14:21 <@> helmfile [STAGING] Ran 'apply' command on namespace 'sessionstore' for release 'staging' . [production]
14:10 <@> helmfile [STAGING] Ran 'apply' command on namespace 'sessionstore' for release 'staging' . [production]
13:44 <akosiaris> resync the sessionstore staging release as there was a wrong port mapping (port 8080 instead of 8081) for both netpol and service [production]
13:43 <@> helmfile [STAGING] Ran 'sync' command on namespace 'sessionstore' for release 'staging' . [production]
13:40 <@> helmfile [STAGING] Ran 'sync' command on namespace 'sessionstore' for release 'staging' . [production]
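(The helmfile entries above are the bot-logged form of deploys run from a deployment server; a minimal sketch of the underlying command, where the helmfile.d path for the sessionstore service is an assumption:)
    # apply (or 'sync', as used for the port-mapping fix above) the staging release of sessionstore
    cd /srv/deployment-charts/helmfile.d/services/sessionstore   # path is illustrative
    helmfile -e staging apply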
13:09 <vgutierrez> upgrading prometheus-trafficserver-exporter to version 0.3.2 on the cache cluster - T231533 [production]
12:58 <vgutierrez> upgrading prometheus-trafficserver-exporter to version 0.3.2 on cp5001 - T231533 [production]
12:46 <vgutierrez> uploaded prometheus-trafficserver-exporter 0.3.2 to apt.wikimedia.org (stretch) - T231533 [production]
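(Uploading a locally built package to apt.wikimedia.org is again a reprepro operation; a sketch assuming a signed .changes file and the stretch-wikimedia codename, with the file name being illustrative:)
    # import the built package into the stretch-wikimedia distribution
    reprepro include stretch-wikimedia prometheus-trafficserver-exporter_0.3.2_amd64.changes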
12:40 <moritzm> installing freetype security updates on jessie (stretch/buster already fixed) [production]