2019-05-11
02:01 <mutante> actinium - low disk space - apt-get clean - gzip /var/log/squid3/access.log.1 [production]
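The 02:01 entry is the routine low-disk remediation: drop cached .deb packages and compress the rotated Squid access log. A minimal sketch of the same steps (paths taken from the log entry; everything else is generic):

    # free the space held by cached .deb packages
    sudo apt-get clean
    # compress the most recent rotated Squid access log in place
    sudo gzip /var/log/squid3/access.log.1
    # confirm the result
    df -h /var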
2019-05-10
18:58 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin -b 15 -p 95 '*' 'run-puppet-agent -q --failed-only' [production]
18:51 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin -b 15 -p 95 '*' 'run-puppet-agent -q --failed-only' [production]
18:49 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin '*' 'enable-puppet "Puppet breakages on all hosts -- cdanis"' [production]
18:39 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin '*' 'disable-puppet "Puppet breakages on all hosts -- cdanis"' [production]
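The four cumin entries between 18:39 and 18:58 (listed newest first) record the usual fleet-wide Puppet recovery pattern: disable the agent everywhere with a reason, fix the breakage, re-enable, then re-run Puppet only where the last run failed. The same commands in chronological order, as run from a cumin host (all taken verbatim from the log; the quoted reason string is whatever the operator chooses, and is normally repeated on enable so the wrapper can match the earlier disable):

    # disable puppet fleet-wide, recording who and why
    sudo cumin '*' 'disable-puppet "Puppet breakages on all hosts -- cdanis"'
    # ... fix the underlying breakage ...
    # re-enable with the same reason string
    sudo cumin '*' 'enable-puppet "Puppet breakages on all hosts -- cdanis"'
    # re-run the agent only on hosts whose last run failed,
    # 15 hosts per batch (-b 15), aborting if success drops below 95% (-p 95)
    sudo cumin -b 15 -p 95 '*' 'run-puppet-agent -q --failed-only'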
16:50 <reedy@deploy1001> Synchronized dblists/: Update size related dblists (duration: 00m 49s) [production]
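Entries of the form "<user>@deploy1001 Synchronized <path>: <message> (duration: …)" are written automatically by scap when a config sync finishes; the operator only runs the sync. A minimal sketch of what produces such a line, assuming the standard scap CLI on the deployment host (the dblists/ entry above and the db-eqiad.php entries further down were produced by the same mechanism):

    # on deploy1001, after the config change has been merged and pulled
    cd /srv/mediawiki-staging
    # push the file to all app servers; the quoted message becomes
    # the text after the colon in the log entry
    scap sync-file wmf-config/db-eqiad.php 'Depool db1100'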
16:31 <ebernhardson> drop archive indices from cloudelastic [production]
16:11 <ariel@deploy1001> Finished deploy [dumps/dumps@70e8498]: look for dumpstatus json file per wiki run (duration: 00m 05s) [production]
16:11 <ariel@deploy1001> Started deploy [dumps/dumps@70e8498]: look for dumpstatus json file per wiki run [production]
16:05 <ejegg> moved adyen smashpig job runner to frdev1001 [production]
15:25 <_joe_> wiped opcache clean on all api, appservers [production]
15:05 <cdanis> cdanis@mw1239.eqiad.wmnet ~ % sudo php7adm /opcache-free [production]
15:05 <Krinkle> fix opcache: krinkle@mw1268:~$ scap pull [production]
15:04 <cdanis> cdanis@mw1268.eqiad.wmnet ~ % sudo php7adm /opcache-free [production]
15:03 <Krinkle> ran 'scap pull' on mw1239.eqiad.wmnet to fix opcache corruption [production]
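The 15:03–15:05 entries document the two remediations used for PHP7 opcache corruption on an app server: flush the opcache through the local admin endpoint, and re-sync the code with scap pull so the cache is repopulated from a clean copy. A sketch of the same steps on one host, using only the commands shown in the log:

    # on the affected app server (e.g. mw1239)
    # ask the local php7adm admin interface to free/reset the opcache
    sudo php7adm /opcache-free
    # re-fetch the deployed MediaWiki code from the deployment server
    scap pull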
14:56 <jbond42> uploaded zuul_2.5.10-wmf9 to jessie-wikimedia [production]
14:54 <krinkle@deploy1001> Synchronized wmf-config/CommonSettings.php: T99740 / d9dbecad9c7b (duration: 00m 51s) [production]
14:33 <akosiaris@deploy1001> scap-helm eventgate-analytics finished [production]
14:32 <akosiaris@deploy1001> scap-helm eventgate-analytics cluster staging completed [production]
14:32 <akosiaris@deploy1001> scap-helm eventgate-analytics upgrade -f lala.yaml staging stable/eventgate-analytics [namespace: eventgate-analytics, clusters: staging] [production]
14:30 <akosiaris@deploy1001> scap-helm eventgate-analytics finished [production]
14:30 <akosiaris@deploy1001> scap-helm eventgate-analytics cluster eqiad completed [production]
14:30 <akosiaris@deploy1001> scap-helm eventgate-analytics upgrade -f eventgate-analytics-eqiad-values.yaml production stable/eventgate-analytics [namespace: eventgate-analytics, clusters: eqiad] [production]
14:30 <akosiaris@deploy1001> scap-helm eventgate-analytics finished [production]
14:30 <akosiaris@deploy1001> scap-helm eventgate-analytics cluster codfw completed [production]
14:30 <akosiaris@deploy1001> scap-helm eventgate-analytics upgrade -f eventgate-analytics-codfw-values.yaml production stable/eventgate-analytics [namespace: eventgate-analytics, clusters: codfw] [production]
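The eventgate-analytics block (14:30–14:33) is a Helm release upgrade driven through the scap-helm wrapper, applied once per cluster (staging, eqiad, codfw), each with its own values file. A sketch of one such invocation, copied from the codfw log line (in helm terms, "production" is the release name and stable/eventgate-analytics the chart):

    # on the deployment host; scap-helm wraps helm with the per-cluster
    # kubeconfig and namespace for the named service
    scap-helm eventgate-analytics upgrade \
        -f eventgate-analytics-codfw-values.yaml \
        production stable/eventgate-analytics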
13:30 <ema> pool cp3038 w/ ATS backend T222937 [production]
12:19 <ema> depool cp3038 and reimage as upload_ats T222937 [production]
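The cp3038 pair (11:52 depool, 13:30 pool) is the standard take-out / reimage / put-back cycle for a cache node; pooling state lives in conftool/etcd. A hedged sketch of the conftool side, assuming the confctl CLI (the selector value here is illustrative):

    # depool cp3038 before the reimage
    sudo confctl select 'name=cp3038.esams.wmnet' set/pooled=no
    # ... reimage as upload_ats, verify the ATS backend is healthy (T222937) ...
    # pool it again once healthy
    sudo confctl select 'name=cp3038.esams.wmnet' set/pooled=yes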
11:52 <jbond42> (un)load edac kernel modules on elastic1029 to test resetting counters [production]
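The 11:52 entry records unloading and reloading the EDAC kernel modules to reset the corrected-error counters on elastic1029. A minimal sketch, assuming a common Intel driver module name (the exact module is platform-specific; sb_edac is an assumption):

    # check the current corrected-error counts
    grep . /sys/devices/system/edac/mc/mc*/ce_count
    # unload the platform EDAC driver, then reload it; counters restart from zero
    sudo modprobe -r sb_edac
    sudo modprobe sb_edac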
11:04 <jbond42> restart refinery-eventlogging-saltrotate on an-coord1001 [production]
10:30 <moritzm> installing symfony security updates [production]
09:17 <jynus> disabling replication lag alerts for backup source hosts on s1, s4, s8 T206203 [production]
07:14 <moritzm> uploaded linux-meta 1.21 for jessie-wikimedia (pointing to the new -9 ABI introduced with the 4.9.168 kernel) [production]
07:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1100 into API (duration: 00m 50s) [production]
06:55 <ema> swift-fe: rolling restart to enable ensure_max_age T222937 [production]
06:40 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1100 into API (duration: 00m 50s) [production]
06:27 <ema> ms-fe1005: pool with ensure_max_age T222937 [production]
06:26 <ariel@deploy1001> Finished deploy [dumps/dumps@6f9a5a4]: remove sleep between incr dumps of wikis (duration: 00m 05s) [production]
06:26 <ariel@deploy1001> Started deploy [dumps/dumps@6f9a5a4]: remove sleep between incr dumps of wikis [production]
06:22 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1100 (duration: 00m 50s) [production]
06:17 <ema> ms-fe1005: depool and test ensure_max_age T222937 [production]
06:09 <_joe_> depooling mw1261 for tests [production]
05:41 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Pool db2105 db2109 into s3 T222772 (duration: 00m 49s) [production]
05:40 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Pool db2105 db2109 into s3 T222772 (duration: 00m 52s) [production]
05:40 <elukey> execute kafka preferred-replica-election on kafka-jumbo1001 in an attempt to rebalance traffic (kafka-jumbo1002 seems to have been handling far more than the others for some days) [production]
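A preferred-replica election moves partition leadership back to each partition's first (preferred) replica, which evens out client traffic across the brokers. A sketch using the stock Kafka tool of that era (WMF brokers also ship a `kafka` wrapper exposing the same action; the ZooKeeper address and chroot here are placeholders):

    # trigger a preferred replica election for all partitions
    kafka-preferred-replica-election.sh --zookeeper zk1:2181/kafka/jumbo-eqiad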
05:32 <elukey> restart eventlogging daemons on eventlog1002 - kafka consumer errors in the logs, some lag had built up over time [production]
05:08 <marostegui> Stop MySQL on db1100 [production]
05:04 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1100 (duration: 00m 50s) [production]
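The db1100 entries on this page (05:04 depool, 05:08 stop MySQL, then 06:22–07:12 staged repool) follow the usual MariaDB maintenance cycle: take the replica out of the load-balancer config, do the work, then return it to traffic in steps. At this time that meant editing wmf-config/db-eqiad.php and syncing it with scap; a hedged sketch of the cycle (the service name is an assumption; multi-instance hosts use mariadb@<section> instead):

    # on deploy1001: remove db1100 from the section's load groups
    # (or set its weight to 0) in wmf-config/db-eqiad.php, then sync
    scap sync-file wmf-config/db-eqiad.php 'Depool db1100'
    # on db1100, once traffic has drained
    sudo systemctl stop mariadb
    # ... maintenance ...
    # repool gradually: raise the weight in db-eqiad.php and sync again
    scap sync-file wmf-config/db-eqiad.php 'Repool db1100'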
04:56 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2112 (duration: 00m 51s) [production]
00:15 <smalyshev@deploy1001> Finished deploy [wdqs/wdqs@e13facb]: Downgrade LDF server back for T222471 (duration: 00m 37s) [production]