2019-05-13
13:06 <akosiaris@deploy1001> scap-helm cxserver cluster staging completed [production]
13:06 <akosiaris@deploy1001> scap-helm cxserver upgrade -f cxserver-staging-values.yaml staging stable/cxserver [namespace: cxserver, clusters: staging] [production]
13:06 <akosiaris@deploy1001> scap-helm cxserver finished [production]
13:06 <akosiaris@deploy1001> scap-helm cxserver cluster eqiad completed [production]
13:06 <akosiaris@deploy1001> scap-helm cxserver upgrade -f cxserver-eqiad-values.yaml production stable/cxserver [namespace: cxserver, clusters: eqiad] [production]
13:06 <akosiaris@deploy1001> scap-helm cxserver finished [production]
13:06 <akosiaris@deploy1001> scap-helm cxserver cluster codfw completed [production]
13:05 <akosiaris@deploy1001> scap-helm cxserver upgrade -f cxserver-codfw-values.yaml production stable/cxserver [namespace: cxserver, clusters: codfw] [production]
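The scap-helm entries above (13:05-13:06) are a single cxserver rollout: the same chart is upgraded once per cluster, each time with a cluster-specific values file. A minimal sketch of the command sequence reconstructed from these log lines; how scap-helm targets each cluster (the bracketed "clusters:" metadata) is not shown here and is left out:

    # one upgrade per cluster, cluster-specific values file each time
    scap-helm cxserver upgrade -f cxserver-codfw-values.yaml production stable/cxserver
    scap-helm cxserver upgrade -f cxserver-eqiad-values.yaml production stable/cxserver
    scap-helm cxserver upgrade -f cxserver-staging-values.yaml staging stable/cxserver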
13:04 <arturo> install libjs-jquery from stretch in cloudnet servers T222862 [production]
13:02 <arturo> enable puppet in cloudvirt1024 to refresh some apt config T222862 [production]
12:49 <moritzm> updating puppetdb on deployment-puppetdb02 to 4.4.0-1~wmf2 (T219803) [production]
12:36 <cdanis> root@ms-be2013.codfw.wmnet ~ # umount /srv/swift-storage/sda1 && mount /srv/swift-storage/sda1 && umount /srv/swift-storage/sdb1 && mount /srv/swift-storage/sdb1 [production]
12:36 <krinkle@deploy1001> Synchronized php-1.34.0-wmf.4/resources/src/startup/startup.js: I76a2c8d52fa (duration: 00m 51s) [production]
12:33 <cdanis> root@ms-be2013.codfw.wmnet ~ # mount /srv/swift-storage/sdf1 [production]
12:25 <cdanis> cdanis@ms-be2015.codfw.wmnet ~ % sudo umount /srv/swift-storage/sdl1 && sudo mount /srv/swift-storage/sdl1 [production]
12:25 <cdanis> cdanis@ms-be2015.codfw.wmnet ~ % sudo umount /srv/swift-storage/sdf1 && sudo mount /srv/swift-storage/sdf1 [production]
12:18 <cdanis> cdanis@ms-be2015.codfw.wmnet /var/log % sudo mount /srv/swift-storage/sda1 [production]
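The ms-be2013 / ms-be2015 entries above are manual re-seats of individual swift-storage filesystems. A hedged sketch of the per-device step, generalised from the logged commands; the DEV variable is only for illustration, and which device to re-mount (sda1, sdb1, sdf1, sdl1 in the log) comes from whatever triggered the intervention, which is not recorded here:

    # re-seat one swift-storage filesystem, as done on ms-be2013 / ms-be2015
    DEV=sdl1
    sudo umount "/srv/swift-storage/${DEV}" && sudo mount "/srv/swift-storage/${DEV}"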
12:08 <reedy@deploy1001> Synchronized php-1.34.0-wmf.4/extensions/Wikibase/lib/includes/Formatters/CachingKartographerEmbeddingHandler.php: T223085 (duration: 00m 50s) [production]
11:59 <reedy@deploy1001> Synchronized php-1.34.0-wmf.4/composer.json: T215746 (duration: 00m 49s) [production]
11:58 <reedy@deploy1001> Synchronized php-1.34.0-wmf.4/vendor/: T215746 (duration: 01m 30s) [production]
11:43 <reedy@deploy1001> Synchronized php-1.34.0-wmf.4/extensions/VisualEditor/: T222639 (duration: 00m 52s) [production]
11:04 <ema> cp-ats rolling restart to apply https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/509456/ [production]
10:39 <jforrester@deploy1001> Synchronized php-1.34.0-wmf.4/includes/http/HttpRequestFactory.php: T222935 Hot-deploy fix for HttpRequestFactory (duration: 00m 50s) [production]
10:38 <jbond42> update puppet5 and facter3 in eqiad [production]
10:17 <vgutierrez> rebooting cloudvirt1024 - T209707 [production]
09:40 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1064 T217396 (duration: 00m 49s) [production]
09:33 <hashar> Upgrading Zuul 2.5.1-wmf7 -> 2.5.1-wmf9 T105474 [production]
07:27 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully pool db1130 (s5) and db1138 (s4) T222682 (duration: 00m 50s) [production]
07:08 <elukey> slow roll restart of celery on ores* nodes to allow cores to be generated upon segfault - T222866 [production]
07:05 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic for db1130 (s5) and db1138 (s4) T222682 (duration: 00m 50s) [production]
06:53 <moritzm> installing ghostscript security updates [production]
06:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic for db1130 (s5) and db1138 (s4) T222682 (duration: 00m 49s) [production]
06:09 <marostegui> Compress s2, s6 and s7 on labsdb1012 - T222978 [production]
05:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic for db1130 (s5) and db1138 (s4) T222682 (duration: 00m 49s) [production]
05:41 <marostegui> Optimize tables on pc2007 [production]
05:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Pool db1130 into s5 and db1138 into s4 T222682 (duration: 00m 49s) [production]
05:17 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Pool db1130 into s5 and db1138 into s4 T222682 (duration: 00m 51s) [production]
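The string of db-eqiad.php syncs above (05:17-09:40) is the usual gradual-repool pattern: the load weights for db1130 (s5) and db1138 (s4) are raised step by step in mediawiki-config and each change is pushed as a single-file sync, while db1064 is depooled the same way. A hedged sketch of one such step from the deploy host; the staging path and the prior Gerrit merge are assumptions, only the sync itself appears in the log:

    # in the mediawiki-config checkout on the deploy host (assumed /srv/mediawiki-staging),
    # after merging the weight change:
    scap sync-file wmf-config/db-eqiad.php 'More traffic for db1130 (s5) and db1138 (s4) T222682'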
2019-05-12
15:32 <elukey> rollback python-kafka on eventlog1002 to 1.4.1-1~stretch1 - T222941 [production]
12:14 <elukey> restart eventlogging on eventlog1002 - all processors stuck due to kafka python (T222941) [production]
05:31 <marostegui> Disable notifications for db1116:s8 Slave LAG check as this is a snapshot source [production]
2019-05-11
18:26 <reedy@deploy1001> Synchronized wmf-config/interwiki.php: Update interwiki cache (duration: 02m 57s) [production]
06:37 <elukey> restart eventlogging on eventlog1002 - huge kafka consumer lag accumulated (T222941) [production]
02:01 <mutante> actinium - low disk space - apt-get clean - gzip /var/log/squid3/access.log.1 [production]
2019-05-10
18:58 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin -b 15 -p 95 '*' 'run-puppet-agent -q --failed-only' [production]
18:51 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin -b 15 -p 95 '*' 'run-puppet-agent -q --failed-only' [production]
18:49 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin '*' 'enable-puppet "Puppet breakages on all hosts -- cdanis"' [production]
18:39 <cdanis> cdanis@cumin1001.eqiad.wmnet ~ % sudo cumin '*' 'disable-puppet "Puppet breakages on all hosts -- cdanis"' [production]
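The four cumin entries above (18:39-18:58) are the standard fleet-wide recovery from a puppet breakage: disable the agent everywhere with a reason string, fix the problem, re-enable with the same reason, then force a batched run on hosts whose last run failed. The sequence in chronological order, taken directly from these log lines; the intervening fix itself is not part of the log:

    sudo cumin '*' 'disable-puppet "Puppet breakages on all hosts -- cdanis"'
    # ... fix the underlying breakage ...
    sudo cumin '*' 'enable-puppet "Puppet breakages on all hosts -- cdanis"'
    sudo cumin -b 15 -p 95 '*' 'run-puppet-agent -q --failed-only'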
16:50 <reedy@deploy1001> Synchronized dblists/: Update size related dblists (duration: 00m 49s) [production]
16:31 <ebernhardson> drop archive indices from cloudelastic [production]
16:11 <ariel@deploy1001> Finished deploy [dumps/dumps@70e8498]: look for dumpstatus json file per wiki run (duration: 00m 05s) [production]